
Physics for the 21st Century

The Fundamental Interactions Interview with Featured Scientist Srini Rajagopalan

Interviewer: What is your role in the ATLAS project?

SRINI: I’ve taken on many roles in ATLAS, but for the past couple of years I’ve been working on the trigger. For two years I served as the trigger menu coordinator for ATLAS, whose primary responsibility is to put together the list of physics triggers. The trigger is the first line of physics in ATLAS: it is where we select what is interesting physics and write it out to disk, so that it can be analyzed further to extract the physics. So the trigger is crucial. It has to work. It forms the first step in the physics analysis and the physics extraction.

I currently serve as the deputy trigger coordinator in ATLAS, a role that will transition to trigger coordinator a year from now. Our role is to ensure that the trigger works. It includes the development of algorithms, making sure that the algorithms execute efficiently, and putting together the physics triggers online to select interesting physics. Our role is to make sure that we deliver interesting physics events to ATLAS physicists.

Interviewer: What does the detector look like?

SRINI: So this shows some features of ATLAS. It’s 46 meters long and about 12 meters in radius, which means it’s roughly 25 meters tall. It has about 10 to the 8 electronic channels. That’s 100 million. And it weighs about 7,000 tons. That’s about the weight of the Eiffel Tower. The ATLAS detector is actually located 100 meters below the surface. It was a technical challenge to lower the detector components 100 meters down. There are shafts built in through which the detector components are lowered. The control room is on the surface, and you can actually see the Jura mountains in the background.

Interviewer: What’s the unknown physics that ATLAS is looking for?

SRINI: The Standard Model describes the fundamental particles. So everything that you see around you, whether it’s you or the universe or the stars, is made up of 12 fundamental particles, and the Standard Model describes them, their behavior, and the forces that govern them. It describes three out of the four forces: the strong, the weak, and the electromagnetic. It does not yet accommodate the force we all know, which is gravity.

The Standard Model describes what we observe experimentally very well, but it is still not complete. There are many questions that remain unanswered. The biggest unanswered question is what generates the mass of these particles, what gives them their mass? Some particles weigh less, some weigh more. Why? There is a theory which postulates that there is a Higgs field with which the particles interact, and the particles acquire their mass based on the strength of their interaction with the Higgs field. The stronger they interact, the greater the mass.

So the Higgs is one of the things that we are trying to find. Past experiments at Fermilab have set limits on what the mass of the Higgs can be. But there are other things that the Standard Model also does not explain. For example, where is the antimatter? When the universe was born there must have been equal amounts of matter and antimatter, and they should have annihilated. So we shouldn’t exist. That means that somehow there was some excess of matter over antimatter. Why? We have to understand that.

Interviewer: What do the LHC and the ATLAS detector do?

SRINI: The Large Hadron Collider accelerates protons in opposite directions, and ATLAS sits at one of the interaction points. There are different types of magnets that bend, focus, and squeeze the particles that come through. There are RF cavities that generate the electromagnetic field to accelerate the protons. The protons are accelerated up to 7 TeV and collide head on at several interaction points: 7 TeV on 7 TeV, for a total of 14 TeV center of mass.

Protons are collided at several interaction points around the accelerator. ATLAS is a detector that sits at one of these interaction points. What we do is build a detector around the interaction point so that when the protons collide we can capture the result of that collision. So the ATLAS detector is simply several layers, like an onion, which capture the debris that results from these collisions.

The LHC will have the highest energy and the highest luminosity of any accelerator built to date. Higher energy means we are able to probe higher energy scales, which allows us to look for particles like the Higgs or other supersymmetric particles that could exist but were beyond the reach of previous-generation accelerators.

Higher luminosities mean we will be able to produce more of them, which also allows us to make precision measurements of fundamental particles that have already been observed. For example, the top quark. The top quark was produced at the Tevatron, which means that the Tevatron did have sufficient energy to produce it. The LHC, with its high luminosity and high energy, will be able to produce a lot more top quarks, which means we can make very precise measurements of the properties of the top quark, which by itself might lead to new physics.

ATLAS has approximately 3,000 collaborators from 35 or so countries. Now I don’t know the exact number. The US is one of the biggest participants in ATLAS. We are about 600 members strong, with about 200 graduate students from the US alone. There are almost 1,000 graduate students hoping to do their thesis work on ATLAS.

Interviewer: Can you please tell me what the trigger is?

SRINI: The trigger selects interesting physics candidates. There are three levels of triggers in ATLAS. The first one is a hardware trigger, and then there are two other levels, which are software based. They analyze the data and select candidates that are interesting for further analysis offline.

Collisions in ATLAS happen every 25 nanoseconds, which means there are 40 million collisions in one second. Around the LHC there are bunches of protons. Each bunch contains about 10 to the 10 or 10 to the 11 protons, and the bunches are separated from each other. So they pass through each other and collide at 40 million collisions per second.

ATLAS is like a camera: it is like taking 40 million pictures every second. And ATLAS has to look at them, select the interesting events, and write them out to disk. We write out events to disk at about 200 hertz, which means we accept 200 of these 40 million pictures every second.

So the trigger has to look at these 40 million pictures, find the interesting ones, select 200, all of this in one second, and it has to keep doing it again and again.
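To make these numbers concrete, here is a small sketch in Python of the rate arithmetic just described. The variable names are my own, chosen for illustration; they are not from any ATLAS software.

```python
# Rate arithmetic from the interview: one collision every 25 ns,
# 200 events per second written to disk. Names are illustrative only.
bunch_spacing_s = 25e-9                      # 25 nanoseconds between collisions
collision_rate_hz = 1 / bunch_spacing_s      # 40 million collisions per second
output_rate_hz = 200                         # pictures kept per second

kept_one_in = collision_rate_hz / output_rate_hz
print(f"{collision_rate_hz:.0f} collisions/s, keep 1 in {kept_one_in:.0f}")
```

Running this confirms the figure quoted later in the interview: the trigger keeps one event in 200,000.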

So we are responsible for the development of the algorithms that run on the trigger. We must make sure that they run efficiently, and they run fast. Time is precious in the high-level trigger. There are hundreds of thousands of lines of code. Each section of the code looks at different signatures. For example, there are some that are trying to identify: was that a muon candidate?

So when the LHC starts up, it will have a low luminosity, of the order of 10 to the 31 or 10 to the 32. The LHC is designed for luminosities two orders of magnitude higher: it is supposed to operate at 7 on 7 TeV at 10 to the 34.

Interviewer: Can you please give me more details about “trigger levels”?

SRINI: ATLAS has three levels of trigger. They’re called level one, level two and event filter. The first level is a hardware-based trigger. It accepts its inputs from the detectors, from the calorimeter and the muon detectors, and it looks for some signatures and accepts the event or rejects the event. If the event is accepted by level one, it is given to the second level of trigger, which is an algorithm-based trigger. If the event is accepted at level two, it is passed on to the third level where further analysis is done. And if the event is accepted in the third level, then it is written out. So it is a stepwise approach.

The level one sees the 40 million pictures that I spoke to you about. It puts out about 75 thousand pictures every second. So it has to look at the 40 million pictures, see which ones are interesting, and select about 75 thousand pictures every second. That is now given to this level two. The level two looks at the 75 thousand pictures and gives about 2,000 of those pictures to the event filter. The event filter then takes those and writes out about 200 of those pictures.

The difference between level two and the event filter is this. Once the event is accepted by level one, let’s say the level one says, I’ve found an interesting signature which looks like an electron in this region of the detector. The level two is a software algorithm that accesses the data, but only in that region of interest, which means that it doesn’t look at the entire detector data. It looks at the data around the region where the level one thought it saw something good, does a finer analysis, and asks: is it good, is it true? Once the event is accepted by level two, the event is built and passed on to the event filter. The event filter can do a much finer-grained analysis because the entire event is available for it to look at, unlike the level two, which only had access to a region of interest. Furthermore, the level two runs simpler algorithms; the event filter has more complex algorithms.

The reason for this stepwise approach is that you have more time in the event filter than at level two, and more time at level two than at level one. At level one, the decision must be made within two and a half microseconds. That’s the average time to decide whether to keep an event or not. Once it comes to the level two, the level two has about 10 milliseconds.

So we go from 40 million events in one second to writing out 200 events, which means the trigger has to provide a rejection of one in 200 thousand.
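The stepwise rates quoted above (40 million events in, 75 thousand after level one, 2,000 after level two, 200 written out) give each level its share of the rejection. A quick sketch, using the approximate rates from the interview:

```python
# Per-level acceptance for the three ATLAS trigger levels, using the
# approximate rates quoted in the interview (events per second).
rates = [40_000_000, 75_000, 2_000, 200]   # input, after L1, after L2, written out
names = ["level one", "level two", "event filter"]

for name, rate_in, rate_out in zip(names, rates, rates[1:]):
    print(f"{name}: keeps roughly 1 in {rate_in // rate_out}")

overall = rates[0] // rates[-1]
print(f"overall rejection: 1 in {overall}")
```

The overall figure works out to one in 200,000, with level one doing by far the largest share of the rejection.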

Interviewer: Can you describe a single trigger?

SRINI: Let me give you an example of a trigger. Let’s say we are looking for a Z boson, and the Z goes to two electrons, plus maybe some jets associated with it. A classic trigger would be to look for two electrons. What do the electrons do? The electrons are produced, they leave a track in the tracking chambers, and then they deposit their energy in the calorimeter. And there are two of them, which the Z boson produces.

So what do we do in the trigger? The first level of trigger, the level one, would look for two clusters of energy. We relax the condition at the first level, so we look for two clusters of energy at, say, 10 GeV each. The level one finds it and says, aha, I found two clusters of energy, one here, one there. It passes that information on to the level two. The level two then says, let me unpack the data around each location where the level one told me the cluster of energy was. It analyzes the data, looks at the shape, looks at what the shower position was, and says, this is consistent with an electron. Okay, go ahead. It then passes that information on to the event filter, which looks in more detail. It might look for the associated track in the tracking chamber. It asks: does the track match the shower? Is this really two electrons, and do the two electrons add up to the mass of the Z boson? We might apply a mass selection there. And if it says, yes, it is true, we write that event out.

So what have we done? We have set up a trigger requiring two electrons above a certain energy, satisfying some behavior that we think is consistent with what an electron deposits. So that’s one trigger, two electrons above certain energy, with a certain behavior.
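As a toy illustration only, the stepwise two-electron selection just described could be sketched as a chain of checks. Everything here is invented for the sketch, including the event fields and thresholds; real ATLAS trigger software is nothing like this.

```python
Z_MASS_GEV = 91.2   # known Z boson mass, in GeV

def level_one(event):
    """Coarse pass: two energy clusters above a relaxed 10 GeV threshold."""
    return sum(c["energy_gev"] > 10 for c in event["clusters"]) >= 2

def level_two(event):
    """Finer pass: check the shower shape around each level-one cluster."""
    return all(c["shower_shape_ok"] for c in event["clusters"])

def event_filter(event):
    """Full event: match tracks to showers, test the Z mass hypothesis."""
    return (event["tracks_match_showers"]
            and abs(event["pair_mass_gev"] - Z_MASS_GEV) < 10)

def two_electron_trigger(event):
    # Stepwise: each level runs only if the previous level accepted,
    # mirroring the level one, level two, event filter chain.
    return level_one(event) and level_two(event) and event_filter(event)
```

An event with two well-shaped clusters above threshold, matching tracks, and a pair mass near 91 GeV would pass all three levels; failing any step short-circuits the rest, so later, more expensive checks never run on rejected events.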

We can set up many, many triggers like that. We have a thousand triggers looking at different types of signatures. And we could be looking at two muons. We could be looking at many different combinations. Each combination has a specific purpose. They are trying to look for some new physics. There are many many triggers that we set up that are looking for different physics processes, known physics as well as unknown physics.

Interviewer: How do you know you’re going to record accurate data?

SRINI: The first thing we do when the beam starts up is to calibrate and commission the detectors. We have to make sure that all the detectors are working as expected. That has partly been done using cosmic rays and test beams, which means you take a module and put it in a test beam facility. You expose the module to single electrons or single pions at well-defined energies and look at the response of that module of the detector. So you know how it behaves already. But the final calibration of the detectors will be done by observing already known physics processes. For example, the Z boson. The Z goes into two electrons, and we know the mass of the Z, which has been very precisely measured by previous experiments. So we look at the response of the calorimeter and calibrate it using the known mass of the Z as input.

So we calibrate the detectors by checking their performance against what we expect from known physics processes. To quote Feynman: “yesterday’s sensation is today’s calibration.”

Interviewer: Is it possible that you might miss collision data that might be of interest?

SRINI: Like everything else, the trigger has some inefficiencies, which means that when we look at these 40 million pictures, we might throw away some good ones and we might accept some not so interesting ones. It’s like when we look at pictures: some might be overexposed, some underexposed, and the algorithms might not be exactly tuned, which will lead to some inefficiencies. But that’s okay. The point is not to capture all the interesting ones. The point is to capture a sufficient number of interesting ones to either claim that there is some new physics, or to make precision measurements with them.

If I give you a hundred electrons, how many times are you going to come back and tell me it was an electron? Ninety times? Ninety-five times? Fifty times? We cannot be perfect. We might lose some electrons. If we lose ten of the hundred, we are ninety percent efficient. If you’re fifty percent efficient, that’s terrible. So we have to make sure that the algorithms are efficient at identifying the physics signatures.

We have to evolve the triggers with luminosity. As the luminosity increases, we are going to produce a lot more top quarks. We may not be able to keep all the top quarks. We may have to give up some. We may have to make tighter cuts because we want to be sensitive to higher energy scales.

We’ll produce a lot more W bosons and Z bosons, so we may not be able to keep all of them. We will have to throw away some and keep a sufficient number of them.

Interviewer: What are the greatest challenges for the ATLAS trigger?

SRINI: The different detectors in ATLAS generate a lot of data. The total event data that is generated per event is about one and a half megabytes. So if we write out 200 events in a second, we are generating about 300 megabytes per second. A CD ROM that you have has about 600 megabytes. So we are generating about half a CD ROM per second. If we were to record all the 40 million events per second, we would be generating about 100,000 CDs per second.
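The data-volume arithmetic in this answer can be checked directly. A small sketch using the interview’s numbers; the variable names are my own:

```python
# Data-volume arithmetic from the interview, as a sanity check.
event_size_mb = 1.5           # megabytes per event
output_rate_hz = 200          # events written to disk per second
collision_rate_hz = 40e6      # total collisions per second
cd_capacity_mb = 600          # one CD-ROM, as in the analogy

written_mb_per_s = event_size_mb * output_rate_hz       # 300 MB/s to disk
print(written_mb_per_s / cd_capacity_mb)                # half a CD per second

raw_mb_per_s = event_size_mb * collision_rate_hz        # if nothing were rejected
print(raw_mb_per_s / cd_capacity_mb)                    # 100,000 CDs per second
```

The second figure makes plain why recording every collision is out of the question and the trigger’s rejection is essential.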

Interviewer: Can you comment on the irony that you are using the largest accelerator ever built to probe the very smallest particles?

SRINI: It may sound ironic that we have the largest accelerator and the biggest detectors to probe the smallest particles, but it shouldn’t be. Probing smaller particles means probing higher energy scales at higher luminosities, and for that we need larger accelerators and bigger detectors.


Credits

Produced by the Harvard-Smithsonian Center for Astrophysics Science Media Group in association with the Harvard University Department of Physics. 2010.
  • ISBN: 1-57680-891-2