New luminosity record marks great start for LHC run

The Large Hadron Collider has logged a new luminosity record, with 2.57/nb/s in ATLAS and 2.69/nb/s in CMS, beating the previous figure of 2.4/nb/s.

It is just one week since the start of the final proton physics run for 2011, and already they have returned to colliding the current maximum of 1380 bunches per beam. This run uses a tighter squeeze of beta* = 1.0 meters, which should be enough to increase luminosity by about 50%. Further records can therefore be anticipated in subsequent runs as emittance and bunch intensity are brought back to their former levels.
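
For a rough sense of where the 50% figure comes from: with the bunch pattern, intensity and emittance held fixed, peak luminosity scales roughly as 1/beta*, and the previous squeeze (assumed here to be beta* = 1.5 meters, which is what a 50% gain implies) gives a factor of 1.5. A minimal sketch:

```python
# Rough peak-luminosity scaling with the beta* squeeze.
# Assumes the bunch pattern, bunch intensity and emittance are unchanged and
# ignores the crossing-angle (geometric) factor, so L ~ 1 / beta*.

beta_star_old = 1.5   # meters; assumed previous squeeze, implied by the 50% figure
beta_star_new = 1.0   # meters; this run

gain = beta_star_old / beta_star_new
print(f"Expected peak-luminosity gain from the squeeze alone: {gain:.2f}x "
      f"({(gain - 1) * 100:.0f}% increase)")
```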

Reaching this point so quickly after the end of the technical stop is a good sign for the collider. After previous stops it has typically taken two weeks to iron out problems and return to previous luminosities. The change in the squeeze could have required the collimator settings to be adjusted, but luckily the old settings have proved more or less sufficient, avoiding delays.

This final run has seven more weeks to go, with everyone anxious to see as many inverse femtobarns as possible added to the 2.7/fb already delivered. The increased luminosity and good stability (so far) are good signs that a high total is achievable for 2011, giving good prospects of clear signs of the Higgs boson or other new physics in the data collected by November.


58 Responses to New luminosity record marks great start for LHC run

  1. Luboš Motl says:

    I saw the luminosity chart at atlas.ch, opened your blog, and the news was here again. ;-) My TRF sidebar table is updated, too. Note that 2.7/nb/s is 85/fb/year – not bad. ;-)

    What did we say about the fraction of this immediate maximal luminosity that may be achieved over a year? About 1/6 of this maximum may actually be achieved which would be 14/fb in 2012, but there may be further improvements.
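
    For anyone who wants to reproduce the conversion, a quick sketch is below; the 1/6 duty factor is the estimate quoted in this thread, not an official number.

```python
# Convert an instantaneous luminosity into a yearly integrated luminosity.
# Units: 1 fb^-1 = 1e6 nb^-1, so 1 /nb/s sustained for a year is ~31.5 /fb.

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 s
NB_PER_FB = 1.0e6                       # 1 fb^-1 = 1e6 nb^-1

peak_lumi_nb_s = 2.7        # /nb/s, the figure discussed above
duty_factor = 1.0 / 6.0     # fraction of the year effectively at peak (estimate above)

ideal_fb = peak_lumi_nb_s * SECONDS_PER_YEAR / NB_PER_FB
print(f"Running flat out all year: {ideal_fb:.0f} /fb")               # ~85 /fb
print(f"With a 1/6 duty factor:    {ideal_fb * duty_factor:.0f} /fb")  # ~14 /fb
```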

    • Philip Gibbs says:

      There is still opportunity to increase luminosity before the end of this year using intensity. For next year they have other possibilities: more squeeze, 25 ns, etc., and of course they may go to higher energy. The startup next year may also be delayed, with the run continuing into the start of 2013. So still a lot of unknowns, but plenty of /fb on the cards.

    • carla says:

      Well, this year’s 160-day run and the current long-term average of 45/pb/day give a realistic and conservative 7/fb for 2012. Doubling the instantaneous luminosity doesn’t mean doubling the integrated luminosity, because the intensity half-life decreases with increasing intensity. Even with 25 ns bunches and a better squeeze, I doubt they will get significantly more than 10/fb next year.

    • Lubos Motl says:

      Dear Carla, is it common knowledge that the half-life decreases with maximum luminosity? I don’t see why that should be the case, and I don’t see this effect showing up in the luminosity charts either.

      • prasad says:

        Maybe it’s just the collisions themselves? The higher the luminosity, the more collisions per second and so the faster the loss of protons from the beam. At full performance there are supposed to be ~10^9 collisions per second per IP, and the beams have ~10^14 protons spread over ~3000 bunches. So basically you empty out the beam fairly quickly, over tens of hours.

      • Philip Gibbs says:

        prasad, that is not the correct explanation, for a couple of reasons. Firstly, if loss of protons from collisions were the dominant cause of luminosity decrease, it would just lead to an exponential decay with a fixed half-life, because the rate of loss is proportional to the luminosity. Secondly, ATLAS have only recorded about 1.7 x 10^14 collisions in total, so your arithmetic must be wrong. In fact the intensity lifetime, which measures the loss of protons, is several hundred hours, and even those losses may be due more to losses on the collimators than to losses at the IPs.

        The luminosity lifetime is much shorter than the intensity lifetime because during the run the emittance increases, i.e. the protons spread out in position and momentum space due to elastic interactions between the bunches as they pass. The rate of luminosity loss increases faster than linearly with intensity and luminosity, so the half-life is shorter at the beginning of the run. You can see this in the luminosity profile over a long run.

        The benefit that a luminosity increase has on the initial lifetime and the total integrated luminosity may also depend on how you increase the luminosity. I.e. a smaller beta* may be better than a bigger bunch intensity or a smaller emittance at injection, because the tighter squeeze only affects bunch-bunch interactions at the IP, whereas emittance and intensity affect the bunches’ self-interactions too.

        Have a look at this luminosity chart for the longest run https://lhc-statistics.web.cern.ch/LHC-Statistics/PRO/Plots/2011/2000/Luminosity/Lumi_Instantaneous.png The luminosity drops from about 2000 to 1600 in well under three hours at the beginning. Near the end it drops from 1000 to 800 in well over six hours. With a constant half-life these decreases would take the same time. So in fact the luminosity half-life has more than doubled during the run.
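
        Both of those drops are the same fractional decrease (a factor of 0.8), so under a single exponential they would indeed take equal time. A small sketch that solves for the effective lifetime in each segment, using the rough 3 h and 6 h durations read off the plot:

```python
import math

# Both drops in the linked chart are the same fractional decrease (a factor
# of 0.8), so with a single exponential L(t) = L0 * exp(-t/tau) they would
# take equal time.  Solving for tau in each segment shows how much the
# luminosity lifetime grew during the fill.  The 3 h and 6 h durations are
# rough values read off the plot, so the output is only indicative.

def lifetime(l_start, l_end, hours):
    """Effective exponential lifetime (in hours) implied by one decay segment."""
    return hours / math.log(l_start / l_end)

tau_early = lifetime(2000, 1600, 3.0)   # early in the fill
tau_late = lifetime(1000, 800, 6.0)     # near the end of the fill

print(f"Early-fill lifetime: ~{tau_early:.0f} h")   # ~13 h
print(f"Late-fill lifetime:  ~{tau_late:.0f} h")    # ~27 h
```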

      • carla says:

        Yes, it’s common knowledge. Steve Myers gave a talk on the present and future performance of the LHC: http://webcast.in2p3.fr/videos-lhc_machine_status_and_prospects_including_upgrades and at 19:35 states that at high luminosities the luminosity lifetime becomes comparable to the turnaround time of 2.5 hours. At 1/nb/s, 20 hours of beam time gave around 60/pb, whereas 2/nb/s gave 90/pb rather than 120/pb. For this reason they’ll use luminosity leveling once they go above 5/nb/s.
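
        As a toy illustration of why those figures imply a shorter lifetime at higher peak luminosity, one can assume a plain exponential decay over the 20-hour fill (a crude assumption, not how the machine group models it) and solve for the lifetime that reproduces each quoted delivery:

```python
import math

# Toy check of the figures from the Myers talk cited above: a 20 h fill at
# 1 /nb/s peak gave ~60 /pb, while 2 /nb/s gave ~90 /pb rather than 120 /pb.
# Assume a plain exponential decay L(t) = L0 * exp(-t/tau) (an illustrative
# model only) and solve for the effective lifetime tau implied by each case.

FILL_HOURS = 20.0

def integrated_pb(peak_nb_s, tau_h):
    """/pb delivered by L0*exp(-t/tau) over the fill (1 /nb/s = 3.6 /pb per hour)."""
    l0_pb_per_h = peak_nb_s * 3.6
    return l0_pb_per_h * tau_h * (1.0 - math.exp(-FILL_HOURS / tau_h))

def implied_lifetime(peak_nb_s, delivered_pb):
    """Bisect for the lifetime that reproduces the quoted delivered luminosity."""
    lo, hi = 1.0, 500.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integrated_pb(peak_nb_s, mid) < delivered_pb:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"1 /nb/s, 60 /pb in 20 h -> tau of roughly {implied_lifetime(1.0, 60.0):.0f} h")
print(f"2 /nb/s, 90 /pb in 20 h -> tau of roughly {implied_lifetime(2.0, 90.0):.0f} h")
```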

  2. kulhous says:

    Heh, 2.9 reached;)

  3. New record broken: 3/nb/s!

  4. Tony Smith says:

    Was the power cut on 400 kV line due to lightning ?

    Did the Autotransfer to 130 kV work OK ?

    Was the Cryo station in S12 within the part of the cryogenics covered by the Autotransfer system ?

    Tony

    PS – My questions are based on a Chamonix XII paper and may be out-of-date. If so, my apologies.

  5. Tony Smith says:

    Phil, you said that what LHC needs “now is a series of good long runs”.

    The LHC performance and statistics page pie charts indicate:

    total 2011 LHC efficiency for 488 fills (183 days)
    17.4 per cent No Beam
    21.9 per cent Stable Beams

    while the most recent LHC efficiency for the last 10 fills (3 days)
    39.8 per cent No Beam
    14.7 per cent Stable Beams

    Even if some decline in Stable Beams and increase in No Beam
    might have been due to Act of G-d (like lightning) beyond the control of LHC

    are there some adjustments that could be made to get back to the overall average Stable Beam performance ?

    Tony

    • carla says:

      In May they averaged 30/pb/day running at 1/nb/s; in late August, 45/pb/day at 2/nb/s. Right now they’re running at 3/nb/s, which should average out at 60/pb/day going by past performance. They only need to average about 45/pb/day (a bit over 300/pb/week) to reach 5/fb by the end of October – enough to rule out the Higgs over the expected mass range, or find a 2.7-sigma signal even at the most difficult 115 GeV. Even with a typical 2-day delay every 2 weeks for the cryogenics, they should achieve this.
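
      A quick check of that projection, using illustrative round numbers for mid-September rather than exact figures:

```python
# Quick check of the projection above.  The delivered total and day count are
# illustrative round numbers for mid-September 2011, not exact figures.

delivered_fb = 3.0        # roughly what had been delivered by this point
target_fb = 5.0
days_remaining = 45       # mid-September to the end of October, approximately
rate_pb_per_day = 45.0    # the long-term average assumed in the comment

projected_fb = delivered_fb + rate_pb_per_day * days_remaining / 1000.0
needed_pb_per_day = (target_fb - delivered_fb) * 1000.0 / days_remaining

print(f"Projected total: {projected_fb:.1f} /fb")
print(f"Average needed for {target_fb} /fb: {needed_pb_per_day:.0f} /pb/day")
```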

      • What about the prospects of combining both detectors? That would mean about 4 sigma even at 115 GeV.

      • Philip Gibbs says:

        They promised a combination but then decided against showing it. We don’t know why. There may be more news about the combination at next week’s “Higgs Days at Santander” meetings.

      • carla says:

        I think they will only consider combining two 3-sigma signals, to minimize the embarrassment of being wrong when more data comes in. Remember that both detectors saw the same excess at around 135-145 GeV with 1/fb, only for it to drop with 1.5/fb and a different analysis.

  6. Anonymous says:

    They are now running at 1 m beta and 50 ns spacing and getting over 3/nb/s luminosity. The design parameters are 0.55 m beta and 25 ns spacing. Does this mean that they will be able to almost quadruple the luminosity next year?

    • Philip Gibbs says:

      They have already exceeded some design (or “nominal”) parameters such as bunch intensity and emittance. The 0.55 m beta was originally only supposed to be possible at full beam energy, but with the ATS squeeze they have a way to do better.

      However, there are certain limits that they can’t exceed without major upgrades. Moving to 25 ns doubles the luminosity, but only if they keep the bunch intensity the same. Unfortunately that could take the overall beam intensity beyond what the cryogenics can handle. To keep the overall intensity constant when moving to 25 ns they would have to halve the bunch intensity, but luminosity goes like the square of the bunch intensity. So moving to 25 ns could mean half the luminosity rather than double!
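
      To see the arithmetic, here is a minimal sketch with illustrative bunch numbers; the real scaling also involves emittance, beta* and a geometric crossing-angle factor:

```python
# Sketch of the bunch-spacing trade-off described above.  Peak luminosity
# scales roughly as n_bunches * N_per_bunch**2 (for fixed emittance, beta*
# and crossing angle), while the cryogenic/beam-current limit constrains
# n_bunches * N_per_bunch.  The numbers are illustrative, not machine values.

def relative_lumi(n_bunches, protons_per_bunch):
    """Peak luminosity in arbitrary units, L ~ n_b * N^2."""
    return n_bunches * protons_per_bunch ** 2

baseline = relative_lumi(n_bunches=1380, protons_per_bunch=1.3e11)   # 50 ns today

# 25 ns with the SAME bunch charge: twice the bunches, twice the luminosity,
# but also twice the total beam current for the cryogenics to absorb.
same_bunch_charge = relative_lumi(2 * 1380, 1.3e11)

# 25 ns at the SAME total current: twice the bunches, half the charge per
# bunch, so the luminosity drops by a factor of two.
same_total_current = relative_lumi(2 * 1380, 0.65e11)

print(f"50 ns baseline           : {baseline / baseline:.2f}")
print(f"25 ns, same bunch charge : {same_bunch_charge / baseline:.2f}")
print(f"25 ns, same total current: {same_total_current / baseline:.2f}")
```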

      Luckily they may be able to get the luminosity back up by using an even lower beta* or emittance. An advantage of being at 25 ns is that there is less pileup for a given luminosity, and pileup is another limiting factor.

      So for best results they need to use 25 ns, then increase intensity until the cryogenic limit is reached, then reduce beta* and emittance until the pileup limit is reached. Then they can go beyond that and use luminosity leveling to stay at the limit for more of the run. I think I am oversimplifying, but this is already complex enough for me.

      What they do next year depends on many things. Another advance would be to improve machine stability and running efficiency.

  7. Philip Gibbs says:

    Looks like we will pass 3/fb (delivered all time in ATLAS) on the next fill.

    • Marc says:

      Now over 3/fb. But Phil, this is “delivered”. Isn’t the “recorded” a little bit lower?

    • Philip Gibbs says:

      Yes you can wait for the recorded figure to pass the mark in a few days if you prefer but there will be no Champagne left when you arrive at the party :)

    • JollyJoker says:

      With 10/pb per hour and 40 days left, five hours of beam per day is enough to get to 5/fb. 10 h/day would give us 7/fb, and 17-18 h/day would give 10/fb.

      Assuming no big surprises, we can be fairly certain of a total of 7-8/fb at the end of 2011, right?

      • carla says:

        @Jolly the luminosity falls to half its initial value over about 10 hours. I think 60/pb/day, giving 5-6/fb for the year, is more realistic.

      • JollyJoker says:

        True, fill 2105 only gave 7/pb per hour over 16.5 hours. Of course, if the typical fill lasts that long, the number of hours of beam per day is going to be high :)

      • carla says:

        @Jolly 3 fills of 4 hours each would give about 120/pb. Long fills approaching eighteen hours start to become inefficient with good turnaround times of 3 hours.
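
        That trade-off can be sketched with a toy model of exponential luminosity decay plus a fixed turnaround time; the lifetime and peak rate below are assumed illustrative values, not machine parameters:

```python
import math

# Toy model of the fill-length trade-off: exponential luminosity decay plus a
# fixed turnaround between fills.  The peak rate and lifetime are assumed
# illustrative values, not machine parameters.

PEAK_PB_PER_H = 12.0   # ~3.3 /nb/s expressed in /pb per hour
TAU_H = 15.0           # assumed luminosity lifetime
TURNAROUND_H = 3.0     # refill, ramp and squeeze between fills

def pb_per_day(fill_hours):
    """Average daily delivery when fills of this length are repeated back to back."""
    per_fill = PEAK_PB_PER_H * TAU_H * (1.0 - math.exp(-fill_hours / TAU_H))
    return per_fill * 24.0 / (fill_hours + TURNAROUND_H)

for hours in (4, 8, 12, 18):
    print(f"{hours:2d} h fills: ~{pb_per_day(hours):.0f} /pb/day")
```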

      • @Carla So, why do they insist on doing long fills?

      • carla says:

        @Daniel because long fills verify a good, stable setup, and currently they’re still increasing intensity. Also, more beam dumps occur between the start of a fill and stable beams than during collisions. Typical turnaround times are 4-5 hours right now.

  8. Wow, 118/pb in one fill!

    • Philip Gibbs says:

      A new record. The previous one was just over 100/pb, but that took 26 hours compared to 17 hours this time.

  9. I’m puzzled by the long downtime between successful fills. Now we are at 22 hours and counting.

    • carla says:

      @conrad this is typical, so don’t be surprised: they’re still commissioning the machine as it enters new territory with increasing bunch intensity – 1.4×10^11 protons per bunch today, 1.35×10^11 yesterday.

  10. Beams were just dumped right before collisions began. Aren’t those long fills stressing the system?

  11. Philip Gibbs says:

    Good fill in progress now. Started around 3.3/nb/s

    • carla says:

      I’m surprised the gain in luminosity isn’t as great as the increase in the number of protons per bunch would suggest. With 1.31×10^11 p/b they reached a luminosity of 3.1/nb/s, so with 1.39×10^11 protons per bunch I’d expect (1.39/1.31)^2 × 3.1 ≈ 3.5/nb/s, whereas they only got to 3.3/nb/s. Losses must be a problem.
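
      A quick check of that expectation, assuming the peak luminosity scales with the square of the bunch charge for fixed emittance and optics:

```python
# Expected peak luminosity from the bunch-intensity increase alone, assuming
# L scales with the square of the protons per bunch and everything else fixed.

l_old = 3.1                        # /nb/s reached with 1.31e11 protons per bunch
scale = (1.39e11 / 1.31e11) ** 2   # intensity-squared scaling factor
print(f"Expected: {l_old * scale:.2f} /nb/s, observed: ~3.3 /nb/s")
```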

    • Philip Gibbs says:

      Either losses or it is harder to get small emittance with higher intensity

  12. Tony Smith says:

    An LHC performance and statistics web page says that
    Fill 2110 of 15 September 2011 had
    6:20 Duration of Stable Beams
    and
    235 GeV Energy
    and delivered about 58/pb to ATLAS and 57/pb to CMS.

    That Fill 2110 was the only Fill with Stable Beams
    since 15 April 2011 with Energy not 3500 GeV.

    What happened and why ?

    Tony

  13. Tony Smith says:

    If the 235 GeV Energy stated on LHC performance and statistics web pages is “Just a data error”
    then
    why has it not been corrected ?

    Is the current 18:41:43 STABLE BEAM at the energy
    level of 450 GeV at which TCDIs were checked
    as of 13:43:05 ?

    Tony

    PS – What does it mean that
    “… the official stats did not count the last run
    because it was recorded at the wrong energy …” ?

    Was that “wrong energy” the 235 GeV ?

    Does “wrong” refer to a typographical-type error
    and that the actual energy of fill 2110 was 3500 GeV ?
    or
    Does “wrong” mean that the real energy of fill 2110
    was in fact 235 GeV which was “wrong” for physics ?

    • Philip Gibbs says:

      The injection energy is 450 GeV, so they are not really set up to run at lower energies. Anyway, the run was at 3500 GeV, we saw it.

      I don’t know why they don’t fix the errors on the stats page, but it is probably just a question of priorities, and maybe they don’t have a convenient interface for editing the database. It’s just an info system. I am sure the experiments work out the recorded luminosity they use in their analyses independently.

  14. Tony Smith says:

    I have a naive question based on looking at some of the LHC web pages:

    Consider slide 29 of the ATLAS Higgs presentation at Lepton-Photon for Golden Channel H to ZZ to 4l
    which shows histogram background
    and 27 events “selected by the analysis algorithm”.

    Disregard the simulated signal part of the histogram,
    so that the plot is just of events and background
    making it easy to see where events exceed background
    and by how much (i.e., rough analysis of relatively raw data).

    Would it be easy to make an automated plot whereby

    background would be automatically updated based on the total integrated luminosity
    and
    the analysis algorithm would automatically add new events from newly acquired luminosity,
    since the H to ZZ to 4l events should be so clearly distinct that their acquisition could be easily automated
    ??

    If so,
    then we could see in almost real time how the Golden Channel raw event data is coming along,
    which would be to me a lot more exciting than a football game
    and
    would attract a lot of interest not only in the physics community but also perhaps in the general public,
    showing them how data accumulates leading to better and better understanding.

    Tony

    • Philip Gibbs says:

      It’s a nice idea but I don’t know how easily it can be done. I imagine it is quite difficult to set up but not impossible. There may also be issues about how reliable the information needs to be if it is not being checked and approved by people at every stage.

  15. Tony Smith says:

    Today (21 September) LHC fill 2135 ran for a very nice 11:17 of stable beams, delivering 60/pb to ATLAS and 73/pb to CMS.

    The peak luminosity for 2135 was about 2600
    whereas for the previous 8 good runs it was about 3200.

    Just before 2135 the lhcstatus twitter said that there was a series of vacuum triggers at point 2
    so that they were starting to fill at lower bunch intensity.
    Does that account for the lower luminosity of 2135?

    If so, I think that the decisions were very good, and that in the long run up to Halloween it is good to give stability priority over pushing intensity, luminosity etc to record levels.

    If there are about 38 days left for proton runs in 2011,
    if each day can give 10 hours of stable beam and 70/pb,
    that would be 38×70 = 2660/pb
    in addition to the 3471/pb already done,
    for
    a total of 6131/pb = 6/fb
    which I think would be great.

    If 6/fb are in by Halloween,
    then a histogram for the Golden Channel (my favorite) might come fairly soon after that.
    More detailed elaborate analysis of it and other channels could come later, but the simple Golden Channel histogram is something that should quickly give very useful hints about how the Higgs really works.

    Tony

    • Philip Gibbs says:

      If pushing for higher luminosity has identified a vacuum problem, then it is much better that they found it now rather than early next year. It gives them time to study it and plan for a good fix during the winter shutdown. It shows that they were right to push to the limits. They may yet find a quick fix and continue the adiabatic increase in bunch intensity this year.
      Update: looks like some scrubbing should clear it

      It is nice to think that they could publish the golden channel histogram quickly but my impression is that their strategy is heading in the opposite direction. The signals in the data have reached a point where they need care to interpret and they don’t want to make it look like they don’t understand the plots. I think they will not now release anything new on Higgs until they have done a deeper and broader analysis with combinations and signal plots for individual channels and more.

      Either they will publish a comprehensive set of combinations based on LP11 data or they will wait until the end of year data is ready with hopefully a less ambiguous Higgs signal than the ones we have been seeing lately.

      There are no big conferences until we get to Aspen and Moriond next year so there will not be the preset dates that inspired quick releases this summer. They can release results at any time with a seminar but this would happen when they are ready rather than when deadlines demand. The Santander meeting has been turned into a closed discussion so I suppose that they are looking at what the data is saying and what they need to do to rub out the question marks.

    • carla says:

      Let’s be realistic: using a Hübner factor between 0.2 and 0.3 gives 5-8 hours/day, or 45-70/pb/day. So for the remaining 38 days that’s another 1.7-2.6/fb, giving 5-6/fb by year end. Anything can happen, such as another lightning strike causing the loss of a week as happened in late August.

  16. ondra says:

    Any idea, guys, what’s wrong with ATLAS data recording? It’s missing data for the last and current fills.

  17. Tony Smith says:

    Since the peak luminosity was reduced from about 3200 to about 2600
    (that is, for stable fills 2135 and 2138)
    the ratio of delivered luminosity has been
    ATLAS / CMS = 60/73 = 43/53 = about 80 per cent
    while
    for the last fill at 3200 the ratio was 77/78 = about 100 per cent
    so
    maybe CMS somehow handles the lower luminosity better than ATLAS ??
    That is just a guess on my part,
    and I have no idea exactly why that might be.

    Tony

  18. Tony Smith says:

    Fill 2140 was even more extreme:
    ATLAS saw peak lumi of about 2300
    but CMS saw peak lumi of about 3000
    and
    the ratio of delivered luminosity was

    ATLAS / CMS = 2438 / 49910 = about 5 per cent

    so I see ondra’s point that something might be wrong
    but
    I don’t know what it is.

    Tony

  19. LHC status page says fill 2141 is for PHYSICS with 1.25e11 protons per bunch. What is the typical value?

  20. Tony Smith says:

    The Register has an article by Gavin Clarke at
    http://www.theregister.co.uk/2011/09/22/cern_coverity/
    that says in part:

    “… using the application of commercially available static-code analysis tools from development testing specialist Coverity … Tens of thousands of bugs have been eliminated from the program CERN’s atom-smashers are using to identify Higgs boson

    CERN says it has squashed 40,000 bugs living in ROOT, the C++ framework it is relied upon to store, crunch and help analyse petabytes of data from the Large Hadron Collider (LHC). The massive collider generates 15PB of data each year … ROOT contains 3.5 million lines of code while CERN’s army of 10,000 physicists have surrounded that core with a further 50 million lines of software they have built to try and sift out Higgs boson

    CERN reckons the bugs had helped muddy results from the LHC, throwing them off the Higgs-boson scent.
    Further,
    there were programs built by those 10,000 scientists that could never be properly tested prior to Coverity. …”.

    I wonder which channels were most adversely affected by the bugs ???

    Tony
