10/fb LHC Update

The Large Hadron Collider has now delivered over 10/fb at 8 TeV during 2012, in the middle of a long 11-week summer run between technical stops. The 10/fb figure is for ATLAS and CMS, but LHCb has also passed 1/fb in 2012, adding to their 1/fb from last year.

About 3.5/fb have been added in the first five weeks, after a slow start in which time was taken out of pp luminosity production for floating machine development (MD) sessions, 90m physics (TOTEM and ALFA) and van der Meer (VDM) scans. The collider has now settled into a steady stretch, adding about 1/fb each week. Peak luminosities are a little down compared to before the last technical stop, due to problems with beam instabilities, but if they can keep things steady the totals will be healthy. There are six more weeks before the next stop, with time scheduled for more floating MD and 500m physics. We can expect the run to end on 16th September with about 15/fb recorded this year, in addition to the 5/fb from last year.
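
As a sanity check on that arithmetic, here is a trivial sketch (the weekly rate and dates are my reading of the schedule, not official numbers):

    # Rough projection of 2012 integrated luminosity (illustrative only).
    delivered = 10.0      # /fb delivered so far at 8 TeV
    rate = 1.0            # /fb added per week in steady running
    weeks_left = 6        # weeks until the next technical stop (~16 September)
    projected = delivered + rate * weeks_left
    print(projected)      # ~16/fb delivered; recorded is a little lower, hence ~15/fb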

[Image: LHC summer running schedule]

While this run is in progress we can expect to see results from data taken before the last technical stop at a series of specialised conferences (SUSY 2012, TOP 2012 etc.); see the viXra calendar for details. It seems most likely that the next Higgs update will come around early October, with 20/fb of data available. This would be in keeping with past updates, where the amount of data has roughly doubled each time. With the Higgs discovery behind them the next update may be a little more low-key, but I think there is a good prospect of a significant excess beyond the Standard Model being reported in the diphoton channel. It may even pass three sigma in one of the experiments.
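
To see why three sigma is within reach, a naive statistics sketch (assuming the excess is real, that significance grows like the square root of the dataset, and a guessed ~2 sigma starting point per experiment, which is my assumption rather than an official number):

    from math import sqrt
    # Significance scales roughly as sqrt(luminosity) for a real signal.
    z_now = 2.0                         # assumed ~2 sigma diphoton excess at ICHEP
    lumi_now, lumi_next = 10.4, 20.0    # /fb, from the update list below
    print(z_now * sqrt(lumi_next / lumi_now))   # ~2.8 sigma, knocking on 3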

The list of LHC Higgs updates looks roughly like this:

  • Moriond, March 2011 – 0.04/fb
  • EPS, July 2011 – 1.2/fb
  • Lepton-Photon, August 2011 – 2.3/fb
  • CERN council, December 2011 – 4.9/fb
  • ICHEP, July 2012 – 10.4/fb
  • October 2012 – about 20/fb ?
  • Dec/Jan 2013 – ??

Assuming they update at around 20/fb in October, can they double the dataset one last time by the end of the year? The final 10-week proton run schedule looks like this:

[Image: LHC autumn running schedule]

If they run with the same parameters they will add another 10/fb to the total luminosity, but with the target for the year already achieved I think they will want to do something different for this run. The scheduled scrubbing run after the technical stop only makes sense if they are considering the option of running at 25ns bunch spacing. Earlier MD tests at 25ns have worked, but with reduced beam lifetimes; the scrubbing run will help clean the beam pipes to make such running more successful. To run at 25ns they will have to reduce the bunch intensity, both because the PS will need to split the bunches in half one extra time before injection and because the present high intensities at 25ns would cause too much heating. This means that luminosity will not increase at 25ns unless they can also improve the squeeze.
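
To see why, a toy scaling sketch (all numbers are round figures I have assumed, not official machine parameters): peak luminosity goes like the number of bunches times the square of the bunch intensity, so doubling the bunches while halving the intensity loses a factor of two.

    # Peak luminosity ~ n_bunches * N^2 / beta*, up to constant factors.
    def rel_lumi(n_bunches, ppb, beta_star):
        return n_bunches * ppb**2 / beta_star

    base = rel_lumi(1380, 1.6e11, 0.6)   # 50ns running (assumed round numbers)
    ns25 = rel_lumi(2760, 0.8e11, 0.6)   # 25ns: twice the bunches, half the intensity
    print(ns25 / base)                   # 0.5 -> luminosity halves at the same squeeze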

In fact the MD tests of a tighter squeeze, down to beta* = 0.2m, went very well (as far as I know). The current beta* is 0.6m, so they have plenty of scope to at least double the luminosity with a tighter squeeze. The 25ns spacing also means less pileup, making an increase in luminosity more manageable. I don't know what the actual plans are, but I think a 25ns final run at beta* = 0.3m would make complete sense if they can get it to work. As well as giving them a chance of doubling the integrated luminosity yet again, it would be a valuable trial for the running after the long shutdown, which will certainly have to be at 25ns spacing. Running at 25ns this year is a risk, but definitely one worth taking.
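
Continuing the same toy scaling (again with assumed round numbers), the tighter squeeze wins the lost factor of two back, and pileup per crossing drops because the luminosity is spread over twice as many bunches:

    # Toy scaling continued (self-contained; all numbers assumed).
    def rel_lumi(n_bunches, ppb, beta_star):
        return n_bunches * ppb**2 / beta_star

    base     = rel_lumi(1380, 1.6e11, 0.6)    # current 50ns running
    squeezed = rel_lumi(2760, 0.8e11, 0.3)    # 25ns with beta* = 0.3m
    print(squeezed / base)                    # 1.0 -> squeeze recovers the lost factor
    # Pileup per crossing ~ luminosity / n_bunches: the same luminosity
    # spread over twice as many bunches means half the pileup per event.
    print((squeezed / 2760) / (base / 1380))  # 0.5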


17 Responses to 10/fb LHC Update

  1. [...] we could be talking about another 10/fb on top of that. More information in Philip Gibbs, “10/fb LHC Update,” viXra log, August 4, 2012. [...]

  2. Tony Smith says:

    Phil, you say about what the LHC might see at 20/fb:
    “… there is a good prospect for reporting a significant excess beyond standard model in the diphoton channel. It may even pass three sigma in one of the experiments. …”.

    What do you think of the paper by Baglio, Djouadi, and Godbole
    at arXiv 1207.1451 where they say:
    “… the discrepancy between the theoretical prediction
    and the measured value of the gg to H to digamma rate,
    reduces to about one standard deviation
    when the QCD uncertainties are taken into account …”.

    Tony

    • Philip Gibbs says:

      I have no particular expertise in this area, but just using common sense as an outsider I would make some observations.

      The experiments are supposed to take into account all types of uncertainty, including theory errors, so the concern is only that they have done so wrongly. This can easily happen, and at this stage the excess is as likely to fade away as it is to strengthen. As well as the statistics, there are many sources of systematic error that are common to both experiments, and these could be reinforcing each other to give the observed result.

      The other channels are in quite good agreement (so far) with theory, so whatever might cause the diphoton excess by error has to apply to this channel in particular. The gg to H production part appears to be in good agreement via WW and ZZ. That leaves us to worry about the H to diphoton decay itself, unless there is a conspiracy of the statistics making the other channels look better than they are.

      I find it hard to believe that there are QCD processes that can enhance the decay of the Higgs to diphotons and yet are so poorly understood. If the authors of a paper say that such errors are present, they are claiming to know more about possible sources of error than the experimenters and all the theorists who have worked for the collaborations. Since the collaborations themselves have not yet made any strong claims about the excess, there is not much conflict here.

      We have seen signals come and go in unexpected ways before; it is the nature of the beast. I am just saying that if there is something real about this one, it will be getting quite significant in the next update. The experiments will then have to convince us that they have taken all sources of error into account correctly. If people outside are raising questions, I am sure many more will be asked internally. If they make a strongly worded claim, we will know there is agreement that they have done it all correctly. Any caveats will be signs of uncertainty.

  3. Tony Smith says:

    Djouadi and Godbole (two authors of arXiv 1207.1451) are both
    at the CERN Theory Unit in Geneva,
    so they are not really “outside… authors”.

    In more detail, they say:
    “… because RZZ seems to be in agreement with the SM and there is no sign of a new particle in direct searches at the LHC, the H → γγ excess will be particularly difficult to accommodate in relatively simple and/or well motivated SM extensions …

    … in the cross section for … gluon–gluon fusion … the scale dependence, the parton distribution functions and the use of an effective field theory approach to evaluate some higher order corrections … are about 10% each and if they are combined according to the LHCHWG, they reach the level of 30% when the EFT uncertainty is also included. However, in the experimental analyses … the net result is as if they were not included in the total errors given by the ATLAS and CMS collaborations.

    This is particularly the case for σ(gg → H) × BR(H → γγ), where the ≈ 2σ discrepancy with the SM prediction reduces to the level of ≈ 1σ if the 30% theory uncertainty is properly considered …”.

    Of course, we should know more with 20/fb around mid-September.
    When is an announcement of the 20/fb results likely to happen?

    Tony

    • Philip Gibbs says:

      I don't doubt their credentials, and they could easily be completely correct about the QCD corrections. However, I think you need to take all the results in combination, and use the unofficial combinations to get the true state of play.
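
      For what an unofficial combination amounts to at its simplest, a minimal sketch (assuming the experiments are independent and equally sensitive; the z-values are placeholders, not measured results):

          from math import sqrt
          # Stouffer-style combination of two independent, equally weighted significances.
          def combine(z1, z2):
              return (z1 + z2) / sqrt(2)
          print(combine(2.0, 2.0))   # two ~2 sigma excesses give ~2.8 sigma combined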

      They are questioning the gluon fusion process, but if there were a large theoretical error there it could also show up in the other channels, so there must be a minor conspiracy of other errors and fluctuations to cancel it. You then also need further errors in the diphoton decay to conspire towards this effect. It is possible, and only requires minor coincidences; I have said repeatedly myself that systematic errors could be the answer.

      However, I do think it is a complete red herring to say that searches have eliminated “well motivated” and “simple” extensions of the standard model. Nobody should be looking for simplicity at this energy scale; consistency is all that is required. For all we know, we may be approaching an energy scale with a very rich spectrum of new particles. “Well-motivated extensions” apparently means things like minimal extensions. There are many other possibilities that have not been ruled out.

      Consider the fact that the luminosity gathered so far is just enough to detect the Higgs in some of its channels. There could easily be other new particles in the same energy range that decay in ways that are even harder to detect; hadron colliders have huge blind spots. To account for the diphoton excess you just need a charged boson (scalar or vector) or a vector-like fermion. It could have any lepton number, baryon number, C, P, R-parity, isospin etc. It could even be something more exotic, like a magnetic monopole. The decay will depend on these quantum numbers and could involve other new particles.

      I think the theorists have concentrated too much on specific constrained models and should instead be enumerating the possible characteristics of individual particles that could account for this without showing up in other searches. I would bet that 90% of the more obvious possibilities could be eliminated with careful analysis, but there would be a few things that can't. They should be figuring out what those possibilities are and thinking about how the searches can be modified to look for them.

  4. ondra says:

    CMS and ATLAS should reach 10/fb recorded, and LHCb 1/fb recorded, today. Things are slower, but moving forward after all. Any idea when LHCb will update their search for rare Bs decays?

    Ondra

    • Philip Gibbs says:

      LHCb are in less of a hurry to rush out results. The Heavy Flavour CP conference for this year already passed, in May. They could report at any time, but perhaps they will just wait until all the data is in and prepare results for next year. If anyone knows better, please do chime in.

    • ondra says:

      There is a workshop, LHCb week in Davos (September 3-7, 2012), http://www.physik.uzh.ch/~strauman/Davos/ , so maybe something could be shown there.
      For the parameters of the run after TS3, there is the CERN Machine Advisory Committee 6th Meeting next week: http://indico.cern.ch/conferenceDisplay.py?confId=198003 .
      I think that after sorting out the problems that followed TS2 the situation is getting better this week, so they could continue with the same parameters. After all, CMS in its post-Higgs presentations often talks about 30/fb after 2012 (I guess 5+25/fb at 7+8 TeV), so probably 25/fb; 30/fb in 2012 alone would be very optimistic, but could be the target with an extended run.

    • Philip Gibbs says:

      Thanks, I've added the MAC to the calendar so I don't forget it. I'd like them to go for 25ns because they need to try it out for later. It will be hard to increase luminosity that way because they will take a few weeks to step up the number of bunches. They would probably get a lot less luminosity unless they can use a much better squeeze.

    • ondra says:

      I agree, Philip, there is a trade-off between luminosity and gaining experience. A 25ns run would be very useful for the 13-14 TeV run after LS1. Running at 13 TeV with 25ns could be very different from 8 TeV with 50ns; after all, Steve Myers said it will be like a new machine.

  5. carla says:

    They seem to have improved the beam losses that were causing instabilities, which appeared to have something to do with the collisions at ALICE. They're at 6.5/nb/s with initial beam intensities of 2×10^14 protons, having managed 1.12×10^14 a few weeks ago, so expect 7/nb/s over the coming weeks :)

  6. carla says:

    Do you know why they’re running at 474 bunches at the moment?

    • Philip Gibbs says:

      A few days ago the CMS solenoid was accidentally drained of its current, raising its temperature to 70K. They can run and record without the solenoid, but I don't think they would be able to do the reconstruction well with it off, so CMS is essentially useless until it cools back down. Meanwhile they are doing MD instead of physics.

      I don't know why they can't use the time to do something like the 500m physics planned for later, but it is the middle of August when many people are away, so maybe that makes it hard to bring some of the planned work forward. It does look like they are doing useful studies though, so perhaps they will be able to fix some of the instability problems and make up the lost time with better luminosity.

      There has not been much info on how they are getting on with fixing CMS and there could be other factors I am not aware of.

    • ondra says:

      Hi carla, Philip is right. There was a problem with the CMS solenoid; it's back now and hopefully it will stay that way. There was also an ALICE magnet polarity reversal to NEGATIVE, which required a complete validation cycle, hence this run with a low number of bunches.
      They also used the CMS downtime for some more testing of the Q20 optics in the SPS, which could give an emittance of 1 micron instead of 2-3, and for other machine tests.
      The good news is that they could run with 1.55e11 ppb and reach a luminosity of 7e33.
      For more, check the weekly report:

      http://lhc-commissioning.web.cern.ch/lhc-commissioning/news-2012/presentations/week32/LHC-progress-130812.pptx
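
      That 7e33 figure is roughly what the standard formula gives; a back-of-envelope check (every parameter below is my assumption for illustration, apart from the 1.55e11 ppb quoted above, and is not taken from the report):

          from math import pi
          # L = n_b * N^2 * f_rev * gamma * F / (4*pi*eps_n*beta*), typical 2012 values.
          n_b   = 1380      # colliding bunches (assumed)
          N     = 1.55e11   # protons per bunch (from the comment above)
          f_rev = 11245.0   # LHC revolution frequency, Hz
          gamma = 4263.0    # relativistic gamma at 4 TeV per beam
          eps_n = 2.5e-6    # normalised emittance, m (assumed)
          beta  = 0.6       # beta* in m (assumed)
          F     = 0.8       # geometric crossing-angle reduction factor (assumed)
          L = n_b * N**2 * f_rev * gamma * F / (4 * pi * eps_n * beta)  # m^-2 s^-1
          print(f"{L / 1e4:.1e} cm^-2 s^-1")   # ~7e33, consistent with the quoted figure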

  7. Philip Gibbs says:

    Good to see that they are getting record luminosities again.

    • carla says:

      Yeah, they've hit 1.6×10^11 ppb and 7/nb/s, which were considered the maximum targets at the beginning of the year, so it seems the only improvements still possible are in reducing losses during squeeze and adjust, and better running efficiency.

      Fingers crossed, they're still on schedule to deliver 15/fb by the end of this slot, and another well-deserved drink at the bar, despite the minor hiccup with the CMS solenoid.

    • Philip Gibbs says:

      From the slides of the MAC meeting it looks like they want to try more 25ns tests after the next technical stop and are trying to make the case that they have enough luminosity. I think collecting more luminosity is still very important because of the Higgs cross-section anomalies.

      I think it would be a fair compromise if they tried a 25ns run with smaller beta* and emittance as suggested above. Without video recordings of the MAC talks it is not clear what they really plan to do yet.
