Beyond-standard-model CP violation has been reported by Mat Charles for the LHCb collaboration at the Hadron Collider Physics conference today. Here is the relevant plot, in which the cyan-coloured band indicating the measurement fails to cross the black dot predicted by the standard model.
The numerical result, already rumoured at the weekend, is ΔACP = -0.82% ± 0.25%, which is just over 3 sigma significance.
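As a quick sanity check on the quoted significance, dividing the central value by its uncertainty gives the number of standard deviations (treating the error as a single Gaussian, which glosses over the statistical/systematic split):

```python
# Sanity check of the quoted significance for ΔACP,
# treating the quoted error as one Gaussian uncertainty.
value = -0.82   # central value, in percent
error = 0.25    # one-sigma uncertainty, in percent

significance = abs(value) / error
print(f"{significance:.1f} sigma")  # about 3.3 sigma
```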
This measurement is sensitive to new physics such as higher mass particles with CP violating interactions so that could be the explanation. On the other hand it is also a very tricky measurement subject to background and systematics. The significance will improve with more data and already twice as much is on tape so this is one to watch. The interesting thing will be to see if the phenomenologists can account for this result using models that are consistent with the lack of other BSM results from ATLAS and CMS.
The Hadron Collider Physics conference starts today in Paris and we eagerly await updates for various searches including the Higgs. 5/fb of luminosity has been collected by each experiment, but it is too soon for the analysis of the full data to be out, and this week we are only expecting results at 2/fb to be shown (though surprises are always possible). Indeed, ATLAS have recently revealed updates for three of the diboson Higgs channels at 2/fb in conference notes and at other conferences. These do not make much difference, but an update to the diphoton search would be worth seeing; it has so far only been shown by ATLAS at 1/fb. CMS have only released results at 1.6/fb for the main Higgs decay modes, so they are even more overdue for updates.
While we are waiting for that we can look forward to next year when results for 5/fb will be revealed, probably in March. When the results are combined we will see 10/fb and here is a plot showing the expected significance at that level. This is for 10/fb at CMS which can be taken as a good approximation for the ATLAS+CMS combination at 5/fb for each.
From this you can see that they expect at least 4 sigma significance all the way from 120 GeV to 550 GeV, which suggests that a good clear signal for the Higgs is almost certain if it exists, but not so fast. There are a couple of caveats that should be added.
Firstly the WW decay channels have been very good for excluding the Higgs over a wide mass range. Here is the viXra combined plot using 2/fb of ATLAS data and 1.5/fb from CMS.
This is only a rough approximation to what would be produced if they did an official version, because it assumes a flat normal distribution, uses a linear interpolation for the CMS points, and ignores any correlations.
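To illustrate the kind of naive combination described above, here is a sketch of inverse-variance weighting of two independent Gaussian measurements at a single mass point. The numbers are invented, and the real official combinations use full likelihoods with correlations:

```python
import math

# Toy combination of two independent Gaussian measurements of a
# signal strength mu at one mass point, by inverse-variance
# weighting. Values are invented for illustration; an official
# combination would use the full likelihoods and correlations.
measurements = [(0.4, 0.5), (0.7, 0.6)]  # (mu, one-sigma error)

weights = [1.0 / err**2 for _, err in measurements]
mu_comb = sum(w * mu for (mu, _), w in zip(measurements, weights)) / sum(weights)
err_comb = 1.0 / math.sqrt(sum(weights))

# Rough 95% CL upper limit on mu in the Gaussian approximation:
upper95 = mu_comb + 1.645 * err_comb
print(f"combined mu = {mu_comb:.2f} +/- {err_comb:.2f}, 95% limit ~ {upper95:.2f}")
```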
Within those limitations we get an exclusion from 140 GeV to 218 GeV, with a broad excess around 2 sigma extending all the way from 120 GeV to 160 GeV. A Standard Model Higgs in this region would only have a width of a few GeV and no bump of that sort is seen, so what does it mean? ATLAS and CMS will probably need to ponder this question for a long time, and gather more data, before agreeing to approve results like this along with a suitable explanation. For now you should just bear in mind that this plot suffers from large backgrounds and poor energy resolution due to the use of missing energy to identify the two neutrinos. These effects have been worsened by high pile-up this year. I suspect that this channel will have to be used only where it provides a 5 sigma exclusion and should be left out when looking for a positive signal.
For this reason I have added a red line to the projected significance plot above showing the expected significance for just the diphoton plus ZZ to 4 lepton channels. These decay modes have very good energy resolution because the photons and high energy leptons (electrons and muons) are detected directly with good clarity and are not affected by pile-up. I think that the best early signal for the Higgs boson will be seen in a combination of these channels alone. The projected significance plot shows that with the data delivered in 2011 we can expect a signal or exclusion at a level of significance ranging from about 3 sigma to 6 sigma in the mass range of 115 GeV to 150 GeV where the Higgs boson is now most likely to be found.
Does this mean that we will get at least a 3 sigma “observation” for the Higgs by March? No, not quite. There is one other obvious caveat that is often forgotten when showing these projected significance plots. These are only expected levels of significance and, like everything else, they are subject to fluctuations. Indeed, given twenty uncorrelated mass points we should expect fluctuations of up to 2 sigma somewhere over the range. How could this affect the result? The next plot illustrates what this could mean, assuming an expected significance of 4 sigma.
In this plot the green line represents the expected level for a positive signal of a standard model Higgs, while the red line represents the level where there is no Higgs. The data points have error bars of the size you get when you expect a 4-sigma level of significance. So point A shows where the bars are expected to sit if the SM Higgs exists at a given mass value, and point B shows where the bars are expected if there is no Higgs. If they get observed data in these locations they will be able to claim a 4-sigma observation or exclusion, but remember that fluctuations are also expected. Point C shows what happens when the Higgs is there but an unlucky one sigma fluctuation reduces the number of observed events. The result is a reduced significance of three sigma. Likewise point D shows an unlucky one sigma fluctuation when there is no Higgs, which still gives a healthy three sigma exclusion. But remember that we expect fluctuations of up to two sigma somewhere in the range. Point E shows what happens when a Higgs is there but an unlucky two sigma fluctuation hits that mass point, and point F shows what happens when there is no Higgs with an unlucky two sigma fluctuation. The points are the same, corresponding to either a two sigma signal or a two sigma exclusion. We have already seen some points that look just like this at the summer conferences. This is why the CERN DG has cautiously promised to identify or exclude the Higgs only by the end of 2012 and not by the end of 2011. More optimistically we can also hope for some lucky fluctuations. If they fall at the mass where the Higgs actually lives we will get a 6 sigma discovery-level signal like point G instead of merely a 4-sigma observation.
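The role of fluctuations can be illustrated with a toy simulation (purely illustrative numbers): at each of twenty roughly independent mass points, take the observed significance to be the expected 4 sigma plus a unit Gaussian fluctuation, and somewhere in the range a swing approaching 2 sigma is likely.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

expected = 4.0   # expected significance (sigma) at each mass point
n_points = 20    # roughly independent mass points across the search range

# Observed significance = expected value + a unit Gaussian fluctuation
observed = [expected + random.gauss(0.0, 1.0) for _ in range(n_points)]

print(f"lowest:  {min(observed):.1f} sigma")
print(f"highest: {max(observed):.1f} sigma")
```

Rerunning with different seeds shows the same qualitative picture: with twenty points, the extreme fluctuations routinely approach two sigma in either direction.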
It’s a simple point and my explanation is a little too long-winded, but I think this had to be said clearly before the next results come out, in case people do not see what they expected to see. With another year of data 10/fb becomes (perhaps) 40/fb and 4 sigma becomes 8 sigma. Even with unlucky 2 sigma fluctuations they will be left with 6 sigma signals. The results will probably be good enough to claim discoveries even for the individual experiments and some individual decay channels, but for this year’s data there could still be a lot of ambiguity to mull over.
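The projected numbers follow the usual rule of thumb that statistical significance grows roughly as the square root of the integrated luminosity (a back-of-envelope scaling, not an official projection):

```python
import math

def scaled_significance(sig_now, lumi_now, lumi_later):
    """Rule of thumb: significance grows like sqrt(integrated luminosity)."""
    return sig_now * math.sqrt(lumi_later / lumi_now)

# 4 sigma at 10/fb, projected to 40/fb after another year of data:
print(scaled_significance(4.0, 10.0, 40.0))  # 8.0
```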
The rumour mill is once again turning its rusty wheels, and there are suggestions that an interesting result will be revealed at Hadron Collider Physics conference in Paris next week. More on that in a minute.
You may think that things have been quiet lately, but there have been a lot of workshops going on. They have not been reported much, but of course we bloggers have been trawling the slides for anything new and exciting. In case you want to search for anything we might have missed, here is a convenient list of links:
One thing that turned up was an update to the Higgs -> WW analysis for ATLAS, upgrading it from 1.7/fb to 2/fb. The effect is not terribly exciting: nothing has changed.
So now we are waiting for the HCP conference, but not much is expected, or is it? The full schedule of talks can be found here. If this is to be believed, even the new update for H -> WW will not be shown. The only thing certainly new is the ATLAS+CMS combination of data shown at Lepton Photon nearly three months ago.
But then an organizer speaks of a last-minute talk being added and a comment over at NEW says “…or maybe something else violates CP at 3.5 sigma level.” So do we have a new rumour about – perhaps – a result from LHCb, or is someone just hyping the conference?
Apart from that the next big question is when will the next wave of Higgs results be revealed? They must have done more analysis at 2/fb, yet we have not had anything beyond 1/fb for the crucial diphoton search from ATLAS. I am sure they must have also looked at plots using 3/fb to 4/fb but nothing has been said, except a few vague rumours that I don’t find convincing.
Now they will be preparing the 5/fb plots that should be ready for approval in December. We may see them soon after but if the results are really so inconclusive we may have to wait for the 5/fb ATLAS+CMS combination. That means there may be nothing ready to show until Moriond in March, unless…
Rumour Update 24-Nov-2011: The rumour apparently concerns a measurement of ΔACP at 600/pb and will be shown in the last talk today at HCP11. This quantity is the difference between the CP asymmetries of neutral charmed D meson decays to a pair of kaons and to a pair of pions. It is not yet clear whether the rumoured 3.5 sigma result is merely a signal of CP violation or a deviation from the standard model.
This year all physics eyes are on the Large Hadron Collider as it approaches its promised landmark discovery of the Higgs Boson (or maybe its undiscovery). At the same time some physicists are planning the future for the next generation of colliders. What will they be like?
The answer depends in part on what the LHC finds. Nothing is likely to be built if there is no sign that it will do anything useful, but decisions are overdue and they have to make some choices soon.
Accelerators like the LHC that collide protons are at the leading edge of the Energy and Luminosity frontiers because they work with the heaviest stable particles that are available. The downside of colliding protons is that they produce messy showers of hadrons making it difficult to separate the signal from the noise. With the Tevatron and now the LHC, hadron colliders have been transformed into precision experiments using advanced detectors.
One technique is to capture and track nearly all the particles from the collisions making it possible to reconstruct the jets corresponding to interesting high energy particles such as bottom quarks created in the heart of the collision. Missing energy and momentum can also be calculated by subtracting the observed energy of all the particles from the original energy of the protons. This may correspond to neutrinos that cannot be detected or even to new stable uncharged particles that could be candidates for dark matter.
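In practice it is the transverse momentum balance that is used, since the colliding partons carry unknown longitudinal momenta. A minimal sketch of the idea, with invented particle momenta:

```python
import math

# Toy calculation of missing transverse momentum (MET): the negative
# vector sum of the transverse momenta of all reconstructed particles.
# The particle list is invented purely for illustration.
particles = [
    (35.0, 0.4),   # (pT in GeV, azimuthal angle phi in radians)
    (28.0, 2.9),
    (12.0, -1.7),
]

px = sum(pt * math.cos(phi) for pt, phi in particles)
py = sum(pt * math.sin(phi) for pt, phi in particles)

met = math.hypot(px, py)        # magnitude of the missing pT
met_phi = math.atan2(-py, -px)  # direction of the missing pT
print(f"MET = {met:.1f} GeV")   # about 9.1 GeV for these toy inputs
```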
High luminosities have been achieved making it possible to scour the data for rare events and build up a picture of the interactions with high statistics. As luminosity increases further there can be many collision events at once making it difficult to reconstruct everything that happens. The LHC is now moving towards a new method of operation where it looks for rare events producing high energy electrons, muons and photons that escape from the heart of the collision giving precise information about new particles that decayed without producing jets or missing energy. In this way hadron colliders are getting a new lease of life that turns them into precision tools very different from how they have been seen in the past.
So what is the future of hadron colliders? The LHC will go on to increase its energy to the design limit of 14 TeV while pushing its luminosity even higher over the coming years. Its luminosity is currently limited by the capabilities of the injection chain and the cryogenics. These could undergo an upgrade to push luminosities ten times higher so that each year they collect 50 times as much data as they have in 2011. Beyond that a higher energy upgrade is being planned that could push its energy up to 33 TeV. The magnets used in the LHC main ring today are based on superconducting niobium-titanium coils to generate magnetic fields of 8.5 tesla. Newer magnets could be built using niobium-tin to push the field up to 20 tesla to more than double the energy. If they could revive the tunnel of the abandoned SSC collider in Texas and use niobium-tin magnets it would be possible to build a 100 TeV collider, but the cost would be enormous. The high-energy upgrade for the LHC is not foreseen before 2030 and anything beyond that is very distant. Realistically we must look to other methods for earlier advances.
Is the future linear?
The largest linear collider built to date is the SLC at SLAC in Stanford, with a centre-of-mass energy of about 90 GeV. As hadron colliders reach their physical limits, physicists are returning to the linear design for the next generation of colliders. When accelerating in a straight line there is no advantage in using heavy particles, so linear colliders work equally well with electrons and positrons, which give much cleaner collisions.
The most advanced proposal is the International Linear Collider which would provide centre of mass energies of at least 500 GeV with 1 TeV also possible. The aim of the ILC would be to study the Higgs boson and top quark with very high precision measurements of their mass, width and other parameters. This may seem like an unambitious goal, but if the LHC finds nothing beyond the standard model in the data collected in 2011 this could be the best option. The standard model makes very precise predictions about the quantities that a linear collider could measure. If these can be checked, any deviations could give clues to the existence of new particles at higher energies. Such precision measurements have already been useful in predicting where the mass of the Higgs Boson lies, but once all the parameters of the standard model can be measured the technique will really come into its own. Finding solid evidence for deviations from the standard model would be the requirement to choose and justify the construction of the next collider at the energy frontier.
But there is an alternative. A new innovative design for a compact linear collider (CLIC) is being studied at CERN and it could push the energy of linear colliders up to 3 TeV or even 5 TeV. The principle behind CLIC is to use a high intensity drive beam of electrons at lower energy to accelerate another, lower intensity beam of electrons to a much higher energy. Just think of how a simple transformer can be used to convert a high current low voltage source of electricity into a low current high voltage source. CLIC does a similar trick, but the coils of wire in the transformer are replaced by resonant cavities. It is a beautiful idea, but is it worth doing?
The answer depends on whether there is anything to be found in the extended energy range. This is being explored by the LHC and so far nothing new has been seen with any level of certainty. There is still plenty of room for discovery but decisions must be made soon so the data collected in 2011 will be what any decision has to be based on.
It is going to be a hard choice. For me it would be swung towards CLIC if it could be the start of a design that could lead to even higher energies. Could the same trick be used a second time to provide even higher energies, or is it limited by the amount of power needed to run it? Do other designs have better prospects, such as a muon collider? Big money and decades of development are at stake so let’s hope that the right decision is made based on physics rather than politics.
Perhaps it is worth a poll. If it was a straight choice, which of these would you prefer to see international funds spent on?
Today is scheduled as the end of proton physics at the Large Hadron Collider and the last few fills are circulating this morning. The integrated luminosity recorded this year will end at about 5.2/fb each for CMS and ATLAS, 1.1/fb for LHCb and 5/pb for ALICE. For the remainder of this year they will return to heavy ion physics until the winter shutdown.
The good news this year has been the high luminosity achieved, with peaks at 3.65/nb/s. This compares with the expectation of 0.288/nb/s estimated before the 2011 run began. The higher luminosity has been made possible by pushing beam parameters (number of bunches, bunch intensity, emittance, beta*) to give better than expected performance. The not so good news is that out of 230 days that were available for physics runs only 55 (24%) were spent in stable beams. This was due to a barrage of technical difficulties including problems with RF, vacuum, cryogenics, power stability, UFOs, SEUs and more. There were times when everything ran much more smoothly and the time in stable beams was then twice the average. The reality is that the Large Hadron Collider pushes a number of technologies far beyond anything attempted before and nothing on such scales can be expected to run smoothly first time out. The remarkable amount of data collected this year is testament to the competence and dedication of the teams of engineers and physicists in the operation groups.
After the heavy ion runs they will start looking towards next year. There will be a workshop at Evian in mid December to review the year and prepare for 2012. Mike Lamont, the LHC Machine Coordinator will be providing a less technical overview for the John Adams Lecture on 18th November.
Brian Cox is a professor of physics at Manchester and a member of the ATLAS collaboration. He is very well-known as a television presenter for science, especially in the UK and has been credited with a 40% increase in uptake of maths and science subjects at UK schools. He is clever, funny and very popular. If you are in the UK and missed his appearance on comedy quiz QI last week you should watch it now (4 days left to view).
At the weekend the Guardian published a great question-and-answer article with Brian Cox and Jeff Forshaw, who I am less familiar with. The answers all made perfect sense except one:
How do you feel about scientists who blog their research rather than waiting to publish their final results?
BC: The peer review process works and I’m an enormous supporter of it. If you try to circumvent the process, that’s a recipe for disaster. Often, it’s based on a suspicion of the scientific community and the scientific method. They often see themselves as the hero outside of science, cutting through the jungle of bureaucracy. That’s nonsense: science is a very open pursuit, but peer review is there to ensure some kind of minimal standard of professionalism.
JF: I think it’s unfair for people to blog. People have overstepped the mark and leaked results, and that’s just not fair on their collaborators who are working to get the result into a publishable form.
I would be interested to know what Brian Cox was thinking of here. Which bloggers does he think see themselves as “the hero outside of science”, and what examples back up the idea that bloggers try to circumvent peer review?
It is not clear to me whether Brian Cox is referring to the internal review process that experimental collaborations go through or the peer review provided by journals as part of publication. Surely it cannot be the latter, because most science research, and especially that from CERN, is widely circulated long before it reaches the desk of any journal editor, not by bloggers but by CERN through conferences, press releases, preprints etc. So Cox must be talking about internal review, but that does not really count as peer review in the usual sense. In any case, people within a collaboration do not get away with blogging about results before approval.
There have been a few leaks of results from CERN and Fermilab before approval by the collaborations. For example, one plot featured here earlier this year came from a talk that turned out not to be intended for the public. However, by the time I had passed it on it was already in Google, having been “accidentally” released in a form that made it look like any other seminar where new preliminary results are shown. There were a few other examples of leaks, but none that fit what Cox is saying that I can think of.
Given his obvious dislike for blogs I can’t hold much optimism that Brian will comment here and elaborate on what he means, but it would be nice if he did. Otherwise perhaps someone else knows some examples that could justify his answer. Please let us know.
If you have been following our reports on new developments in the search for the Higgs Boson you may be itching to get involved yourself. Now you can, by joining LHC@Home 2.0, a new project for people to run simulations of LHC particle collisions on their home PCs.
Projects like this used to be difficult to set up because the software is written to run on Linux systems, but a new virtual machine environment from Oracle has made it much easier. CERN runs simulations on a powerful global computing grid, but you can never have too much CPU time for the calculations they have to do.
Running Monte Carlo simulations to compare with the latest experiments is an important part of the analysis that goes into the plots they show at the conferences. CERN have been making extraordinary efforts to show results as quickly as possible to the public, but these calculations are one of the limiting factors that keep us waiting. Getting the public involved in the process may be one way to solve the problem.
Now the two experiments need to get together to work out which is wrong and why. It is not a sure conclusion that D0 is right but it seems more likely that someone would see an effect that isn’t there by mistake than that someone would fail to see an effect that is there. This gives D0 a strong psychological advantage.
To find out what went wrong they have to compare the raw plots and the backgrounds as seen in these plots.
The differences are too subtle to see from just the visual image, and it does not help that they used different bins. There do appear to be significant differences in the backgrounds, while the data look quite similar. If that is the case then the problem is purely theoretical and they just need to compare their background calculations. However, the detectors are different, so perhaps the backgrounds should not look exactly the same. Only the people directly involved have enough details to get to the bottom of it.
I hope they will work it out and let us know because it would be nice to see that their understanding of their results can be improved to give better confidence in any similar future results.