Will the 100 TeV Hadron Collider get built?

March 7, 2014

The possibility of a 100 TeV hadron collider was first mentioned on this blog in 2011, long before your other favoured outlets got excited about it, but before we consider naming it the ViXra Legacy Hadron Collider it should be admitted that the idea was part of a plan formed as long ago as Snowmass 1996 in the US, even if it did take viXra to shake it back into the consciousness of physicists.

[Image: VLHC]

As I said at the time, it is going to be very hard to get funding for the VLHC because it will require the emptying of quite a lot of penny jars. It also has no guarantee of a discovery, unless you count finding no new physics as a discovery of the multiverse. I do buy that argument, but it is going to be a hard sell to the public, especially since a lot of physicists will disagree. The possibility of finding supersymmetry or some other mechanism that would solve the hierarchy problem and make the universe almost natural is a good case to make, but I am not sure it will be strong enough.

Already the hope of the US offering funding for this project is about as remote as SPT 0243-49, and for Europe it may not be much nearer. However, there is now a very real chance that China will pick up the tab. This is especially true if Japan confirms its plans to build the ILC, because China will not want to let Japan continue to have the most prestigious physics project in Asia. Apart from this, you will hear many arguments in favour of building a VLHC, including the following:

  1. Accelerator projects have produced spin-offs such as the World Wide Web, touch screens and MRI scanners.
  2. Although discoveries at the energy frontier have no technological benefit they make life worth living.
  3. Accelerators foster international collaborations that transcend international politics.
  4. A hadron collider is about the same price as a good aircraft carrier.
  5. A hadron collider will boost national prestige.
  6. For every x dollars a country spends on a collider y dollars are returned in engineering contracts.
  7. For every x dollars a country spends on a collider the value of the research skills obtained by students and post-docs has a value to that country's economy greater than x – y dollars.
  8. A hadron collider will not destroy the Earth.

In theory research spending is allocated by funding agencies that are independent of political parties, but we all know that in practice this is not true, and the bigger the amount being spent, the less true it is. The question then is which of these eight arguments would convince a politician. The case for spin-offs is rather flimsy and easily torn apart by the project's detractors, of whom there will be many chosen to advise the politicians. Points 2 to 4 are more likely to have a net negative effect on persuading your typical world leader to support the project. In particular, the last thing they want is academics fostering relationships that cut across the politicians' everyday international squabbles, whereas a better aircraft carrier is always high on their list of wants. The prestige argument brings some hope, but only in countries where the current leader or his offspring might still be in power when the thing bears fruit.

The case therefore rests on points 6 to 8. Points 6 and 7 seem to add up to a winning case, but someone needs to have done the accounting to prove it. Where are the reports from the LHC that count the economic benefit it brought to each country? Of course they don't exist, because if they did the politicians would just start squabbling about who got the best value for money.

This leaves the physicists with the job of proving point 8. With the LHC, the world-safety assessment was done as an afterthought, well after the project was already underway. Only physicists themselves are qualified to make the risk assessment and they have an obvious conflict of interest, so their case needs to be very convincing. For the LHC they were able to show that the collisions they were planning had already been carried out by cosmic rays in Earth's atmosphere a million times over, without any obvious catastrophe. Given the increased energy and luminosity required for the VLHC, this safety factor will be reduced to something much less convincing (I dare not say how small I think it will be in case someone starts quoting it). The case was also made that even more extreme physics has been tested by neutron stars, but it is less obvious that neutron stars are as vulnerable to physics accidents as Earth, or that they are not sometimes destroyed. I do not think for one second that a VLHC is dangerous, but we can only set limits on its safety, and there is a chance this point could prove a problem. Again, the chance of getting round this will increase if the country hosting the VLHC is not too democratic, but that may still leave a lot of people upset around the world.

I do very much want to see the VLHC built, but I have no idea how big the obstacles will turn out to be. I think it really depends on whether China takes a big interest. There are, however, many alternative experiments that could lead to progress in physics if the VLHC does not get approved. They may even be cheaper and possible on a much shorter time-scale. As I have remarked before, I am especially in favour of the project to build a large proton decay experiment in the Antarctic using a scaled-up version of IceCube. I am disappointed that this experiment is not getting more support from theorists. I don't think we should be talking down alternatives just to talk up the VLHC, or we may end up with nothing.


Moriond Higgs Update

March 6, 2013

The latest Higgs updates are now being presented at Moriond. CMS have kicked off this morning with a presentation of bosonic decays including WW and ZZ, but still not including the important diphoton channel. The full LHC run 1 dataset is now being used, including 19.6/fb at 8 TeV.

In ZZ they get a very clear signal on the event plot

[Figure: CMS ZZ event plot]

Higgs mass from ZZ is 125.8 ± 0.5 (stat) ± 0.2 (syst) GeV.

The cross-section relative to the Standard Model is 0.91 ± 0.27.

ATLAS have also updated ZZ with 20.7/fb at 8 TeV to produce a similarly impressive plot.

[Figure: ATLAS ZZ event plot]

Higgs mass for ZZ from ATLAS is 124.3 ± 0.6 (stat) ± 0.4 (syst) GeV.

The cross-section relative to the Standard Model is 1.7 ± 0.5.

Unlike CMS, ATLAS have presented their diphoton results, giving a mass estimate of 126.8 ± 0.2 (stat) ± 0.7 (syst) GeV.

[Figure: ATLAS diphoton mass plot]

The diphoton cross-section relative to the Standard Model is 1.65 ± 0.24 (stat) ± 0.21 (syst).

Rumour puts the CMS diphoton excess at 1.0 ± 0.2, to be shown at Moriond QCD next week perhaps (via Jester on Twitter).

The excess over the Standard Model remains high, but its significance has not increased because the central value has gone down as more data has been added. When we first saw this excess a year ago we were excited that it might be real new physics, and we hoped that by this time we would have a truly significant effect. This has not happened. We still need to wait for CMS to show their diphoton results before we can draw any conclusions, but rumours are that their over-excess has fallen even more dramatically. This means that expectations of significant BSM effects from run 1 are now lower.
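To see why a central value this high still does not amount to a growing significance, here is a naive back-of-envelope check using the ATLAS diphoton numbers above. This is a sketch only: symmetric Gaussian errors added in quadrature, with correlations and the full likelihood ignored.

```python
from math import hypot

# Naive check: how far is the ATLAS diphoton signal strength above the
# Standard Model expectation of mu = 1? (Indicative only: symmetric
# Gaussian errors, stat and syst added in quadrature, no correlations.)

mu, stat, syst = 1.65, 0.24, 0.21
sigma = hypot(stat, syst)      # total uncertainty, roughly 0.32
pull = (mu - 1.0) / sigma      # excess over SM in standard deviations

print(f"total uncertainty ~ {sigma:.2f}, excess over SM ~ {pull:.1f} sigma")
# -> roughly 2 sigma: a high central value, but not a significant deviation
```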

CMS also gave us a plot of the excesses in the WW channel over the Standard Model with the Higgs at 125 GeV. In other words, this plot should only show excesses attributable to other Higgs-like particles. They said they are now doing this analysis for all the high-mass searches, which is a good move.

The WW cross-section relative to the Standard Model from CMS is 0.76 ± 0.21.

[Figure: CMS WW channel plot]

This shows that there are not yet any signs of higher mass Higgs particles as would be expected in Higgs multiplet models. If they exist then they must be quite well decoupled from the observed Higgs boson. The usual combined ZZ channel plot tells a similar story with no significant excesses beyond the known Higgs.

[Figure: CMS combined ZZ channel plot]

By the way, we are still waiting for the AMS-02 results, due out soon. They had hoped to reveal them yesterday at Moriond, but approval was not ready in time. The next opportunity could be the Moriond Cosmology conference next week.


CMS looking back

January 30, 2013

The CMS collaboration have kindly posted a pleasant video that reveals the moments when they “unblinded” their Higgs diphoton results within the collaboration in the run-up to the public discovery announcement in July.

I find it interesting to look back and see how these events relate to what was going on publicly on the blogs at the time. From the video we learn that the CMS collaboration were shown the first results on 15th June. The Higgs analysis group within CMS must have seen them at least a day or two earlier in order to prepare the plots for the talk. We can assume that ATLAS were seeing their results at about the same time. For two days the collaborations were able to walk around knowing things that the outside world didn't, until Peter Woit blew the lid with his leak about the new results. This was a little upsetting for them at first, as shown by the response from CMS blogger Michael Schmitt, who later calmed down a bit. One argument they gave was that they did not want the information to pass between the ATLAS and CMS collaborations because it would spoil their independence, but if the rumour could reach the outside world so quickly it is clear that it would not have stayed secret at CERN and the other institutions where 3000 excited physicists from each collaboration share the same social spaces.

The rumours were saying that the diphoton excess was at around 4 sigma, and the video shows that it was indeed around 4.3 sigma for CMS. In my own analysis the next day I estimated that this was based on about 3/fb of the data, which turns out to be exactly right for CMS, as seen in the video when the camera zooms in at 2:20. I also pointed out that when they added the full dataset the signal could easily go down, and in fact it did descend to 4.1 sigma, as seen in the next part of the video. I am not always right, but I was this time. Subsequently the New York Times reported an email from the spokesperson for ATLAS saying that people should not believe the blogs. Now we know that this was a euphemism for "please don't report what they are saying, because it is perfectly accurate and we were hoping to keep it as a surprise for the next conference".

Another point worth making here is that the collaborations like to make big statements about how they do their analyses blind. This is supposed to mean that they don't look at the results until they have fixed the parameters of the analysis, so that they cannot introduce any bias. From this video we can see what this really means in practice: they unblind the data as an early check, then they "re-blind" it while they adjust the analysis, then they unblind it again two weeks later with just 30% more data added. Come on guys, own up, this is not quite in the spirit of how a blind analysis is meant to work. Luckily the signal is so clear that it is indisputable in any case.

Getting more up to date, remember that CMS have not yet published the diphoton update with 13/fb at 8 TeV. Rumours revealed that this was because the excess had diminished. At the Edinburgh Higgs symposium some more details about the situation were given. The talks are not online, but Matt Strassler, who was there, has told us that the results have now been deemed correct. It may be understandable that when the results are not quite what they hoped for they will scrutinize them more carefully, but I find it wrong that they do not then publish the results once checked. It was clear that they intended to publish these plots at HCP2012 in November and would have done so if they had shown a bigger excess. By not releasing them now they are introducing a bias in what is publicly known, and theorists are left to draw conclusions based on the ATLAS results alone, which still show an over-excess in the diphoton channel. It will all be history once the final results with about 20/fb are released soon, but it would be helpful if they could keep this sort of biasing factor to a minimum.

The March meeting at Moriond is slated as the occasion for the final update, but only if they are happy with the results. Their analysis has been used and refined many times in the last two years, and by now they should be confident enough to say that they will publish regardless of the result they get. The data from the last proton run was available before Christmas, so by now the collaborations should have completed their final analysis. The fact that we don't have any rumours suggests that this time they have decided to confine knowledge of the results to the smaller Higgs groups within the collaborations, and they may actually succeed in keeping them secret until the conference.

Update: See Tommaso Dorigo’s response here


We need to find the Theory of Everything

January 27, 2013

Each week New Scientist runs a one-minute interview with a scientist, and last week it was Lisa Randall, who told us that we shouldn't be obsessed with finding a theory of everything. It is certainly true that there is a lot more to physics than this goal, but it is an important one, and I think more effort should be made to get the right people together to solve this problem now. It is highly unlikely that NS will ever feature me in their column, but there is nothing to stop me answering questions put to others, so here are the answers I would give to the questions asked of Lisa Randall, which also touch on the recent discovery of the Higgs(-very-like) boson.

Doesn’t every physicist dream of one neat theory of everything?

Most physicists work on completely different things but ever since Einstein’s attempts at a unified field theory (and probably well before) many physicists at the leading edge of theoretical physics have indeed had this dream. In recent years scientific goals have been dictated more by funding agencies who want realistic proposals for projects. They have also noticed that all previous hopes that we were close to a final theory have been dashed by further discoveries that were not foreseen at the time. So physicists have drifted away from such lofty dreams.

So is a theory of everything a myth?

No. Although the so-called final theory won't explain everything in physics, it is still the most important milestone we have to reach. Yes, it is a challenging journey and we don't know how far away the destination is, but it could be just round the corner. We must always try to keep moving in the right direction. Finding it is crucial to making observable predictions based on quantum aspects of gravity. Instead, people are trying to do quantum gravity phenomenology based on very incomplete theories, and it is just not working out.

But isn’t beautiful mathematics supposed to lead us to the truth?

Beauty and simplicity have played their part in the work of individual physicists such as Einstein and Dirac, but what really counts is consistency. By that I mean consistency with experiment and mathematical self-consistency. Gauge theories were used in the standard model not really because they embody the beauty of symmetry, but because gauge theories are the only renormalisable theories for the vector bosons that were seen to exist. It was only when the standard model was shown to be renormalisable that it became popular and replaced other approaches. Only renormalisable theories in particle physics can lead to finite calculations that predict the outcome of experiments, but there are still many renormalisable theories, and only consistency with experiment can complete the picture. Consistency is also the guide that takes us to theories beyond the standard model, such as string theory, which is needed for quantum gravity to be consistent at the perturbative level, and the holographic principle, which is needed for a consistent theory of black hole thermodynamics.

Is it a problem, then, that our best theories of particle physics and cosmology are so messy?

Relatively speaking they are not messy at all. A few short equations are enough to account for almost everything we can observe over an enormous range of scales, from particle physics to cosmology. The driving force now is the need to combine gravity with the other forces in a form that is consistent non-perturbatively and to explain the few observational facts that the standard models don't account for, such as dark matter and inflation. This may lead to a final theory that is more unified, but some aspects of physics may be set by historical accidents rather than determined by the final theory, in which case particle physics could always be just as messy and complicated as biology. Even aside from those aspects, the final theory itself is unlikely to be simple in the sense that you could describe it fully to a non-expert.

Did the discovery of the Higgs boson – the “missing ingredient” of particle physics – take you by surprise last July?

We knew that it would be discovered or ruled out by the end of 2012 in the worst case. In the end it was found a little sooner. This was partly because it was not quite at the hardest place to find in the mass range, which would have been around 118 GeV. Another factor was that the diphoton excess was about 70% bigger than expected. If it had been as predicted they would have required about three times as much data to get there from the diphoton channel alone, although the ZZ channel would have helped. This over-excess could be just the luck of the statistics or due to theoretical underestimates, but it could also be a sign of new physics beyond the standard model. Another factor that helped push them towards the finish line in June was that it became clear that a CMS+ATLAS combination was going to be sufficient for discovery. If they could not reach the 5-sigma goal for at least one of the individual experiments, they would have had to face the embarrassment of an unofficial discovery announced on this blog and elsewhere. This drove them to use the harder multivariate analysis methods and include everything that bolstered the diphoton channel, so that in the end they both got the discovery in July and not a few weeks later, when an official combination could have been prepared.
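To put a rough number on the "three times as much data" remark: for a background-dominated counting search the expected significance scales roughly as the signal strength times the square root of the luminosity, so the data needed for a fixed significance goes as one over the signal strength squared. A minimal sketch under that idealised assumption:

```python
# Idealised scaling for a background-dominated counting search:
# expected significance Z ~ S / sqrt(B) ~ mu * sqrt(L) (up to a constant),
# so the luminosity needed for a fixed target Z scales as 1 / mu^2.

def luminosity_ratio(mu_observed, mu_expected=1.0):
    """Fraction of data needed to reach the same expected significance
    if the signal strength is mu_observed instead of mu_expected."""
    return (mu_expected / mu_observed) ** 2

print(luminosity_ratio(1.7))   # ~0.35: roughly a third of the data
```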

Are you worried that the Higgs is the only discovery so far at the LHC?

It is a pity that nothing else has been found so far, because the discovery of any new particles beyond the standard model would immediately lead to a new blast of theoretical work that could take us up to the next scale. If nothing else is found at the LHC after all its future upgrades, it could be the end of accelerator-driven physics until someone invents a way of reaching much higher energies. However, negative results are not completely null. They have already ruled out whole classes of theories that could have been correct, and even if there is nothing else to be seen at the electroweak scale it will force us to some surprising conclusions. It could mean that physics is fine-tuned at the electroweak scale just as it is at the atomic scale. This would not be a popular outcome, but you can't argue with experiment, and accepting it would enable us to move forward. Further discoveries would then have to come from cosmology, where inflation and dark matter remain unexplained. If accelerators have had their day then other experiments that look to the skies will take over and physics will still progress, just not quite as fast as we had hoped.

What would an extra dimension look like?

An extra dimension would show up as the existence of heavy particles that are otherwise similar to known particles, plus perhaps even black holes and massive gravitons at the LHC. But the theory of large extra dimensions was always an outsider with just a few supporters. Theories with extra dimensions such as string theory probably only show these features at much higher energy scales that are inaccessible to any collider.

What if we don’t see one? Some argue that seeing nothing else at the LHC would be best, as it would motivate new ideas.

I think you are making that up. I never heard anyone say that finding nothing beyond the Higgs would be the best result. I did hear some people say that finding no Higgs would be the best result because it would have been so unexpected and would have forced us to find the alternative correct theory that would have been there. The truth of course is that this was a completely hypothetical situation. The reason we did not have a good alternative theory to the Higgs mechanism is because there isn’t one and the Higgs boson is in fact the correct answer.

Update: Motl has a followup with similar views and some additional points here


Christmas Rumour

December 25, 2012

[Boxing Day Update: Indications over at NEW are that this rumour is not being backed up by other ATLAS sources. Chances are it will melt away and we will never know its origins. Update: of course it could also be that the analysis has not yet been communicated to the whole team.]

A rumour has surfaced in the comments at Not Even Wrong that ATLAS have a 5 sigma signal (local significance?) in like-sign dimuons at 105 GeV. This plot shows the relevant events from an earlier analysis with 2011 data where a small excess can already be seen.

[Figure: like-sign dimuon events from the 2011 analysis]

First thoughts are of a doubly charged Higgs boson as predicted in Higgs triplet models, with the potential to also explain the digamma over-excess in the Higgs decays. However, the signal is much weaker than would be expected for a doubly charged Higgs, because CMS and ATLAS have already set lower mass limits of around 300 to 400 GeV for an H++. In a comment here yesterday on the digamma excess, Frank Close pointed out that if a doubly charged Higgs is responsible for the digamma excess it should also affect the Bs to dimuon decay (see e.g. Resonaances), which is disappointingly in line with the Standard Model.

Of course the rumour could be incorrect or based on an analysis too preliminary to hold water, but if it pans out it will certainly pose an intriguing puzzle. A particle that decays to two like-sign muons must have lepton number two as well as charge two, unless the decay breaks lepton number conservation or there are missing neutrinos. It could be a spin-two particle rather than a scalar. Working out what best fits the other observations is not an exercise that can be done in the head, but it will be interesting to see what other first thoughts come out. It is also possible that this could be related to signals in multi-lepton channels that have been seen in the past (see e.g. Motl at TRF). Until we get an official report, perhaps at Moriond 2013, this should not be taken too seriously. Some rumours evaporate during internal review and never see the light of day.

Merry Christmas.


LHC end of proton-run Update

December 11, 2012

This week marks the end of proton physics runs at the LHC. The last days are dedicated to machine development, and in particular to test runs at 25 ns bunch spacing. This shot shows the scrubbing runs, during which they filled the collider to its full capacity for the first time. Record intensities of 270 trillion protons per beam were reached, with 2748 bunches injected in trains of 288 bunches at 25 ns spacing. This doubles the intensity numbers used in the proton physics runs this year, but it comes at a cost. In the pictures you can see how fast the beam intensity drops due to losses from the e-cloud effect. The purpose of the scrubbing runs this weekend was to clean out the e-cloud and improve beam lifetime. After nine runs the effect was significantly reduced but not fully removed. During the last few remaining days we may see some runs bringing 25 ns beams into collision, but perhaps not at these intensities.

[Figure: beam intensities during the 25 ns scrubbing fills]

The point of these tests is to work out whether and how the next runs can work at 25 ns spacing rather than 50 ns. That will happen when the LHC restarts at 13 TeV in 2015 after the long shutdown. We still have some heavy-ion runs before the shutdown, but otherwise it is going to be a long wait for new data. During the LHCC meeting last week Steve Myers gave an overview of the main considerations for running at 25 ns vs 50 ns. You can watch the video from here. Myers revealed that other tests have shown that they can increase the brightness of the beams from the injectors by 50% using new optics. In addition, the beta* in the next runs will come down to 0.5 m or perhaps even 0.4 m, so with all other things being equal luminosities could be three times as high. The problem is that pile-up with 50 ns spacing is already near the limit of what the experiments can take. Switching to 25 ns would halve the pile-up, making the situation much more tolerable. The other alternative would be to use luminosity levelling to artificially keep the luminosity down during the first part of each run.
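For readers wondering where the factor of two comes from: the mean pile-up per crossing is roughly the luminosity times the inelastic cross-section divided by the bunch crossing rate, so at fixed luminosity twice as many colliding bunches means half the pile-up. A rough sketch with approximate public numbers; the luminosity and 50 ns bunch count below are illustrative assumptions, not quoted machine parameters.

```python
# Mean pile-up per crossing: mu = L * sigma_inel / (n_bunches * f_rev).
# At fixed luminosity, doubling the number of colliding bunches
# (25 ns spacing instead of 50 ns) halves the pile-up.

F_REV = 11245.0        # LHC revolution frequency [Hz]
SIGMA_INEL = 70e-27    # inelastic pp cross-section, roughly 70 mb, in cm^2

def mean_pileup(lumi_cm2_s, n_bunches):
    return lumi_cm2_s * SIGMA_INEL / (n_bunches * F_REV)

L = 7e33               # illustrative luminosity in cm^-2 s^-1 (assumed)
print(mean_pileup(L, 1374))   # ~32 interactions/crossing at a 50 ns-like bunch count
print(mean_pileup(L, 2748))   # ~16 at 25 ns: same luminosity, half the pile-up
```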

This means the pressure to run at 25 ns is high, since it will make a big difference to the physics reach, but the technical issues get very troublesome. As well as the e-cloud problems, which could mean losing maximum luminosity far too fast, they also have to worry about excess heating, which has already been a problem in this year's run, forcing them to wait for things to cool down before refills. Another big worry is that UFO events become much more frequent at 25 ns, so even if they can maintain the luminosity they may keep losing the beams through unplanned dumps. Switching between 25 ns and 50 ns can lose a week of runs, so they must decide which setting to use from the start of 2015 and try to stick to it. This makes the present 25 ns tests very important. They had been planned for a few weeks earlier to allow plenty of time, but some injector problems set them back, as explained by Myers in his talk. Hopefully they will get all the data they need this week.

Meanwhile, this week is also the occasion of the annual CERN Council meetings. Remember that last year this was the event where they announced the first signs of an excess at 125 GeV in the Higgs searches. There are rumours coming in via Twitter of new updates from CMS on Wednesday and ATLAS on Thursday (see the calendar comments). There is nothing yet scheduled in Indico that I can find, apart from a status update on the 13th (not physics) and the CCM open session on Thursday. We are still waiting for reports of the analyses using 12/fb at 8 TeV that were missing this year at the HCP meeting in Tokyo, especially the diphoton channel. In anticipation, here is the latest CMS combo plot, which has been around for a few weeks but has not been much discussed.

[Figure: CMS combined plot with 12/fb at 8 TeV]

The peak at 125 GeV is clear, but what about the excesses that continue up to 200 GeV? No doubt these are due to systematic errors and fluctuations that will go away, but any new updates will be keenly awaited, just in case.

The LHC has now delivered 23/fb to CMS and ATLAS at 8 TeV of which about 20/fb will be usable data. The complete analysis could be ready in time for Moriond in March with the diphoton over-excess being the most likely centre of attention.

Update: Indications are that the CMS and ATLAS updates were cancelled.

Update: Peter Woit thinks that ATLAS will give new diphoton and ZZ results at the LHC status meeting tomorrow. Meetings with this title usually indicate technical updates on the running of the collider and its experiments, not new physics results. It looks a lot like they are trying to spring a surprise by stealth :-) A presentation later at KITP confirms that they are planning to talk. It still seems that CMS are not ready to give their diphoton update but they do have a status update.


Higgs at HCP2012

November 14, 2012

This morning ATLAS and CMS reported new Higgs results at the Hadron Collider Physics conference in Kyoto. Only a subset of the available decay channels have been updated; in particular, the crucial diphoton channels have not been updated by either experiment. This may be due to increased difficulties in doing the analysis, with possible issues over systematic errors and mass/energy calibration. Obviously the systematics become more significant as the statistical errors diminish. The earlier diphoton update at 8 TeV from ATLAS already showed some signs of inconsistency, with the excess peaking at around 127.5 GeV compared to lower estimates of around 125.5 GeV from CMS. We will have to be patient while they sort it out.

However, there have also been some sensational new updates. Both CMS and ATLAS have provided new results for the ditau decay mode, showing an excess of 0.72 ± 0.52 times the Standard Model at 125 GeV in CMS and 0.7 ± 0.7 in ATLAS. A crude combination gives 0.71 ± 0.42 times the Standard Model. This agreement with the Standard Model, using about 17/fb in each detector, overturns the earlier results from CMS in July, where the signal seemed a little on the low side.
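For anyone wondering how such a crude combination is done, it is just an inverse-variance weighted average of the two signal strengths, ignoring correlations and asymmetric errors. A minimal sketch using the two ditau values above:

```python
from math import sqrt

def combine(measurements):
    """Inverse-variance weighted average of (value, error) pairs.
    Crude: assumes Gaussian, uncorrelated, symmetric uncertainties."""
    weights = [1.0 / err**2 for _, err in measurements]
    mean = sum(w * val for w, (val, _) in zip(weights, measurements)) / sum(weights)
    error = 1.0 / sqrt(sum(weights))
    return mean, error

# Ditau signal strengths at 125 GeV reported above (CMS, ATLAS)
print(combine([(0.72, 0.52), (0.7, 0.7)]))   # ~ (0.71, 0.42)
```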

In the ZZ channel CMS have shown a useful update that extends the mass range up to 1 TeV, with no signs of any excess anywhere other than at the known 125 GeV. Although this search only applies directly to a standard-model-like Higgs, it is also a bit of a blow for models with a second ordinary Higgs in this mass range.

