Christmas Rumour

December 25, 2012

[Boxing Day Update: Indications over at NEW are that this rumour is not being backed up by other ATLAS sources. Chances are it will melt away and we will never know its origins. Update: of course it could also be that the analysis has not been communicated to the whole team yet.]

A rumour has surfaced in the comments at Not Even Wrong that ATLAS have a 5 sigma signal (local significance?) in like-sign dimuons at 105 GeV. This plot shows the relevant events from an earlier analysis of the 2011 data, where a small excess can already be seen.

dimuon

First thoughts are of a doubly charged Higgs boson, as predicted in Higgs triplet models, with the potential to also explain the digamma over-excess in the Higgs decays. However, the signal is much weaker than expected for a doubly charged Higgs, because CMS and ATLAS have already set lower limits of around 300 – 400 GeV on the H++ mass. In a comment here yesterday Frank Close pointed out that if a doubly charged Higgs is responsible for the digamma excess, it should also affect the Bs to dimuon decay (see e.g. Resonaances), which is disappointingly in line with the Standard Model.

Of course the rumour could be incorrect or based on an analysis too preliminary to hold water, but if it pans out it will certainly pose an intriguing puzzle. A particle that decays to two like-sign muons must have lepton number two as well as charge two, unless the decay breaks lepton number conservation or there are missing neutrinos. It could be a spin-two particle rather than a scalar. Working out what best fits the other observations is not an exercise that can be done in one's head, but it will be interesting to see what other first thoughts come out. It is also possible that this could be related to signals in multi-lepton channels that have been seen in the past (see e.g. Motl at TRF). Until we get an official report, perhaps at Moriond 2013, this should not be taken too seriously. Some rumours evaporate during internal review and never see the light of day.

Merry Christmas.


End of Year Higgs Roundup

December 20, 2012

It has been a sensational year for particle physics, with the discovery of the "Higgs-like" boson making front-page news and cash-stuffed awards going to some of the deserving scientists at CERN who made it possible. Congratulations to them, and sympathies to the many people at CERN who were overlooked, from the likes of Steve Myers and John Ellis down to the humble post-grad who was pictured falling asleep at the July announcement after having queued all night for his place.

Reuters reports that they will (maybe) finally be able to remove the "-like" at Moriond in March when the analysis of the full dataset is presented. With alternative spin and parity possibilities already ruled out with low to medium confidence, many of us have already reached that conclusion. Serious doubters will wait for the self-couplings to be measured in twenty years' time before conceding.

In the same Reuters report it is claimed that CERN scientists have "dismissed suggestions circulating widely on blogs and even in some science journals that instead of just one type of the elementary particle they might have found a pair (of particles)". Of course the truth is that every independent blog that has been following the developments has debunked the two-Higgs claim. This is the kind of no-thanks we have become used to for our efforts. Merry Christmas (Update: see comments). To add a little more credence, here is a plot made by merging two plots from the ATLAS conference note on the ZZ analysis.

ZZobsvs125

This shows the observed best-fit signal for Higgs to ZZ decays (black dotted line) overlaid on the simulated one-sigma bands for a 125 GeV Higgs boson. It makes clear that the observed signal is consistent everywhere with a 125 GeV Higgs to within about one-and-a-bit sigma. It is only when they try to do a fit to this data that they get a discrepancy with other observations. Obviously the right conclusion is that it is too soon to do the fit, because the error bands below 125 GeV are still widening too rapidly.

All the channels are now giving signals close to the Standard Model prediction for a Higgs around 125 to 126 GeV. Most of the new data is loaded into the Unofficial Higgs Combination Applet so you can roll your own, but here are the combined signals by channel at 126 GeV on a scale where 1.0 is the Standard Model cross-section, with statistical errors only:

bb : 1.24 ± 0.40
ττ : 0.40 ± 0.40
WW : 0.63 ± 0.21
ZZ : 0.99 ± 0.16
γγ : 1.77 ± 0.24

All of these are close enough to the Standard Model signal except perhaps the γγ, where the discrepancy appears to be about 3.2 sigma, but with systematic errors included this will drop to about 2.5 sigma. CMS have not yet updated this channel and rumours are that they see less of an excess. It seems daft that they have not released the results yet, especially after the ATLAS delay turned out to be a fuss over nothing. Show us what you've got, please.
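If you want to play with the numbers yourself, here is a minimal sketch of a naive combination of the figures above, assuming uncorrelated Gaussian statistics-only errors (the real combinations use full likelihoods, so treat this as illustrative only):

```python
# Minimal sketch (not the official procedure): treat the per-channel signal
# strengths above as independent Gaussian measurements with statistics-only
# errors, compute each channel's pull from the Standard Model (mu = 1),
# and form a naive inverse-variance weighted combination.
channels = {
    "bb":          (1.24, 0.40),
    "tautau":      (0.40, 0.40),
    "WW":          (0.63, 0.21),
    "ZZ":          (0.99, 0.16),
    "gamma gamma": (1.77, 0.24),
}

for name, (mu, err) in channels.items():
    print(f"{name:12s} {(mu - 1.0) / err:+.1f} sigma from SM")   # gamma gamma -> +3.2

weights = {name: 1.0 / err ** 2 for name, (_, err) in channels.items()}
mu_comb = sum(w * channels[name][0] for name, w in weights.items()) / sum(weights.values())
err_comb = sum(weights.values()) ** -0.5
print(f"naive combination: {mu_comb:.2f} +- {err_comb:.2f}")
```

The γγ pull comes out at about +3.2 sigma, matching the statistics-only figure quoted above.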

Here too is the unofficial global combination of high-resolution channels (ZZ, γγ) showing an impressive 9.4 sigma signal at 125.5 GeV and just noise everywhere else.

HiggshiresDec2012

Where does this leave us? Everything looks standard-model-like except the diphoton over-excess, which may go away. If it stays, there is a good chance that it will be explained by new particles such as a charged Higgs or vector-like fermions waiting to show up in other searches. If it goes, we have the possibility of split SUSY or perhaps just the Standard Model at the LHC scale. Many models have been swept away, leaving us to contemplate the implications of an unnatural little mass hierarchy. In one year our view of particle physics has moved on a long way. It's just not clear in which direction yet.


LHC end of proton-run Update

December 11, 2012

This week marks the end of proton physics runs at the LHC. The last days are dedicated to machine development, and in particular test runs at 25 ns bunch spacing. This shot shows the scrubbing runs during which they filled the collider to its full capacity for the first time. Record intensities of 270 trillion protons per beam were reached, with 2748 bunches injected in trains of 288 bunches at 25 ns spacing. This doubles the intensity numbers used in the proton physics runs this year, but it comes at a cost. In the pictures you can see how fast the beam intensity drops due to losses from the e-cloud effect. The purpose of the scrubbing runs this weekend was to condition the beam pipe, reducing the e-cloud and improving beam lifetime. After nine runs the effect was significantly reduced but not fully removed. During the last few remaining days we may see some runs bringing 25 ns beams into collision, but perhaps not at these intensities.

fills25ns

The point of these tests is to work out whether and how the next runs can work at 25 ns spacing rather than 50 ns. That will happen when the LHC restarts at 13 TeV in 2015 after the long shutdown. We still have some heavy-ion runs before the shutdown, but otherwise it is going to be a long wait for new data. During the LHCC meeting last week Steve Myers gave an overview of the main considerations for running at 25 ns vs 50 ns. You can watch the video from here. Myers revealed that other tests had shown that they can increase the brightness of the beams from the injectors by 50% using new optics. In addition, the beta* in the next runs will come down to 0.5 m or perhaps even 0.4 m, so with all other things being equal luminosities could be three times as high. The problem is that pile-up with 50 ns spacing is already near the limit of what the experiments can take. Switching to 25 ns would halve the pile-up, making the situation much more tolerable. The other alternative would be to use luminosity levelling to artificially keep the luminosity down during the first part of any run.
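To see why the halving works, here is a rough back-of-envelope sketch (the numbers below are round illustrative values I have assumed, not official machine parameters): the average pile-up per crossing is μ = Lσ_inel/(n_b f_rev), so doubling the number of bunches at the same luminosity halves μ.

```python
# Back-of-envelope pile-up estimate (illustrative numbers only, not official
# machine parameters): mu = L * sigma_inel / (n_b * f_rev).
SIGMA_INEL = 7.5e-26   # cm^2, roughly the inelastic pp cross-section at ~8 TeV
F_REV = 11245.0        # Hz, LHC revolution frequency

def pileup(luminosity, n_bunches):
    """Average number of pp interactions per bunch crossing."""
    return luminosity * SIGMA_INEL / (n_bunches * F_REV)

L_PEAK = 7.0e33        # cm^-2 s^-1, a typical 2012 peak luminosity
print("50 ns, 1374 bunches:", round(pileup(L_PEAK, 1374), 1))   # ~34 per crossing
print("25 ns, 2748 bunches:", round(pileup(L_PEAK, 2748), 1))   # ~17 per crossing
```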

This means the pressure to run at 25 ns is high: it will make a big difference to the physics reach, but the technical issues are very troublesome. As well as the e-cloud problems, which could mean losing maximum luminosity far too fast, they also have to worry about excess heating, which has already been a problem in this year's run, forcing them to wait for things to cool down before refills. Another big worry is that UFO events become much more frequent at 25 ns, so even if they can maintain the luminosity they may keep losing the beams through unplanned dumps. Switching between 25 ns and 50 ns can lose a week of runs, so they must decide which setting to use from the start of 2015 and try to stick to it. This makes the present 25 ns tests very important. They had been planned for a few weeks ago to allow plenty of time, but some injector problems set them back, as explained by Myers in his talk. Hopefully they will get all the data they need this week.

Meanwhile this week is also the occasion of the annual CERN Council meetings. Remember that last year this was the event where they announced the first signs of an excess at 125 GeV in the Higgs searches. There are rumours coming in via Twitter of new updates from CMS on Wednesday and ATLAS on Thursday (see calendar comments). There is nothing yet scheduled in Indico that I can find, apart from a status update on the 13th (not physics) and the CCM open session on Thursday. We are still waiting for reports of the analyses using 12/fb at 8 TeV that were missing this year at the HCP meeting in Kyoto, especially the diphoton channel. In anticipation, here is the latest CMS combo plot, which has been around for a few weeks but has not been much discussed.

CMS12fb

The peak at 125 GeV is clear, but what about the excesses that continue up to 200 GeV? No doubt these are due to systematic errors and fluctuations that will go away, but any new updates will be keenly awaited, just in case.

The LHC has now delivered 23/fb to CMS and ATLAS at 8 TeV of which about 20/fb will be usable data. The complete analysis could be ready in time for Moriond in March with the diphoton over-excess being the most likely centre of attention.

Update: Indications are that the CMS and ATLAS updates were cancelled.

Update: Peter Woit thinks that ATLAS will give new diphoton and ZZ results at the LHC status meeting tomorrow. Meetings with this title usually indicate technical updates on the running of the collider and its experiments, not new physics results. It looks a lot like they are trying to spring a surprise by stealth :-) A presentation later at KITP confirms that they are planning to talk. It still seems that CMS are not ready to give their diphoton update but they do have a status update.


Colliding Particles

December 7, 2012

You may be familiar with "Colliding Particles", Mike Paterson's series of short films about the search for the Higgs boson, covered from some unusual angles. After a break he is back adding some more. Number 10 is about the role played by bloggers (the good parts).

collidingparticles

If you have not already seen them take a look through the earlier ones. There are lots of very good explanations.


FQXi results

December 5, 2012

Congratulations to the winners of the FQXi essay contest "Questioning the Foundations". The results show an impressive and diverse range of ideas about common assumptions that need to be questioned to make progress with foundational physics. This was the fourth contest of its type run by the FQXi Institute. These contests provide a unique opportunity for professional and independent physicists to cross words in a public forum about this kind of subject. I know there will always be criticisms of the results and the imperfect voting system, but the contest is still a very worthy exercise. This year there were 272 entries, significantly more than in previous contests, so the top 36 from the community voting who made the final cut should be extra proud of their success, even if they were not among the final winners. This year I narrowly missed out on joining them, but there were many other good essays that did not make it either, so there is no need to feel left out. Taking part and having a chance to air our views on physics is much more important than winning. One last word of congratulations goes to the Perimeter Institute, since the vast majority of the winners had strong connections with the centre, such as being past or present researchers there. The Perimeter Institute is well known for its research on foundational issues, so their success here is not surprising. They should also be applauded for a culture which seems to encourage taking part when many professional scientists from other centres are too shy to try it.

The winning essay, entitled "The paradigm of kinematics and dynamics must yield to causal structure", was written by Perimeter Institute theorist Robert Spekkens. The idea of questioning the separation of kinematics and dynamics is very original. I never thought of it in this context myself, even though I had previously made a similar point in a physics.stackexchange answer about a year ago. Spekkens goes on to link this to causality and the use of posets (partially ordered sets) in models of fundamental physics. This aspect of his essay is a perfect example of what my essay on causality is against. In my view the concept of temporal causality (every effect has a cause preceding it in time) is not fundamental at all. It is linked to the arrow of time, which emerges as an aspect of thermodynamics. It is not written into the laws of physics, which as we know them are perfectly symmetrical under time reversal (or more precisely CPT inversion). I therefore question why it needs to be used in approaches to understanding the fundamental laws of physics. My point did not go down well with other contestants, and Spekkens was not the only prize winner who advocated the importance of causality as something to preserve while throwing out other assumptions. Of course this just makes me more pleased that I chose this point to make; winning is not what matters.

Aside from that, there is something else about the contest that is of special interest on this blog. According to my count, exactly 50 out of the 295 authors (17%) who wrote essays have also submitted papers to the viXra archive. The number who have submitted papers to the arXiv is 95 (32%). This provides a rare opportunity to do a comparative statistical analysis of the range of quality of papers submitted to these repositories. By the way, 11 of the authors can be found in both arXiv and viXra (including myself), leaving 161 authors (54%) who have not used either. The authors who use arXiv are mostly professional physicists, because the endorsement system used by Cornell to filter arXiv submissions makes it difficult, but not impossible, for most independent scientists to get approval, so we can conclude that about a third of the FQXi contest entrants are professionals. However, I am more interested in what can be learnt about viXra authors.
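(The overlap arithmetic above is just inclusion–exclusion; as a quick check of the percentages, here is a trivial sketch using the counts quoted.)

```python
# Quick inclusion-exclusion check of the author counts quoted above.
total, vixra, arxiv, both = 295, 50, 95, 11

either = vixra + arxiv - both          # authors on at least one repository
neither = total - either               # authors on neither
print(f"viXra:   {vixra} ({vixra / total:.1%})")
print(f"arXiv:   {arxiv} ({arxiv / total:.1%})")
print(f"neither: {neither} ({neither / total:.1%})")
```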

I started viXra in 2009 to help scientists who have been excluded from the arXiv, either because they do not know anyone who can act as their endorser or because the arXiv administrators have specifically excluded them. Many people at the time said that viXra would only support crackpots, and this opinion persists in many places. When someone wrote an entry for viXra on Wikipedia, some administrators actively campaigned (unsuccessfully) to have it deleted, calling viXra a "crank magnet" and concluding that it had no scientific value. Last month the wave of censorship even reached Google, who suddenly removed all viXra entries from Google Scholar. We only had about 3% of our hits coming from there, so it was not such a great loss, but it leaves us with no way of tracking citations of viXra papers, which is a great disservice. This development reflects the opinions of many professional scientists who have said that viXra at best provides no value to science and only serves to keep crackpots in one safe place. Some are even less charitable and believe that it only promotes bad research and is harmful to science. Are they right?

When viXra was launched I said that it would also serve as an experiment to see if arXiv's moderation policy was excluding some good science. Nobody should be surprised that there is a lot of bad-quality research on viXra, because it does not have any filtering and makes no claim to endorse its individual contents (personally I am of the opinion that even bad research can have value as a creative work and may even contain hidden gems of knowledge), but does it nevertheless have work of high value that would otherwise be lost? A recent paper by Lelk and Devine, submitted to both arXiv and viXra, tried to carry out a quantitative assessment of viXra in comparison to arXiv. It found that 15% of articles on viXra were published in peer-reviewed journals (based on a very small sample). This may sound low, but you should take into account that many independent scientists are less interested in journal publications because they do not need to produce a CV. In any case, 15% of 4000 papers is a non-negligible count if you do think this is a good measure of value.

How else, then, can the value of viXra be assessed if the papers are not being rated via peer review? One answer is to use the ratings of its authors as provided by the FQXi contest. Essays in the contest were rated using marks from the authors themselves. This is not a perfect system by any means. There were essays that were placed either much lower or much higher in the results than they deserved. Nevertheless, the overall ranking is statistically a good measure of an essay's quality in the terms demanded by the contest rules, with mostly good papers ending up at the top and bad ones at the bottom. It can therefore be used to collectively analyse the range of ability of the authors using either arXiv or viXra.

Let's start with arXiv, whose authors have been endorsed and moderated by its administrators. Given such filtering it is easy to predict that they should do well in the contest. Here is a graph of their placings counted in ten bins of about 29.5 authors each. The lowest-rated essays are in bin 1 on the left and the highest in bin 10 on the right.

FQXiarXiv

As expected the majority of arXiv authors have made it into the top bins. 87 were ranked in the top half and only 17 in the lower half.
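For anyone who wants to reproduce this kind of plot, here is a minimal sketch of the decile binning, using made-up placings for illustration (the real input was the contest's community-vote ranking of all 295 authors, which is not reproduced here):

```python
# Minimal sketch of the decile binning used for the plots, with made-up
# placings (the real input was the community-vote ranking of all 295 authors).
TOTAL_AUTHORS = 295
N_BINS = 10

def decile_bins(placings, total=TOTAL_AUTHORS, n_bins=N_BINS):
    """Count placings into n_bins equal bins: bin 1 = lowest rated, bin 10 = highest."""
    counts = [0] * n_bins
    for rank in placings:                     # rank 1 = top-rated essay
        frac = (total - rank) / total         # close to 1 for top essays
        counts[min(int(frac * n_bins), n_bins - 1)] += 1
    return counts                             # index 0 -> bin 1, index 9 -> bin 10

# Hypothetical placings for a handful of authors:
print(decile_bins([3, 12, 40, 150, 260, 290]))
```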

How would you expect the distribution to look for viXra authors? If we are indeed all crackpots, as many people suggest, then the distribution would be the opposite, with most authors doing badly and hardly any making the top bins dominated by the arXiv authors. Here is the actual result.

FQXiviXra

In fact the distribution is essentially flat within the statistical error bars (not shown), and there are plenty of viXra authors who did well. Indeed, six viXra authors made the final cut.

What should be concluded from this? If someone is identified to you as an author who submits papers to viXra, how should you judge their status? Is it justified to assume that they must be a crank with no useful knowledge because they apparently can't get their research into arXiv? The answer, according to this analysis, is that you should judge them the same way you would judge a typical author who has submitted an essay to the FQXi contest. They may not be good, but they could be of a similar standard to the authors who submit papers to arXiv. I don't suppose this will change the opinions of our critics, but it should. Google are happy to index FQXi essays on Google Scholar, so why should they refuse to index viXra papers?

By the way, of all the essays that were written by viXra or arXiv authors, the one that got the lowest rating was an essay by a Cornell professor who has four papers on arXiv. I'll let you judge.


The Particle at the End of the Universe: Review

December 3, 2012

Somebody kindly offered me a review copy of Sean's book in the comments, so to encourage others to do likewise I shall offer my opinion of "The Particle at the End of the Universe: How the Hunt for the Higgs Boson Leads Us to the Edge of a New World" by Sean Carroll.

The book I got was actually a virtual one which has the advantage of being searchable so the first thing I did was check to see if I got a mention. Apparently this blog (or rather its comments) turns out to be the place where the 125 GeV rumour first arrived on the internet. It is not what I would have chosen to be mentioned for but it’s better than nothing I suppose.

After a year of Higgs madness that swamped the media there will be plenty of people wanting to read more about it, so Sean has done well to get this book out in time for Xmas with the full story (so far). There is an obvious list of questions about the boson that people would like to ask, and Sean conveniently answers them all, chapter by chapter. Sean has been answering these kinds of questions for a long time, on blogs and on Usenet before blogs existed, so he is pretty good at it, but explaining what the Higgs boson is and does in general terms is notoriously difficult. Sean does his best with a mixture of analogies and more direct explanations. Whether he succeeds would have to be judged by someone who does not already know the answers, but I think it would be hard to do much better than this book.

There were two chapters that I found especially interesting. The first was about the thorny question of the Nobel prize for the Higgs. I compiled a list of contributions to the Higgs prediction a while back, but Sean goes one better by fleshing it out with the full story. It is very balanced and will be essential reading for the Nobel committee next year, but they will have to find their own solution to who gets the prize.

There is another side to the story of the Higgs discovery that sets it apart from previous discoveries in physics: it happened in the age of blogs and social media. Sean is well placed to talk about the impact this had, and his chapter about it is the second one I liked a lot.

So overall it is a very good book. Enjoy.

