Peer Review 2.0

November 27, 2011

Peer review is an absolute necessity for recognizing good science and rejecting the false, but the traditional method of journal-based peer review is not keeping up with modern needs. The more prestigious journals are concerned mostly with a paper’s potential to enhance their impact factor. With so many papers to choose from they can happily reject many, not because they are wrong but because they are not sufficiently mainstream to attract quick citations. When they do accept a paper they place the final version behind a paywall and charge the taxpayers who funded the work $30 to read it. Is this right?

Different areas of science have different needs and the resilience of journal-based peer review can in part be attributed to its flexibility. The needs of maths and physics surpass what the journals offer, and as a result the peer review process has been largely replaced by internal reviews and submission to open archives. Where work is more theoretical and speculative the journals do little to decide its validity. That is determined by citations, open discussions and further research leading eventually to experimental tests (we hope). But even here the journals have not disappeared. They remain because students and postdocs need the official stamp of approval that the journal offers in order to move to their next job. Can this role be replaced?

The role of open discussion on the web is surprisingly controversial. In a recent post I queried a response to a question put to Brian Cox and Jeff Forshaw in the Guardian. They were basically saying that blogging about science that is not yet peer-reviewed undermines the system. A little later there was a similar article in the Guardian itself in which astronomer Sarah Kendrew defends blogging. But Cox and Forshaw are far from isolated in their opinion. A link from that article leads to an interesting story about a question in a course on “Responsible Conduct of Research”. The question was as follows:

A good alternative to the current peer review process would be web logs (BLOGS) where papers would be posted and reviewed by those who have an interest in the work, true or false?

The correct answer according to the course is false. Lose a point if you thought otherwise. Well, it is indeed the case that blogs alone cannot replace the current peer review system, but they are becoming increasingly important in discussing and judging some questions. Could it be possible to construct a system of peer review based on open web-based appraisals that would replace the journals? Nearly a year and a half ago I asked this question and suggested that a system based on something like Stack Exchange might be possible. It would not be easy, and one thing is clear: it would have to be backed by people with more clout and credibility than I have.

Happily some people who do have that kind of clout are now starting to think of the same idea. In particular Tim Gowers has been asking similar questions about peer review in mathematics (see here and here). As we have seen above, such a system is likely to be highly controversial as well as difficult to put together effectively, but at least it is starting to be discussed by people who matter. Mathematics is an area where it might work most easily because correctness in mathematics is very cut-and-dried. This is one reason why MathOverflow has been so much more successful than Physics Stack Exchange. But as I said earlier, journal-based peer review holds its place because it is so flexible. To replace it we need a web-based peer-review system that can work across all disciplines.


Where does Higgs fit best?

November 21, 2011

When I looked at this picture of Easter Island and matched it to a recent picture of Peter Higgs the best fit was the first statue, but where does the Higgs Boson fit best on the search plots from the LHC?

It may be a little late now to try to analyse the latest public data from the LHC given that the collaborations themselves are now looking at three times the integrated luminosity, but Tommaso Dorigo is claiming that the summer data best fits a Higgs boson at 119 GeV and Peter Woit is pressing the case for no Higgs at all. I have my doubts about both claims, so how can we see what really fits best?

To answer this we first have to think about what the familiar Brazil-band plots mean, such as this one showing the recent Higgs combination for the summer data from the LHC.

If you look at the 140 GeV point you will see that the observed CLs line is crossing the red line. The naive interpretation is that the probability for no Higgs boson at this mass is 0.95 so it is ruled out at the 95% confidence level. However, this is wrong. Such a probability can only be calculated when we plug in our prior probabilistic beliefs for the existence or not of a Higgs boson at that mass. The correct interpretation of the plot is that if there were a Standard Model Higgs boson at 140 GeV then the probability of getting a stronger signal than the one seen would be 0.95. This is a very different statement.
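Put symbolically, with $\hat{\mu}$ standing for the measured signal strength (my notation, not anything on the plot), the two readings are:

$$\text{naive (wrong):}\qquad P(\text{no SM Higgs at } 140\ \mathrm{GeV} \mid \text{data}) = 0.95$$

$$\text{correct:}\qquad P(\hat{\mu} \ge \hat{\mu}_{\text{obs}} \mid \text{SM Higgs at } 140\ \mathrm{GeV}) = 0.95$$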

Looking at the plot again we see that there is also a nearly three sigma excess at the 140 GeV point. We tend to discount it because of the exclusion, but again this is the wrong thing to do. The excess tells us that if there were NOT a Higgs boson (SM or otherwise) at this point then the probability of getting a weaker signal than the one seen would be roughly 99%. So actually the signal indicating a Higgs boson at 140 GeV is five times stronger than the one tending to exclude it. The symmetry between the signal and no-signal possibilities is best seen on this signal plot that uses the same information differently.

If we were being Bayesian, our prior probability for no Higgs at this point would probably be higher than the probability that one exists because there should be more places where it isn’t than where it is. If we favoured a light Higgs mass for theoretical reasons and discounted non-standard models we might assign a probability of 0.8 to no Higgs boson at around 140 GeV and 0.2 to a SM Higgs at 140 GeV. In this case we would look at the 140 GeV point on the plot and come down slightly in favour of the Higgs boson at that mass.
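As a back-of-the-envelope check on that conclusion, here is a minimal sketch in Python of the Bayesian update, using the tail probabilities quoted above (roughly 0.05 if a 140 GeV Higgs exists, 0.01 if it does not) as crude stand-ins for the likelihoods. The numbers and the shortcut are illustrative only, not an official statistical procedure.

```python
# Crude Bayesian update for "is there an SM Higgs at ~140 GeV?" using the
# illustrative numbers in the text.
prior_higgs, prior_no_higgs = 0.2, 0.8

# Tail probabilities read off the plots, used here as rough likelihoods:
# a signal this weak or weaker if the Higgs is there   -> ~0.05
# a signal this strong or stronger if it is not there  -> ~0.01
like_higgs, like_no_higgs = 0.05, 0.01

posterior_odds = (prior_higgs * like_higgs) / (prior_no_higgs * like_no_higgs)
print("posterior odds for a 140 GeV Higgs: %.2f" % posterior_odds)  # about 1.25
```

Odds of about 1.25 to 1 is just the “slightly in favour” conclusion stated above.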

However, the plot contains much more information because it covers the whole mass range where a Standard Model Higgs might be. We can compare the probabilities for a Higgs boson at any mass in the range and see which one is favoured. For this we need to use our prior beliefs for where the Higgs might be over the whole range. For simplicity let’s just assume that we believe in a single Standard Model Higgs boson and we favour equally each of the mass points where they plot a square on the graph. To apply this we need to know the width of the signal that a Higgs boson at a given mass would produce on the signal plot. The underlying decay width for a Higgs boson is predicted by the Standard Model as shown in this plot.

Below twice the mass of the W the width is very narrow and it is the resolution of the detectors that counts. This varies depending on the channel and the mass, but I am going to assume that it is ±5 GeV at worst and fit to a bell curve on that assumption. If you think differently you may get a different result from me. The method is to overlay the bell curve on the signal plot with a peak at 1.0 where we think the mass of the Higgs may be, tending to zero on either side. At each mass point we read the signal strength and use the observed data to tell us the conditional probability for that signal strength (assuming a flat normal PDF). These probabilities are all multiplied together to give the conditional probability for the fit. We can then try the curves for all the different Higgs masses we believe in and see which one has the best fit. Here is the result.

As you can see, the best fit is actually at 141 GeV. Perhaps we should see how it works for the separate plots from ATLAS and CMS.

ATLAS sees the Higgs at 144 GeV and CMS sees it at 141 GeV. That is pretty consistent given the resolution of the detectors. What about using different channel combinations? I will limit this to the three channels with the most data.

The best fits are 132 GeV for WW (which has poor resolution), 143 GeV for ZZ and 139 GeV for diphoton, so it is a pretty consistent result.

I don’t think it is safe to conclude that the Higgs boson has a mass around 140 GeV. All we can say is that the limited data published so far supports that as the best fit. The summer data has not probed the 120 GeV region well enough, so there could be something there with a stronger signal when we look at the 5/fb data this winter. Rumours are that there is not much of a signal anywhere, with 120 GeV being the best chance, but I am waiting until I have seen the data myself and repeated this objective analysis.
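For anyone who wants to repeat this exercise when the winter data appears, here is a minimal sketch in Python of the procedure described above. The mass grid, signal values and errors are placeholders to be filled in from the digitised plots, and the flat normal likelihood and ±5 GeV Gaussian resolution are the simplifying assumptions stated earlier, nothing official.

```python
import numpy as np

# Placeholder inputs: mass points (GeV), observed signal strengths and their
# errors, to be filled in by digitising the combined signal plot.
masses = np.arange(110.0, 200.0, 1.0)
mu_obs = np.zeros_like(masses)     # observed signal strength at each mass point
sigma  = np.ones_like(masses)      # error bar half-width at each mass point

def log_likelihood(m_higgs, resolution=5.0):
    """Log-likelihood that an SM Higgs at m_higgs produced the observed signal,
    assuming a Gaussian detector resolution and a flat normal PDF at each mass
    point (the simplifying assumptions described in the text)."""
    # Expected signal strength: a bell curve peaking at 1.0 at the Higgs mass.
    mu_expected = np.exp(-0.5 * ((masses - m_higgs) / resolution) ** 2)
    # Multiplying the Gaussian probabilities = summing the log-likelihoods.
    return -0.5 * np.sum(((mu_obs - mu_expected) / sigma) ** 2)

# Scan over hypothesised Higgs masses and keep the best fit.
candidates = np.arange(110.0, 200.0, 0.5)
best_mass = max(candidates, key=log_likelihood)
print("best-fit Higgs mass: %.1f GeV" % best_mass)
```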


New Higgs Combinations Released

November 18, 2011

The LHC Higgs combination group is presenting their ATLAS+CMS Higgs combination plot at the Hadron Collider Physics conference in Paris today at noon and the slides of the talk (Gigi Rolandi) are already online. It includes some nice individual channel combinations as well as the full one we have been expecting. Before I look at those here is my approximate version of the full combination that I showed here two months ago. This version of it is taken from a slide shown by “Bill and Vivek” for the Higgs Combination Group themselves at a kickoff meeting in September for the plots finally shown today.

So how did I do? Here is a version of the new combination that conveniently shows some of the variations you can get just by using different methodologies.

The viXra version of the plot was produced using the minimal data available in the individual ATLAS and CMS Higgs combination plots shown at Lepton Photon 2011 and approximates the probability distribution function by flat normal error curves. The calculation takes a few milliseconds. The full combination from the HCG goes back to the original data using the real log-likelihood numbers and takes into account all known correlations between the data and background calculations. That calculation takes hundreds of thousands of hours of CPU time, yet the difference between the viXra plot and the official HCG one is no bigger than the differences from using alternative methodologies such as Bayesian. This is a nice demonstration of the power of the central limit theorem, which says that an error distribution becomes normal given enough data and a finite variance. It also confirms that the effect of correlations on the plot cannot be very big.
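To give a flavour of why such a cheap approximation can work, here is a minimal sketch of the kind of calculation involved: treat each experiment’s result at a given mass point as a Gaussian measurement of the signal strength and combine with inverse-variance weights. This is my own illustration of the flat normal approximation, not the actual viXra code, and the numbers are invented.

```python
import numpy as np

def combine(mu, err):
    """Combine independent Gaussian measurements of the signal strength at one
    mass point using inverse-variance weights (flat normal approximation,
    correlations ignored)."""
    mu, err = np.asarray(mu, float), np.asarray(err, float)
    w = 1.0 / err ** 2
    return np.sum(w * mu) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# Invented ATLAS and CMS signal strengths at one mass point:
mu_comb, err_comb = combine([0.8, 1.4], [0.6, 0.7])
print("combined: %.2f +/- %.2f" % (mu_comb, err_comb))  # a tighter combined value
```

Repeated at every mass point, a loop over a function like this is the sort of few-millisecond calculation referred to above.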

To be clear, I think it is important that the full official combinations are worked out carefully because if you want to claim a discovery you have to make sure you have covered all the sources of error correctly. The Higgs Combination Group have done a good job. But if you just want to see the signal in the data we now know that an approximate combination is good enough.

If you want to compare more closely, here is the official version with the viXra combination overlaid in red. The areas where it deviates are regions at high mass where there is low background and few events have been recorded, so the normal distribution approximation is less accurate there.

Here is the zoom onto the lower mass region.

I like that the combination group have also produced combinations for all the individual channels. My own versions of these are a little less reliable because there is less data in each case, so the normal distribution is not such a good approximation. Even so my plots were not far out, which means that with the next batch of data, using two to three times the statistics, I can expect to get good results.

Here is the crucial combination for the golden channel. This is one of the best hopes for a signal because of its high resolution and good branching ratio at low mass. If you want to compare with my earlier combination it is here.

The other channel that has the potential to find a low mass Higgs is the direct diphoton decay, and there is a new combination for that too.

I think it is striking that both these plots have healthy excesses at around 140 GeV and perhaps again at lower mass. To see this better we need to combine them both together.

But this data is by now very old and it is no longer worth speculating on the basis of what the plots might show. The story has already been superseded by rumours over at Résonaances that the 5/fb plots show no more than a 2-sigma excess at 120 GeV. If all goes well we may get first results via the CERN Council Meeting during the week starting 12th December.


OPERA fail to find error in Faster Than Light Measurement

November 18, 2011

The OPERA experiment has failed to find an error in their measurement of neutrino speeds that shows them travelling faster than light. The earlier result was most strongly criticized because of the statistical nature of the measurement, which involved fitting the timing profile of many observed events at Gran Sasso to the known shape of long-duration pulses of protons 730 km away at the source at CERN. This objection has now been quashed by using much shorter pulses so that the effect can be seen with just a few neutrinos. While the previous measurement used data gathered over three years, this new confirmation took just a few weeks.
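To see what that statistical fit involves, here is a schematic sketch in Python: a long proton pulse of known shape is compared with the recorded neutrino event times and the time offset is found by a maximum-likelihood scan. The pulse shape, event numbers and jitter are all invented for illustration and the real OPERA analysis is far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
PULSE_LEN = 10500.0  # ns, roughly the length of a long CNGS proton extraction

def pulse_shape(t):
    """Invented proton waveform (schematic): a long flat top with some ripple."""
    inside = (t >= 0.0) & (t <= PULSE_LEN)
    return np.where(inside, 1.0 + 0.5 * np.sin(2.0 * np.pi * t / 1000.0), 1e-9)

# Invented neutrino event times: drawn from the waveform by rejection sampling,
# shifted early by 60 ns and smeared with 10 ns of timing jitter.
t_try = rng.uniform(0.0, PULSE_LEN, 200000)
accepted = t_try[rng.uniform(0.0, 1.6, 200000) < pulse_shape(t_try)]
events = accepted[:15000] - 60.0 + rng.normal(0.0, 10.0, 15000)

# Maximum-likelihood scan over the time offset between waveform and events.
offsets = np.arange(-120.0, 21.0, 1.0)
loglike = [np.sum(np.log(pulse_shape(events - dt))) for dt in offsets]
print("best-fit offset: %.0f ns" % offsets[int(np.argmax(loglike))])  # close to -60
```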

The crucial new plot is this one

The timing of the neutrinos is spread over a 50 ns window but is still clearly different from the zero mark that would be consistent with travel at the speed of light. The spread could be due either to inaccuracies in the timing or to differences in the speeds of the neutrinos themselves, if the effect is real. An interesting question would be whether there is any correlation between the timing offset and the energy of the neutrinos, but I don’t know if they have that data.

In fact this spread is the most exciting part of the new result. As far as I know it is bigger than the known systematic errors. If there were an unknown systematic error in the measurement of the distance between the experiments or in the timing, we would expect it to be constant. Here I am assuming that atomic clocks were used at each end to keep the timing stable, rather than constant reference to GPS time, which could vary. If this is the case then the spread actually rules out several other sources of systematic error.

Factoring in my prior probabilities from preconceived theoretical prejudices, I can now say that the probability of the result being correct has increased from one in a million to one in a hundred thousand (numbers are illustrative :) ). This is sufficient to convince most of the collaboration to sign the paper, which may now go forward to peer review. To convince more theorists they may need to do more checks on the result. The strongest criticisms will now fall on the use of GPS. To eliminate this they should check the timing and the distance calculation independently. The timing could be checked by flying a portable atomic clock from CERN to OPERA and back at high speed in a helicopter to calibrate the clocks at either end. Portable clocks can be stable to within a few nanoseconds over the course of a day, so it should be possible to carry out this check with sufficient accuracy and it would not be too expensive. The distance measurement also needs to be repeated, preferably using old-fashioned surveying techniques rather than GPS between the two locations.

If this also fails to find an error then the probability of the result being correct goes up to one in ten thousand. The next most likely source of error would be the timing measurement for the collisions that generate the neutrinos at CERN. This involves some electronics with a lag that may not be precisely known. To eliminate this they would possibly need to build a near detector to catch neutrino events in the path of the beam near CERN. If the beam is everywhere deep underground this could be an expensive addition to the experiment, but it would be a very significant check, taking the probability of the result being real up to one in a hundred or better depending on what other possible sources of error might be left.

To really confirm that neutrinos are faster than light requires confirmation from other labs using measurements that could not be subject to correlated errors. Hopefully this will arrive next year.

For more details see TRF where this was reported two days ago based on a tachyonic version of twitter, or AQDS using conventional light-speed technology from arXiv. The official press release is here.

Update: People are telling me that the timing calibration has already been done. An update from Dorigo makes some interesting points, including the fact that the timing depends on a 20 megahertz clock signal whose period is 1/(20 MHz) = 50 ns. This explains the spread of the measurements over 50 ns. In fact it means that the underlying time offset must be very sharp, which is not such good news. It makes a constant systematic error seem much more likely.

I think another essential upgrade to the experiment would be to record a timestamp for events with nanosecond accuracy.


Witten and Knots

November 16, 2011

If you are at all interested in mathematical physics you will want to watch Ed Witten’s recent talk on his work in knot theory that he gave at the IAS. Witten gives a general overview of how he discovered that the Jones polynomial used to classify knots turns out to be “explained” as a path integral using Chern-Simons theory in 3D. More recently the Jones polynomial was generalised to Khovanov homology, which describes a knotted membrane in 4D, and Witten wanted to find a similar explanation. He was stuck until some work he did on Geometric Langlands gave him the tools to solve (or partially solve) the riddle.

Geometric Langlands was devised as a simpler variation on the original Langlands program, which is a wide-ranging set of ideas trying to unify concepts in number theory. Witten makes some interesting comments during the question time. He says that one of the main reasons that physicists (such as himself) are able to use string theory to answer questions in mathematics is that string theory is not properly understood. If it were, then the mathematicians would be able to use it in this way themselves, he says. Referring to the deeper relationship between string theory and Langlands, he said:

“I had in mind something a little bit more ambitious, like whether physics could affect number theory at a really serious structural level, like shedding light on the Langlands program. I’m only going to give you a physicist’s answer, but personally I think it is unlikely that it is an accident that Geometric Langlands has a natural description in terms of quantum physics, and I am confident that that description is natural even though I think it might take a long time for the math world to properly understand it. So I think there is a very large gap between these fields of maths and physics. I think if anything the gap is larger than most people appreciate and therefore I think that the pieces we actually see are only fragments of a much bigger totality.”

See also NEW


BSM CPV in LHCb at HCP11

November 14, 2011

Beyond-standard-model CP violation has been reported by Mat Charles for the LHCb collaboration at the Hadron Collider Physics conference today. Here is the relevant plot, in which the cyan-coloured band indicating the measurement fails to cross the black dot marking the Standard Model prediction.

The numerical result, which was already rumoured at the weekend, is ΔACP = -0.82% ± 0.25%, which is just over 3 sigma significance (0.82/0.25 ≈ 3.3).

This measurement is sensitive to new physics such as higher-mass particles with CP-violating interactions, so that could be the explanation. On the other hand it is also a very tricky measurement, subject to background and systematics. The significance will improve with more data, and already twice as much is on tape, so this is one to watch. The interesting thing will be to see if the phenomenologists can account for this result using models that are consistent with the lack of other BSM results from ATLAS and CMS.

Update: This is also being reported in other blogs of course, e.g. here and here, but for the most expert details see the LHCb public page and the CERN bulletin.


Expected LHC Higgs Significance at 5/fb+5/fb

November 14, 2011

The Hadron Collider Physics conference starts today in Paris and we eagerly await updates for various searches including the Higgs. 5/fb of luminosity have been collected by each experiment, but it is too soon for the analysis of the full data to be out, and this week we are only expecting results at 2/fb to be shown (but surprises are always possible). Indeed, ATLAS have recently revealed updates for three of the diboson Higgs channels at 2/fb in conference notes and at other conferences. These do not make much difference, but an update to the diphoton search would be worth seeing; it has so far only been shown by ATLAS at 1/fb. CMS have only released results at 1.6/fb for the main Higgs decay modes, so they are even more overdue for updates.

While we are waiting for that we can look forward to next year when results for 5/fb will be revealed, probably in March. When the results are combined we will effectively see 10/fb, and here is a plot showing the expected significance at that level. This is for 10/fb at CMS, which can be taken as a good approximation to the ATLAS+CMS combination at 5/fb each.

From this you can see that they expect at least 4 sigma significance all the way from 120 GeV to 550 GeV, which suggests that a good clear signal for the Higgs is almost certain if it exists. But not so fast: there are a couple of caveats that should be added.

Firstly the WW decay channels have been very good for excluding the Higgs over a wide mass range. Here is the viXra combined plot using 2/fb of ATLAS data and 1.5/fb from CMS.

This is only a rough approximation to what would be produced if they did an official version, because it assumes a flat normal distribution, uses a linear interpolation for the CMS points and ignores any correlations.

Within those limitations we get an exclusion from 140 GeV to 218 GeV, with a broad excess around 2 sigma extending all the way from 120 GeV to 160 GeV. A Standard Model Higgs in this region would only have a width of a few GeV and no bump of that sort is seen, so what does it mean? ATLAS and CMS will probably need to consider this question for a long time before agreeing to approve results like this with more data, along with a suitable explanation. For now you should just bear in mind that this plot suffers from large backgrounds and poor energy resolution due to the use of missing energy to identify the two neutrinos. These effects have been worsened by high pile-up this year. I suspect that this channel will have to be used only where it provides a 5 sigma exclusion and should be left out when looking for a positive signal.

For this reason I have added a red line to the projected significance plot above showing the expected significance for just the diphoton plus ZZ to 4 lepton channels. These decay modes have very good energy resolution because the photons and high energy leptons (electrons and muons) are detected directly with good clarity and are not affected by pile-up. I think that the best early signal for the Higgs boson will be seen in a combination of these channels alone. The projected significance plot shows that with the data delivered in 2011 we can expect a signal or exclusion at a level of significance ranging from about 3 sigma to 6 sigma in the mass range of 115 GeV to 150 GeV where the Higgs boson is now most likely to be found.

Does this mean that we will get at least a 3 sigma “observation” for the Higgs by March? No, not quite. There is one other obvious caveat that is often forgotten when showing these projected significance plots. These are only expected levels of significance and, like everything else, they are subject to fluctuations. Indeed, given twenty uncorrelated mass points we should expect fluctuations of up to 2 sigma somewhere over the range. How could this affect the result? The next plot illustrates what this could mean, assuming an expected significance of 4 sigma.

In this plot the green line represents the expected level for a positive signal of a Standard Model Higgs, while the red line represents the level where there is no Higgs. The data points have error bars at the size you will get when you expect a 4-sigma level of significance. So point A shows where the bars are expected to sit if the SM Higgs exists at a given mass value and point B shows where the bars are expected if there is no Higgs. If they get observed data in these locations they will be able to claim a 4-sigma observation or exclusion, but remember that fluctuations are also expected. Point C shows what happens when the Higgs is there but an unlucky one sigma fluctuation reduces the number of observed events. The result is a reduced significance of three sigma. Likewise point D shows an unlucky one sigma fluctuation when there is no Higgs, which still gives a healthy three sigma exclusion. But remember that we expect fluctuations of up to two sigma somewhere in the range. Point E shows what happens when a Higgs is there but an unlucky two sigma fluctuation hits that mass point, and point F shows what happens when there is no Higgs with an unlucky two sigma fluctuation. The points are the same, corresponding to either a two sigma signal or a two sigma exclusion. We have already seen some points that look just like this at the summer conferences. This is why the CERN DG has cautiously promised to identify or exclude the Higgs only by the end of 2012 and not by the end of 2011. More optimistically we can also hope for some lucky fluctuations. If they fall at the mass where the Higgs actually lives we will get a 6 sigma discovery-level signal like point G instead of merely a 4-sigma observation.
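The expectation of a roughly two sigma fluctuation somewhere among twenty uncorrelated points is easy to check with a toy Monte Carlo along these lines (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy check: with 20 uncorrelated mass points, how large is the largest
# fluctuation typically, and how often does at least one exceed 2 sigma?
largest = np.abs(rng.normal(size=(100000, 20))).max(axis=1)
print("median largest fluctuation: %.2f sigma" % np.median(largest))
print("fraction with at least one >2 sigma point: %.2f" % (largest > 2.0).mean())
```

The largest fluctuation typically comes out a little above two sigma, and more often than not at least one point exceeds two sigma, which is exactly the caveat being made here.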

It’s a simple point and my explanation is a little too long-winded, but I think this had to be said clearly before the next results come out, in case people do not see what they thought they should have expected to see. With another year of data 10/fb becomes (perhaps) 40/fb, and since the expected significance grows roughly with the square root of the luminosity, 4 sigma becomes 8 sigma. Even with unlucky 2 sigma fluctuations they will be left with 6 sigma signals. The results will probably be good enough to claim discoveries even for the individual experiments and some individual decay channels, but for this year’s data there could still be a lot of ambiguity to mull over.

