CMS looking back

January 30, 2013

The CMS collaboration have kindly posted a pleasant video that reveals the moments when they “unblinded” their Higgs diphoton results within the collaboration in the run-up to the public discovery announcement in July.

I find it interesting to look back and see how these events relate to what was going on publicly on the blogs at the time. From the video we learn that the CMS collaboration were shown the first results on 15th June. The Higgs analysis group within CMS must have seen them at least a day or two earlier in order to prepare the plots for the talk, and we can assume that ATLAS were seeing their results at about the same time. For two days the collaborations were able to walk around knowing things that the outside world didn’t, until Peter Woit blew the lid off with his leak about the new results. This was a little upsetting for them at first, as shown by the response from CMS blogger Michael Schmitt, who later calmed down a bit. One argument they gave was that they did not want the information to pass between the ATLAS and CMS collaborations because it would spoil their independence. But if the rumour could reach the outside world so quickly, it clearly would not have stayed secret at CERN and the other institutions where 3000 excited physicists from each collaboration share the same social spaces.

The rumours were saying that the diphoton excess was at around 4 sigma, and the video shows that it was indeed around 4.3 for CMS. In my own analysis the next day I estimated that this was based on about 3/fb of data, which turns out to be exactly right for CMS, as seen in the video when the camera zooms in at 2:20. I also pointed out that when they added the full dataset the signal could easily go down, and in fact it did descend to 4.1 sigma, as seen in the next part of the video. I am not always right, but I was this time. Subsequently the New York Times reported an email from the spokesperson for ATLAS saying that people should not believe the blogs. Now we know that this was a euphemism for “please don’t report what they are saying, because it is perfectly accurate and we were hoping to keep it as a surprise for the next conference”.

Another point worth making here is that the collaborations like to make big statements about how they do their analyses blind. This is supposed to mean that they don’t look at the results until they have fixed the parameters of the analysis, so that they cannot introduce any bias. From this video we can see what this really means in practice. They unblind the data as an early check, then they “re-blind” it while they adjust the analysis, then they unblind it again two weeks later with just 30% more data added. Come on guys, own up: this is not quite in the spirit of how a blind analysis is meant to work. Luckily the signal is so clear that it is indisputable in any case.

Getting more up to date, remember that CMS have not yet published the diphoton update with 13/fb at 8 TeV. Rumours revealed that this was because the excess had diminished. At the Edinburgh Higgs symposium some more details about the situation were given. The talks are not online, but Matt Strassler, who was there, has told us that the results have now been deemed correct. It may be understandable that when the results are not quite what they hoped for they scrutinize them more carefully, but I find it wrong that they do not then publish the results once checked. It was clear that they intended to publish these plots at HCP2012 in November and would have done so if they had shown a bigger excess. By not releasing them now they are introducing a bias in what is publicly known, and theorists are left to draw conclusions based on the ATLAS results alone, which still show an over-excess in the diphoton channel. It will all be history once the final results with about 20/fb are released soon, but it would be helpful if they could keep this sort of biasing factor to a minimum.

The March meeting at Moriond is slated as the occasion for the final update, but only if they are happy with the results. Their analysis has been used and refined many times in the last two years, and by now they should be confident enough to say that they will publish regardless of the result they get. The data from the last proton run was available before Christmas, so by now the collaborations should have completed their final analysis. The fact that we don’t have any rumours suggests that this time they have decided to confine knowledge of the results to the smaller Higgs groups within the collaborations, and they may actually succeed in keeping them secret until the conference.

Update: See Tommaso Dorigo’s response here


We need to find the Theory of Everything

January 27, 2013

Each week the New Scientist runs a one-minute interview with a scientist, and last week it was Lisa Randall, who told us that we shouldn’t be obsessed with finding a theory of everything. It is certainly true that there is a lot more to physics than this goal, but it is an important one, and I think more effort should be made to get the right people together to solve this problem now. It is highly unlikely that NS will ever feature me in their column, but there is nothing to stop me answering questions put to others, so here are the answers I would give to the questions asked of Lisa Randall, which also touch on the recent discovery of the Higgs(-very-like) boson.

Doesn’t every physicist dream of one neat theory of everything?

Most physicists work on completely different things, but ever since Einstein’s attempts at a unified field theory (and probably well before), many physicists at the leading edge of theoretical physics have indeed had this dream. In recent years scientific goals have been dictated more by funding agencies, who want realistic proposals for projects. They have also noticed that all previous hopes that we were close to a final theory were dashed by further discoveries that were not foreseen at the time, so physicists have drifted away from such lofty dreams.

So is a theory of everything a myth?

No. Although the so-called final theory won’t explain everything in physics, it is still the most important milestone we have to reach. Yes, it is a challenging journey, and we don’t know how far away it is, but it could be just round the corner, so we must always try to keep moving in the right direction. Finding it is crucial to making observable predictions based on quantum aspects of gravity. Instead people are trying to do quantum gravity phenomenology based on very incomplete theories, and it is just not working out.

But isn’t beautiful mathematics supposed to lead us to the truth?

Beauty and simplicity have played their part in the work of individual physicists such as Einstein and Dirac, but what really counts is consistency. By that I mean consistency with experiment and mathematical self-consistency. Gauge theories were used in the standard model not really because they embody the beauty of symmetry, but because gauge theories are the only renormalisable theories for the vector bosons that were seen to exist. It was only when the standard model was shown to be renormalisable that it became popular and replaced other approaches. Only renormalisable theories in particle physics lead to finite calculations that predict the outcome of experiments, but there are still many renormalisable theories, and only consistency with experiment can complete the picture. Consistency is also the guide that takes us to theories beyond the standard model, such as string theory, which is needed for quantum gravity to be consistent at the perturbative level, and the holographic principle, which is needed for a consistent theory of black hole thermodynamics.

Is it a problem, then, that our best theories of particle physics and cosmology are so messy?

Relatively speaking they are not messy at all. A few short equations are enough to account for almost everything we can observe over an enormous range of scales, from particle physics to cosmology. The driving force now is the need to combine gravity with the other forces in a form that is consistent non-perturbatively, and to explain the few observational facts that the standard models don’t account for, such as dark matter and inflation. This may lead to a final theory that is more unified, but some aspects of physics may be fixed by historical accident rather than by the final theory, in which case particle physics could always be just as messy and complicated as biology. Even aside from those aspects, the final theory itself is unlikely to be simple in the sense that you could describe it fully to a non-expert.

Did the discovery of the Higgs boson – the “missing ingredient” of particle physics – take you by surprise last July?

We knew that it would be discovered or ruled out by the end of 2012 in the worst case. In the end it was found a little sooner. This was partly because it was not quite at the hardest place to find in the mass range, which would have been around 118 GeV. Another factor was that the diphoton excess was about 70% bigger than expected. If it had been as predicted they would have required three times as much data to get it from the diphoton excess, though the ZZ channel would have helped. This over-excess could be just the luck of the statistics or due to theoretical underestimates, but it could also be a sign of new physics beyond the standard model. Another factor that helped them push towards the finish line in June was that it became clear that a CMS+ATLAS combination was going to be sufficient for discovery. If they could not reach the 5-sigma goal for at least one of the individual experiments, they would have had to face the embarrassment of an unofficial discovery announced on this blog and elsewhere. This drove them to use the harder multivariate analysis methods and include everything that bolstered the diphoton channel, so that in the end they both got the discovery in July and not a few weeks later when an official combination could have been prepared.
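As a back-of-envelope check on that factor of three, assume the usual counting approximation in which significance grows as the signal strength times the square root of the integrated luminosity:

\[
S \;\propto\; \mu\sqrt{L}
\quad\Longrightarrow\quad
L_{\mathrm{required}} \;\propto\; \frac{1}{\mu^{2}}
\quad\Longrightarrow\quad
\frac{L_{\mu=1}}{L_{\mu=1.7}} \;\approx\; 1.7^{2} \;\approx\; 2.9 ,
\]

where \(\mu\) is the observed signal strength relative to the standard model prediction. An excess 1.7 times the predicted size therefore reaches a given significance with roughly a third of the data that a standard-model-strength signal would need, which is where the factor of three comes from.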

Are you worried that the Higgs is the only discovery so far at the LHC?

It is a pity that nothing else has been found so far, because the discovery of any new particle beyond the standard model would immediately lead to a new blast of theoretical work that could take us up to the next scale. If nothing else is found at the LHC after all its future upgrades, it could be the end of accelerator-driven physics until someone invents a way of reaching much higher energies. However, negative results are not completely null. They have already ruled out whole classes of theories that could have been correct, and even if there is nothing else to be seen at the electroweak scale, that will force us to some surprising conclusions. It could mean that physics is fine-tuned at the electroweak scale just as it is at the atomic scale. This would not be a popular outcome, but you can’t argue with experiment, and accepting it would enable us to move forward. Further discoveries would then have to come from cosmology, where inflation and dark matter remain unexplained. If accelerators have had their day, then other experiments that look to the skies will take over and physics will still progress, just not quite as fast as we had hoped.

What would an extra dimension look like?

Extra dimensions would show up through the existence of heavy particles that are otherwise similar to known particles, plus perhaps even black holes and massive gravitons at the LHC. But the theory of large extra dimensions was always an outsider with just a few supporters. Theories with extra dimensions such as string theory probably only show these features at much higher energy scales that are inaccessible to any collider.

What if we don’t see one? Some argue that seeing nothing else at the LHC would be best, as it would motivate new ideas.

I think you are making that up. I never heard anyone say that finding nothing beyond the Higgs would be the best result. I did hear some people say that finding no Higgs would be the best result, because it would have been so unexpected and would have forced us to find the correct alternative theory that would have had to exist. The truth of course is that this was a completely hypothetical situation. The reason we did not have a good alternative theory to the Higgs mechanism is that there isn’t one; the Higgs boson is in fact the correct answer.

Update: Motl has a follow-up with similar views and some additional points here


The Dark Side of Open Access

January 18, 2013

If you are an independent researcher, as I am, you will know the feeling of despair when you find a reference to a useful-looking paper that is hidden behind a journal’s paywall with no free version available on the internet. Research institutions pay subscriptions that allow their members unfettered access, but the rest of us have to pay a fee. For this reason I welcome the gradual move towards open access journals, which will eventually mean that all research is available online with free access for everyone, but there is a darker side to this movement that I am a lot less keen on. Let’s take Philica as an example of an open access journal that I would certainly consider publishing in as a show of support. It accepts submissions in any subject, and I particularly like it because its peer reviews are made public and allow for dynamic changes when subsequent research supports or refutes a published work. Unfortunately there is a catch for independent scientists. You can only register to publish in Philica if you are a full-time researcher employed by a university, hospital or other research institution. Apparently open access does not mean open to submissions from all authors. [Update 21/2/2014: The policy at Philica has apparently changed a little, and independent researchers can apply for membership if they can show that they are capable researchers.]

In the traditional publication model it would be very unusual to find a journal that placed explicit limitations on who could publish in its pages. It is not something I had experienced before, but with open access journals it is becoming more common. For now there are still plenty of small open access journals that take submissions from anyone, but will they last? I sense that the thin end of the wedge is in place, and as it is driven in we will see unapproved researchers driven out in an effort to reduce the costs of publication. The result could have unexpected consequences for science and society.

Green, Gold or Diamond

Open access usually means that anyone can access papers for free. This comes in different forms, sometimes termed green or gold open access. With green open access the journal allows authors to place a version of their paper on the internet where anyone can access it for free. Usually they do not allow the typeset version produced by the journal to be posted in this way, but there is nothing to stop the online version being updated to reflect all the changes made as a result of peer review. This works for the journals because university libraries cannot rely on authors to provide the open access copy and must therefore continue to pay the journal subscription.

With gold open access the journal itself provides a free copy of every paper online. Some long-standing journals experimented with this option but found very quickly that libraries would cancel subscriptions, cutting off the journal’s revenue stream. In some cases they have agreed to allow open access after a delay of a few years, but new research is most relevant as soon as it appears, so this is not a very satisfactory solution. Under pressure from funding agencies, the new trend is for journals to move towards payments from authors as an alternative to library subscriptions, but the payments can be several thousand dollars per publication, which makes life particularly difficult for areas of theoretical science that produce many papers on a low budget. It is of course especially difficult for most independent scientists, who may have no funding at all.

For professional scientists the ideal standard for open access is now being called platinum or diamond access, meaning that it is free to publish and free to access. However, this does not mean that it is open for anyone to publish. There is no name for that level of openness, because professional researchers do not feel a need for it. Their only real concern is to reduce the cost of publishing, which impacts research budgets. To make diamond open access possible it is necessary to reduce the cost of running a journal to virtually zero. This is perfectly feasible, since the essential work of editors and reviewers is done for free by scientists out of a sense of duty and for career advancement. If journals are published online only, the costs are reduced to whatever is required to run a website, and this can be brought down to essentially nil if there is a centrally run infrastructure.

This week Fields medalist Sir Timothy Gowers announced a new initiative, funded in France, that will provide just such an infrastructure. Scientists will be able to pull together and quickly set up epijournals in whatever area of science they choose at virtually no cost. Although they will be free to charge a publication fee if they wish, this is likely to be very low or zero, and reader access will always be free because the system will run on the back of the HAL archive, which is an arXiv mirror with open access for all readers. This is not the first project that has tried to change the way science publishing works, but because it will be available to all areas of research and has solid funding support, it is likely to take over as the major platform for peer review. The catch for independent researchers is that you will not be able to publish in epijournals unless you can submit to arXiv, and that is not possible for everyone.

The scientists and mathematicians who are setting up the system do not seem to regard this as a problem. They believe that any serious researcher can easily find the endorser required to gain access to arXiv, but as the 1700 researchers who use viXra can testify, this is not the case. At present about 15% of papers submitted to viXra are accepted in journals after peer review, but this figure is likely to diminish to near zero if arXiv-based journals take hold. To be fair, Gowers has said that epijournals could allow linking to repositories other than arXiv. Whether they allow linking to viXra remains to be seen. My guess is that even if the epijournal infrastructure allows it, most individual journals will limit submissions to arXiv. In fact they may go further and only allow submissions from categories within arXiv that are related to the subject areas of the journal. This will reduce the overhead of having to reject papers that are off-topic, and with near-zero budgets to work with, this is going to be an attractive option. It could mean that even authors who are limited to arXiv’s generic categories, such as general maths and general physics, will find themselves unable to submit to journals. I hope I will be proven too pessimistic, but it seems to me that the writing is on the wall.

Why Does it Matter?

You may well ask why this matters. It is clear from the many discussions about open access on the internet that publication access for all authors is not a concern for professional scientists. Much of the drive towards open access is being piloted by mathematicians, and mathematics is rarely a controversial subject. Apart from a few rare cases, such as the work of Gödel or Cantor, mathematical progress is accepted very quickly. It is hard to argue with a proof. It is unlikely that any barrier could prevent a good work of mathematics from being recognized, even if it came from an independent mathematician without the usual affiliations. But what about subjects more infested with the interference of politics? Take climate science as an example. Would it not be very tempting for the establishment to be able to undermine the work of climate skeptics simply by hindering their ability to publish? I suspect that journal editors will find it all too convenient that they can limit who can submit research by such artificial means. The wedge will be driven in further, and it will become harder for scientists on the fringe to get the credibility they need from publication, or even to submit their work to someone who is at least required to read and criticize it. Science is sleepwalking into a Brave New World where anyone can speak but only the approved few can be heard. I think that those who are leading the fight for open access need to understand this now, before it is too late. They must define open access to also mean openness for anyone to submit their work for peer review. At present their only concern is to remove the financial cost of access. Later they will see that such short-sightedness also has a cost.


A Good Year for viXra

January 2, 2013

2012 was a good year for viXra, so this is a good moment to share some statistics.

This blog passed the 1 million view mark in December, which is not bad considering the low posting rate and the length of time it has been running. Thank you all for your support.

Apart from the blog, the main part of viXra is the pre-print archive, which we started in 2009 for scientists and mathematicians who experienced problems submitting to other archives such as arXiv. Since then it has gone from strength to strength, as shown in these plots of paper upload and download counts. We now have over 4000 pre-prints online.

[Plot: paper uploads per year]

[Plot: paper downloads per year]

Uploads include papers from sciprint.org, a similar archive that ran from 2007 until viXra began in July 2009. The download stats have been filtered to remove indexing robots and multiple downloads of the same paper from the same IP address.
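For those curious about what that filtering involves, here is a minimal sketch of the idea in Python. The log format, column names and bot list are illustrative assumptions, not the actual viXra scripts:

    import csv
    from collections import defaultdict

    # Substrings that identify common indexing robots in the User-Agent
    # header (an illustrative list; a real filter would be much longer).
    BOT_SIGNATURES = ("bot", "crawler", "spider", "slurp")

    def count_downloads(log_path):
        """Count downloads per paper, skipping robots and counting each
        (paper, IP) pair at most once."""
        seen = set()                 # (paper_id, ip) pairs already counted
        counts = defaultdict(int)    # paper_id -> filtered download count
        with open(log_path, newline="") as f:
            # assumed CSV columns: paper_id, ip, user_agent
            for row in csv.DictReader(f):
                agent = row["user_agent"].lower()
                if any(sig in agent for sig in BOT_SIGNATURES):
                    continue         # drop indexing robots
                key = (row["paper_id"], row["ip"])
                if key in seen:
                    continue         # drop repeats from the same IP
                seen.add(key)
                counts[key[0]] += 1
        return counts

Counting each (paper, IP) pair at most once is a crude but effective deduplication; a real pipeline would probably also window the counts by date.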

As you can see, we had a record number of uploads in 2012, and downloads have been doubling year on year. This year, by popular request, we also started showing download statistics for individual papers, with counts backdated from our logs. These can be viewed on the abstract pages. Our rival arXiv only provides a long list of excuses for why they don’t provide a similar feature.

For those not familiar with viXra and those who don’t get what it is about, here are some bullet points:

  • viXra was created in 2009 for scientists who have issues submitting to other preprint repositories such as arXiv
  • It is run by its administrators independently of any organisation.
  • All submissions are free and unconditional. The site is funded by adverts (we no longer accept donations but thanks to all the past donors)
  • It is viXra policy to accept all submissions of scientific research.
  • We very occasionally reject submissions which contain personal attacks, adult material, too much repetition, copyright violations etc., but never because we disagree with the content.
  • We only accept works of science and mathematics. If you have works of literature, art, politics etc., there are other places to publish them.
  • Acceptance of papers into viXra does not indicate any kind of endorsement or bestow any credibility or lack of credibility.
  • The sole purpose of viXra is to provide open access to scientific works with permanent links for reference and time-stamped records of version changes to allow for verification of priority.
  • viXra is not a peer-reviewed journal, and as a matter of policy the administrators refuse all requests for feedback on submitted work.
  • Authors retain copyright and can also submit papers to journals for peer-review.
  • Each abstract page has a comment feature that anyone can use to provide feedback. Very few comments are deleted and never just because they are critical.
  • Most authors who submit to viXra are independent researchers who cannot use arXiv because of its endorsement policy, which makes submission impossible if you do not have academic contacts willing to vouch for you. Potential arXiv endorsers are often unwilling to help outsiders because arXiv threatens to remove their endorsement rights if they endorse work deemed inappropriate by the arXiv moderators.
  • viXra also contains work from people who are not independent of academic institutions. Some of them have found that they have problems with arXiv administrators, who often move research that they don’t like to generic categories (e.g. general physics and general maths). In these categories you cannot normally cross-post to other categories, and many indexing sites ignore them. The purpose of these categories seems to be to make unpopular research hard to find.
  • Despite this open censorship of scientific research, most academics support the arXiv endorsement and moderation policies and believe that they only filter out research of no scientific value.
  • Although a significant number of papers on viXra are of low quality, there are also many that have been accepted in peer-reviewed journals (estimated at 15% in one independent survey).
  • Our comparison of essays by viXra authors submitted to the 2012 FQXi essay contest, which were independently rated, showed that the distribution of their scores was similar to the overall distribution from all authors, of whom about a third were professional scientists who submit to arXiv.
  • Most papers that go against mainstream science are indeed as crazy as they seem, but there are numerous cases in the history of science where work was heavily criticized at first and later turned out to be right. viXra provides a place where any controversial work can be recorded and made available no matter where else it is rejected from. Even if such cases are very rare, viXra provides a service of value to science in this way.
  • Science does not just progress in giant revolutionary steps, and viXra also contains many ordinary works of everyday science that find it hard to get published or accepted into other repositories.
  • Even research which contains many errors can nevertheless contain useful insights. A good example from history is the work of Georg Ohm, which was based on a very poor understanding of theoretical physics but nevertheless contained a report of careful experiments that established Ohm’s law. Even papers that are seen to have many errors are worth keeping publicly available in case they also contain valuable ideas.
  • Even if many papers on viXra turn out to have little scientific value, we at viXra believe that everyone should be encouraged to think for themselves and be given the opportunity to learn from their mistakes. It is also the case that you can never predict which crazy idea may inspire someone to think of something of real value.
  • viXra is not “a way round peer-review”, which is an important part of scientific evaluation. However, some scientists now argue that peer review and publication should be formally separated. Traditional peer review is often seen as flawed because of the role of publishing houses, which are often motivated by business interests. Despite much discussion, scientists and mathematicians have so far failed to implement a viable alternative to peer review controlled by journals.
  • One other way to assess the value of papers over time is by looking at citations. Sadly viXra is now censored by all the services that count citations, such as Google Scholar, InspireHEP and CiteSeer, so it is impossible to evaluate viXra papers this way unless they are also published elsewhere.
  • Despite the opposition from institutional science, we at viXra are encouraged by the support from our authors and will allow future historians to be our judge.
