MINOS and MiniBooNE poll

June 28, 2010

There has been a lot of discussion around the science blogs about the unexpected observations from neutrino experiments that neutrinos and antineutrinos appear to have different masses. If you have been following, this is your chance to vote on how you think the situation will be resolved.


“crackpots” who were right 13: Robert Goddard

June 28, 2010

Today the term “Rocket Science” is used as a metaphor for any kind of engineering endeavor that requires a mind of the highest caliber. In fact the standard technology for putting something into orbit is a liquid fueled rocket combining kerosene and liquid oxygen to propel a series of rocket stages, while using gyroscopes and steerable thrust to stabilise motion. It is remarkable, then, that all these innovations were developed by one man working quietly without much support at a time when most people thought the idea of space travel was no more than a crazy dream to be found in science fiction. That man was Robert Goddard.

As a young man in Massachusetts at the beginning of the twentieth century, Robert Goddard looked skywards and observed the flight of birds. At the time most people thought that controlled flight would never be a possibility for a man-made machine, but Goddard was one of those who thought differently, and he was no mere dreamer. Already he understood enough physics to work out the theory of flight. He was also an experimenter who had worked with kites and balloons to understand how flight could be possible. In 1907 at the age of 25 he published a paper in “Scientific American” about a method for “balancing aeroplanes”.

His young age and lack of facilities meant that Goddard was not destined to contribute further to the achievements of early powered flight that were just getting under way. Instead he turned to the idea that rockets might be used to take men much higher, even into space. In 1909 he made his first innovation when he realised that liquid fuels would be superior to solid fuels for high-powered rockets. While at Princeton a few years later he was struck with tuberculosis and returned to his home town. This gave him the time to turn his ideas into practice: he began his first rocket experiments in 1915, work that would lead to the world’s first liquid fueled rocket launch in 1926.

Although his early flights did not get more than a few hundred meters off the ground, Goddard could see that the principle could be scaled up. In 1919 he published a book entitled “A Method of Reaching Extreme Altitudes” with the backing of the Smithsonian Institution. The book went far beyond what anybody else thought possible at the time. It culminated with the suggestion of an experiment to send a rocket beyond Earth’s atmosphere and on to the moon where he hoped its impact could be observed through telescopes.

It is not surprising that such ideas were met with skepticism at the time, but the scale of ridicule and mockery from the US press in reaction to Goddard’s work must have come as quite a shock. The New York Times was particularly vehement. The journalists thought that it would be impossible for a rocket to work beyond the atmosphere because it would have nothing to push against. They lambasted Goddard for his failure to appreciate basic high school physics. Of course it was they who were being ignorant.

The public backlash forced Goddard to retreat to more private research. Despite very little funding he persevered and went on to develop many of the basic principles of rocket propulsion from both theory and experiment. While the US rejected his work, elsewhere in the world, and especially in Germany, others saw its value and quickly started to build on it. This gave the Germans a startling lead in rocket science that culminated with the V2 rockets launched against London during the war.

As the war ended Goddard died unaware that the Americans and Russians were secretly vying to capture German rocket technology. It was the beginning of the cold war which would be symbolised by the space race. Within less than 25 years military engineers with vast funds from government would go much further than even Goddard had dreamed. In 1969 Neil Armstrong stepped onto the moon. The technology used was scarcely different from what Goddard had proposed, except in scale. 

Shortly after,  the New York Times issued a correction to its editorials of 49 years earlier that had mocked Goddard:

 “Further investigation and experimentation have confirmed the findings of Isaac Newton in the 17th Century and it is now definitely established that a rocket can function in a vacuum as well as in an atmosphere. The Times regrets the error.”

Even within the apology the journalists show an incredible ignorance and arrogance. How could they truly believe that Newton’s laws had not been understood or tested earlier? It raises deeper questions: why did no American physicists speak up for Goddard at the time? They must have seen the basic errors in the criticism against him. Did no scientist of authority think to write a letter to point out how Newton’s laws worked? Were they really so scared of the power of the papers that they did not want to risk their reputations in defense of Goddard?

We shall perhaps never know the truth but we should not forget how dire the consequences nearly were. With a little more work on rockets the Germans would have had weapons of incredible power that might have led to a different end to the war.

In 1959 the Goddard Space Flight Center became one of several facilities to be named in his honour, so that his legacy may live on in our memory for many years.


LHC recap and plans

June 25, 2010

As reported yesterday, the Large Hadron Collider has now collided nominal intensity proton bunches for the first time. This is an important turning point in the commissioning of the collider and today Oliver Bruning gave a useful talk explaining why that is. This, then, could be a good moment to review the progress so far and look at the future plans for the gradual build-up of energy and luminosity.

First of all, here is a table showing how the collider has gradually worked up to this point since its restart at the end of 2009: 

date       | E/proton | nb | nc | β    | Ib    | luminosity (L)
23/11/2009 | 0.45 TeV | 1  | –  | –    | –     | –
06/12/2009 | 0.45 TeV | 4  | –  | –    | –     | –
09/12/2009 | 1.18 TeV | 2  | –  | –    | –     | –
15/12/2009 | 1.18 TeV | 16 | –  | –    | –     | –
30/03/2010 | 3.5 TeV  | 2  | 1  | 11m  | 10 Gp | 0.001 MHz/b
24/04/2010 | 3.5 TeV  | 3  | 2  | 2m   | 12 Gp | 0.01 MHz/b
14/05/2010 | 3.5 TeV  | 4  | 2  | 2m   | 20 Gp | 0.035 MHz/b
15/05/2010 | 3.5 TeV  | 6  | 3  | 2m   | 20 Gp | 0.077 MHz/b
24/05/2010 | 3.5 TeV  | 13 | 8  | 2m   | 20 Gp | 0.21 MHz/b
25/06/2010 | 3.5 TeV  | 3  | 1  | 3.5m | 90 Gp | 0.25 MHz/b

E/proton is the energy per proton. The centre of mass energy, which is what matters for the physics, is twice this number because two protons collide head-on.
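
In other words, at the current settings:

```latex
% Centre-of-mass energy for two equal-energy protons colliding head-on
E_{\mathrm{cm}} = 2\,E_{\mathrm{proton}} = 2 \times 3.5\ \mathrm{TeV} = 7\ \mathrm{TeV}
```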

The protons in the beam are concentrated in bunches, and nb is the number of bunches circulating in each direction around the collider ring. The number of collisions per turn, nc, is what counts towards luminosity, and for small numbers of bunches this can be less than nb. It depends on how the bunches are distributed in order to collide at the different intersection points where the experiments live.

β is a parameter that measures how much the beams are squeezed at the collision points. Squeezing them causes more protons to collide so the luminosity increases. Finally, Ib is the number of protons in each bunch (1 Gp = 1 billion protons). The luminosity L (the units given are megahertz per barn, where 1 MHz/b = 10^30 cm^-2 s^-1, because this is the unit used on some of the LHC luminosity displays) determines the rate at which collision events can take place. It is roughly proportional to nc × Ib^2 / β, so increasing the number of protons per bunch is the most effective way to increase luminosity.
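
To illustrate that scaling, here is a quick back-of-the-envelope sketch of my own (not an official LHC formula; it ignores emittance, crossing angle and other real-world factors, so it is only good for order-of-magnitude comparisons between the rows of the table above):

```python
# Rough luminosity scaling, assuming L is proportional to nc * Ib^2 / beta.
# This ignores emittance, crossing angle and hourglass effects, so it is only
# good for order-of-magnitude comparisons between machine configurations.

def relative_luminosity(nc, ib_gp, beta_m):
    """Arbitrary-unit estimate: nc collisions per turn, ib_gp protons
    per bunch in Gp, beta in metres."""
    return nc * ib_gp**2 / beta_m

def mhz_per_barn_to_cgs(lumi_mhz_b):
    """Convert MHz/b to cm^-2 s^-1: 1 barn = 1e-24 cm^2,
    so 1 MHz/b = 1e6 / 1e-24 = 1e30 cm^-2 s^-1."""
    return lumi_mhz_b * 1e30

# Compare the 14/05 fill (nc=2, 20 Gp, beta=2m) with the 25/06 fill (nc=1, 90 Gp, beta=3.5m).
gain = relative_luminosity(1, 90, 3.5) / relative_luminosity(2, 20, 2.0)
print(f"expected gain: x{gain:.1f}")            # ~x5.8, same ballpark as the 0.035 -> 0.25 MHz/b jump
print(mhz_per_barn_to_cgs(0.25), "cm^-2 s^-1")  # 0.25 MHz/b = 2.5e+29 cm^-2 s^-1
```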

So that is why the goal for the last few weeks has been to increase Ib to its nominal value of 110 Gp. Going beyond this number may be possible later but it is very difficult because as the protons get closer together they start to interact and form instabilities in the beam. The controllers of the LHC beams have a number of tools at their disposal to fix these problems but with the lower intensities used until now these were not needed. To bring on the higher intensities they had to test and calibrate the tools and that takes time. That is why it has now been nearly four weeks without any real physics runs in the collider, a frustrating time for the experimenters.  

In his talk today Bruning explained in basic terms a little about the methods they use to control the instabilities. These are things that have been worked out in the past at other colliders, so they are the results of many years of research. It is because of this experience that it is possible to get the LHC working in such a relatively short space of time. In case you are curious about the details and don’t have the time to sit through the video of the talk, I’ll give a quick summary. The primary way to keep the beam under control is by tuning it so that its chromaticity is positive but not too large. If the chromaticity still goes negative it is possible to keep the beam stable using transverse dampers. The instability is measured on one side of the collider ring and a signal is then sent across the diameter of the ring to control magnets at the opposite point. The signal must travel at nearly the speed of light so that it arrives before the beam gets there, leaving enough time to adjust the magnets and correct the beam.
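
To see why the geometry works out, here are some rough numbers of my own, treating the LHC as a perfect circle of about 26.7 km circumference and assuming the correction signal travels at essentially the speed of light straight across:

```python
# Rough timing budget for the transverse damper: the beam must travel half the
# ring while the correction signal cuts across the diameter. The 26.7 km
# circumference is approximate and the ring is treated as a perfect circle.
import math

c = 2.998e8             # speed of light in m/s
circumference = 26.7e3  # metres
diameter = circumference / math.pi

beam_time = (circumference / 2) / c   # protons travel at essentially c
signal_time = diameter / c            # signal assumed to travel at c in a straight line

print(f"beam:   {beam_time * 1e6:.1f} us")    # ~44.5 microseconds
print(f"signal: {signal_time * 1e6:.1f} us")  # ~28.3 microseconds
print(f"margin: {(beam_time - signal_time) * 1e6:.1f} us to process and apply the correction")
```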

Another trick for making the bunches less prone to instabilities is to stretch them out. This lowers the density of protons without much loss of luminosity. When the beam is accelerated to higher energies the length of the bunches shrinks due to Lorentz contraction at the highly relativistic speeds. The LHC has controls to spread them back out. Finally, there is one other system that helps control the stability: Landau damping using octupole magnets.

All these systems have now been commissioned or nearly commissioned over the last few weeks, so running the collider with nominal intensity bunches is now possible. The other thing they needed to do was set up the collimators. These are solid blocks that can be positioned near the beam to strip out any protons that stray from the centre. There are no fewer than 76 of these and each one has to be placed in the optimal position by trial and error. At lower intensities this is a relatively quick setup, but at higher intensities it is a more delicate process. The energy in the bunches is higher and exceeds the safe limit for the collider. At lower intensities it is OK to disable some of the collider’s built-in protection systems, but above 30 billion protons per bunch that would be a very unwise risk. This means that the beams are likely to be dumped during the process of setting up the collimators. For example, if the number of protons drops suddenly by just 0.1% because a collimator is moved in too far, or if it drops by 50% overall, then the whole beam is dumped and further collimator setup has to wait until the beams can be reinjected.
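
The protection logic can be caricatured in a few lines. This is purely an illustrative sketch of the two thresholds quoted above (the function and the example numbers are mine, not the real machine-protection code):

```python
# Illustrative sketch of the dump conditions described above, not the real
# LHC machine-protection code: dump if the intensity drops suddenly by more
# than 0.1% between readings, or by more than 50% since injection.

def should_dump(injected, previous, current, sudden_frac=0.001, total_frac=0.5):
    sudden_loss = (previous - current) / previous   # loss since the last reading
    total_loss = (injected - current) / injected    # loss since injection
    return sudden_loss > sudden_frac or total_loss > total_frac

# Hypothetical example: a collimator moved in too far shaves ~0.2% off the beam in one step.
print(should_dump(injected=270e9, previous=269e9, current=268.5e9))  # True -> beam dumped
```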

Because this process can take so long it is important to fix the beam parameters now with values that can be used for the rest of this year. Increasing the number of bunches is the one thing they can do to increase luminosity further without having to redo the collimation. As the number of bunches is increased the bunches will come closer together, and at some point later this year this could lead to unwanted “parasitic” collisions between bunches away from the intersection points. The way to avoid this is to introduce a crossing angle between the beams so they only meet at the desired point, but if the crossing angle is too large it becomes more difficult to squeeze the beams to lower β. One of the smaller experiments, LHCf, needs a crossing angle to be able to function. In fact the LHCf collaboration would like quite a large crossing angle, but the loss of luminosity would not be acceptable to the other experiments. This would not matter so much if they could have different collimation setups ready for different parameters, but to save time they have selected one compromise setting with β at 3.5m and a crossing angle of 100 micro-radians.

These parameters are likely to remain fixed for the rest of this year with just a gradual increase in the number of bunches. In another report at the pLHC conference a couple of weeks ago, Mike Lamont described the schedule for this in the “short to medium term”, where medium term means the next ten years! The long-term plan is to upgrade the LHC to provide higher luminosities after 2020, but even before then there may be scope to increase bunch intensities from the nominal value of 110 Gp to as high as 170 Gp. Taking this into account, the tentative plan for the next ten years is shown in this table:

date       | E/proton | nb   | nc   | β     | Ib     | luminosity
25/06/2010 | 3.5 TeV  | 3    | 1    | 3.5m  | 90 Gp  | 0.25 MHz/b
01/07/2010 | 3.5 TeV  | 4    | 2    | 3.5m  | 100 Gp | 0.51 MHz/b
08/07/2010 | 3.5 TeV  | 8    | 4    | 3.5m  | 100 Gp | 1.0 MHz/b
15/07/2010 | 3.5 TeV  | 20   | 10   | 3.5m  | 100 Gp | 2.5 MHz/b
01/08/2010 | 3.5 TeV  | 24   | 24   | 3.5m  | 100 Gp | 6.1 MHz/b
01/09/2010 | 3.5 TeV  | 48   | 48   | 3.5m  | 100 Gp | 12 MHz/b
15/09/2010 | 3.5 TeV  | 96   | 96   | 3.5m  | 100 Gp | 24 MHz/b
01/10/2010 | 3.5 TeV  | 144  | 144  | 3.5m  | 100 Gp | 36 MHz/b
15/10/2010 | 3.5 TeV  | 192  | 192  | 3.5m  | 100 Gp | 49 MHz/b
01/11/2010 | 3.5 TeV  | 240  | 240  | 3.5m  | 100 Gp | 61 MHz/b
01/02/2011 | 3.5 TeV  | 796  | 796  | 2.5m  | 70 Gp  | 140 MHz/b
01/05/2013 | 6.5 TeV  | 720  | 720  | 1m    | 110 Gp | 1.3 GHz/b
01/03/2014 | 7 TeV    | 796  | 796  | 0.55m | 110 Gp | 2.9 GHz/b
01/04/2016 | 7 TeV    | 2808 | 2808 | 0.55m | 110 Gp | 10 GHz/b
01/07/2018 | 7 TeV    | 2808 | 2808 | 0.55m | 170 Gp | 24 GHz/b

Steve Myers, who directs the beams, has shown that these plans can be very flexible depending on how well the process goes, but we expect the plan for the rest of this year at least to be not unlike the first part of the table above. As to when interesting new physics will emerge, that depends on what nature has in store for us.


LHC starts its high intensity physics runs

June 24, 2010

For the past month there have been no physics runs at the Large Hadron Collider while the beam team prepare the systems necessary to start using higher intensity bunches of protons. The commissioning process is now deemed sufficiently complete and this evening they are making their first attempts to produce stable beams for physics runs with the new settings.

The current run has 3 bunches in each beam with about 90 billion protons per bunch, compared to just 20 billion the last time they did physics. With the new parameters they could achieve the highest luminosity yet seen at the LHC, but only if they can get everything perfectly tuned first time. More likely it will take a few shots to get there.

Getting to this point is an important milestone in the gradual buildup of the LHC’s power. From this point on further increases in luminosity this year will be gained mostly by adding more bunches into the beam. That is an easier process in principle, but they still have to take it slowly because the more energy there is in the beams, the more risk there is that components of the machine can be damaged if they lose control of them. By the end of the year there should be 240 bunches circulating in each beam, well on the way to the design limit of 2808.

Update: They did indeed reach stable beams with high intensity bunches for the first time. Luminosities of 2.5 × 10^29 Hz/cm^2 were reported, which is about 20% up on the previous LHC record.

Further Update: On a subsequent run over Saturday night they doubled the luminosity to 5 × 10^29 Hz/cm^2. The run lasted 14 hours and the integrated luminosity should be about 15 inverse nanobarns, enough to just about double the accumulated luminosity up to this point in one run (again!). To put this in perspective, the planned-for luminosity up to the end of 2011 is 1,000,000 inverse nanobarns and that may still not be enough to find the Higgs boson. There is a long way to go, but this step up to nominal intensity bunches was one of the hardest challenges in the process of building up the power of the LHC.
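
For what it's worth, the arithmetic behind that integrated luminosity goes roughly as follows. The run-average luminosity is my own assumption, taken somewhat below the 5 × 10^29 peak since the beams degrade during a fill:

```python
# Back-of-the-envelope integrated luminosity for the 14-hour run.
# 1 inverse nanobarn (nb^-1) corresponds to 1e33 cm^-2.
avg_lumi = 3.0e29        # cm^-2 s^-1, assumed run average, below the 5e29 peak
duration = 14 * 3600     # seconds
delivered = avg_lumi * duration

print(delivered / 1e33, "nb^-1")  # ~15 nb^-1, as quoted above
```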


Why do we still have the old system of peer review?

June 23, 2010

The scientific publishing industry is dominated by some big profit-making organisations who are apparently not well liked but are still embraced by the scientific community. Every now and then one of the publishing houses tries to raise its subscription prices by a zillion percent, there is a big outcry from the university libraries and finally they settle for a mere yillion percent and everyone carries on. Why is this?

The answer seems to be that the publishing businesses are quite clever. They bundle journals and sell them to libraries rather than individuals. These days the subscription includes electronic access. Few scientists now pop over to the library to get down a volume of a periodical; they look it up online instead. If the subscription is stopped the research centre loses electronic access to all the past issues as well as current ones. It is just not possible to give that up, so they pay up instead.

Scientists have tried to combat this by setting up their own open access journals and some of these are working quite well. But now there is the concept of the impact factor, which measures how good a journal is according to how well cited its articles are. The most prestigious journals can afford to filter out submissions that are not likely to get many quick citations, so they keep their impact factors high. The impact factors are used to measure how good people’s publication histories are when they look for a new position. This creates a feedback loop that makes the big publishers very powerful.

So is peer review needed? If so, should it, and can it, be wrested from the grasp of big business? Could it be done differently? Over at Quantum Diaries Survivor, Tommaso Dorigo informs us that he is due to give a presentation on such questions and he wants to know what we think. The comment section has some interesting discussions.

These days everyone with access to the internet can publish online without peer review. If they are excluded by arXiv.org they can use viXra.org, and of course there are many other archives that scientists use, or they can just publish on their own blog. But these are not regarded as real publications by the scientific community until they have been peer-reviewed. Despite the internet, the system is still based on the principle that you can “publish” when and if you pass the test of peer review. Peer review is still important, primarily because careful verification is essential (especially in mathematics, experimental physics, medicine etc.) but also because of the role peer review plays in assessing the worthiness of scientists when it comes to jobs and promotion. The ability to make your work available as a pre-print before peer review exists only as a compromise because the publication process is otherwise too slow for many fast-moving areas of research. Of course, not everyone sees it that way. That’s just one traditional view.

The existing peer-review process is imperfect in many ways aside from its cost. Good papers are rejected by peer-review and this has a real effect on the pace of acceptance. A good case study would be the science of climate research where some people argue that peer review has become corrupted and is biased towards one side of an important scientific debate. (see e.g. what Lubos writes at Reference Frame)

In a perfect world things would be done very differently. Repositories like arXiv and viXra would become the publishing medium and peer review would become an open and public process of critical review by relevant experts. One simple approach would be to have ratings for articles using a system like Digg. This works nicely for news articles and is a good way to filter out stuff of little interest, but it falls far short of peer review. By the way, there is a site called scirate.com which allows you to rate articles on arXiv in this way, but it demonstrates one of the most basic problems with these approaches: it is very hard to get anyone interested. Another site that suffers the same fate is arXiv1.org, where you can freely make comments on arXiv papers, but very few people do. Another system that almost works is the trackback system, where blog comments are recorded on arXiv itself, but the trackbacks are moderated in a way that some think is biased, so it does not qualify as any kind of review. Citation counts are another indicator that is used, but they usually trace back to positive responses; peer review also needs to be negative when appropriate.

A proper system of open peer review would have to go beyond basic rating and commenting. The process needs to come to some kind of consensus about the validity and general worthiness of a paper. The people who do the reviewing need to be experts on the subject. This means you need a system of identifying experts. This could be done by looking at their qualifications and positions to classify their areas and levels of expertise, but that would be open to bias and the corrupt rule of authority, precisely what we want to avoid. A more open system might allow anyone to review and rate a paper, but the ratings would be weighted according to the reviewer’s reputation, which is earned according to the ratings of their own papers in the same subject area. Could such a system work or is it just a Utopian dream? This is the question we discussed over at Tommaso’s blog, with some interesting comments but no real conclusion.
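
As a toy illustration of the kind of weighting I have in mind (entirely hypothetical, with made-up names and numbers, not a proposal for an actual scoring rule):

```python
# Toy sketch of reputation-weighted open review: each reviewer's rating of a
# paper counts in proportion to the reputation they have earned from ratings
# of their own papers in the same subject area. Purely hypothetical.

def weighted_score(reviews):
    """reviews: list of (rating, reviewer_reputation) pairs."""
    total_weight = sum(rep for _, rep in reviews)
    if total_weight == 0:
        return None  # no qualified reviews yet
    return sum(rating * rep for rating, rep in reviews) / total_weight

# Three hypothetical reviewers rate a paper from 0 to 10.
reviews = [(8, 120.0),   # established expert in the area
           (3, 15.0),    # newcomer with little track record
           (6, 60.0)]
print(round(weighted_score(reviews), 2))  # 7.0, dominated by the heavier reputations
```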

There is no doubt that such a system would be hard to get working. You would have to overcome reviewers’ reluctance to criticise in public. The existing peer-review system is mostly anonymous and for a good reason: scientists are human and don’t want to be attacked for their negative reviews. If the process is not anonymous they may not be willing to air their criticism. This is a real issue, but anonymity and privacy in the peer-review process also make it hard to challenge a review. The system takes on various forms of corruption; for example, journal editors have a lot of power to influence the peer-review process, either by directly affecting the result or by selecting reviewers who they know will be for or against the article. This works both ways, either creating journals where a group of people can publish low quality research, or excluding a valid opposing view in an area of research. More openness, where the reviewers’ judgements can be further criticised in public, should be an important goal. This does not necessarily mean that anonymity must be given up. That could remain as an option. It is really the privacy that creates the problems.

It is understandable that there is skepticism about the possibility of establishing a working system of open review. It is hard to get people interested. However, there are some websites which indicate that this problem is not insurmountable. Despite the initial odds, Wikipedia has established a huge system for building works of reference that attracts considerable expertise in many areas. Wikipedia specifically excludes original research and is not a suitable system for peer review, but it shows that the right people will get involved if a system has the right features.

Another system that is closer in some ways to what we seek is stackoverflow.com and derivative sites such as mathoverflow.net. These are sites for submitting questions in a particular subject area. That is very different from the requirements of peer-review, but the rating system used is getting close to something that might work. On these sites questions are rated and so are answers. People can also comment on answers so anything can be challenged and this feeds back into the rating. People build up reputations according to how well their answers are rated and the reputations increase their rights to rate questions. There are also moderators who are elected democratically with reputation being an influence. These moderators can shut down off-topic discussions. Not only does this system generate a high level of discussion, it also attracts some well-known experts in the field to make contributions. A peer-review system would be something on a grander scale but the principle might be similar and the evidence is that it could work if the details are right.

The technology on which to base such a system already exists. Many archives adhere to the Open Archives Initiative, which means they have an API so that you can query them and integrate them with other repositories into conglomerate systems. A peer review website could work that way. There would be no need to start a new repository for the purpose. Despite Lubos’s flattering suggestion over at AQDS, it is not likely to be me who makes the first attempt at building such a system. It requires more than the technical expertise of one person, I think. A small group with the backing of some big organisations would be more suitable.
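
For example, arXiv exposes a public OAI-PMH interface at export.arxiv.org/oai2, so a review site could harvest paper metadata rather than hosting anything itself. A minimal sketch using only the Python standard library (error handling and resumption tokens are left out; a real harvester would need both):

```python
# Minimal metadata harvest from arXiv's public OAI-PMH interface using only
# the standard library. A review site built on the Open Archives Initiative
# protocol could pull records like this instead of hosting papers itself.
import urllib.parse
import urllib.request

base = "http://export.arxiv.org/oai2"
params = {
    "verb": "ListRecords",        # standard OAI-PMH verb
    "metadataPrefix": "oai_dc",   # simple Dublin Core metadata
    "set": "physics",             # restrict the harvest to the physics set
}
url = base + "?" + urllib.parse.urlencode(params)

with urllib.request.urlopen(url) as response:
    xml = response.read().decode("utf-8")

# Raw XML; a real harvester would parse the records and follow the
# resumptionToken element to page through the full archive.
print(xml[:500])
```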

Meanwhile I am looking forward to hearing what comes out of Tommaso’s presentation. He is a smart and reasonable person, so perhaps he can kick-start something that will grow to be the future of peer review. I just hope it will be a system that is open and not doomed to a fate of elitism and corruption. I hope it promotes honest scientific progress without stifling a valid new approach just because it does not fit the prevailing dogma. We can only hope.


Dodgy trackback policies

June 22, 2010

Peter Woit over at Not Even Wrong has long complained about the fact that arXiv.org does not log trackbacks from his blog. Trackbacks are automatically generated whenever a blog links to an abstract page on arXiv.org and are normally listed on the site. However, some bloggers such as Woit have long suspected that trackbacks from blogs that are against string theory are filtered out in some unknown process of censorship that backtracks the trackbacks, so to speak. (Of course it could be that they have some variant censorship policy that merely correlates with ignoring trackbacks that are against string theory.)

Woit decided to test his hypothesis by creating an anti-Woit who loves to hype string theory on his blog. Its link to this random paper was quickly accepted as a trackback, thus proving the point. The link from his usual stomping ground presumably will not be accepted in the same way. I hope he continues to maintain this excellent new blog in the spirit it set out with. It has some good content so far.

In case you are wondering, yes, we also accept trackbacks on viXra.org but since blog providers are not likely to implement automated trackbacks for us, it is a manual process. It goes without saying that no censorship will be applied here. If you know of a suitable trackback to an abstract on viXra.org that we missed just drop us a note.

Update: CIP has also taken up the story

Addendum: I have been catching up on some of the old trackbackgate history that I was not following at the time. It becomes clear that Woit’s trick really puts the lie to what was claimed by the arXiv administrators at the time. They had said that some people were excluded from trackbacks because the policy was to include only those from active researchers who use arXiv. They said Woit did not come into that category even though he had submitted to arXiv occasionally. That they accepted the trackbacks from the stringtheoryfan blog shows that it is really only the content that counts, and the criterion of active research was most likely made up just to justify excluding Woit.

Of course the trackback from here was not accepted either; that is no surprise, although I have had some trackbacks accepted from other blogs I ran in the past. I am usually pro-string theory except where the hype gets too heavy!


MINOS claims CPT violation for neutrinos

June 19, 2010

MINOS is an experiment at Fermilab studying neutrino oscillations where one type of neutrino changes into another. Such changes were recognised some time ago as the explanation for missing neutrino fluxes from the sun and can only happen because the neutrinos have a small mass.

MINOS measures these oscillations and deduces a value for the differences between the squared masses of the different neutrino flavours. From the muon neutrino oscillations the value they get is 2.35 × 10^-3 eV^2.
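
For reference, in the standard two-flavour approximation the muon neutrino survival probability that such experiments fit depends on exactly this mass-squared difference:

```latex
% Two-flavour approximation: L in km, E in GeV, \Delta m^2 in eV^2
P(\nu_\mu \to \nu_\mu) \simeq 1 - \sin^2(2\theta)\,
    \sin^2\!\left(\frac{1.27\,\Delta m^2\,L}{E}\right)
```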

Now they have also succeeded in getting a value for the same number using anti-neutrinos, and the value is 3.35 × 10^-3 eV^2. This means that for at least one of the neutrino flavours the mass must differ between the particle and its anti-particle. Differences between the interactions of particles and anti-particles are signatures of CP violation and have been well studied for heavier particles, but this kind of mass difference would signal a much more surprising violation of CPT symmetry.

In relativistic quantum field theory, and in all other known generalisations of particle physics models, CPT must be conserved, even in string theories. If it is broken then physicists would have to invent a whole new way of building their theories.

Luckily the measurement comes with error bars, and the statistical significance of the result is only 2 sigma, which means it could be a statistical fluctuation with a probability of about 5%. In particle physics this level of certainty is not considered much to dance about. Given the very strong theoretical constraints against CPT violation we should expect this result to fade away as more data is collected.

However, the precision measurement of neutrino mass parameters is itself an outstanding achievement well worth knowing about.

