New LHC Records and bunch splitting

June 28, 2011

Yesterday the Large Hadron Collider produced a record run lasting 20 hours and delivering a record 62/pb. The run ended with a programmed dump for the first time in a while. They then turned the machine round in just four hours for the first run with 1380 bunches. This is the maximum bunch number that will be used for physics runs this year.

At this significant point in the LHC commissioning process it is worth reflecting on just how much of an achievement it is to run with so many bunches. For comparison, the Tevatron runs with just 36 bunches per beam. Of course the LHC is bigger so it is possible to get more bunches in, but it is only four and a bit times bigger. To get 1380 bunches in they have to pack them much closer together. In the Tevatron the bunches run about 175 meters apart on average, but in the LHC they are on average just 20 meters apart.
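
As a quick back-of-envelope check on those spacings, here is a minimal sketch, assuming ring circumferences of roughly 6.3 km for the Tevatron and 26.7 km for the LHC (the circumference figures are my own inputs, not from the post):

```python
# Average bunch spacing = ring circumference / number of bunches per beam.
# The circumference values below are assumptions added for illustration.
TEVATRON_CIRCUMFERENCE_M = 6_283    # ~6.3 km
LHC_CIRCUMFERENCE_M = 26_659        # ~26.7 km

def average_spacing(circumference_m, bunches):
    """Average gap between bunches if they were spread evenly around the ring."""
    return circumference_m / bunches

print(f"Tevatron: {average_spacing(TEVATRON_CIRCUMFERENCE_M, 36):.0f} m")    # ~175 m
print(f"LHC:      {average_spacing(LHC_CIRCUMFERENCE_M, 1380):.0f} m")       # ~19 m
```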

This improvement in the design of the LHC over previous hadron colliders is just as important as the increase in energy. Hadron collisions are messy processes and to get the full information out the physicists will need to look for very rare events with clear signals of unusual processes. By the time the LHC has run its full course it will have collected thousands of inverse femtobarns of data to explore the Higgs sector in the best possible detail. To achieve this it has to run with lots of bunches and with high-quality, low-emittance beams.

You can’t just inject individual proton bunches into an accelerator very close together because there is a limit to how fast the kicker magnets can change as they inject the bunches into the ring. As the energy increases the magnets have to produce more powerful fields and it gets harder to pack the bunches together. The injection process uses a series of increasingly powerful rings to put together the bunches in trains (see my earlier post about the injection chain). The early stages have lower energy so the bunches can be slotted closer, but the rings are smaller and fill up quickly. You can build up as you go along but this is not enough to get the bunches as close together as they need them.

The trick that made this possible was invented in the 1990s using the PS accelerator at CERN, which is now part of the injection chain for the LHC. They first considered a procedure of debunching the protons in the ring, so that they could then reform new smaller bunches, but they found that this ruined the good emittance properties of the beams. The solution was to split the bunches by starting with a low-frequency RF signal in the ring and gradually boosting one of its harmonics to higher amplitude. If you raise the second harmonic the bunches split in two, and if you raise the third harmonic they split in three. In the PS they start with 6 big bunches. These are first split in three to provide 18 bunches. The bunches are then accelerated to a higher energy before being further split in two. The 36-bunch trains are moved to the larger SPS ring and gathered into 144-bunch trains which are further accelerated before being injected into the main LHC ring. Later, possibly next year, they will split the bunches one more time in the PS to double the number of bunches again.
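
As a toy illustration of the bookkeeping described above (ignoring all the real timing and batching subtleties of the PS and SPS), the splitting chain can be sketched like this:

```python
# Toy count of bunches through the splitting chain described in the post:
# 6 PS bunches -> split in three -> split in two -> 36 per PS batch,
# then PS batches are gathered into a 144-bunch SPS train for the LHC.

def split(bunches, factor):
    """Boosting the factor-th RF harmonic splits each bunch into 'factor' pieces."""
    return bunches * factor

ps = 6
ps = split(ps, 3)                      # triple split -> 18
ps = split(ps, 2)                      # double split -> 36
print(f"Bunches per PS batch: {ps}")

sps_train = 4 * ps                     # four PS batches per SPS train (inferred from 36 -> 144)
print(f"Bunches per SPS train: {sps_train}")

print(f"With one more double split: {split(ps, 2)} per PS batch")   # the possible future step
```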

I’ve no idea who worked out how to do this bunch splitting but they are just some of the many unsung heroes of the LHC.


Strings 2011

June 27, 2011

The Strings 2011 conference opened today in Sweden. You can watch videos of the talks live or recorded straight after, starting with the introduction by David Gross.

Gross asked the usual questions starting with the number one “What is String Theory?” and ending with number 11 “What will we learn from the LHC?”

He is a bit disappointed that the LHC has not found anything surprising yet, but he still holds high hopes for SUSY.

There have been promising new results in trying to solve large-N SUSY gauge theory, which has beautiful mathematics: twistors and polytopes in the Grassmannian, for example. Since this theory is dual to string theory, Gross thinks these discoveries could tell us about the fundamentals of string theory.

He goes on to mention entropic gravity, which he said was also promising, though he had a little smirk on his face when he said it and implied that it is ambitious. There will be a talk from Verlinde later in the conference.

Apparently it is unfortunate that we seem to live in de Sitter space; the theories work much better in anti-de Sitter space. There are lots of questions, but the most important product of knowledge is ignorance; then again, it would be nice to have some answers, he said at the end.


Coming soon, Europhysics HEP 2011

June 25, 2011

In just four weeks the Europhysics HEP conference for 2011 will begin. This is a biennial meeting that alternates with the ICHEP conference. This year it is being held in Grenoble and promises to be the biggest HEP event of the year with several hundred talks and poster sessions.

Abstracts of many talks have already been posted covering a wide diversity of experimental and theoretical subjects, but the main interest will be in the CMS and ATLAS searches. At the recent PLHC conference there were about a dozen reports using new LHC data: a couple from CMS and about ten from ATLAS. At that time about 200/pb of data was available. Although that was only a few weeks ago, the LHC has now delivered much more, and at Europhysics HEP they should be able to show the results of searches using 1000/pb. That is enough to say something significant about new physics. So far there are 13 abstracts posted for ATLAS that promise to use 2011 data and 20 for CMS.

These talks include a number of SUSY searches. If these find no new physics, much of the most likely phase space for supersymmetry will be wiped out. SUSY in some form will still be possible, but it will have to be different from what the phenomenologists have predicted as the most likely scenarios. It will start to look like the last 30 years of research in hep-ph have been on the wrong track. If they do see hints of something, they will quickly be able to narrow down the available theories and tell us something very useful about the laws of physics.

Other presentations will look at the Higgs searches. With the 1/fb of luminosity likely to be included it should be possible to exclude the Higgs for masses above 135 GeV on the assumption that the standard model is the only physics in play. However, the standard model predicts vacuum instabilities for lighter Higgs masses. If this is the way it plays out we will know that something new must be there even if we don’t yet see it. The other possibility is that a promising signal for the Higgs will be seen at a higher mass. Either way there will be something to remember about Europhysics HEP 2011.

With just the 2010 data of 40/pb each, CMS and ATLAS have already produced hundreds of papers looking at different possible decay channels and search scenarios. As the amount of data available increases, the scope for looking at rarer events in different channels goes up. With 1000/pb there will be a lot of things they can explore. Even with 6000 physicists between them, divided up into small groups, they will have to prioritize what they do for this conference. If you are one of those 6000 you should be too busy to be reading this, so get back to work, and don't expect any rest for a long time.


Tough Week for the LHC

June 24, 2011

Today the Large Hadron Collider has taken another step up in luminosity by increasing the number of proton bunches per beam from 1092 to 1236. The first run at this new intensity equaled the previous luminosity record. They may beat it in subsequent runs by pushing up the bunch intensity. One more step up is required to reach this year's maximum possible bunch count of 1380 bunches per beam. It may be too late to reach that step before the next technical stop.

The advance comes at the end of a tough week with only one run during a period of five days. At least that one run was itself a record with integrated luminosity for a single run of 47.7/pb in ATLAS. For CMS the total delivered was 46.3/pb but they issued a special note to say that they had also recorded 44.3/pb, more than the entire amount recorded during 2010.

The main reason for the delay this week was problems with cryogenics caused by clogged oil filters and possibly worsened by a lightning strike and/or an industrial strike. When any one of the 8 major cryogenic plants fails it can easily take two days to get it fixed and return the superconducting magnets to their working temperature of 1.9 kelvin. There were two such outages this week with this record run in between.

Update 27-Jun-2011: The situation has improved in the last few days culminating in a run today lasting about 20 hours that delivered a record 62/pb. The luminosity was still above half its peak value at the end of the run demonstrating just how good the luminosity lifetimes are. The next run will attempt 1380 bunches, the maximum possible with 50ns spacing. They have just one day left before machine development time takes over, followed by a technical stop.
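
As a rough illustration of what that implies, assuming a simple exponential decay of the luminosity (a simplification; the real decay is not exactly exponential), still being above half the peak after a 20-hour run corresponds to an effective luminosity lifetime of roughly 29 hours or more:

```python
import math

# If L(t) = L0 * exp(-t / tau) and L(20 h) >= 0.5 * L0, then tau >= 20 / ln 2.
run_hours = 20
remaining_fraction = 0.5
tau = run_hours / math.log(1 / remaining_fraction)
print(f"Implied luminosity lifetime: at least {tau:.0f} hours")   # ~29 hours
```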


LHC Status Report

June 18, 2011

Last week we celebrated 1 inverse femtobarn (1/fb) of integrated luminosity delivered to ATLAS and CMS. Of that data ATLAS has recorded about 95% and CMS about 92%, so with a little more added ATLAS has now recorded over 1/fb.

The milestones have been celebrated with a CERN press release.

Last week the LHC Control group held an open meeting to report on progress of the beams and experiments. Slides and videos are available for some of the talks, including the Machine Status Report by Steve Myers, who revealed that during the last Machine Development period the bunch intensity was tested up to 195 billion protons per bunch, going well beyond the ultimate intensity limit of 170 billion. The intensity currently in use is about 120 billion protons per bunch, but there is hope that this may be increased later in the year.

Although the LHC has delivered 1/fb in record time as a result of its better than expected early performance, there is some frustration that technical problems are holding it back from achieving even better results. A string of difficulties has been making it hard to get the beams circulating while other glitches cause the beams to be dumped early. The time in stable beams has been about 36% since they started running with 1092 bunches and it should be possible to do better than that.

Unidentified Falling Objects

In his talk Myers gave some more information about UFOs. These are mysterious rapid beam loss events thought to be caused by particles falling into the beam path. They can trigger the protection mechanisms to dump the beams. Studies have shown that they most often occur at the injection points and almost always shortly after injection, causing problems before they get to stable beams. Surprisingly, their frequency is not increasing with further intensity advances. There were 110 of these UFO events last year and already 5000 this year, but only the strongest cases can trigger a beam dump.

An extensive report on UFOs can be found here.

RF Power Couplers

Another series of problems concerns the RF components. The couplers can take 200 kW of power and are currently being loaded up to 190 kW. This figure increases with beam intensity. If one of the ceramic couplers were to break it would put the LHC out of action for five to six weeks. These and other concerns have been preventing them from raising the bunch number to the next step of 1236 bunches. There is also a special report on the RF power issues and how they have been addressed. With the situation coming back under control it is hoped that the next luminosity step can still be taken this weekend.

Mini-Chamonix

In order to decide how to proceed for the rest of the year there will be a “mini-Chamonix” meeting on the 15th July. There we may hear more about how these and other problems are being addressed, as well as prospects for any further luminosity increases, e.g. by raising the bunch intensity.

Status Reports of the Experiments

At the LPCC meeting there were also reports from the individual LHC experiments. CMS has produced 80 papers using LHC data while ATLAS has about 190 and there are also good initial results from LHCb and ALICE. With the luminosity increasing at faster than expected rates there has been more pileup of events in the detector than anticipated. ATLAS reports an average of 6 events for each bunch crossing. There is significant impact on the calorimeter reconstruction resulting in increased systematic uncertainties in the analysis. Low transverse momentum jet events are the worst affected, but it is a small price to pay for so much extra data. Pileup will get worse if the bunch intensity is raised further.

Summary of Physics Results

Most of the physics results published so far have used just the 40/pb of data collected in 2010, with just a handful using up to 240/pb. A selective summary of results from ATLAS is shown on this slide (click to see full-sized). Within a few weeks we will have many more results, including some using the 1/fb now collected. The EPS-HEP conference at the end of July is the next major opportunity for physics presentations.


2000 papers at viXra.org

June 16, 2011

Today it is the turn of viXra.org to pass an important milestone, with over 2000 papers now in the e-print archive. All of these have been added since it started a little less than two years ago. When I started on this project I did not imagine that so many people would make use of the service, so I would like to take this opportunity to thank the 600 authors who have supported us by submitting their work. This is also a good moment to thank Huping Hu and Jonathan Dickau, who have kindly provided mirror servers for the site and have helped out with submission administration to keep the service running at times when I am away. Their backup and support gives me confidence to say that viXra.org will survive as a long-term repository. I am also very grateful to those who have made generous donations to help cover the costs of running the server.

Despite the unqualified success of viXra.org I still get a sense that there are a lot more people out there who could benefit from using viXra. Some of them simply don't know about it, so this blog is one thing I do to publicize its existence. viXra.org is primarily here for scientists who do not have access to a qualified endorser for arXiv.org, usually because they are non-professionals working independently of any academic institution. Many of them have left professional research to follow a different career but retain an interest and continue their research in their spare time. Of course viXra welcomes submissions from anyone, and we do even get some contributions from people with .edu addresses along with the occasional high-school student and unqualified amateurs who like to think for themselves. To encourage more of these people to join in, here are a couple of answers to some of the questions people ask.

Does submission to viXra.org give you as much exposure as other archives such as arXiv.org?

It would be a plain lie to say that viXra is as well known or as well browsed as arXiv, yet the stats indicate that papers on viXra get just as many hits as they do on arXiv. Let's look at the numbers. According to the arXiv usage stats they typically get something like 800,000 web connections each day. From the logs of viXra.org our corresponding figure is around 4000 per day, just 0.5 percent of the arXiv number. But of course we have a lot fewer papers. arXiv has reached about 6000 new papers per month while viXra receives just 80 per month, or about 1.3% of their number, so relative to new papers submitted arXiv is getting about 2.5 times as many hits as viXra. Then again, arXiv has been around a lot longer and many accesses come from searches on its back-catalog. arXiv has 680,000 papers compared to viXra's total of 2000, which is just 0.3%. So relative to the total archive we actually get 60% more hits than they do. Real usage is a mixture of looking at old and new papers, so within the uncertainties of this crude analysis I think it is fair to say that a paper submitted to viXra gets a similar number of hits as one on arXiv.
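
For anyone who wants to redo this crude analysis, here is the arithmetic spelled out using the rounded figures quoted above (the exact ratios wobble a bit depending on the rounding):

```python
# Crude hits-per-paper comparison using the rounded figures quoted above.
arxiv_hits_per_day, vixra_hits_per_day = 800_000, 4_000
arxiv_new_per_month, vixra_new_per_month = 6_000, 80
arxiv_total, vixra_total = 680_000, 2_000

print(f"Hit ratio:         {vixra_hits_per_day / arxiv_hits_per_day:.1%}")    # ~0.5%
print(f"New-paper ratio:   {vixra_new_per_month / arxiv_new_per_month:.1%}")  # ~1.3%
print(f"Total-paper ratio: {vixra_total / arxiv_total:.1%}")                  # ~0.3%

# Per new paper submitted, arXiv gets a few times more hits...
per_new = (arxiv_hits_per_day / arxiv_new_per_month) / (vixra_hits_per_day / vixra_new_per_month)
print(f"arXiv hits per new paper vs viXra: {per_new:.1f}x")                   # ~2.7x

# ...but per paper in the whole archive, viXra comes out ahead.
per_total = (vixra_hits_per_day / vixra_total) / (arxiv_hits_per_day / arxiv_total)
print(f"viXra hits per archived paper vs arXiv: {per_total:.1f}x")            # ~1.7x
```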

How can this be? In fact most hits do not come from people browsing new submissions. They come from people who search on Google and other search engines for keywords of interest to them. Since viXra is just as well indexed on Google as arXiv is, it actually gets just as many hits per paper.

Will submission to viXra.org damage your credibility because it is full of “crackpots” who cannot make it into arXiv.org?

It is really important to understand that it is not the purpose of an e-print archive to give you or your work credibility. Such recognition comes only from other sources such as acceptance in a peer-reviewed journal, use or citations of your work, or empirical verification of any predictions, and the occasional Nobel Prize. All that you get from an archive such as viXra.org or arXiv.org is a long-term repository and a fixed link to your work so that people can find it and make links to it that will stay in place. It also provides independently verified timestamps so that others can check who reported any idea first in questions of priority.

Other scientists find papers in an archive mostly by searching on Google or by being referred from somewhere else. Once they find it they are not bothered about where it is. If they are sufficiently interested in the subject to have come across it they will be qualified to judge it on its own merits.

As to the accusation that viXra.org is full of crank papers, I would be a fool to claim that the work of non-professionals who do not have access to arXiv.org is going to be of the same quality as the work of professionals who do, at least on average. But there are plenty of people who know their subject and have interesting work to publish who nevertheless do not have access to a qualified endorser as required by arXiv. Conversely, access to such an endorser does not mean someone's work is good either. There are good papers on viXra.org just as there are bad ones on arXiv.org.

On a lighter note, enjoy this video, (warning: bad words!)


Lunar Eclipse in Progress

June 15, 2011

This is how the Moon looks near maximum eclipse as seen on Google/Slooh. Still a little time left to view it. Hope some of you have clear night skies unlike us!

In case you haven’t noticed you can see the eclipse live on the main google search page.

The intense red colour of the moon is due to light being refracted through the Earth's atmosphere, which scatters away the blue light, leaving just the red end of the spectrum to bathe the moon in a warm glow. The colour is said to have been deepened by dust from recent volcanoes.

In 2009 Japan's Kaguya lunar orbiter took some spectacular pictures of a similar eclipse from lunar orbit. The Earth passes in front of the Sun, making it look like a solar eclipse, except that the Earth is bigger than the Moon so the Sun disappears for much longer. The atmosphere of the Earth continues to be illuminated by the Sun from behind, like a continuous ring of twilight. In this sequence the eclipse was rising above the moon's horizon, which blocked the beginning of the eclipse as seen from the orbiter. In the final frame the Sun emerges again from behind the Earth.


Large Hadron Collider provides 1 inverse femtobarn for CMS and ATLAS, already!

June 13, 2011

Today at 21:10 hours European Time the Large Hadron Collider passed an important milestone when it reached 1/fb of integrated luminosity delivered to each of the large experiments CMS and ATLAS. The third major proton experiment, LHCb, which limits its luminosity, has around 0.35/fb. These figures include the 47/pb delivered in 2010, but after another one or two good runs the total for 2011 alone will also surpass the one inverse femtobarn milestone.

Update 14-Jun-2011: With another good run today the total delivered for CMS passed the 1/fb mark for 2011 data alone at about 20:25.

This is an impressive achievement for the world's most powerful particle accelerator at CERN, which had originally expected to collect this amount of data only by the end of 2011. Better than expected performance now means that it delivers about 30/pb of collision data each day on average. With about 120 days of proton physics left this year, the beam operations team can expect to deliver at least 4.4/fb this year if they continue at this rate.

Potential luminosity increases

There is still some potential to increase this figure if they can continue to increase the number of protons circulating in the rings. The current 1092 proton bunches circulating in each direction will shortly be increased to 1236 and then finally 1380 once they overcome difficulties with power to the RF systems. This will increase the luminosity by 25%. Another goal will be to increase the efficiency by keeping the collider in Stable Beams for longer. Recent figures show that this state is only achieved for around 40% of the time due to a variety of technical issues. As these are sorted out the figure may go up to 60% or even higher, giving 50% more data. If this can be done quickly it would bring the expected total for 2011 up to around 7/fb. At 1380 bunches the rings are full to capacity with the current bunch spacing, so further improvements this year are only possible if the bunch intensity can also be increased, but this is not yet planned. 5/fb to 7/fb is already a spectacular number to aim for and they may not want to put these numbers in danger by taking such risks.
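
To make the projection arithmetic explicit, here is a small sketch using the figures quoted in the last two paragraphs (it treats the improvements as if they applied to all remaining days, which is why the optimistic number comes out a little above the ~7/fb quoted above):

```python
# Projection sketch from the figures quoted above; exact totals depend on rounding
# and on how many proton-physics days actually remain.
delivered_fb = 1.0            # roughly what has been delivered so far
rate_pb_per_day = 30          # average delivered per day at current settings
days_left = 120               # remaining proton-physics days this year

baseline = delivered_fb + rate_pb_per_day * days_left / 1000
print(f"Baseline projection: ~{baseline:.1f}/fb")                        # ~4.6/fb

bunch_factor = 1380 / 1092            # ~25% more luminosity from more bunches
efficiency_factor = 0.60 / 0.40       # ~50% more data from better Stable Beams time

ceiling = delivered_fb + rate_pb_per_day * bunch_factor * efficiency_factor * days_left / 1000
print(f"If both improvements applied from now on: ~{ceiling:.1f}/fb")    # ~7.8/fb
```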

Expected conference announcements

The amount of useful data recorded by the experiments is typically 90% to 95% of the amount delivered. In a few days these figures will also pass 1/fb and this should be in time for the next big particle physics conference, EPS-HEP2011, at the end of July. At last week's PLHC2011 conference we already saw a few results using 200/pb, one fifth of the current total. However, there are many standard searches for which we have still only seen plots using the 40/pb collected last year. For example, the classic dimuon resonance curve has not yet been shown in updated form for either ATLAS or CMS. This could be because it was too dull to show. The dimuon signal is very clean but it is not expected to be the first place where new physics will appear. On the other hand, it may not have been shown for the opposite reason. If it had an inconclusive signal of a new resonance then they would surely want to wait for more data before showing it.

When will the Higgs be seen?

To get a better impression of just how significant the quantities of data now being collected are, it is useful to look at the projected Higgs Limits as shown in this figure.

With 1/fb of data there will either be a signal or an exclusion for the Higgs boson above 130 GeV. If it is excluded then it will be known to lie between the 115 GeV limit previously set by LEP and a new limit of 130 GeV. This is highly significant because the standard model on its own predicts that the vacuum would be unstable if the Higgs has a mass less than 135 GeV. New particles such as those predicted by supersymmetry would be needed to restore stability. In other words, this exclusion expected from 1/fb would be indirect evidence of physics beyond the standard model at the electroweak scale. The EPS-HEP conference is likely to be a historic event where they will either describe the first signals for the Higgs boson or the first good evidence for new BSM physics. If you want to attend, today is the last day to register at the standard fee of 350 Euro!

If EPS-HEP is an anti-climax, the next big particle physics conference is Lepton-Photon 2011 during the last week of August. Another femtobarn of data will be available for analysis in time for that. There are smaller workshops and seminar opportunities going on all the time so a new discovery can come at any moment, but the physicists do like to time their results for these big events.

By the end of the year the situation will be even more dramatic. About 5/fb should be available, enough to exclude the Higgs boson at all masses, or more likely to discover it. If it is indeed a light Higgs there is a good chance that some other new particles will be discovered too.

What if it does not show up?

The calculations used to derive the exclusion limits are themselves based on the assumption that the Higgs will decay according to the predictions of the standard model. If the standard model is ruled out by not seeing the Higgs then these calculations will themselves be invalid. For example, one possibility is that there are heavy unknown particles into which the Higgs could decay. If these new particles produce jets that are hidden in the sea of QCD background, the Higgs may be much harder to detect. Another even weirder possibility is that the Higgs boson just isn't there. If nature is devious enough we could still see no new particles this year.


Wednesday Eclipse of Moon on Live Webcam

June 13, 2011

On 15th June 2011 you can watch a total eclipse of the moon. Total lunar eclipses are not very rare, coming on average about once a year, but this one is exceptionally long. Firstly, the moon will pass through the centre of the Earth's shadow, which last happened about 11 years ago; secondly, the Sun is more distant at this time of year and the Earth is a little closer to the moon. This makes the Sun look smaller and the Earth look bigger from the moon's surface. In fact the eclipse will be total for 1 hour and 41 minutes. That is only five minutes short of the longest possible.

The eclipse will be visible over most of Asia, Africa, Europe, Australia and South America, leaving only North America and a few other corners of the world with no chance to see it. About 90% of the world's population can view at least part of it directly, weather permitting. For the rest the internet saves the day with some live webcasts of the event. So far several organisations are planning webcasts, so it is unlikely that they will all be clouded out.

Sky Watchers Association of North Bengal: here or here

Eclipse Chaser Athaenium New Delhi: here

Astronation.net: here

Google/Slooh: here

From first contact to last, the eclipse is visible from 17:23 to 23:02 GMT on Wednesday.


D0 sees no bump

June 10, 2011

Sadly the D0 experiment sees no bump in boson+dijet at Fermilab, dismissing the 4.1 sigma result of CDF.

This has already been reported here, here, here, and here. The original paper is here.

Now the two experiments need to get together to work out which is wrong and why. It is not a sure conclusion that D0 is right but it seems more likely that someone would see an effect that isn’t there by mistake than that someone would fail to see an effect that is there. This gives D0 a strong psychological advantage.

To find out what went wrong they have to compare the raw plots and the backgrounds, as seen in these plots.

The differences are too subtle to see from just the visual image, and it does not help that they used different bins. There do appear to be significant differences in the backgrounds, while the data look quite similar. If that is the case then the problem is purely theoretical and they just need to compare their background calculations. However, the detectors are different so perhaps the backgrounds should not look exactly the same. Only the people directly involved have enough details to get to the bottom of it.

I hope they will work it out and let us know because it would be nice to see that their understanding of their results can be improved to give better confidence in any similar future results.

By the way, you can still vote for us on 3QuarksDaily

