Some people have been asking if confidence level plots can be combined now that we have the individual data from Dzero, CDF, ATLAS and CMS. The answer is, of course, no: properly you need to combine the underlying event data, the backgrounds and so on, and re-derive the confidence levels from that.

On the other hand, confidence levels can sort of be combined by adding in inverse square, and there is no harm in trying so long as everyone realizes that the result is just a crude, unofficial, bootleg, indicative approximation. So in that spirit, here is my combined Tevatron plot.

Using the same method, here is a combined LHC plot using the ATLAS and CMS plots published yesterday. It excludes all Higgs masses from 145 GeV to 480 GeV. This should be treated with skepticism, but if the Tevatron plot above matches the one that will be shown on Wednesday at EPS, you will know that this one has some credibility too.

Finally, combine everything and what you get is this

Yikes! Perhaps we shouldn’t take this too seriously🙂

The formula used is an inverse-square sum of the plotted values: 1/x_combined^2 = 1/x_1^2 + 1/x_2^2 + … , where each x_i is the confidence level limit plotted on the y-axis for one experiment.

This is used for both the expected levels and the observed levels. My understanding is that this is a roughly correct way to combine the expected confidence levels, but it is probably less accurate for the observed levels. When Fermilab provides the combined plot on Wednesday we will get a better idea of how good an approximation it is.
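For concreteness, the inverse-square sum described above can be sketched in a few lines of code. The limit values below are invented placeholders, not real experimental numbers:

```python
import numpy as np

# 95% CL limits on sigma/sigma_SM at a single mass point, one value per
# experiment. These numbers are invented placeholders, not real data.
limits = np.array([1.8, 2.1, 1.2, 1.5])  # e.g. D0, CDF, ATLAS, CMS

# Add in inverse square: 1/x_comb^2 = sum_i 1/x_i^2
combined = 1.0 / np.sqrt(np.sum(1.0 / limits**2))

# The combined limit always comes out below the smallest individual limit.
print(combined)
```

Applying this at each mass point on the plots gives the combined curves shown above.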

Update 25-July-2011: As an indication of how well this combination formula works, here is a plot showing a combination test of the CMS decay channels using the same formula, sampled at some mass points. The black dashed line is my estimated combination and the heavy black line is the official combination. It is not good enough to draw reliable conclusions about the size of any excesses, but as a rough indication of what we can expect it seems very reasonable.


This entry was posted on Saturday, July 23rd, 2011 at 7:27 am and is filed under Higgs Hunting, Large Hadron Collider, Tevatron. You can follow any responses to this entry through the RSS 2.0 feed.


I am looking forward to your combined LHC-Tevatron plot.😉 If you need some help or independent verification, send me the Excel tables.

The SUSY talks that have been posted so far report that SM works perfectly. But at least, I learned that ATLAS stands for A Tool for Locating Any Supersymmetry.😉

Sorry, I am not sure whether you're adding things in a way that looks sensible to me.

I would first reconstruct the observed “sigma/sigma_{SM}” for each mass, plus the error of “sigma/sigma_{SM}”. The error may be read from the width of the yellow-green bands, and the central value is about 2 sigma lower than the declared upper bound.

Then I would represent the measured sigma of the whole Tevatron as the average of the means of the two sigmas, plus or minus a standard deviation that can be easily reconstructed. And then I would return to the 95% limits, essentially by going back up by 2 sigma.

The deviations from the dotted “expected” line are really not important: your final goal is to see where the measured cross section is relative to the horizontal line, right? So I don’t think that you did the right thing, and I think that the right thing actually may be done by combining the final graphs. It’s just that the “sigma” represents the total cross section for all interesting Higgs-like events combined, and the error margin is measured in the right direction of the whole plane, corresponding to the SM-predicted relative abundance of the different channels.

I guess that when you do it right, you must get stronger limits. Your picture is way too close to the partial estimates.

The green band is the expected line times 1/sqrt(2) to times sqrt(2). The yellow bands are times 0.5 to times 2. This gives them an even width on the log plot.

I am not sure if my combo is the best approximation, but it looks in line with previous combinations. You are welcome to try yours and we will see who gets closest to the Tevatron combo on Wednesday.

I have drawn graphs similar to yours in Mathematica. The mean measured cross section is above the SM between 120 and 150 GeV or so, but the upper bound on it is higher, by 2 SD, and it still allows 110 and 160 GeV.

What you misunderstood – and I forgot under your influence – is that the plenary Tevatron talk on Wednesday will *not* be based on the graphs we’re playing with, combinations of cross sections. The recommended figure of 114-137 GeV will be based on the Tevatron’s combined precision measurement of the W-boson and top-quark masses – it’s clearly articulated in the press release – so it has nothing to do with the games we’re playing, and it will be a “qualitatively new” talk.

The press release says that it will be based on measurements including the precision tests. I think it will use the combined plot with a 90% confidence level to exclude up to about 185 GeV, and the precision tests above 185 GeV.

As an astronomer, I have no axe to grind about the Higgs particle – I hope it is found as the last ingredient of the standard model. What worries me is that I thought the particle physicists had a higher level of proof than the astronomers. I thought there were supposed to be double-blind analyses of the individual LHC experiments, for example, as well as fully independent analyses between the experiments. But the current process at EPS hardly seems like that! I suppose the problem is that when all the double-blind analysts know the remaining available Higgs mass ranges, the process can’t be double-blind anyway! But it’s a bit disconcerting to see the CMS graph change at a late stage as it apparently did yesterday. Certainly when a lot of money has been spent it is always more difficult to keep the results independent! But it all seems a bit more like astronomy than I thought it would!

What’s wrong with astronomy? Doesn’t it observe all celestial bodies, in an uncontroversial way, with the accuracy of arc seconds in the skies?😉 The cross sections will never reach this accuracy.

It would be nice if all the experiments were double-blind in your sense but you can’t produce hundreds of particle physicists who will work on their data without knowing that the Higgs below or above something has been excluded. They must know it.

In fact, much more generally, they’re using assumptions measured by previous experiments (and processed by theorists) all the time – that’s what the Standard Model, the main method to calculate the backgrounds, really means. It is in principle impossible to make observations “completely unbiased” relative to any theory.

It is very clear that the Standard Model must be the benchmark to compare the data with because it’s what seems to work. That’s why the theory is called “standard” in the first place. Most of the experimenters’ work is actually composed of comparisons with the standard backgrounds: the theory plays and must play an important role.

The CMS graph change could be due to a mistake in the previous version but also due to the changed datasets. Some channels got 5 times more data which means that their results in the final graph were de facto independent from those in the previous graph.

At any rate, every scientist has the right to correct himself. The CMS people probably don’t get the right graphs immediately. Ever. There may still be a risk of a mistake in the final CMS graph as well. But science has to work with fallible people. If you called an infallible person such as Mr Ratzinger of the Vatican and asked him about the cross sections, he would tell you that the cross shouldn’t be dissected because it’s holy and Christ was crucified on it. That’s infallible but not excessively informative for other particle physicists.😉

What’s wrong with astronomy? Well, take CMB experiments – in the 1980s, when the standard cosmological model was a baryon model with isocurvature initial conditions, the observers got CMB results that agreed with that model. Then in the 1990s, when the standard cosmology was thought to be Omega=1 CDM with adiabatic initial conditions, many CMB experiments again confirmed the theory. A lot of observations rapidly changed after the COBE result came in with unexpectedly big large-scale perturbations! And these days, the observers are getting excellent agreement with the expectations of the Lambda-CDM standard cosmology. Don’t get me wrong – science is still being done. But what you can say is that a lot of CMB experiments in agreement appears to be a necessary but not sufficient condition for the result to be correct!

I am hoping that the same isn’t true for particle physics experiments!

I could go on but am preparing a bbq under uncertain UK climatic conditions…

For Higgs´ sake, Tom! You are not trying to suggest that the standard cosmological model (whichever one it is today) may need any corrections, are you? If so, you certainly deserve any climatic conditions bestowed on you…

Could you add steps of 10 to the 100-200 interval on the LHC combined plot? I’ve realized I’m not very good at eyeballing logarithmic scales😦

I can’t get Excel to obey, but I will also do a zoom-in with a linear mass scale for the next plot.

Thanks!

If the origin of rest mass were from some microscopic particles or from some microscopic particle physicists and not from Heaven (Machian Principle), then I would commit suicide in the Himalayas immediately. That would suggest that all financial and scientific crises created by the dictatorship of “elites” are legal.

That’s the problem Jin He, they are legal! But I would not remove myself from the chaos yet, Mother Nature favors harmony so there is *always* hope (until there isn’t).

The LHC and total combo is just lovable!🙂 If you can send me the digitized data for ATLAS and CMS, it could be useful – I will try my algorithm. Going to praise you.

This confuses me a bit. Adding as I thought you’d done, at 140 GeV you combine a 2.5 sigma signal from the LHC with roughly 0.7 from the Tevatron and get sqrt(2.5^2 + 0.7^2) = 2.6. Yet the combined curve has only two sigma there?

How can adding Tevatron data to the LHC data give you less deviation from the expected curve than the LHC alone, when they’re both over the expected? Did the error bars get widened by higher Tevatron uncertainty?

These are confidence levels, not measurements of values like a cross section. The excesses don’t combine in that way for confidence levels.

Whether these plots combine them correctly is another question.

Could you please describe your formula in its full form or is it some secret?

I treated the graphs as graphs of cross sections with some errors, calculated the weighted average of the cross section for the 4 detectors, with the correspondingly smaller error (roughly 2 times smaller, because there are 4 datasets), and this plus 2 standard deviations is the new 95% limit.

The result is that my average value for the cross section was very similar to your upper bounds but my upper bounds were much more tolerant and not much better than ATLAS or CMS separately.

For example, how can you possibly eliminate 140 GeV?
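The weighted-average procedure described in this comment can be sketched as follows. All the limits and sigmas below are invented placeholders, and the "2 sigma below the limit" central value is the approximation stated earlier in the thread, not an official prescription:

```python
import numpy as np

# Observed 95% upper limits on sigma/sigma_SM and the 1-sigma errors
# read off the band widths, one entry per detector. All numbers here
# are invented placeholders, not real data.
upper_limits = np.array([2.0, 2.4, 1.6, 1.8])
sigmas = np.array([0.8, 0.9, 0.5, 0.6])

# Step 1: central values, taken roughly 2 sigma below each upper limit.
centrals = upper_limits - 2.0 * sigmas

# Step 2: inverse-variance weighted average and its combined error.
weights = 1.0 / sigmas**2
mean = np.sum(weights * centrals) / np.sum(weights)
sigma_comb = 1.0 / np.sqrt(np.sum(weights))

# Step 3: return to a 95% limit by adding back 2 sigma.
combined_limit = mean + 2.0 * sigma_comb
```

This treats the plotted curves as measurements with Gaussian errors, which is the key difference from the inverse-square sum applied directly to the limits.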

I have added the formula to the bottom of the post. I just add anything on the plot using this inverse-square sum, where the confidence levels are what’s being plotted on the y-axis.

Is it that straightforward to combine Tevatron and LHC? Do they have identical background subtractions and everything? Or can it be done just on the confidence level plots? It would be a lovely thing to have the combo though ….

Great stuff, lol!

Night shift’s arrived! Hello Kea.

Yes, it’s rather lonely here all on my own … goodnight.

I’m sure there must be someone else in your timezone, who blogs, about science.

Late night shift here!

The most interesting stuff is taking place near 128 GeV and 360 GeV on the combined bootleg plots, but nothing significant enough to declare a discovery. It would be cool if 128 GeV is the SUSY Light Higgs. The deficit at 360 GeV is more confusing – is it negative interference from a SUSY Heavy Higgs or a SUSY Pseudoscalar Higgs, or both in the same mass range? Considering that the deficit is so close to twice the top quark mass, we probably shouldn’t rule out unexpected effects from a t-t-bar condensate.

Have Fun!

Phil,

It would be very nice to have a post about the data processing leading to a typical exclusion graph, or at least some links. I have myself been too lazy to learn how these graphs are produced, thinking that I could leave this non-poetic side of physics to the specialists. It seems that I must engage more with the real world.

Examples of stupid questions: Why do these Higgs exclusion bands tend to be above unity in some mass regions and below unity in others? How can one deduce information about the possible existence of the Higgs in a given mass range? Under which conditions is it enough for exclusion that the band goes below unity? What does the standard model prediction actually mean in a given situation, how is the experimental result estimated, and how does one obtain the curve from this?

A qualitative explanation of how these graphs are produced would enormously help all the readers who are usually not silly enough to ask these silly questions ;-).

My version of the LHC-Tevatron combo is here:

http://motls.blogspot.com/2011/07/phil-gibbs-tevatronlhc-higgs-synthesis.html

Note that the peak at 116 GeV is nice and somewhat sharp – a hill (the highest part of the 110-120 GeV mountain range with the 112,116,119 local peaks) separating itself from the more flat terrain. If the graph were taken as the exact probability distribution, the Higgs mass 116+-1 GeV would probably be comparably likely to all others combined.😉

I didn’t use the “expected upper bound” curves at all – just the thickness of the error margins (obviously, needed to separate the different graphs of mine in the y-direction) and the observed upper bounds.

I’m really struck by just how tentative the data on the Higgs range are with so much data collected. And, while the bootstrap method you used probably doesn’t get the statistical error exactly right, its result is fairly robust, and the crude bootstrap combination probably has less true systematic error than any of the constituent measurements because you are canceling out experiment-specific systematic error.

I think it is particularly notable that the Higgs signal has not gotten obviously stronger despite a major increase in the number of inverse picobarns in the data set. The signal has been hovering around the two sigma line in the same vicinity for years. Supposedly, the LHC should have enough data to find or rule out the Higgs in the remaining regions in six months or so, but the sense that the Higgs is inevitably hiding in the areas where the data isn’t good enough to rule it out is fading. At this point, I’d put at least even odds that there will be a null result for a SM Higgs, or something that looks like it, anywhere.

[…] Some modified form of Higgs mechanism with multiplets may be a better fit to the data. If my predicted full combinations are correct a standard Higgs may already be all but ruled out even at low mass. A SUSY multiplet […]

They used your combined plot in the LHC morning presentation this monday🙂

So they did🙂

Congratulations, Philip: last slide in http://lhc-commissioning.web.cern.ch/lhc-commissioning/news-2011/presentations/week29/LHC-morning-25-07-11_v2.pptx

Haha, it was probably a light-hearted dig at you for publishing the unofficial CMS plot a few weeks back on this site😉

Well, I can’t complain about them grabbing my plot considering how many times I’ve nicked theirs, LOL.

[…] CERN (above you have the final figure; more details on how it was obtained in ”Higgs Combos“); what’s more, Philip has dared to combine the data from the Tevatron and the LHC (combo […]

[…] few days ago I showed how to combine the Higgs confidence level plots by adding in inverse square. At the time I did not understand why […]

[…] EPS conference when some new Higgs Exclusion plots were unveiled I had a stab at putting together some combinations of the plots using some basic formulas. Despite the broad caveats I gave them the plots got a surprising amount […]