Here’s a puzzle. There are three cups upside down on a table. Your friend tells you that a pea is hidden under one of them. Based on past experience you estimate that there is a 90% probability that this is true. You turn over two cups and don’t find the pea. What is the probability now that there is a pea underneath? You may want to think about this before reading on.

Naively you might think that two-thirds of the parameter space has been eliminated, so the probability has gone from 90% to 30%, but this is quite wrong. You can use Bayes’ theorem to get the correct answer, but let me give you a more intuitive frequentist answer. The situation can be modeled by imagining that there are thirty initial possibilities with equal probability. Nine of them have a pea under the first cup, nine more under the second and nine more under the third. The remaining three have no pea under any cup. This distribution correctly models the 90% probability that a pea is there, since 27 out of the 30 cases have one. If you now eliminate the cases where the pea is under the first or second cup, you are left with nine instances of it under the third cup and three where it is not there at all. So the correct probability is 9 out of 12, or 75%, much better than the naive 30% guess.
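For anyone who wants to check the arithmetic, here is a minimal Python sketch of the update, both directly via Bayes’ theorem and via the thirty-possibility frequentist model just described (the code is purely illustrative, not from the original discussion):

```python
import random

# Prior: 90% that a pea is under one of the three cups (equally likely),
# 10% that there is no pea at all.
prior_pea = 0.9

# Direct Bayes update: seeing two empty cups keeps 1/3 of the
# "pea exists" probability mass but all of the "no pea" mass.
posterior = (prior_pea / 3) / (prior_pea / 3 + (1 - prior_pea))
print(posterior)  # ≈ 0.75

# The thirty-possibility frequentist model, checked by simulation:
# 9 worlds each for cups 1-3, and 3 worlds with no pea (coded as 0).
random.seed(0)
worlds = [1] * 9 + [2] * 9 + [3] * 9 + [0] * 3
kept = hits = 0
for _ in range(100_000):
    location = random.choice(worlds)
    if location in (1, 2):
        continue  # eliminated by turning over cups 1 and 2
    kept += 1
    hits += (location == 3)
print(hits / kept)  # ≈ 0.75, not the naive 0.30
```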

I mention this because I saw a comment over at NEW pointing to this paper about applying Bayesian statistics to the probability of finding SUSY at the TeV scale. The puzzle illustrates that Bayesian rules do not reduce the probability of something existing by as much as you would think when you eliminate a large chunk of the parameter space. Before experiments started to have their say I felt that SUSY at the TeV scale was a well motivated theory, and I like the maths of supersymmetry, so I might have estimated the probability of it being there as 90%. By the time that paper was written, LEP had eliminated lower-mass SUSY, just as you might turn over a couple of cups and not find the pea. At the start of 2011, before the LHC started to have much say, I estimated the probability at 75%.

You might argue that another two-thirds of the parameter space has been eliminated since then. By the same analysis this would reduce the probability for SUSY at the TeV scale to 50%. However, we also now know that the mass of the Higgs is around 125 GeV with 4 sigma confidence (actually the mass region around 115 GeV – 120 GeV is still wide open, so the story is not concluded yet). If the mass had been 115 GeV it would have been a good indicator for SUSY, and at 140 GeV it would have been a strong eliminator. At 125 GeV it still “smells” like SUSY but the aroma is not so sweet. This can’t be quantified, but for me it pushes the probability for SUSY back up to about 70%.

If you are a SUSY sceptic I know what you are thinking. You think that LEP eliminated much more than two-thirds of the parameter space and that the LHC eliminated much more than two-thirds of what was left. Is this really the case? All the diagrams from ATLAS and CMS which show large chunks of the parameter space being eaten up are misleading. Firstly, there is no uniform measure of probability that can be assigned to the area of the plot. Secondly, and more importantly, all these plots rely on highly constrained versions of SUSY to reduce the parameter space to two dimensions so that it can be analysed and plotted. If SUSY phenomenologists have made a mistake, it was to think that using these simplified models would be a good way to search for SUSY. This was not well motivated and has been shown wrong. If SUSY is to be found she will be seen in direct searches for particles such as the stop or stau. The Higgs is only starting to be seen in the data now, so why should we think that heavier particles would already have shown up? The Higgs was in a place where it was not easy to find, but this could also be the case for the stop, especially if its mass is near that of the top (see also Stealth Supersymmetry). Higgs searches are relatively straightforward to analyse because if we know its mass we also know its cross-sections and decay rates (assuming the standard model). This is not the case for the stop, stau or gluinos. We have to keep searching until the limits placed on cross-sections are so small that all possibilities are excluded. The LHC is nowhere near that point yet.

As a curious footnote, it is amusing to see that my Stop Rumours post is gradually making its way towards being the most read article on this blog. Why so much interest? Looking into it I found that hit counts on most posts reduce to a trickle after a few days, but this post keeps collecting hits at about a hundred a day, even after three months. The stats show that this is because of people searching for the single word “stop” on Google. When I do the search myself I find that the post does indeed appear at the bottom of the first page. The “Stop Rumours” title must be enticing enough to lure people to click their way in. I suspect they are a bit baffled by what they find, but maybe they will learn something about physics. It is very unusual to get a first-page ranking for a single common word like “stop”, so why is this happening? A clue is that the Google entry has an attached note saying that “Cliff Harvey shared this”. This is a feature of Google+ where Harvey maintains an excellent column commenting on people’s blog posts. If I log out of Google+ I no longer see my post in the Google search listing, but once logged in I notice that a whole load of my search results are there because Harvey has shared them. Judging by the steady trickle of hits on my post this must be the same for a large number of people. If you are interested in SEO you will find this fact quite interesting and perhaps useful until Google tweak their parameters back to something more sensible.

Crying wolf until no one cares anymore…

I totally agree with your solution to the three cups – and the interpretational analogy to the elimination of parts of parameter spaces. Good presentation.

You must have chosen a better title for the stop rumours. I only get to the top of Google if you actually search for stop squark rumor(s). Consequently, I only have 5,700 views on that article which is surely above the average but not astronomically above the average…

The Stop Rumours! post has 12,000 views even though my overall hits are a good factor lower than yours. The Cliff Harvey effect is very powerful.

(I rescued this comment from the auto spam folder)

I don’t think that your example of the pea under the cups is correct. Taking your frequentist approach, the distribution of 9:9:9:3 for cup1:cup2:cup3:noPea is correct initially. But then we turn over two cups and find no pea. This is new information, so the probabilities change, leaving us with 0:0:27:3. Another way to look at this is that, if our friend is truthful about there being a pea, then it *must* be under the third cup (since we now know it’s not under the other two). If our friend is lying, then there’s no pea under the third cup either. We decided in advance that we think there’s a 90% chance our friend is telling the truth. So, after finding no pea under the first two cups, we’re basically left relying on our prior to tell us what to expect from the third cup – therefore, there’s a 90% chance that there’s a pea under cup number three.

Dear Andrew, I just want to say that your comment is wrong and Phil’s calculation is right. There is no “friend who tells the truth” here. Phil has exactly defined the problem by saying that there is a 90% prior probability that the marble is under one of the cups and he is asking what is the posterior probability.

But even if you added a friend who is lying or whatever, it doesn’t change anything about the fact that your “transfer” of the figures 9 from the first two shells to the third one is completely irrational. There are 4 distinguishable possibilities here – marble under cup 1, marble under cup 2, marble under cup 3, marble under no cup – and they have separate accounts. The first three possibilities may look similar to each other but this is just an illusion. They have no closer a relationship to each other than they have to the fourth possibility, no marble.

So there cannot possibly exist a valid justification why you transferred the two figures 9, to turn the third 9 into 27, and why you didn’t transfer them or a part of them to the fourth box.

Your solution may be atypical among the wrong solutions because you didn’t reduce the original odds at all – your lesson essentially seems to be “never learn anything from any experiments” – but just having the opposite sign of the error doesn’t change the basic fact that your calculation is still wrong.

Who’s right and who’s wrong depends on whether you choose to update your belief in your prior theory (i.e. 90% chance that there is a pea). Obviously, for an experiment that’s precisely what one wants to do. But that’s not precisely how the example is stated. If the theory is rock solid (with huge amounts of prior evidence) then the belief shouldn’t change (significantly).

Dear Andrew, I have no idea what you’re talking about. In the last paragraph of my comment, I was trying to humiliate you by stating that your “method” really means that you never learn anything or update your probabilities by evidence – that you’re a blind bigot.

This exaggerated statement was supported by the fact that your posterior probabilities and prior probabilities are kept the same (but not all of them because some of them have been proved to be zero!) but I thought that it was just an exaggeration.

From your last comment, however, I am learning that I wasn’t exaggerating at all! You indeed seem to believe that the right method to update the probabilities is not to update them at all! This is incredible because it proves that you don’t have the slightest clue what the term “probability” actually means.

If the probability that something happens during an event is 90%, it means that 90% of the people who do the same event will experience the “something” while the remaining 10% won’t. By making observations, one can find positive or negative evidence that he belongs to the first or second group, and therefore modify his own conditional probability that “something” takes place as well. The fact that you believe that a plausible attitude to similar problems is that odds are not updated at all only shows that your brain or your knowledge is completely inadequate to discuss anything related to probabilities.

That’s too bad. I was enjoying debating this, but you’ve descended into name calling. Sorry, not interested. I agree with your math, and understand exactly what you mean. My point was merely that the initial example isn’t as well stated as it could be. I’m sorry that you’re insufficiently comfortable to debate something without resorting to childish tactics.

The puzzle was perfectly unambiguously stated. Concerning your whining, no comment, I carefully avoid similar irrational layers of debates.

@Lubos, I honestly don’t know where you get your patience from :)

I didn’t understand a word of what Andrew was saying in the first paragraph, and then realized he’s probably come across the Monty Hall problem which he doesn’t understand the solution to either. A lot of stupid people try to be clever by bringing in the variation of the game host lying, when the problem is very clear. It’s as if they don’t want to be faced with the reality of their stupidity and so to compensate, they bring in irrelevant nonsense to make themselves feel “clever” and better about themselves. And Andrew seems to be doing the same thing with Phil’s problem by complicating it with Phil’s friend lying.

At this point, my eyes glazed over with the thought: “omfg… another pretentious pseudo-intellectual crapping on a beautifully clear problem and solution”

Dear Carla, thanks for your kind words but I know people who are more patient than me!

You have another point: some people may think that everything is Monty Hall Paradox so they automatically offer their (wrong) solution to the Monty Hall Paradox whenever someone says “probability”. ;-)

Dear Andrew,

maybe the point you are missing is the following: the problem is not to question the a priori probability (i.e. 0.9 = true, 0.1 = false) which applies, IN GENERAL, to our friend’s statements. The problem here is to see how much the observation we made (i.e. the first two cups are empty) can help us in guessing whether we are in the (a priori 90% likely) true situation or in the (a priori 10% likely) false one IN THIS PARTICULAR case. If you apply Bayes’s formula, or if you follow Philip’s reasoning, you get the (a posteriori) 75% probability that in this case you are in the true situation (i.e. ball in third cup). Maybe it would help you to accept the fact that the a posteriori probability can differ from the a priori one if, instead of having three cups, you had 1 million cups. After you found the first 999,999 cups empty, would you still bet nine dollars against one that the last cup contains the ball? Or wouldn’t you think that, if THIS TIME the ball was there at all, it was much more likely that you would have found it earlier?
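The million-cup intuition is easy to check with the same formula; here is a small sketch (the function name is mine, purely illustrative):

```python
def posterior_pea(prior, n_cups, n_empty):
    """P(pea is under a remaining cup | first n_empty cups found empty)."""
    surviving = prior * (n_cups - n_empty) / n_cups  # mass left in "pea" branch
    return surviving / (surviving + (1 - prior))

print(posterior_pea(0.9, 3, 2))                # 0.75, as in the puzzle
print(posterior_pea(0.9, 1_000_000, 999_999))  # ≈ 9e-06: almost surely no pea
```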

The big issue is not so much that parameter space has been narrowed, but that the motivation for SUSY as an elegant solution to the problems it was invented to solve seems weak. The fact that SUSY is not needed to even slightly tweak early LHC energy results certainly would have surprised its inventors.

No, Andrew, your comment is completely wrong as well. This whole issue that you mention is fully incorporated into Phil’s calculation and the result is whatever it is *despite* this factor.

The probability distribution one starts with *does* include the fact that we want to favor more natural values of the parameters, e.g. lighter masses. After all, the probability distribution has to decrease for higher masses for it to be normalizable. So one must use the correct “measure” and according to this measure, 2/3 of the parameter space was estimated to be eliminated. So Phil’s calculation holds and the prior probability 90% has decreased to 75%. Any “additional” punishment would be a double counting, a miscalculation, an error.

The conditional probability (assuming SUSY is there) that these 2/3 of the parameter space (given the measure) could have been eliminated is 1/3 which is not small so the surprise is small, too. There’s really no surprise to speak of: it’s less than a 1-sigma surprise. Your comment that the current state of affairs would “certainly surprise inventors of SUSY” is simply wrong.

Some extra comments of mine…

http://motls.blogspot.com/2012/05/thomas-bayes-and-supersymmetry.html

Phil, Your analysis assumes that the 90 percent confidence in your friend is unshakable. Personally, after failing to find the pea under two cups out of three, I’d take a hard look at the guy and think, hm maybe he’s not as reliable as I thought!

Likewise I think eventually the a priori probability of SUSY eventually has to be reevaluated.

This is complete nonsense, Bill. By the very definition of the posterior probability,

http://en.wikipedia.org/wiki/Posterior_probability

the evidence may only affect the posterior probabilities which is exactly what Phil’s calculation has done. The evidence cannot change prior probabilities.

Let me say it differently.

A variation of your assumptions that is correct says the following:

If there’s a lot of strong evidence that is relevant for a statement, the final posterior probability becomes largely independent of the prior probabilities. Some posterior probabilities are very close to 0%, some probabilities are very close to 100%, and these things hold regardless of the detailed values of the prior probabilities.

This is why one doesn’t need to care about the prior probabilities much. If one does enough science and collects enough evidence, they become largely inconsequential.

But Phil’s calculation clearly shows that the elimination of 2/3 of a parameter space is an extremely small amount of evidence (well, it’s less than 2 bits of information) which doesn’t change the probabilities much – the posterior probabilities are close to the prior probabilities. So the posterior probabilities don’t become independent of the prior probabilities. The prior probabilities matter almost as much as they did before the elimination of this fraction of the parameter space.
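In odds form this bookkeeping is one line: eliminating 2/3 of the (properly measured) parameter space is a Bayes factor of 1/3, which carries log2(3) ≈ 1.6 bits of evidence against SUSY – consistent with the “less than 2 bits” remark above. A quick sketch:

```python
import math

prior = 0.9
surviving_fraction = 1 / 3  # Bayes factor for "SUSY is there"

prior_odds = prior / (1 - prior)                   # 9 : 1
posterior_odds = prior_odds * surviving_fraction   # 3 : 1
posterior = posterior_odds / (1 + posterior_odds)

print(posterior)                          # ≈ 0.75
print(math.log2(1 / surviving_fraction))  # ≈ 1.58 bits of evidence
```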

My interpretation of your wrong comment is that you realize that the calculation is right but you find this calculation inconvenient, anyway. So you’re trying to “punish” SUSY more than the evidence actually allows you to justify. Violating totally basic rules of statistics (e.g. claiming that evidence may change priors) and assuming that 2+2=5 are among the things you are eager to do in order to “support” your totally invalid preconceptions.

But mathematics, science, or rational reasoning doesn’t contain any justification for the things you want to advocate simply because these things are demonstrably wrong.

Thanks Lubos, you saved me a lot of arguing :)

It seems that the P value in the original post, on Cosmic Variance, was fan-motivated. Low values are closer to the naive value.

Dear Philip, it was a pleasure, you did the main job here, the actual definition of the cuppized puzzle and the calculation of the solution here! ;-)

Daniel, for a low P, the posterior probability indeed goes down in proportion with the fraction of the parameter space that survived, see my blog. But if you don’t have good reasons to be confident that SUSY is probably wrong to start with, the exclusion of 2/3 of the parameter space is unlikely to change the odds qualitatively.

Well, just like Phil said, there are many ways to exclude parameter space because there are many SUSY theories or ways to fix their parameters in a graph. There’s a moment where people get tired and the budget makers start thinking, well, as indicated in my first comment, about the boy who cried wolf and become overly skeptical (seeing low P everywhere…).

Dear Daniel, I don’t know what budget makers will think or cry or how they get tired, but what I can say is that Phil has proven that if they ever reduce funding for SUSY research for reasons similar to those that you suggest, then they are irrational idiots just like yourself.

He gave a compelling argument, sure, but not if we had millions of different searches excluding SUSY all the time. Just take a look at Tommaso’s postings.

We don’t have “millions” of searches but even if we had millions, it wouldn’t imply that there’s anything wrong with Phil’s calculations and conclusions. According to a sensible measure on the parameter space, Phil has estimated that the searches have eliminated 2/3 of the parameter space. I would argue it’s an overestimate but even if it were 2/3, the resulting posterior probabilities are calculated by Phil’s formula – or see my blog for the general formula. Whether this exclusion of 2/3 of the parameter space is described in 20 or 60 papers and whether the boundary of the excluded region is composed of 20 or 60 arcs or line segments is completely irrelevant. Nothing that’s been demonstrated is affected by Tommaso’s postings, either. The only comment one could say in relation with Tommaso here is that he is writing tons of irrational bullshit, too.

Incidentally, our national athletes gathered to decide who is right, whether I or Tommaso. Tommaso, a chess player, insisted that my side would only win if his side also got checkmated.

The result has been known for 30 minutes now:

http://www.iihf.com/competition/272/news/news-singleview-2012/recap/6767.html?tx_ttnews%5BbackPid%5D=6249&cHash=804c2ea7c7

For the sake of the argument I was basing the calculation on the assumption that 2/3 was eliminated by LEP and 2/3 of what remained was eliminated by the LHC after that. That would mean 8/9 eliminated. If you think of it in terms of the acceptable mass range for the lightest SUSY particle I don’t think that much has been eliminated yet. I would agree that even 2/3 is an overestimate.

Tommaso was talking about the constrained minimal version of SUSY. This is the most artificial version of SUSY and is chosen just because they can search for it more efficiently. It should not be a big surprise that it is being eliminated quickly. I only have real faith in direct searches for new particles, but even then you have to make some assumptions about the cross-section and decay modes to rule them out. They really need hundreds if not thousands of inverse femtobarns before they can rule out BSM physics at this scale.

Hi Phil,

It was a coincidence that I wrote about his post… I hadn’t checked his RSS at that time. I was referring to his blog posts in general.

Hey Thanks a lot for the shout-out Phil. :] I missed it on my first pass looking at the article. I’ve lurked around here for a while, and especially appreciated the Higgs combo plots and some other stuff. Keep up the good work. Nice article here too. Quite an entertaining comment section this time. ;]

It’s interesting what you’re noticing about the stop post, and it seems a little bit mysterious to me. It’s a bit surprising that many people are Google-searching “stop” right now, and specifically from among my G+ friends apparently.

A hundred “stop” searches a day would have to come from the activity of millions of people, so it could be everyone on G+; however, I don’t see items shared by anyone else, so it is very mysterious.

When I search for “stop”, Google shows me two articles and one video on the first page that were shared by you.

What does it exactly mean “stop searches”? What is the search query? Are you at the top of searches for the word “stop”? That would be pretty good, wouldn’t it? And it would explain a lot of traffic. ;-)

When I google for “stop” I am at the bottom of the first page, but only if I am logged into Google+. The blog stats seem to show that 100 people a day search for “stop” and click on the viXra link.

This is really mysterious. It seems clearly impossible that 100 people a day who have me in their circles could be searching for “stop”. Maybe it shows up for accounts that are not directly linked to mine but in some further degree of separation…?

Funnily enough, the situation has now changed and I no longer come up on the first page when I google for “stop”. Perhaps the bods at the Googleplex read my article and realised that they were overweighting posts shared on Google+.

But it has not made any difference. I still get 100 hits per day via searches for “stop”. My best theory now is that the hits are coming from searches on Google Images, where the picture of the stop hand from the post is the second one listed. It seems much more plausible that people would search for images with this keyword, since it is useless as a search keyword on its own for any other purpose. In fact I found the image that way in the first place.

Dear Phil, it’s not bad to have been a top scorer for the word “stop”!

Can’t you find out whether the visitors come from Google Images etc.? Even SiteMeter shows me these basic statistics:

http://sitemeter.com/?a=s&s=s24lumidek&r=11

I think wordpress.com is less flexible than blogspot. I can’t embed applets for example. I tried to migrate to wordpress.org which is better but I could not transfer the domain name.

Anyway, I can see that 58% of search hits overall are coming from Google Images, so I think the new theory stands in good stead and “stop” is not the only keyword that is working.

I really hate how WordPress disregards all the fields I’ve filled in when it logs me in (i.e. my name, kind of important). Now that it’s happened a second time I know it’s not just me making a mistake.

I think that the prior probability for SUSY is non-existent – there is no experimental evidence for it based on the current Higgs data or anything else. With no valid prior probability, there is no valid posterior probability that can be calculated.

This is a similar mode of “reasoning” as that of the Catholic bigots who wanted to label heliocentrism or evolution or other things taboo. It’s not possible to even consider a hypothesis. The degree of belief in it has to be not only zero but also a blasphemous zero.

In reality, Phil states very explicitly what the prior probability for SUSY was – it was 90%. It was subjective. But that’s not a bug. The prior probability is always subjective. It is subjective by definition because it reflects one’s state of knowledge – immediate knowledge of someone at a specific moment – about the question and his or her evaluation.

My prior probability for low-energy SUSY seen at the LHC was 60%, and for SUSY at any scale it was 99.9999%. Physics just can’t work without it. People have different prior probabilities and they may be calculated with. Incidentally, it’s pretty much the rule that the more dumb a person is, the lower his or her belief in – or awareness of – SUSY is.

It’s ironic that you would accuse me of bigotry when you are the one attacking those who don’t subscribe to the canonized philosophy. I am also aware that you are probably very sensitive to the latest LHC results, and I am not one to say that the road for SUSY is completely over, but I think the current LHC configuration is well optimized for SUSY results and we are narrowing the energy range very quickly. I am quite open to new theories, but like anybody who cares enough about finding a better theory, I would not accept one without evidence even if a bunch of “experts” signed up for it. I actually believe that a variant of SUSY is correct, but that variant does not show results at the TeV scale – in fact, there are lower-energy experiments that have been ignored which would completely validate fundamental aspects of string theory. It’s the canonization of QFT that is causing the problems. Go ahead and call me all the names (I think you used some of the good ones already on Andrew); in the end there will be validation of a variant of string theory, just not the one you believe in right now. P.S. – my lack of knowledge of SUSY isn’t hurting me much, I am just glad I never published anything about it.

Reblogged this on In the Dark and commented:

Interesting comments about Bayes’ theorem and the prospects for detecting supersymmetry at the Large Hadron Collider. Also worth reading the comments if you’re wondering whether what people say about Lubos Motl is true..

In the Bayesian framework, if you make unreasonable a priori assumptions, you make compromised predictions. In other words, garbage in, garbage out. This reminds me of the Jeffreys–Lindley paradox, which demonstrates a similar problem.

In the case of SUSY, if you start with a strong belief (“90% probability of it being there, with all points of the parameter space equally likely”), you need comparably strong evidence to shake it.

I suspect that you did not _really_ have such a strong belief in SUSY. You just picked a number (90%) and tried to do maths with it. Bad idea. You should have listed possible a priori outcomes and assigned probabilities to them. It might have gone like this (I’m also making up numbers, but at least those are more in line with what a particle physicist could have reasonably expected in, say, 1990):

very light SUSY (multiple superpartners below 100 GeV, Higgs below 114 GeV, visible at LEP2) – 70%;

moderately light SUSY (superpartners in 100-300 GeV, visible by now at LHC) – 16%;

lightest superpartner outside current levels but below 1 TeV – 2%;

“unnatural” SUSY (no superpartners below 1 TeV) – 2%;

no SUSY at all – 10%.

With these a priori odds, your expectation of seeing SUSY went from 90% in 1990, to 66% in 2000, to 29% today.
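Those staged numbers can be reproduced mechanically; this sketch just renormalizes the surviving scenarios against the 10% “no SUSY” slice (labels abbreviated from the list above):

```python
# Hypothetical 1990 priors from the comment above (they sum to 1).
priors = {
    "very light SUSY (LEP2)": 0.70,
    "moderately light SUSY (LHC by now)": 0.16,
    "LSP above current limits, below 1 TeV": 0.02,
    "unnatural SUSY (nothing below 1 TeV)": 0.02,
    "no SUSY": 0.10,
}

def p_susy(alive):
    """P(SUSY) after the scenarios not listed in `alive` are ruled out."""
    susy_mass = sum(priors[k] for k in alive)
    return susy_mass / (susy_mass + priors["no SUSY"])

scenarios = [k for k in priors if k != "no SUSY"]  # insertion order (3.7+)
print(p_susy(scenarios))       # 0.90 in 1990
print(p_susy(scenarios[1:]))   # ≈ 0.67 after LEP2, i.e. by 2000
print(p_susy(scenarios[2:]))   # ≈ 0.29 today
```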

What I said was “I might have estimated the probability of it being there as 90%”

In a Bayesian world, prior probabilities are posterior probabilities from previous calculations that depend on earlier experiences. This means they will vary from person to person depending on what has been learnt before. If I say that my prior probability is 90%, you have no right to argue with that. Your own prior probability might be very different. That would be because we have used different facts to estimate them. If in the end we are influenced by the same clear results and we are rational, we should converge to the same answer.

Here is a related question in the context of a 1970s game show. Who says a bit o’ culture ain’t no good?

http://www.straightdope.com/columns/read/916/on-lets-make-a-deal-you-pick-door-1-monty-opens-door-2-no-prize-do-you-stay-with-door-1-or-switch-to-3

Good luck with trying to start yet another Monty Hall discussion. Most of us got tired of it years ago, but who knows, you may find someone who will bite. :)

It obviously makes no difference if you switch.

ps. ;)

I am a little confused about this debate. QM/QFT is a complete and consistent description of reality. SUSY is a mathematically consistent approach to explaining higher-order corrections. The correct statement is that there is a 100% certainty that SUSY exists. That we have not observed it, or the possibility that we will not observe it, should tell us something about nature and the nature of cancellations. The natural question is: if we lack empirical evidence, what is cancelling out SUSY? Is there a simpler framework we must consider?

Thinking about this in terms of heliocentric vs geocentric approaches, the geocentric approach was actually very flexible and allowed for nice “beautiful” patterns that described planetary motion. It was effectively a type of decomposition of motion that was extraordinarily flexible. The point was that heliocentric approaches allowed for more simplified computations. All one had to do was change the point of rotation. So a simple mathematical procedure cancelled out the epicycles. However, those epicycles return whenever we go back to the geocentric approach, and they can be confirmed by observation!

The point being, SUSY exists with 100% certainty, and the fact that we don’t observe it may be more of a statement about the truly privileged frame of reference we are at and we should take that observation as a very profound piece of evidence.