D0 sees no bump

Sadly the D0 experiment sees no bump in W boson + dijet events at Fermilab, contradicting the 4.1 sigma result from CDF.

This has already been reported here, here, here, and here. The original paper is here.

Now the two experiments need to get together to work out which is wrong and why. It is not certain that D0 is right, but it seems more likely that someone would see an effect that isn't there by mistake than that someone would fail to see an effect that is there. This gives D0 a strong psychological advantage.

To find out what went wrong they have to compare the raw plots and the background as seen in these plots.

The differences are too subtle to see from the visual image alone, and it does not help that they used different bins. There do appear to be significant differences in the backgrounds, while the data look quite similar. If that is the case then the problem is purely theoretical and they just need to compare their background calculations. However, the detectors are different, so perhaps the backgrounds should not look exactly the same. Only the people directly involved have enough details to get to the bottom of it.

I hope they will work it out and let us know because it would be nice to see that their understanding of their results can be improved to give better confidence in any similar future results.

By the way, you can still vote for us on 3QuarksDaily.

19 Responses to D0 sees no bump

  1. Kea says:

    Amazing! Well, there isn’t going to be a lot of new stuff at the LHC, after all. Some nice b physics, and that’s about it … except for all the cool fairy field exclusions, of course.

  2. […] “DZERO Refutes New CDF Dijet Resonance!,” A Quantum Diaries Survivor; “D0 sees no bump,” ViXra log; […]

  3. Amazing indeed! If the pions in question represent bound states of dark variants of quarks (maybe their colored excitations, just as leptopions are assumed to represent bound states of colored excitations of leptons), one could perhaps understand the discrepancy in the TGD framework.

    The peculiar feature of the reaction kinematics is that the pion-like states are produced almost at rest in the cm system of the colliding charges, which generate strong non-orthogonal electric and magnetic fields. This has also been mentioned to be the case for the Wjj states.

    If there is a small error in the determination of the energies of the colliding beams in the D0 experiment, the real rest system is not the lab system to good enough precision, and the dark pion-like states would leave the detector volume before decaying to ordinary visible particles. In that case the signature would be missing energy. A similar failure would explain why D0 failed to detect the tau-pions detected by CDF two and a half years ago.

    The universality of the production mechanism strongly suggests that ordinary pions also have octaves. If they correspond to dark matter in the TGD sense, they would not have been observed in experiments in which the target is at rest in the lab.

    For a blog posting about this see

    http://matpitka.blogspot.com/2011/06/tgd-based-explanation-for-cdf-d0.html .

  4. Bill K says:

    I hope that the next time the offhand suggestion is made that CDF and D0 could decrease their error bars by “combining their data,” or that ATLAS and CMS should do so, they are reminded of this episode, which illustrates the importance of keeping the two experiments separate and independent.

  5. Kea says:

    Hmmm, Matti’s idea makes some sense, since I was thinking of a dark (mirror) top quark anyway. Hard to explain away a 4 sigma.

  6. Kea says:

    Hmmm. Well, quarks are similar to neutrinos in many ways, since they have a mixing matrix. If we imagine ‘oscillating’ quarks moving one way around the Tevatron and ‘oscillating’ antiquarks moving the other way around the Tevatron, then there could be an oscillating phase difference for the mirror quarks around the ring … providing a fundamental difference between CDF and D0.

  7. Some clarifications to Kea:

    The general idea of the crackpot explanation is that

    * if the 300 GeV mother particle is dark (thus having a large hbar in the TGD framework) and transforms to visible matter via decays

    and

    * if it has a long enough lifetime (this could also be partially due to the large value of hbar, if the naive scaling lifetime ∝ hbar holds true)

    it can escape the detector volume and is visible only as missing energy. Only when the particle is almost at rest in the lab system (now the cm system of the beams) does it have hopes of being detected after decaying to ordinary matter.

    This mechanism is very general, and one cannot avoid the thought of how many long-lived dark matter particles we may have failed to detect over the years ;-). I gave a long list of the situations where I believe this has occurred. A crackpot experimentalist would pay special attention to candidate states created nearly at rest and to the precise calibration of beam energies.

    If the beam energies in D0 are not quite the same, it can happen that the cm system moves relative to the laboratory, and states almost at rest in it with long enough lifetimes leak out of the detector, leaving only missing energy. This however requires a long lifetime, which is quite possible if the pions appear as octaves, so that the phase space volume is small, and if weak interactions are involved in the decay.

    An example from ordinary hadron physics is the ordinary pion, with a lifetime of about 10^-8 seconds. If its dark octave is created moving at nearly light velocity, it can travel a distance of order a meter before decaying to a photon pair, and I guess that it could well remain undetected when the bombarding nuclei collide at relativistic velocity with a target at rest in the laboratory, since the cm system then moves relative to the lab at relativistic velocity.
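    In numbers, a rough sketch of that decay length, taking the ~10^-8 second lifetime quoted above at face value (the boost factors below are illustrative choices, not values from the comment):

        # Rough decay-length estimate for a long-lived pion-like state.
        # tau is the ~1e-8 s lifetime quoted above; the gamma values are illustrative.
        c = 3.0e8      # speed of light, m/s
        tau = 1.0e-8   # assumed proper lifetime, s

        for gamma in (1.5, 2.0, 10.0):
            beta = (1.0 - 1.0 / gamma**2) ** 0.5  # speed in units of c
            length = gamma * beta * c * tau       # mean lab-frame decay length, m
            print(f"gamma = {gamma:4.1f}: mean decay length ~ {length:.0f} m")

    Even a modest boost gives a mean path of several meters, so such a state could indeed cross a detector before decaying.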

    By the way, the proposed production mechanism requires neutral pions as mother particles. A 300 GeV pion would be charged and would produce a W and a neutral pion. Is exactly the same mechanism involved as proposed for the two-and-a-half-year-old CDF anomaly (which D0 failed to detect, while CDF detected the long-lived particle far from the collision point!) and for the DAMA-Xenon100 discrepancy? If so, a second octave of the neutral pion with a 600 GeV mass is created first. Its lifetime against two-gamma decay would, by the naive scaling argument (multiply by the mass ratio), be about 10^-12 seconds. A large hbar would make it longer.
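    As a sanity check of the “multiply by the mass ratio” arithmetic, a minimal sketch; the inputs (the ordinary neutral pion's two-gamma lifetime of roughly 8.5×10^-17 seconds and its 135 MeV mass) are approximate standard values, not figures from the comment:

        # Naive lifetime scaling: tau_new ~ tau_pi0 * (M_new / m_pi0),
        # following the "multiply by the mass ratio" argument above.
        tau_pi0 = 8.5e-17  # approximate two-gamma lifetime of the neutral pion, s
        m_pi0 = 0.135      # neutral pion mass, GeV
        M_new = 600.0      # hypothesized second-octave mass, GeV

        tau_new = tau_pi0 * (M_new / m_pi0)
        print(f"naively scaled lifetime ~ {tau_new:.1e} s")  # ~3.8e-13 s, of order 1e-12 s

    The result, about 3.8×10^-13 seconds, is indeed of order 10^-12 seconds as stated.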

  8. Kea says:

    Well, you have your opinion and I have mine.

  9. One more clarification. I would not consider this mechanism seriously for a single moment unless it applied to very many anomalies, the first of which were discovered in the seventies.

    I do not of course think for a moment that the average colleague would bother to spend even a minute seriously considering this proposal. I have collected mountains of evidence for the TGD based view of the world during the last 33 years and developed a refined mathematical theory. Despite this I am completely powerless against the ordinary simple stupidity of the average theoretician enjoying a monthly salary.

  10. Philip Gibbs says:

    Mud-slinging deleted

  11. Ulla says:

    Thanks

  12. ervin says:

    It is certainly worrisome, in my opinion, that the discrepancy between D0 and CDF on the Wjj bump may be due to some fundamental mismodeling. Do we really understand the physics of dijets, and how QCD behaves in these channels, well enough to be confident in how to interpret future LHC signals?

  13. Lawrence B. Crowell says:

    A similar thought occurred to me. It is also not entirely comforting to have what might be an instrument dependency here.

  14. Philip Gibbs says:

    It has always been appreciated that hadron colliders can be messy. The QCD background is hard to work with, but this should not have happened.

    It is still not clear that D0 have ruled out the signal, because they did not do the same analysis. Their jet selection criteria are quite different and might hide a signal. They did not even take the trouble to use the same bins. They didn't really seem very concerned about it and did not give it the attention it warranted.

    What I find surprising is that some of the CDF collaboration signed off on this when they had big doubts (e.g. T. Dorigo). It is a strange scenario where a PhD student seems to have written up his analysis in his thesis and published it there, probably without the usual feedback from the rest of the collaboration. I have no idea if that is normal practice. After a while it got published for CDF without much change.

    Many of the people in CDF are probably also now involved in other projects such as CMS/ATLAS. Are they losing interest and getting a bit sloppy?

    I don’t mean to be hard on them because they are obviously all working very hard at this critical and exciting time. They can only do so much and have to balance their priorities. However, they clearly need to find out which result is wrong or at least understand where the difference comes from.

  15. ervin goldfain says:

    “What I find surprising is that some of the CDF collaboration signed off on this when they had big doubts (e.g. T. Dorigo). It is a strange scenario where a PhD student seems to have written up his analysis in his thesis and published it there, probably without the usual feedback from the rest of the collaboration. I have no idea if that is normal practice. After a while it got published for CDF without much change.”

    Are you hinting that the CDF analysis was published without a thorough peer review? Even if that were the case, it cannot explain away the discrepancy, because CDF came back with further clarifications on systematics and data analysis criteria.

    • Philip Gibbs says:

      As you know, a thesis is reviewed by an examiner at a later date. Often it includes a mixture of work that has already been published by peer review and some that hasn't. In this case the thesis was a few months ahead of the peer review for at least this part. I don't know if the thesis has already been examined, but in any case an examiner would not be expected to check the fine details of an analysis.

      When a paper from one of these collaborations is submitted to a journal for peer review it is given an easy time by the referee, because it is assumed that the collaboration has been through it in detail and all concerns have been addressed. This story is a sign that, as the Tevatron groups wind down, their work is not getting the same scrutiny as it was before. I wonder how many of the 300-strong CDF group are now dedicated to this work.

      I think the further clarification came from the same limited group of people in CDF. To my mind it is still possible that they were right. There is no clear indication that the D0 analysis is better, other than that it produces a less surprising result. If they had used the same parameters as CDF it would have been more convincing.

      I am describing the way it looks to a casual external observer with limited information. The actual situation may be more complicated, but I think they should resolve the discrepancy as a priority and then make it clearer what actually happened.

  16. ervin says:

    Thanks, Phil.

    One can only hope that the smoke will clear soon. The LHC will have the final word in these (and other) pressing matters, and the physics community at large should be grateful for that.

  17. Bill K says:

    Discrepancies like this would seem more likely to occur at the LHC, where they are dealing with brand new hardware, software, and a new energy regime. By now CDF's data analysis should be well understood.

    • Philip Gibbs says:

      These are valid points but there are some counter effects. The experiments at the LHC are technically much better. They give more precise and complete data than the old Tevatron detectors. Also, the CMS and ATLAS collaborations are ten times bigger, giving them more people to check for problems and scrutinize the data.
