Since the announcement of the OPERA result there have been numerous theory papers about faster-than-light neutrinos posted to arXiv and even viXra. For next Friday the CERN theory group have organised a three-hour seminar to discuss the various theories. The rule of engagement is that nobody is allowed to talk about the result being wrong. They just have to imagine that it has been robustly confirmed and consider how they would explain it. It is a great idea and a pity that they are too shy to webcast it.

In any such discussion I think the first thing to remember is that the measurement was a purely classical one so you have to first address the classical (non-quantum) implications. This can go two ways. Either the Lorentz transforms are (locally) valid or not. In the experiment, protons were fired at a fixed target to generate pions and kaons that decay to provide a beam of neutrinos. If we want to keep the principles of special relativity intact in our explanation then we have to face the fact that the experiment can be transformed to one where a fast-moving target is smashed into stationary protons as seen by someone moving in the reference frame of the protons. This is enough of a Lorentz boost to transform the neutrino worldlines so that they would become anti-neutrinos that began life at the OPERA detector and headed towards CERN to meet the pions. This means they would have to anticipate the experiment so causality is dramatically violated. You can’t escape this result if you want to keep the Lorentz transformations. It does not matter whether the neutrinos are acting like classical tachyons with imaginary mass or if they are passing through a stargate buried underground that teleports them closer to the detector. The fact is that if Lorentz invariance holds then you can use the experiment to send information back in time. Some imaginative people may be able to dream up theories in which time-travel is acceptable due to branching timelines or whatever, but you might as well believe in Dr Who.
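The causality argument above can be checked numerically. Here is a minimal sketch in units where c = 1; the superluminal excess and baseline are illustrative numbers of roughly the OPERA order, not the measured values:

```python
import math

def boost(t, x, v):
    """Lorentz-transform an event (t, x) into a frame moving at velocity v (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

u = 1.0 + 2.5e-5          # superluminal signal speed, slightly above c (illustrative)
L = 2.4e-3                # baseline in light-seconds (roughly 730 km)
emission = (0.0, 0.0)     # neutrino leaves CERN
arrival = (L / u, L)      # neutrino reaches the OPERA detector

# For any subluminal observer moving at 1/u < v < 1 the order of the
# two events flips: the arrival is assigned an earlier time than the emission.
v = 0.5 * (1.0 / u + 1.0)
t_emit, _ = boost(*emission, v)
t_arr, _ = boost(*arrival, v)
print(t_arr < t_emit)     # True: the arrival precedes the emission
```

Because u exceeds c only slightly, the required boost velocity is itself very close to c, but it is still below c, so a perfectly ordinary observer sees the reversed ordering.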

The second alternative is to consider violations of Lorentz invariance, and this is what most theorists would do. The point remains that the size of the violation is large and classical in nature. This is not some subtle quantum gravity effect that only reveals itself at the Planck scale. It has to be something that is only hidden because of the difficulty of detecting neutrinos. Lorentz violation justifies the headlines that “Einstein was wrong”, and not just at scales where spacetime structure is expected to break down. This is being seen at velocity scales accessible to a modest particle accelerator.

The measurements tell us that the superluminal velocity of the neutrinos does not vary much with energy. They don’t seem to approach the speed of light as the energy increases, as classical tachyons would. In fact the lack of dispersion observed suggests a fixed speed for neutrinos, at least over the range of energies produced in the experiment. Other observations of cosmic neutrinos tell us that much lower energy neutrinos seem to travel at the speed of light. You can consider variations on the possible behavior, but I think it is difficult to escape one of two possible conclusions. Either the speed of light a few kilometers underground, where the neutrinos passed, is faster than the speed of light above ground, or there is a second fixed speed everywhere that high-energy neutrinos adhere to.

In the first case you could drill a deep hole and send down an atomic clock; when you bring it back up you will find that time has passed more quickly. This would have to be a much bigger effect than the known GR effects. I can’t see how such an effect would not have been seen in some other observation, so I won’t consider it further.

The remaining possibility is that there are two (or more) constant speeds everywhere in nature. This is not something you can attribute to small violations of Lorentz invariance. It seems to imply that Lorentz invariance is completely wrong, but wait. Einstein replaced special relativity with general relativity, where the spacetime metric is just a dynamical field associated with gravity. In GR the Lorentz transformation is just a subset of a more general class of transformations that locally preserve the metric. Suppose there were two metric fields that both transform according to the rules of general relativity, but one of them couples only to neutrinos and other weakly interacting matter. This, I think, is the best hope for a classical theory that could explain the superluminal neutrinos without causality violations.

However, with two metrics on spacetime you can combine them to define a preferred reference frame. For example, you can multiply one metric by the inverse of the other and construct the eigenvectors of the result to define vector fields that single out a stationary frame. Effectively you have created an aether theory, but at least one where the aether field is dynamical and nearly invisible. I think this is the least radical way to explain the OPERA result if it stands up.
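A toy 1+1-dimensional numerical sketch of this construction, assuming a hypothetical second limiting speed c2 for the neutrino metric: the eigenvectors of g1⁻¹g2 pick out the frame in which both metrics are simultaneously diagonal, i.e. the "aether rest frame".

```python
import numpy as np

c2 = 1.000025                    # hypothetical second limiting speed (c = 1)
g1 = np.diag([-1.0, 1.0])        # ordinary spacetime metric
g2 = np.diag([-c2 ** 2, 1.0])    # second metric, coupled only to neutrinos

# Boost to another frame: g1 is invariant under the boost, g2 is not.
v = 0.6
gam = 1.0 / np.sqrt(1.0 - v ** 2)
Lam = np.array([[gam, -gam * v], [-gam * v, gam]])
g1b = Lam.T @ g1 @ Lam           # equals g1 up to rounding
g2b = Lam.T @ g2 @ Lam           # picks up off-diagonal entries

# Eigenvectors of g1^{-1} g2 recover the preferred frame: the eigenvector
# belonging to the eigenvalue c2^2 is the 4-velocity (gamma, gamma*v) of
# the frame where both metrics are diagonal, as seen from the boosted frame.
M = np.linalg.inv(g1b) @ g2b
vals, vecs = np.linalg.eig(M)
idx = int(np.argmax(vals))
print(vecs[1, idx] / vecs[0, idx])   # ≈ v = 0.6
```

The point of the sketch is that the preferred frame is read off from the two metrics alone, with no extra structure put in by hand.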

What about the extra dimensional theories that some people are getting excited about? They don’t escape the classical arguments I have given and I suspect that these arguments can be made more robust if someone believes the OPERA result strongly enough to try it. You will either have to accept strong causality violations or an aether field that determines the frame for a second fixed speed. Any such arguments will make assumptions but violating those assumptions would require a paradigm shift to something so radical that we can’t really anticipate it.

Of course the much simpler explanation is that the experiment has neglected some systematic error, but that is too boring.

You did not mention the most conservative FTL option, namely the Scharnhorst effect.

http://vixra.org/abs/1109.0051

I hear that one possible mistake is synchronisation of clocks:

http://arxiv.org/abs/1109.6160

I am impatient, because the OPERA group has not given any opinion about this synchronisation.

That second paper is based on some pretty gross misunderstandings of the GPS common-view clock synchronization technique. For example, he imagines that the PTB sent an atomic clock to both CERN and LNGS. They didn’t; they sent a GPS antenna, receiver, and time comparator. They measured a time offset 2.3 ns different from the pre-installation calibration done by the Swiss national metrology institute METAS, which is well under 1 sigma.

GPS common-view clock synchronization does not depend on the accuracy of the GPS signals at all. They’re just convenient, frequently observable signal sources whose positions are accurately known. Receive the same signal at two locations, apply distance, atmospheric, and Doppler corrections to the received timestamps, and the result is the difference between the clocks used to compute the timestamps. The time the signal is generated cancels out. You know you’re doing the corrections right when the difference does not depend on the satellite location.

A positive approach would be to check whether the proton and neutrino detection graphs are skewed as suggested by Gilles Henri and me (http://blog.vixra.org/2011/09/19/can-neutrinos-be-superluminal/#comment-11190 + http://blog.vixra.org/2011/09/19/can-neutrinos-be-superluminal/#comment-11221) – or whether they have simply forgotten to add those dangerous 59.6 ns at the receiving end at the OPERA detector (http://blog.vixra.org/2011/09/19/can-neutrinos-be-superluminal/#comment-11329), as suggested by Steen Hansen.
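The cancellation at the heart of common-view time transfer can be shown in a toy numerical sketch; all the numbers below (clock offsets, distances, emission time) are made up for illustration:

```python
import random

random.seed(1)
c = 299_792_458.0                      # speed of light, m/s

t_emit = random.uniform(0.0, 1.0)      # satellite emission time (unknown to us)
offset_A = 12.3e-9                     # clock A error relative to true time (s)
offset_B = -45.6e-9                    # clock B error relative to true time (s)
d_A, d_B = 20.2e6, 21.7e6              # known site-to-satellite distances (m)

# Each station records the reception time on its own (offset) clock:
rx_A = t_emit + d_A / c + offset_A
rx_B = t_emit + d_B / c + offset_B

# Subtract the known propagation delay (in reality plus atmospheric and
# Doppler corrections) ...
tA = rx_A - d_A / c
tB = rx_B - d_B / c

# ... and the difference is the pure clock offset: t_emit has cancelled.
print(tB - tA)                         # ≈ offset_B - offset_A
```

Note that the accuracy of the satellite's own clock never enters: only the geometry and the propagation corrections matter, which is exactly the point made above.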

arXiv has over 120 papers on the CERN neutrino finding. Have the experimenters received any worthwhile criticisms of the running of the experiment?

It is not really possible for someone outside the experiment to find an error. All anyone can do is suggest possibilities that they have already thought of.

The real aim of announcing the result was either to get another lab to repeat the experiment, or to get further funding to carry out more tests and make better measurements themselves.

And on that score, surely they will be successful.

Well, I have tried to explain repeatedly what the radical paradigm shift could be: sub-manifold gravity, which is the core principle of TGD.

There is no need to give up causality, the Principle of Relativity understood as Poincare invariance, the Equivalence Principle, or General Coordinate Invariance (which has nothing to do with isometries and the Principle of Relativity). These principles are only generalized, and at the same time one achieves a unique unification of gravitation and the standard model and finds a solution to the problem of general relativity caused by the ill-definedness of energy and other Poincare charges.

Maximal signal velocity is defined in terms of the time to travel between points A and B, or more operationally in terms of the time for travel from A to B and back to A. It depends on the space-time sheet, since light-like geodesics for 4-surfaces are not geodesics of the imbedding space M^4xCP_2 but more general time-like curves. The maximal signal velocity for a space-time sheet can only asymptotically approach that of empty Minkowski space M^4 and is never larger than it.

One can also develop an argument providing a qualitative understanding of how the maximal signal velocity could depend on particle type and the scale involved, why it does not depend on energy, and even why it would be highest for neutrinos. Therefore it is possible to have consistency with known empirical facts.

See my first and second blog postings and the article. An earlier version of the article can also be found in the Prespacetime Journal.

If you take Minkowski spacetime, the lightcone is defined by c. The crucial thing is how time is measured, because there are many different distances. You will have to go to Einstein to have this explained, and maybe then you will also detect non-locality/eternity? Partial non-locality has been suggested. Can it be interpreted in various ways?

Can the Minkowski lightcone also be interpreted as a Lorentz transformation?

Maybe this also will give us an explanation of time?

Sorry, I cannot leave this comment unsaid. It is an idea I had before this FTL fuss, one that also partly explains biology, but I am no expert.

Many of the dozens of papers that have appeared during the last two weeks only cite the papers that appeared during the last week. Personally, I find this offensive to all the people who have worked in this field for decades. That is the case of Kostelecky et al. (http://www.physics.indiana.edu/~kostelec/res.html), who have studied and developed the fundamentals of Lorentz invariance violations for more than two decades. In the framework they developed (called the Standard-Model Extension, http://arxiv.org/abs/hep-ph/9809521), superluminal particles are natural features of a theory that extends the Standard Model and General Relativity to incorporate all possible terms that break Lorentz invariance. Additionally, they have shown that this type of extension does not lead to causality problems as usually stated (http://arxiv.org/abs/hep-th/0012060).

Even the idea of superluminal neutrinos was proposed, literally under the title “The neutrino as a tachyon”, back in 1985 by Chodos et al. (http://inspirebeta.net/record/15887), which was the first of a series of papers (http://is.gd/WnPma2).

Hear, hear. I find it unethical. Of course, they were so eager to write something, that they had no time to become familiar with the literature. It’s a disgrace. But then, no one here is surprised.

When did you last see the world being fair?

Who was the first to propose black holes?

Answer:

I never saw the world be fair, white man. That is precisely why one should not ignore it.

Phil, your logic is deeply flawed. If OPERA is correct, then clearly some elements of QG are required, and most of us know that Lorentz invariance is an emergent feature of classical spacetime. It can be exact classically, and still appear to be violated, if you insist that the experiment is entirely classical. But it isn’t.

How on earth can spacetime be classical? Everything is quantum mechanical. When you say most of you know, you mean most of you crackpots?

“Everything is quantum” was exactly MY point, moron.

Good to hear confirmation that CERN is taking OPERA seriously. I am getting really tired of armchair groupies telling me that it’s wrong because they read it on slashdot.

If “The rule of engagement is that nobody is allowed to talk about the result being wrong.”

does that mean

1 – only that nobody can criticize the experiment itself,

for example by saying that the trailing edge of the proton function is poorly known

or

2 – that not only can nobody criticize the experiment itself,

but also

that nobody can criticize models of superluminal neutrinos

on theoretical grounds, for example

as done by Cohen and Glashow in arXiv 1109.6562

in which they “refute the superluminal interpretation of the OPERA result” using well-established Standard Model physics

???

If it is 2, then it seems to me that the seminar will deal only with theories so totally disconnected from reality as to be pitifully useless.

If it is 1, then it seems to me that any model put forward will have to refute the paper of Cohen and Glashow, which might be hard to do.

Tony

Given that a neutrino condensate is not all that conceptually distinct from your own quark condensate theory, you may wish to think a little more about the possibility that OPERA is correct, before calling everyone else’s theories ‘pitifully useless’.

A theory is a theory is a theory. I can’t believe how many times I have to remind people.

Many theorists, including myself, have already noted a glaring loophole in this argument, which basically comes down to the assumption of entirely local physics. Since we were not working along those lines for neutrinos, which do NOT fit into standard schemes anyway, since they oscillate, there is no reason for us to take these arguments seriously.

My quark condensate model is quite consistent with the Cohen-Glashow paper arXiv 1109.6562 in which they say:

“… pair bremsstrahlung …[which]… proceeds through the neutral current interaction … allows us to exclude the OPERA anomaly and place a strong constraint on neutrino superluminality …

We have computed … the rate of pair emission by an energetic superluminal neutrino, and … the rate at which it loses energy …

about three-quarters of the neutrino energy is lost in each emission …

using the OPERA baseline … neutrinos with initial energy greater than … a terminal energy of about 12.5 GeV … rapidly approach a terminal energy … about 12.5 GeV …

Few, if any, neutrinos will reach the detector with energies in excess of 12.5 GeV. Thus the CNGS beam would be profoundly depleted and spectrally distorted upon its arrival in Gran Sasso …

The observation of neutrinos with energies in excess of 12.5 GeV cannot be reconciled with the claimed superluminal velocity measurement. …”.

Any theoretical model of superluminal neutrinos such as claimed by OPERA must address the Standard Model physics described by Cohen and Glashow.

If any model does not do so,

I stand by my opinion that it is pitifully useless.

Tony

Tony you can read the agenda of the meeting at http://indico.cern.ch/conferenceDisplay.py?confId=158116 . It is just the systematics of the experiment that will not be addressed. Theoretical constraints are on-topic.

Hi Tony,

Cohen and Glashow are assuming Standard Model Physics as their premise, and then showing it is inconsistent with FTL. Well – I guess someone had to prove that, but wasn’t it obvious?

IMO, we always expected ‘quantum gravity’ to modify ‘relativistic gravity’. Therefore, OPERA’s result – if it stands the rigors of scrutiny and independent verification – may imply quantum gravity effects (which are NOT part of the SM that Cohen and Glashow considered). It seems that Lorentz invariance is weakly broken by weak quantum gravity effects and/or the axion.

Although I think that Higgs theory might be an acceptable origin for the Weak mass scale, and the origin of bosonic (W and Z) mass, I am not convinced of it being a complete origin of fermionic mass. Therefore, I do see the point of condensates (these pseudoscalar symmetry-breakers occur in Condensed Matter Physics), but they still require an origin of mass from ‘somewhere’. Condensates will never be able to explain a superluminal neutrino – if that is one interpretation of OPERA’s results.

You know that Jonathan Dickau and I are writing a paper about this.

Have Fun!

“Some imaginative people may be able to dream up theories in which time-travel is acceptable due to branching timelines or whatever, but you might as well believe in Dr Who.”

The threat of paradox can be ameliorated if the spacelike effects are probabilistic. Then, when time loops do manage to occur, there will be consistent non-paradoxical outcomes.

This idea must have been discussed somewhere already in the extensive literature on tachyons, closed timelike curves, etc, but I can’t provide a reference.

Or rather, I can’t provide a canonical reference. arxiv:1007.2615 might count, and then there’s David Deutsch’s work from the early 1990s. There are also the many attempts to explain QM as arising from bidirectional causality – I consider Mark Hadley’s work the most interesting since it is grounded in general relativity, then there’s John Cramer, Huw Price, and many many others.

The recent paper from New Zealand, arXiv:1110.1162, appears to be the first one trying to explain the OPERA measurements as resulting from a genuinely spacelike effect.

Mitchell,

Can you describe what a time-loop would look like? How does light behave then?

Relative to the ordinary world of one-way time, a small time loop could be envisioned as a box which, for a short period of external time, has time going “in both directions” on the inside. Relative to external time, the time loop must have a beginning and an end, like the bottom and the top of two escalators going in opposite directions.

If you open the box and look inside, what do you see? One possibility is that you would see two types of objects inside, some going forwards in time, some going backwards in time. The other possibility is that you would only see objects going forwards in time, and the backwards-in-time objects are in some other space (like a wormhole) which only joins up with your space at the beginning or at the end of the time loop (that’s beginning and end as counted in external time, the time of someone looking into the box).

I wonder how many timeloops and wormholes you used in writing that text? Is there no simpler way?

Sorry to say it but doesn’t this take too much energy? There must be an effortless way without crossing barriers/opening of boxes all the time. Already now, without this extra effort, we have shortage of energy in biology. The budget is negative.

Memory is universal, not bound to biology. It keeps things together. Kea talks of QG (braidings?) as necessary. Matti of light-like extremals.

And the light? Can it travel backwards too? Time is measured by light. Dark light seems quite odd term :)

Ah, maybe the negative signals are sent backwards in time attracting as spinors to the observer that open up the box?

They should do the same experiment with photons. Probably the photons would show the same behavior.

The neutrinos traveled 700 km through the rock. The photons won’t go through the rock, so you need to specify your experiment. Do you want to drill a hole in the earth and send the photon along the hole? Or do you want to send the photon 700 km but not through rock?

In vacuum, notably.

Let the laser pulse fly over the Earth.

CERN has taken the correct attitude: anyone can invent easy arguments for why the experiment might be wrong and forget the whole thing after that. Why not just assume for a moment that the experiment was OK and try to use imagination.

There has been a lot of work on super-luminality over the decades. Whether it violates causality or not is the basic problem. A second, related problem is whether super-luminal Lorentz boosts make sense (I do not think that they are necessary).

The causality problem means that space-like momenta with positive energy can be transformed into momenta with negative energy by Lorentz transformations. Therefore the time ordering of events depends on the Lorentz frame. One solution of the problem assumes a rule of thumb allowing one to identify the time ordering in such a manner that particles always have positive energy. Depending on the frame, one would speak of the decay of a pion to a muon and its tachyonic neutrino, or of the fusion of a pion and a tachyonic antineutrino to a muon. The proposal makes me feel uneasy.
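For the record, the sign flip described here is a one-line piece of standard tachyon kinematics (units with c = 1), independent of any particular model:

```latex
% For a space-like momentum, |p| > E > 0, so a subluminal observer with
% velocity E/p < v < 1 always exists, and for that observer the energy
% changes sign:
E' = \gamma\,(E - v\,p) < 0 \quad\Longleftrightarrow\quad v > \frac{E}{p},
\qquad \frac{E}{p} < 1 .
```

Since E/p is strictly less than one for a space-like momentum, the frame-dependence of the energy sign, and hence of the time ordering, is unavoidable.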

To my view the most realistic manner to approach the neutrino problem is to admit that we are indeed standing on the shoulders of giants. Physics is an enormous cultural achievement – far beyond the grasp of any individual. The mathematical concepts have been tested again and again. Poincare symmetry has an incredible organizing power and without it we would not have particle physics.

Throwing away all of this painfully gained wisdom and saying that classical geometry is something very out of fashion and even a groupy thing does not look to me like a very promising approach – to put it mildly. The notion of symmetry has developed hand in hand with physics and is the basis of all modern mathematics: every mathematical structure involves automorphisms respecting that structure. In mathematics, throwing out the notion of symmetry would mean a return to the days before Galois. Not much would be left, and categories would be the first to leave;-).

Besides this, to my best knowledge buzzwords like “emergent space-time” remain completely empty. I have found that the attempts by physicists to define the notion implicitly involve the notion of continuous space-time! We should climb up the tree rather than cut off the branch on which we are sitting.

Following this line of approach, the first question to ask is whether we can generalize the well-tested principles of special and general relativity to understand the nasty neutrinos.

If you actually cared about physics you would be studying very hard to learn abstract concepts of emergent geometry. You prefer to be pompous.

I have actually worked with Poincare reps, and as Mitchell already pointed out, there are tachyonic ones. Learn some maths.

I wonder what you are, dear Kea. Grandiose?

One interesting paper that I saw was based on the properties of the neutrino propagator, which does not vanish for space-like distances. This would allow super-luminal velocities if the neutrinos are interpreted as virtual neutrinos.

The bounds on neutrino masses are however very strong and do not predict a long lifetime for the idea: if the neutrino masses are ordered like those of the charged leptons, the electron neutrino would have a mass of order 10^-8 eV, the muon neutrino something like 10^-3 eV, and the tau neutrino something like 10^-2 eV. Probably this theoretically elegant option is easy to kill.

It would be nice if someone would explain what kinds of things extended Lorentz invariance (allowing super-luminal Lorentz boost velocities) can mean and whether it is necessary at all.

In my humble opinion it might be a good idea to follow the standard practice of physics and increase the resolution of the microscope so that neutrino does not look point-like anymore. Maybe we could try first to imagine what neutrino (or electron, or photon, or…) and neutrino propagation could look like in the improved resolution

A string model would be the first guess for what we would see at the improved resolution. TGD would be the second guess, and it was indeed proposed both as a solution to the energy problem of GRT and as a generalization of the hadronic string model after its failure.

Two years after my thesis came the superstring revolution, based on the brilliant idea that although the string model fails to describe hadrons, it might quite well describe everything;-). The science of landscapeology has taught us that it describes much, much more than everything. But not hadrons, as LHC has taught us;-).

Dear Matti,

My brain usually starts to hurt when I read your posts, but I recognize some of the words you are using!

“Maybe we could try first to imagine what neutrino (or electron, or photon, or…) and neutrino propagation could look like in the improved resolution”

Since you appear to understand the mathematical concepts and their evolution since Galois, maybe you can figure out how to deal with the following idea of what an electron-proton (a hydrogen atom!) system may look like:

The electron is a sphere of rotating space (i.e., the whole “atomic orbital”), rotating on two of three possible orthogonal axes, surrounding the proton, a similar sphere of rotating space, also rotating on two of three possible orthogonal axes (but which two we can never know). The space is rotating because there is pressure coming from inside the proton and outside the electron: the pressure of space. Rotation must absorb the pressure from every direction in space, so it must be on at least two axes I think. The proton and electron spheres (which also oscillate radially) are created from a neutron sphere which rotates on three axes. The neutron decays because the internal and external fields cannot support it by itself, so it splits into the proton and electron spheres, which the field can support provided the electron and proton exchange magnetic flux quanta (h/2q). The m.f.q. provide half the energy for each sphere; the other half comes from the pressurized internal/external field. Pressure builds in the spatial field because space expands at its full potential, which means a constant rate, at its limit (the limit of the field); but space cannot expand at its full potential inside the field because there is already space there. So (since the universe is not perfect) a crease develops; then rotation of the spheres, with a small region of space inside each rotating sphere, trying to get out, and the rest of the universe outside each rotating sphere, trying to get in. Gravity is then a net motion of space toward rotating spheres.

Anyway, the proton and electron together do not have the mass of the neutron: some mass is missing. The neutrino is then (it seems to me, in this scenario) a kind of gravitational wave.

Since the neutrino is an electrically neutral wave, while the light wave is influenced by the electromagnetic field (a field, mind you, created only when the electron sphere is separated from the proton sphere due to absorption of any wave exactly equal to its frequency of oscillation — not its frequency of rotation — a frequency equivalent to 13.6 volts) and the net-space-motion “gravitational field”, it makes a kind of sense that the electrically neutral wave should be faster than the electromagnetic wave. The speed difference between the neutral wave and the electromagnetic wave might even offer a way to figure out the energy of the “whole field”, i.e. space.

Which is also to say that there is no “electromagnetic” field between the rotating spheres. They live harmonically with each other, held in place (inertial mass) by the space pushing on them from all directions, and maintained by the constant pressure of the field. (This pressure, to be completely extravagant, comes from the future.)

In this scenario, my question is: is it possible to develop a way to trace the path of some point on the surface of such spheres, rotating simultaneously on three (or two) orthogonal axes? Then to figure out what would be the effect on space when a proton sphere (approx. 1860 times smaller than an electron sphere and rotating approx. 1860 times faster) moves through an electron sphere? Because that gross distortion of space could explain why electrons and protons attract each other, etc.

(I cannot be shamed on this: I will flog it to anyone. Ignore it if you like, I will not be embarrassed. But you appear willing to consider new ideas within the whole evolved realm of mathematics and I worship that. My high school algebra works in this scenario to give the electron magnetic moment to seven digits after the decimal point. It also gives formulas for the various transition series, Lyman, Balmer, etc.)

Dear Kea,

you do not seem to fully realize the deep connections of category theory and modern mathematics to physics.

One must simply understand how physics has developed. Differential equations and calculus, partial differential equations, special functions, differential geometry, symmetry groups, the Noether theorem,… gradually, layer by layer. Any working theoretician talks in terms of these concepts if the goal is to produce something comprehensible and meaningful.

At the top are category theory and related concepts, and of course they are part of TGD too. As are p-adic physics and new mathematical notions inspired by TGD. But everything is based on the best mathematics that already exists.

If one does not master these lower levels, category theory is completely useless empty snobbery. Learning all this background of course requires almost inhuman self-discipline. It helps enormously, however, if one has a really profound physical idea, since it forces you to learn the right things. I would certainly be a crackpot if I had not had the luck of discovering this kind of really deep physical idea. We do not develop good ideas; they develop us.

The notion of space-time emergence is unfortunately one of those empty buzzwords producing much confusion. If you have some reasonable definition of space-time emergence, please give it to me so that I can be shown wrong.

“One must simply understand how physics has developed. Differential equations and calculus, partial differential equations, special functions, differential geometry, symmetry groups, Noether theorem,…”

And I am far more qualified than you to discuss all these subjects, having worked as both a physicist and a mathematician, having obtained a theory PhD more recently, having worked most of my life in science, and having worked with the professionals, rather than mouthing off at all of them for being idiots. Go on, keep telling everyone else they are an idiot. You too will only be bones in the ground soon enough.

You yourself are pretty good at telling everyone they are idiots, especially those trying to help you. All of us will be bones quite soon, and we never know in which order. Instead of this arrogance you could respond with facts or proposals, dear Kea. I hardly think OPERA is interested in such arguments.

For instance, I spent a good part of the 90s studying/researching nonlinear PDEs, which of course led to an interest in the quantum analog and Hopf algebras via the pioneering work of Kulish and Reshetikhin on the sine-Gordon equation …

When I scanned through your comments here, the constructive proposals were – null. Nothing. Just complaints and arrogance. Do you own a mirror? Sorry to say, but this is how I see it.

I was assisted in understanding by the beginning of vixra:1107.0002, especially the reference to noncommutative geometry. Noncommutative geometry is a well-known and highly developed example in which the algebra is more fundamental than the geometry.

Dear Mitchell,

I do not have anything against non-commutative geometry as a mathematical tool. Finite measurement resolution means a cutoff in “momenta”, and this means that fields can commute/anticommute only at a finite number of points with space-like separations.

One indeed obtains non-commutative geometry as an effective tool reflecting the finite measurement resolution. The commutativity condition also gives an effective discretization: a finite number of modes means a finite number of points. Non-commutative spinors and their tensor products also emerge naturally in finite measurement resolution realized in terms of inclusions of hyperfinite factors.

What I do not believe in is non-commutative Planck-scale mysticism without any physical reason for it. Finite measurement resolution is present at all length scales, and non-commutative spinors provide a tool for its description at all length scales. Braids serve as the space-time correlates.

Dear Matti,

Then what is your interpretation of the Dirac equation? To me it is clearly a quaternion equation: gamma^5 is the quaternion pseudoscalar, and quaternions are non-commutative. Why are you separating the math from its natural geometry?

If we are playing around with 10-D string, 11-D M- or 12-D F- Theories, then we can always embed quaternion 3-spheres or octonion 7-spheres within that larger space. Besides, close-packing of 7-spheres leads directly to the E8 Gosset lattice. And although E8 does not have complex representations, and cannot represent CP violation (it probably can’t represent FTL neutrinos for the same reason), it still has some popularity.

Have Fun!

The Dirac cones, expressing the equation. Carbon is octonionic? Can this be made without higher D?

http://physics.aps.org/articles/v4/79

Dear Ulla,

It seems that Carbon can be made in a mere quaternion of (3+1) dimensions, but my models require at least octonion geometry to explain anything FTL.

Please recall that Baryonic matter is a mere fraction of the observable Universe. There should be a lot of stuff out there that we can’t see. What if this Dark Energy and Dark Matter is more stable in – say – 8 dimensions than it is in 4 dimensions?

Have Fun!

Carbon has some structural similarities with the mesons, as if they are a template for C. It has eight valence electrons (2 x 4).

Dear Ulla,

I like Carbon-based structures. The hexagonal graphene lattice is a close-packing of 1-spheres, may represent a G2 algebra, and Subir Sachdev has used it to try to explain AdS/CFT correspondence. I have tried to build a Fermionic structure (that includes tachyons) out of the C-60 truncated-icosahedron lattice. And I don’t think it is a coincidence that rotating nested buckyballs may morph into a lattice-like 2-torus of minimum rank (and effective dimensionality) of seven (looks a lot like octonion and/or G2 symmetries).

I’m more interested in Theoretical Particle Physics than in Nuclear Physics, but I wouldn’t be surprised if Nature uses similar efficient structures at various scales.

Have Fun!

Some comments about the fuzzy notion of emergence. The article of Seiberg is helpful in this respect: see http://arxiv.org/pdf/hep-th/0601234.

When a murder has taken place, the first question is "Who has the motive?" This question is useful even when no murder has yet taken place but there are reasons to suspect that it might.

From the above article one learns what most of us actually already know: the notion of space-time is lost in string models. String world sheets define a 2-D space-time, and string theories would define wonderful theories of everything if our space-time were a 2-dimensional surface in 10-D Minkowski space.

Unfortunately this is not the case. Hence it was decided that space-time "emerges". The first attempt to define what this mysterious emergence might be was spontaneous compactification, which after some ad hoc assumptions led to Calabi-Yaus in the superstring context and G_2 manifolds in the case of M-theory. The mirror symmetry between Calabi-Yaus and their duals allows one to say that space-time even in the 10-D sense is not unique and hence emerges. The background problem could also be solved by saying that there is no unique 10-D space-time. This would be nice if it worked, but it does not, and this led to the landscape problem.

The next attempt was branes, which in some mysterious manner emerge "non-perturbatively". In simplified language this states that we do not have the slightest idea of how branes could pop up. We just need them to save the theory, and even this does not help. One can also invent an endless variety of dualities between non-existing theories in an attempt to hide the fact that string theories are what they obviously are: theories of everything only if space-time were 2-dimensional.

At a deeper level one can of course consider getting rid of all kinds of continuous spaces in the fundamental formulation of physics. This approach would force us also to eliminate linear spaces. No spinors. No Hilbert spaces. No continuous symmetries. No continuous algebras. No continuous number fields? Just rational numbers? Or just finite fields? Or do we deny even their existence? Obviously this leads to a disaster.

The alternative approach is more liberal. Why not postulate that all mathematical structures which can exist without internal contradictions indeed exist, in the sense that physical systems can at least emulate them, and that the only thing that exists is the mathematical structure allowing this emulation: physics as an analog of a Turing machine.

The pleasant surprise is that geometric existence in the infinite-dimensional sense is an extremely restrictive notion: this geometry might be essentially unique if non-trivial, and it requires infinite-dimensional isometry groups and conformal symmetries.

Also a direct connection with classical number fields emerges. Physics would be able to emulate any physical theory free of internal contradictions.

Algebraic geometry is a classical approach in which algebra seems to replace geometry. The introduction of p-adic physics indeed leads to a generalization of the number concept and allows a hierarchy of structures of increasing algebraic complexity realized as algebraic surfaces. Behind this, however, there is always continuous geometry, also in the p-adic context.

Non-commutative geometry is often associated with emergence. It emerges as an effective geometry associated with a finite measurement resolution, is present in all length scales, and has discretization as a natural space-time correlate. Four-dimensional space-time is a fundamental notion, although the strong form of holography, reducing to the strong form of general coordinate invariance, implies effective (but not real) 2-dimensionality and brings in string world sheets, partonic 2-surfaces, and braids.

LHC is many things, but a “modest particle accelerator” is not one of them. At the very least it is a cocky and arrogant particle accelerator.

As far as theories go, after experimental error (or, more likely, a conceptual error in converting experimental inputs into platonic values for the result), the most plausible theory by far is that there is one and only one "c", and that the neutrino measurement is simply closer to "c" than the conventionally measured speed of light, due to systematic error in photon-based measurements of "c".

One doesn't need to go whole hog to a universal aether theory to get there. All one needs is some interaction that a photon has and a neutrino doesn't, giving rise to what amounts to a refractive index for photons – similar in order of magnitude to that of air or helium gas – that is common to all of the experiments in which "c" has been measured with greater precision than in the OPERA experiment but has not been accounted for. If this refractive index is absent in deep space, but not on Earth and in Earth orbit, and is not absent in the vicinity of a supernova, it can also account for the difference in the potential neutrino-photon effective speed in the 1987 supernova observation vs. OPERA.

This is consistent with the lack of energy dependence seen.
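As a rough numerical check of this idea (the baseline and timing figures are approximate, taken from public OPERA reports; the gas refractive indices are handbook values I have added for comparison):

```python
# OPERA's early arrival expressed as a fractional speed excess, compared
# with n - 1 for common gases (handbook values; the comparison itself is
# the commenter's idea, the numbers are mine).
C = 2.99792458e8    # m/s, speed of light
BASELINE = 730.0e3  # m, approximate CERN-Gran Sasso distance
EARLY = 60.0e-9     # s, reported early arrival

frac = EARLY * C / BASELINE   # (v_nu - c)/c if photons set the baseline
print(frac)                   # ~2.5e-5

for gas, n_minus_1 in {"air": 2.9e-4, "helium": 3.5e-5}.items():
    print(gas, n_minus_1)     # helium's n - 1 is the same order as frac
```

So the required "photon refractive index" is indeed in the helium-gas ballpark, as the comment suggests.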

Also, if one must have tachyons, it isn't necessarily the neutrinos that are tachyonic. It could be that the W boson is tachyonic (after all, the W boson does all sorts of other weird things that no other Standard Model particle does, like CP violation, flavor changing, charge changing, having mass while being a boson rather than a fermion, etc.) while the neutrino itself is just minding its own business: an utterly boring, Lorentz-invariant, Standard Model particle. Since high-energy Ws are profoundly more rare and have much shorter lifetimes in nature than high-energy neutrinos, this would lead to fewer observable effects. The W would have to be massively tachyonic to get a 60×10^-9 s effect with a 3×10^-25 s half-life. But once one is in the realm of the weird, why not go whole hog.

“LHC is many things, but a “modest particle accelerator” is not one of them. At the very least it is a cocky and arrogant particle accelerator.”

CERN used the Super Proton Synchrotron (http://en.wikipedia.org/wiki/Super_Proton_Synchrotron) to generate the Neutrinos. The LHC doesn’t come into the picture…

“The W would have to be be massively tachyonic to get a 60*10^-9 second effect with a 3*10-25 second half life.”

Half life at c?

AFAIR that is the rest-frame half-life – in the lab frame a relativistic W boson's life is much longer due to time dilation.

Remember the Abell galaxy and DM? It has got new dark ‘light’.

http://physicsforme.wordpress.com/2011/10/11/how-a-team-of-enthusiasts-are-mapping-dark-matter/

see also the Penrose’s rings and CMB-sky

http://arxiv.org/PS_cache/arxiv/pdf/1110/1110.2051v1.pdf

FTL neutrinos and the 1987A supernova with 98% dark matter, mix and voilà…

Light reacts to gravity, and gravity reacts to neutrinos and light. But is there any interaction between light and neutrinos? Light is a thermodynamic arrow, as is time, because time is light. So, if there is no interaction, then time behaves differently? Neutrinos are not thermodynamic arrows, only gravity? I wish Kea could explain better, so that even I would understand.

This is something totally new.

From the first link: colliding clusters (the Bullet Cluster) of galaxies that have passed through one another at unspeakably energetic speeds. As they moved past each other in opposite directions, the stars slowed down a little, and the hot gas, which is the pinkish areas, slowed down a lot. But the dark matter, which doesn’t interact with anything except gravitationally, didn’t slow down. It is represented in blue here, way ahead of the rest of the material in these clusters. It’s not directly visible in this image; the blue shading is inferred from the effect that its gravity has on background radiation. The gravity of dark matter acts like a lens, warping the passing light.

Anisotropy, as was seen in supernovas and Big Bang.

Maybe also completely dark galaxies are out there, showing their existence only through gravitational lenses? Informal open discussions.

http://www.galaxyzooforum.org/index.php?topic=272561.0

http://www.galaxyzooforum.org/index.php?topic=272549.0

http://www.galaxyzooforum.org/index.php?topic=279428.0

http://arxiv.org/abs/1103.2124

BBC News – ‘Pandora’ galaxy cluster crash yields dark matter clues http://www.bbc.co.uk/news/science-environment-13878171

Supernova 1a has a double star, one white dwarf (DM?) http://www.galaxyzooforum.org/index.php?topic=277245.0

http://arxiv.org/PS_cache/arxiv/pdf/1002/1002.3359v1.pdf

I am a boring guy and I would first confirm the OPERA results, and if (and only if) they are confirmed, then study the possibility of explaining FTL while maintaining LI (one possibility is via non-local quantum effects, in which case the FTL is only 'apparent'). If that is not possible we could start speculating about new physics.

I would add that I am convinced the OPERA guys did something wrong. I myself contributed weeks ago to a joke about their supposed measurement (hashtag #mundaneneutrinoexplanations) that spread on Twitter and other social media.

A lot of the discussion of superluminal neutrino models has been in some sense abstract, but I would like to put the Cohen-Glashow arXiv:1109.6562 arguments in physical terms (my apologies to them if my effort here to do so is an inaccurate representation of their ideas) as follows:

1 – It is an observational fact (accurate to 10^(-15)) that electron velocity is subluminal.

2 – Due to neutral currents, as neutrinos propagate they emit and reabsorb electron-positron pairs.

3 – At subluminal velocity, a virtual electron-positron pair emitted by a neutrino travels alongside the neutrino for a little while and then is reabsorbed into the neutrino.

4 – If the neutrino is superluminal, as soon as a virtual electron-positron pair is emitted it is decelerated (because of 1.) to subluminal velocity and falls behind the neutrino, so that it cannot be reabsorbed; it therefore becomes a real electron-positron pair (bremsstrahlung) that drains energy from the neutrino, as described by Cohen and Glashow.

Physically, any superluminal neutrino model (no matter how nice its mathematical formulation) must violate at least one of the physical propositions 1–4.

I would like to see exactly how each superluminal model justifies such violation.

Tony
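Tony's propositions can also be put in numbers. A commonly quoted consequence of the Cohen-Glashow kinematics is a pair-emission threshold of roughly E_th = 2·m_e/√δ; a sketch, assuming OPERA's reported (v−c)/c ≈ 2.5×10⁻⁵ (the formula and numbers here are my paraphrase, not the thread's):

```python
import math

# Sketch of the Cohen-Glashow pair-emission threshold (arXiv:1109.6562):
# a superluminal neutrino with delta = (v^2 - c^2)/c^2 can radiate
# e+ e- pairs once its energy exceeds roughly E_th = 2*m_e/sqrt(delta).
M_E = 0.511e-3  # electron mass, GeV

def pair_threshold(delta):
    """Approximate threshold energy (GeV) for nu -> nu + e+ + e-."""
    return 2.0 * M_E / math.sqrt(delta)

delta_opera = 2 * 2.5e-5   # from OPERA's reported (v - c)/c ~ 2.5e-5
print(pair_threshold(delta_opera))  # ~0.14 GeV, far below the ~17 GeV beam
```

Since the OPERA beam energy (~17 GeV) sits far above this threshold, the energy drain Tony describes should be severe, which is the core of the Cohen-Glashow objection.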

I agree with Tony's points. Any model of super-luminality remaining within the framework of special and general relativity must violate at least one of these propositions.

Therefore, if the measurement result is correct, the problem is at much deeper level.

What does maximal signal velocity mean?

This is the question. Are special and general relativity the only possible frameworks satisfying the Principle of Relativity (Poincare invariance), the Equivalence Principle, and General Covariance? As a matter of fact, general relativity fails to satisfy Poincare invariance, and this was the basic motivation for TGD.

For sub-manifold gravity the maximal signal velocity depends on the sub-manifold, so the situation changes. The maximal signal velocity for photon space-time sheets would be smaller than that for neutrino space-time sheets and would depend on scale (on distance, in a discrete manner). This would yield apparent super-luminality.

This is quite possible, and in this article I propose a mechanism explaining it, relying not only on the induced metric but also on the induced spinor structure, which provides a geometrization of classical gauge fields in terms of CP_2 geometry. There is no need for tachyons or for a breaking of causality.

Here M-theorists meet a difficulty. They can mimic the TGD explanation based on the induced metric and apply it to branes, but induced spinor structure – for a reason which has remained mysterious to me – is something totally new for them, although it is an application of the standard procedure of bundle induction.

http://arxiv.org/find/astro-ph/1/au:+Qian_Y/0/1/0/all/0/1

Qian uses a mixing angle phi=12 ; the same as Kea?

He has also studied these oscillation patterns.

Superluminal neutrinos

http://mathbin.net/74445

Hi Number 26,

Just a couple of quick questions –

1) Do you really think that neutrino masses are proportional – in any way – to the Planck scale mass? Let’s face it, SUSY was invented to keep the Weak mass scale stable against Planck scale radiative corrections. And the neutrino mass scale is even smaller than the Weak mass scale – perhaps it is a weak perturbation of zero (such as Howard Georgi’s scale-invariant Unparticles of zero mass), or it is a new mass scale initiated by the weak interactions of the hypothesized axion.

2) I think you counted the Higgs degrees-of-freedom wrong. The SM Higgs has one complex scalar doublet (4 dgf’s), and the SUSY Higgs has two complex scalar doublets (8 dgf’s). You are correct that the SUSY Higgs sector yields 5 Higgs bosons, but I don’t think it is OK to neglect the longitudinal modes of the W and Z. Besides, you completely neglected color and spin dgf’s for quarks and leptons.

Are you an E-Infinity member or student? I expect scale invariance, scale relativity, and perhaps even the Golden ratio to be relevant, but too much of this sounds like numerical coincidence.

I can explain so-called ‘superluminal neutrinos’ in 10-D.

Have Fun!

Hi John,

It seems to me that we need an imaginary non-zero proper mass to travel faster than c. If one flavor of neutrino had imaginary mass, then I think we would have noticed that by now. I guess the rarest known neutrino is the tau neutrino, but even that most-likely has a real non-zero proper mass.

I have not yet read a paper that was complex enough to explain both OPERA and SN1987a results short of saying that ‘OPERA is wrong’. I’m not taking that stand, but I haven’t yet finished the paper on my 10-D model.

I have been thinking about some of these ideas for a couple of years, but was more-focused on a TOE, and never seriously thought that we would ever measure ‘superluminal neutrinos’. So I need to simplify my ideas down to a reasonable explanation of OPERA.

Have Fun!

Hi Ray, then we are in full agreement.

Some of the hypotheses aired, however, assume that particles with mass > 0 can travel faster than c. We shall know in a couple of years, when the OPERA team has checked all the possible systematic errors that have been proposed and has perhaps performed new experiments.

As I have said previously, I personally *hope* their results are correct – no more tedious 'yet another experiment/observation which proves SR and GR right', but a whole new line of physics to work on for us, our children and grandchildren – but Murphy's law probably strikes back once more :-D

Hi John,

I agree that neutrinos have non-zero mass and cannot, therefore, travel *AT* c. I don't think that Edwin was being that literal, but there is a difference between 0.3 ppm slower than c and 25 ppm faster than c. Any reasonable model needs to explain the discrepancy.

Hi Ed,

I think my model can explain the apparently conflicting results between OPERA and SN1987A.

The SN1987A neutrinos didn’t travel at c, but at ~ 0.9999997c.
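As a back-of-envelope check on speeds like this: for a neutrino of real mass m and energy E, the arrival lag behind light over a distance D is about (D/c)·(mc²)²/(2E²). The distance, energy, and ~1 eV mass scale below are my illustrative assumptions, not figures from the thread:

```python
# Back-of-envelope lag of a massive neutrino behind light over the
# SN1987A distance, using dt ~ (D/c) * (m c^2)^2 / (2 E^2) for E >> m c^2.
LY = 9.4607e15    # metres per light year
C = 2.99792458e8  # m/s, speed of light

def lag_seconds(distance_ly, mass_ev, energy_mev):
    """Arrival lag (s) behind light for a relativistic massive particle."""
    d_over_c = distance_ly * LY / C
    return d_over_c * (mass_ev / (energy_mev * 1.0e6)) ** 2 / 2.0

# assumed: ~168,000 ly to SN1987A, 10 MeV neutrinos, ~1 eV mass scale
print(lag_seconds(168000, 1.0, 10.0))  # ~0.03 s: utterly negligible
```

So a real sub-eV neutrino mass keeps the SN1987A neutrinos within a fraction of a second of light speed over 168,000 years; a deficit as large as 3×10⁻⁷ would instead imply a lag of weeks.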

SN material becomes transparent to neutrinos some 18-26 hours before photons. It takes time to disperse the "cloud" over a few billion km.

Ray,

Just curious as to whether your 10D explanation also accounts for the SN1987A case in which the neutrinos apparently travel at c.

Edwin Eugene Klingman

The superluminal neutrinos are an effect of neutrino interaction with extra dimensions, whose quantum distances are shorter than the distance for particles that do not violate CPT.

The violation of CPT directly involves a bounded quantum violation of Lorentz invariance. It is an effect already included in standard quantum theory. A particle has an amplitude to be found outside the light cone equal to:

$latex \exp\Bigl(-\frac{r}{\lambda}\Bigr)$

$latex \lambda=\frac{\hbar}{m\cdot c}$
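For orientation on scales: the amplitude above falls off on the Compton wavelength λ = ħ/(mc). A small sketch (the ~1 eV neutrino mass is my illustrative assumption, not a value from the comment):

```python
import math

# Scale of the amplitude exp(-r/lambda) outside the light cone.
HBAR_C_EV_NM = 197.327  # hbar*c in eV*nm (CODATA value)

def compton_wavelength_m(mass_ev):
    """lambda = hbar/(m c), in metres, for a mass given in eV."""
    return HBAR_C_EV_NM / mass_ev * 1.0e-9

lam = compton_wavelength_m(1.0)  # assumed ~1 eV neutrino mass scale
print(lam)                       # ~2e-7 m
# At the ~18 m scale suggested by OPERA the exponent is ~ -9e7, so the
# amplitude underflows to zero in double precision:
print(math.exp(-18.0 / lam) if 18.0 / lam < 700 else 0.0)  # 0.0
```

So any straightforward exp(−r/λ) tunnelling with a real neutrino mass is fantastically too small to matter over 18 m; whatever mechanism the comment intends must go beyond this.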

Moreover, the experiments should show for the tau neutrino a bounded violation similar to that of the muon neutrino. The effect is implicitly due to the quantization of space-time-energy.

In principle there are 3 minimum quantized space-time lengths.

The length corresponding to particles that do not violate CPT:

$latex r(\alpha^{-1})=\Biggl(\frac{\alpha^{-1}}{4\cdot\pi}\Biggr)^{\frac{1}{2}}$

Alpha is the electromagnetic fine structure constant

Two lengths for the two radii of a torus in 7 dimensions, the compactified Kaluza-Klein dimensions:

$latex L_{p}(7D)=\Biggl(\frac{2\cdot(2\pi)^{7}}{V_{T}(7D)}\Biggr)^{\frac{1}{7+2}}$

$latex R_{BH}(7D)=\Biggl(\frac{4\cdot(2\pi)^{7}}{(7+1)\cdot V_{T}(7D)}\Biggr)^{\frac{1}{7+1}}$

It is updated:

http://mathbin.net/74445

This is the final expression that explains the OPERA experiment, MINOS, and the speed of neutrinos from supernova SN1987A

If the energy of the muon neutrino beam is lower than 0.105 GeV (the energy of the muon), then the neutrino beam has a speed lower than that of light, since ln(E/E_mu) = 0 when E = E_mu (and is negative below it).

For this reason the SN1987A supernova neutrinos, with energies of about 10 MeV, have a speed lower than that of light.

In this expression, correction terms appear due to vacuum Cherenkov-type radiation:

$latex \frac{V(\nu_{\mu})-c}{c}=\frac{k\cdot\exp\Bigl[-\Bigl(\frac{r(\alpha^{-1})-R_{BH}(7D)}{r(\alpha^{-1})}\Bigr)^{-1}\Bigr]\cdot\ln\Bigl(\frac{E}{E_{\mu}}\Bigr)}{\ln[\alpha^{-1}(m_{\mu})]\cdot\ln\bigl(\frac{m_{\tau}}{m_{e}}\bigr)}$

$latex V(\nu_{\mu})=speed\;of\;muonic\;neutrinos$

$latex c=light\;speed$

$latex k=constant\approxeq\sqrt{30/8}$

$latex E_{\mu}=muon\;energy\;(GeV)$

$latex E=energy\;of\;muonic\;neutrinos\;(GeV)$

$latex \alpha^{-1}(m_{\mu})=electromagnetic\;coupling\;at\;the\;muon\;mass\;scale$

$latex m_{\tau}=tau\;mass$

$latex m_{e}=electron\;mass$

The equations are here:

Superluminal neutrinos (bounded quantum Lorentz violation)

http://mathbin.net/74445

It appears that time of flight measurements of photons were not properly accounted for:

http://www.technologyreview.com/blog/arxiv/27260/

LC

I also have my serious doubts.

You can even read about GPS, SR & GR on Wikipedia – it seems incredible that the OPERA Collaboration should have overlooked this, all the more so as their result almost certainly contradicts relativity theory.

I think this is very unlikely to be the explanation.

When the times from GPS measurements are used they take into account the effects of GR, so it seems ridiculous to say that they have neglected an effect of SR. The timing provided by GPS satellites is constantly compared with a very accurate ground-based clock, and any necessary corrections are sent back to the satellites. I only read it quickly, but unless I have badly misunderstood it, this would show up any error of the sort described in that article.

The above is based on Ronald A.J. van Elburg: “Times of Flight between a Source and a Detector observed from a GPS satelite”, http://arxiv.org/abs/1110.2685v1 (abstract), which *may* explain a 64 ns diff. in timing – assuming the OPERA Collaboration has forgotten what SR is about.

This paper is incorrect.

Consider the length contraction for a GPS satellite as seen by an observer on Earth (the clock): applying the satellite speed, this gives a contracted baseline length as seen from the clock (observer).

The speed of a GPS satellite is about 14,000 km/h (13,946 km/h), i.e. roughly 3,900–4,000 m/s.

The contraction is a factor of:

sqrt(1 – (4000/2.99792458E8)^2) ≈ 0.999999999911

730 km × 0.999999999911 ≈ 729.999999935 km

730 km – 729.999999935 km ≈ 6.5×10^-5 m, far less than the 18 m deduced from the OPERA experiment
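The contraction arithmetic is easy to verify directly (note that the square root roughly halves the (v/c)² term, so the factor is ≈ 0.99999999991; either way the conclusion that the effect is far below 18 m stands):

```python
import math

C = 2.99792458e8   # m/s, speed of light
V_SAT = 4.0e3      # m/s, approximate GPS orbital speed from the comment
L = 730.0e3        # m, approximate CERN-Gran Sasso baseline

# Lorentz contraction factor sqrt(1 - v^2/c^2)
gamma_inv = math.sqrt(1 - (V_SAT / C) ** 2)
contraction = L * (1 - gamma_inv)   # metres shaved off the baseline

print(gamma_inv)     # ~0.99999999991
print(contraction)   # ~6.5e-5 m: nowhere near the 18 m needed
```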

I checked van Elburg’s calc, and got this result:

c= 299 792 458 m/s

OPERA paper:

ΔFlight time(neutrino) = (60.7 ±6.9 (stat.) ±7.4 (system.)) = {46.6 ; 75} ns

BaselineGround = 731 278±0.2 m

van Elburg’s v = 3 900 m/s

ΔT(sat) = BaselineGround * SQRT(1 – SatSpeed² / LightSpeed²) / (LightSpeed + SatSpeed) = 0.0024392949 s

BaselineSat = ΔT(sat) * c = 731 268.4868666 m

ΔBaseline = 9.51 m

ΔBaseline * 2 = 19.02 m

ΔFlightTime(3 900 m/s) = (9.51 / c) = 31.7 ns

ΔFlightTime(3 900 m/s) * 2 = 63.5 ns = van Elburg’s Δtime

Any v within the below limits are also within the ΔFlight time(neutrino) errorbar:

ΔFlightTime(2 870 m/s) = (14.00 / c) = 46.7 ns

ΔFlightTime(4 600 m/s) = (22.44 / c) = 74.9 ns

Seems to me van Elburg got the calc right.

Whether his assumption, that the OPERA team has forgotten to do the Lorentz calc, is also correct stands to be seen.
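For reference, the whole correction claimed by van Elburg reduces, to leading order, to 2·L·v/c² (one L·v/c² term per endpoint); a quick sketch with the numbers quoted above:

```python
# Leading-order size of van Elburg's proposed correction: dt = 2*L*v/c^2,
# using the baseline and satellite speed quoted in the thread.
C = 2.99792458e8   # m/s, speed of light
L = 731278.0       # m, OPERA ground baseline
V = 3.9e3          # m/s, van Elburg's satellite speed

dt = 2 * L * V / C ** 2
print(dt * 1e9)    # ~63.5 ns, comparable to the OPERA anomaly
```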

Isn’t it possible that the correction associated with this calculation is already included in the "GPS common view" procedure that PTB used for OPERA to synchronize the CERN and LNGS times to within an accuracy of 2 ns, as stated in the OPERA paper?

gioR, I will be very surprised if the OPERA team hasn’t made the same simple calc – guess I’ll turn to voodoo as more reliable than science if they haven’t :-D

What does surprise me is that they use a moving satellite instead of a geostationary one, or better, perform sync using Time of Return through an optical fibre (they don’t have a fibre connection, but given the cost of the whole operation and its importance, a couple of million Euro shouldn’t really scare them off).

2.99792458×10^8 m/s × 60 ns ≈ 18 m, i.e. a fractional effect of 2.48×10^-5

GPS clocks, terrestrial receivers and orbit corrections already offset this effect:

http://www.phys.lsu.edu/mog/mog9/node9.html

http://www.physicsmyths.org.uk/gps.htm

relativity.livingreviews.org/Articles/lrr-2003-1/

tycho.usno.navy.mil/ptti/1996/Vol%2028_16.pdf

etc, etc

A signal faster than light is perfectly possible if carried by a group velocity, and if the neutrino has mass it must be represented as a wave packet, so this possibility pertains. Cohen and Glashow show the Standard Theory failing here, which then poses neutrino mass as the factor to be examined for unexpected effects. This reasoning has the virtues of classicism and parsimony, but the last exponent of that style was Slater, who left physics in disgust when the quantum debate turned metaphysical. Yet Y.S. Kim rediscovers the principle through energy-time uncertainty as the relativistic appearance of position-momentum uncertainty: arXiv:quant-ph/9710062v1.
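One caveat on the wave-packet point: for a real (non-imaginary) mass the group velocity v_g = dE/dp = pc²/E stays strictly below c. A minimal check in natural units (the 17 GeV energy and 1 eV mass are my illustrative assumptions):

```python
# For a real mass, E^2 = p^2 c^2 + m^2 c^4 gives v_g = p c^2 / E < c;
# the leading-order deficit is 1 - v_g/c ~ m^2 c^4 / (2 E^2).
def subluminal_deficit(energy_gev, mass_gev):
    """Leading-order 1 - v_g/c for a relativistic massive particle."""
    return mass_gev ** 2 / (2.0 * energy_gev ** 2)

# assumed: 17 GeV beam energy, 1 eV (1e-9 GeV) neutrino mass
print(subluminal_deficit(17.0, 1.0e-9))  # ~1.7e-21: immeasurably small
```

So any superluminal group velocity would have to come from dispersive-medium effects, not from the free-particle dispersion relation itself.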

Oh God, I pray they at CERN will at least read my paper

http://vixra.org/abs/1110.0033

http://hal.archives-ouvertes.fr/hal-00630737/en/

http://hal.archives-ouvertes.fr/INSMI/hal-00630737/fr/

A week ago the world went wild over CERN’s tentative claim that it could make neutrinos travel faster than light. Suddenly, intergalactic tourism and day trips to the real Jurassic Park were back on the menu, despite everything Einstein said. Now, however, a team of scientists at the University of Groningen in the Netherlands reckons it’s come up with a more plausible (and disappointing) explanation of what happened: the GPS satellites used to measure the departure and arrival times of the racing neutrinos were themselves subject to Einsteinian effects, because they were in motion relative to the experiment. This relative motion wasn’t properly taken into account, but it would have decreased the neutrinos’ apparent journey time. The Dutch scientists calculated the error and came up with the 64 nanoseconds. Sound familiar? That’s because it’s almost exactly the margin by which CERN’s neutrinos were supposed to have beaten light. So, it’s Monday morning, Alpha Centauri and medieval jousting tournaments remain as out of reach as ever, and we just thought we’d let you know.

http://www.engadget.com/2011/10/17/remember-those-faster-than-light-neutrinos-great-now-forget-e/?a_dgi=aolshare_facebook

“GPS satellites …” is exactly what Ronald A.J. van Elburg told us in his October, 4th: “Times of Flight between a Source and a Detector observed from a GPS satelite”, http://arxiv.org/abs/1110.2685v1 (abstract), which *may* explain a 64 ns diff. in timing

It is worth recalling what Phil wrote at the end of this blog entry:

“Of course the much simpler explanation is that the experiment has neglected some systematic error, but that is too boring.”

Einstein warned us that Nature may be subtle in her ways but never malicious. Despite what so many would hope for, Special Relativity is going to survive yet another reality check.