The 24-cell and 4 qubits

February 28, 2011

A few days ago Lubos reported on an intriguing new paper by Volker Braun describing how to construct a Calabi-Yau manifold with 6 real dimensions and minimal Hodge numbers using the 24-cell. Such manifolds can be applied to the compactification of superstring theory down to our familiar 4-dimensional spacetime. The predictions for physics based on this particular manifold would be unrealistic, but its discovery is an important step towards understanding the fuller range of possibilities. It is also of considerable mathematical interest in its own right.

I’m not going to say anything more about that paper, but I do want to say something about the 24-cell and its curious relation to 4 qubits, as well as a surprising relationship between the invariants of 4 qubits and the Platonic solids. I found out about these things after talking to Mike Duff and his coworkers on the qubit/black-hole correspondence (Borsten, Dahanayake, Duff, Marrani, Rubens). I think this is a bit more specialized than the kind of stuff I usually report on here, but some of our regular commenters expressed an interest and I am always happy to oblige. For anyone who does not understand the mathematical terms used here, they are all explained on Wikipedia.

The 24-cell is a very special regular polytope in 4-dimensional space. It has the special property of being self-dual in the same sense as the tetrahedron is self-dual in 3 dimensions. It can also be tessellated to fill 4-dimensional space just as the cube can tessellate to fill 3D space. In fact the 24-cell is the only regular polytope in more than 2 dimensions that has both of these properties. The only comparable shapes in this sense are the triangle, square and hexagon in two dimensions.

The vertices of the 24-cell can be plotted in 4D co-ordinates at the 24 points given by

(±1,0,0,0), (0,±1,0,0),(0,0,±1,0),(0,0,0,±1),(±½,±½,±½,±½)

Its dual can be plotted at the points

(±1,±1,0,0),(±1,0,±1,0),(±1,0,0,±1),(0,±1,±1,0),(0,±1,0,±1),(0,0,±1,±1)

As many readers of viXra Log are undoubtedly aware, there are many connected mysteries surrounding the number 24 in mathematics, and the 24-cell is one of the more enigmatic. 24 is famous as the dimension of the Leech lattice, which is connected to the significance of the number in the theory of finite simple groups, especially examples such as the Mathieu groups, the Conway groups and the Fischer groups. The existence of the Leech lattice can be explained in terms of the 24-bit Golay code, which can in turn be constructed using special properties of quadratic residues modulo 23. Alternatively, the Leech lattice is a reduction of the even unimodular lattice in 25+1 dimensions along a null vector, relying on the fact that the sum of the first 24 square numbers is 70². This closely connects together one set of circumstances where the number 24 appears in mathematics.

Then there is a second interlinked set of places where the number 24 shows up, in number theory and the theory of special functions. This includes the Ramanujan discriminant function, a modular form that is the 24th power of the Dedekind eta function. This can be connected to the fact that the value of zeta(-1) is -1/12. It has implications in bosonic string theory, where it is linked to the critical dimension of 26, in which the two-dimensional worldsheet vibrates in the remaining 24 transverse dimensions.

These two sets of places where the number 24 appears, one in group theory and the other in number theory, do not seem to have a causal link. You cannot reason that one implies the other. Yet you can combine the two by compactifying bosonic string theory over the Leech lattice. This was a realisation that led to the famous proof of the monstrous moonshine conjectures and a Fields medal for Richard Borcherds. This much will be familiar to anyone who follows related discussions on the internet and especially if they have read John Baez’s lecture on the number 24. As far as I know there is nothing else that clarifies the mystery of this connection. For example there is nothing that directly links the Golay Code to the Ramanujan Discriminant function except Moonshine.

What about the 24-cell, where does that fit in? According to Baez it relates to the number theory side via elliptic curves, but in a way that involves a group of order 24. Perhaps this is a clue to a missing direct link between number theory and group theory. The connection described by Baez points to the fact that the moduli space of elliptic curves is given by modding out by the group SL(2,3). The vertices of the 24-cell when plotted as quaternions (the Hurwitz unit quaternions) also form a group, and it is the same one, also known as the binary tetrahedral group because it is the double cover of the rotation group of the tetrahedron. This seems very nice, but actually there are only 15 groups of order 24 and only seven that are not direct products of smaller groups, so saying that two structures form the same group of order 24 is only a small factor better than saying that they have the same size. What we really need to find is a more direct way in which the 24-cell relates to elliptic curves.
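The claim that the 24 vertices close under quaternion multiplication is easy to verify directly. Here is a minimal sketch in Python using exact rational arithmetic (the function name `qmul` and the set construction are mine, not from any of the references above):

```python
from fractions import Fraction
from itertools import product

def qmul(p, q):
    # quaternion multiplication on 4-tuples (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

half = Fraction(1, 2)
# the 24 vertices: the 8 unit quaternions +-1, +-i, +-j, +-k
# and the 16 points (+-1/2, +-1/2, +-1/2, +-1/2)
verts = {tuple(Fraction(s) if j == i else Fraction(0) for j in range(4))
         for i in range(4) for s in (1, -1)}
verts |= set(product((half, -half), repeat=4))

assert len(verts) == 24
# closure under multiplication (plus the identity and inverses already
# being in the set) makes this a group of order 24
assert all(qmul(p, q) in verts for p in verts for q in verts)
print("the 24-cell vertices are closed under quaternion multiplication")
```

Since the set is finite, contains 1, and is closed, associativity is inherited from the quaternions and inverses follow automatically, so this confirms the group structure.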

This is where the 4-qubit system comes in. The wavefunction of 4 qubits is represented by a 2x2x2x2 hypermatrix of 16 complex numbers. Local transformations on these qubits take the form of SL(2,C) transformations applied to each qubit independently, so the overall symmetry group of the system is SL(2,C)⁴. To understand the entanglement possibilities for 4 qubits, the first step is to find the polynomial invariants under this group. This is a non-trivial computation, but it can be shown that there are 4 independent invariants of degrees 2, 4, 4 and 6 in the 16 components of the hypermatrix (see e.g. http://arxiv.org/abs/quant-ph/0212069 for a construction). However, there is a special invariant that is a combination of these, known as the hyperdeterminant, which is of degree 24. The hyperdeterminant is a discriminant for the hypermatrix that is zero iff the quadrilinear form constructed from the hypermatrix has singular points where all derivatives vanish. You don’t have to understand the details, just notice that this is another structure where the number 24 has special significance.

It turns out that the 4 qubit hypermatrix is related in a fundamental way to elliptic curves with the hyperdeterminant being related to the Ramanujan discriminant modular form mentioned above. I have described this relationship in detail at http://arxiv.org/abs/1010.4219 so I won’t repeat it here. This makes a direct link between the number 24 that appears as the degree of the hyperdeterminant and its appearance in the theory of modular forms linked to bosonic string theory. After discussing this with Mike Duff I was also able to link the 4 qubit system directly to bosonic strings and I used this in my FQXi essay.

The classification of 4-qubit entanglement is a tricky business. The SL(2,C)⁴ transformation group has 12 independent parameters, so it should be possible to use these transformations to reduce any state with its 16 components to representative states parameterised by just 16-12=4 variables. A clean solution was provided by Verstraete et al in http://arxiv.org/abs/quant-ph/0109033 . They found nine parameterised classes of states, where the largest class, known as G_abcd, has 4 parameters and includes all states whose hyperdeterminant is non-zero. For present purposes I am only interested in this class. It takes a form that can be written in qubit terms as

Φ = x(|0000> + |1111>) + y(|0011> + |1100>) + z(|0110> + |1001>) + t(|1010> + |0101>)

For this class of states we can work out any of the invariants, including the hyperdeterminant, which is going to be a polynomial of degree 24 in the four variables x, y, z and t. This has the potential to be a complicated expression; after all, the full hyperdeterminant in 16 variables is an expression with 2,894,276 terms. In practice the hyperdeterminant of the reduced state simplifies, and when you work it out you will notice that the result factorises into 24 simple linear factors

Det(Φ) = x²y²z²t²(x+y+z+t)²(x+y+z-t)²(x+y-z+t)²(x+y-z-t)²(x-y+z+t)²(x-y+z-t)²(x-y-z+t)²(x-y-z-t)²

These factors correspond in an obvious way to the Hurwitz quaternions and therefore the vertices of the 24-cell. This provides a direct link between the number of vertices in the 24-cell and the degree of the hyperdeterminant for 4-qubits which in turn is linked to the exponents in modular forms and the critical dimension of bosonic string theory, just as we wanted.
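The correspondence can be checked concretely: the product of the 24 linear forms ⟨v, (x,y,z,t)⟩ over the 24-cell vertices v reproduces the factorised hyperdeterminant above, up to an overall normalisation of 1/2^16 coming from the half-integer vertices. A sketch in Python, comparing both sides at sample points with exact arithmetic (the helper names are mine):

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)
# 24-cell vertices: 8 unit vectors and 16 half-integer sign combinations
verts = [tuple(Fraction(s) if j == i else Fraction(0) for j in range(4))
         for i in range(4) for s in (1, -1)]
verts += list(product((half, -half), repeat=4))

def lhs(x, y, z, t):
    # product of the 24 linear forms <v, (x, y, z, t)>
    p = Fraction(1)
    for v in verts:
        p *= v[0]*x + v[1]*y + v[2]*z + v[3]*t
    return p

def det_phi(x, y, z, t):
    # the factorised degree-24 hyperdeterminant of the G_abcd state
    p = (x*y*z*t)**2
    for s1, s2, s3 in product((1, -1), repeat=3):
        p *= (x + s1*y + s2*z + s3*t)**2
    return p

for pt in [(1, 2, 3, 5), (7, 11, 2, 1), (3, 1, 4, 9)]:
    x, y, z, t = map(Fraction, pt)
    assert lhs(x, y, z, t) == det_phi(x, y, z, t) / 2**16
print("product over 24-cell vertices matches Det(Phi) / 2^16")
```

The factor of 1/2^16 is just a convention: each of the 16 half-integer vertices contributes a form (±x±y±z±t)/2, and the vertices come in antipodal pairs v, -v whose signs cancel, leaving each distinct factor squared.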

Is there a better way to understand why the hyperdeterminant factorises so conveniently? Yes there is. Although all 12 dimensions of the group SL(2,C)⁴ were used to reduce the hypermatrix to the class G_abcd, there remains a discrete subgroup that maps states of G_abcd (in the form above) onto themselves, so this discrete subgroup provides a group of linear transformations on the 4D space parameterised by x, y, z and t. This subgroup turns out to be the Weyl group of D4, whose system of root vectors is the 24-cell. The polynomial invariants of this reflection group as functions of the four parameters x, y, z and t are also of degrees 2, 4, 4 and 6, and they correspond to the 4 qubit invariants. The Weyl group is the symmetry group of the root system, so it just permutes the 24 factors in the hyperdeterminant, making it an obvious invariant. Notice that D4 as a Lie algebra is that of SO(8), or its split form SO(4,4), which is the group used to construct the 4-qubit/black-hole correspondence. This was what Borsten et al used to classify the entanglement of four qubits using a classification of nilpotent orbits that had already been worked out for black holes in M-theory. Their answer matches the Verstraete classification nicely.

We can go one step further and extend the group of transformations to include permutations of the 4 qubits. This gives a larger discrete group acting on G_abcd, which can be identified as the Weyl group of F4. The corresponding root system is now the 48 vertices of a 24-cell combined with its dual. The polynomial invariants of this group are of degrees 2, 6, 8 and 12, and they correspond to the primitive invariants of the hypermatrix that are symmetric under permutations of the qubits as well as the usual SL(2,C)⁴ transformations.

This leads to one last curious correspondence that I want to point out. The degrees of the primitive invariants of the hypermatrix (2, 4, 4, 6) are not trivial to work out, but can you see what they are related to? Think of a tetrahedron with 4 vertices, 6 edges and 4 faces. The remaining number 2 corresponds to the inside and outside of the tetrahedron, which can be regarded as the two three-dimensional regions that cover space in combination with the vertices, edges and faces. Remember that the 24-cell as a group is the double cover of the rotation group of the tetrahedron, so there is a connection. For the invariants that are symmetric under permutations, the larger root system of 48 vectors from the two 24-cells combined also forms a group when the root vectors are regarded as unit quaternions. The full set of unit quaternions forms the group SU(2), which is the double cover of SO(3), so any finite subgroup must be the double cover of some rotation group in 3D. In this case it is the rotation group of the cube or octahedron. This corresponds to the fact that the symmetric invariants for four qubits are of degrees (2, 6, 8, 12) because the cube has 6 faces, 8 vertices and 12 edges (or you can use the dual octahedron).

So despite the fact that the invariants of the 4 qubit system are non-trivial to construct, their polynomial degrees correspond to the geometric elements of three of the Platonic solids. What about the other two regular solids, the dodecahedron and icosahedron? There is another reflection group H4 whose root system corresponds to these solids, and it therefore has invariants of degrees (2, 12, 20, 30). Since this group acts on the same 4D space you can use it to construct four invariants of the 4 qubit system with these degrees, but there is no Lie algebra corresponding to H4 and its significance is not so obvious. However, these three cases are part of a system of mysterious “trinities” as noted by the mathematician Vladimir Arnold. This means that there must be a lot more going on that we don’t really understand yet.

The LHC lost its hump

February 26, 2011

After the LHC restarted beam operations last week the physicists had at least one pleasant surprise. An unknown source of interference dubbed The Hump that had plagued the collider since December 2009 has vanished over the winter shutdown. The reason for its disappearance is as mysterious as its former existence. Nobody knows where it went or whether it will come back.

This is good news because the Hump had been quite a nuisance for the beam operators. When it was around it could destabilize the beam, leading to diminished luminosity or even an unwanted beam dump. Its absence this year should help maximise the collection of physics data.

Already the process of setting up the beam parameters for this year is well under way, with the machine performing as well as it did last year. This year they want to increase the luminosity, and that will require a tighter squeeze of the beams at the intersection points where the protons collide inside the detector experiments. Last year the squeeze was taken down to a beta* of 3.5 m, but this year they want to get it down to 1.5 m. In plain terms that means an improvement by a factor of 3.5/1.5 ≈ 2.3 in the amount of physics data that they can collect. The squeeze is a delicate process performed as a gradual reduction of beta*. The LHC is designed to ultimately reach a squeeze of 0.55 m, but that will only be possible at the design energy of 7 TeV per proton. At the current operating energy of 3.5 TeV per proton, getting down to 1.5 m is quite a challenge. In the first attempts last week the beams were lost at just below 2 m. Another go at getting to 1.5 m is planned for today.

Update: Rumours of the Hump’s disappearance were premature. It suddenly switched back on. Looks like the time travelers had it switched off during the shutdown to minimise the chances of the source being discovered, but they were just a little slow turning it back on again. Must be a traffic jam in the wormhole.

Update (28 Feb 2011): The squeeze to 1.5m was successfully carried out on Saturday afternoon.

Four Reasons Why I like String Theory

February 25, 2011

It is exactly one year since I started this blog, so to celebrate I will give my four top reasons for liking string theory.

This is partly a response to a recent survey on Cosmic Variance which included a question about what likelihood people gave to string theory being correct. With about 170 people responding, about half of them gave string theory 10% or less, and many said 1% or even 0%. Now, science isn’t settled by democratic votes, especially by a random sample of commenters on one particular blog. Nevertheless it is a revealing outcome, and there are plenty of other physicists who think the same. The reasons people gave were roughly along the lines of “it has not had any experimental success after a long time” or “it is unfalsifiable”. I don’t agree that these are real issues, but instead of talking about that I want to review why I think it is still a theory worth being excited about.

(1) My top reason for thinking that string theory is a correct approach to unifying physics is that it provides a consistent perturbative description of particle physics with the inclusion of gravitons, and there is no known alternative. Gravity is a very weak force and spacetime is nearly flat on small scales. There must be some perturbative description of the quantized interaction of particles with gravity as a series of approximations. A direct quantisation of GR cannot do this, but string theory can. Furthermore, it achieves this in a way that did not have to work, but does because of surprising cancellations. There are five consistent string theories in 10 dimensions, which are all related by non-perturbative dualities. The reduction to 4 spacetime dimensions is a consistent process which is now reasonably well understood, except that we don’t know the correct compactification manifold. The only alternative way to get a consistent perturbative theory is possibly from supergravity, but by now we understand that supergravity too is just another limiting case of string theory. Some physicists have suggested that there may be a chance of finding other non-perturbative solutions to the quantum gravity problem, but no complete solution of that type has been found yet. Until it has, this reason alone is a very strong indication that string theory is on the right path.

(2) Supersymmetry! There are many ways that string theory can reduce to low energy particle physics and not all of them would result in observable supersymmetry. On the other hand, supersymmetry is a natural byproduct of string theory and if it does exist in nature at scales currently being probed by the LHC then it can explain several mysteries. These include the hierarchy problem, dark matter, a light Higgs and the convergence of the running coupling constants at the GUT scale where SUSY says they all have a value of around 1/24. In the last few weeks we have seen the exclusion limits for SUSY greatly extended by CMS and ATLAS. They say that if you throw a frog into hot water it will quickly jump out, but if you put it in cold water and gradually heat the water up it will stay there until it is boiled to death. You should not try this experiment at home but it seems like nature is trying it on physicists who like supersymmetry. In the 1980s we thought that supersymmetric partners would have light masses to avoid fine tuning. If this was right they would have been seen at LEP or the Tevatron. Now the LHC has pushed the minimum masses to uncomfortably high values implying quite a lot of fine tuning. The water is heating up but we will stay put because we now know that the multiverse allows for such fine tuning provided it is in the best interests of our existence. Perhaps the higher masses were needed to allow dark matter to form galaxies or some such.

(3) My third best reason for supporting string theory is that it provides a solution to the black hole information paradox via the holographic principle. This is a much more theoretical argument but it is still quite convincing, I find. Although there may never be any evidence for Hawking radiation from black holes, we know theoretically that it has to be there. Some reasoning using semiclassical quantum gravity tells us the laws of entropy for a black hole, and this should remain correct for any complete theory of quantum gravity such as string theory. Further arguments also tell us that the rules of thermodynamics must obey a holographic principle to avoid the paradox of thermodynamic information being lost inside a black hole. Again, any theory of quantum gravity worth its salt has to comply. It is therefore a triumph for string theory that the AdS/CFT correspondence shows that string theory does (or can) realize the holographic principle.  It is another indication that string theory is on the right track.

(4) My final reason for liking string theory is that it comes with a multiverse. For some people this is their favourite reason for not liking string theory, and my reasoning for thinking otherwise is partly philosophical, so only people with similar philosophical leanings will agree with me. Ten years ago I did not favour anthropic reasoning. That was because the anthropic principle requires a range of theories that the universe can choose from, so that one customised for intelligent life can be selected. I am comfortable with the Platonic view that all mathematically consistent universes exist and we just inhabit some part of that realm, but in order to explain the symmetries that govern the laws of physics I think you need to invoke a further principle. For me that principle is universality, in the sense of the universal behaviour seen in complex dynamical systems such as critical phenomena. I think there is a universal behaviour of some type in the realm of complex mathematical systems which overwhelms all other possible laws of physics, so that only one unique possibility, complete with all its beautiful symmetries, can be what we experience. You can see that this does not fit well with the anthropic principle. However, there are good indications that the laws of physics are somehow selected to promote intelligent life in a way that would not be consistent with a single unique set of physical laws: contradiction! Luckily the multiverse comes to the rescue in the form of the string landscape. It turns out that string theory does indeed follow from some unique over-arching M-theory, but it can be realized in many forms in lower dimensions by a choice of vacuum determined by the compactification manifold. A wide range of these vacua are stable, and there could be as many as 10^500 of them, plenty enough to account for anthropic reasoning. In my view it is the perfect outcome.

So those are my four best reasons for liking string theory. This does not mean that I don’t value other approaches to quantum gravity. We still need to find its complete non-perturbative formulation, and I am sure that such a thing must exist even if string theory has nothing to do with the laws of physics. Other theories such as Loop Quantum Gravity, Non-Commutative Geometry or Group Field Theory lead to rich mathematical concepts. I see this as a sign that they are telling us something about our world, but I think you have to look for what it says about possibilities for non-perturbative string theory. For example, Loop Quantum Gravity tells us that knot polynomials and spin networks should be important. I like the fact that recently Witten has explored implications of a higher-dimensional generalisation of the knot polynomials (Khovanov homology) for branes from M-theory. This is the kind of outcome I expect from alternative approaches.

So what of the problems people say are issues for string theory? I see the multiverse landscape as an asset, not a problem. It means that string theory cannot tell us much about low energy physics, so we will have to look for Planck scale effects instead. Such predicted effects may not be known until the non-perturbative side of string theory is understood, and after that it may be a long time before technology allows us to test them. That, I am afraid, is the nature of the game. We have no automatic right to expect nature to be kind to us and provide an easy test of any theory of quantum gravity. We are suddenly in a position where almost anything we can observe seems to be covered by the standard model + general relativity, so it should be no surprise that testing string theory is very difficult. Any other theory of quantum gravity is likely to have the same problem.

The LHC starts its 2011 run

February 19, 2011

This evening the Large Hadron Collider started circulating proton beams in the main ring to start the 2011 runs, two days ahead of schedule. Hopefully first collisions are not too far away.

String Theory and Partition Numbers

February 12, 2011

Recently there was an intriguing breakthrough in the study of partition numbers. Partition numbers P(n) count the number of ways of expressing n as a sum of positive integers. E.g. P(4) = 5 because 4 = 1+1+1+1 = 1+1+2 = 1+3 = 2+2 = 4. The sequence of partition numbers goes 1, 1, 2, 3, 5, 7, 11, 15, 22, … It starts off wanting to be the Fibonacci numbers, but after the first five it changes its mind and gives the first five prime numbers before degenerating into a rapidly increasing sequence of interesting numbers.
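If you want to play with the sequence yourself, it is easy to generate with a standard dynamic-programming recurrence over the allowed parts. A minimal sketch in Python (the function name is mine):

```python
def partitions(n_max):
    # p[n] counts partitions of n; introduce parts 1..n_max one at a time,
    # so p[n] accumulates the ways to build n from parts seen so far
    p = [1] + [0] * n_max
    for part in range(1, n_max + 1):
        for n in range(part, n_max + 1):
            p[n] += p[n - part]
    return p

print(partitions(8))  # [1, 1, 2, 3, 5, 7, 11, 15, 22]
```

This is the same idea as counting ways to make change from coins of every denomination, and it runs in O(n²) time, good enough for the first few thousand values.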

There are lots of well known things that can be said about the partition numbers, but I’ll leave that to Ken Ono, who is the main figure in the new discovery, actually two discoveries. The first discovery is a new finite formula for the partition numbers and the second is a new explanation for some congruence relations discovered by Ramanujan. I highly recommend this low-level lecture by Ken Ono as an introduction to the new finds.

One thing that is not mentioned in all the recent news coverage is the important connection between partition numbers and string theory, which is very easy to see even at a very basic level. From the theory of musical harmonics you know that a string has vibration modes labelled by integers k, whose frequency is ω_k = kα for some constant α that depends on things like the tension in the string. When a string is quantized it can be treated like a set of decoupled harmonic oscillators with energy levels E_k = (1/2 + m_k)ħω_k, where the sequence of non-negative integers m_k labels the eigenstates of the oscillators. So the total energy is given by

E = Σ_k (1/2 + m_k)ħkα

The zero-point energy is E_0 = (1/2)ħα(1+2+3+4+…). We can either ignore this as an irrelevant constant while pretending not to notice that it is infinite, or we can use zeta regularisation to deduce that 1+2+3+4+… = ζ(-1) = -1/12. In any case, what we are really interested in is the rest of the sum, and to understand it we just need a simple trick. Write m_k k = (k+k+…+k) (m_k times). Then you will immediately notice that the number of states with energy E_n = E_0 + nħα is exactly P(n), the partition number of n. This also carries over to the relativistic bosonic string in 26-dimensional spacetime, except that there each mode comes in 24 copies, because the one-dimensional string can vibrate independently in any of the 24 space dimensions transverse to the string.
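The counting claim is easy to check by brute force: enumerate the occupation numbers m_k with Σ k·m_k = n and compare the count with the known partition numbers. A small Python sketch (helper name is mine):

```python
from itertools import product

def count_states(n):
    # enumerate occupation numbers m_k of the modes k = 1..n and count
    # configurations whose total excitation sum(k * m_k) equals n
    count = 0
    for ms in product(*(range(n // k + 1) for k in range(1, n + 1))):
        if sum(k * m for k, m in zip(range(1, n + 1), ms)) == n:
            count += 1
    return count

# matches the partition numbers P(1)..P(8) = 1, 2, 3, 5, 7, 11, 15, 22
assert [count_states(n) for n in range(1, 9)] == [1, 2, 3, 5, 7, 11, 15, 22]
print("state count at excitation level n equals P(n)")
```

Each choice of occupation numbers is one way of writing n as a sum of parts k repeated m_k times, which is exactly a partition of n.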

The partition function for each transverse direction of the bosonic string is therefore given by

Z = Σ_n P(n) exp( -(24n-1)ħα/24 )

Perhaps that’s why they call it the partition function :)

LHC cooled down for 2011 run

February 11, 2011

The Large Hadron Collider’s cryogenic systems have now cooled all the superconducting magnets down to 1.9 kelvin as required for this year’s physics runs. This means that the main accelerator ring is essentially ready for beam injection, which is scheduled to start on 21st February.

The plan will be to bring the collider back to last year’s peak luminosity of 0.2/nb/s as quickly as possible so that the experiments can start to add significant data to what they have already collected. The bunch spacing used will be 75 ns instead of the 150 ns used before. With this closer packing it should be possible to circulate about 900 bunches in each beam, more than doubling the beam intensity. Further luminosity increases will be achieved with a tighter squeeze of the beams at the collision points. Overall they hope to progress slowly towards 1.0/nb/s luminosity and then just work on maximum running efficiency.

Some problems identified towards the end of last year will have to be dealt with. These include Unidentified Falling Objects in the beam pipe and a build-up of electron clouds. To clean the pipes they will “scrub” with extra high intensity beams at lower energy, using 50 ns spacing to pack in even more bunches.

As last year, the proton physics operations will end in November to allow time for some more heavy ion collisions. This year they will aim for a significant increase in luminosity for these collisions. At the end of the extended proton physics run in 2012 they may try out collisions of protons on heavy ions.

As last year, we plan to closely follow the progress on viXra Log.

How Earthlike are Kepler’s Latest Exoplanets?

February 3, 2011

I am sure everyone is aware of the latest release of exoplanet data from Kepler, which has multiplied the number of known exoplanet candidates by a factor of about five. Kepler detects its exoplanets by looking for stellar transits, so it is only going to see them in the rare cases where we are in alignment with the plane of the star’s planetary system. Luckily it can look at a lot of stars in a patch of the sky all at the same time. In its first few months it has found well over a thousand by this method. Some of these may prove to be glitches and must be verified either by ground-based observations or by repeat transits observed from Kepler.

So which is the most Earthlike planet they have seen? To answer this you need to peruse the full set of data, which can be found here. Even then the answer depends on what you consider to be the most important parameters defining an Earthlike planet. After due consideration I am going to go for Kepler-268, which has an estimated radius of 1.75 times the Earth’s, a year of 110 days, and sits at 0.41 astronomical units from its parent star. This should give it an estimated surface temperature of 295 kelvin, or 22 degrees Celsius. Admittedly it is a bit large, so its gravity is going to be stronger than we would probably enjoy.
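For the curious, the quoted temperature can be reverse-engineered with the standard zero-albedo equilibrium-temperature formula. The stellar luminosity below is inferred from the quoted numbers, not taken from the Kepler data release, so treat this as an illustrative sketch under that assumption:

```python
import math

T_EQ_1AU = 278.6  # K: zero-albedo equilibrium temperature at 1 AU from a 1 L_sun star

def t_eq(l_star, d_au):
    # zero-albedo equilibrium temperature for stellar luminosity l_star
    # (in solar units) at orbital distance d_au (in AU):
    # T = T_EQ_1AU * L^(1/4) / sqrt(d)
    return T_EQ_1AU * l_star ** 0.25 / math.sqrt(d_au)

# inverting the formula for the quoted 295 K at 0.41 AU gives the implied
# luminosity of the parent star (an inference, not a measured value)
l_implied = (295.0 / T_EQ_1AU) ** 4 * 0.41 ** 2
print(round(l_implied, 2))  # about 0.21 L_sun, a star somewhat dimmer than the Sun
```

A Sun-like star at 0.41 AU would give a much hotter 435 K or so, which is why the quoted 295 K implies a dimmer parent star; greenhouse warming from any atmosphere would push the true surface temperature above the equilibrium value.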

The estimated temperature that NASA uses is based on the amount of received radiation. I’m not sure if there is any correction for greenhouse effects, which depend on the density and content of its unknown atmosphere. In any case it is at least reasonable to assume that its rotation will not be locked to its star, so it has a chance of being habitable with liquid water present. On the other hand, its high gravity may mean it retains too much atmosphere and suffers from permanent clouds, making its surface very hot and high pressured.

This is just the first big release of data from Kepler and more can be expected, especially since many Earthlike planets will not have completed a full revolution of their star in the time it has been looking. The results so far suggest that when all the data is collected there should be some candidates for really Earthlike planets, at least in terms of size and ambient temperature. Once their locations are known it will be the job of other telescopes to look at them in more detail. This will include the best Earth-based telescopes using adaptive optics and interferometry to focus in on the systems. A little later the James Webb Space Telescope should take over, if and when it successfully reaches its position to start observing in space.