Fundamental Physics 2013: What is the Big Picture?

November 26, 2013

2013 has been a great year for viXra. We already have more than 2000 new papers, taking the total to over 6000. Many of them are about physics, but other areas are also well covered. The range is bigger and better than ever and could never be summarised, so as the year draws to its end, here instead is a snapshot of my own view of fundamental physics in 2013. Many physicists are reluctant to speculate about the big picture and how they see it developing. I think it would be useful if they were more willing to stick their necks out, so this is my contribution. I don’t expect much agreement from anybody, but I hope that it will stimulate some interesting discussion and thoughts. If you don’t like it you can always write your own summary of physics or any other area of science and submit it to viXra.


The discovery of the Higgs boson marks a watershed moment for fundamental physics. The standard model is complete but many mysteries remain. Most notably the following questions are unanswered and appear to require new physics beyond the standard model:

  • What is dark matter?
  • What was the mechanism of cosmic inflation?
  • What mechanism led to the early production of galaxies and structure?
  • Why does the strong interaction not break CP?
  • What is the mechanism that led to matter dominating over anti-matter?
  • What is the correct theory of neutrino mass?
  • How can we explain fine-tuning of e.g. the Higgs mass and cosmological constant?
  • How are the four forces and matter unified?
  • How can gravity be quantised?
  • How is information loss avoided for black holes?
  • What is the small scale structure of spacetime?
  • What is the large scale structure of spacetime?
  • How should we explain the existence of the universe?

It is not unreasonable to hope that some further experimental input may provide clues that lead to some new answers. The Large Hadron Collider still has decades of life ahead of it while astronomical observation is entering a golden age with powerful new telescopes peering deep into the cosmos. We should expect direct detection of gravitational waves and perhaps dark matter, or at least indirect clues in the cosmic ray spectrum.

But the time scale for new discoveries is lengthening and the cost is growing. It might be unrealistic to imagine the construction of new colliders on larger scales than the LHC. A theist vs atheist divide increasingly polarises Western politics and science. It has already pushed the centre of big science out of the United States over to Europe. Just as the jet stream invariably blows weather systems across the Atlantic, so too will these political ideals follow, albeit at a slower pace. It is no longer sufficient to justify fundamental science as a pursuit of pure knowledge when the men with the purse strings see it as an attack on their religion. The future of fundamental experimental science is beginning to shift further East, and its hopes will be found in Asia along with the economic prosperity that depends on it. The GDP of China is predicted to surpass that of the US and the EU within 5 years.

But there is another avenue for progress. While experiment is limited by the reality of global economics, theory is limited only by our intellect and imagination. The beasts of mathematical consistency have been harnessed before to pull us through. We are not limited to just what we can see directly; there are many routes to explore. Without the power of observation the search may be longer, but the constraints imposed by what we have already seen are tight. Already we have strings, loops, twistors and more. There are no dead ends. The paths converge back together, taking us along one main highway that will lead eventually to an understanding of how nature works at its deepest levels. Experiment will be needed to show us what solutions nature has chosen, but the equations themselves are already signposted. We just have to learn how to read them and follow their course. I think it will require open minds willing to move away from the voice of their intuition, but the answer will be built on what has come before.

Thirteen years ago at the turn of the millennium I thought it was a good time to make some predictions about how theoretical physics would develop. I accept the mainstream views of physicists but have unique ideas of how the pieces of the jigsaw fit together to form the big picture. My millennium notes reflected this. Since then much new work has been done and some of my original ideas have been explored by others, especially permutation symmetry of spacetime events (event symmetry), the mathematical theory of theories, and multiple quantisation through category theory. I now have a clearer idea about how I think these pieces fit in. On the other hand, my idea at the time of a unique discrete and natural structure underlying physics has collapsed. Naturalness has failed in both theory and experiment and is now replaced by a multiverse view which explains the fine-tuning of the laws of the universe. I have adapted and changed my view in the face of this experimental result. Others have refused to.

Every theorist working on fundamental physics has a set of ideas or principles that guides their work and each one is different. I do not suppose that I have a gift of insight that allows me to see possibilities that others miss. It is more likely that the whole thing is a delusion, but perhaps there are some ideas that could be right. In any case I believe that open speculation is an important part of theoretical research and even if it is all wrong it may help others to crystallise their own opposing views more clearly. For me this is just a way to record my current thinking so that I can look back later and see how it succeeded or changed.

The purpose of this article then is to give my own views on a number of theoretical ideas that relate to the questions I listed. The style will be pedagogical without detailed analysis, mainly because such details are not known. I will also be short on references; after all, nobody is going to cite this. Here then are my views.

Causality

Causality has been discussed by philosophers since ancient times and many different types of causality have been described. In terms of modern physics there are only two types of causality to worry about. Temporal causality is the idea that effects are due to prior causes, i.e. all phenomena are caused by things that happened earlier. Ontological causality is about explaining things in terms of simpler principles. This is also known as reductionism. It does not involve time and it is completely independent of temporal causality. What I want to talk about here is temporal causality.

Temporal causality is a very real aspect of nature and it is important in most of science. Good scientists know that it is important not to confuse correlation with causation. Proper studies of cause and effect must always use a control to eliminate this easy mistake. Many physicists, cosmologists and philosophers think that temporal causality is also important when studying the cosmological origins of the universe. They talk of the evolving cosmos, eternal inflation, or numerous models of pre-big-bang physics or cyclic cosmologies. All of these ideas are driven by thinking in terms of temporal causality. In quantum gravity we find Causal Sets and Causal Dynamical Triangulations, more ideas that try to build in temporal causality at a fundamental level. All of them are misguided.

The problem is that we already understand that temporal causality is linked firmly to the thermodynamic arrow of time. This is a feature of the second law of thermodynamics, and thermodynamics is a statistical theory that emerges at macroscopic scales from the interactions of many particles. The fundamental laws themselves can be time reversed (combined with CP, to be exact). Physical law should not be thought of as a set of initial conditions plus dynamical equations that determine evolution forward in time. It is really a sum over all possible histories between past and future boundary states. The fundamental laws of physics are time symmetric and temporal causality is emergent. The origin of time’s arrow can be traced back to the influence of the big bang singularity, where complete symmetry dictated low entropy.
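
Schematically, the contrast is between evolving data forward from an initial time and weighting whole histories that interpolate between boundary states:

$$\langle \phi_{\text{out}} | \phi_{\text{in}} \rangle \;=\; \int_{\phi_{\text{in}}}^{\phi_{\text{out}}} \mathcal{D}\phi \; e^{iS[\phi]/\hbar}$$

The action $S$ is unchanged when $t \to -t$ is combined with C and P, so nothing at this level distinguishes past from future; the arrow has to come from the boundary conditions.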

The situation is even more desperate if you are working on quantum gravity or cosmological origins. In quantum gravity space and time should also be emergent; the very description of temporal causality then ceases to make sense because there is no time in which to express it. In cosmology we should not think of explaining the universe in terms of what caused the big bang or what came before. Time itself begins and ends at spacetime singularities.

Symmetry

When I was a student around 1980, symmetry was a big thing in physics. The twentieth century started with the realisation that spacetime symmetry was the key to understanding gravity. As it progressed, gauge symmetry appeared, eventually explaining the other forces. The message was that if you knew the symmetry group of the universe and its action then you knew everything. Yang-Mills theory only settled the bosonic sector, but with supersymmetry even the fermionic side would follow, perhaps uniquely.

It was not to last. When superstring theory replaced supergravity, the pendulum began its swing back, taking away symmetry as a fundamental principle. It was not that superstring theory did not use symmetry: it had the old gauge symmetries, supersymmetries, new infinite dimensional symmetries, dualities, mirror symmetry and more, but there did not seem to be a unifying symmetry principle from which it could all be derived. There was even an argument called Witten’s Puzzle, based on topology change, that seemed to rule out a universal symmetry. The spacetime diffeomorphism group is different for each topology, so how could there be a bigger symmetry independent of the solution?

The campaign against symmetry strengthened as the new millennium began. Now we are told to regard gauge symmetry as a mere redundancy introduced to make quantum field theory appear local. Instead we are to embrace a more fundamental formalism based on the amplituhedron, in which gauge symmetry has no presence.

While I embrace the progress in understanding that string theory and the new scattering amplitude breakthroughs are bringing, I do not accept the view that symmetry has lost its role as a fundamental principle. In the 1990s I proposed a solution to Witten’s puzzle that takes the universal symmetry of spacetime to be the permutation symmetry of spacetime events. This can be enlarged to large-N matrix groups to include gauge theories. In this view spacetime is emergent, like the dynamics of a soap bubble formed from intermolecular interactions. The permutation symmetry of spacetime is also identified with the permutation symmetry of identical particles or instantons or particle states.
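
The enlargement is easy to state: a permutation $\sigma$ of $N$ events is represented by a matrix, and the permutation matrices sit inside the orthogonal and unitary groups that matrix models use,

$$(P_\sigma)_{ij} = \delta_{i\,\sigma(j)}, \qquad S_N \subset O(N) \subset U(N)$$

so an event-symmetric theory can be embedded in a large-N matrix theory that contains gauge symmetry.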

My idea was not widely accepted, even when matrix models for M-theory were proposed shortly afterwards that embodied the principle of event symmetry exactly as I had envisioned. Later the same idea was reinvented in a different form for quantum graphity, with permutation symmetry over points in space in random graph models, but the fundamental idea is still not widely recognised.

While the amplituhedron removes the usual gauge symmetry, it introduces new dual conformal symmetries described by Yangian algebras. These are quantum symmetries unseen in the classical super-Yang-Mills theory, but they combine permutation symmetry over states with spacetime symmetries in the same way as event-symmetry. In my opinion different dual descriptions of quantum field theories are just different solutions to a single pregeometric theory with a huge and pervasive universal symmetry. The different solutions preserve different sectors of this symmetry. When we see different symmetries in different dual theories we should not conclude that symmetry is less fundamental. Instead we should look for the greater symmetry that unifies them.

After moving from permutation symmetry to matrix symmetries I took one further step. I developed algebraic symmetries in the form of necklace Lie algebras with a stringy feel to them. These have not yet been connected to the mainstream developments but I suspect that these symmetries will be what is required to generalise the Yangian symmetries to a string theory version of the amplituhedron. Time will tell if I am right.

Cosmology

We know so much about cosmology, yet so little. The cosmic horizon limits our view to an observable universe that seems vast but may be a tiny part of the whole. The heat of the big bang draws an opaque veil over the first few hundred thousand years of the universe. Most of the matter around us is dark and hidden. Yet within the region we see, the ΛCDM standard model accounts well enough for the formation of galaxies and stars. Beyond the horizon we can reasonably assume that the universe continues the same for many more billions of light years, and the early big bang back to the first few minutes or even seconds seems to be understood.

Cosmologists are conservative people. Radical changes in thinking such as dark matter, dark energy, inflation and even the big bang itself were only widely accepted after observation forced the conclusion, even though evidence had built up over decades in some cases. Even now many happily assume that the universe extends to infinity looking the same as it does around here, that the big bang is a unique first event in the universe, that spacetime has always been roughly smooth, that the big bang started hot, and that inflation was driven by scalar fields. These are assumptions that I question, and there may be other assumptions that should be questioned. These are not radical ideas. They do not contradict any observation; they just contradict the dogma that too many cosmologists live by.

The theory of cosmic inflation was one of the greatest leaps in imagination that has advanced cosmology. It solved many mysteries of the early universe at a stroke, and its predictions have been beautifully confirmed by observations of the background radiation. Yet the mechanism that drives inflation is not understood.

It is assumed that inflation was driven by a scalar inflaton field. The Higgs field is mostly ruled out (exotic couplings to gravity notwithstanding), but it is easy to imagine that other scalar fields remain to be found. The problem lies with the smooth exit from the inflationary period. A scalar inflaton drives a de Sitter universe. What would coordinate a graceful exit to a nice smooth universe? Nobody knows.

I think the biggest clue is that the standard cosmological model has a preferred rest frame defined by comoving galaxies and the cosmic background radiation. It is not perfect on small scales, but over hundreds of millions of light years it appears rigid and clear. What was the origin of this reference frame? A de Sitter inflationary model does not possess such a frame, yet something must have coordinated its emergence as inflation ended. These ideas simply do not fit together if the standard view of inflation is correct.

In my opinion this tells us that inflation was not driven by a scalar field at all. The Lorentz geometry during the inflationary period must have been spontaneously broken by a vector field with a non-zero component pointing in the time direction. Inflation must have evolved in a systematic and homogeneous way through time while keeping this field’s direction constant over large distances, smoothing out any deviations as space expanded. The field may have been a fundamental gauge vector or a composite condensate of fermions with a non-zero vector expectation value in the vacuum. Eventually a phase transition ended the symmetry breaking phase and Lorentz symmetry was restored to the vacuum, leaving a remnant of the broken symmetry in the matter and radiation that then filled the cosmos.

The required vector field may be one we have not yet found, but some of the required features are possessed by the massive gauge bosons of the weak interaction. The mass term for a vector field can provide an instability favouring timelike vector fields because the signature of the metric reverses sign in the time direction. I am by no means convinced that the standard model cannot explain inflation in this way, but the mechanism could be complicated to model.
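
A minimal sketch of that instability: with metric signature $(+,-,-,-)$ a vector field $A_\mu$ with a wrong-sign mass term has potential energy

$$V = -\tfrac{1}{2} m^2 A_\mu A^\mu = -\tfrac{1}{2} m^2 \left( A_0^2 - |\vec{A}|^2 \right)$$

which is lowered by a timelike expectation value $\langle A_0 \rangle \neq 0$ but raised by a spacelike one; a quartic term would then stabilise the vacuum at a finite timelike value.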

Another great mystery of cosmology is the early formation of galaxies. As ever more powerful telescopes have penetrated back towards times when the first galaxies were forming, cosmologists have been surprised to find active galaxies rapidly producing stars, apparently with supermassive black holes ready-formed at their cores. This contradicts the predictions of the cold dark matter model according to which the stars and black holes should have formed later and more slowly.

The conventional theory of structure formation is very Newtonian in outlook. After baryogenesis the cosmos was full of gas with small density fluctuations left over from inflation. As radiation decoupled, these anomalies caused the gas and dark matter to gently coalesce under their own weight into clumps that formed galaxies. This would be fine except for the observation of supermassive black holes in the early universe. How did they form?

I think that the formation of these black holes was driven by large scale gravitational waves left over from inflation rather than by density fluctuations. As the universe slowed its inflation there would be parts that slowed a little sooner and others a little later. Such small differences would have been amplified by the inflation, leaving a less than perfectly smooth universe for matter to form in. As the dark matter followed geodesics through these waves in spacetime it would be focused, just as light on the bottom of a swimming pool is focused by surface waves into intricate patterns. At the caustics the dark matter would come together at high speed to be compressed into structures along lines and surfaces. Large black holes would form at the sharpest focal points and along strands defined by the caustics. The stars and remaining gas would then gather around the black holes, pulled in by their gravitation to form the galaxies. As the universe expanded the gravitational waves would fade, leaving the structure of galactic clusters to mark where they had been.

The greatest question of cosmology asks how the universe is structured on large scales beyond the cosmic horizon. We know that dark energy is making the expansion of the universe accelerate so it will endure for eternity, but we do not know if it extends to infinity across space. Cosmologists like to assume that space is homogeneous on large scales, partly because it makes cosmology simpler and partly because homogeneity is consistent with observation within the observable universe. If this is assumed then the question of whether space is finite or infinite depends mainly on the local curvature. If the curvature is positive then the universe is finite. If it is zero or negative the universe is infinite, unless it has an unusual topology formed by tessellating polyhedra larger than the observable universe. Unfortunately observation fails to tell us the sign of the curvature. It is near zero but we can’t tell which side of zero it lies.
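
The dependence on curvature comes straight from the Friedmann equation,

$$\left( \frac{\dot{a}}{a} \right)^2 = \frac{8 \pi G}{3} \rho - \frac{k c^2}{a^2}, \qquad k \in \{ +1, 0, -1 \}$$

where $k=+1$ gives a finite closed space while $k=0$ or $k=-1$ give infinite space for the trivial topology. Observation constrains the curvature contribution to be no more than about a percent of the total energy density, which is consistent with all three signs.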

This then is not a question I can answer, but the holographic principle in its strongest form contradicts a finite universe. An infinite homogeneous universe also requires an explanation of how the big bang can be coordinated across an infinite volume. This leaves only more complex solutions in which the universe is not homogeneous. How can we know, if we cannot see past the cosmic horizon? There are many inhomogeneous models, such as the bubble universes of eternal inflation, but I think that there is too much reliance on temporal causality in that theory and I discount it. My preference is for a white hole model of the big bang where matter density decreases slowly with distance from a centre and the big bang singularity itself is local and finite, with an outer universe stretching back further. Because expansion is accelerating we will never see much outside the part of the universe that is currently visible, so we may never know its true shape.

Naturalness

It has long been suggested that the laws of physics are fine-tuned to allow the emergence of intelligent life. This strange illusion of intelligent design could be explained in atheistic terms if in some sense many different universes existed with different laws of physics. The observation that the laws of physics suit us would then be no different in principle from the observation that our planet suits us.

Despite the elegance of such anthropic reasoning, many physicists including myself resisted it for a long time. Some still resist it. The problem is that the laws of physics show some signs of being unique according to theories of unification. In 2001 I, like many, thought that superstring theory and its overarching M-theory demonstrated this uniqueness quite persuasively. If there was only one possible unified theory with no free parameters, how could an anthropic principle be viable?

At that time I preferred to think that fine-tuning was an illusion. The universe would settle into the lowest energy stable vacuum of M-theory and this would describe the laws of physics with no room for choice. The ability of the universe to support life would then just be the result of sufficient complexity. The apparent fine-tuning would be an illusion resulting from the fact that we have seen only one form of intelligent life so far. I imagined distant worlds populated by other forms of intelligence in very different environments from ours, based on other solutions to evolution making use of different chemical combinations and physical processes. I scoffed at science fiction stories where the alien life looked similar to us except for different skin textures or different numbers of appendages.

My opinion started to change when I learnt that string theory actually has a vast landscape of vacuum solutions and that they can be stabilised to such an extent that we need not be living at the lowest energy point. This means that the fundamental laws of physics can be unique while different low energy effective theories can be realised as solutions. Anthropic reasoning was back on the table.

It is worrying to think that the vacuum is waiting to decay to a lower energy state at any place and moment. If it did so, a sphere of energy would expand at the speed of light, changing the effective laws of physics as it spread out and destroying everything in its path. Many times in the billions of years and billions of light years of the universe in our past light cone, neutron stars must have collided with immense force and energy. Yet not once has the vacuum been toppled to bring doom upon us. The reason is that the energies at which the vacuum state was forged in the big bang are at the Planck scale, many orders of magnitude beyond anything that can be repeated in even the most violent events of astrophysics. It is the immense range of scales in physics that creates life and then allows it to survive.

The principle of naturalness was spelt out by ’t Hooft around 1980, except he was too smart to call it a principle. Instead he called it a “dogma”. The idea was that the mass of a particle or another physical parameter could only be naturally small if setting it to zero would restore some symmetry. The smallness of fermion masses could thus be explained by chiral symmetry, but the smallness of the Higgs mass required supersymmetry. For many of us the dogma was finally put to rest when the Higgs mass was found by the LHC to be unnaturally small without any sign of the accompanying supersymmetric partners. Fine-tuning had always been a feature of particle physics but with the Higgs it became starkly apparent.
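
The Higgs case can be put in one line. Quantum corrections drag the Higgs mass towards the cutoff scale $\Lambda$,

$$\delta m_H^2 \sim \frac{g^2}{16 \pi^2} \Lambda^2$$

so if $\Lambda$ is the Planck scale the bare mass must cancel against the corrections to roughly one part in $(M_P / m_H)^2 \sim 10^{34}$, unless a symmetry such as supersymmetry enforces the cancellation.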

The vacuum would not tend to squander its scope for fine-tuning, limited as it is by the size of the landscape. If there is a cheaper way the typical vacuum will find it, so that there is enough scope left to tune nuclear physics and chemistry for the right components required by life. I therefore expect supersymmetry or some similar mechanism to come in at some higher scale to stabilise the Higgs mass and the cosmological constant. It may be a very long time indeed before that can be verified.

Now that I have learnt to accept anthropic reasoning, the multiverse and fine-tuning, I see the world in a very different way. If nature is fine-tuned for life it is plausible that there is only one major route to intelligence in the universe. Despite the plethora of new planets being discovered around distant stars, the Earth appears as a rare jewel among them. Its size and position in the goldilocks zone around a long-lived stable star in a quiet part of a well behaved galaxy is not typical. Even the moon and the outer gas giants seem to play their role in keeping us safe from natural instabilities. Yet if we were too safe, life would have settled quickly into a stable form that could not evolve to higher functions. Regular cataclysmic events in our history were enough to cause mass extinctions without destroying life altogether, allowing it to develop further and further until higher intelligence emerged. Microbial life may be relatively common on other worlds but we are exquisitely rare. No sign of alien intelligence drifts across time and space from distant worlds.

I now think that where life exists it will be based on DNA and cellular structures much like all life on Earth. It will require water and carbon, and to evolve to higher forms it will require all the commonly available elements, each of which has its function in our biology or the biology of the plants on which we depend. Photosynthesis may be the unique way in which a stable carbon cycle can complement our need for oxygen. Any intelligent life will be much like us, and it will be rare. This I see as the most significant prediction of fine-tuning and the multiverse.

String Theory

String theory was the culmination of twentieth century developments in particle physics, leading to ever more unified theories. By 2000 physicists had what appeared to be a unique mother theory capable of including all known particle physics in its spectrum. They just had to find the mechanism that collapsed its higher dimensions down to our familiar 4 dimensional spacetime.

Unfortunately it turned out that there were many such mechanisms and no obvious means to figure out which one corresponds to our universe. This leaves string theorists in a position unable to predict anything useful that would confirm their theory. Some people have claimed that this makes the theory unscientific and that physicists should abandon the idea and look for a better alternative. Such people are misguided.

String theory is not just a random set of ideas that people tried. It was the end result of exploring all the logical possibilities for the ways in which particles can work. It is the only solution to the problem of finding a consistent interaction of matter with gravity in the limit of weak fields on flat spacetime. I don’t mean merely that it is the only solution anyone could find; it is the only solution that can work. If you throw it away and start again you will only return to the same answer by the same logic.

What people have failed to appreciate is that quantum gravity acts at energy scales well above those that can be explored in accelerators or even in astronomical observations. Expecting string theory to explain low energy particle physics was like expecting particle physics to explain biology. In principle it can, but to derive biochemistry from the standard model you would need to work out the laws of chemistry and nuclear physics from first principles and then search through the properties of all the possible chemical compounds until you realised that DNA can self-replicate. Without input from experiment this is an impossible programme to put into practice. Similarly, we cannot hope to derive the standard model of particle physics from string theory until we understand the physics that controls the energy scales that separate them. There are about 12 orders of magnitude in energy scale that separate chemical reactions from the electroweak scale and nearly 17 orders of magnitude that separate the electroweak scale from the Planck scale. We have much to learn.
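
The arithmetic behind those numbers, taking chemistry at the electronvolt scale:

$$\frac{v_{EW}}{E_{\text{chem}}} \sim \frac{246\ \mathrm{GeV}}{1\ \mathrm{eV}} \approx 2 \times 10^{11}, \qquad \frac{M_P}{v_{EW}} \sim \frac{1.2 \times 10^{19}\ \mathrm{GeV}}{246\ \mathrm{GeV}} \approx 5 \times 10^{16}$$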

How then can we test string theory? To do so we will need to look beyond particle physics and find some feature of quantum gravity phenomenology. That is not going to be easy because of the scales involved. We can’t reach the Planck energy, but sensitive instruments may be able to probe very small distance scales as small variations of effects over large distances. There is also some hope that a remnant of the initial big bang remains in the form of low frequency radio or gravitational waves. But first string theory must predict something to observe at such scales and this presents another problem.

Despite nearly three decades of intense research, string theorists have not yet found a complete non-perturbative theory of how string theory works. Without it, predictions at the Planck scale are not in any better shape than predictions at the electroweak scale.

Normally quantised theories explicitly include the symmetries of the classical theories they quantise. As a theory of quantum gravity, string theory should therefore include the diffeomorphism invariance of spacetime, and it does, but not explicitly. If you look at string theory as a perturbation about a flat spacetime you find gravitons, the quanta of gravitational interactions. This means that the theory must respect the principles of general relativity for small deviations from flat spacetime, but it is not described in a way that makes the diffeomorphism invariance of general relativity manifest. Why is that?

Part of the answer, coming from non-perturbative results in string theory, is that the theory allows the topology of spacetime to change. Diffeomorphisms on different topologies form different groups, so there is no way that we could see diffeomorphism invariance explicitly in the formulation of the whole theory. The best we could hope for would be to find some group that has every diffeomorphism group as a subgroup and look for invariance under that.

Most string theorists just assume that this argument means that no such symmetry can exist and that string theory is therefore not based on a principle of universal symmetry. I on the other hand have proposed that the universal group must contain the full permutation group on spacetime events. The diffeomorphism group for any topology can then be regarded as a subgroup of this permutation group.

String theorists don’t like this because they see spacetime as smooth and continuous whereas permutation symmetry would suggest a discrete spacetime. I don’t think these two ideas are incompatible. In fact we should see spacetime as something that does not exist at all in the foundations of string theory. It is emergent. The permutation symmetry on events is really to be identified with the permutation symmetry that applies to particle states in quantum mechanics. A smooth picture of spacetime then emerges from the interactions of these particles, which in string theory are the partons of the strings.

This was an idea I formulated twenty years ago, building symmetries that extend the permutation group first to large-N matrix groups and then to necklace Lie algebras that describe the creation of string states. The idea was vindicated when matrix string theory was invented shortly afterwards, but very few people appreciated the connection.

The matrix theories vindicated the matrix extensions in my work. Since then I have been waiting patiently for someone to vindicate the necklace Lie algebra symmetries as well. In recent years we have seen a new approach to quantum field theory for supersymmetric Yang-Mills which emphasises a dual conformal symmetry rather than the gauge symmetry. This is a symmetry found in the quantum scattering amplitudes rather than the classical limit. The symmetry takes the form of a Yangian symmetry related to the permutations of the states. I find it plausible that this will turn out to be a remnant of necklace Lie algebras in the more complete string theory. There seems to be still some way to go before this new idea expressed in terms of an amplituhedron is fully worked out, but I am optimistic that I will be proven right again, even if, again, few people recognise it.

Once this reformulation of string theory is complete we will see string theory in a very different way. Spacetime, causality and even quantum mechanics may be emergent from the formalism. It will be non-perturbative and rigorously defined. The web of dualities connecting string theories and the holographic nature of gravity will be derived exactly from first principles. At least that is what I hope for. In the non-perturbative picture it should be clearer what happens at high energies when spacetime breaks down. We will understand the true nature of the singularities in black holes and the big bang. I cannot promise that these things will be enough to provide predictions that can be observed in real experiments or cosmological surveys, but it would surely improve the chances.

Loop Quantum Gravity

If you want to quantise a classical system such as a field theory there is a range of methods that can be used. You can try a Hamiltonian approach, or a path integral approach, for example. You can change the variables or introduce new ones, or integrate out some degrees of freedom. Gauge fixing can be handled in various ways, as can renormalisation. The answers you get from these different approaches are not quite guaranteed to be equivalent. There are some choices of operator ordering that can affect the answer. However, what we usually find in practice is that there are natural choices imposed by symmetry principles or other requirements of consistency, and the different results you get using different methods are either equivalent or very nearly so, if they lead to a consistent result at all.

What should this tell us about quantum gravity? Quantising the gravitational field is not so easy. It is not renormalisable in the same way that other gauge theories are, yet a number of different methods have produced promising results. Supergravity follows the usual field theory methods while string theory uses a perturbative generalisation derived from the old S-matrix approach. Loop Quantum Gravity makes a change of variables and then follows a Hamiltonian recipe. There are other methods such as Twistor Theory, Non-Commutative Geometry, Dynamical Triangulations, Group Field Theory, Spin Foams, Higher Spin Theories etc. None has met with success in all directions but each has its own successes in some directions.

While some of these approaches have always been known to be related, others have been portrayed as rivals. In particular the subject seems to be divided between methods related to string theory and methods related to Loop Quantum Gravity. It has always been my expectation that the two sides will eventually come together, simply because of the fact that different ways of quantising the same classical system usually do lead to equivalent results. Superficially strings and loops seem like related geometric objects, i.e. one dimensional structures in space tracing out two dimensional world sheets in spacetime.

String theorists and loop quantum gravitists alike have scoffed at the suggestion that these are the same thing. They point out that strings pass through each other, unlike the loops which form knot states. String theory also works best in ten dimensions while LQG can only be formulated in 4. String theory needs supersymmetry and therefore matter, while LQG tries first to construct a consistent theory of quantum gravity alone. I see these differences very differently from most physicists. I observe that when strings pass through each other they can interact, and the algebraic diagrams that represent this are very similar to the skein relations used to describe the knot theory of LQG. String theory does indeed use the same mathematics of quantum groups to describe its dynamics. If LQG has not been found to require supersymmetry or higher dimensions, it may be because the perturbative limit around flat spacetime has not yet been formulated, and that is where the consistency constraints arise. In fact the successes and failures of the two approaches seem complementary. LQG provides clues about the non-perturbative background independent picture of spacetime that string theorists need.

Methods from Non-Commutative Geometry have been incorporated into string theory and other approaches to quantum gravity for more than twenty years and in the last decade we have seen Twistor Theory applied to string theory. Some people see this convergence as surprising but I regard it as natural and predictable given the nature of the process of quantisation. Twistors have now been applied to scattering theory and to supergravity in 4 dimensions in a series of discoveries that has recently led to the amplituhedron formalism. Although the methods evolved from observations related to supersymmetry and string theory they seem in some ways more akin to the nature of LQG. Twistors were originated by Penrose as an improvement on his original spin-network idea and it is these spin-networks that describe states in LQG.

I think that what has held LQG back is that it separates space and time. This is a natural consequence of the Hamiltonian method. LQG respects diffeomorphism invariance, unlike string theory, but it is really only the spatial part of the symmetry that it uses. Spin networks are three dimensional objects that evolve in time, whereas Twistor Theory tries to extend the network picture to 4 dimensions. People working on LQG have tended to embrace the distinction between space and time in their theory and have made it a feature claiming that time is philosophically different in nature from space. I don’t find that idea appealing at all. The clear lesson of relativity has always been that they must be treated the same up to a sign.

The amplituhedron makes manifest the dual conformal symmetry of Yang-Mills theory in the form of an infinite dimensional Yangian symmetry. These algebras are familiar from the theory of integrable systems, where they were deformed to bring in quantum groups. In fact the scattering amplitude theory that applies to the planar limit of Yang-Mills does not use this deformation, but here lies the opportunity to unite the theory with Loop Quantum Gravity, which does use the deformation.

Of course LQG is a theory of gravity, so if it is related to anything it would be supergravity or string theory, not Yang-Mills. In the most recent developments the scattering amplitude methods have been extended to supergravity by making use of the observation that gravity can be regarded as formally the square of Yang-Mills. Progress has thus been made on formulating 4D supergravity using twistors, but so far without this deformation. A surprising observation is that supergravity in this picture requires a twistor string theory to make it complete. If the Yangian deformation could be applied to these strings then they could form knot states just like the loops in LQG. I can’t say if it will pan out that way, but I can say that it would make perfect sense if it did. It would mean that LQG and string theory would finally come together, and methods that have grown out of LQG, such as spin foams, might be applied to string theory.

The remaining mystery would be why this correspondence works only in 4 spacetime dimensions. Both twistors and LQG use related features of the symmetry of 4 dimensional spacetime that mean it is not obvious how to generalise to higher dimensions, while string theory and supergravity have higher forms that work up to 11 dimensions. Twistor theory is related to conformal field theory, and conformal symmetry in 4 dimensions descends from the geometry of a space 2 dimensions higher: the 4 dimensional conformal group is the same as a 6 dimensional spin group. By a unique coincidence the 6 dimensional symmetries are isomorphic to unitary or special linear groups over 4 complex variables, so these groups have the same representations. In particular the fundamental 4 dimensional representation of the unitary group is the same as the Weyl spinor representation in six real dimensions. This is where the twistors come from, so a twistor is just a Weyl spinor. Such spinors exist in any even number of dimensions but without the special properties found in this particular case. It will be interesting to see how the framework extends to higher dimensions using these structures.
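
In symbols, the coincidence is

$$\mathrm{Spin}(4,2) \cong SU(2,2), \qquad \mathrm{Spin}(6) \cong SU(4)$$

so the fundamental 4 dimensional representation of $SU(2,2)$, the twistor, is exactly the Weyl spinor of the 6 dimensional orthogonal group, and 4 dimensional conformal symmetry becomes linear in 6 dimensions.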

Quantum Mechanics

Physicists often chant that quantum mechanics is not understood. To paraphrase some common claims: if you think you understand quantum mechanics you are an idiot. If you investigate what it is about quantum mechanics that is so irksome, you find several features that can be listed as potentially problematic: indeterminacy, non-locality, contextuality, observers, wave-particle duality and collapse. I am not going to go through these individually; instead I will just declare myself a quantum idiot, if that is what understanding implies. All these features of quantum mechanics are experimentally verified and there are strong arguments that they cannot be easily circumvented using hidden variables. If you take a multiverse view there are no conceptual problems with observers or wavefunction collapse. People only have problems with these things because they are not what we observe at macroscopic scales and our brains are programmed to see the world classically. This can be overcome through logic and mathematical understanding in the same way as the principles of relativity.

I am not alone in thinking that these things are not to be worried about, but there are some other features of quantum mechanics of which I take a more extraordinary view. One aspect of quantum mechanics that gives some cause for concern is its linearity. Theories that are linear are usually too simple to be interesting. Everything decouples into modes that act independently in a simple harmonic way. In quantum mechanics we can in principle diagonalise the Hamiltonian to reduce the whole universe to a sum over energy eigenstates. Can everything we experience be encoded in that one dimensional spectrum?
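
Concretely, if $H | n \rangle = E_n | n \rangle$ then every state of the universe evolves as

$$| \psi(t) \rangle = \sum_n c_n \, e^{-i E_n t / \hbar} \, | n \rangle$$

a superposition of independently rotating phases, and nothing in the linear dynamics itself picks out a preferred basis or breaks one symmetry rather than another.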

In quantum field theory this is not a problem, but there we have spacetime as a frame of reference relative to which we can define a privileged basis for the Hilbert space of states. It is no longer just the energy spectrum that counts. But what if spacetime is emergent? What then do we choose our Hilbert basis relative to? The symmetry of the Hilbert space must be broken for this emergence to work, but linear systems do not break their symmetries. I am not talking about the classical symmetries of the type that get broken by the Higgs mechanism. I mean the quantum symmetries in phase space.

Suppose we accept that string theory describes the underlying laws of physics, even if we don’t know which vacuum solution the universe selects. Doesn’t string theory also embody the linearity of quantum mechanics? It does so long as you already accept a background spacetime, but in string theory the background can be changed by dualities. We don’t know how to describe the framework in which these dualities are manifest but I think there is reason to suspect that quantum mechanics is different in that space, and it may not be linear.

The distinction between classical and quantum is not as clear-cut as most physicists like to believe. In perturbative string theory the Feynman diagrams are given by string worldsheets which can branch when particles interact. Is this the classical description or the quantum one? The difference is that the worldsheets extremise their area in the classical solutions but follow any history in the quantum theory. But then we already have multi-particle states and interactions in the classical description. This is very different from quantum field theory.

Stepping back, though, we might notice that quantum field theory also has some schizophrenic characteristics. The Dirac equation is treated as classical with non-linear interactions even though it is a relativistic Schrödinger equation, with quantum features such as spin already built in. After you second quantise you get a sum over all possible Feynman graphs, much like the quantum path integral sum over field histories, but in this comparison the Feynman diagrams act as classical configurations. What is this telling us?
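
The Dirac equation shows the point explicitly. The covariant form and the Schrödinger form are the same equation (with $\hbar = c = 1$),

$$\left( i \gamma^\mu \partial_\mu - m \right) \psi = 0 \quad \Longleftrightarrow \quad i \, \partial_t \psi = \left( \vec{\alpha} \cdot \vec{p} + \beta m \right) \psi, \qquad \vec{\alpha} = \gamma^0 \vec{\gamma}, \;\; \beta = \gamma^0$$

yet in quantum field theory this wavefunction is treated as a classical field that still has to be quantised.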

My answer is that the first and second quantisations are the first in a sequence of multiple iterated quantisations. Each iteration generates new symmetries and dimensions. For this to work the quantised layers must be non-linear, just as the interaction between electrons and photons is non-linear in the so-called first-quantised field theory. The idea of multiple quantisations goes back many years and did not originate with me, but I have a unique view of its role in string theory based on my work with necklace Lie algebras, which can be constructed in an iterated procedure where one necklace dimension is added at each step.

Physicists working on scattering amplitudes are at last beginning to see that the symmetries in nature are not just those of the classical world. There are dual-conformal symmetries that are completed only in the quantum description. These seem to merge with the permutation symmetries of the particle statistics. The picture is much more complex than the one painted by the traditional formulations of quantum field theory.

What then is quantisation? When a Fock space is constructed, the process is formally like exponentiation. In the category picture we start to see the origin of what quantisation is, because exponentiation generalises to the process of constructing all functions between sets, or all functors between categories, and so on to higher n-categories. Category theory seems to encapsulate the natural processes of abstraction in mathematics. This I think is what lies at the base of quantisation. Variables become functional operators, objects become morphisms. Quantisation is a particular form of categorification, one we don’t yet understand. Iterating this process constructs higher categories until the unlimited process itself forms an infinite omega-category that describes all natural processes in mathematics and in our multiverse.
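
The formal analogy is worth displaying. The bosonic Fock space over a one-particle space $H$ is the sum of its symmetric powers, mirroring the series for the exponential, and the categorified version of exponentiation is the space of maps:

$$\mathcal{F}(H) = \bigoplus_{n=0}^{\infty} \mathrm{Sym}^n(H) \;\leftrightarrow\; e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}, \qquad B^A = \{ f : A \to B \}$$

with functor categories $\mathcal{D}^{\mathcal{C}}$ playing the same role one level of abstraction higher.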

Crazy ideas? Ill-formed? Yes, but I am just saying – that is the way I see it.

Black Hole Information

We have seen that quantum gravity can be partially understood by using the constraint that it needs to make sense in the limit of small perturbations about flat spacetime. This led us to strings and supersymmetry. There is another domain of thought experiments that can tell us a great deal about how quantum gravity should work, and it concerns what happens when information falls into a black hole. The chain of arguments is well known so I will not repeat it here. The first conclusion is that the entropy of a black hole is given by its horizon area in Planck units, and the entropy in any other volume is less than the Bekenstein bound taken from the surrounding surface. This leads to the holographic principle: everything that can be known about the state inside the volume can be determined from a state on its surface. To explain how the inside of a black hole can be determined from its event horizon or outside, we use black hole complementarity, which rests on the fact that we cannot observe both the inside and the outside at a later time. Although the reasoning that leads to these conclusions is long and unsupported by any observation, it is in my opinion quite robust, and it is backed up by theoretical models such as AdS/CFT duality.
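
The quantitative statement is the Bekenstein-Hawking formula: the entropy of a black hole is its horizon area in Planck units,

$$S_{BH} = \frac{k_B A}{4 \ell_P^2}, \qquad \ell_P^2 = \frac{G \hbar}{c^3}$$

an entropy that scales with surface area rather than volume, which is the mismatch with ordinary field theory that the holographic principle turns into a fundamental feature.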

There are some further conclusions that I would draw from black hole information that many physicists might disagree with. If the information in a volume is limited by the surrounding surface then we cannot be living in a closed universe with a finite volume, such as a 3-sphere. If we were, you could expand the bounding surface until it shrank back to zero on the far side and conclude that there is no information in the universe. Some physicists prefer to think that the Bekenstein bound should be modified on large scales so that this conclusion cannot be drawn, but I think the holographic principle holds perfectly at all scales, and so the universe must be infinite, or finite with a different topology.

Recently there has been a claim that the holographic principle leads to the conclusion that the event horizon must be a firewall through which nothing can pass. This conclusion is based on the assumption that information inside a black hole is replicated outside through entanglement. If you drop two particles with fully entangled spin states into a black hole, you cannot have another particle outside that is also entangled with them; that would not make sense. I think the information is replicated on the horizon in a different way.

It is my view that the apparent information in the bulk volume field variables must be mostly redundant, and that this implies a large symmetry where the degrees of symmetry match the degrees of freedom in the fields or strings. Since there are fundamental fermions it must be a supersymmetry. I call a symmetry of this sort a complete symmetry. We know that when there is gauge symmetry there are corresponding charges that can be determined on a boundary by measuring the flux of the gauge field. In my opinion a generalisation of this using a complete symmetry accounts for holography. I don’t think that this complete symmetry is a classical symmetry. It can only be known properly in a full quantum theory, much as dual conformal gauge symmetry is a quantum symmetry.

Some physicists assume that if you could observe Hawking radiation you would be looking at information coming from the event horizon. It is not often noticed that the radiation is thermal, so if you observe it you cannot determine where it originated. There is no detail you could focus on to measure the distance of the source. It makes more sense to me to think of this radiation as emanating from a backward singularity inside the black hole. This means that a black hole, once formed, is also a white hole. This may seem odd but it is really just an extension of black hole complementarity. I also agree with those who say that as black holes shrink they become indistinguishable from heavy particles that decay by emitting radiation.

Ontology

Every theorist working on fundamental physics needs some background philosophy to guide their work. They may think that causality and time are fundamental, or that they are emergent, for example. They may have the idea that deeper laws of physics are simpler. They may like reductionist principles or instead prefer a more anthropic world view. Perhaps they think the laws of physics must be discrete, combinatorial and finite. They may think that reality and mathematics are the same thing, or that reality is a computer simulation, or that it is in the mind of God. These things affect the theorist’s outlook and influence the kind of theories they look at. They may be metaphysical and sometimes completely untestable in any real sense, but they are still important to the way we explore and understand the laws of nature.

In that spirit I have formed my own elaborate ontology as my way of understanding existence and the way I expect the laws of nature to work out. It is not complete or finished and it is not a scientific theory in the usual sense, but I find it a useful guide for where to look and what to expect from scientific theories. Someone else may take a completely different view that appears contradictory but may ultimately come back to the same physical conclusions. That I think is just the way philosophy works.

In my ontology it is universality that counts most. I do not assume that the most fundamental laws of physics should be simple or beautiful or discrete or finite. What really counts is universality, but that is a difficult concept that requires some explanation.

It is important not to be misled by the way we think. Our mind is a computer running a program that models space, time and causality in a way that helps us live our lives, but that does not mean that these things are important in the fundamental laws of physics. Our intuition can easily mislead our way of thinking. It is hard to understand that time and space are interlinked and to some extent interchangeable, but we now know from the theory of relativity that this is the case. Our minds understand causality and free will, the flow of time and the difference between past and future, but we must not make the mistake of assuming that these things are also important for understanding the universe. We like determinacy, predictability and reductionism but we can’t assume that the universe shares our likes. We experience our own consciousness as if it is something supernatural, but perhaps it is no more than a useful feature of our psychology, a trick to help us think in a way that aids our survival.

Our only real ally is logic. We must consider what is logically possible and accept that most of what we observe is emergent rather than fundamental. The realm of logical possibilities is vast and described by the rules of mathematics. Some people call it the Platonic realm and regard it as a multiverse within its own level of existence, but such thoughts are just mind tricks. They form a useful analogy to help us picture the mathematical space, when really logical possibilities are just that. They are possibilities stripped of attributes like reality or existence or place.

Philosophers like to argue about whether mathematical concepts are discovered or invented. The only fair answer is both or neither. If we made contact with alien life tomorrow it is unlikely that we would find them playing chess. The rules of chess are mathematical but they are a human invention. On the other hand we can be quite sure that our new alien friends would know how to use the real numbers if they are at least as advanced as us. They would also probably know about group theory, complex analysis and prime numbers. These are the universal concepts of mathematics that are “out there” waiting to be discovered. If we forgot them we would soon rediscover them in order to solve general problems. Universality is a hard concept to define. It distinguishes the parts of mathematics that are discovered from those that are merely invented, but there is no sharp dividing line between the two.

Universal concepts are not necessarily simple to define. The real numbers, for example, are notoriously difficult to construct if you start from more basic axiomatic constructs such as set theory. To do that you have to first define the natural numbers using the cardinality of finite sets and Peano’s axioms. This is already an elaborate structure and it is just the start. You then extend to the rationals and then to the reals using something like the Dedekind cut. Not only is the definition long and complicated, but it is also very non-unique. The aliens may have a different definition and may not even consider set theory the right place to start, but it is certain that they would still possess the real numbers as a fundamental tool with the same properties as ours. It is the higher level concept that is universal, not the definition.
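
For the record, the final step alone looks like this: a real number can be identified with a Dedekind cut, a downward-closed set of rationals with no greatest element, e.g.

$$\sqrt{2} \;=\; \{\, q \in \mathbb{Q} \;:\; q < 0 \ \text{or}\ q^2 < 2 \,\}$$

and nothing about the construction is canonical; Cauchy sequences of rationals give a quite different definition of the same structure.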

Another example of universality is the idea of computability. A universal computer is one that is capable of following any algorithm. To define this carefully we have to pick a particular mathematical construction of a theoretical computer with unlimited memory. One possibility is a Turing machine, but we can use any typical programming language or any one of many logical systems such as certain cellular automata. We find that the set of numbers or integer sequences that they can calculate is always the same. Computability is therefore a universal idea, even though there is no obviously best way to define it.
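
As an illustration, here is a minimal sketch in Python (the machine and its rule table are my own toy example): a Turing machine that increments a binary number computes exactly the same function as the language’s built-in arithmetic, two very different models of computation agreeing on the result.

    def run_tm(rules, tape, state="start", blank="_", max_steps=10000):
        # A tiny Turing machine: rules maps (state, symbol) to
        # (new_state, symbol_to_write, head_move).
        cells = dict(enumerate(tape))
        pos = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(pos, blank)
            state, cells[pos], move = rules[(state, symbol)]
            pos += move
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Rule table for binary increment: scan right, then carry leftwards.
    rules = {
        ("start", "0"): ("start", "0", +1),  # skip over digits
        ("start", "1"): ("start", "1", +1),
        ("start", "_"): ("carry", "_", -1),  # reached the right-hand end
        ("carry", "1"): ("carry", "0", -1),  # 1 plus carry: write 0, carry on
        ("carry", "0"): ("halt",  "1",  0),  # 0 plus carry: write 1, done
        ("carry", "_"): ("halt",  "1",  0),  # overflow: new leading 1
    }

    print(run_tm(rules, "1011"))   # prints 1100
    print(bin(0b1011 + 1)[2:])     # prints 1100 as well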

Universality also appears in complex physical systems, where it is linked to emergence. The laws of fluid dynamics, elasticity and thermodynamics describe the macroscopic behaviour of systems built from many small interacting elements, but the details of those interactions are not important. Chaos arises in any nonlinear system of equations at the boundary where simple behaviour meets complexity, and we find that it is described by certain numbers, such as the Feigenbaum constants, that are independent of how the system is constructed. These examples show how universality is of fundamental importance in physical systems and motivate the idea that it can be extended to the formation of the fundamental laws too.
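One of those universal numbers, the Feigenbaum constant δ ≈ 4.669, can even be estimated in a few lines of Python. This is a rough numerical sketch under my own assumptions (the logistic map, superstable parameters found by secant root-finding); any other unimodal map would give the same limiting ratio, which is the whole point of universality.

```python
def f_iter(r, x, k):
    # iterate the logistic map f(x) = r*x*(1-x) k times
    for _ in range(k):
        x = r * x * (1 - x)
    return x

def superstable_r(n, guess):
    """Solve f^(2^n)(1/2) = 1/2 for r near `guess`, by the secant method."""
    g = lambda r: f_iter(r, 0.5, 2 ** n) - 0.5
    a, b = guess - 1e-4, guess
    for _ in range(100):
        fa, fb = g(a), g(b)
        if fb == fa or abs(b - a) < 1e-13:
            break
        a, b = b, b - fb * (b - a) / (fb - fa)
    return b

# superstable parameters r_n for cycles of period 2^n
rs = [2.0, superstable_r(1, 3.24)]             # periods 1 and 2
for n in range(2, 8):                          # periods 4, 8, ..., 128
    guess = rs[-1] + (rs[-1] - rs[-2]) / 4.7   # rough geometric extrapolation
    rs.append(superstable_r(n, guess))

for i in range(1, len(rs) - 1):
    delta = (rs[i] - rs[i - 1]) / (rs[i + 1] - rs[i])
    print(f"delta_{i} = {delta:.4f}")          # approaches 4.6692...
```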

Universality and emergence play a key role in my ontology, and they work at different levels. The most fundamental level is the Platonic realm of mathematics. Remember that the use of the word realm is just an analogy. You can’t destroy this idea by questioning the realm’s existence or whether it is inside our minds. It is just the concept that contains all logically consistent possibilities. Within this realm there are things that are invented, such as the game of chess, the text that forms the works of Shakespeare, or gods. But there are also the universal concepts that any advanced team of mathematicians would discover in order to solve the general problems they invent.

I don’t know precisely how these universal concepts emerge from the Platonic realm, but I use two different analogies to think about it. The first is emergence in complex systems, which gives us the rules of chaos and thermodynamics. This can be described using statistical physics, which leads to critical systems and scaling phenomena where universal behaviour is found. The same might apply to the complex system consisting of the collection of all mathematical concepts. From this system the laws of physics may emerge as universal behaviour. I call this analogy the Theory of Theories; others call a similar idea the Mathematical Universe Hypothesis. However, the statistical physics analogy is not perfect.

Another way to think about what might be happening is in terms of the process of abstraction. We know that we can multiply some objects in mathematics, such as permutations or matrices, and that they follow the rules of an abstract structure called a group. Mathematics has other abstract structures like fields, rings, vector spaces and topologies. These are clearly important examples of universality, but we can take the idea of abstraction further. Groups, fields, rings etc. all have a definition of isomorphism and also something equivalent to homomorphism. We can look at these concepts abstractly using category theory, which is a generalisation of set theory encompassing them. In category theory we find universal ideas, such as natural transformations, that help us understand the lower-level abstract structures. This process of abstraction can be continued, giving us higher-dimensional n-categories. These structures also seem to be important in physics.

I think of emergence and abstraction as two facets of the deep concept of universality. It is something we do not understand fully but it is what explains the laws of physics and the form they take at the most fundamental level.

What physical structures emerge at this first level? Statistical physics systems are very similar in structure to quantum mechanics, both being expressed as a sum over possibilities. In category theory we also find abstract structures very like quantum mechanical systems, including structures analogous to Feynman diagrams. I think it is therefore reasonable to assume that some form of quantum physics emerges at this level. However, time and unitarity do not. The quantum structure is something more abstract, like a quantum group. The other physical idea present in this universal structure is symmetry, but again in an abstract form more general than group theory; it will include supersymmetry and other extensions of ordinary symmetry. I think it likely that this is really a system described by a process of multiple quantisation, where structures of algebra and geometry emerge but with multiple dimensions and a single universal symmetry. I need a name for this structure that emerges from the Platonic realm, so I will call it the Quantum Realm.

When people reach for what is beyond M-theory, or for an extension of the amplituhedron, they are looking for this quantum realm. It is something that we are just beginning to touch with 21st-century theories.

From this quantum realm another, more familiar level of existence emerges. This is a process analogous to superselection of a particular vacuum. At this level space and time emerge, and the universal symmetry is broken down to a much smaller one. Perhaps a different selection would provide different numbers of space and time dimensions and different symmetries. The laws of physics that then emerge are the laws of relativity and particle physics we are familiar with. This is our universe.

Within our universe there are other processes of emergence that we are more familiar with. Causality emerges from the laws of statistical physics, with the arrow of time rooted in the big bang singularity. Causality is therefore much less fundamental than quantum mechanics or space and time. The familiar structures of the universe, including life, also emerge within it. Although this places life at the least fundamental level, we must not forget the anthropic influence it has on the selection of our universe from the quantum realm.

Experimental Outlook

Theoretical physics continues to progress in useful directions but to keep it on track more experimental results are needed. Where will they come from?

In recent decades we have got used to mainly negative results in experimental particle physics, or at best results that merely confirm theories from 50 years ago. The significance of negative results is often understated to the extent that the media portray them as failures. This is far from being the case.

The LHC’s negative results for SUSY and other BSM exotics may be seen as disappointing but they have led to the conclusion that nature appears fine-tuned at the weak scale. Few theorists had considered the implications of such a result before, but now they are forced to. Instead of wasting time on simplified SUSY theories they will turn their efforts to the wider parameter space or they will look for other alternatives. This is an important step forward.

A big question now is: what will be the next accelerator? The ILC or a new LEP would be great Higgs factories, but it is not clear that they would find much beyond what we already know. Given that the Higgs sits at a mass that gives it a narrow width, I think it would be better to build a new detector for the LHC specialised for seeing diphoton and 4-lepton events with the best possible energy and angular resolution. The LHC will continue to run for several decades and can be upgraded to higher luminosity and even higher energy. This should be taken advantage of as much as possible.

However, the best advance that would make the LHC more useful would be to change the way it searches for new physics. It has been too closely designed with specific models in mind; it should instead be run to search for generic signatures of particles across the full range of possible quantum numbers: spin, charge, lepton and baryon number. Even more importantly, the detector collaborations should be openly publishing likelihood numbers for all possible decay channels, so that theorists can plug in any model they have, or will have in the future, and test it against the LHC results. This would massively increase the value of the accelerator, and it would encourage theorists to look for new models and even scan the data for generic signals. The LHC experimenters have been far too greedy and lazy in keeping the data to themselves and considering only a small number of models.

There is also a movement to construct a 100 TeV hadron collider. This would be a worthwhile long-term goal, and even if it did not find new particles that would itself be a profound discovery about the ways of nature. If physicists want to do that they are going to have to learn how to justify the cost to contributing nations and their taxpayers. It is no use talking only about the value of pure science and some dubiously justified spin-offs. CERN must reinvent itself as a postgraduate physics university where people learn how to do highly technical research in collaborations that cross international frontiers. Most will go on to work in industry using the skills they have developed, in technological research or even as technology entrepreneurs. This is the real economic benefit that big physics brings, and if CERN can’t track how that works and promote it, it cannot expect future funding.

With the latest results from the LUX experiment, hopes for direct detection of dark matter have faded. Again the negative result is valuable, but it may just mean that dark matter does not interact weakly at all. The search should go on, but I think more can be done with theory to model dark matter and its role in galaxy formation. If we can assume that dark matter started out with the same temperature as the visible universe, then it should be possible to model its evolution as it settled into galaxies and estimate the mass of the dark matter particle. This would help in searching for it. Meanwhile the searches for dark matter will continue, including searches for other possible forms such as axions. Astronomical experiments such as AMS-02 may find important evidence, but it is hard to find optimism there. A better prospect exists for observations of the dark age of the universe using new radio telescopes such as the Square Kilometre Array, which could detect hydrogen gas clouds as they formed the first stars and galaxies.

Neutrino physics is one area that has seen positive results going beyond the standard model, so it is an important area to keep going. Experiments need to settle the question of whether neutrinos are Majorana spinors and produce figures for the neutrino masses. Observation of cosmological high-energy neutrinos is also an exciting area, with the IceCube experiment proving its value.

Gravitational wave searches have continued to be a disappointment, but this is probably due to over-optimism about the nature of cosmological sources rather than a failure of the theory of gravitational waves themselves. The new run with Advanced LIGO must find them, otherwise the field will be in trouble. The next step would be LISA or a similar detector in space.

Precision measurements are another area that could bring results. Measurements of the electron electric dipole moment can be further improved, and there must be other similar opportunities for inventive experimentalists. If a clear anomaly is found it could set the scale for new physics and justify the next generation of accelerators.

There are other experiments that could yield positive results, such as cosmic ray observatories and low-frequency radio antennae that might find an echo from the big bang beyond the veil of the primordial plasma. But if I had to nominate one area for new effort it would have to be the search for proton decay. So far results have been negative, pushing the proton lifetime to at least 10^34 years, but this has helped eliminate the simplest GUT models, which predicted a shorter lifetime. SUSY models predict lifetimes of over 10^36 years, but this could be reached if we are willing to set up a detector around a huge volume of clear Antarctic ice. IceCube has demonstrated the technology, but for proton decay a finer array of light detectors is needed to catch the lower-energy radiation. If decays were detected they would give us positive information about physics at the GUT scale. This is something of enormous importance and its priority must be raised.

Apart from these experiments we must rely on the advance of precision technology and the inventiveness of the experimental physicist. Ideas such as the holometer may have little hope of success, but each negative result tells us something, and if someone gets lucky a new flood of experimental data will nourish our theories. There is much that we can still learn.


Naturally Unnatural

July 18, 2013

EPS-HEP

Today is the first day of the EPS-HEP conference in Stockholm, the largest particle physics conference of the year. In recent years such conferences have been awaited with great anticipation because of the prospect of new results in the latest LHC and Tevatron reports, but this year things are a little more subdued. We will have to wait another two years before the LHC restarts and we can again follow every talk expecting the unexpected. Perhaps there will be some surprises in a late LHC analysis or something from the dark matter searches, but otherwise this is just a good time to look back and ask: what have we learned so far from the LHC?

Nightmare Scenario

The answer is that we have learnt that the mass of the Higgs boson is around 125 GeV, and that this lies near the minimum end of the range of masses that would allow the vacuum to be stable even if there are no new particles to help stabilize it. Furthermore, we do indeed find no evidence of other new particles up to the TeV range, and the Higgs looks very much like a lone standard model Higgs. Yes, there could still be something like SUSY there if it has managed to hide in an awkward place. There could even be much lighter undiscovered particles, such as those hinted at by some dark matter searches, if they are hard to produce or detect at colliders, but the more obvious conclusion is that nothing else is there at these energies.

This is what many people have called the “nightmare scenario”, because it means that there are no new clues to tell us about the next model for particle physics. Many theorists had predicted SUSY particles in this energy range in order to remove fine-tuning and have been disappointed by the results. Instead we have seen that the Higgs sector is probably fine-tuned, at least by some small factor. If no SUSY is found in the next LHC run at 13 TeV then it is fine-tuned at about the 1% level.

Fine-tuning

Many physicists dislike fine-tuning. They feel that the laws of physics should be naturally derived from a simple model that leaves no room for such ambiguity. When superstring theory first hit the street it generated a lot of excitement precisely because it seemed to promise such a model. The heterotic string in particular looked just right for the job, because its E8 gauge group is based on the largest exceptional simple Lie algebra and is just big enough to contain the standard model gauge group with suitable chiral structures. All they needed to do was figure out which Calabi-Yau manifold could be stabilised as a compactification space to bring the number of dimensions down from 10 to the 4 space and time dimensions of the real world. They would then quickly see how the symmetry gets broken and the standard model emerges at low energy, or so they hoped.

The problem is that there has been evidence for fine-tuning in nature for a long time. One of the earliest known examples was the carbon resonance predicted by Hoyle at precisely the right energy to allow carbon to form in stellar nucleosynthesis. If it were not there, the cosmos would not contain enough carbon for us to exist. Hoyle was right, and the resonance was soon found in nuclear experiments. Since then we have realized that many other parameters of the standard model are seemingly tuned for life. If the strong force were slightly stronger, two neutrons would form a stable bound state, providing a simple form of matter that would replace hydrogen. If the cosmological constant were much larger, galaxies would not have formed; if it were large and negative, the universe would have collapsed before we had time to evolve. There are many more examples. If the standard model had fallen out of heterotic string theory as hoped, we would have to accept these fine-tunings as cosmic coincidences with no possible explanation.

The Multiverse

String theorists did learn how to stabilize the string moduli space, but they were disappointed. Instead of finding a unique stable point to which any other compactification would degenerate, they found that fluxes could stabilize a vast landscape of possible outcomes. There are so many possible stable states for the vacuum that the task of exploring them to find one that fits the standard model seems well beyond our capabilities. Some string theorists saw the bright side of this: it offers the possibility of selection as an explanation of fine-tuning. This is the multiverse theory, which says that all the possible states in the landscape exist equally, and by anthropic arguments we find ourselves in a universe suitable for life simply because there is no intelligent life in the ones that are not fine-tuned.

Others were not so happy. The conclusion seems to be that string theory cannot predict low-energy physics at all. This is unacceptable according to the scientific method, or so they say. There must be a better way out, otherwise string theory has failed and should be abandoned in favor of a search for a completely different alternative. But the string theorists carry on. Why is that? Is it because they are aging professors who have invested too much intellectual capital in their theory? Are young theorists doomed to be corrupted into following the evil ways of string theory by their egotistical masters when they would rather be working on something else? I don’t think so. Physicists did not latch onto string theory just because it is full of enchanting mathematics. They study it because they have come to understand the framework of consistent quantum theories, and they see that it is the only direction that can unify gravity with the other forces. Despite many years of trying, nothing else offers a viable alternative that works (more about LQG in another post).

Many people hate the very idea of the multiverse. I have heard people say that they cannot accept that such a large space of possibilities exists. What they don’t seem to realize is that standard quantum field theory already offers such a space. The state vector of the universe comes from a Hilbert space of vast complexity. Each field variable becomes an operator on a space of states, and the full Hilbert space is the tensor product of all those spaces. Its dimension is the product of the dimensions of all the local spaces, and the state vector has a component amplitude for each dimension in this vast multiverse of possibilities. This is not some imaginary concept. It is the mathematical structure that successfully describes the quantum scattering of particles in the standard model. The only significant difference for the multiverse of string theory is that many of the string theory states describe different stable vacua, whereas in the standard model the stable vacua are identical under gauge symmetry. If string theory is right then the multiverse is not some hypothetical construct that we cannot access. It is the basis of the Hilbert space spanned by the wavefunction.
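To see how quickly that tensor product grows, here is a trivial numerical sketch (a toy model of my own, with two-state variables standing in for field modes; the numbers are purely illustrative):

```python
import numpy as np

# Each local degree of freedom multiplies the dimension of the total
# Hilbert space, so even 300 two-state "field variables" give a state
# space with more basis states than there are atoms in the visible universe.
print(2 ** 300)

# The composite states themselves come from Kronecker (tensor) products:
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
pair = np.kron(up, down)      # a two-site state lives in dimension 2*2 = 4
print(pair.shape)             # (4,)
```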

Some skeptics say that there is no such fine-tuning. They say that if the parameters were different then life would have formed in other ways, and that the apparent fine-tuning that sets the mass of the Higgs boson and the small size of the cosmological constant is just an illusion. There may be some other way to look at the standard model which makes it look natural instead of fine-tuned. I think this is misguided. During the inflationary phase of the universe the wavefunction sat in some metastable state where the vacuum energy produced a huge effective cosmological constant. At the end of inflation it fell to a stable vacuum state whose contribution to the cosmological constant is much smaller. Since this is a non-symmetrical state it is hard to see why opposite-sign contributions from bosons and fermions would cancel. Unless there is some almost miraculous hidden structure, the answer seems to be fine-tuned. The same is true for the Higgs mass and other finely tuned parameters. It is very hard to see how they can be explained naturally if the standard model is uniquely determined.

People can complain as much as they like that the multiverse is unscientific because it does not predict the standard model. Such arguments are worthless if that is how the universe works. The multiverse provides a natural explanation for the unnatural parameters of physics. We do not say that astrophysics is unscientific because it does not give a unique prediction for the size and composition of the Sun. We accept that there is a landscape of possible stellar objects and that we must use observation to determine what our star looks like. The same will be true for the standard model, but that does not stop us from understanding the principles that determine the landscape of possibilities, or from looking for evidence in other places.

What does it mean for life in the universe?

If the landscape of vacua is real and the world is naturally unnatural, it may take many centuries to find convincing evidence, but it will have consequences for life in the universe. If you think that life arises naturally no matter what the parameters of physics are, then you would expect life to take a very diverse range of forms. I don’t just mean that life on Earth is diverse in the way we are familiar with. I mean that there should be different solutions to the chemistry of life that work on other planets. On Earth there is just one basic chemistry, based on DNA and RNA. This also includes the chemistry of metabolism, photosynthesis and other biochemical processes without which life on Earth would be very different. If we find that all higher lifeforms on other planets use these same processes, then we can be sure that physics is fine-tuned for life: if any one of them did not work there would be no life. Either this fine-tuning must arise naturally from a multiverse, or we would have to accept that the existence of life at all is an almost miraculous coincidence. If, on the other hand, we find complex lifeforms based on molecules unlike DNA and supported by completely different mechanisms, then the argument for fine-tuning in nature is weaker.

Theorist Nima Arkani-Hamed recently suggested that it would be worth building a 100 TeV hadron collider even if the only outcome was to verify that there is no new physics up to that energy. It would show that the Higgs mass is fine-tuned to one part in 10,000, and that would be a revolutionary discovery. If it failed to prove that, it would instead find something less exciting, such as SUSY. I don’t think this argument alone will raise the funding required, but if the LHC continues to strengthen the case for fine-tuning we must accept the implications.

Update 29-Jul-2013: I am just updating to add some linkbacks to other bloggers who have followed up on this. Peter Woit takes the usual negative view about what he continues to call “multiverse mania” and summarised my post by saying “Philip Gibbs, … argues that what we are learning from the LHC is that we must give up and embrace the multiverse.” To respond: I don’t think that recognising the importance of the multiverse constitutes giving up anything except the failed idea of naturalness (in future I will always couple the word “naturalness” with the phrase “failed idea”, because this seems to be a successful debating technique). In particular, phenomenologists and experimenters will continue to look for physics beyond the standard model in order to explain dark matter, inflation etc. People working on quantum gravity will continue to explore the same theories and their phenomenology.

Another suggestion we see coming from Woit and his supporters is that the idea that physics may be unnaturally fine-tuned comes from string theory. This is very much not the case. It is being driven by experiment and ordinary TeV-scale phenomenology. If you think that the string theory landscape has helped convert people to the idea, you should check your history and see that the word “landscape” was coined by Lee Smolin in the context of LQG. Anthropic reasoning has also been around since long before string theory. Of course some string theorists do see the string theory landscape as a possible explanation for unnaturalness, but the idea certainly exists in a much wider context.

Woit also has some links to interesting documents about naturalness from the hands of Seiberg and Wilczek.

Lubos Motl posted a much more supportive response to this article. He also offers an interesting idea about fermion masses from Delta(27), which I think tends to go against the idea that the standard model comes from a fine-tuned but otherwise unspecial compactification drawn from the string landscape. It is certainly an interesting possibility though, and it shows that all philosophical options remain open. Certainly there must be some important explanation for why there are three fermion generations, but this is one of several possibilities, including the old one that they form a multiplet of SO(10) and the newer one from geometric unity.


Why I Still Like String Theory

May 16, 2013

There is a new book coming out by Richard Dawid, “String Theory and the Scientific Method”. It has been reviewed by Peter Woit and Lubos Motl, who give their expected opposing views. Apparently Woit gets it through a university library subscription. I can’t really review the book because at £60 it is a bit too expensive. Compare this with the recent book by Lee Smolin, which I did review after paying £12.80 for it. These two books would have exactly the same set of potential readers, but Smolin is just better known, which puts his work into a different category where a different type of publisher accepts it. I don’t really understand why any author would choose to allow publication at a £60 price-tag. They will sell very few copies and get very little back in royalties, especially if most universities have free access. Why not publish a print-on-demand version, which would be cheaper? Even the Kindle version of this book is £42, but you can easily self-publish on Kindle for much less and keep 70% of the profits through Amazon.

My view is as predictable as anyone else’s, since I have previously explained why I like string theory. Of the four reasons I gave previously, the main one is that it solves the problem of how quantum gravity looks in the perturbative limit about a flat space-time, with gravitons interacting with matter. This limit really should exist for any theory of quantum gravity, and it is the realm most like familiar physics, so it is very significant that string theory works there when no other theory does. OK, so perturbative string theory is not fully sewn up, but it works better than anything else. The next best thing is supergravity, which is just an effective theory for superstrings.

My second reason is that string theory supports a holographic principle, which is also required for quantum gravity. This is a much weaker reason because (a) it is in less well-known territory of physics and requires a longer series of assumptions and deductions to get there, and (b) it is not so obvious that other theories won’t also support the holographic principle.

Reason number three has not fared so well. I said I liked string theory because it would match well with TeV-scale SUSY, but the LHC has now all but ruled that out. It is possible that SUSY will appear in LHC run 2 at 13 TeV or later, or that it is just out of reach, but already we know that the Higgs mass in the standard model is fine-tuned. There is no stop or Higgsino where they would be needed to control the Higgs mass. The only question now is: how much fine-tuning is there?

Which brings me to my fourth reason for liking string theory: it predicts a multiverse of vacua in the right quantities to support anthropic reasoning for an unnatural, fine-tuned particle theory. So my last two reasons were really a hedge. The more evidence there is against SUSY, the more evidence there is in favour of the multiverse and the string theory landscape.

Although I don’t have the book, I know from Woit and Motl that Dawid provides three main reasons for supporting string theory that he gathered from string theorists. None of my four reasons are included. His first reason is “The No Alternatives Argument”: apparently we do string theory because, despite its shortcomings, there is nothing else that works. As Lee Smolin pointed out over at NEW, there are alternatives. LQG may succeed, but to do so it must give a low-energy perturbation theory with gravitons or explain why things work differently. Other alternatives mentioned by Smolin are more like toy models, but I would add higher spin gravity as another idea that may be more interesting. Really though, I don’t see these as alternatives. The “alternative theories view” is a social construct that came out of in-fighting between physicists. There is only one right theory of quantum gravity, and if more than one idea seems to have good features, without them meeting at a point where they can be shown to be irreconcilable, then the best view is that they might all be telling us something important about the final answer. For those who have not seen it, I still stand by my satirical video on this subject:

A Double Take on the String Wars

Dawid’s second reason is “The Unexpected Explanatory Coherence Argument”. This means that the maths of string theory works surprisingly well and matches physical requirements in places where it could easily have fallen down. It is a good argument, but I would prefer to cite specific cases such as holography.

The third and final reason Dawid gives is “The Meta-Inductive Argument”. I think what he is pointing out here is that the standard model succeeded because it was based on consistency arguments, such as renormalisability, which reduced the possible models to just one basic idea that worked. The same is true for string theory, so we are on firm ground. Again, I think this is more of a meta-argument, and I prefer to cite specific instances of consistency.

The biggest area of contention centres on the role of the multiverse. I see it as a positive reason to like string theory. Woit argues that it cannot be used to make predictions, so it is unscientific, which means string theory has failed. I think Motl is (like many string theorists) reluctant to accept the multiverse and prefers that the standard model should fall out of string theory in a unique way. I would have preferred that too 15 years ago, but I think the evidence increasingly favours high levels of fine-tuning, so the multiverse is a necessity. We have to accept what appears to be right, not what we prefer. I have been learning to love it.

I don’t know how Dawid defines the scientific method. It goes back many centuries and has been refined in different ways by different philosophers. It is clear that if a theory is shown to be inconsistent, either because it has a logical fault or because it makes a prediction that is wrong, then the theory has to be thrown out. But what happens if a theory is eventually found to be uniquely consistent with all known observations while its characteristic predictions are all beyond technical means? Is that theory wrong or right? Mach said that the theory of atoms was wrong because we could never observe them. It turned out that we could observe them, but what if we couldn’t for practical reasons? It seems to me that there are useful things a philosopher could say about such questions, and to be fair to Dawid he has articles freely available online that address this question, e.g. here, so even if the book is out of reach there is some useful material to look through. Unfortunately my head hits the desk whenever I read the words “structural realism”, my bad.

Update: see also this video interview with Nima Arkani-Hamed for a view I can happily agree with:

 https://www.youtube.com/watch?v=rKvflWg95hs


We need to find the Theory of Everything

January 27, 2013

Each week the New Scientist runs a one-minute interview with a scientist, and last week it was Lisa Randall, who told us that we shouldn’t be obsessed with finding a theory of everything. It is certainly true that there is a lot more to physics than this goal, but it is an important one, and I think more effort should be made to get the right people together to solve this problem now. It is highly unlikely that NS will ever feature me in their column, but there is nothing to stop me answering questions put to others, so here are the answers I would give to the questions asked of Lisa Randall, which also touch on the recent discovery of the Higgs(-very-like) boson.

Doesn’t every physicist dream of one neat theory of everything?

Most physicists work on completely different things but ever since Einstein’s attempts at a unified field theory (and probably well before) many physicists at the leading edge of theoretical physics have indeed had this dream. In recent years scientific goals have been dictated more by funding agencies who want realistic proposals for projects. They have also noticed that all previous hopes that we were close to a final theory have been dashed by further discoveries that were not foreseen at the time. So physicists have drifted away from such lofty dreams.

So is a theory of everything a myth?

No. Although the so-called final theory won’t explain everything in physics, it is still the most important milestone we have to reach. Yes, it is a challenging journey and we don’t know how far away it is, but it could be just round the corner. We must always try to keep moving in the right direction. Finding it is crucial to making observable predictions based on quantum aspects of gravity. Instead people are trying to do quantum gravity phenomenology based on very incomplete theories, and it is just not working out.

But isn’t beautiful mathematics supposed to lead us to the truth?

Beauty and simplicity have played their part in the work of individual physicists such as Einstein and Dirac, but what really counts is consistency. By that I mean consistency with experiment and mathematical self-consistency. Gauge theories were used in the standard model not really because they embody the beauty of symmetry, but because gauge theories are the only renormalisable theories for the vector bosons that were seen to exist. It was only when the standard model was shown to be renormalisable that it became popular and replaced other approaches. Only renormalisable theories in particle physics can lead to finite calculations that predict the outcome of experiments, but there are still many renormalisable theories, and only consistency with experiment can complete the picture. Consistency is also the guide that takes us into theories beyond the standard model, such as string theory, which is needed for quantum gravity to be consistent at the perturbative level, and the holographic principle, which is needed for a consistent theory of black hole thermodynamics.

Is it a problem, then, that our best theories of particle physics and cosmology are so messy?

Relatively speaking they are not messy at all. A few short equations are enough to account for almost everything we can observe over an enormous range of scales, from particle physics to cosmology. The driving force now is the need to combine gravity and the other forces in a form that is consistent non-perturbatively, and to explain the few observational facts that the standard models don’t account for, such as dark matter and inflation. This may lead to a final theory that is more unified, but some aspects of physics may be fixed by historical accidents rather than by the final theory, in which case particle physics could always be just as messy and complicated as biology. Even aside from those aspects, the final theory itself is unlikely to be simple in the sense that you could describe it fully to a non-expert.

Did the discovery of the Higgs boson – the “missing ingredient” of particle physics – take you by surprise last July?

We knew that it would be discovered or ruled out by the end of 2012 in the worst case. In the end it was found a little sooner. This was partly because it was not quite at the hardest place to find in the mass range, which would have been around 118 GeV. Another factor was that the diphoton excess was about 70% bigger than expected. If it had been as predicted, they would have needed about three times as much data to get it from the diphoton excess alone, though the ZZ channel would have helped. This over-excess could be just the luck of the statistics or due to theoretical underestimates, but it could also be a sign of new physics beyond the standard model. Another factor that helped them push towards the finish line in June was that it became clear that a CMS+ATLAS combination was going to be sufficient for discovery. If they could not reach the 5-sigma goal in at least one of the individual experiments, they would have had to face the embarrassment of an unofficial discovery announced on this blog and elsewhere. This drove them to use the harder multivariate analysis methods and include everything that bolstered the diphoton channel, so that in the end they both got the discovery in July and not a few weeks later, when an official combination could have been prepared.
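The arithmetic behind those statements is simple enough to sketch (the numbers below are my own illustrative assumptions, not the experiments’ actual analyses). Significance scales roughly like signal/√background, so for a fixed signal strength it grows like √luminosity, and independent experiments combine roughly in quadrature:

```python
from math import sqrt

def combine(sigmas):
    # independent significances add roughly in quadrature
    return sqrt(sum(s * s for s in sigmas))

# two hypothetical ~3.6 sigma results together clear the 5 sigma bar
print(combine([3.6, 3.6]))        # ≈ 5.1

# a diphoton excess 1.7x the predicted size boosts significance by ~1.7,
# so matching it at the predicted rate needs ~1.7^2 ≈ 3x the luminosity
print(1.7 ** 2)                   # ≈ 2.9
```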

Are you worried that the Higgs is the only discovery so far at the LHC?

It is a pity that nothing else has been found so far because the discovery of any new particles beyond the standard model would immediately lead to a new blast of theoretical work that could take us up to the next scale. If nothing else is found at the LHC after all its future upgrades it could be the end of accelerator driven physics until they invent a way of reaching much higher energies. However, negative results are not completely null. They have already ruled out whole classes of theories that could have been correct and even if there is nothing else to be seen at the electroweak scale it will force us to some surprising conclusions. It could mean that physics is fine tuned at the electroweak scale just as it is at the atomic scale. This would not be a popular outcome but you can’t argue with experiment and accepting it would enable us to move forward. Further discoveries would have to come from cosmology where inflation and dark matter remain unexplained. If accelerators have had their day then other experiments that look to the skies will take over and physics will still progress, just not quite as fast as we had hoped.

What would an extra dimension look like?

They would show up as heavy particles that are otherwise similar to known particles, plus perhaps even black holes and massive gravitons at the LHC. But the theory of large extra dimensions was always an outsider with just a few supporters. Theories with extra dimensions such as string theory probably only show these features at much higher energy scales that are inaccessible to any collider.

What if we don’t see one? Some argue that seeing nothing else at the LHC would be best, as it would motivate new ideas.

I think you are making that up. I never heard anyone say that finding nothing beyond the Higgs would be the best result. I did hear some people say that finding no Higgs would be the best result, because it would have been so unexpected and would have forced us to find the alternative correct theory that would have had to be there. The truth of course is that this was a completely hypothetical situation. The reason we did not have a good alternative to the Higgs mechanism is that there isn’t one, and the Higgs boson is in fact the correct answer.

Update: Motl has a followup with similar views and some additional points here


String Theorists get biggest new science prize

July 31, 2012

Yuri Milner is a Russian hi-tech investor who dropped out of physics classes as a student. He must have done quite well with his investments, because he has just given away $27,000,000 in prizes to nine physicists, in $3,000,000 chunks. He plans to do the same every year, making his the biggest recurring science prize of them all. Recipients of the prize this year, which is given for fundamental physics, are Ed Witten, Alan Guth, Nima Arkani-Hamed, Juan Maldacena, Nathan Seiberg, Maxim Kontsevich, Ashoke Sen, Alexei Kitaev and Andrei Linde. Congratulations to them all.

Past winners will select future winners so we can expect to see a lot of rich people in string theory and cosmology in the coming years.


String Theory returns to symmetry

July 31, 2012

The Strings 2012 conference has finished, and it is great to see that all the talks are online as slides and videos. Despite what you hear from some quarters, string theory is alive and progressing, with many of the brightest young people in physics still wanting to do strings. Incredibly, the next three Strings conferences, in Korea, the US and India, are already being organised. How many conference series have that many groups keen to organise them?

It has become a tradition for David Gross to give some kind of outlook talk at these conferences, and this time he said there were three questions he would like to see answered in his lifetime:

  • How do the forces of nature unify?
  • How did the universe begin and how will it end?
  • What is string theory?

The last of these questions is one he has been asking for quite a few years now. We know string theory only as a small set of perturbative formulations linked together by non-perturbative dualities. There has to be an underlying theory based on some unifying principle, and it is important to find it if we are to understand how string theory works at the all-important Planck scale. This time Gross told us that he has heard of something that may answer the question. Firstly, he now thinks the correct question to ask is “What are the underlying symmetries of string theory?”, and he thinks that work on higher spin symmetries could lead to the answer. What is this about?

For about 16 years it has been known that an important element of quantum gravity is the holographic principle. This says that, in order to avoid information loss in black holes, the amount of information in any volume of space must be bounded by the area of a surface that surrounds it, in Planck units. This might mean that the theory in the bulk of spacetime is equivalent to a different theory on the boundary. How can that happen? How can it be that all the field variables in the volume of spacetime carry only an amount of information that can be contained on the surface? We can reason that measurement below the Planck length is not possible, but even then there should be at least a few valid field parameters for each Planck volume of space. If the holographic principle is right, there must be a huge amount of redundancy in this volumetric description of field theory.

Redundancy can be taken to imply symmetry. Each degree of symmetry, each dimension of the group’s Lie algebra, tells us that one field variable is redundant and can be removed by gauge fixing. In gauge theories we get one set of redundant parameters for each point in spacetime, but if the holographic principle is correct there must be a redundancy for almost every field variable in the bulk of spacetime, and we will need it to be a supersymmetry to deal with the fermions. I call this complete symmetry, and I’ve no idea if anyone else appreciates its significance. It means that the fields of the theory are given by a single adjoint representation of the symmetry. This does not happen in normal gauge theories or in general relativity or even supergravity, but it does happen in Chern-Simons theory in 3D, which can be reduced to a 2D WZW model on the boundary, so perhaps something is possible.

Some people think that the redundancy aspect of symmetry means that it is unimportant. They think that the field theory can be reformulated in a different way without the symmetry at all. This is incorrect. The redundant nature of the local symmetry hides the fact that it has global characteristics that are not redundant. In holographic theories you can remove all the local degrees of freedom over a volume of space, but you are left with a meaningful theory on the boundary.
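To get a feel for how extravagant the bound is, here is a back-of-envelope sketch with my own illustrative numbers (a one-metre sphere, and the standard Bekenstein-Hawking entropy S = A/4 in Planck units):

```python
from math import log, pi

l_p = 1.616e-35                 # Planck length in metres
R = 1.0                         # radius of the bounding sphere in metres
A = 4 * pi * R ** 2             # area of the boundary surface

S = A / (4 * l_p ** 2)          # Bekenstein-Hawking entropy in nats
bits = S / log(2)
print(f"{bits:.1e} bits")       # ~ 1.7e70 bits for a 1 m sphere
```

That is a huge number, but it is still vastly smaller than the naive count of one or more field variables per Planck volume (around 10^105 for the same sphere), which is exactly the redundancy the argument above is about.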

If there is symmetry for every degree of freedom in the bulk then the generators of the symmetries must match the spin characteristics of the fields. Supergravity only has symmetries corresponding to spin half and spin one fields but it has fields from spin zero scalars up to spin two. String theory goes even further with higher excitations of the string providing an infinite sequence of possible states with unlimited spin. This may be why the idea of higher spin symmetries is now seen as a possible solution to the problem.

Surprisingly, the idea of higher spin symmetry as a theory of quantum gravity is far from new. It goes back to the 1980s, when it was developed by Vasiliev and Fradkin. It is a difficult and messy idea, but recent progress means that it is now becoming popular, both in its own right and as a possible new understanding of string theory.

There is one other line of development that could lead to a new understanding of the subject, namely the work on supersymmetric scattering amplitudes. Motl has been following this line of research, which he calls the twistor mini-revolution, for some time, and has a nice summary of the conference talk on the subject by Nima Arkani-Hamed. It evolved partly out of the need to calculate scattering amplitudes for the LHC, where people noticed that long pages of solutions could be simplified to some very short expressions. After much thought, these expressions seem to be about permutations and Grassmannians, with things like infinite-dimensional Yangian symmetry playing a big role. Arkani-Hamed believes that this is also applicable to string theory and could explain the holographic principle. The Grassmannians also link nicely to algebraic geometry and possibly to work on hyperdeterminants and qubits.

I have to confess that as an undergraduate at Cambridge University in the late 1970s I was completely brainwashed into the idea that symmetry is the route to the underlying principles of nature. At the time the peak of this idea was supergravity, and Stephen Hawking, who had just been inaugurated into the Lucasian chair, was one of its greatest advocates. When string theory took over shortly after, people looked for symmetry principles there too, but without convincing success. It is true that there are plenty of symmetries in string theory, including supersymmetry of course, but different sectors of string theory have different symmetries, so symmetry seems more emergent than an underlying principle. I think the generations of undergraduates after mine were given a much more prosaic view of the role of symmetry, and they stopped looking out for it as a source of deep principles.

Due to my brainwashing I have never been able to get over the idea that symmetry will play a huge role in the final theory. I think that all the visible symmetries in string theory are remnants of a much larger hidden symmetry, so that only different residual parts of it are seen in different sectors. In the 1990s I developed my own idea of how infinite-dimensional symmetries from necklace algebras could describe string theory in a pregeometric phase. The permutation group played a central role in those ideas and was extended to larger string-inspired groups, with the algebra of string creation operators also generating the Lie algebra of the symmetry. Now that I know about the importance of complete symmetry and higher spin symmetry, I recognise that these aspects of the theory could also be significant. Perhaps it is just a matter of time now before string theorists finally catch up with what I did nearly twenty years ago :)

In any case it is good to see that there is now some real hope that the very hard problem of understanding string theory from the bottom up may finally be solved. It will be very interesting to see how these ideas mature over the next few Strings conferences.


Bayes and String Theory

June 12, 2012

If supersymmetry is found or excluded at the Large Hadron Collider, how will it affect your opinion on string theory as a unification of gravity and particle physics? This is a hard question, and opinions differ widely across the range of theorists, but at the least any answer should be consistent with the laws of probability, including Bayes’ law. What can we really say?

A staunch string theorist might want to respond as follows:

“I am confident about the relevance of superstring theory to the unification of gravity and the forces of elementary particles because it provides a unique way to accomplish this that is consistent in the perturbative limits (amongst other reasons). Unfortunately it does not have a unique solution for the vacuum and we have not yet found a principle for selecting the solution that applies to our universe. Because of this we cannot predict the low energy effective physics and we cannot even know if supersymmetry is an observable feature of physics at energy scales currently accessible. Therefore if supersymmetry is not observed at the TeV scale even after the LHC has explored all channels up to 14 TeV with high integrated luminosities, there is no reason for that to make me doubt string theory. On the other hand, if supersymmetry is observed I will be enormously encouraged. This is because there are good reasons to think that supersymmetry will be restored as an exact gauge symmetry at some higher scale, and gauged supersymmetry inevitably includes gravity within some version of supergravity. There are further good reasons why supergravity is not likely to be fully consistent on its own and would necessarily be completed only as a limit of superstring theory. Therefore if supersymmetry is discovered by the LHC my confidence in string theory will be greatly improved.”

On hearing this a string theory skeptic would surely be seen shaking his head vigorously. He would say:

“You cannot have it both ways! If you believe that the discovery of supersymmetry would confirm string theory, then you must also accept that failure to discover it falsifies string theory. Any link between the two must work equally in both directions. You are free to say that supersymmetry at the electro-weak scale is a theory completely independent of string theory if you wish. In that case you are safe if supersymmetry is not found, but by the same rule the discovery of supersymmetry cannot be used to claim that superstring theory is right. If you prefer, you can claim that superstring theory predicts supersymmetry (some string theorists do), but if that is your position you must also accept that excluding supersymmetry at the LHC will mean that string theory has failed. You can take a position in between, but it must work equally in both directions.”

The Tetrahedron of Possibilities

What does probability theory tell us about the range of possibilities that a theorist can consider as answers to this problem? Prior to the experimental result he will have some estimate for the probability that string theory is a correct theory of quantum gravity, and for the probability that supersymmetry will be observed at the LHC. In my case I assign a probability of P_ST = 0.9 to the proposition that string theory is correct and P_SUSY = 0.7 to the probability that SUSY will be seen at the LHC. These are my prior probabilities based on my knowledge and reasoning. You can have different values for your estimates because you know different things, but you can’t argue with mine. There are no absolutely correct global values for these probabilities; they are a relative concept.

However, these two probabilities do not describe everything I need to know. There are four logical outcomes I need to consider altogether:

  • P1 = the probability that both string theory is correct and SUSY will be found
  • P2 = the probability that string theory is correct and SUSY will not be found
  • P3 = the probability that string theory is wrong and SUSY will be found
  • P4 = the probability that string theory is wrong and SUSY will not be found

You might try to tell me that there are other possibilities, such as that SUSY exists at higher energies or that string theory is somehow partly right, but I could define my conditions for the correctness of string theory and for the discovery of SUSY so that they are unambiguous. I will assume that has been done. This means that the four possible outcomes are mutually exclusive and exhaustive, so P1 + P2 + P3 + P4 = 1. Of course the four probabilities must also each be between 0 and 1. These conditions map out a three-dimensional tetrahedron in the four-dimensional space of the probability variables, with the four logical outcomes at its vertices. This is the tetrahedron of possible prior probabilities, and any theorist’s prior assessment of the situation is described by a single point within it.

So far I have only given two values that describe my own assessment, so to pinpoint my complete position within the three-dimensional range I must give one more. If I thought that string theory and SUSY at the weak scale were completely independent theories I could just multiply as follows:

P1 = P_ST × P_SUSY = 0.63
P2 = P_ST × (1 − P_SUSY) = 0.27
P3 = (1 − P_ST) × P_SUSY = 0.07
P4 = (1 − P_ST) × (1 − P_SUSY) = 0.03

The condition that the two theories are independent falls on the surface given by the equation P1 × P4 = P2 × P3, which neatly divides the tetrahedron in two.

As I have already explained, I do not think these two things are independent. I think that SUSY would strongly imply string theory. In other words, I think that the probability of SUSY being found while string theory is wrong is much lower than the value of 0.07 for P3. In fact I estimate it to be something like P3 = 0.01. I must still keep the other probabilities fixed, so P1 + P2 = P_ST = 0.9 and P1 + P3 = P_SUSY = 0.7. This means that all my probabilities are now known:

P1 = 0.69
P2 = 0.21
P3 = 0.01
P4 = 0.09

Notice that I did not get to fix P1 separately from P3. If I know how much the discovery of SUSY would boost my confidence in string theory, then I also know how much its non-discovery would dent it. It is starting to sound like the string theory skeptic could be right, but wait. Let’s see what happens after the LHC has finished looking.

Suppose SUSY is now discovered; how does this affect my confidence? My posterior probabilities P’2 and P’4 both become zero, and by the rules of conditional probability P’_ST = P1/P_SUSY = 0.69/0.7 ≈ 0.986. In other words my confidence in string theory jumps from 90% to 98.6%, quite a significant increase. But what happens if SUSY is found to be inaccessible to the LHC? In that case we end up with P’_ST = P2/(1 − P_SUSY) = 0.21/0.3 = 0.7. This means that my confidence in string theory would indeed be dented, but it is far from falsified. I should still consider string theory to have much better than even odds. So the skeptic is not right. The string theorist can argue that finding SUSY would be a good boost for string theory without string theory being falsified if SUSY is excluded, but he has to make a small concession too: his confidence in string theory has to be lower if SUSY is not found.
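For anyone who wants to play with their own priors, the whole calculation fits in a few lines of Python. This is just a sketch of the arithmetic above, using my numbers (P_ST = 0.9, P_SUSY = 0.7 and P3 = 0.01 are the assessments quoted in the text; substitute your own):

```python
P_ST, P_SUSY = 0.9, 0.7            # my priors for string theory and SUSY
P3 = 0.01                          # P(SUSY found AND string theory wrong)

P1 = P_SUSY - P3                   # from the constraint P1 + P3 = P_SUSY
P2 = P_ST - P1                     # from the constraint P1 + P2 = P_ST
P4 = 1.0 - P1 - P2 - P3            # the tetrahedron condition sum = 1

assert all(0 <= p <= 1 for p in (P1, P2, P3, P4))

posterior_if_found    = P1 / P_SUSY         # = 0.69 / 0.7 ≈ 0.986
posterior_if_excluded = P2 / (1 - P_SUSY)   # = 0.21 / 0.3 = 0.70
print(posterior_if_found, posterior_if_excluded)
```

Changing P_ST, P_SUSY and P3 moves the point around the tetrahedron, but the two posteriors are always locked together through P1 and P2, which is the point of the argument.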

Remember, I am not claiming that these probabilities are universally correct. They represent my assessment, and I am not a fully fledged string theorist. Someone who has studied string theory more deeply may have a higher prior confidence, in which case excluding SUSY will not make much difference to him, even if he believes SUSY would strongly imply string theory.


Witten and Knots

November 16, 2011

If you are at all interested in mathematical physics you will want to watch Ed Witten’s recent talk on his work in knot theory, given at the IAS. Witten gives a general overview of how he discovered that the Jones polynomial used to classify knots can be “explained” as a path integral using Chern-Simons theory in 3D. More recently the Jones polynomial was generalised to Khovanov homology, which describes a knotted membrane in 4D, and Witten wanted to find a similar explanation. He was stuck until some work he did on geometric Langlands gave him the tools to solve (or partially solve) the riddle.

Geometric Langlands was devised as a simpler variation on the original Langlands program, a wide-ranging set of ideas trying to unify concepts in number theory. Witten makes some interesting comments during the question time. He says that one of the main reasons physicists (such as himself) are able to use string theory to answer questions in mathematics is that string theory is not properly understood: if it were, the mathematicians would be able to use it in this way themselves. Referring to the deeper relationship between string theory and Langlands, he said:

“I had in mind something a little bit more ambitious, like whether physics could affect number theory at a really serious structural level, like shedding light on the Langlands program. I’m only going to give you a physicist’s answer, but personally I think it is unlikely that it is an accident that geometric Langlands has a natural description in terms of quantum physics, and I am confident that that description is natural even though I think it might take a long time for the math world to properly understand it. So I think there is a very large gap between these fields of maths and physics. I think if anything the gap is larger than most people appreciate, and therefore I think that the pieces we actually see are only fragments of a much bigger totality.”

See also NEW


Dirac Medal for Chris Isham

July 1, 2011

Chris Isham has been awarded this year’s Dirac Medal of the Institute of Physics for his work on quantum gravity. For information about his many contributions to the field you can just look at the IOP page about the award.

In addition to the Dirac Medal, the IOP has just announced a whole slew of other medals named after British physicists. The Newton Medal this year goes to Leo Kadanoff, who noticed the important role of scale invariance and universality in critical systems. The Faraday Medal goes to Alan Watson for leadership of the Pierre Auger Observatory, which studies ultra-high-energy cosmic rays. The Chadwick Medal is won by Terry Wyatt for work on hadron colliders. Another Imperial College professor being honoured is Arkady Tseytlin, who gets the Rayleigh Medal for string theory research.

There are a load more which you can read about here. Congratulations to them all.


Strings 2011

June 27, 2011

The Strings 2011 conference opened today in Sweden. You can watch videos of the talks live, or recorded straight after, starting with the introduction by David Gross.

Gross asked the usual questions, starting with number one, “What is String Theory?”, and ending with number 11, “What will we learn from the LHC?”

He is a bit disappointed that the LHC has not found anything surprising yet, but he still holds high hopes for SUSY.

There have been promising new results in attempts to solve large-N SUSY gauge theory, which has beautiful mathematics: twistors and polytopes in the Grassmannian, for example. Since this theory is dual to string theory, Gross thinks these discoveries could tell us about the fundamentals of string theory.

He goes on to mention entropic gravity, which he said was also promising, though he had a little smirk on his face when he said it and implied that it is ambitious. There will be a talk from Verlinde later in the conference.

Apparently it is unfortunate that we seem to live in de Sitter space; the theories work much better in anti-de Sitter space. There are lots of questions, but “the most important product of knowledge is ignorance”; then again, it would be nice to have some answers, he said at the end.
