Fundamental Physics 2013: What is the Big Picture?

November 26, 2013

2013 has been a great year for viXra. We already have more than 2000 new papers, taking the total to over 6000. Many of them are about physics but other areas are also well covered. The range is bigger and better than ever and could never be summarised, so as the year draws to its end, here instead is a snapshot of my own view of fundamental physics in 2013. Many physicists are reluctant to speculate about the big picture and how they see it developing. I think it would be useful if they were more willing to stick their necks out, so this is my contribution. I don’t expect much agreement from anybody, but I hope that it will stimulate some interesting discussion and thoughts. If you don’t like it you can always write your own summary of physics or any other area of science and submit it to viXra.

The discovery of the Higgs boson marks a watershed moment for fundamental physics. The standard model is complete but many mysteries remain. Most notably the following questions are unanswered and appear to require new physics beyond the standard model:

  • What is dark matter?
  • What was the mechanism of cosmic inflation?
  • What mechanism led to the early production of galaxies and structure?
  • Why does the strong interaction not break CP?
  • What is the mechanism that led to matter dominating over anti-matter?
  • What is the correct theory of neutrino mass?
  • How can we explain fine-tuning of e.g. the Higgs mass and cosmological constant?
  • How are the four forces and matter unified?
  • How can gravity be quantised?
  • How is information loss avoided for black holes?
  • What is the small scale structure of spacetime?
  • What is the large scale structure of spacetime?
  • How should we explain the existence of the universe?

It is not unreasonable to hope that some further experimental input may provide clues that lead to some new answers. The Large Hadron Collider still has decades of life ahead of it while astronomical observation is entering a golden age with powerful new telescopes peering deep into the cosmos. We should expect direct detection of gravitational waves and perhaps dark matter, or at least indirect clues in the cosmic ray spectrum.

But the time scale for new discoveries is lengthening and the cost is growing. It might be unrealistic to imagine the construction of new colliders on larger scales than the LHC. A theist vs atheist divide increasingly polarises Western politics and science. It has already pushed the centre of big science out of the United States over to Europe. As the jet stream invariably blows weather systems across the Atlantic, so too will their political ideals follow, albeit at a slower pace. It is no longer sufficient to justify fundamental science as a pursuit of pure knowledge when the men with the purse strings see it as an attack on their religion. The centre of fundamental experimental science is beginning to shift further East, and its future hopes will be found in Asia along with the economic prosperity that depends on it. The GDP of China is predicted to surpass that of the US and the EU within 5 years.

But there is another avenue for progress. While experiment is limited by the reality of global economics, theory is limited only by our intellect and imagination. The beasts of mathematical consistency have been harnessed before to pull us through. We are not limited by just what we can see directly, but there are many routes to explore. Without the power of observation the search may be longer, but the constraints imposed by what we have already seen are tight. Already we have strings, loops, twistors and more. There are no dead ends. The paths converge back together taking us along one main highway that will lead eventually to an understanding of how nature works at its deepest levels. Experiment will be needed to show us what solutions nature has chosen, but the equations themselves are already signposted. We just have to learn how to read them and follow their course. I think it will require open minds willing to move away from the voice of their intuition, but the answer will be built on what has come before.

Thirteen years ago at the turn of the millennium I thought it was a good time to make some predictions about how theoretical physics would develop. I accept the mainstream views of physicists but have unique ideas of how the pieces of the jigsaw fit together to form the big picture. My millennium notes reflected this. Since then much new work has been done and some of my original ideas have been explored by others, especially permutation symmetry of spacetime events (event symmetry), the mathematical theory of theories, and multiple quantisation through category theory. I now have a clearer idea about how I think these pieces fit in. On the other hand, my idea at the time of a unique discrete and natural structure underlying physics has collapsed. Naturalness has failed in both theory and experiment and is now replaced by a multiverse view which explains the fine-tuning of the laws of the universe. I have adapted and changed my view in the face of this experimental result. Others have refused to.

Every theorist working on fundamental physics has a set of ideas or principles that guides their work and each one is different. I do not suppose that I have a gift of insight that allows me to see possibilities that others miss. It is more likely that the whole thing is a delusion, but perhaps there are some ideas that could be right. In any case I believe that open speculation is an important part of theoretical research and even if it is all wrong it may help others to crystallise their own opposing views more clearly. For me this is just a way to record my current thinking so that I can look back later and see how it succeeded or changed.

The purpose of this article then is to give my own views on a number of theoretical ideas that relate to the questions I listed. The style will be pedagogical without detailed analysis, mainly because such details are not known. I will also be short on references; after all, nobody is going to cite this. Here then are my views.

Causality

Causality has been discussed by philosophers since ancient times and many different types of causality have been described. In terms of modern physics there are only two types of causality to worry about. Temporal causality is the idea that effects are due to prior causes, i.e. all phenomena are caused by things that happened earlier. Ontological causality is about explaining things in terms of simpler principles. This is also known as reductionism. It does not involve time and it is completely independent of temporal causality. What I want to talk about here is temporal causality.

Temporal causality is a very real aspect of nature and it is important in most of science. Good scientists know that it is important not to confuse correlation with causation. Proper studies of cause and effect must always use a control to eliminate this easy mistake. Many physicists, cosmologists and philosophers think that temporal causality is also important when studying the cosmological origins of the universe. They talk of the evolving cosmos, eternal inflation, or numerous models of pre-big-bang physics or cyclic cosmologies. All of these ideas are driven by thinking in terms of temporal causality. In quantum gravity we find Causal Sets and Causal Dynamical Triangulations, more ideas that try to build in temporal causality at a fundamental level. All of them are misguided.

The problem is that we already understand that temporal causality is linked firmly to the thermodynamic arrow of time. This is a feature of the second law of thermodynamics, and thermodynamics is a statistical theory that emerges at macroscopic scales from the interactions of many particles. The fundamental laws themselves can be time reversed (combined with charge conjugation and parity, to be exact). Physical law should not be thought of in terms of a set of initial conditions and dynamical equations that determine evolution forward in time. It is really a sum over all possible histories between past and future boundary states. The fundamental laws of physics are time symmetric and temporal causality is emergent. The origin of time’s arrow can be traced back to the influence of the big bang singularity, where complete symmetry dictated low entropy.
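
To state that in its standard form (a textbook formula, not anything specific to the argument here): the quantum amplitude connecting a past boundary state to a future one is a sum over every history in between,

$$\langle \phi_f, t_f \,|\, \phi_i, t_i \rangle \;=\; \int_{\phi(t_i)=\phi_i}^{\phi(t_f)=\phi_f} \mathcal{D}\phi \; e^{iS[\phi]/\hbar},$$

and for a local Lorentz-invariant theory the action $S$ is invariant under the combined CPT transformation, so nothing in this expression singles out a direction of time.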

The situation is even more desperate if you are working on quantum gravity or cosmological origins. In quantum gravity space and time should also be emergent, then the very description of temporal causality ceases to make sense because there is no time to express it in terms of. In cosmology we should not think of explaining the universe in terms of what caused the big bang or what came before. Time itself begins and ends at spacetime singularities.

Symmetry

When I was a student around 1980 symmetry was a big thing in physics. The twentieth century started with the realisation that spacetime symmetry was the key to understanding gravity. As it progressed gauge symmetry appeared to eventually explain the other forces. The message was that if you knew the symmetry group of the universe and its action then you knew everything. Yang-Mills theory only settled the bosonic sector but with supersymmetry even the fermionic side would follow, perhaps uniquely.

It was not to last. When superstring theory replaced supergravity the pendulum began its swing back, taking away symmetry as a fundamental principle. It was not that superstring theory did not use symmetry: it had the old gauge symmetries, supersymmetries, new infinite dimensional symmetries, dualities, mirror symmetry and more, but there did not seem to be a unifying symmetry principle from which it could be derived. There was even an argument called Witten’s Puzzle based on topology change that seemed to rule out a universal symmetry. The spacetime diffeomorphism group is different for each topology so how could there be a bigger symmetry independent of the solution?

The campaign against symmetry strengthened as the new millennium began. Now we are told to regard gauge symmetry as a mere redundancy introduced to make quantum field theory appear local. Instead we need to embrace a more fundamental formalism based on the amplituhedron where gauge symmetry has no presence.

While I embrace the progress in understanding that string theory and the new scattering amplitude breakthroughs are bringing, I do not accept the point of view that symmetry has lost its role as a fundamental principle. In the 1990s I proposed a solution to Witten’s puzzle that sees the universal symmetry for spacetime as permutation symmetry of spacetime events. This can be enlarged to large-N matrix groups to include gauge theories. In this view spacetime is emergent like the dynamics of a soap bubble formed from intermolecular interaction. The permutation symmetry of spacetime is also identified with the permutation symmetry of identical particles or instantons or particle states.

My idea was not widely accepted even when, shortly afterwards, matrix models for M-theory were proposed that embodied the principle of event symmetry exactly as I envisioned. Later the same idea was reinvented in a different form for quantum graphity, with permutation symmetry over points in space for random graph models, but the fundamental idea is still not widely recognised.

While the amplituhedron formulation removes the usual gauge symmetry it introduces new dual conformal symmetries described by Yangian algebras. These are quantum symmetries unseen in the classical Super-Yang-Mills theory but they combine permutation symmetry over states with spacetime symmetries in the same way as event-symmetry. In my opinion different dual descriptions of quantum field theories are just different solutions to a single pregeometric theory with a huge and pervasive universal symmetry. The different solutions preserve different sectors of this symmetry. When we see different symmetries in different dual theories we should not conclude that symmetry is less fundamental. Instead we should look for the greater symmetry that unifies them.

After moving from permutation symmetry to matrix symmetries I took one further step. I developed algebraic symmetries in the form of necklace Lie algebras with a stringy feel to them. These have not yet been connected to the mainstream developments but I suspect that these symmetries will be what is required to generalise the Yangian symmetries to a string theory version of the amplituhedron. Time will tell if I am right.

Cosmology

We know so much about cosmology, yet so little. The cosmic horizon limits our view to an observable universe that seems vast but which may be a tiny part of the whole. The heat of the big bang draws an opaque veil over the first few hundred thousand years of the universe. Most of the matter around us is dark and hidden. Yet within the region we see, the ΛCDM standard model accounts well enough for the formation of galaxies and stars. Beyond the horizon we can reasonably assume that the universe continues the same for many more billions of light years, and the early big bang back to the first few minutes or even seconds seems to be understood.

Cosmologists are conservative people. Radical changes in thinking such as dark matter, dark energy, inflation and even the big bang itself were only widely accepted after observation forced the conclusion, even though evidence built up over decades in some cases. Even now many happily assume that the universe extends to infinity looking the same as it does around here, that the big bang is a unique first event in the universe, that space-time has always been roughly smooth, that the big bang started hot, and that inflation was driven by scalar fields. These are assumptions that I question, and there may be other assumptions that should be questioned. These are not radical ideas. They do not contradict any observation, they just contradict the dogma that too many cosmologists live by.

The theory of cosmic inflation was one of the greatest leaps in imagination that has advanced cosmology. It solved many mysteries of the early universe at a stroke and its predictions have been beautifully confirmed by observations of the background radiation. Yet the mechanism that drives inflation is not understood.

It is assumed that inflation was driven by a scalar inflaton field. The Higgs field is mostly ruled out (exotic couplings to gravity notwithstanding), but it is easy to imagine that other scalar fields remain to be found. The problem lies with the smooth exit from the inflationary period. A scalar inflaton drives a de Sitter universe. What would coordinate a graceful exit to a nice smooth universe? Nobody knows.

I think the biggest clue is that the standard cosmological model has a preferred rest frame defined by comoving galaxies and the cosmic background radiation. It is not perfect on small scales but over hundreds of millions of light years it appears rigid and clear. What was the origin of this reference frame? A de Sitter inflationary model does not possess such a frame, yet something must have co-ordinated its emergence as inflation ended. These ideas simply do not fit together if the standard view of inflation is correct.

In my opinion this tells us that inflation was not driven by a scalar field at all. The Lorentz geometry during the inflationary period must have been spontaneously broken by a vector field with a non-zero component pointing in the time direction. Inflation must have evolved in a systematic and homogeneous way through time while keeping this field’s direction constant over large distances, smoothing out any deviations as space expanded. The field may have been a fundamental gauge vector or a composite condensate of fermions with a non-zero vector expectation value in the vacuum. Eventually a phase transition ended the symmetry breaking phase and Lorentz symmetry was restored to the vacuum, leaving a remnant of the broken symmetry in the matter and radiation that then filled the cosmos.

The required vector field may be one we have not yet found, but some of the required features are possessed by the massive gauge bosons of the weak interaction. The mass term for a vector field can provide an instability favouring timelike vector fields because the signature of the metric reverses sign in the time direction. I am by no means convinced that the standard model cannot explain inflation in this way, but the mechanism could be complicated to model.
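
As an illustration of the kind of mechanism being described, and only as an illustration (this is a generic “bumblebee”-type Lagrangian of the sort studied in the Lorentz-violation literature, not a specific proposal made above), a vector field can be given a potential whose minimum forces a timelike expectation value:

$$\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} - V\!\left(A_\mu A^\mu - v^2\right), \qquad A_\mu A^\mu = A_0^2 - \vec{A}^2 \ \text{ in signature } (+,-,-,-).$$

A minimum at $A_\mu A^\mu = v^2 > 0$ can only be reached by a timelike vacuum value such as $\langle A_\mu \rangle = (v,0,0,0)$, which spontaneously breaks Lorentz symmetry and picks out exactly the kind of preferred rest frame discussed above.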

Another great mystery of cosmology is the early formation of galaxies. As ever more powerful telescopes have penetrated back towards times when the first galaxies were forming, cosmologists have been surprised to find active galaxies rapidly producing stars, apparently with supermassive black holes ready-formed at their cores. This contradicts the predictions of the cold dark matter model according to which the stars and black holes should have formed later and more slowly.

The conventional theory of structure formation is very Newtonian in outlook. After baryogenesis the cosmos was full of gas with small density fluctuations left over from inflation. As radiation decoupled, these anomalies caused the gas and dark matter to gently coalesce under their own weight into clumps that formed galaxies. This would be fine except for the observation of supermassive black holes in the early universe. How did they form?

I think that the formation of these black holes was driven by large scale gravitational waves left over from inflation rather than density fluctuations. As the universe slowed its inflation there would be parts that slowed a little sooner and others a little later. Such small differences would have been amplified by the inflation, leaving a less than perfectly smooth universe for matter to form in. As the dark matter followed geodesics through these waves in spacetime it would be focused, just as light on the bottom of a swimming pool is focused by surface waves into intricate patterns. At the caustics the dark matter would come together at high speed to be compressed in structures along lines and surfaces. Large black holes would form at the sharpest focal points and along strands defined by the caustics. The stars and remaining gas would then gather around the black holes, pulled in by their gravitation, to form the galaxies. As the universe expanded the gravitational waves would fade, leaving the structure of galactic clusters to mark where they had been.

The greatest question of cosmology asks how the universe is structured on large scales beyond the cosmic horizon. We know that dark energy is making the expansion of the universe accelerate so it will endure for eternity, but we do not know if it extends to infinity across space. Cosmologists like to assume that space is homogeneous on large scales, partly because it makes cosmology simpler and partly because homogeneity is consistent with observation within the observable universe. If this is assumed then the question of whether space is finite or infinite depends mainly on the local curvature. If the curvature is positive then the universe is finite. If it is zero or negative the universe is infinite unless it has an unusual topology formed by tessellating polyhedrons larger than the observable universe. Unfortunately observation fails to tell us the sign of the curvature. It is near zero but we can’t tell which side of zero it lies on.
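
For reference, spatial curvature enters the Friedmann equation through the density parameter

$$\Omega_k = -\frac{k c^2}{a^2 H^2}, \qquad k = +1, 0, -1,$$

so positive curvature ($k=+1$, $\Omega_k<0$) closes the universe while zero or negative curvature leaves it infinite for a simple topology. Observations constrain $|\Omega_k|$ to be no more than a few percent, which is consistent with either sign.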

This then is not a question I can answer, but the holographic principle in its strongest form contradicts a finite universe. An infinite homogeneous universe also requires an explanation of how the big bang can be coordinated across an infinite volume. This leaves only more complex solutions in which the universe is not homogeneous. How can we know if we cannot see past the cosmic horizon? There are many inhomogeneous models such as the bubble universes of eternal inflation, but I think that there is too much reliance on temporal causality in that theory and I discount it. My preference is for a white hole model of the big bang where matter density decreases slowly with distance from a centre and the big bang singularity itself is local and finite, with an outer universe stretching back further. Because expansion is accelerating we will never see much outside the region that is currently visible, so we may never know the universe’s true shape.

Naturalness

It has long been suggested that the laws of physics are fine-tuned to allow the emergence of intelligent life. This strange illusion of intelligent design could be explained in atheistic terms if in some sense many different universes existed with different laws of physics. The observation that the laws of physics suit us would then be no different in principle from the observation that our planet suits us.

Despite the elegance of such anthropic reasoning many physicists, including myself, resisted it for a long time. Some still resist it. The problem is that the laws of physics show some signs of being unique according to theories of unification. In 2001 I, like many, thought that superstring theory and its overarching M-theory demonstrated this uniqueness quite persuasively. If there was only one possible unified theory with no free parameters how could an anthropic principle be viable?

At that time I preferred to think that fine-tuning was an illusion. The universe would settle into the lowest energy stable vacuum of M-theory and this would describe the laws of physics with no room for choice. The ability of the universe to support life would then just be the result of sufficient complexity. The apparent fine-tuning would be an illusion resulting from the fact that we see only one form of intelligent life so far. I imagined distant worlds populated by other forms of intelligence in very different environments from ours based on other solutions to evolution making use of different chemical combinations and physical processes. I scoffed at science fiction stories where the alien life looked similar to us except for different skin textures or different numbers of appendages.

My opinion started to change when I learnt that string theory actually has a vast landscape of vacuum solutions and they can be stabilised to such an extent that we need not be living at the lowest energy point. This means that the fundamental laws of physics can be unique while different low energy effective theories can be realised as solutions. Anthropic reasoning was back on the table.

It is worrying to think that the vacuum is waiting to decay to a lower energy state at any place and moment. If it did, a bubble of the new vacuum would expand at the speed of light, changing the effective laws of physics as it spread out and destroying everything in its path. Many times in the billions of years and billions of light years of the universe in our past light cone, there must have been neutron stars that collided with immense force and energy. Yet not once has the vacuum been toppled to bring doom upon us. The reason is that the energies at which the vacuum state was forged in the big bang are at the Planck scale, many orders of magnitude beyond anything that can be repeated in even the most violent events of astrophysics. It is the immense range of scales in physics that creates life and then allows it to survive.

The principle of naturalness was spelt out by ’t Hooft in the 1980s, except he was too smart to call it a principle. Instead he called it a “dogma”. The idea was that the mass of a particle or another physical parameter could only be naturally small if setting it to zero would increase the symmetry of the theory. The smallness of fermion masses could thus be explained by chiral symmetry, but the smallness of the Higgs mass required supersymmetry. For many of us the dogma was finally put to rest when the Higgs mass was found by the LHC to be unnaturally small without any sign of the accompanying supersymmetric partners. Fine tuning had always been a feature of particle physics but with the Higgs it became starkly apparent.
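
To make the fine-tuning concrete: the largest quantum correction to the Higgs mass parameter comes from the top quark loop and grows with the cut-off $\Lambda$ up to which the standard model is assumed to hold,

$$\delta m_H^2 \sim -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2 ,$$

so if $\Lambda$ is anywhere near the Planck scale the bare parameter has to cancel this against the observed value of about $(125\,\text{GeV})^2$ to dozens of decimal places, unless something like supersymmetry cuts the correction off at a lower scale.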

The vacuum would not tend to squander its scope for fine-tuning, limited as it is by the size of the landscape. If there is a cheaper way, the typical vacuum will find it, so that there is enough scope left to tune nuclear physics and chemistry for the right components required by life. Therefore I expect supersymmetry or some similar mechanism to come in at some higher scale to stabilise the Higgs mass and the cosmological constant. It may be a very long time indeed before that can be verified.

Now that I have learnt to accept anthropic reasoning, the multiverse and fine-tuning, I see the world in a very different way. If nature is fine-tuned for life it is plausible that there is only one major route to intelligence in the universe. Despite the plethora of new planets being discovered around distant stars, the Earth appears as a rare jewel among them. Its size and position in the goldilocks zone around a long-lived stable star in a quiet part of a well behaved galaxy is not typical. Even the moon and the outer gas giants seem to play their role in keeping us safe from natural instabilities. Yet if we were too safe life would have settled quickly into a stable form that could not evolve to higher functions. Regular cataclysmic events in our history were enough to cause mass extinctions without destroying life altogether, allowing it to develop further and further until higher intelligence emerged. Microbial life may be relatively common on other worlds but we are exquisitely rare. No sign of alien intelligence drifts across time and space from distant worlds.

I now think that where life exists it will be based on DNA and cellular structures much like all life on Earth. It will require water and carbon and to evolve to higher forms it will require all the commonly available elements each of which has its function in our biology or the biology of the plants on which we depend. Photosynthesis may be the unique way in which a stable carbon cycle can complement our need for oxygen. Any intelligent life will be much like us and it will be rare. This I see as the most significant prediction of fine tuning and the multiverse.

String Theory

String theory was the culmination of twentieth century developments in particle physics leading to ever more unified theories. By 2000 physicists had what appeared to be a unique mother theory capable of including all known particle physics in its spectrum. They just had to find the mechanism that collapsed its higher dimensions down to our familiar 4 dimensional spacetime.

Unfortunately it turned out that there were many such mechanisms and no obvious means to figure out which one corresponds to our universe. This leaves string theorists in a position unable to predict anything useful that would confirm their theory. Some people have claimed that this makes the theory unscientific and that physicists should abandon the idea and look for a better alternative. Such people are misguided.

String theory is not just a random set of ideas that people tried. It was the end result of exploring all the logical possibilities for the ways in which particles can work. It is the only solution to the problem of finding a consistent interaction of matter with gravity in the limit of weak fields on flat spacetime. I don’t mean merely that it is the only solution anyone could find, it is the only solution that can work. If you throw it away and start again you will only return to the same answer by the same logic.

What people have failed to appreciate is that quantum gravity acts at energy scales well above those that can be explored in accelerators or even in astronomical observations. Expecting string theory to explain low energy particle physics was like expecting particle physics to explain biology. In principle it can, but to derive biochemistry from the standard model you would need to work out the laws of chemistry and nuclear physics from first principles and then search through the properties of all the possible chemical compounds until you realised that DNA can self-replicate. Without input from experiment this is an impossible program to put into practice. Similarly, we cannot hope to derive the standard model of particle physics from string theory until we understand the physics that controls the energy scales that separate them. There are about 11 orders of magnitude in energy scale that separate chemical reactions from the electroweak scale and about 17 more that separate the electroweak scale from the Planck scale. We have much to learn.
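
Putting rough numbers to those gaps (order-of-magnitude estimates only):

$$E_{\text{chem}} \sim 1\,\text{eV}, \qquad E_{\text{EW}} \sim 10^{2}\,\text{GeV} = 10^{11}\,\text{eV}, \qquad E_{\text{Planck}} \sim 10^{19}\,\text{GeV} = 10^{28}\,\text{eV},$$

so the first gap is a factor of roughly $10^{11}$ and the second roughly $10^{17}$.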

How then can we test string theory? To do so we will need to look beyond particle physics and find some feature of quantum gravity phenomenology. That is not going to be easy because of the scales involved. We can’t reach the Planck energy, but sensitive instruments may be able to probe very small distance scales indirectly, as small variations of effects accumulated over large distances. There is also some hope that a remnant of the initial big bang remains in the form of low frequency radio or gravitational waves. But first string theory must predict something to observe at such scales, and this presents another problem.

Despite nearly three decades of intense research, string theorists have not yet found a complete non-perturbative theory of how string theory works. Without it predictions at the Planck scale are not in any better shape than predictions at the electroweak scale.

Normally quantised theories explicitly include the symmetries of the classical theories they quantise. As a theory of quantum gravity, string theory should therefore include the diffeomorphism invariance of spacetime, and it does, but not explicitly. If you look at string theory as a perturbation on a flat spacetime you find gravitons, the quanta of gravitational interactions. This means that the theory must respect the principles of general relativity in small deviations from the flat spacetime, but it is not described in a way that makes the diffeomorphism invariance of general relativity manifest. Why is that?

Part of the answer coming from non-perturbative results in string theory is that the theory allows the topology of spacetime to change. Diffeomorphisms on different topologies form different groups so there is no way that we could see diffeomorphism invariance explicitly in the formulation of the whole theory. The best we could hope for would be to find some group that has every diffeomorphism group as a subgroup and look for invariance under that.

Most string theorists just assume that this argument means that no such symmetry can exist and that string theory is therefore not based on a principle of universal symmetry. I on the other hand have proposed that the universal group must contain the full permutation group on spacetime events. The diffeomorphism group for any topology can then be regarded as a subgroup of this permutation group.

String theorists don’t like this because they see spacetime as smooth and continuous whereas permutation symmetry would suggest a discrete spacetime. I don’t think these two ideas are incompatible. In fact we should see spacetime as something that does not exist at all in the foundations of string theory. It is emergent. The permutation symmetry on events is really to be identified with the permutation symmetry that applies to particle states in quantum mechanics. A smooth picture of spacetime then emerges from the interactions of these particles, which in string theory are the partons of the strings.

This was an idea I formulated twenty years ago, building symmetries that extend the permutation group first to large-N matrix groups and then to necklace Lie-algebras that describe the creation of string states. The idea was vindicated when matrix string theory was invented shortly after but very few people appreciated the connection.

The matrix theories vindicated the matrix extensions in my work. Since then I have been waiting patiently for someone to vindicate the necklace Lie algebra symmetries as well. In recent years we have seen a new approach to quantum field theory for supersymmetric Yang-Mills which emphasises a dual conformal symmetry rather than the gauge symmetry. This is a symmetry found in the quantum scattering amplitudes rather than the classical limit. The symmetry takes the form of a Yangian symmetry related to the permutations of the states. I find it plausible that this will turn out to be a remnant of necklace Lie algebras in the more complete string theory. There seems to be still some way to go before this new idea, expressed in terms of an amplituhedron, is fully worked out, but I am optimistic that I will be proven right again, even if few people recognise it again.

Once this reformulation of string theory is complete we will see string theory in a very different way. Spacetime, causality and even quantum mechanics may be emergent from the formalism. It will be non-perturbative and rigorously defined. The web of dualities connecting string theories and the holographic nature of gravity will be derived exactly from first principles. At least that is what I hope for. In the non-perturbative picture it should be clearer what happens at high energies when space-time breaks down. We will understand the true nature of the singularities in black-holes and the big bang. I cannot promise that these things will be enough to provide predictions that can be observed in real experiments or cosmological surveys, but it would surely improve the chances.

Loop Quantum Gravity

If you want to quantise a classical system such as a field theory there are a range of methods that can be used. You can try a Hamiltonian approach, or a path integral approach, for example. You can change the variables or introduce new ones, or integrate out some degrees of freedom. Gauge fixing can be handled in various ways, as can renormalisation. The answers you get from these different approaches are not quite guaranteed to be equivalent. There are some choices of operator ordering that can affect the answer. However, what we usually find in practice is that there are natural choices imposed by symmetry principles or other requirements of consistency, and the different results you get using different methods are either equivalent or very nearly so, if they lead to a consistent result at all.
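
A small standard example of the ordering ambiguity: because $[\hat{x},\hat{p}] = i\hbar$, the classical product $xp$ can be promoted to $\hat{x}\hat{p}$, to $\hat{p}\hat{x}$, or to the symmetric (Weyl) ordering $\tfrac{1}{2}(\hat{x}\hat{p}+\hat{p}\hat{x})$, and these candidates differ from one another by terms of order $\hbar$ since

$$\hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar .$$

Different quantisation schemes resolve such choices differently, which is why the resulting quantum theories are only guaranteed to agree up to corrections of this kind.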

What should this tell us about quantum gravity? Quantising the gravitational field is not so easy. It is not renormalisable in the same way that other gauge theories are, yet a number of different methods have produced promising results. Supergravity follows the usual field theory methods while String theory uses a perturbative generalisation derived from the old S-matrix approach. Loop Quantum Gravity makes a change of variables and then follows a Hamiltonian recipe. There are other methods such as Twistor Theory, Non-Commutative Geometry, Dynamical Triangulations, Group Field Theory, Spin Foams, Higher Spin Theories etc. None has met with success in all directions but each has its own successes in some directions.

While some of these approaches have always been known to be related, others have been portrayed as rivals. In particular the subject seems to be divided between methods related to string theory and methods related to Loop Quantum Gravity. It has always been my expectation that the two sides will eventually come together, simply because of the fact that different ways of quantising the same classical system usually do lead to equivalent results. Superficially strings and loops seem like related geometric objects, i.e. one dimensional structures in space tracing out two dimensional world sheets in spacetime.

String Theorists and Loop Quantum Gravitists alike have scoffed at the suggestion that these are the same thing. They point out that strings pass through each other unlike the loops which form knot states. String theory also works best in ten dimensions while LQG can only be formulated in 4. String Theory needs supersymmetry and therefore matter, while LQG tries to construct first a consistent theory of quantum gravity alone. I see these differences very differently from most physicists. I observe that when strings pass through each other they can interact, and the algebraic diagrams that represent this are very similar to the skein relations used to describe the knot theory of LQG. String theory does indeed use the same mathematics of quantum groups to describe its dynamics. If LQG has not been found to require supersymmetry or higher dimensions it may be because the perturbative limit around flat spacetime has not yet been formulated, and that is where the consistency constraints arise. In fact the successes and failures of the two approaches seem complementary. LQG provides clues about the non-perturbative background independent picture of spacetime that string theorists need.

Methods from Non-Commutative Geometry have been incorporated into string theory and other approaches to quantum gravity for more than twenty years and in the last decade we have seen Twistor Theory applied to string theory. Some people see this convergence as surprising but I regard it as natural and predictable given the nature of the process of quantisation. Twistors have now been applied to scattering theory and to supergravity in 4 dimensions in a series of discoveries that has recently led to the amplituhedron formalism. Although the methods evolved from observations related to supersymmetry and string theory they seem in some ways more akin to the nature of LQG. Twistors were originated by Penrose as an improvement on his original spin-network idea and it is these spin-networks that describe states in LQG.

I think that what has held LQG back is that it separates space and time. This is a natural consequence of the Hamiltonian method. LQG respects diffeomorphism invariance, unlike string theory, but it is really only the spatial part of the symmetry that it uses. Spin networks are three dimensional objects that evolve in time, whereas Twistor Theory tries to extend the network picture to 4 dimensions. People working on LQG have tended to embrace the distinction between space and time in their theory and have made it a feature, claiming that time is philosophically different in nature from space. I don’t find that idea appealing at all. The clear lesson of relativity has always been that they must be treated the same up to a sign.

The amplituhedron makes manifest the dual conformal symmetry of Yang-Mills theory in the form of an infinite dimensional Yangian symmetry. These algebras are familiar from the theory of integrable systems, where they were deformed to bring in quantum groups. In fact the scattering amplitude theory that applies to the planar limit of Yang-Mills does not use this deformation, but here lies the opportunity to unite the theory with Loop Quantum Gravity, which does use the deformation.

Of course LQG is a theory of gravity so if it is related to anything it would be supergravity or string theory, not Yang-Mills. In the most recent developments the scattering amplitude methods have been extended to supergravity by making use of the observation that gravity can be regarded as formally the square of Yang-Mills. Progress has thus been made on formulating 4D supergravity using twistors, but so far without this deformation. A surprising observation is that supergravity in this picture requires a twistor string theory to make it complete. If the Yangian deformation could be applied to these strings then they could form knot states just like the loops in LQG. I can’t say if it will pan out that way, but I can say that it would make perfect sense if it did. It would mean that LQG and string theory would finally come together, and methods that have grown out of LQG such as spin foams might be applied to string theory.
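
The sense in which gravity is formally the square of Yang-Mills can be stated schematically (this is the standard KLT/double-copy statement, quoted here only for orientation): write a Yang-Mills tree amplitude as a sum over trivalent diagrams with colour factors $c_i$, kinematic numerators $n_i$ and propagator denominators $D_i$; when the numerators are arranged to satisfy the same Jacobi identities as the colour factors, replacing colour by a second copy of kinematics gives a gravity amplitude,

$$A_{\text{YM}} = \sum_i \frac{c_i\, n_i}{D_i} \quad\longrightarrow\quad M_{\text{gravity}} = \sum_i \frac{n_i\, \tilde{n}_i}{D_i} .$$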

The remaining mystery would be why this correspondence works only in 4 spacetime dimensions. Both Twistors and LQG use related features of the symmetry of 4 dimensional spacetime that mean it is not obvious how to generalise to higher dimensions, while string theory and supergravity have higher forms that work up to 11 dimensions. Twistor theory is related to conformal symmetry, which can be seen as a symmetry inherited from a geometry that is 2 dimensions higher; e.g. the 4 dimensional conformal group is the same as a 6 dimensional spin group. By a unique coincidence the 6 dimensional symmetries are isomorphic to unitary or special linear groups over 4 complex variables, so these groups have the same representations. In particular the fundamental 4 dimensional representation of the unitary group is the same as the Weyl spinor representation in six real dimensions. This is where the twistors come from, so a twistor is just a Weyl spinor. Such spinors exist in any even number of dimensions but without the special properties found in this particular case. It will be interesting to see how the framework extends to higher dimensions using these structures.
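
The exceptional isomorphisms behind this are worth stating explicitly: $\mathrm{Spin}(6)\cong SU(4)$ and, in the Lorentzian signature relevant to spacetime, $\mathrm{Spin}(4,2)\cong SU(2,2)$, which covers the four-dimensional conformal group. A twistor is the fundamental $\mathbf{4}$ of $SU(2,2)$, which is the same thing as a Weyl spinor of the six-dimensional orthogonal group. No comparable coincidence exists in generic higher dimensions, which is why the construction is special to four dimensions.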

Quantum Mechanics

Physicists often chant that quantum mechanics is not understood. To paraphrase some common claims: if you think you understand quantum mechanics you are an idiot. If you investigate what it is about quantum mechanics that is so irksome you find that there are several features that can be listed as potentially problematical: indeterminacy, non-locality, contextuality, observers, wave-particle duality and collapse. I am not going to go through these individually; instead I will just declare myself a quantum idiot if that is what understanding implies. All these features of quantum mechanics are experimentally verified and there are strong arguments that they cannot be easily circumvented using hidden variables. If you take a multiverse view there are no conceptual problems with observers or wavefunction collapse. People only have problems with these things because they are not what we observe at macroscopic scales and our brains are programmed to see the world classically. This can be overcome through logic and mathematical understanding in the same way as the principles of relativity.

I am not alone in thinking that these things are not to be worried about, but there are some other features of quantum mechanics that I take a more extraordinary view of. Another aspect of quantum mechanics that gives some cause for concern is its linearity. Theories that are linear are usually too simple to be interesting. Everything decouples into modes that act independently in a simple harmonic way. In quantum mechanics we can in principle diagonalise the Hamiltonian to reduce the whole universe to a sum over energy eigenstates. Can everything we experience be encoded in that one dimensional spectrum?
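
In symbols, the linearity means that the Schrödinger equation

$$i\hbar\,\frac{\partial}{\partial t}\,|\psi\rangle = \hat{H}\,|\psi\rangle$$

is solved completely once the Hamiltonian is diagonalised, $\hat{H}|n\rangle = E_n|n\rangle$: any state simply evolves as $|\psi(t)\rangle = \sum_n c_n e^{-iE_n t/\hbar}|n\rangle$ with constant coefficients $c_n$, and nothing ever mixes the modes.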

In quantum field theory this is not a problem, but there we have spacetime as a frame of reference relative to which we can define a privileged basis for the Hilbert space of states. It is no longer just the energy spectrum that counts. But what if spacetime is emergent? What then do we choose our Hilbert basis relative to? The symmetry of the Hilbert space must be broken for this emergence to work, but linear systems do not break their symmetries. I am not talking about the classical symmetries of the type that gets broken by the Higgs mechanism. I mean the quantum symmetries in phase space.

Suppose we accept that string theory describes the underlying laws of physics, even if we don’t know which vacuum solution the universe selects. Doesn’t string theory also embody the linearity of quantum mechanics? It does so long as you already accept a background spacetime, but in string theory the background can be changed by dualities. We don’t know how to describe the framework in which these dualities are manifest but I think there is reason to suspect that quantum mechanics is different in that space, and it may not be linear.

The distinction between classical and quantum is not as clear-cut as most physicists like to believe. In perturbative string theory the Feynman diagrams are given by string worldsheets which can branch when particles interact. Is this the classical description or the quantum description? The difference between classical and quantum is that the worldsheets will extremise their area in the classical solutions but follow any history in the quantum. But then we already have multi-particle states and interactions in the classical description. This is very different from quantum field theory.

Stepping back though we might notice that quantum field theory also has some schizophrenic characteristics. The Dirac equation is treated as classical with non-linear interactions even though it is a relativistic Schrödinger equation, with quantum features such as spin already built-in. After you second quantise you get a sum over all possible Feynman graphs much like the quantum path integral sum over field histories, but in this comparison the Feynman diagrams act as classical configurations. What is this telling us?

My answer is that the first and second quantisation are the first in a sequence of multiple iterated quantisations. Each iteration generates new symmetries and dimensions. For this to work the quantised layers must be non-linear, just as the interaction between electrons and photons is non-linear in the so-called first-quantised field theory. The idea of multiple quantisations goes back many years and did not originate with me, but I have a unique view of its role in string theory based on my work with necklace Lie algebras, which can be constructed in an iterated procedure where one necklace dimension is added at each step.

Physicists working on scattering amplitudes are at last beginning to see that the symmetries in nature are not just those of the classical world. There are dual-conformal symmetries that are completed only in the quantum description. These seem to merge with the permutation symmetries of the particle statistics. The picture is much more complex than the one painted by the traditional formulations of quantum field theory.

What then is quantisation? When a Fock space is constructed the process is formally like an exponentiation. In the category picture we start to see an origin of what quantisation is, because exponentiation generalises to the process of constructing all functions between sets, or all functors between categories, and so on to higher n-categories. Category theory seems to encapsulate the natural processes of abstraction in mathematics. This I think is what lies at the base of quantisation. Variables become functional operators, objects become morphisms. Quantisation is a particular form of categorification, one we don’t yet understand. Iterating this process constructs higher categories until the unlimited process itself forms an infinite omega-category that describes all natural processes in mathematics and in our multiverse.
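
The exponentiation referred to is the construction of the bosonic Fock space as the symmetric algebra on the one-particle Hilbert space,

$$\mathcal{F}(\mathcal{H}) = \bigoplus_{n=0}^{\infty} \mathrm{Sym}^n(\mathcal{H}),$$

which mirrors the series $e^x = \sum_n x^n/n!$, and in category theory the same pattern reappears as the exponential object $B^A$, the object of all maps from $A$ to $B$.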

Crazy ideas? Ill-formed? Yes, but I am just saying – that is the way I see it.

Black Hole Information

We have seen that quantum gravity can be partially understood by using the constraint that it needs to make sense in the limit of small perturbations about flat spacetime. This led us to strings and supersymmetry. There is another domain of thought experiments that can tell us a great deal about how quantum gravity should work, and it concerns what happens when information falls into a black hole. The train of arguments is well known so I will not repeat it here. The first conclusion is that the entropy of a black hole is given by its horizon area in Planck units and the entropy in any other volume is less than the same Bekenstein bound taken from the surrounding surface. This leads to the holographic principle that everything that can be known about the state inside the volume can be determined from a state on its surface. To explain how the inside of a black hole can be determined from its event horizon or outside, we use a black hole correspondence principle which uses the fact that we cannot observe both the inside and the outside at a later time. Although the reasoning that leads to these conclusions is long and unsupported by any observation, it is in my opinion quite robust and is backed up by theoretical models such as AdS/CFT duality.
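
The key formulas are the Bekenstein-Hawking entropy and the corresponding bound,

$$S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4\,\ell_P^2}, \qquad S_{\text{region}} \le \frac{k_B A}{4\,\ell_P^2},$$

where $A$ is the area of the horizon or bounding surface and $\ell_P$ is the Planck length, so entropy, and hence information, scales with area rather than volume.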

There are some further conclusions that I would draw from black hole information that many physicists might disagree with. If the information in a volume is limited by the surrounding surface then it means we cannot be living in a closed universe with a finite volume, like the three-dimensional surface of a sphere in four dimensions. If we did, you could extend the boundary until it shrank back to zero and conclude that there is no information in the universe. Some physicists prefer to think that the Bekenstein bound should be modified on large scales so that this conclusion cannot be drawn, but I think the holographic principle holds perfectly at all scales and the universe must be infinite, or finite with a different topology.

Recently there has been a claim that the holographic principle leads to the conclusion that the event horizon must be a firewall through which nothing can pass. This conclusion is based on the assumption that information inside a black hole is replicated outside through entanglement. If you drop two particles with fully entangled spin states into a black hole you cannot have another particle outside that is also entangled with them; that would not make sense. I think the information is replicated on the horizon in a different way.

It is my view that the apparent information in the bulk volume field variables must be mostly redundant and that this implies a large symmetry where the degrees of symmetry match the degrees of freedom in the fields or strings. Since there are fundamental fermions it must be a supersymmetry. I call a symmetry of this sort a complete symmetry. We know that when there is gauge symmetry there are corresponding charges that can be determined on a boundary by measuring the flux of the gauge field. In my opinion a generalisation of this using a complete symmetry accounts for holography. I don’t think that this complete symmetry is a classical symmetry. It can only be known properly in a full quantum theory much as dual conformal gauge symmetry is a quantum symmetry.

Some physicists assume that if you could observe Hawking radiation you would be looking at information coming from the event horizon. It is not often noticed that the radiation is thermal, so if you observe it you cannot determine where it originated from. There is no detail you could focus on to measure the distance of the source. It makes more sense to me to think of this radiation as emanating from a backward singularity inside the black hole. This means that a black hole once formed is also a white hole. This may seem odd but it is really just an extension of the black hole correspondence principle. I also agree with those who say that as black holes shrink they become indistinguishable from heavy particles that decay by emitting radiation.
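
For reference, the Hawking temperature of an uncharged, non-rotating black hole of mass $M$ is

$$T_H = \frac{\hbar c^3}{8\pi G M k_B},$$

a blackbody spectrum fixed entirely by the mass, which is the sense in which the radiation carries no imprint of where it came from; it also grows as the hole shrinks, consistent with the picture of a small black hole decaying like a heavy particle.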

Ontology

Every theorist working on fundamental physics needs some background philosophy to guide their work. They may think that causality and time are fundamental or that they are emergent, for example. They may have the idea that deeper laws of physics are simpler. They may like reductionist principles or instead prefer a more anthropic world view. Perhaps they think the laws of physics must be discrete, combinatorial and finite. They may think that reality and mathematics are the same thing, or that reality is a computer simulation or that it is in the mind of God. These things affect the theorist’s outlook and influence the kind of theories they look at. They may be metaphysical and sometimes completely untestable in any real sense, but they are still important to the way we explore and understand the laws of nature.

In that spirit I have formed my own elaborate ontology as my way of understanding existence and the way I expect the laws of nature to work out. It is not complete or finished and it is not a scientific theory in the usual sense, but I find it a useful guide for where to look and what to expect from scientific theories. Someone else may take a completely different view that appears contradictory but may ultimately come back to the same physical conclusions. That I think is just the way philosophy works.

In my ontology it is universality that counts most. I do not assume that the most fundamental laws of physics should be simple or beautiful or discrete or finite. What really counts is universality, but that is a difficult concept that requires some explanation.

It is important not to be misled by the way we think. Our mind is a computer running a program that models space, time and causality in a way that helps us live our lives, but that does not mean that these things are important in the fundamental laws of physics. Our intuition can easily mislead our way of thinking. It is hard to understand that time and space are interlinked and to some extent interchangeable, but we now know from the theory of relativity that this is the case. Our minds understand causality and free will, the flow of time and the difference between past and future, but we must not make the mistake of assuming that these things are also important for understanding the universe. We like determinacy, predictability and reductionism but we can’t assume that the universe shares our likes. We experience our own consciousness as if it is something supernatural but perhaps it is no more than a useful feature of our psychology, a trick to help us think in a way that aids our survival.

Our only real ally is logic. We must consider what is logically possible and accept that most of what we observe is emergent rather than fundamental. The realm of logical possibilities is vast and described by the rules of mathematics. Some people call it the Platonic realm and regard it as a multiverse within its own level of existence, but such thoughts are just mindtricks. They form a useful analogy to help us picture the mathematical space when really logical possibilities are just that. They are possibilities stripped of attributes like reality or existence or place.

Philosophers like to argue about whether mathematical concepts are discovered or invented. The only fair answer is both or neither. If we made contact with alien life tomorrow it is unlikely that we would find them playing chess. The rules of chess are mathematical but they are a human invention. On the other hand we can be quite sure that our new alien friends would know how to use the real numbers if they are at least as advanced as us. They would also probably know about group theory, complex analysis and prime numbers. These are the universal concepts of mathematics that are “out there” waiting to be discovered. If we forgot them we would soon rediscover them in order to solve general problems. Universality is a hard concept to define. It distinguishes the parts of mathematics that are discovered from those that are merely invented, but there is no sharp dividing line between the two.

Universal concepts are not necessarily simple to define. The real numbers for example are notoriously difficult to construct if you start from more basic axiomatic constructs such as set theory. To do that you have to first define the natural numbers using the cardinality of finite sets and Peano’s axioms. This is already an elaborate structure and it is just the start. You then extend to the rationals and then to the reals using something like the Dedekind cut. Not only is the definition long and complicated, but it is also very non-unique. The aliens may have a different definition and may not even consider set theory as the right place to start, but it is sure and certain that they would still possess the real numbers as a fundamental tool with the same properties as ours.  It is the higher level concept that is universal, not the definition.
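
For concreteness, the last step in that chain is usually done as follows (one standard construction among several): a real number is identified with a Dedekind cut, a set $r \subset \mathbb{Q}$ that is non-empty, bounded above, downward closed and has no greatest element, with the ordering given by set inclusion. Equivalence classes of Cauchy sequences of rationals give a different but equivalent construction, which is exactly the non-uniqueness being pointed out.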

Another example of universality is the idea of computability. A universal computer is one that is capable of following any algorithm. To define this carefully we have to pick a particular mathematical construction of a theoretical computer with unlimited memory space. One possibility for this is a Turing machine but we can use any typical programming language or any one of many logical systems such as certain cellular automata. We find that the set of numbers or integer sequences that they can calculate is always the same. Computability is therefore a universal idea even though there is no obviously best way to define it.
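
As a small concrete illustration of that claim, here is a sketch in Python of Rule 110, one of the elementary cellular automata known to be computationally universal (the width and number of steps chosen here are arbitrary):

# Rule 110: an elementary cellular automaton known to be computationally universal.
def step(cells, rule=110):
    n = len(cells)
    # each cell's new value is the bit of the rule number indexed by its 3-cell neighbourhood
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

state = [0] * 40 + [1]  # start from a single live cell
for _ in range(20):
    print("".join("#" if c else "." for c in state))
    state = step(state)

Any other universal system, a Turing machine or a programming language, computes exactly the same set of integer sequences, which is the point being made.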

Universality also appears in complex physical systems where it is linked to emergence. The laws of fluid dynamics, elasticity and thermodynamics describe the macroscopic behaviour of systems built from many small interacting elements, but the details of those interactions are not important. Chaos arises in any nonlinear system of equations at the boundary where simple behaviour meets complexity, and we find that it is described by certain universal numbers (the Feigenbaum constants, for example) that are independent of how the system is constructed. These examples show how universality is of fundamental importance in physical systems and motivate the idea that it can be extended to the formation of the fundamental laws too.
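A quick numerical sketch of this kind of universality (my own aside, with invented variable names): sums of many microscopic variables look Gaussian regardless of whether the "atoms" are coins or dice, so the macroscopic distribution forgets the microscopic details.

```python
# Toy sketch (mine, not the post's): macroscopic behaviour forgetting the
# microscopic details -- standardised sums of coins and of dice both land on
# the same Gaussian shape.
import numpy as np

rng = np.random.default_rng(0)
n_micro, n_samples = 1_000, 2_000

coins = rng.integers(0, 2, size=(n_samples, n_micro))   # two-valued "atoms"
dice  = rng.integers(1, 7, size=(n_samples, n_micro))   # six-valued "atoms"

def standardise(x):
    s = x.sum(axis=1).astype(float)
    return (s - s.mean()) / s.std()

for name, sample in [("coins", standardise(coins)), ("dice", standardise(dice))]:
    # Quantiles close to the standard normal values (~ -1.28, 0.00, +1.28)
    print(name, np.round(np.quantile(sample, [0.1, 0.5, 0.9]), 2))
```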

Universality and emergence play a key role in my ontology and they work at different levels. The most fundamental level is the Platonic realm of mathematics. Remember that the use of the word realm is just an analogy. You can't destroy this idea by questioning the realm's existence or whether it is inside our minds. It is just the concept that contains all logically consistent possibilities. Within this realm there are things that are invented, such as the game of chess, the text that forms the works of Shakespeare, or gods. But there are also the universal concepts that any advanced team of mathematicians would discover in order to solve the general problems they invent.

I don't know precisely how these universal concepts emerge from the Platonic realm, but I use two different analogies to think about it. The first is emergence in complex systems, which gives us the rules of chaos and thermodynamics. This can be described using statistical physics, which leads to critical systems and scaling phenomena where universal behaviour is found. The same might apply to the complex system consisting of the collection of all mathematical concepts. From this system the laws of physics may emerge as universal behaviour. I call this analogy the Theory of Theories; another group calls a similar idea the Mathematical Universe Hypothesis. However, this statistical physics analogy is not perfect.

Another way to think about what might be happening is in terms of the process of abstraction. We know that we can multiply some objects in mathematics, such as permutations or matrices, and they follow the rules of an abstract structure called a group. Mathematics has other abstract structures like fields and rings and vector spaces and topologies. These are clearly important examples of universality, but we can take the idea of abstraction further. Groups, fields, rings and so on all have a definition of isomorphism and also something equivalent to homomorphism. We can look at these concepts abstractly using category theory, which generalises set theory and encompasses these structures. In category theory we find universal ideas such as natural transformations that help us understand the lower level abstract structures. This process of abstraction can be continued, giving us higher dimensional n-categories. These structures also seem to be important in physics.
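As a toy illustration of the first step of this abstraction (my own sketch, not the post's), the same abstract group axioms can be checked against two very different kinds of object:

```python
# Toy sketch (mine): one abstract "group" test applied to permutations and to
# 2x2 matrices, illustrating abstraction over very different objects.
from itertools import product

def is_group(elements, op, identity):
    closed  = all(op(a, b) in elements for a, b in product(elements, repeat=2))
    assoc   = all(op(op(a, b), c) == op(a, op(b, c))
                  for a, b, c in product(elements, repeat=3))
    ident   = all(op(identity, a) == a == op(a, identity) for a in elements)
    inverse = all(any(op(a, b) == identity for b in elements) for a in elements)
    return closed and assoc and ident and inverse

# Permutations of (0, 1, 2) under composition
perms = [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
print(is_group(perms, compose, (0,1,2)))          # True

# The 2x2 matrices {I, -I} under multiplication, written as tuples (a, b, c, d)
mats = [(1,0,0,1), (-1,0,0,-1)]
matmul = lambda m, n: (m[0]*n[0]+m[1]*n[2], m[0]*n[1]+m[1]*n[3],
                       m[2]*n[0]+m[3]*n[2], m[2]*n[1]+m[3]*n[3])
print(is_group(mats, matmul, (1,0,0,1)))          # True
```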

I think of emergence and abstraction as two facets of the deep concept of universality. It is something we do not understand fully but it is what explains the laws of physics and the form they take at the most fundamental level.

What physical structures emerge at this first level? Statistical physics systems are very similar in structure to quantum mechanics; both are expressed as a sum over possibilities. In category theory we also find abstract structures very like quantum mechanical systems, including structures analogous to Feynman diagrams. I think it is therefore reasonable to assume that some form of quantum physics emerges at this level. However, time and unitarity do not. The quantum structure is something more abstract, like a quantum group. The other physical idea present in this universal structure is symmetry, but again in an abstract form more general than group theory. It will include supersymmetry and other extensions of ordinary symmetry. I think it likely that this is really a system described by a process of multiple quantisation, where structures of algebra and geometry emerge but with multiple dimensions and a single universal symmetry. I need a name for this structure that emerges from the Platonic realm, so I will call it the Quantum Realm.

When people reach for what is beyond M-Theory or for an extension of the amplituhedron they are looking for this quantum realm. It is something that we are just beginning to touch with 21st century theories.

From this quantum realm another, more familiar level of existence emerges. This is a process analogous to superselection of a particular vacuum. At this level space and time emerge and the universal symmetry is broken down to a much smaller symmetry. Perhaps a different selection would provide different numbers of space and time dimensions and different symmetries. The laws of physics that then emerge are the laws of relativity and particle physics we are familiar with. This is our universe.

Within our universe there are other processes of emergence which we are more familiar with. Causality emerges from the laws of statistical physics within our universe, with the arrow of time rooted in the big bang singularity. Causality is therefore much less fundamental than quantum mechanics, space and time. The familiar structures of the universe, including life, also emerge within it. Although this places life at the least fundamental level, we must not forget the anthropic influence it has on the selection of our universe from the quantum realm.

Experimental Outlook

Theoretical physics continues to progress in useful directions but to keep it on track more experimental results are needed. Where will they come from?

In recent decades we have got used to mainly negative results in experimental particle physics, or at best results that merely confirm theories from 50 years ago. The significance of negative results is often understated to the extent that the media portray them as failures. This is far from being the case.

The LHC’s negative results for SUSY and other BSM exotics may be seen as disappointing but they have led to the conclusion that nature appears fine-tuned at the weak scale. Few theorists had considered the implications of such a result before, but now they are forced to. Instead of wasting time on simplified SUSY theories they will turn their efforts to the wider parameter space or they will look for other alternatives. This is an important step forward.

A big question now is what will be the next accelerator? The ILC or a new LEP-style machine would be great Higgs factories, but it is not clear that they would find enough beyond what we already know. Given that the Higgs is at a mass that gives it a narrow width, I think it would be better to build a new detector for the LHC that is specialised for seeing diphoton and 4-lepton events with the best possible energy and angular resolution. The LHC will continue to run for several decades and can be upgraded to higher luminosity and even higher energy. This should be taken advantage of as much as possible.

However, the best advance that would make the LHC more useful would be to change the way it searches for new physics. It has been too closely designed with specific models in mind and should have been run to search for generic signatures of particles with the full range of possible quantum numbers, spin, charge, lepton and baryon number. Even more importantly the detector collaborations should be openly publishing likelihood numbers for all possible decay channels so that theorists can then plug in any models they have or will have in the future and test them against the LHC results. This would massively increase the value of the accelerator and it would encourage theorists to look for new models and even scan the data for generic signals. The LHC experimenters have been far too greedy and lazy by keeping the data to themselves and considering only a small number of models.

There is also a movement to construct a 100 TeV hadron collider. This would be a worthwhile long term goal and even if it did not find new particles that would be a profound discovery about the ways of nature.  If physicists want to do that they are going to have to learn how to justify the cost to contributing nations and their tax payers. It is no use talking about just the value of pure science and some dubiously justified spin-offs. CERN must reinvent itself as a postgraduate physics university where people learn how to do highly technical research in collaborations that cross international frontiers. Most will go on to work in industry using the skills they have developed in technological research or even as technology entrepreneurs. This is the real economic benefit that big physics brings and if CERN can’t track how that works and promote it they cannot expect future funding.

With the latest results from the LUX experiment, hopes of direct detection of dark matter have faded. Again the negative result is valuable, but it may just mean that dark matter does not interact weakly at all. The search should go on, but I think more can be done with theory to model dark matter and its role in galaxy formation. If we can assume that dark matter started out with the same temperature as the visible universe then it should be possible to model its evolution as it settled into galaxies and estimate the mass of the dark matter particle. This would help in searching for it. Meanwhile the searches for dark matter will continue, including searches for other possible forms such as axions. Astronomical experiments such as AMS-02 may find important evidence, but it is hard to find optimism there. A better prospect exists for observations of the dark ages of the universe using new radio telescopes such as the Square Kilometre Array, which could detect hydrogen gas clouds as they formed the first stars and galaxies.

Neutrino physics is one area that has seen positive results that go beyond the standard model, so it is an important area to keep pushing. Experiments need to settle the question of whether neutrinos are Majorana spinors and produce figures for the neutrino masses. Observation of cosmological high energy neutrinos is also an exciting area, with the IceCube experiment proving its value.

Gravitational wave searches have continued to be a disappointment but this is probably due to over-optimism about the nature of cosmological sources rather than a failure of the theory of gravitational waves themselves. The new run with Advanced LIGO must find them otherwise the field will be in trouble. The next step would be LISA or a similar detector in space.

Precision measurements are another area that could bring results. Measurements of the electron electric dipole moment can be further improved, and there must be other similar opportunities for inventive experimentalists. If a clear anomaly is found it could set the scale for new physics and justify the next generation of accelerators.

There are other experiments that could yield positive results, such as cosmic ray observatories and low frequency radio antennae that might find an echo from the big bang beyond the veil of the primordial plasma. But if I had to nominate one area for new effort it would have to be the search for proton decay. So far results have been negative, pushing the proton lifetime to at least 10^34 years, but this has helped eliminate the simplest GUT models that predicted a shorter lifetime. SUSY models predict lifetimes of over 10^36 years, but this could be reached if we are willing to set up a detector around a huge volume of clear Antarctic ice. IceCube has demonstrated the technology, but for proton decay a finer array of light detectors is needed to catch the lower energy radiation from proton decay. If decays were detected they would give us positive information about physics at the GUT scale. This is something of enormous importance and its priority must be raised.
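A back-of-envelope sketch (my own numbers, not the post's) of why such a huge volume is needed: even a full megaton of ice watched for a year yields well under one expected decay if the lifetime is around 10^36 years.

```python
# Rough estimate (my own sketch): expected proton decays per megaton-year.
# Water/ice is H2O: 10 of the 18 nucleons per molecule are protons
# (2 from hydrogen, 8 in oxygen).
N_A = 6.022e23
protons_per_gram = N_A * 10 / 18          # ~3.3e23
megaton_grams = 1e12                      # 1 megaton = 1e12 g

lifetime_years = 1e36
protons = protons_per_gram * megaton_grams   # ~3.3e35 protons in a megaton
decays_per_year = protons / lifetime_years
print(f"~{decays_per_year:.2f} expected proton decays per megaton-year")  # ~0.33
```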

Apart from these experiments we must rely on the advance of precision technology and the inventiveness of the experimental physicist. Ideas such as the holometer may have little hope of success, but each negative result tells us something, and if someone gets lucky a new flood of experimental data will nourish our theories. There is much that we can still learn.


Why I Still Like String Theory

May 16, 2013

There is a new book coming out by Richard Dawid, “String Theory and the Scientific Method”. It has been reviewed by Peter Woit and Lubos Motl who give their expected opposing views. Apparently Woit gets it through a university library subscription. I can't really review the book because at £60 it is a bit too expensive. Compare this with the recent book by Lee Smolin which I did review after paying £12.80 for it. These two books would have exactly the same set of potential readers, but Smolin is just better known, which puts his work into a different category where a different type of publisher accepts it. I don't really understand why any author would choose to allow publication at a £60 price-tag. They will sell very few copies and get very little back in royalties, especially if most universities have free access. Why not publish a print-on-demand version which would be cheaper? Even the Kindle version of this book is £42, but you can easily self-publish on Kindle for much less and keep 70% of profits through Amazon.

My view is as predictable as anyone else's since I have previously explained why I like string theory. Of the four reasons I gave previously the main one is that it solves the problem of how quantum gravity looks in the perturbative limit about a flat space-time, with gravitons interacting with matter. This limit really should exist for any theory of quantum gravity and it is the realm that is most like familiar physics, so it is very significant that string theory works there when no other theory does. OK, so perturbative string theory is not fully sewn up, but it works better than anything else. The next best thing is supergravity, which is just an effective theory for superstrings.

My second reason is that string theory supports a holographic principle that is also required for quantum gravity. This is a much weaker reason because (a) it lies in less well known territory of physics and requires a longer series of assumptions and deductions to get there, and (b) it is not so obvious that other theories won't also support the holographic principle.

Reason number three has not fared so well. I said I liked string theory because it would match well with TeV scale SUSY, but the LHC has now all but ruled that out. It is possible that SUSY will appear in LHC run 2 at 13 TeV or later, or that it is just out of reach, but already we know that the Higgs mass in the standard model is fine-tuned. There is no stop or Higgsino where they would be needed to control the Higgs mass. The only question now is how much fine-tuning is there?

Which brings me to my fourth reason for liking string theory. It predicts a multiverse of vacua in the right quantities required to explain anthropic reasoning for an unnatural fine-tuned particle theory. So my last two reasons were really a hedge. The more evidence there is against SUSY, the more evidence there is in favour of the multiverse and the string theory landscape.

Although I don't have the book I know from Woit and Motl that Dawid provides three main reasons for supporting string theory that he gathered from string theorists. None of my four reasons are included. His first reason is “The No Alternatives Argument”: apparently we do string theory because, despite its shortcomings, there is nothing else that works. As Lee Smolin pointed out over at NEW, there are alternatives. LQG may succeed, but to do so it must give a low energy perturbation theory with gravitons or explain why things work differently. Other alternatives mentioned by Smolin are more like toy models, but I would add higher spin gravity as another idea that may be more interesting. Really though, I don't see these as alternatives. The “alternative theories view” is a social construct that came out of in-fighting between physicists. There is only one right theory of quantum gravity, and if more than one idea seems to have good features, and they have not been shown to be irreconcilable, then the best view is that they might all be telling us something important about the final answer. For those who have not seen it I still stand by my satirical video on this subject:

A Double Take on the String Wars

Dawid’s second reason is “The Unexpected Explanatory Coherence Argument.” This means that the maths of string theory works surprisingly well and matches physical requirements in places where it could easily have fallen down. It is a good argument but I would prefer to cite specific cases such as holography.

The third and final reason Dawid gives is  “The Meta-Inductive Argument”. I think what he is pointing out here is that the standard model succeeded because it was based on consistency arguments such as renormalisability which reduced the possible models to just one basic idea that worked. The same is true for string theory so we are on firm ground. Again I think this is more of a meta-argument and I prefer to cite specific instances of consistency.

The biggest area of contention centres on the role of the multiverse. I see it as a positive reason to like string theory. Woit argues that it cannot be used to make predictions so it is unscientific which means string theory has failed. I think Motl is (like many string theorists) reluctant to accept the multiverse and prefers that the standard model will fall out of string theory in a unique way. I would also have preferred that 15 years ago but I think the evidence is increasingly favouring high levels of fine-tuning so the multiverse is a necessity. We have to accept what appears to be right, not what we prefer. I have been learning to love it.

I don't know how Dawid defines the scientific method. It goes back many centuries and has been refined in different ways by different philosophers. It is clear that if a theory is shown to be inconsistent, either because it has a logical fault or because it makes a prediction that is wrong, then the theory has to be thrown out. But what happens if a theory is eventually found to be uniquely consistent with all known observations while its characteristic predictions are all beyond technical means? Is that theory wrong or right? Mach said that the theory of atoms was wrong because we could never observe them. It turned out that we could observe them, but what if we couldn't for practical reasons? It seems to me that there are useful things a philosopher could say about such questions, and to be fair to Dawid he has articles freely available online that address this question, e.g. here, so even if the book is out of reach there is some useful material to look through. Unfortunately my head hits the desk whenever I read the words “structural realism”, my bad.

update: see also this video interview with Nima Arkani-Hamed for a view I can happily agree with

 https://www.youtube.com/watch?v=rKvflWg95hs


Evidence for a charged Higgs Boson?

October 12, 2012

Last week Uppsala was home to a specialised HEP workshop about the search for charged Higgs bosons. Such particles are predicted in some beyond standard model theories such as supersymmetry. There is not much direct evidence yet for such charged scalar bosons, but the searches described at the workshop have not looked beyond the 2011 data using 5/fb at most. There is still a lot of room left for them to appear.

The best hope for BSM observations in the data so far comes from anomalies in the Higgs decay rates. In particular the decay to two tauons has not been observed where expected and the rate for decay to two photons is too large. In my opinion the tau decay is not a very convincing discrepancy yet because the stats are low, especially because ATLAS has not yet done the analysis with 2012 data. The diphoton excess is also not fantastically convincing with a combined significance of about 2.2 sigma according to Joe Incandela (CMS spokesperson) but it has persisted since 2011 and is seen by both ATLAS and CMS. It is probably too big to be explained by theory errors from the analysis of the standard model so some BSM explanation is a real possibility. Both observations will be considerably clarified at the Hadron Collider Physics conference in Kyoto next month.

Meanwhile there is little to stop theorists thinking about what could account for such anomalies if they turn out to be real. This is not just idle speculation. Any theory that might explain the anomalies could make unique predictions for new physics that could prioritize the searches to help the collaborations home in on new physics more quickly. This is crucial to plan future accelerators.

The diphoton decay channel is especially sensitive to new physics because the Higgs boson itself is not charged. Photons only interact with charged particles, so the Higgs can only decay to photons via loop diagrams that include massive charged particles. We know of several such particles in the standard model, and the ones that contribute the most in this case are the W bosons and the top quark. If you know anything about the type of Feynman diagrams involved you will know that bosons and fermions in loops interfere destructively. In this case the W bosons have the larger amplitude and the top quark contribution reduces the rate by about 40%. This means that to increase the decay rate and explain the tentative excess you would need to postulate the existence of (at least) one new heavy charged boson, such as a charged Higgs scalar. It has to be heavier than about 105 GeV, otherwise it would have been observed at LEP, but upper limits depend on its properties.
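As an aside, my own schematic summary of the standard loop calculation (not part of the original post) is

Γ(h→γγ) ∝ | A_W(τ_W) + N_c Q_t² A_t(τ_t) + Σ_new N_i Q_i² A_i(τ_i) |²

where for a 125 GeV Higgs the W loop gives A_W ≈ −8.3 and the top loop gives N_c Q_t² A_t ≈ +1.8, which is where the roughly 40% reduction of the rate comes from. Any new heavy charged particle adds a further term weighted by the square of its electric charge and by its coupling to the Higgs, and whether it enhances or suppresses the rate depends on the sign of that term relative to the dominant W contribution.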

As it happens there are phenomenologists so skilled at their job that they can explain the excess in many other ways, e.g. using “vector-like” fermions, a fermiophobic Higgs or even just QCD corrections. I am simply going to be skeptical and suggest that they are thinking wishfully about their pet theories. To the unbiased mind a new charged boson is the most obvious explanation for an excess. That still leaves open the question of what spin (and other properties) the boson has. A spin one charged boson would have to be very similar to a W gauge boson and would mediate new forces; the limits on such new particles are already strong. Higher spin would make it a charged graviton. Let's not go there.

Another major property to determine for a new particle is its lepton number. If the particle had lepton number one (like a scalar lepton) then its R-parity would be odd. All standard model particles have even R-parity, so if lepton number is conserved our mystery particle would either have to be stable or decay to another new stable particle. Heavy charged particles are easy to detect and lighter stable particles would be hard to miss at the LHC. ATLAS and CMS were designed with missing energy searches in mind so that they could look effectively for supersymmetry. Indeed a scalar tau would be a good candidate except that SUSY searches have already gone a long way towards excluding it.

So there are many possible explanations for the diphoton excess, if it is real physics, but a charged scalar boson with zero lepton number is the simplest case that still has a good chance of being viable. Any such charged scalar boson would immediately be identified as a likely charged Higgs if it were found.

Coming back to last week's workshop, it is good to see that the charged Higgs as an explanation for the diphoton excess was indeed the subject of a talk. The speaker, Stefano Moretti, concentrated on the Higgs triplet model, which has charged and doubly charged Higgs bosons. The doubly charged Higgs would be particularly effective in explaining the diphoton excess because doubling the charge quadruples its contribution to the amplitude, since there are two photon vertices. Of course some next-to-minimal SUSY models have a similar feature. Here is the set of Feynman diagrams involved.

With so many contributions all adding to the diphoton excess, the charged Higgs can comfortably be heavier than the limits set by direct searches so far. Soon we will get more information with a better determination of the excess and better charged Higgs searches. The 2012 data at 8 TeV will be much more penetrating than the 2011 data for heavy new particles, and by now we have three times as much of it. Of course this story could go in many directions from here. The diphoton excess may fade or be explained by better standard model calculations. It might even be some systematic error symptomatic of a less than perfect understanding of the detectors. If it does hold up there are lots of new physics possibilities, but if I had to put my chips down at this point I think the charged Higgs has the best odds, all things considered.


SUSY 2012

August 13, 2012

The SUSY 2012 conference starts in Beijing today. It is the biggest supersymmetry conference of the year and we expect to see the latest results using the 5/fb gathered in 2012 at 8 TeV before the last technical stop. Actually at least some of the results have already appeared, with three new conference notes from ATLAS this morning here, here and here. CMS released their results earlier; see their twiki page.

Because of the high masses being searched for, the extra TeV of energy over last year's 7 TeV actually provides 2 to 3 times as much sensitivity, so even without combining the new results with the similar amount of data collected last year we get significantly better depth. Sadly there is nothing yet observed in these notes beyond standard model expectations. This is disappointing, but there may be other searches released later and there are always places for SUSY to hide from the LHC.

The most promising anomaly at this time is the 1.8 times SM excess in the diphoton channel seen in the Higgs search, which currently stands at 2.5 sigma above the standard model expectation in ATLAS and 1.5 sigma in CMS. If the peaks coincided the combined significance would be about 2.8 sigma, but they are at slightly different masses so the combined result is actually no better than ATLAS on its own. You could argue that this might be a calibration error and the 2.8 sigma is good. In any case there will be twice as much data available in a few weeks and we will see if the excess is a statistical fluctuation or not. Looking at the four individual results from the two experiments and last year vs this year, they can be plotted on a mass vs signal scale roughly as follows.

The green line is the standard model expectation, blue circles are CMS and red are ATLAS. Black is the unofficial combination. The results are comparable to throwing 4 dice and getting four sixes. Was it a fluke or were the dice loaded, and if so, how?
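As a quick aside (my own check, not part of the original post), the roughly 2.8 sigma quoted above for coinciding peaks is what you get from the standard rule for combining two independent, equally sensitive Gaussian significances:

```python
# Stouffer's rule for two independent, equally weighted Gaussian significances.
import math
z_atlas, z_cms = 2.5, 1.5
print((z_atlas + z_cms) / math.sqrt(2))   # ~2.83 sigma if the peaks lined up
```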

If the effect is not statistical it could easily be a combination of systematic errors. This would most likely be due to errors in the theoretical calculations that would affect both experiments. (TS pointed out this paper which fingers QCD uncertainties.) Many people would suggest we wait for the dice to be rolled again and then look at systematics more carefully before taking this too seriously. However, by the time that has happened the long shutdown will be upon us. If there is a possibility for something to be seen here it makes sense to look at what it could be. Theorists might then make predictions that could be tested this year if triggers can be adjusted in time.

I am assuming that the excess in the diphoton channel is due to extra particles that affect the Higgs decay loop and that the production rate via gluon fusion is close to SM predictions. This may be wrong but it is what the data looks like so far. That being the case, the Higgs diphoton loop can most easily be enhanced if there is a new charged particle that adds to the loop. A boson would probably add to the cross-section while a fermion would subtract from it, but some knowledgeable theorists say that “vector-like” fermions are also a possibility and who am I to argue. It must be colourless to avoid spoiling the gluon fusion production rate. It could carry lepton number, which would affect its decay possibilities. Its mass would be greater than 105 GeV, otherwise it would have been produced via virtual photons at LEP, but less than about 300 GeV to have a significant effect on the loop. The best candidates are scalar leptons like the stau or charged scalars like a charged Higgs, but vectors such as a W' are also possible. These things have been searched for and already excluded in the required mass range, but only under model-specific assumptions. Hadron colliders have big blind spots, especially when particles decay via jets. There is still hope that something is being missed.


Bayes and String Theory

June 12, 2012

If supersymmetry is found or excluded at the Large Hadron Collider, how will it affect your opinion on string theory as a unification of gravity and particle physics? This is a hard question and opinions differ widely across the range of theorists, but at the least any answer should be consistent with the laws of probability, including Bayes' law. What can we really say?

A staunch string theorist might want to respond as follows:

“I am confident about the relevance of superstring theory to the unification of gravity and the forces of elementary particles because it provides a unique way to accomplish this that is consistent in the perturbative limits (amongst other reasons). Unfortunately it does not have a unique solution for the vacuum and we have not yet found a principle for selecting the solution that applies to our universe. Because of this we cannot predict the low energy effective physics and we cannot even know if supersymmetry is an observable feature of physics at energy scales currently accessible. Therefore if supersymmetry is not observed at the TeV scale, even after the LHC has explored all channels up to 14 TeV with high integrated luminosities, there is no reason for that to make me doubt string theory. On the other hand, if supersymmetry is observed I will be enormously encouraged. This is because there are good reasons to think that supersymmetry will be restored as an exact gauge symmetry at some higher scale, and gauged supersymmetry inevitably includes gravity within some version of supergravity. There are further good reasons why supergravity is not likely to be fully consistent on its own and would necessarily be completed only as a limit of superstring theory. Therefore if supersymmetry is discovered by the LHC my confidence in string theory will be greatly improved.”

On hearing this a string theory skeptic would surely be seen shaking his head vigorously. He would say:

“You cannot have it both ways! If you believe that the discovery of supersymmetry will confirm string theory then you must also accept that failure to discover it will falsify string theory. Any link between the two must work equally in both directions. You are free to say that supersymmetry at the electro-weak scale is a theory completely independent of string theory if you wish. In that case you are safe if supersymmetry is not found, but by the same rule the discovery of supersymmetry cannot be used to claim that superstring theory is right. If you prefer you can claim that superstring theory predicts supersymmetry (some string theorists do), but if that is your position you must also accept that excluding supersymmetry at the LHC will mean that string theory has failed. You can take a position in between but it must work equally in both directions.”

The Tetrahedron of Possibilities

What does probability theory tell us about the range of possibilities that a theorist can consider for answers to this problem? Prior to the experimental result he will have some estimate for the probability that string theory is a correct theory of quantum gravity and for the probability that supersymmetry will be observed at the LHC. In my case I assign a probability of P_ST = 0.9 to the idea that string theory is correct and P_SUSY = 0.7 to the probability that SUSY will be seen at the LHC. These are my prior probabilities based on my knowledge and reasoning. You can have different values for your estimates because you know different things, but you can't argue with mine. There are no absolutely correct global values for these probabilities; they are a relative concept.

However, these two probabilities do not describe everything I need to know. There are four logical outcomes I need to consider altogether:

  • P1 = the probability that both string theory is correct and SUSY will be found
  • P2 = the probability that string theory is correct and SUSY will not be found
  • P3 = the probability that string theory is wrong and SUSY will be found
  • P4 = the probability that string theory is wrong and SUSY will not be found

You might try to tell me that there are other possibilities, such as that SUSY exists at higher energies or that string theory is somehow partly right, but I could define my conditions for correctness of string theory and for discovery of SUSY so that they are unambiguous. I will assume that has been done. This means that the four possible outcomes are mutually exclusive and exhaustive. We can conclude that P1 + P2 + P3 + P4 = 1. Of course the four probabilities must also be between 0 and 1. These conditions map out a three-dimensional tetrahedron in the four-dimensional space of the four probability variables, with the four logical outcomes at its vertices. This is the tetrahedron of possible prior probabilities, and any theorist's prior assessment of the situation must be described by a single point within this tetrahedron.

So far I have only given two values that describe my own assessment so to pinpoint my complete position within the three-dimensional range I must give one more value. If I thought that string theory and SUSY at the weak scale were completely independent theories I could just multiply as follows

P1 = P_ST × P_SUSY = 0.63
P2 = P_ST × (1 − P_SUSY) = 0.27
P3 = (1 − P_ST) × P_SUSY = 0.07
P4 = (1 − P_ST) × (1 − P_SUSY) = 0.03

The condition that the two theories are independent falls on a surface given by the equation P1 × P4 = P2 × P3, which neatly divides the tetrahedron in two.

As I already explained, I do not think these two things are independent. I think that SUSY would strongly imply string theory. In other words I think that the probability of SUSY being found and string theory being wrong is much lower than the value of 0.07 for P3. In fact I estimate it to be something like P3 = 0.01. I must still keep the other probabilities fixed, so P1 + P2 = P_ST = 0.9 and P1 + P3 = P_SUSY = 0.7. This means that all my probabilities are now known:

P1 = 0.69
P2 = 0.21
P3 = 0.01
P4 = 0.09

Notice that I did not get to fix P1 separately from P3. If I know how much the discovery of SUSY is going to affect my confidence in string theory then I also know how much the non-discovery of SUSY will affect it. It is starting to sound like the string theory skeptic could be right, but wait. Let’s see what happens after the LHC has finished looking.

Suppose SUSY is now discovered; how does this affect my confidence? My posterior probabilities P'2 and P'4 both become zero, and by the rules of conditional probability P'_ST = P1/P_SUSY = 0.69/0.7 = 0.986. In other words my confidence in string theory will have jumped from 90% to 98.6%, quite a significant increase. But what happens if SUSY is found to be inaccessible to the LHC? In that case we end up with P'_ST = P2/(1 − P_SUSY) = 0.21/0.3 = 0.7. This means that my confidence in string theory will indeed be dented, but it is far from falsified. I should still consider string theory to have much better than even odds. So the skeptic is not right. The string theorist can argue that finding SUSY will be a good boost to string theory without it being falsified if SUSY is excluded, but the string theorist has to make a small concession too: his confidence in string theory has to be less if SUSY is not found.
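For anyone who wants to play with their own priors, here is the arithmetic above as a short script (the prior values are the ones from this post; the code itself is my own sketch):

```python
# The conditional-probability bookkeeping from this post, as a script.
p_st, p_susy = 0.9, 0.7    # priors: string theory correct, SUSY found at the LHC
p3 = 0.01                  # P(string theory wrong AND SUSY found) -- chosen, not derived
p1 = p_susy - p3           # = 0.69, since P1 + P3 = P_SUSY
p2 = p_st - p1             # = 0.21, since P1 + P2 = P_ST
p4 = 1 - p1 - p2 - p3      # = 0.09

posterior_if_susy_found    = p1 / p_susy        # 0.69 / 0.7 ~ 0.986
posterior_if_susy_excluded = p2 / (1 - p_susy)  # 0.21 / 0.3 = 0.70
print(posterior_if_susy_found, posterior_if_susy_excluded)
```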

Remember, I am not claiming that these probabilities are universally correct. They represent my assessment and I am not a fully fledged string theorist. Someone who has studied it more deeply may have a higher prior confidence in which case excluding SUSY will not make much difference at all to him even if he believes SUSY would strongly imply string theory.


Bayes and Susy

May 10, 2012

Here's a puzzle. There are three cups upside down on a table. Your friend tells you that a pea is hidden under one of them. Based on past experience you estimate that there is a 90% probability that this is true. You turn over two cups and don't find the pea. What is the probability now that there is a pea underneath the last cup? You may want to think about this before reading on.

Naively you might think that two-thirds of the parameter space has been eliminated, so the probability has gone from 90% to 30%, but this is quite wrong. You can use Bayes' theorem to get the correct answer, but let me give you a more intuitive frequentist argument. The situation can be modelled by imagining that there are thirty initial possibilities with equal probability. Nine of them have a pea under the first cup, nine more under the second and nine more under the third. The remaining three have no pea under any cup. This distribution correctly models the 90% probability that a pea is there, since 27 of the 30 cases have one. If you now eliminate the cases where the pea is under the first or second cup, you are left with nine instances where it is under the third cup and three where it is not there at all. So the correct probability is 9 out of 12, or 75%, much better than the naive 30% guess.
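The same answer follows directly from Bayes' theorem; a quick check (my own, not part of the original post):

```python
# P(pea | not found under two cups) via Bayes' theorem.
prior_pea = 0.9
p_miss_given_pea = 1 / 3        # pea equally likely under each cup, two turned over
p_miss_given_no_pea = 1.0

posterior = (prior_pea * p_miss_given_pea) / (
    prior_pea * p_miss_given_pea + (1 - prior_pea) * p_miss_given_no_pea)
print(posterior)                 # 0.75
```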

I mention this because I saw a comment over at NEW pointing to this paper about applying Bayesian statistics to the probability of finding SUSY at the TeV scale. The puzzle illustrates that Bayesian rules do not reduce the probability of something existing by as much as you might think when you eliminate a large chunk of the parameter space. Before experiments started to have their say I felt that SUSY at the TeV scale was a well motivated theory and I like the maths of supersymmetry, so I might have estimated the probability of it being there as 90%. By the time that paper was written, LEP had eliminated lower mass SUSY, just as you might turn over a couple of cups and not find the pea. At the start of 2011, before the LHC started to have much say, I estimated the probability at 75%.
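To restate the logic as a formula (my own addition, not in the original post): if your prior probability that the particle exists is P, and a search eliminates a fraction f of the places it could have been without finding it, then Bayes' theorem gives P' = (1 − f)P / ((1 − f)P + (1 − P)). With P = 0.9 and f = 2/3 this reproduces the 75% above, and applying it a second time gives the 50% mentioned below.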

You might argue that another two-thirds of the parameter space has been eliminated since then. By the same analysis this would reduce the probability for SUSY at the TeV scale to 50%. However, we also now know that the mass of the Higgs is around 125 GeV with 4 sigma confidence (actually the mass region around 115 GeV – 120 GeV is still wide open, so the story is not concluded yet). If the mass had been 115 GeV it would have been a good indicator for SUSY, and at 140 GeV it would have been a strong eliminator. At 125 GeV it still “smells” like SUSY but the aroma is not so sweet. This can't be quantified, but for me it pushes the probability for SUSY back up to about 70%.

If you are a SUSY sceptic I know what you are thinking. You think that LEP eliminated much more than two-thirds of the parameter space and the LHC eliminated much more than two-thirds of what was left. Is this really the case? All the diagrams from ATLAS and CMS which show large chunks of the parameter space being eaten up are misleading. Firstly, there is no uniform measure of probability that can be assigned to the area of the plot. Secondly, and more importantly, all these plots rely on highly constrained versions of SUSY to reduce the parameter space to two dimensions so that it can be analysed and plotted. If SUSY phenomenologists have made a mistake it was to think that using these simplified models would be a good way to search for SUSY. This was not well motivated and has been shown wrong. If SUSY is to be found she will be seen in direct searches for particles such as the stop or stau. The Higgs is only starting to be seen in the data now, so why should we think that heavier particles would already have shown up? The Higgs was in a place where it was not easy to find, but this could also be the case for the stop, especially if its mass is near that of the top (see also Stealth Supersymmetry). Higgs searches are relatively straightforward to analyse because if we know its mass we also know its cross-sections and decay rates (assuming the standard model). This is not the case for the stop, stau or gluinos. We have to keep searching until the limits placed on cross-sections are so small that all possibilities are excluded. The LHC is nowhere near that point yet.

As a curious footnote it is amusing to see that my Stop Rumours post is gradually making its way towards being the most read article on this blog. Why so much interest?  Looking into it I found that hit counts on most posts reduce to a trickle after a few days but this post keeps collecting hits at about a hundred a day, even after three months. The stats show that this is because of people searching for the single word “stop” on google. When I do the search myself I find that the post does indeed appear at the bottom of the first page. The “Stop Rumours” title must be enticing enough to lure people to click their way in. I suspect they are a bit baffled by what they find but maybe they will learn something about physics. It is very unusual to get a first page ranking for a single common word like “stop” so why is this happening? A clue is that the Google entry has an attached note saying that “Cliff Harvey shared this”. This is a feature of Google plus where Harvey maintains an excellent column commenting on people’s blog posts. If I log out of Google plus I no longer see my post in the Google search listing but once logged in I notice that a whole load of my search results are there because Harvey has shared them. Judging by the steady trickle of hits on my post this must be the same for a large number of people. If you are interested in SEO you will find this fact quite interesting and perhaps useful until Google tweak their parameters back to something more sensible.


Stop rumours!

February 7, 2012

Meaning that there are rumours going round about stops or scalar tops, not that we should stop spreading rumours. In SUSY theories stops are typically the lightest sfermions (scalar fermions are bosons, not fermions), related to top quarks, which are the heaviest quarks and indeed the heaviest particles in the standard model. If stops exist they would help stabilise the Higgs vacuum, which could be too unstable if the Higgs mass is around 125 GeV as now expected, but no one has seen one yet and the situation for theorists has been getting a bit desperate because they had expected to see them at the LHC, and so the anti-SUSY bloggers have been poking fun and saying I told you so.

Now rumours have been squarked to the blogotwittersphere via Motl at TRF and Jester of Resonaances that a signal for the stop has been seen in the data. The story so far has been summed up by Cliff Harvey on Google+, so look there for the details. There is a seminar next week that could be relevant to the rumour, but Jester's last tweet says knowingly “Caution: theorists rumoring about stops is fact, but what is now out on blogs is 100% false. Dont jump unless more reliable rumors appear”, so what is going on?

Sooner or later someone is going to start a rumour just to catch us out. So is the greatest news story in the history of science about to break, or have we been duped by a vengeful experimentalist who saw the next seminar as an opportunity to get back at us for all those earlier leaks on the theory blogs? Is it indeed a slepton or something we should have slept on?

By the way there is an LHCb seminar about to be webcast and they are the only ones with plausible BSM signals so far so let’s slide back to reality and enjoy that, until next week.


What would a Higgs at 125 GeV tell us?

December 4, 2011

13 December: please follow the live blog for up-to-date news

The rumours tell us that next week ATLAS and CMS will announce a strong but inconclusive signal for the Higgs boson at about 125 GeV. This may be wrong and even if it is right there may be other candidate signals to think about, and it will take much more data to verify that the signal is indeed correct for the Higgs, but if it is right, what then are the implications of the Higgs at this mass?

This question will be the subject of much discussion in the coming months and I can only touch on it here. Certainly the central topic of the debate will be the stability of the vacuum and whether it implies new physics, and if so, at what scale?

It has been known for about twenty years that for a low Higgs mass relative to the top quark mass, the quartic Higgs self-coupling runs at high energy towards lower values. At some point it would turn negative indicating that the vacuum is unstable. In other words the universe could in theory spontaneously explode at some point releasing huge amounts of energy as it fell into a more stable lower energy vacuum state. This catastrophe would spread across the universe  at the speed of light in an unstoppable wave of heat that would destroy everything in its path. Happily the universe has survived a very long time without such mishaps so this can’t be part of reality, or can it?
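To make the mechanism concrete, here is a deliberately crude toy calculation (my own sketch, not the analysis discussed below): a one-loop running of the quartic coupling keeping only the top Yukawa and the strong coupling. Real analyses include the full set of couplings, two-loop running and matching corrections, so the scale this prints should not be trusted; it only shows how the −6y_t⁴ term drags the coupling down towards negative values.

```python
# Crude toy (my own sketch): one-loop running of the Higgs quartic coupling
# lambda, keeping only the top Yukawa yt and the strong coupling g3.
# Electroweak couplings, two-loop terms and matching corrections are ignored,
# so the printed scale is only illustrative of the mechanism.
import math

v, mh, mt = 246.0, 125.0, 173.0
lam = mh**2 / (2 * v**2)            # ~0.13 (tree-level value at the top mass)
yt  = math.sqrt(2) * mt / v         # ~1.0  (tree-level top Yukawa)
g3  = 1.17                          # rough strong coupling at the top mass

t, dt = 0.0, 0.01                   # t = ln(mu / mt)
while lam > 0 and t < 40:
    loop = 16 * math.pi**2
    d_lam = (24 * lam**2 + 12 * lam * yt**2 - 6 * yt**4) / loop
    d_yt  = yt * (4.5 * yt**2 - 8 * g3**2) / loop
    d_g3  = -7 * g3**3 / loop
    lam, yt, g3, t = lam + d_lam * dt, yt + d_yt * dt, g3 + d_g3 * dt, t + dt

if lam <= 0:
    print(f"toy estimate: lambda turns negative near mu ~ {mt * math.exp(t):.1e} GeV")
else:
    print("lambda stayed positive up to the cutoff scale in this toy")
```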

As it turns out, a Higgs mass of 125 GeV is quite a borderline case. The situation was analysed, taking into account the best recent values for the top mass and weak coupling constants, by Ellis et al in 2009. Here is their most relevant graphic with a line running across at 125 GeV (plus or minus 1 GeV) added by me. The horizontal axis tells us the energy at which the coupling constant goes negative. The yellow band indicates the limit for vacuum stability. Because of uncertainty in the top mass and the weak coupling, and also due to some theoretical unknowns, the point at which this limit is reached is not known exactly. The yellow band covers the range of possibilities.

The second plot taken from Quiros shows the scale of instability as a function of Top and Higgs mass. I have added a green spot where we now seem to live.

At 126 GeV the vacuum might remain stable up to Planck energies (see e.g. Shaposhnikov and Wetterich). If this is the case then there is nothing to worry about, but depending on the precise values of the standard model parameters, instability could also set in at energies around a million TeV. This is well above anything we can explore at the LHC, but such energies are found in the more extreme parts of the universe and nothing bad has happened. The most likely explanation would be that some new unknown physics changes the running of the coupling to prevent it from going negative. Examples of something that could do this include the existence of a Higgsino or a stop as predicted by supersymmetry, but there are other possibilities.

It is also possible that some amount of vacuum instability could really be present. If there is meta-stability the vacuum could remain in its normal state. There would be the possibility of disaster at any moment, but the half-life for the decay of the vacuum would have to be more than about the 13 billion years that it has survived so far. In the plot above the blue band indicates the region where a more immediately unstable vacuum is reached. It is unlikely that this case is realised in nature.

As the plot shows, if the mass of the Higgs turns out to be 120 GeV despite present rumours to the contrary, then the stability problem would be a big deal. This would be a big boost for SUSY models that stabilise the vacuum and mostly prefer a light Higgs mass. If on the other hand the Higgs mass were found at 130 GeV or more, then stability would be no issue. 125 GeV leaves us in the uncertain region where more research and better measurements of the top mass will be required. It will still encourage the SUSY theorists, as work such as that of Kane shows, but the door will still be open to a range of possibilities.

There are other things apart from the stability of the vacuum that theorists will look at. What is the nature of the electro-weak phase transition implied by this Higgs mass? Can it play some role in inflation or other phenomenology of the early universe? How does the result fit with electro-weak precision measurements, and what else would be required to reconcile theory with experiment in such tests, especially the muon magnetic anomaly? 2011 has been a great year for the experimentalists, but next year the theorists will also have a lot of work to do.


A Typical LHC plot

August 28, 2011

Here is a typical LHC plot :)

As you can see, with 1.1/fb CMS has observed one event in a channel that may give a signal of a Higgs through decay to two Z bosons which in turn decay to two tau leptons and two other leptons. This is consistent with standard model backgrounds shown.

It will require about 100 times as many events for this channel to make any real impact on the search for the Higgs boson. Luckily the LHC will eventually record a few thousand /fb so this channel will be very useful.

There are other channels with better cross sections, but results shown so far have still used just a few events, or they are swamped by thousands of background events. It is possible to combine several channels and compare with what is expected from a particular theoretical model such as a standard model Higgs boson or MSSM supersymmetry, but such models tend to work in a reduced parameter space and may not match reality well. In the case of supersymmetry they look at models where a stable lightest particle is in reach of the LHC so that it shows up in missing energy searches. It would have been nice if this had led to a quick discovery, but it hasn't.

In time each of these channels will be populated with lots of events and can be compared with standard model backgrounds. Bumps could appear anywhere, leading to the discovery of some new particle. Once its properties are mapped through its different decay modes it can be fitted into a new model, which may or may not correspond to a supersymmetric multiplet.

People are starting to say that supersymmetry is in a corner, or even that the LHC seems to be incapable of producing new physics. It is far too early for any such conclusions. We need to be patient.

 


Has the LHC seen a Higgs Boson at 135±10 GeV?

August 13, 2011

Once again rumours are circulating that the Higgs boson has been seen, and now they are stronger than ever. At the EPS conference it was seen that both ATLAS and CMS have an excess of events peaking at around 144 GeV. Fermilab had a signal in the same place but much weaker. At the Lepton-Photon conference starting 22nd August ATLAS and CMS will unveil their combined plot. The question is, will the combined signal at 144 GeV be enough to announce an observation over 3-sigma significance?

Needless to say some early versions of the combined plot have already been leaked but rather than show results that may change I am just going to discuss my own unofficial combinations that are not very different. So here again is my combined plot for CMS, ATLAS and the Tevatron.

This shows a broad excess peaking at 144 GeV where it is well over 3-sigma significance. It extends from 120 GeV to 170 GeV, staying above 2-sigma most of the way, but it shows an exclusion above 147 GeV at 95% confidence. The signal is the expected size for a standard model Higgs boson from 110 GeV up to 145 GeV but is excluded by LEP below 115 GeV. What could it be: a Higgs boson, two Higgs bosons or something else?

The width of the Higgs boson is determined by its lifetime and at this mass it should be no more than 10 GeV. However there is a lot of uncertainty in the measured energy in some of the dominant channels. Some useful plots shown at Higgs Hunting 2011 by Paris Sphicas show what a simulated signal looks like in the WW channels and it is clear from these that a Higgs boson at 130 GeV or 140 GeV is perfectly consistent with the broad signal now observed.

There is also a hint of a signal around 120 GeV, but it is not strong enough for a claim. I would say that overall this plot is consistent with a single Higgs boson with mass between about 125 GeV and 145 GeV, or more than one Higgs boson in the range 115 GeV to 150 GeV. Whatever it is, the significance is enough to claim that a Higgsless model is now unlikely to be right, unless some other particle, probably a scalar, is mimicking the Higgs boson in this plot. After all, we can't really say that the signal is definitely a Higgs boson until we can confirm that it has the right cross-section in some of the individual channels.

What does this say for SUSY and other models? The MSSM requires a Higgs boson below 140 GeV. In detail the signature would be different from the standard model Higgs boson. If there were a Higgs below about 130 GeV the vacuum would be unstable (but perhaps metastable). I think something as light as 120 GeV would be hard to accept as a standalone Higgs boson and would have to be stabilised with something that looks like either a SUSY stop or a Higgsino. On the other hand a 140 GeV Higgs can easily exist on its own and requires no new physics even at much higher energy scales. At this point we cannot rule out either the MSSM or a lone Higgs boson.

Earlier I said that the electroweak fits could kill the standard model and that is still the case. At Higgs Hunting 2011 Matthias Schott from the gfitter group told us that a Higgs at 140 GeV has a p-value of just 23% in the fit which includes the Tevatron data. This is far short of what is required to rule it out, but it tends to suggest that there may be something more to be found if the gfitter data is good (count the caveats in that sentence). So just how good is the gfitter data?

This plot shows the effect on the electroweak fit of leaving out any one of the measurements used.

The green bar shows the overall preferred fit for the Higgs boson mass giving it a mass of 71 GeV to 122 GeV. But anything below 114 GeV is excluded by LEP. Anything below 122 GeV would certainly favour SUSY which is why this plot has been encouraging for theorists who prefer the BSM models. Indeed it is possible to get a much better fit to the data with just about anything other than the standard model.

How seriously should we take this? To get back some sanity, have a look at the effect of the A_l measurement. The fit includes two separate measurements of this parameter, one from LEP and one from SLD (the SLAC Large Detector). The reason for using the two is that they disagree with each other at about 2-sigma significance. This could just be statistical error, in which case we should use the combination of them both, but suppose it is a systematic error in one or other of the experiments, such as a mismodelled background? Removing the SLD measurement would push the preferred Higgs mass up and widen the error bars so that anything up to 160 GeV becomes a reasonable fit. This is just one example of how a measurement could compromise the fit. That being the case, I think we should not take the fit too seriously if we have good direct evidence for something different, and now we do.

In conclusion

From reliable sources I am expecting CERN to issue a press release about the status of the search for the Higgs boson next week in advance of the LP2011 conference. If the official Higgs combination is similar to my version (the leak shows that it is) then they have the right to claim an observation (but not a discovery) of a strong signal consistent with a Higgs boson at 144 GeV (or somewhere else nearby). They cannot exclude other BSM signals, including the MSSM. I don't know exactly how they will spin it, but they will want the media to take notice.

For more details we will need to await the next analysis. Given present results and the extra data already recorded I am sure we will not have to wait too long.

