Brain Power

June 28, 2013

Supercomputers

In 1984, the year Big Brother was meant to invade our privacy, I was a graduate student in Glasgow working on lattice gauge theories. As part of the research towards my doctorate I spent a week on a special mission to Germany, where I was allowed into a secret nuclear base to borrow some computer time on a Cray X-MP. It was the world’s fastest supercomputer of its time and there was only one in Europe, so I was very privileged to get some time on it, even if it was only a few hours of CPU. Such resources would have been hugely expensive if we had had to pay for them; I remember how the Germans jokingly priced the unseen cost in BMWs. The power of that computer was 400 Megaflops and it had a massive 512 Megabyte RAM disk.

The problem I was working on was to look for chiral symmetry breaking in QCD at high temperatures and densities using lattice simulations. In the last few years this has been seen experimentally at the LHC and other heavy ion accelerators, but back then it was just theory. To do this I had to look at the linear discretised Dirac equation for quarks on a background of lattice gauge fields. This gave a big Hermitian N×N matrix, where N is the number of lattice sites times 3 for the QCD colours. On a small lattice of 16⁴ sites (working in 4D spacetime) this gave matrices of size 196,608 × 196,608, and I had to find their smallest eigenvalues. The density of this spectrum says whether or not chiral symmetry is broken. Those are pretty big matrices to calculate the eigenvalues of, but they are sparse, with only 12 complex non-zero components in each row or column. My collaborators and I had some good tricks for solving the problem, and our papers are still collecting a trickle of citations.
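By way of illustration (not the actual code we used), here is a minimal sketch of that kind of calculation with modern tools: SciPy’s Lanczos-based eigsh applied to a random sparse Hermitian matrix standing in for the lattice Dirac operator. The size, sparsity pattern and values are stand-ins only; the real operator has a nearest-neighbour structure fixed by the gauge links.

```python
# Illustrative sketch only: a random sparse Hermitian matrix stands in for the
# lattice Dirac operator. The real operator has a specific nearest-neighbour
# structure fixed by the gauge links, which is not reproduced here.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

N = 2000          # toy dimension; the 16^4 lattice gave N = 196608
nnz_per_row = 12  # roughly the 12 non-zero couplings per row mentioned above

rng = np.random.default_rng(0)
rows = np.repeat(np.arange(N), nnz_per_row)
cols = rng.integers(0, N, size=rows.size)
vals = rng.normal(size=rows.size) + 1j * rng.normal(size=rows.size)

A = sp.csr_matrix((vals, (rows, cols)), shape=(N, N))
A = (A + A.getH()) * 0.5  # symmetrise so the matrix is Hermitian

# Lanczos-type iteration for the few eigenvalues of smallest magnitude;
# the density of the spectrum near zero is what signals chiral symmetry breaking.
smallest = eigsh(A, k=8, which='SM', return_eigenvectors=False)
print(np.sort(np.abs(smallest)))
```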

tianhe-2

Thirty years later Big Brother has finally succeeded in monitoring what everyone is doing in the privacy of their own homes, and my desktop computer has perhaps 100 times the speed and 30 times the memory of the Cray X-MP, which makes me wonder what I should be doing with it. The title of fastest supercomputer has recently been taken by China’s Tianhe-2, which has been benchmarked at 33.86 Petaflops and has a theoretical peak performance of 54.9 Petaflops, so it is about 100,000,000 times faster than the Cray. This beats Moore’s law by a factor of a few thousand, which may be in part due to governments being willing to spend much more money on them. The US, who more commonly hold the record, won’t be beaten for long, because the NSA is said to have a secret and very expensive project to build a supercomputer to surpass the Exaflop mark in the next few years. I doubt that any HEP grad students will have a chance to use it.
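The arithmetic behind those factors is easy to check. The sketch below assumes a Moore’s-law doubling time of two years, which is my assumption rather than anything official:

```python
# Back-of-envelope check of the speed-up factors quoted above.
cray_xmp_flops = 400e6    # 400 Megaflops (1984)
tianhe2_flops = 33.86e15  # 33.86 Petaflops benchmarked (2013)

speedup = tianhe2_flops / cray_xmp_flops
print(f"actual speed-up: {speedup:.1e}")         # ~8.5e7, i.e. about 100,000,000

years = 2013 - 1984
doubling_time = 2.0                              # assumed Moore's-law doubling period
moore_factor = 2 ** (years / doubling_time)
print(f"Moore's law alone: {moore_factor:.1e}")  # ~2.3e4
print(f"excess over Moore's law: {speedup / moore_factor:.0f}")  # a few thousand
```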

This raises the question: why do they need such powerful computers? In the past they may have been used to simulate nuclear explosions or design stealth fighters. Now they may be needed to decrypt and search all our e-mails for signs of dissenting tendencies, or perhaps there is an even more sinister purpose.

Artificial Intelligence

When computer pioneers such as von Neumann and Turing conceived the possibility of building electronic computers, they thought it would be easy to make them think like humans, even though they had no idea how fast computers would become. This turned out to be much harder than expected. Despite some triumphs, such as “superhuman” chess programs which can now crush the best grandmasters (see the discussion at the World Science Festival), the problem of making computers think like us has seen little progress. One possibility that looked promising back in the 1980s was neural networks. When I left academia some of my colleagues at Edinburgh were switching to neural networks because the theory and the computing problems were very similar to lattice calculations. Today their work has applications in areas such as facial recognition, but it has failed to deliver any real AI.

Now a new idea is raising hopes, based on the increasing power of computers and scanning technologies: can we simply map the brain and simulate it on a computer? To get a flavour of what is involved you can watch this TED talk by neuroscientist Sebastian Seung. His aim is to simulate a small part of a mouse brain, which seems quite unambitious, but actually it is a huge challenge. If they can get that working then it may simply be a case of scaling up to simulate a complete human brain. If you want to see a project that anyone can join, try OpenWorm, which aims to simulate the 7000 neural connections of a nematode worm, the simplest functioning brain in nature (apart from [insert your favourite victim here]).

Brain Scans

An important step will be to scan every relevant detail of the brain, which consists of 100 billion neurons connected by a quadrillion synapses. Incredibly, the first step towards this has already been taken. As part of the European Human Brain Project, funded with a billion Euros, scientists have taken the brain of a 65-year-old woman who died with a healthy brain and sliced it into 7404 sections, each just 20 microns thick (Obama’s Brain Mapping Project, which has had a lot of publicity, is just a modest scaled-down version of the European one). It is not yet clear whether this is good enough detail to give a complete map of the synaptic connections of every neuron in the brain, but it is at least clear that an order of magnitude more detail would be, so it is now only a matter of time before that goal is achieved.

If we can map the brain in such precise detail, will we be able to simulate its function? The basic connectivity graph of the neurons forms a sparse matrix much like the ones I used to study chiral symmetry breaking, but with about a trillion times as many numbers. An Exaflop supercomputer is about a trillion times more powerful than the one I used back in 1984, so we are nearly there (assuming linear scaling). The repeated firing of neurons in the brain is (to a first approximation) just like multiplying the signal repeatedly by the connection matrix. Stable signals will be represented by eigenvectors of the matrix, so it is plausible that our memories are just the eigenvalue spectrum of the synaptic map, and the numerical methods we used in lattice gauge theories will be applicable here too.
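As a toy illustration of that linear picture, and nothing more, the sketch below repeatedly applies a random sparse “synaptic” matrix to a signal; the surviving direction approximates the dominant eigenvector. The matrix is purely made up, and it is symmetrised only so that the iteration converges cleanly.

```python
# Toy model of the linear picture above: repeated multiplication by a sparse
# "connection matrix" drives a signal towards an eigenvector, a crude stand-in
# for a stable pattern of activity. The matrix here is random, not a real map.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(1)
n_neurons = 10000
connections_per_neuron = 100   # real neurons have of order 10^4 synapses

rows = np.repeat(np.arange(n_neurons), connections_per_neuron)
cols = rng.integers(0, n_neurons, size=rows.size)
weights = rng.normal(size=rows.size)
W = sp.csr_matrix((weights, (rows, cols)), shape=(n_neurons, n_neurons))
W = (W + W.T) * 0.5            # symmetrise so power iteration converges cleanly

signal = rng.normal(size=n_neurons)
for _ in range(200):
    signal = W @ signal
    signal /= np.linalg.norm(signal)   # keep the "firing" signal finite

# The Rayleigh quotient approximates the dominant eigenvalue of W.
print("approximate dominant eigenvalue:", signal @ (W @ signal))
```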

However, the processes of logical reasoning are more than just recalling memories and will surely depend on non-linear effects in the brain just as the real physics of lattice QCD depends on the highly non-linear interactions of the gauge field. Will they be able to simulate those for a human brain on a computer? I have no idea, but the implications of being able to do so are enormous. People are starting to talk seriously about the moral implications as well as what it may bring in capability. I can understand that some agencies may want any such simulations to be conducted under a veil of secrecy if possible. Is this what is driving governments to push supercomputer power so far?

It would be ironic if the first true artificial intelligence is actually a faithful simulation of a human brain. No doubt billionaires will want to fund the copying of their own brains to giant supercomputers at the end of their lives if this becomes possible. But once we have the capability to simulate a brain we will also start to understand how it works, and then we will be able to build intelligent computers whose power of thought goes far beyond our own. Soon it may no longer be a question of if this is possible, just when.



Why I Still Like String Theory

May 16, 2013

There is a new book coming out by Richard Dawid, “String Theory and the Scientific Method”. It has been reviewed by Peter Woit and Lubos Motl, who give their expected opposing views. Apparently Woit gets it through a university library subscription. I can’t really review the book because at £60 it is a bit too expensive. Compare this with the recent book by Lee Smolin, which I did review after paying £12.80 for it. These two books would have exactly the same set of potential readers, but Smolin is just better known, which puts his work into a different category where a different type of publisher accepts it. I don’t really understand why any author would choose to allow publication at a £60 price-tag. They will sell very few copies and get very little back in royalties, especially if most universities have free access. Why not publish a print-on-demand version, which would be cheaper? Even the Kindle version of this book is £42, yet you can easily self-publish on Kindle for much less and keep 70% of the profits through Amazon.

My view is as predictable as anyone else’s since I have previously explained why I like string theory. Of the four reasons I gave previously, the main one is that it solves the problem of how quantum gravity looks in the perturbative limit about a flat space-time, with gravitons interacting with matter. This limit really should exist for any theory of quantum gravity and it is the realm that is most like familiar physics, so it is very significant that string theory works there when no other theory does. OK, so perturbative string theory is not fully sewn up, but it works better than anything else. The next best thing is supergravity, which is just an effective theory for superstrings.

My second like is that string theory supports a holographic principle, which is also required for quantum gravity. This is a much weaker reason because (a) it is in less well-known territory of physics and requires a longer series of assumptions and deductions to get there, and (b) it is not so obvious that other theories won’t also support the holographic principle.

Reason number three has not fared so well. I said I liked string theory because it would match well with TeV scale SUSY, but the LHC has now all but ruled that out. It is possible that SUSY will appear in LHC run 2 at 13 TeV or later, or that it is just out of reach, but already we know that the Higgs mass in the standard model is fine-tuned. There is no stop or Higgsino where they would be needed to control the Higgs mass. The only question now is how much fine-tuning is there?

Which brings me to my fourth reason for liking string theory. It predicts a multiverse of vacua in the right quantities required to explain anthropic reasoning for an unnatural fine-tuned particle theory. So my last two reasons were really a hedge. The more evidence there is against SUSY, the more evidence there is in favour of the multiverse and the string theory landscape.

Although I don’t have the book, I know from Woit and Motl that Dawid provides three main reasons for supporting string theory, gathered from string theorists. None of my four reasons are included. His first reason is “The No Alternatives Argument”: apparently we do string theory because, despite its shortcomings, there is nothing else that works. As Lee Smolin pointed out over at NEW, there are alternatives. LQG may succeed, but to do so it must give a low energy perturbation theory with gravitons or explain why things work differently. Other alternatives mentioned by Smolin are more like toy models, but I would add higher spin gravity as another idea that may be more interesting. Really though, I don’t see these as alternatives. The “alternative theories” view is a social construct that came out of in-fighting between physicists. There is only one right theory of quantum gravity, and if more than one idea seems to have good features, without them meeting at a point where they can be shown to be irreconcilable, then the best view is that they might all be telling us something important about the final answer. For those who have not seen it, I still stand by my satirical video on this subject:

A Double Take on the String Wars

Dawid’s second reason is “The Unexpected Explanatory Coherence Argument.” This means that the maths of string theory works surprisingly well and matches physical requirements in places where it could easily have fallen down. It is a good argument but I would prefer to cite specific cases such as holography.

The third and final reason Dawid gives is “The Meta-Inductive Argument”. I think what he is pointing out here is that the standard model succeeded because it was based on consistency arguments, such as renormalisability, which reduced the possible models to just one basic idea that worked. The same is true for string theory, so we are on firm ground. Again I think this is more of a meta-argument, and I prefer to cite specific instances of consistency.

The biggest area of contention centres on the role of the multiverse. I see it as a positive reason to like string theory. Woit argues that it cannot be used to make predictions so it is unscientific which means string theory has failed. I think Motl is (like many string theorists) reluctant to accept the multiverse and prefers that the standard model will fall out of string theory in a unique way. I would also have preferred that 15 years ago but I think the evidence is increasingly favouring high levels of fine-tuning so the multiverse is a necessity. We have to accept what appears to be right, not what we prefer. I have been learning to love it.

I don’t know how Dawid defines the scientific method. It goes back many centuries and has been refined in different ways by different philosophers. It is clear that if a theory is shown to be inconsistent, either because it has a logical fault or because it makes a prediction that is wrong, then the theory has to be thrown out. But what happens if a theory is eventually found to be uniquely consistent with all known observations while its characteristic predictions are all beyond technical means? Is that theory wrong or right? Mach said that the theory of atoms was wrong because we could never observe them. It turned out that we could observe them, but what if we couldn’t for practical reasons? It seems to me that there are useful things a philosopher could say about such questions, and to be fair to Dawid he has articles freely available online that address this question, e.g. here, so even if the book is out of reach there is some useful material to look through. Unfortunately my head hits the desk whenever I read the words “structural realism”. My bad.

Update: see also this video interview with Nima Arkani-Hamed for a view I can happily agree with:

 https://www.youtube.com/watch?v=rKvflWg95hs


Book Review: Time Reborn by Lee Smolin

April 24, 2013

Fill the blank in this sentence:-

“The best studied approach to quantum gravity is ___________________ and it appears to allow for a wide range of choices of elementary particles and forces.”

time_reborn

Did you answer “String Theory”? I did, but Lee Smolin thinks the answer is his own alternative theory, “Loop Quantum Gravity” (page 98). This is one of many things he says in his new book that I completely disagree with. That’s fine, because while theoretical physicists agree rather well on matters of established physics such as general relativity and quantum mechanics, you will be hard pushed to find two with the same philosophical ideas about how to proceed next. Comparing arguments is an important part of looking for a way forward.

Here is another non-technical point I disagree with. In the preface he says that he will “gently introduce the material the lay reader needs” (page xxii). Trust me when I say that although this book is written without equations, it is not for the “lay reader” (an awkward term that originally meant non-clergyman). If you are not already familiar with the basic ideas of general relativity, quantum mechanics etc. and all the jargon that goes with them, then you will probably not get far into this book. Books like this are really written for physicists who are either working on similar areas or who at least have a basic understanding of the issues involved. Of course if the book were introduced as such it would not be published by Allen Lane. Instead it would be a monograph in one of those obscure vanity series by Wiley or Springer, where they run off a few hundred copies and sell them at $150/£150/€150 (same number in any other currency). OK, perhaps I took too many cynicism pills this morning.

The message Smolin wants to get across is that time is “real” and not an “illusion”. Already I am having problems with the language. When people start to talk about whether time is real I hear in my brain the echo of Samuel Johnson’s much-quoted retort “I refute it thus!” OK, you can’t kick time, but you can kick a clock, and time is real. The real question is “Is time fundamental or emergent?”, and Smolin does get round to this more appropriate terminology in the end.

In the preface he tells us what he means when he says that time is real. This includes “The past was real but is no longer real” and “The future does not yet exist and is therefore open” (page xiv). In other words he is taking our common language-based intuitive notions of how we understand time and saying that they are fundamentally correct. The problem with this is that when Einstein invented relativity he taught me that my intuitive notions of time are just features of my wetware program that evolved to help me get around at a few miles per hour, remembering things from the past so that I could learn to anticipate the future, and so on. It would be foolish to expect these things to be fundamental in realms where we move close to the speed of light, let alone at the centre of a black hole where density and temperature reach unimaginable extremes. Of course Smolin is not denying the validity of relative time, but he wants me to accept that common notions of the continuous flow of time and causality are fundamental, even though the distinction between past and future is an emergent feature of thermodynamics that is purely statistical and already absent from known fundamental laws.

His case is even harder to buy given that he does accept the popular idea that space is emergent. Smolin has always billed himself as the relativist (unlike those string theorists) who understands that the principles of general relativity must be applied to quantum gravity. How then can he say that space and time need to be treated so differently?

This seems to be an idea that came to him in the last few years. There is no hint of it in a technical article he wrote in 2005, where he makes the case for background independence and argues that both space and time should be equally emergent. This new point of view seems to be a genuine change of mind, and I bought the book because I was curious to know how it came about. The preface might have been a good place for him to tell me when and how he changed his mind, but there is nothing about it (in fact the preface and introduction are so similar that they could have been stuck together into one section without any sign of discontinuity between them).

Smolin does, however, explain why he thinks time is fundamental. The main argument is that he believes the laws of physics have evolved to become fine-tuned, with changes accumulating each time a baby universe is born. This is his old idea that he wrote about at length in another book, “The Life of the Cosmos”. For this theory to be true, he now thinks, time must be fundamentally similar to our intuitive notion of continuously flowing time. I would tend to argue the converse: that time is emergent, so we should not take the cosmological evolution theory too seriously.

I don’t think many physicists follow his evolution theory, but the alternatives, such as eternal inflation and anthropic landscapes, are equally contentious and involve piling about twenty layers of speculation on top of each other without much to support them. I think this is a great exercise to indulge in, but we should not seriously think we have much idea of what can be concluded from it just yet.

Smolin does have some other technical arguments to support his view of time, basically along the lines that the theories that work best so far for quantum gravity use continuous time even when they demonstrate emergent space. I don’t buy this argument either. We still have not solved quantum gravity after all. He also cites lots of long gone philosophers especially Leibniz.

Apart from our views on string theory, time, and who such books are aimed at, I want to mention one other issue where I disagree with Smolin. He says that all symmetries and conservation laws are approximate (e.g. pages 117-118). Here he seems to agree with Sean Carroll and even Motl (!) (but see comments). I have explained many times why energy, momentum and other gauge charges are conserved in general relativity in a non-trivial and experimentally confirmed way. Smolin says that “we see from the example of string theory that the more symmetry a theory has, the less its explanatory power” (page 280). He even discusses the preferred reference frame given by the cosmic background radiation and suggests that this is fundamental (page 167). I disagree, and in fact I take the opposite (old-fashioned) view: that all the symmetries we have seen are part of a unified universal symmetry that is huge but hidden, and that it is fundamental, exact, non-trivial and really important. Here I seem to be swimming against the direction the tide is now flowing, but I will keep on going.

OK, so I disagree with Smolin, but I have never met him and there is nothing personal about it. If he ever deigned to talk to an outsider like me I am sure we could have a lively and interesting discussion about it. The book itself covers many points and will be of interest to anyone working on quantum gravity, who should be aware of all the different points of view and why people hold them, so I recommend it to them, but probably not to the average lay person living next door.

See also Not Even Wrong for another review, and The Reference Frame for yet another. There is also a review with a long interview in The Independent.


UK Open Access policy launches today

April 1, 2013

The UK Research Councils RCUK have today begun the process of making all UK government-funded publications open access. Details of the scheme can be found here.

Some other countries are looking at similar initiatives or have already implemented them in some subjects (e.g. medicine in the US) but the UK scheme will be watched as a pioneering effort to bring Open Access to all public research.

Not Open_Access_logo2

In fact the system will be phased in over a period of five years, with 45% of publications to be open access this year. Both gold and green open access standards are approved. In the case of the gold standard, publishers will be paid up front to make papers open access on the publisher’s website immediately from publication. The budget for this has been set at about £1650 per paper, but there is considerable variability depending on the journal. It will be interesting to see if market forces can keep these prices down. Money will be allocated to research institutions, who will distribute it around their departments. The figures are set out here.

The system will also accept the option of green open access, where a journal simply allows the author to put their own copy of the paper online. Here there is a big catch: the RCUK will accept that the journal can embargo public access for six months or maybe even a year. To my mind this is not real open access at all. Publications should be open access from the moment they are accepted, if not before. And there is another catch with this option: I don’t see anything in the RCUK guidelines to ensure that the document is put online by the author or that it will be kept online. For both open access standards it is not clear that there can be any guarantee that papers will be kept online forever. What if a gold standard journal disappears? What if a repository disappears? Under what circumstances can an author withdraw a paper? Perhaps there are answers to these questions somewhere, but I don’t see them.

Another set of questions might be asked about how Article Processing Charges will affect the impartiality and standards of journals. You might also want to know if paying for open access up front will eventually reduce the cost to libraries of paying for subscriptions, or will they still always have to pay for access to papers published under the old system, and for papers that are privately funded?

I hope that the answer is that none of this will matter for long because another system of open access will evolve with a new way to do non-profit peer-review without the old journal system at all, but perhaps that’s just a pipe dream.


Fifth FQXi Essay Contest: It From Bit, or Bit From It?

March 26, 2013

The Fifth essay contest from the Foundational Questions Institute is now underway. The topic is about whether information is more fundamental than material objects. The subject is similar to the contest from two years ago but with a different slant. In fact one of the winning essays by Julian Barbour was called “Bit From It”. Perhaps he could resubmit the same one. The topic also matches the FQXi large grant awards for this year on the physics of information. Sadly I have already been told, unsurprisingly, that my grant application fell at the first hurdle but the essay contest provides an alternative (less lucrative) chance to write on this subject. Last year I did not get in the final but that really doesn’t matter. The important thing is to give your ideas an airing and discuss them with others, honestly.

In last year’s FQXi contest 50 essays were submitted by viXra authors. With the number of viXra authors increasing rapidly I hope that we will increase that figure this year. There has been a change in the rules to try to encourage more of FQXi’s own members to take part and improve the voting. Members will automatically get through to the final if they vote for 5 essays and leave comments. Last year there were about 15 FQXi member essays in the competition and if I am not mistaken only two failed to make the final, so it will not affect the placings much, but it should encourage the professional entrants to enter into the discussions and community rating which cannot be a bad thing.

For many of the independent authors who submit their work to viXra, getting feedback on their ideas is very hard. The FQXi contest is one way to get people to comment, so get writing. We have until June to make our entries.

Please note that FQXi is not connected to viXra in any way.


Planck thoughts

March 22, 2013

It’s great to see the Planck cosmic background radiation data released, so what is it telling us about the universe? First off the sky map now looks like this

Planck_CMB_565x318

Planck is the third satellite sent into space to look at the CMB and you can see how the resolution has improved in this picture from Wikipedia

PIA16874-CobeWmapPlanckComparison-20130321

Like the LHC, Planck is a European experiment. It was launched back in 2009 on an Ariane 5 rocket along with the Herschel Space Observatory, though the US, through NASA, also contributed.

The Planck data has given us some new measurements of key cosmological parameters. The universe is made up of 69.2±1.0% dark energy, 25.8±0.4% dark matter, and 4.82±0.05% visible matter. The percentage of dark energy increases as the universe expands while the ratio of dark to visible matter stays constant, so these figures are valid only for the present. Contributions to the total energy of the universe also include a small amount of electromagnetic radiation (including the CMB itself) and neutrinos. The proportion of these is small and decreases with time.

Using the new Planck data, the age of the universe is now 13.82 ± 0.05 billion years. WMAP gave an answer of 13.77 ± 0.06 billion years. In the usual spirit of bloggers’ combinations, we bravely assume no correlation of errors to get a combined figure of 13.80 ± 0.04 billion years, so we now know the age of the universe to within about 40 million years, less than the time since the dinosaurs died out.
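For anyone who wants to check the arithmetic, the combination above is just an inverse-variance weighted average, assuming uncorrelated errors (which is certainly too optimistic for real CMB analyses):

```python
# Inverse-variance weighted combination of the two age estimates, assuming
# (too bravely) that the errors are uncorrelated.
planck = (13.82, 0.05)  # billion years
wmap   = (13.77, 0.06)

w_planck = 1 / planck[1] ** 2
w_wmap   = 1 / wmap[1] ** 2

mean  = (w_planck * planck[0] + w_wmap * wmap[0]) / (w_planck + w_wmap)
error = (1 / (w_planck + w_wmap)) ** 0.5

print(f"combined age: {mean:.2f} +/- {error:.2f} billion years")  # 13.80 +/- 0.04
```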

The most important plot that the Planck analysis produced is the multipole analysis of the background anisotropy shown in this graph

Planck_power_spectrum_565w

This is like a Fourier analysis done on the surface of a sphere, and it is believed that the spectrum comes from quantum fluctuations during the inflationary phase of the big bang. The points follow the predicted curve almost perfectly, and certainly within the expected range of cosmic variance given by the grey bounds. A similar plot was produced before by WMAP, but Planck has been able to extend it to higher frequencies because of its superior angular resolution.

However, there are some anomalies at the low-frequency end that the analysis team have said are in the range of 2.5 to 3 sigma significance, depending on the estimator used. In a particle physics experiment this would not be much, but there is no look-elsewhere effect to speak of here, and these are not statistical errors that will get better with more data. This is essentially the final result. Is it something to get excited about?

To answer that it is important to understand a little of how the multipole analysis works. The first term in a multipole analysis is the monopole, which is just the average value of the radiation. For the CMB this is determined by the temperature and is not shown in this plot. The next multipole is the dipole. This is determined by our motion relative to the local preferred reference frame of the CMB, so it is specified by the three numbers of a velocity vector. This motion is considered to be a local effect, so it is also subtracted off in the CMB analysis and not regarded as part of the anisotropy. The first component that does appear is the quadrupole, which gives the first point on the plot. The quadrupole is determined by 5 numbers, so it is shown as an average with a standard deviation. As you can see, it is significantly lower than expected. This was known to be the case already after WMAP, but it is good to see it confirmed. This contributes to the 3 sigma anomaly, but on its own it is more like a one sigma effect, so nothing too dramatic.

In general there is a multipole for every whole number l, starting with l=0 for the monopole, l=1 for the dipole and l=2 for the quadrupole. This number l is labelled along the x-axis of the plot. It does not stop there, of course: we have an octupole for l=3, a hexadecapole for l=4, a dotriacontapole for l=5, a tetrahexacontapole for l=6, an octacosahectapole for l=7, and so on. It goes up to l=2500 in this plot; sadly I can’t write the name for that point. Each multipole is described by 2l+1 numbers. If you are familiar with spin you will recognise this as the number of components that describe a particle of spin l; it’s the same thing.
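To make the bookkeeping concrete, here is a minimal sketch (with made-up coefficients, not Planck data) of how the 2l+1 numbers at each multipole feed into one point of the power spectrum plot, and why the low multipoles carry such a large cosmic variance:

```python
# Each multipole l has 2l+1 coefficients a_lm, and the plotted power spectrum
# point is essentially their variance: C_l = sum_m |a_lm|^2 / (2l+1).
# The coefficients below are synthetic random numbers, not real CMB data.
import numpy as np

rng = np.random.default_rng(2)
l_max = 32
C_l = np.zeros(l_max + 1)

for l in range(2, l_max + 1):            # the monopole (l=0) and dipole (l=1)
    a_lm = rng.normal(size=2 * l + 1)    # are subtracted off, as described above
    C_l[l] = np.sum(a_lm ** 2) / (2 * l + 1)

# Cosmic variance: C_l at low l is an average of only a few numbers, so its
# relative scatter is large, roughly sqrt(2 / (2l+1)).
for l in (2, 3, 10, 32):
    print(l, round(C_l[l], 3), round(np.sqrt(2 / (2 * l + 1)), 3))
```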

If you look carefully at the low-l end of the plot you will notice that the even-numbered points are low while the odd-numbered ones are high. This is the case up to l=8. In fact, above that point they start to merge a range of l values into each point on the graph, so this effect could extend further for all I know. Looking back at the WMAP plot of the same thing, it seems that they started merging the points from about l=3, so we never saw this before (though some people did, because they wrote papers about it). It was hidden, yet it is highly significant, and for the Planck data it is responsible for the 3 sigma effect. In fact, if they used an estimator that looked at the difference between odd and even points the significance might be higher.

There is another anomaly, called the cold spot, in the constellation of Eridanus. This is not on the axis of evil, but it is not terribly far off it. Planck has also verified this spot, first seen in the WMAP survey, which is 70 µK cooler than the average CMB temperature.

What does it all mean? No idea!


Abel Prize 2013 goes to Pierre Deligne, and Milner Prize to Alexandre Polyakov

March 20, 2013

PierreDeligne
The Abel Prize in mathematics for 2013 has been awarded to Pierre Deligne for his work on algebraic geometry, which has been applied to number theory and representation theory. This is research at the heart of some of the most exciting mathematics of our time, with deep implications that could extend out from pure mathematics to physics.

Deligne is from Belgium and works at IAS Princeton.

I obviously can’t beat the commentary from Tim Gowers who once again spoke at the announcement about what the achievement means, so see his blog if you are interested in what it is all about.

Update: Also today the Fundamental Physics Prize went to Polyakov, another worthy choice.

Update: Some bloggers, such as Strassler and Woit, seemed uncertain this morning about whether Polyakov got the prize. He did. They played a strange trick on the audience watching the live webcast from CERN by running a 20-minute film just before the final award. They did not have broadcast rights for the film, so they had to stop the webcast. After that the webcast resumed, but you had to refresh your browser at the right moment to get it back. The final award to Polyakov came immediately after the film, so many people would have missed it. I saw most of it and can confirm that Polyakov was the only one who finished the night with two balls (so to speak). To make matters worse, there does not seem to have been a press announcement yet, so it is not being reported in the mainstream news, but that will surely change this morning. As bloggers we are grateful to Milner for this chance to be ahead of the MSM again.

I would have done a screen grab to get a picture of Polyakov but CERN have recently changed their copyright terms so that we cannot show images from CERN without satisfying certain conditions. This contrasts sharply with US government rules which ensure that any images or video taken from US research organisations are public domain without conditions.

