Graham writes …
Isaac Newton was the first to fully appreciate that mathematics has the power to reveal deep truths about the physical reality in which we live. Combining his laws of motion and gravity, he was able to construct equations which he could solve, having also developed the basic rules of what we now call calculus (1). It was a remarkable individual achievement, one which unified a number of apparently different phenomena – for example, the fall of an apple, the motion of the moon, the dynamics of the solar system – under the umbrella of a single theory.
Coming up to the present day, this pursuit of unification continues. The objective now is to find a ‘theory of everything’ (TOE), a quest that has occupied the physics community since the development of the quantum and relativity theories in the early 20th century. Of course, the task is now somewhat more challenging than Newton’s. He had ‘simply’ gravity to contend with, whereas physicists have so far discovered four fundamental forces – gravity, electromagnetism, and the weak and strong nuclear forces – which govern the way the world works. Along the way, we have accumulated a significant understanding of the Universe, both on the macroscopic scale (astrophysics, cosmology) and on the micro scale (quantum physics).
One surprising consequence of all this is that we have discovered that the Universe is a very unlikely place. What do I mean by this? The laws of physics, and the values of the many fundamental constants that specify how those laws work, appear to be tuned so that the Universe is bio-friendly. In other words, if we change the value of just one of the constants by a small amount, something invariably goes wrong, and the resulting universe is devoid of life. The extraordinary thing is just how fine this tuning is. American physicist Lee Smolin (2) claims to have quantified the degree to which the cosmos is fine-tuned:
“One can estimate the probability that the constants in our standard theories of the elementary particles and cosmology would, were they chosen randomly, lead to a world with carbon chemistry. That probability is less than one part in 10 to the power 220.”
The reason this is so significant for me is that this characteristic of the Universe started me on a personal journey to faith some 20 years ago. Clearly, in itself, the fine-tuning argument does not prove that there is (or is not) a God, but for me it was a strong pointer to the idea that there may be a guiding intelligence behind it all. At that time, I had no religious propensities at all and the idea of a creator God was anathema to me, but even I could appreciate the significance of the argument without the help of Lee Smolin’s mysterious calculations. This was just the beginning of a journey for me, which ultimately led to a spiritual encounter. At the time, this was very unwelcome, as I had always believed that the only reality was physical. However, God had other ideas, and a belief in a spiritual realm has changed my life. That, however, is another story, which I tell in some detail in the book if you are interested (3).
The purpose of this blog post is to pose a question. When we look at the world around us, and at the Universe at macro and micro scales, we can ask: did it all happen by blind chance, or is there a guiding hand – a source of information and intelligence (“the Word”) – behind it all?
There are a number of thought-provoking and intriguing examples of fine-tuning discussed in the book (3), but here I would like to consider a couple of topics not mentioned there, both of which focus on the micro scale of quantum physics.
There is general agreement among physicists that something extraordinary occurred about 13.8 billion years ago – the event which, in the standard model of cosmology, is called the ‘Big Bang’ (BB). There is debate as to whether this was the beginning of our Universe, when space, time, matter and energy came into existence. Others have proposed different scenarios: perhaps the BB marked the end of one universe and the beginning of another (ours), or perhaps the ‘seed’ of our Universe had existed in a long-lived quiescent state until some quantum fluctuation kicked off a powerful expansion – the possibilities are endless. But one thing we do know is that the Universe then was very much smaller than it is now, unimaginably dense and ultra-hot. The evidence for this is incontrovertible, in the form of detailed observations of the cosmic microwave background (4).
If we adopt the standard model, the events at time zero are still a mystery, as we do not have a TOE to say anything about them. However, within a billionth of a billionth of a billionth of a second after the BB, repulsive gravity stretched a tiny nugget of space-time by a huge factor – perhaps 10 to the power 30. This period of inflation (5) was, however, unstable, and lasted only a similarly fleeting period of time. The energy of the field that drove the expansion was dumped into the expanding space and transformed (through mass-energy equivalence) into a soup of matter particles. We are not sure what kind of particles they were, but we do know that, at this stage of the process, they were not the ‘familiar’ ones that make up the atoms in our bodies. After a period of a few minutes, during which a cascade of rapid particle interactions took place throughout the embryonic cosmos, a population of protons, neutrons and electrons emerged. In these early minutes, the energy of electromagnetic radiation dominated the interactions and the expansion dynamics, disrupting the assembly of atomic nuclei. Thereafter, however, there was a brief window of opportunity when the Universe was cool enough for this disruption to cease, but still hot enough for nuclear reactions to take place. During this interval, a population of about 76% hydrogen and 24% helium (by mass) resulted, with a smattering of lithium (6).
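For readers who like to see where such numbers come from, here is a back-of-envelope sketch (my own illustration, not from the post) of how a roughly 75%/25% hydrogen–helium split emerges. It assumes the standard textbook value of about one neutron for every seven protons at the start of nucleosynthesis, a figure not quoted above.

```python
# Assumed textbook value: just before nucleosynthesis, neutrons were
# outnumbered by protons by roughly 1 to 7.
n_over_p = 1 / 7

# Essentially all surviving neutrons end up bound into helium-4, which
# contains equal numbers of protons and neutrons, so the helium mass
# fraction follows directly from the neutron-to-proton ratio:
Y_helium = 2 * n_over_p / (1 + n_over_p)
X_hydrogen = 1 - Y_helium  # the leftover protons remain as hydrogen

print(f"helium   ~ {Y_helium:.0%}")    # roughly 25% by mass
print(f"hydrogen ~ {X_hydrogen:.0%}")  # roughly 75% by mass
```

The slightly different 76/24 figures quoted above come from more detailed modelling; the point of the sketch is simply that the split follows from one frozen-in ratio.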
In all this, an important feature is the formation of stable protons and neutrons, without which, of course, there would be no prospect of the development of stars, galaxies and, ultimately, us. To ‘manufacture’ a proton, for example, you need two ‘up’ quarks and one ‘down’ quark (to give a positive electric charge), stably and precisely confined within a tiny volume of space by a system of eight gluons. Without dwelling on the details, the gluons are the strong-force carriers which operate between the quarks using a system of three different types of charge (arbitrarily labelled by three colours). Far from being a fundamental particle, then, the proton comprises 11 separate particles. The association of quarks and gluons is so stable (and complex) that quarks are never observed in isolation. Similarly, the neutron comprises 11 particles with similar characteristics, apart from there being one ‘up’ quark and two ‘down’ quarks, to ensure zero electric charge.
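The charge bookkeeping in that paragraph can be checked with a few lines of arithmetic (a sketch of my own; the quark charges of +2/3 and −1/3, in units of the elementary charge, are standard values not stated in the post):

```python
# Quark electric charges in units of the elementary charge e
# (standard values, assumed here rather than taken from the post).
from fractions import Fraction

CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def net_charge(quarks):
    """Total electric charge of a bound quark combination, in units of e."""
    return sum(CHARGE[q] for q in quarks)

proton = ["up", "up", "down"]     # 2 up + 1 down
neutron = ["up", "down", "down"]  # 1 up + 2 down

print(net_charge(proton))   # 2/3 + 2/3 - 1/3 = 1, the positive charge
print(net_charge(neutron))  # 2/3 - 1/3 - 1/3 = 0, electrically neutral
```

Using exact fractions avoids any floating-point ambiguity in the totals.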
So what are we to say about all this? Is it likely that all this came about by blind chance? Clearly, the processes I have described are governed by complex rules – the laws of nature – to produce the Universe that we observe. So, in some sense the laws were already ‘imprinted’ on the fabric of space-time at or near time zero. But in the extremely brief fractions of a second after the BB, where did they come from? Who or what is the law giver? Rhetorical questions all, but there are a lot of such questions in this post to highlight the notion that such complex behaviour (order) is unlikely to occur simply by ‘blind chance’.
There have been many ‘eureka moments’ in physics, when suddenly things fall into place and order is revealed. One such situation arose in the 1950s, when the increasing power of particle colliders was generating discoveries of many new particles – so many, in fact, that the physics community was running out of labels for the members of this growing population. Then, in 1961, a physicist named Murray Gell-Mann came up with a classification scheme, based upon a branch of mathematics called group theory, which made sense of the apparent chaos. His subsequent insight, proposed in 1964 (and independently by George Zweig), was that all the newly discovered entities were made of a few smaller, fundamental particles that he called ‘quarks’. As we have seen above, protons and neutrons comprise three quarks, while mesons, for example, are made up of two (a quark and an antiquark).
Over time this has evolved into something we now refer to as the standard model of particle physics, which consists of 6 quarks, 6 leptons, 4 force carrier particles and the Higgs boson, as can be seen in the diagram (the model also includes each particle’s corresponding antiparticle). These particles are considered to be fundamental – that is, they are indivisible. We have talked about this in a couple of previous blog posts – in October 2021 and March 2022 – and it might be worth having a look back at these to refresh your memory. Also, as we have seen before, we know that the standard model is not perfect or complete. Gravity’s force carrier, the graviton, is missing, as are any potential particles associated with the mysterious dark universe – dark matter and dark energy. Furthermore, there is an indication of missing constituents in the model, highlighted by the recent anomalous experimental results describing the muon’s magnetic moment (see March 2022 post).
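As a quick tally of the particle count just described (my own sketch; the individual particle names are the standard ones, though the post itself does not list them):

```python
# The fundamental particles of the standard model, grouped as in the
# text: 6 quarks, 6 leptons, 4 force carriers and the Higgs boson.
# (Antiparticles, and the missing graviton, are not counted here.)
standard_model = {
    "quarks":  ["up", "down", "charm", "strange", "top", "bottom"],
    "leptons": ["electron", "muon", "tau",
                "electron neutrino", "muon neutrino", "tau neutrino"],
    "force carriers": ["photon", "gluon", "W boson", "Z boson"],
    "scalar": ["Higgs boson"],
}

for family, members in standard_model.items():
    print(f"{family}: {len(members)}")

total = sum(len(members) for members in standard_model.values())
print(f"total: {total}")  # 6 + 6 + 4 + 1 = 17
```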
I have recently been reading a book (7) by physicist and author Paul Davies and, although his writings are purely secular, I think my desire to write on today’s topic has been inspired by Davies’s thoughts on the many philosophical conundrums found there. The fact that the two examples I have discussed above have an underlying mathematical structure that we can comprehend is striking. Furthermore, there are many profound examples of this structure leading to successful predictions of physical phenomena, such as antimatter (confirmed in 1932), the Higgs boson (2012) and gravitational waves (2015). Davies expresses the view that if we can extract sense from nature, then it must be that nature itself is imbued with ‘sense’ in some way. There is no obvious reason why nature should display this mathematical structure, and certainly no reason why we should be able to comprehend it.
Would processes that arose by ‘blind chance’ be underpinned by a mathematical, predictive structure? – I’m afraid, another question left hanging in the air!
I leave the last sentiments to Paul Davies, as I cannot express them in any better way.
“How has this come about? How have human beings become privy to nature’s subtle and elegant scheme? Somehow the Universe has engineered, not just its own awareness, but its own comprehension. Mindless, blundering atoms have conspired to spawn beings who are able not merely to watch the show, but to unravel the plot, to engage with the totality of the cosmos and the silent mathematical tune to which it dances.”
(1) Graham Swinerd and John Bryant, From the Big Bang to Biology: where is God?, Kindle Direct Publishing, 2020, Chapter 2.
(2) Lee Smolin, Three Roads to Quantum Gravity, Basic Books, 2001, p. 201.
(3) Graham Swinerd and John Bryant, From the Big Bang to Biology: where is God?, Kindle Direct Publishing, 2020, Chapter 4.
(4) Ibid., pp. 60-62.
(5) Ibid., pp. 67-71.
(6) Ibid., primordial nucleosynthesis, p.62.
(7) Paul Davies, What’s Eating the Universe? and other cosmic questions, Penguin Books, 2021, p. 158.
DNA as a ‘Bond Villain’.
John writes …
As we have written in the book, one of the essential requirements for life is an information-carrying molecule that can be replicated/self-replicated. That molecule is DNA, the properties and structure of which fit it absolutely beautifully for its job. That DNA is the ‘genetic material’ seems to be embedded in our ‘folk knowledge’ but it may surprise some to know that the unequivocal demonstration of DNA’s role was only achieved in the mid-1940s. This was followed nine years later by the elucidation of its structure (the double helix), leading to an explosion of research on the way that DNA actually works, how it is regulated and how it is organised in the cell.
It is this latter feature on which I want to comment briefly. In all eukaryotic organisms (organisms with complex cells, which effectively means everything except bacteria), DNA is arranged as linear threads organised in structures called chromosomes, which are located in the cell’s nucleus. Further, as was shown when I was a student, the sub-cellular compartments known as mitochondria (which carry out ‘energy metabolism’) and, in plant cells, chloroplasts (which carry out photosynthesis) contain their own DNA (as befits their evolutionary origin – see the book (1)). However, there has been debate for many years as to whether there are any other types of DNA in animal or plant cells. Indeed, I can remember discussing, many years ago, a research project which it was hoped would answer this question. Obviously, if a cell is invaded by a virus then there will be, at least temporarily, some viral genetic material in the cell, but as for the question of other forms of DNA, there has been no clear answer.
Which brings me to a recent article in The Guardian newspaper (2). Scientists have known for some time that some cancers are caused by specific genes called oncogenes (3). It has now been shown that some oncogenes can exist, at least temporarily, independently of the DNA strands that make up our chromosomes. In this form they act like 'Bond villains' (4), according to some of the scientists who work on this subject, leading to the formation of cancers that are resistant to anti-cancer drugs. In order to prevent such cancers, we now need to find out how these oncogenes, or copies thereof, can 'escape' from chromosomes to exist as extra-chromosomal DNA. I would also suggest that they might be deactivated by targeted gene editing.
(1) Graham Swinerd and John Bryant, From the Big Bang to Biology: where is God?, Kindle Direct Publishing, 2020, Chapter 5
(2) ‘Bond villain’ DNA could transform cancer treatment, scientists say. Cancer Research, The Guardian newspaper.
(3) Oncogenes are mutated versions of genes that normally have a regulatory role in cell proliferation. When mutated they promote unregulated cell proliferation. They occur in all vertebrate animals and homologous sequences have been found in fruit flies.
(4) The villains in James Bond films are often both sinister and subtle.
John writes …
‘Save the planet!’ is one of the slogans in campaigns that encourage governments to aim, as soon as possible, for ‘net zero’ carbon dioxide emissions. However, planet Earth itself does not need to be saved. As we discuss in Chapter 8 of the book, it has been through many climatic fluctuations in its 4.6 billion-year history, and it is a testament to its survival of those ups and downs that we are here to talk about it. Leaving aside the extreme temperature fluctuations of the very young Earth and focussing ‘only’ on the 3.8 billion years during which there has been life on the planet, it is clear that Earth has had periods of being significantly warmer than now. Thus, it has been calculated that in the Cretaceous the average global temperature was at least 12°C higher than at present, and there is evidence that trees grew on Antarctica and on land within the Arctic Circle. By contrast, Earth’s overall climate is at present cooler than average and has been so for about half a million years. The name for this is an ice age. The climate is characterised by an alternation of very cold glacial periods (often called ‘ice ages’ in colloquial usage), of which there have so far been five (the most recent ‘peaked’ about 22,000 years ago), and less cold interglacial periods, like the one we are in now.
Each of the varying climatic eras in Earth’s history has been characterised by its own particular ‘balance of nature’ in which the atmosphere, geosphere, hydrosphere and biosphere are in a dynamic equilibrium, as described in James Lovelock’s Gaia hypothesis. If physical conditions on Earth change dramatically, then, in the words of Lovelock ‘Gaia is re-set’ or in other words, a new dynamic equilibrium is established.
However, that equilibrium may be ‘challenged’ by phenomena such as volcanic eruptions which push a lot of carbon dioxide into the atmosphere. I was therefore very interested to read a recent article in New Scientist (2) which reported on research at Pennsylvania State University showing that the resulting increase in temperature would be countered by an increase in the rate of weathering and erosion of particular types of rock. This would lead, via a series of simple chemical reactions, to the carbon dioxide being trapped as ‘carbonate minerals’. The scenery at Guilin in southern China is made of types of rock whose weathering is thought to have contributed to this process. The author spoke of the system as a sort of coarse thermostat but added that it works too slowly to ameliorate the current rate of increase in the concentration of atmospheric carbon dioxide.
‘Modern’ humans, Homo sapiens, have only been around for a very brief segment of the Earth’s long history. Our species arose in Africa about 250,000 years ago, when the third of the recent series of glaciations that I mentioned earlier was at or near its peak. From those origins in Africa, humans spread out to inhabit nearly the whole of the planet; human culture, society and then technology, grew and flourished. Humans are clearly the dominant species on Earth. Nevertheless, we can identify times in our history when climate would have made habitation of some areas impossible, namely the glacial interludes referred to above. Although modern technology makes it possible to live, after a fashion, on land covered in several metres of ice, that was not true 30,000 years ago. Thus, we can envisage climate-driven migration of both humans and Neanderthals, moving towards the equator as the last glacial interlude held the Earth in its icy grip.
Returning to the present day, even the relatively modest (in geo-historical terms) increase in temperature that is currently being caused by burning fossil fuels is very likely to drive human migration. Some of this will be a result of an indirect effect of rising temperatures, namely the rising sea level (see next paragraph) but some will occur because there are regions of the world where, if temperature rise is not halted, the climate itself and the direct effects thereof will make human habitation more or less impossible. My colleague at Exeter University, Professor Tim Lenton, has been part of a team doing the detailed modelling, one of the results of which is this map which predicts the extent of uninhabitable zones in the year 2070 if we adopt a ‘business as usual’ approach to climate (3). I will leave our readers to think about the implications of this.
In the previous paragraph I briefly mentioned rising sea levels. The increase so far has been, on average, about 17 cm, which at first sight seems quite small. However, it is large enough to have increased the frequency of flooding in low-lying coastal regions. Thus, much of Bangladesh (4), for example (including half of the Sundarbans (5), the world’s largest mangrove forest), is highly vulnerable, as are many Pacific islands. Some island communities are already planning to leave and to rebuild their communities, including all their traditions and folklore, in less flood-prone locations. The islanders will be part of the 600 million sea-level refugees projected to be on the move by the end of this century, in addition to those moving away from the uninhabitable zones mentioned above.
The rise in sea level is, and will be, caused by the melting of Arctic ice, including the Greenland ice-sheet, and especially by the melting of Antarctic ice. Thinking about this reminded me of an article in The Guardian newspaper in 2020 (6). To illustrate the point, I would like you to take an ice-cube out of your freezer (probably at -18°C) and place it at ‘room’ temperature (probably somewhere between 15 and 18°C). It does not melt instantly – indeed, depending on its size and the actual room temperature, it may last a couple of hours. And so it is with environmental ice. In terms of the rise in sea level attributable to global warming, ‘we ain’t seen nothin’ yet’. The author of the article, Fiona Harvey, states, based on a paper in the scientific journal Nature, that even if we manage to keep the overall rise in temperature to 2°C, Antarctic ice will go on melting well into the next century and will raise sea levels not by a few centimetres but by about 2.5 metres, a scenario that hardly bears thinking about. Dealing with climate change and its knock-on effects is thus a matter of extreme urgency.
(1) Steven Earle, Physical Geology (2nd Edition), 2019. Physical Geology - Open Textbook (opentextbc.ca)
(2) Rock weathering ‘thermostat’ is too slow to prevent climate change | New Scientist
(3) Future of the human climate niche | PNAS
(4) Intense Flooding in Bangladesh | nasa.gov and Introduction to Bioethics (Bryant and la Velle, 2018) p 298.
(5) The Sundarbans straddles the India/Bangladesh border.
(6) Melting Antarctic ice will raise sea level by 2.5 metres – even if Paris climate goals are met, study finds | Climate crisis | The Guardian
Artemis 1: Picture Gallery.
Graham writes …
After a number of frustrating delays, the uncrewed, moon-orbiting test mission designated Artemis 1 finally got off the launchpad on 16 November 2022. You may recall that one of these delays was not insignificant: the arrival of Hurricane Ian forced a return of the launch vehicle to the shelter of the Vehicle Assembly Building. The mission lasted about three and a half weeks, finally splashing down in the Pacific Ocean on 11 December 2022. The objectives of the mission were achieved, the principal one being to test the Orion spacecraft systems prior to the future launch of crewed missions. The mission profile is rather more elaborate than that of the Apollo missions, as illustrated by the accompanying graphic.
This blog post is essentially a picture gallery of aspects of the mission. In retrospect, I realised that the beautiful blue orb of the Earth features significantly in my choice of images. Also, many of the images are ‘selfies’, showing elements of the spacecraft, captured by imagers attached to the spacecraft’s solar panels. All images are courtesy of NASA. I hope you enjoy the beauty and grandeur of God's creation in what follows ...!
4. Orion’s camera looks back on ‘the good Earth’ as the vehicle makes its way to the moon. The picture is captured by a ‘selfie stick’ installed on one of the solar panels. The image also shows the European service module’s propulsion system, featuring the main orbit transfer engine, the smaller trajectory adjustment thrusters, and, smaller still, the attitude control thrusters. One of the solar panels is prominent.
So, after the successful flight of Artemis 1, what does the future hold? In contrast to the ‘manic’ flight schedule of the Apollo programme leading up to the first landing in July 1969, the Artemis schedule is frustratingly more relaxed! The next event is the launch of Artemis 2, which is planned for May 2024. This will be the first crewed mission of the Orion system, with a planned duration of 10 days. Note that we no longer refer to ‘manned’ missions, as the upcoming flights will include women astronauts! This second flight will take people out to a flyby of the moon, giving the system a thorough test with a crew on board.
Then, planned for 2025, the Artemis 3 mission will land astronauts on the moon for the first time since Apollo 17 in 1972. To supply the flight infrastructure to transfer astronauts from lunar orbit to the moon’s surface, and back, NASA have contracted the private company SpaceX. In recent times, this enterprise has proved itself in supplying a reliable transfer system, taking astronauts to and from the Earth-orbiting International Space Station. SpaceX proposes using a lunar landing variant of its Starship spacecraft, called the Starship HLS (Human Landing System). See the image above, showing an artist’s impression of the SpaceX HLS on the lunar surface – a monstrous vehicle in comparison to the Apollo-era Lunar Excursion Module. There seem to be a lot of questions as to why NASA has chosen this route, but that’s a story for another time!
DNA and the doctors.
John writes …
The past few months have seen announcements of several new medical treatments based on manipulating DNA. I want to highlight just a couple of these which illustrate how techniques developed in research labs are helping doctors to use knowledge about genes and genetics to treat conditions which had previously been incurable.
The first concerns haemophilia, a condition in which blood fails to clot. It is caused by the absence of a protein involved in the clotting process, which is in turn caused by a mutation in the gene encoding that protein. In 85% of haemophilia cases, it is Factor VIII that is missing. The remaining 15% of cases involve a different clotting factor, Factor IX, and are again caused by a mutation in the relevant gene. It is a Factor IX mutation that is famous for its presence in Queen Victoria’s family, as described beautifully in Queen Victoria’s Gene by D.M. and W.T.W. Potts.
Both conditions are suitable targets for somatic cell gene therapy, in which the cells that make the protein – both these factors are made in the liver – are supplied with a functional copy of the gene. But, as my students have often heard me say, despite the basic simplicity of that process in principle, achieving it in practice is much more difficult. Indeed, despite early optimism, treating Factor VIII deficiency via gene therapy has not yet been achieved. However, there has been success in treating the rarer condition, Factor IX deficiency, as reported by the BBC back in the summer: Transformational therapy cures haemophilia B – BBC News.
The process is relatively simple. The gene encoding Factor IX is inserted into an engineered, harmless adeno-associated virus, which can make itself at home in the liver and thus deliver the desired gene to the liver cells. For the patient, the treatment involves a one-hour infusion of the engineered virus into the liver. One person who has undergone it, Elliott Mason (pictured below), said that it was astonishing to see that his Factor IX levels had gone from 1% of normal to being in the normal range. He added: "I've not had any treatment since I had my therapy, it's all a miracle really, well it's science, but it feels quite miraculous to me." The team of scientists and doctors involved in developing this treatment believes that the patients who received it will not need another gene infusion for at least eight years.
My second example is the successful treatment of T-cell acute lymphoblastic leukaemia which had resisted all other treatments. Earlier this month, doctors at the world-famous Great Ormond Street Hospital in London (www.gosh.nhs.uk) announced a ‘first’ in that they had used DNA base-editing to cure a 13-year-old girl, Alyssa, of her leukaemia: see Base editing: Revolutionary therapy clears girl’s incurable cancer – BBC News. I’ll explain what they did later in this post, but for the moment I want to go back seven years. DNA base-editing is a very precise and sophisticated form of genome editing. Genome editing was used in 2015, also at Great Ormond Street, to treat a baby, Layla Richards, with an equally resistant acute lymphoblastic leukaemia. The technique was completely new; it had never been used on a human patient, but the local medical ethics committee readily gave permission for its use because without it, the little girl was certain to die. As I have previously described (1), donated T-cells (T-cells are the immune system’s hunters) were subjected to very specific and targeted genetic modification and genome editing to enable them to hunt down and eradicate the cancer cells. The modified T-cells were infused into Layla and within two months she was completely cancer-free. Building on this success, the team used the same technique a few months later to treat another very young leukaemia patient (2).
Returning to the present day, the team treating Alyssa again used donated T-cells. These were then modified by DNA base-editing as shown in the diagram below.
As with the earlier treatments, a genetic modification was also required to enable the edited T-cells to bind to and destroy the cancerous T-cells. Alyssa is part of a trial that also includes nine other patients, but she is the first for whom results of the treatment are available. She says: "You just learn to appreciate every little thing. I'm just so grateful that I'm here now. It's crazy. It's just amazing [that] I've been able to have this opportunity, I'm very thankful for it and it's going to help other children, as well, in the future."
One of the inventors of DNA base-editing, Dr David Liu, was delighted that the technique had been used in this life-saving way: “It is a bit surreal that people were being treated just six years after the technology was invented. Therapeutic applications of base-editing are just beginning and it is humbling to be part of this era of therapeutic human gene-editing."
(1) Introduction to Bioethics, 2nd edition, John Bryant and Linda la Velle, Wiley, 2018. p139.
(2) Two baby girls with leukaemia ‘cured’ using gene-editing therapy – Genetic Literacy Project.
There is a time for everything and a season for every activity under the heavens … (Ecclesiastes 3, v1)
He has made everything beautiful in its time. He has also set eternity in the human heart …
(Ecclesiastes 3, v11)
John writes …
I quoted these verses at the front of my PhD thesis many years ago. They sandwich a passage in which ‘[there is] a time to …’, the words used in the folk song ‘Turn, turn, turn’. Many people have some familiarity with at least part of the passage without knowing that it comes from the Bible. A recording of ‘Turn, turn, turn’ by Pete Seeger, who set the words to music, was recently played on BBC radio. It was obvious that the programme’s presenter was one of those who do not know that the words are Biblical – he ascribed them to the anti-war movement of the 1960s.
In recent days I have gone back to these verses and again thought about them in relation to the creation. The existence of seasons results from the way our planet is set up but it doesn’t have to be that way. Planets without seasons are perfectly good planets. So, as I enjoy the lovely autumn colours and indeed understand the underlying biology, I thank God that He has made everything beautiful in its time.
But that also challenges me. Autumns are getting warmer; the biological changes, except those driven entirely by day-length, are occurring later. Ecosystems are changing because the climate is changing; this is a matter for prayer and urgent action.
All pictures are credited to the author.
The Nobel Prize for Physics 2022.
Graham writes …
In October’s post, John discussed the winner of the Nobel Prize in Physiology or Medicine, and this month I’d like to say something about the work of the winners of the Physics Prize, and the extraordinary things it says about the nature of reality. There were three winners – Alain Aspect, John Clauser and Anton Zeilinger – and broadly speaking they earned the prize for their work on the topic of quantum entanglement. So what is quantum entanglement, and why is it important? I’d like to say something about this concept, hopefully in language that is accessible to non-physicists.
As you may know, prior to the 1900s, the laws devised by Isaac Newton reigned supreme. It is also fair to say that for many engineering and science applications today, Newton’s theory still works. This classical theory is elegant, compact and powerful, and is still part of the education of a young science student today. One of the main aspects of Newton’s physics is what it says about the nature of reality. Put simply, if you tell me how the world is now, then the theory will tell you precisely how the world will be tomorrow (or indeed yesterday). In other words, if the positions and velocities of all the particles in the Universe were known at a particular time, then Newton’s laws would, in principle, determine the state of all the particles at any other time. This total determinism is a defining facet of Newtonian physics.
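That determinism can be made concrete with a toy calculation (my own sketch, not part of the post): for a particle falling under constant gravity, knowing the position and velocity now fixes the state at every other time, and running the clock backwards recovers the starting state.

```python
g = -9.81  # constant gravitational acceleration (m/s^2)

def state_at(t, x, v):
    """Exact Newtonian state (position, velocity) a time t away from
    the state (x, v), under constant acceleration g. Negative t looks
    into the past."""
    return (x + v * t + 0.5 * g * t**2, v + g * t)

# Evolve 2 seconds into the future from a known initial state ...
x1, v1 = state_at(2.0, x=0.0, v=20.0)

# ... then run time backwards: the initial state (0.0, 20.0) is
# recovered, up to floating-point rounding, with no probabilities
# anywhere in the calculation.
x0_back, v0_back = state_at(-2.0, x1, v1)
```

The same state-in, state-out logic works for any interval, forwards or backwards, which is exactly the total determinism described above.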
However, in the early years of the 20th Century, the comforting edifice of classical physics collapsed under the onslaught of a new theory. Physicists investigating the world of the very small – the realm of molecules, atoms and elementary particles – found that Newton’s laws no longer worked. A huge developmental effort on the part of scientists, such as Einstein, Planck, Bohr, Heisenberg, Schrödinger and others, ultimately led to an understanding of the micro-world through the elaboration of a new theory called quantum mechanics (QM). However, it was soon realised that the total determinism of classical physics was lost. The nature of reality had changed dramatically. In the new theoretical regime, if you tell me how the world is now, QM will tell you the probability that the world is in this state or in that.
Einstein was one of the principal founders of the theory of QM, but it is well known that over time he came to reject it as a complete description of the Universe. Much has been made of Einstein’s resistance to QM, summed up by his memorable quote that “God does not play dice with the world”. However, Einstein could not deny that QM probabilities provided a spectacularly accurate prediction of what was going on in the microworld. Instead, he believed that QM was a provisional theory that would ultimately be replaced by a deeper understanding, and the new theory would eliminate the probabilistic attributes. He could not come to terms with the idea that probabilities defined the Universe, and felt there must be an underlying reality that QM did not describe. He believed that this deeper understanding would emerge from a new theory involving what has become known as ‘hidden variables’. On a personal note, I have to say I have great sympathy with Einstein’s view. As an undergraduate, with a very immature appreciation of QM, I too could never get to grips with it from the point of view of its interpretation of how the Universe works. This is one of the reasons why I studied general relativity – Einstein’s gravity theory – at doctorate level, which is inherently a classical theory.
Getting back to the discussion, Einstein strove to find his ultimate theory until the end of his life. Along the way, he was always attempting to find contradictions and weaknesses in QM. If he believed that he had found something, he would throw out a challenge to his circle of eminent ‘QM believers’. This stimulating discourse continued for many years. Then in 1935, with the publication of a paper with co-authors Podolsky and Rosen, Einstein believed he had found the ultimate weakness in QM in a property referred to as quantum entanglement (QE). This publication became known as the ‘EPR paper’. In broad terms, QE can be summarised as follows: if two objects interact and then separate, a subsequent measurement of one of them, revealing some attribute, would have an instantaneous influence on the other object, regardless of their distance apart.
You might ask, why does this have such a profound impact on our understanding of reality? To grasp this, we need to discuss in a little more detail what this means when we consider quantum objects, like subatomic particles, and their quantum qualities, such as quantum spin. We have discussed the enigma of quantum spin before (see the March 2022 blog post). If we measure the quantum spin of a particle about a particular axis, then the result always reveals that it is spinning either anti-clockwise or clockwise (as seen from above), with the same magnitude. The former state is referred to as ‘spin up’ and the latter as ‘spin down’. There are just two outcomes, and this is a consequence of the quantised nature of the particle’s angular momentum (or rotation). As I have said before – nobody said quantum spin was an intuitive concept! It is possible to produce two particles in an interaction in the laboratory such that they zoom off in opposite directions, one in a spin up state and the other in a spin down state (for example). In the process of their interaction the two particles have become entangled, and we can measure their spin in detectors placed at each end of the laboratory.
Another part of this story is understanding the nature of measurement in QM. In the example we have chosen above, the conventional interpretation of QM says that the particle’s spin state is only revealed when a measurement takes place. Prior to this moment the particle is regarded as being in a state in which it is neither spin up nor spin down, but in a fuzzy state of being both. The probability of one or other state is defined by something called the wave function, and a collapse of the wave function occurs the moment a measurement is made, to reveal the actual spin state of the particle. This process is something of a mystery, and is still not fully understood. However, that is another story. For interested readers, please Google ‘the collapse of the wave function’ for more detail.
So, in our discussion, we have two ways of interpreting our experiment: that of QM, which says that the spin state of the particle is only revealed when a measurement is made, and that of Einstein, who believed in an underlying reality in which the spin state has a definite value throughout. If you think about the two entangled particles created in the lab, discussed above, then QE only presents us with an issue if QM is correct and Einstein is wrong. In this case, the measurement of the spin of one particle reveals its value (up or down), and an instantaneous causal influence will reveal the state of the other (the opposite value), even if the two particles are light years apart.
Einstein called this “spooky action at a distance”, and it troubled him deeply, particularly as both his theories of relativity forbid instantaneous propagation of any physical influence. QM could not, in his view, give a full final picture of reality. For years, nobody paid much attention to the EPR paper, mostly because QM worked. The theory was successful in explaining physics experiments and in technology developments. Since no one could think of a way of testing Einstein’s speculation that one day QM would be replaced by a new theory that eliminated probability, the EPR paper was regarded merely as an interesting philosophical diversion.
Einstein died in 1955, and the debate about QE seemed to die with him. However, in 1964 a Northern Irish physicist called John Stewart Bell proved mathematically that there was a way to test Einstein’s view that particles always have definite features, and that there is no spooky connection. Bell’s simple and remarkable insight was that doable experiments could be devised that would determine which of the two views is correct. Put another way, Bell’s theorem asserts that if certain predictions of QM are correct then our world is non-local. Physicists use ‘non-locality’ to mean that there exist interactions between events that are too far apart in space and too close together in time for the events to be connected even by signals moving at the speed of light. Bell’s theorem has been in recent decades the subject of extensive analysis, discussion, and development by both physicists and philosophers of science. The relevant predictions of QM were first convincingly confirmed by the experiment of Alain Aspect (one of our Nobel Prize winners) et al. in 1982, and they have been even more convincingly reconfirmed many times since. These experiments thus establish that our world is non-local. I emphasise once again that this conclusion is very surprising, given that it violates the theories of relativity, as mentioned above.
In summary then, this year’s Nobel Prize for Physics has been awarded to Alain Aspect, John Clauser and Anton Zeilinger, whose collective work has used Bell’s theorem to establish, to most people’s satisfaction: (1) that Einstein’s conventional view of reality is ruled out; (2) that quantum entanglement is real; and (3) that quantum mechanics and quantum entanglement can be used to develop new technologies (such as quantum computing and quantum teleportation).
Usually, the Nobel Physics Prize is awarded to scientists whose work makes sense of Nature. This year’s laureates reveal that the Universe is even stranger than we thought, and in addition they achieved the rarest of things – they proved Einstein wrong!
John writes …
I am sure that many of our readers, on seeing the title of this post, will think of the use of DNA ‘fingerprinting’/ DNA profiling in police detection work. The discovery, by Alec Jeffreys and his team at Leicester University, that these profiles were unique to an individual enabled an identification of miscreants at a level of statistical certainty that had not previously been possible. Many of us remember the first conviction secured on the basis of DNA, that of Colin Pitchfork for the rape and murder of two teenage girls in Leicestershire (1). However, this was not the first time that the technique had been employed in the public arena. DNA fingerprinting had been used to establish that a young man from Nigeria was indeed the son of someone already living in the UK and could therefore stay here; the Home Office had disputed his claim and wanted to deport him. Both these examples show how the findings of science can be used in a way that promotes societal good.
One of the features of DNA profiling that often causes surprise is the small amount of biological material needed in order to obtain the profile. Obviously, if a larger amount is available (such as from a blood sample), all well and good, but if push comes to shove, the DNA from one cell is enough material to work with. This is nicely illustrated by a technique called Pre-implantation Genetic Diagnosis (PGD). As I have described elsewhere, prospective parents who are at risk of having a child with a genetic disorder may elect to undergo in vitro fertilisation (IVF) in order that the embryos may be tested for the presence or absence of the genetic mutation. In order to do this, just one cell is removed from the embryo at the eight-cell stage. This provides enough material for the genetic test.
One of the things we hear from time to time in relation to forensic use of DNA profiling is that a case has been re-opened because of ‘new DNA evidence’. Quite often this arises because forensic scientists are becoming better and better at extracting and purifying DNA from what appear to be unpromising biological samples. But these skills also have a role in other types of investigation, of which I will give three examples. The first concerns resistance to the plague-causing bacterium, Yersinia pestis. An international group of scientists have extracted DNA from the teeth of 206 human skeletons that were buried before, during and after the 14th century plague pandemic known as The Black Death (2). The level of detailed analysis that they were able to achieve with this material is truly remarkable. They were able to show that people with a mutation in a gene that regulates part of the immune system – a mutation that makes that part of the immune system more active – were 40% more likely to survive the plague than those without the mutation. Further, that mutation is still present in the population of modern Europe and people who possess it are more likely to suffer from an over-active immune system, leading to a variety of auto-immune diseases.
We think quite rightly that being able to analyse in detail the DNA from teeth that have been buried for 700 years is remarkable. However, the next two examples are even more amazing. A research team based at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany has extracted DNA from teeth and fragments of bone belonging to a group of eleven Neanderthals living in a cave in Siberia 51,000 years ago. The team analysed DNA from the Y-chromosome, which is specific to males, and from mitochondrial genes, which are passed down only via females. The analysis shows clearly the family and social structure of the little group as well as some insights into the life of this hunter-gatherer community. We do not have space to comment further on this but it is well worth reading a fuller commentary on this work, for example in New Scientist (3).
This brings us to our third example. One of the scientists who investigated the Siberian group of Neanderthals was Professor Svante Pääbo. Earlier this year he was awarded the Nobel Prize in Physiology or Medicine (4). He is an interesting choice because, unlike many Nobel Prize winners, he is not very well known in the wider science scene, despite his huge contributions to our understanding of the evolution of early humans and other hominins. This has involved refinement and improvement of methods for the extraction of ancient DNA, enabling Pääbo and his team to analyse and compare DNA from bones and teeth of Neanderthals, Denisovans (see Chapter 6 of the book) and early humans, and DNA from a range of modern humans. This analysis included a full sequence of the nuclear genome of Neanderthals, which was a truly remarkable achievement, almost worthy of a Nobel Prize on its own! His work showed that Neanderthals and Denisovans were ‘sister-groups’ existing for a time in parallel with humans (as we show on p. 148 of the book). He has also been able to measure gene-flow between these species resulting from limited inter-breeding and the extent to which present-day humans carry Neanderthal genes (and for Melanesian humans, Denisovan genes). Svante Pääbo is certainly an amazing DNA detective and a worthy winner of the Nobel Prize.
(1) I recently had the privilege of meeting one of the detectives (now retired) who had worked on the case.
(2) Evolution of immune genes is associated with the Black Death | Nature and Black Death 700 years ago affects your health now | BBC News.
(3) Neanderthal family life revealed by ancient DNA from Siberian cave | New Scientist.
(4) The Nobel Prize in Physiology or Medicine 2022 – Advanced information.
Graham writes ...
In case you were wondering what happened to the Artemis 1 mission …? Hurricane Ian put paid to the launch attempts and the SLS had to ‘run’ for cover back to the Vehicle Assembly Building. The date of the next launch attempt is uncertain at the time of writing, but it is hoped that it may be in November 2022.
Graham writes ...
The DART spacecraft successfully impacted the asteroid Dimorphos in the early hours of this morning (UK time: 27 Sept), and I thought you might want to see what happened!
Please click here to see a video courtesy of BBC News showing the moments just before impact. We will learn in the coming days whether the experiment was successful in changing Dimorphos's orbital speed, and consequently its orbit around Didymos.
John Bryant and Graham Swinerd comment on biology, physics and faith.