|
Graham writes ... I read with great regard last month’s blog post from John, in which he discussed a topic of interest to himself, but of great importance for the rest of us – photosynthesis. John’s career focussed mainly on the study of the genetics of plants and, as we all know, the ability of plants to utilise an external energy source (the Sun) to fix carbon for their own development changed the planet around 2.5 billion years ago when oxygen-producing photosynthesis began. Clearly, the oxygenation event changed the future development and evolution of all life (including us) on planet Earth. This motivated me to go back to my origins in terms of long-term science interests to produce something this month. Throughout my career in the space sector, I have always had a fascination with gravity. Effectively, there are two theories in current use to describe gravitation – Newton’s theory, which was published in 1687, and Einstein’s (the general theory of relativity (GTR)), which hit the physics community in 1915. The two theories are fundamentally different. Newton’s theory regards gravity as a ‘force’ and has a mathematical structure which is relatively simple in comparison to its more recent counterpart. Strangely, Einstein’s theory does not regard gravity as a force, but instead proposes that gravity is a manifestation of curved space and time, which makes the mathematical framework of the theory extremely complex. Both theories have lasted well – Newton’s theory reigned for around two centuries before observations caught up with it and revealed anomalies when the theory’s predictions were compared to the real world. Einstein’s theory, too, has yet to be found wanting in this respect after about 110 years, which is amazing since the recent tests of his theory (in the strong field regime) have been more demanding.
However, before we get into all that, it’s worth giving a brief account of Einstein’s first major contribution to the world, his special theory of relativity (STR), which was published in 1905. That year marked the end of classical physics, when Einstein’s new insights into the nature of reality swept away the Newtonian view of the Universe. We need to keep in mind that at the time Einstein was unknown to the physics community and was working as a lowly patent clerk in the Federal Office for Intellectual Property in Bern, Switzerland. Initially, his contribution was overlooked, but there were some eminent physicists, notably Max Planck, who appreciated that a ‘new Newton’ had burst upon the scene. Einstein’s efforts in developing the STR were sparked by various issues arising in classical physics around the turn of the 20th Century (see, for example, the Michelson-Morley experiment). Perhaps the first question is – why is the special theory special? This is simply because it describes a special case, in as much as it does not account for gravity or accelerated motion. Consequently, it concerns itself with observers in ‘inertial frames of reference’; that is, observers travelling at constant speed in a straight line. This sounds like quite a constraint, but then you’ve got to start somewhere. The second point to note is that our understanding of the nature of reality was transformed. Newton’s concept of space and time was represented by a rigid and unchanging 3-dimensional spatial grid against which the motion of objects was measured, while in the background a clock ticked away marking the universal passage of time. Einstein’s theory swept this away, and introduced the concept of a 4-dimensional entity called ‘spacetime’ to manage the notion that space and time are flexible (varying, depending upon the observer) and inextricably connected to each other – in other words, space and time are not absolute as Newton had supposed.
To develop his special theory Einstein proposed two principles or starting points. Firstly, that the laws of physics are the same for all inertial frames of reference – that is, there is no special ‘absolute rest’ frame of reference in the Universe. Secondly, that all observers measure the same value of the speed of light c in a vacuum (c ~ 299,792,458 m/s) no matter how fast they are moving. The second of these, which has been experimentally verified, is the key attribute that forces space and time to behave differently to what our everyday intuition might expect. The outcome of adopting these axioms was a paradigm-changing theory with a number of consequences. The most significant of these are time dilation (moving clocks run slow relative to a ‘stationary’ observer), length contraction (moving objects are shortened along their direction of travel), the relativity of simultaneity (observers in relative motion can disagree about whether two events happen at the same time), and the celebrated equivalence of mass and energy, E = mc².
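As a quick numerical illustration of time dilation, the best known of these consequences: the factor by which a moving clock runs slow is γ = 1/√(1 − v²/c²). A short Python sketch (the choice of 90% of the speed of light is purely illustrative):

```python
import math

c = 299_792_458.0  # speed of light in a vacuum, m/s

def lorentz_factor(v):
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A clock moving past us at 90% of the speed of light:
gamma = lorentz_factor(0.9 * c)
# One second on the moving clock corresponds to about 2.29 seconds
# for the 'stationary' observer.
print(f"gamma at 0.9c: {gamma:.3f}")
```

At everyday speeds γ is indistinguishable from 1, which is why Newtonian physics works so well in daily life.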
Getting back to gravity, and to help appreciate what follows, it’s worth describing briefly what is meant by the idea that gravity is produced by the curvature of space and time. If you have a copy of the book (2), then you can skip this section and read a fuller, and hopefully more helpful, explanation on pages 52 to 56. Einstein’s general theory of relativity was developed during the period 1907 to 1915, when he wrestled with the physics and, in particular, the mathematics required to create his theory. He considered his own mathematical skills to be poor (!), and given the complexity of the mathematics required to describe his theory, he was grateful for the help of others (including Marcel Grossmann, a close friend and one-time classmate, and David Hilbert, a renowned mathematician who finalised the field equations for general relativity around the same time as Einstein). Fortunately, despite this complexity, the basics of his theory can be explained in relatively straightforward terms. The foundation of his theory is the principle that massive objects, like the Sun, distort the geometry of the spacetime surrounding them. This is the celebrated ‘warped space’, which has become so ‘familiar’ to us all from science fiction books, TV and cinema (“warp-factor 5 Mr. Sulu”!). However, although we have heard a lot about it in sci-fi stories, an intuitive appreciation of what a ‘curved four-dimensional spacetime continuum’ means is still difficult to achieve, even for those equipped to cope with the mathematics! Einstein’s basic idea of motion in a gravity field is that objects move in such a way as to take the path which gives the shortest distance between two points in the curved geometry. These paths are referred to as geodesics, and examples in simpler contexts are straight lines in flat (Euclidean) space and great circles on the curved 2-dimensional surface of a sphere.
The accompanying pictures illustrate what geodesics look like in the setting of the Solar System. In summary, to describe how the theory works you could say that “matter (e.g. the Sun) tells spacetime how to curve, and the curvature of spacetime tells matter how to move”. For readers interested in more technical details see Text Box 3.3 on page 56 of the book (2). As an aside, this picture of gravity as a result of the curvature of space and time, rather than being a ‘force’, poses a significant problem when physicists try to unify gravity with the other three fundamental forces of nature – which is something that the physics community has been trying to do for the last hundred years or so. It is also worth noting that Einstein became a Nobel Laureate, not for developing his two monumental theories of relativity, but for his work on the quantum-mechanical implications of the photo-electric effect! The elevator thought experiment was crucial in Einstein’s struggle to introduce gravity and accelerated motion into his relativistic theories – the "happiest thought" of his life (around 1907-1908) that bridged special relativity and general relativity. By imagining an accelerating elevator, he developed the principle of equivalence, which states that gravity and acceleration are indistinguishable, allowing him to propose that gravity is the curvature of spacetime, not a force. If you are standing on the Earth’s surface and you drop something, it will accelerate towards the Earth’s centre at a rate of 9.81 metres per second per second (m/s/s) (neglecting other forces such as friction or aerodynamic drag). This means that the object will increase in speed by 9.81 metres per second for each second of its fall. This is referred to as a 1 g environment, and it is this gravitational influence that keeps us firmly attached to the ground. The essence of the thought experiment can be summarised by considering the following scenario.
Imagine yourself (or indeed Albert Einstein – see Fig. 1) in a small elevator compartment with no windows, and experiencing a ‘normal’ 1 g environment. The easiest conclusion to draw from this is that the elevator is indeed stationary in a gravity field while resting on the Earth’s surface. However, there is another possibility. The elevator could be in deep space, very distant from any gravitating objects such as stars, and accelerating ‘upwards’ at a rate of 9.81 m/s/s (it’s obviously a very strange elevator with some sort of rocket attached to it!). Albert will experience the same 1 g environment as he did with the elevator resting on the Earth’s surface, and if he drops something it will appear to fall at a rate of 9.81 m/s/s as the floor of the elevator is accelerating upward at this rate towards the object. So, the two cases are indistinguishable, in one case the 1 g environment caused by gravity and in the other by acceleration. This equivalence principle helped Einstein realise that an observer in a ‘sealed room’ cannot distinguish between being at rest in a gravity field or being accelerated in free space. So how did this help him to take the crucial step of considering gravity to be a manifestation of spacetime curvature rather than a ‘force’? Going back to Albert in his elevator, imagine an intense pencil beam of light (a laser beam?) entering through a small hole on one side of the compartment. If the elevator is accelerating ‘upwards’ in free space, then in the time it takes for the beam to traverse the compartment, the elevator would have moved upwards a little. Hence the beam will arrive at the opposite wall at a slightly lower position compared to the entry hole. The beam will appear to have been bent slightly downwards (see Fig. 2, in which the effect is greatly exaggerated for the sake of clarity). Einstein figured that the equivalence principle would suggest that the same thing – the bending of light – will also happen in a gravity field. 
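The size of the bending effect in the elevator is easy to estimate: in the time the light takes to cross the compartment, the floor rises by ½at². A back-of-envelope Python sketch (the 2-metre compartment width is an assumed, illustrative figure):

```python
# Estimate how far the light beam appears to 'fall' while crossing
# Einstein's accelerating elevator compartment.
c = 299_792_458.0  # speed of light, m/s
a = 9.81           # elevator's acceleration, m/s^2 (matching 1 g)
width = 2.0        # assumed compartment width, m (illustrative)

crossing_time = width / c            # ~6.7 nanoseconds
drop = 0.5 * a * crossing_time ** 2  # distance the floor rises in that time
# drop works out to around 2e-16 m -- far smaller than an atom,
# which is why Fig. 2 has to exaggerate the effect enormously.
```

The tiny size of this number also explains why the bending of starlight by the Sun, first measured in 1919, was such a delicate observation.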
This marked the beginning of a tortuous eight-year journey, enabling Einstein to shift away from treating gravity as a Newtonian force and toward understanding it as the effect of the curved geometry of spacetime. Another curious feature of Einstein’s general theory is the notion that time slows down (clocks tick more slowly) close to a gravitating object. This gravitational time dilation has been experimentally verified, and is furthermore incorporated into the engineering of the GPS system that we all use in our cars these days. Without taking account of this effect, positioning estimates would be kilometres in error after a couple of days. This attribute can also be predicted using the elevator model, but the explanation is a little more difficult, so I have decided to pass over the details. Finally, Einstein’s elevator can help in the understanding of weightlessness. Imagine you (or Albert – see Fig. 3) are floating freely inside the compartment, and around you other objects are floating as well, and you feel totally weightless. Does this mean you are far away from all gravitating objects, somewhere in deep space? Again, you cannot be sure. Alternatively, you and the elevator could be in a gravitational field but in a state of free fall. In this case you and everything else within the elevator, and the elevator itself, would all be accelerating at the same rate so that, inside, no influence of gravity can be detected, hence establishing that a free-falling frame is equivalent to an inertial frame in empty space. This aids understanding of the kind of weightlessness experienced by astronauts on the International Space Station (ISS). The spacecraft has not escaped Earth’s gravity, but is in a continuing state of free fall. Its forward motion along its orbit curves towards Earth in this falling state, but of course the Earth’s surface curves away as well, so that, fortunately, the spacecraft’s trajectory never intersects the Earth’s surface!
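Returning to the GPS point above, a rough weak-field estimate gives a feel for the numbers: the satellite’s clock gains time by sitting higher in Earth’s gravity well and loses time because of its orbital speed. The orbit radius and gravitational parameter below are standard published figures, but this is only a back-of-envelope sketch, not the full relativistic treatment built into GPS itself:

```python
import math

c = 299_792_458.0        # speed of light, m/s
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6        # mean Earth radius, m
r_orbit = 2.6571e7       # GPS orbit radius (~20,200 km altitude), m
seconds_per_day = 86400.0

# Satellite clock runs FAST because it sits higher in the gravity well:
grav_us_per_day = GM / c**2 * (1/r_earth - 1/r_orbit) * seconds_per_day * 1e6

# ...and SLOW because it is moving (circular orbital speed v = sqrt(GM/r)):
v = math.sqrt(GM / r_orbit)
vel_us_per_day = v**2 / (2 * c**2) * seconds_per_day * 1e6

net_us_per_day = grav_us_per_day - vel_us_per_day  # ~ +38 microseconds/day
range_error_km = net_us_per_day * 1e-6 * c / 1000  # ~11 km/day if uncorrected
```

A light signal covers about 300 metres per microsecond, which is why a clock error of tens of microseconds per day translates into position errors of kilometres so quickly.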
I hope you have enjoyed this excursion into the realm of thought experiments, and have found it helpful. Einstein was a master of this art, and used it continually to challenge the advocates of quantum mechanics at a time in his life when he felt the theory was incomplete.
Graham Swinerd, Southampton, UK, March 2026.

(1) Einstein and Besso: Correspondence 1903-1955, Editor P. Speziali, Hermann Academic Press, 1972.
(2) From the Big Bang to Biology: where is God?, Graham Swinerd & John Bryant, Kindle Direct Publishing, 2020.
John writes ... I started to write this blog post on February 4th, approximately half-way between the Winter solstice and the Spring equinox. In the Christian church, February 2nd is celebrated as Candlemas, remembering the presentation of the infant Jesus, the Light of the World, in the Temple. It is also the date of the ancient Celtic festival of Imbolc, which hints at the ‘pregnancy’ of Earth in relation to the return of light in the lengthening days. All of this reminds me of the relationship of our beautiful planet Earth with the Sun. Our distance from the Sun is one of the major factors in providing conditions that are ‘just right’ for the development and flourishing of life, so much so that several scientists, including Paul Davies and the late Stephen Hawking, have talked about planet Earth being located in the ‘Goldilocks zone’; indeed, one of Paul Davies’ books is entitled ‘The Goldilocks Enigma’ (Penguin Books, 2006). As we sit in that zone, the amount of sunlight that hits Earth’s surface every hour is enough to meet the energy needs of human society for a year, a fact that I find totally amazing. Just pause and think about that for a minute or so. And of course we are harnessing some of that energy, albeit a tiny fraction of the total available, to generate electricity via photovoltaic cells. However, all over the globe, there are living organisms that utilise many times more of that energy than humankind is able to do. Those organisms are plants, including algae, and blue-green bacteria – i.e. organisms capable of photosynthesis, the capture (‘fixation’) of carbon dioxide driven by solar light energy, with oxygen as a by-product. But there’s more to it than that. Life as we know it is nearly all dependent on photosynthesis: 99% of living organisms, including humans, are directly or indirectly dependent on photosynthesis. Without photosynthesis there would be no complex life on Earth. Nature as we currently know it, including humankind, would not exist.
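That ‘one hour of sunlight versus one year of human energy use’ claim can be sanity-checked with rough numbers. The sketch below uses the top-of-atmosphere solar constant, Earth’s cross-sectional area, and a ballpark figure of roughly 600 EJ for annual world energy consumption; all are assumed round figures, and the amount actually reaching the surface is somewhat lower, but the order of magnitude holds:

```python
import math

solar_constant = 1361.0  # W/m^2, solar irradiance at the top of the atmosphere
r_earth = 6.371e6        # mean Earth radius, m
cross_section = math.pi * r_earth**2  # disc the Earth presents to the Sun, m^2

# Solar energy intercepted by Earth in one hour:
joules_per_hour = solar_constant * cross_section * 3600  # ~6e20 J

# Rough recent figure for annual world primary energy consumption (~600 EJ):
world_annual_use = 6.0e20  # J (assumed ballpark)

ratio = joules_per_hour / world_annual_use  # close to 1
```

The ratio comes out very close to one, which is the arithmetic behind the oft-quoted ‘hour of sunlight, year of energy’ comparison.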
Let me put that another way: without photosynthesis, we would not be here – another awesome thought. So, as we continue to move away from using fossil fuels, is there any way by which we can use the power of photosynthesis to help us achieve that aim? Well, in one sense we already do. All of the carbon contained in all the biofuels in use or in development was initially fixed by photosynthesis (see Biofuels and Bioenergy, John Love and John Bryant, Wiley, 2017). But that’s not what I am thinking of. I am thinking of the possibility of using the light-harvesting mechanism of photosynthesis in a more direct way. This was first proposed as long ago as 1912 by the very forward-thinking Italian chemist Giacomo Ciamician. At that point, nothing was known about the actual mechanisms used by plants to transform light energy into chemical energy. Ciamician envisaged the invention of photo-chemical devices that mimic the photo-chemical events of photosynthesis in order to synthesise compounds that could be used as fuels. He believed that the switch from fossil fuels to radiant energy could decrease the wealth gap between the poorer nations of southern Europe and the richer nations of northern Europe and, on a wider scale, would contribute to human progress and happiness. At this point, we need to think briefly about how photosynthetic organisms actually capture and utilise light energy. The primary photo-receptive compound is chlorophyll which is ‘excited’ by light, leading instantly to the photolysis (light-driven splitting) of water, releasing oxygen and electrons. Cambridge biochemist Robin Hill discovered this process in 1937 and it has been named the Hill reaction in his honour. 
The oxygen produced in this reaction is dissipated to the atmosphere, while the electrons are passed along a short chain of electron-carrying molecules embedded in the chloroplast membrane, at the end of which this bio-electric energy is used to drive the synthesis of two energy-carrying chemicals. These are ATP (adenosine triphosphate) and the electron donor NADPH (reduced nicotinamide adenine dinucleotide phosphate). It is the energy in these molecules that enables the biochemical reactions involved in fixing carbon dioxide into sugars. Thus, the molecules that capture the energy from the ‘light reactions’ drive the ‘dark reactions’ (more details are available in Functional Biology of Plants, Martin Hodson and John Bryant; Wiley-Blackwell, 2012). In connection with our need to move away from using fossil fuels, is there any way in which we can either mimic these photosynthetic processes, as envisaged by Giacomo Ciamician, or even use them directly? Thinking first of artificial photosynthesis, the key problem is to find a way in which the excited state of a photo-receptor can be transformed into chemical energy. There has been some very good progress with this in the light-driven formation of hydrogen and of methane, both of which can be used as fuel. However, I need to say that neither of these has yet been scaled up to anything like the extent needed to make a real contribution to our need for non-fossil fuels. Indeed, anaerobic digestion of biological waste already produces many times more methane than can currently be produced by solar fuel cells. Nevertheless, these processes show promise, so ‘watch this space’. However, as I briefly mentioned above, there is another approach. Is it possible to use more ‘natural’ systems? Researchers at Cambridge University certainly think this is feasible. Several approaches are being taken, of which I will briefly mention two.
The first approach, initially developed in the Department of Biochemistry several years ago, uses colonies of blue-green bacteria or of green algae. These are immobilised as a biofilm on a surface that acts as an anode in a ‘bio-voltaic cell’. Illumination of the biofilm results in the splitting of water as in normal photosynthesis, but the resulting bio-electric energy, in the form of electrons released from the cells via an electron carrier, is then passed to an electrode, thus completing the electrical circuit. The amount of electricity generated is not huge but is enough to power micro-processors and small electrical devices, such as digital clocks. Indeed, in more recent improvements of the process, one of these bio-voltaic cells, the size of an AA battery, ran a micro-processor for a year, giving hope that wider applications may be possible (see Algae-powered computing: scientists create reliable and renewable biological photovoltaic cell | University of Cambridge). The second approach is seen in a more recent development by a research team in the university’s Department of Chemistry (see This artificial leaf turns pollution into power | ScienceDaily). This starts with non-natural photo-receptors which use the captured solar energy to split water (as in photosynthesis); the resulting bio-electric energy drives the fixation of carbon dioxide to form, not a sugar (as happens in the natural ‘dark reactions’), but formic acid, which can be used as a starting point for the synthesis of several important biochemicals, including pharmaceuticals. All the components for this process, namely the photo-receptors, electron carriers and enzymes, are immobilised on an inert matrix to make a ‘semi-artificial leaf’, which is a ‘hybrid’ structure of non-natural and natural components. Overall then, we have some promising innovations in the direct use of photosynthesis, raising hopes that these or similar processes may one day make a significant contribution to our energy needs.
The research goes on!
John Bryant, Topsham, Devon, February 2026

Graham writes ... Welcome everyone to another year of blog posts by John and myself. I hope you all had a lovely time anticipating and celebrating Emmanuel – God with us – over the Christmas period! And now that the calendar has clicked over to 2026, it is customary to wish all our readers a blessed and peaceful New Year. I don’t normally bother with New Year resolutions, but this year I made one, which was to try to make these blog posts shorter – however, I think I may have broken it already with this post! Anyway, enjoy! If you have read any of our past posts, or indeed this one, please leave a greeting or a comment at the end – thank you. The topic today involves another anniversary – 10 years since the first gravitational wave detection. However, that’s not quite accurate, since the first wave to be detected hit our instruments at 09:51 UTC on 14 September 2015. It then took an army of researchers from 80 institutions in 15 countries until February 2016 to work out what had been ‘seen’ with sufficient confidence before going public. So, it is approximately a decade since the first rippling ‘whisper’ in spacetime was announced, an event that is now catalogued, among many others, simply as GW150914. This year, then, we mark the 10th anniversary of this historic event, and reflect on what the discovery means, how it transformed astrophysics, and what lies ahead in the new era of gravitational-wave astronomy. In the above, I described the signal as a ‘whisper’, as it was extremely weak. This is a consequence of the event being very distant (1.3 billion light-years away) and also because, of the known fundamental forces, gravity is 10 to the power of 36 times weaker than electromagnetism (the other long-range force).
Detecting the wave in this instance came down to measuring a change in distance of the order of a fraction of the diameter of a proton – a task that seemed impossible for the many decades after gravitational waves were predicted theoretically by Albert Einstein. Interestingly, Einstein published his prediction in 1916, so it took the experimentalists exactly 100 years to catch up with him! For me, it was one of those events when you remember where you were and what you were doing when the news broke. So how come this rather esoteric happening registered as so significant for me? Many years ago, in 1975, I graduated with a PhD on the topic of gravity waves in Einstein’s theory, and I honestly thought that the detection of such events would not happen in my lifetime. When the first historic detection of gravity waves occurred, I was at Lee Abbey, Devon (LAD) co-hosting a conference with John. For those of you who are familiar with LAD, 2015 into 2016 was the period when a major refurbishment of the main house was underway, and as a consequence the conference was held in the neighbouring youth activity centre called the Beacon – great days. At the time of the discovery there were only two gravity wave observatories (referred to as LIGO, standing for ‘Laser Interferometer Gravitational wave Observatory’) operating in the world, and both of these were in America, one in Livingston, Louisiana, and the other in Hanford, Washington State. This meant that it was not possible to triangulate the position of the event in the night sky, but the attenuation of the energy allowed the distance to be estimated. So, what was it that caused the subtle ‘chirp’ in the detectors? Quoting the Executive Director of LIGO at the time, David Reitze: "Take something about 150 km in diameter, and pack 30 times the mass of the Sun into that, and then accelerate it to half the speed of light.
Now, take another thing that's 30 times the mass of the Sun, and accelerate that to half the speed of light. And then collide them together. That's what we saw here. It's mind boggling." Basically, he’s describing two monster black holes spiralling around each other, getting closer and closer due to the huge amount of orbital energy being lost in the form of gravitational waves. One has a mass of 30 solar masses (1 solar mass = the mass of our Sun) and the other about 35 solar masses. In the moments just before they impact and coalesce, they are orbiting each other several tens of times per second. At the moment when their event horizons merge and they become one, the event produces a pulse of pure radiant energy in the form of gravitational waves equivalent to 3 solar masses (E equals m c squared!). It is the huge energy of this pulse that allowed the LIGO systems to detect the event, even though the black holes were so far away. To put this pulse of gravity wave energy into perspective, in that brief moment of impact the energy produced was more than the combined luminous output of all the stars in the Universe! Extraordinary …! Way back in February 2016, I was blogging on a promotional website for another book (1), this one about spaceflight for lay persons, and of course I had to write about the events described above. There is a whole lot of interesting detail explained there, hopefully in an accessible way, that I wouldn’t want to repeat here. So, if you are sufficiently interested, please have a look at that post, which can be found here. I think it conveys nicely my excitement at the time, and covers things like:
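The ‘E equals m c squared’ arithmetic behind that pulse is easy to check: converting three solar masses entirely into gravitational radiation gives an energy of roughly 5 × 10⁴⁷ joules.

```python
c = 299_792_458.0      # speed of light, m/s
solar_mass = 1.989e30  # kg, mass of the Sun

# Three solar masses of mass-energy radiated away as gravitational waves:
energy_joules = 3 * solar_mass * c ** 2
# roughly 5.4e47 J, released in a fraction of a second
```

For comparison, the Sun radiates about 4 × 10²⁶ watts, so this single burst briefly outshone (in gravitational waves) the combined light output of enormous numbers of stars.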
For those of you not keen on taking this diversion, however, how each LIGO works can be summarised briefly as follows. Each LIGO is effectively a laser interferometer, in which a high-power laser produces a light beam which is divided by a beam splitter. The two beams are then directed down two 4 km long evacuated tunnels arranged at right angles in an L-shaped configuration. The beams are bounced back and forth by mirrors, before eventually returning to their starting point. If the passage of gravity waves has disturbed the curvature of space-time in the observatory, there will be a difference in the length of the light paths of the two beams, which is estimated by analysis of the interference between the beams in the detector. This difference in the length of the light paths is anticipated to be minuscule – less than the diameter of a sub-atomic particle, as mentioned above! Ten years on from the first detection there is now a global network of gravitational wave detectors, LIGO (USA), Virgo (Italy) and KAGRA (Japan), so it is now possible to determine the position of each event. This allows events to be observed both by gravitational wave detectors and across the electromagnetic spectrum (gamma- and X-rays, optical and radio) – an activity referred to as multi-messenger astronomy. In that time nearly 300 gravitational wave events have been catalogued, including neutron star (2) collisions. Gravitational wave astronomy has matured into a science which has unveiled a Universe full of intriguing, violent events which were previously unforeseen. It has also allowed us to test our current physical theories, especially Einstein’s gravity in the strong field regime. Remarkably, after 110 years Einstein still stands, but he has a long way to go to beat the 200-odd years that Newton’s theory reigned before being falsified.
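The arithmetic behind that ‘sub-atomic’ figure: the fractional change in arm length (the strain, usually written h) peaked at around 10⁻²¹ for GW150914. A quick sketch, taking 1.7 × 10⁻¹⁵ m as an approximate proton diameter:

```python
arm_length = 4_000.0       # m, length of each LIGO arm
peak_strain = 1e-21        # order of magnitude of GW150914's peak strain
proton_diameter = 1.7e-15  # m, approximate

# Change in arm length produced by the passing wave:
delta_length = peak_strain * arm_length  # ~4e-18 m

# Expressed as a fraction of a proton's diameter:
fraction_of_proton = delta_length / proton_diameter  # a few thousandths
```

The change comes out at a few thousandths of a proton diameter, which is why the mirrors must be isolated from every conceivable source of terrestrial vibration.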
KAGRA (Kamioka Gravitational Wave Detector), in Japan, is the most recent detector to become operational (February 2020), and is the first gravitational wave detector built underground and the first to utilize cryogenic mirrors, which help reduce thermal noise and improve sensitivity. Looking beyond this, next-generation observatories like the Cosmic Explorer (USA) and the Einstein Telescope (Europe) promise an order-of-magnitude leap in reach and precision. So, what opportunities for future research does the relatively new science of gravitational wave astronomy offer? As well as providing new tests for our theoretical understanding of gravity, the main avenues foreseen at present include:
Refining the value of Hubble’s constant
There has recently been much research concerning the evaluation of Hubble’s constant Ho, which describes the rate of expansion of the Universe. A variety of existing techniques has produced results which are not consistent with each other – a situation that has been labelled ‘the Hubble Tension’ or the ‘Crisis in Cosmology’ (see the blog post for June 2024 for details – click on the relevant date displayed on the right hand side of this page). Gravitational wave events, particularly neutron star mergers, offer an independent way to measure the Universe's expansion, which is a useful addition to the debate. The distance d to the merger can be estimated from the gravity wave signal’s amplitude, and its recessional speed v from the redshift of its host galaxy's light, hence giving an estimate of Ho (Ho = v/d).
Probing the early Universe
For thousands of years after the Big Bang the Universe comprised a very hot and dense ‘fireball’, which was opaque to the transmission of electromagnetic radiation. Then, about 380,000 years after the initial event, matter and radiation ‘decoupled’ and the light we now see as the cosmic microwave background (CMB) was free to propagate throughout the Universe. As a consequence, ‘conventional’ astronomy, using the electromagnetic spectrum, is effectively barred from ‘seeing’ the creation event near time zero. However, we do know that cosmic inflation (see blog post for May 2023 for details), if it happened, would have produced copious amounts of gravitational radiation which would still exist today as a gravitational wave background. And I suspect that such a background would be saturated with information about the creation event, in the same way that the microwave background is packed with information about the decoupling era of the early Universe.
I have no idea what such a gravity wave background observatory might look like, but gravity wave cosmology may be the only means of acquiring direct observations of the early events that gave birth to the Universe we see today.
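The standard-siren method mentioned above (Ho = v/d) can be illustrated with round numbers loosely inspired by the 2017 neutron-star merger GW170817; the figures below are illustrative, not the published measurement:

```python
# Illustrative 'standard siren' estimate of Hubble's constant.
# Distance comes from the gravitational wave signal's amplitude,
# recession speed from the redshift of the host galaxy's light.
recession_speed = 3000.0  # km/s (assumed round figure)
distance = 43.0           # Mpc  (assumed round figure)

H0 = recession_speed / distance  # ~70 km/s per Mpc
```

The power of the method is that the distance comes directly from the gravitational wave signal itself, entirely independently of the ‘distance ladder’ used by conventional astronomy, which is precisely why it can arbitrate in the Hubble Tension.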
To finish I just wanted to share a poem composed by a good friend, Carol Plunkett, in 2016. She is not in any way a scientist, but she was nevertheless inspired to write this by the discovery of gravitational waves.

The Only Echo

Did man just hear the echo of the Universe’s start?
When something out of nothing oscillated from a Heart?
When Life, vast and unfathomable occurred when it was not,
and Space and Time immeasurable commenced its divine plot.
So that, in some millennia, a knowledge would arise
and render beings capable of listening to the skies.
And while with probing instruments they scoured the realms above
they almost tuned their frequency to that first Word Of LOVE.

© Carol Plunkett, February 2016.

The 2015 detection was like hearing a whisper from the cosmos — subtle, fleeting, yet transformative. Over the past decade, that whisper has grown into a chorus, with each gravity wave event telling a story of cataclysm, transformation and cosmic evolution. As we celebrate ten years, the journey is far from over. The detectors will get better, the observations richer, and who knows — perhaps in another decade we will trace gravitational waves back to the very origin of the Universe itself. I think I was born too early!
Graham Swinerd, Southampton, UK, January 2026

(1) How Spacecraft Fly – spaceflight without formulae, Graham Swinerd, Springer Science, 2008.
(2) Neutron stars are formed as the remnant of a dying star. At the end of its life, the star runs out of fuel, which causes a catastrophic collapse. To become a neutron star, it needs an initial mass between roughly 8 and 25 solar masses, leading to a collapsed core mass (after a supernova) of about 1.4 to 3 solar masses. The result is a neutron star, a super-dense object composed of neutrons. Anything more massive than about 3 solar masses will collapse further to form a black hole.

Graham writes … This is a very brief heads-up for all you space enthusiasts out there – the launch of a crewed mission to the Moon in early February, if all goes well with pre-flight activities. NASA is targeting February 6, 2026, for the launch of Artemis II, the first crewed mission to the vicinity of the Moon in over 50 years (since Apollo 17 in December 1972). This mission, an echo of the Apollo 8 mission in December 1968, will send a crew of four astronauts on a 10-day journey to perform a lunar flyby, testing critical spacecraft systems before future landing missions.

Mission Overview
Mission Goal: A crewed 10-day flight that will travel approximately 4,600 miles (7,400 km) beyond the far side of the Moon on a free-return trajectory. This mission serves as a critical test for the Space Launch System (SLS) rocket and the Orion spacecraft's life support and navigation systems.
The Crew: The four-member crew comprises Commander Reid Wiseman, Pilot Victor Glover, and Mission Specialists Christina Koch (NASA) and Jeremy Hansen (Canadian Space Agency).
Launch Windows: If the February 6 attempt is delayed, NASA has identified additional launch opportunities within the same window (February 7, 8, 10, and 11) and subsequent periods in March and April 2026.

Pre-Launch Status
Rocket Rollout: The fully integrated SLS rocket and Orion capsule are being rolled out from the Vehicle Assembly Building to Launch Pad 39B at Kennedy Space Center as I write (17 January 2026). Wet Dress Rehearsal: A final "wet dress rehearsal" – a full practice countdown including propellant loading – is planned for the end of January to ensure all systems are flight-ready. Flight Readiness Review: Following these tests, mission managers will conduct a final assessment before officially committing to the February 6 launch date. I shall probably blog again on this topic, but otherwise you can track mission progress and the official countdown on the Artemis II mission page. Graham Swinerd Southampton, UK January 2026 Graham writes ... It's well into December now, and I was due to write this month's blog post. However, it's clear that I'm not going to get to it - my sincere apologies. Both John and I would like to wish all our readers a happy Christmas and a peaceful and blessed New Year! Please take a moment to play the attached YouTube video of O come, O come, Emmanuel, one of my favourite carols. As you may know, it is an Advent carol, in which we look forward in anticipation of the coming of Jesus. This arrangement, by Taylor Scott Davis, is not the traditional one, but hopefully it will allow a brief moment of peace and reflection amid the busyness of the Christmas season. It is performed by the VOCES8 Foundation Choir and Orchestra, with members of Apollo5. I hope to resume 'normal service' in the New Year. The most likely topic at present is the marking of the 10th anniversary of the announcement of the detection of gravitational waves in February 2016. If you have been following our monthly posts over 2025, please leave an indication that you are there - a like, a greeting or a brief comment. Thank you. Graham Swinerd
Southampton, UK December 2025 John writes ... Introduction It is now 72 years since the publication of the papers which presented the double helical structure of DNA to the scientific community (and eventually to the world). It is an event embedded both in the history and the folklore of molecular biology and genetics, as are the names of at least two of the scientists involved, namely Francis Crick and James Watson. Watson died on November 6th, aged 97. Later in this post I will present a brief obituary. However, before that I want to look back, ‘beyond the double helix’, well before 1953, to give us the historical context and to consider how science works in the ‘real world’. Standing on the shoulders of giants We need to go back to 1869 to note the actual discovery of DNA. A biologist called Friedrich Miescher, working in the chemistry laboratory of the University of Tübingen (which was actually housed in Tübingen Castle), used pus from clinical bandages as a source of human cells for chemical analysis. He discovered a compound rich in phosphate and nitrogenous bases which he showed to be located in the cell nucleus; he was also able to isolate the same compound from salmon sperm. He called the compound ‘nuclein’, which we now know to have been DNA, and later speculated that it might have some connection with inheritance. Miescher is one of the ‘forgotten people’ of DNA research and deserves to be much more widely known than he actually is (see R. Dahm, 2005). Three years before Miescher’s discovery, an Austrian friar, Gregor Mendel, Abbot of St Thomas’s Abbey, Brno (then in the Austro-Hungarian Empire, now in the Czech Republic), had published the findings from his experiments on the inheritance of traits in pea plants. One of the key conclusions he drew from his work was that heritable traits were based in actual physical, albeit invisible, ‘factors’ which were passed on from generation to generation.
However, the paper was not widely noticed until 1900, when it was rediscovered independently by Hugo de Vries and Carl Correns; Mendel thus started to receive the credit that he deserved for his ground-breaking work. But what the Mendelian units of heredity (named ‘genes’ by the Danish botanist Wilhelm Johannsen in 1909) actually were remained a mystery. (Figure: a deoxyribonucleotide – a base, in this case adenine, joined to deoxyribose phosphate.) Thus, by the early years of the 20th century, there were two paths across the genetic landscape. However, they were about to merge. Analysis of cell nuclei revealed the existence of a substance called ‘chromatin’ which contained nuclein/DNA and protein. Individual units of chromatin appear as chromosomes (‘coloured bodies’) prior to cell division and behave during division in a manner consistent with their being, or containing, the Mendelian units of heredity. By this time too, the general structure of DNA was being worked out, namely a large molecule made up of just four different deoxyribonucleotides. I need to unpack this: a nucleotide is a nitrogenous base linked to a sugar-phosphate molecule. In DNA, the sugar is deoxyribose (ribose lacking an oxygen atom), hence deoxyribonucleotides and deoxyribonucleic acid; in RNA, the sugar is ribose. How the four different deoxyribonucleotides were arranged along the length of the molecule was at that time unknown; one possible model was that DNA was a set of repeats of a tetra-deoxyribonucleotide, i.e., an array of linear groups of deoxyribonucleotides, each group containing one copy of each of the four types (A, C, G, T). So, chromosomes behave as if they contain genes, the Mendelian units of heredity. But which component of chromatin actually carries the genetic information? The general opinion was that proteins had the wide variety needed, whereas it was thought that DNA, a molecule made up of only four building blocks, did not.
At this point I need to introduce Frederick Griffith, a British medical scientist working at the Liverpool Royal Infirmary. In the late 1920s, he showed that a non-virulent form of Pneumococcus could be transformed into a virulent form if co-injected into mice with dead cells of a virulent strain. The dead cells thus contained something that provided the genetic information to confer virulence. Griffith called this the ‘transforming principle’ but he did not know what it was. It took another 16 years after Griffith’s publication for the transforming principle to be identified. An American team, Oswald Avery, Colin MacLeod and Maclyn McCarty, working at the Rockefeller Institute for Medical Research, separated the cellular components of the virulent strain of Pneumococcus, focussing in particular on proteins and DNA. These were then used in attempts to transform the non-virulent strain into the virulent strain. The results were clear – proteins did not transform the non-virulent strain but DNA did. DNA was thus shown to carry genetic information. It is my view that the experiments of Griffith and of Avery, MacLeod and McCarty were absolutely key moments in research on DNA which eventually led, via the elucidation of the double helix, to modern molecular genetics. The demonstration that DNA is the ‘genetic material’ inevitably led to a flurry of research directed at understanding its structure, both in terms of its detailed chemical composition and of its ‘architecture’. One of those who focussed on DNA was Erwin Chargaff, working at Columbia University in New York (having fled from Germany in 1935 because of Nazi attitudes to and policies about Jews). He analysed DNA from several different organisms and came up with two major findings, published in 1950, which became known as Chargaff’s rules.
The first rule is that in any sample of DNA, the molecular concentration of the base A equals that of the base T, and the molecular concentration of the base G equals that of the base C. Thus, somehow, in synthesising DNA, the cell equates the amount of each of the larger two-ring bases (A and G) specifically with the amount of its smaller single-ring partner (T and C respectively). How was that done? Chargaff’s second rule was that the relative overall concentrations of A+T and of G+C vary between organisms. This is to be expected if DNA is the genetic material (and it also debunks the tetra-deoxyribonucleotide hypothesis that I mentioned earlier). Chargaff visited Cambridge in 1952 to talk about his work and while there he met Crick and Watson. He was not impressed. In an interview with the science historian Horace Judson he said that ‘they impressed me by their extreme ignorance’, referring specifically to what he perceived as their ignorance of organic chemistry. And so to the double helix It has been quite a journey since 1869, traversing scientific landscapes in genetics and biochemistry/biophysics – but we are now in the very early 1950s. And here are Francis Crick and James Watson in Cambridge, and Maurice Wilkins, Rosalind Franklin and Ray Gosling at King’s College, London. Gosling was a PhD student working under the direction of Franklin and later of Wilkins. It was he who took the famous ‘Photo 51’, an X-ray crystallographic image of DNA, showing clear evidence for a double helical structure.
He and Franklin co-authored the second of the three papers published consecutively in the leading science research journal Nature on April 25th, 1953, the first, of course, being the major reveal of the double helix by Watson and Crick (see M.J. Tobin, 2004). The latter pair were fortunate to have had access to Photo 51 – or as Wikipedia puts it so tactfully: ‘The crystallographic experiments of Franklin and Gosling, together with others by Wilkins, produced data that helped James Watson and Francis Crick to infer the structure of DNA.’ In a church service recently, I asked the congregation if they had heard of Watson and Crick (I emphasise that the question was entirely appropriate for the talk I was giving!). Almost everyone put their hand up. I then asked about Rosalind Franklin; about half of the congregation showed that they had heard of her. However, when I asked about Ray Gosling, only one person, a senior lecturer in Maths at Exeter University, raised his hand. Gosling, who died in 2015, aged 88, is one of those (nearly) forgotten heroes of science, which is why I have focussed on him here. I should add that after obtaining his PhD, he had a successful career in science, eventually becoming Professor of Physics Applied to Medicine at Guy’s Hospital Medical School. Returning to DNA, the team at King’s College were already of the opinion that it had a helical structure, and Crick, with his experience in and knowledge of biophysics, took it a bit further in proposing a double helix. But how did that tie in with the chemistry? He and Watson knew of Chargaff’s rules (see above) but had not yet developed the concept of base-pairing. Eventually, however, after a lot of model building, some brilliant flashes of intuition and some pure guess-work, the double helical model ‘emerged’. The reason for Chargaff’s first rule became clear: in the double helix, A or G in one strand is base-paired with, respectively, T or C in the other.
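As a small aside for readers who enjoy seeing the logic made explicit, the pairing rule can be sketched in a few lines of Python (the sequence below is an arbitrary invention, purely for illustration). Pair every base with its partner, and the totals of A and T, and of G and C, across the two strands are forced to be equal – which is exactly what Chargaff had measured:

```python
# Watson-Crick pairing: A with T, G with C
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def partner_strand(strand):
    """Base-pair each position of a strand to build its partner strand."""
    return "".join(PAIR[base] for base in strand)

strand = "ATGGCGTACCTTGAC"      # arbitrary illustrative sequence
partner = partner_strand(strand)

# Chargaff's first rule falls out automatically for the duplex:
duplex = strand + partner
assert duplex.count("A") == duplex.count("T")
assert duplex.count("G") == duplex.count("C")
```

Every A contributed by one strand is matched by a T in the other (and likewise G with C), so the equalities Chargaff observed are an inevitable consequence of pairing.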
However, it was either a stroke of genius or a brilliant guess that, in order to fit the dimensions implied by the X-ray data, one strand had to be upside-down in relation to the other (the two strands are ‘anti-parallel’). When we look at the structure of DNA we can see how perfectly it is designed. The genetic code is a linear array of bases. There is no constraint on which base (deoxyribonucleotide) is joined to which base in that linear array, so enormous variety is possible. The only constraint is that a base in one strand determines which base occurs at that position in the opposite strand. The specificity of this base pairing means that DNA is copied accurately in preparation for cell division. The two strands separate and each acts as a template for synthesis of its complement; the code is thus passed on. Further, specific base-pairing also means that working copies of a gene can be made in order for the cell to read and act on the code in that copy (the working copy is actually an RNA molecule, messenger RNA). As I have said elsewhere, the design of DNA is a work of genius. One last question: would the London team have eventually come to the same conclusion? Most scientists who were aware of their work believe that they, and in particular Franklin and Gosling, would have done so. But of course, the answer is irrelevant. Crick and Watson got there first. A career in DNA I think there can be little disagreement with the view that the elucidation of the structure of DNA was the most significant discovery in biology in the 20th century. From it has flowed a vast amount of research and application of the findings of that research. The ‘golden age’ of research on genes was already underway when I arrived in Cambridge about a decade after the famous papers had been published, and the place was obviously buzzing with excellence in nucleic acid and protein research.
Crick was still there (Watson had gone back to the USA) and had moved from the Medical Research Council’s lab in the Cavendish Laboratory (Physics Department) to the same organisation’s newly established (but already very prestigious) Laboratory of Molecular Biology on the southern edge of the city. I had gone to Cambridge with a strong interest in plants and vaguely expected to emerge from my Natural Sciences degree as a plant ecologist. However, I was thrilled by lectures on molecular genetics and biochemistry, which pulled me to the lab rather than the field. In my PhD project, I looked at the onset of DNA replication as plant cells emerge from dormancy, and that set me on a career in research on the biochemical mechanisms (and the control of those mechanisms) involved in gene expression and especially in DNA replication in plants. I am grateful for that career and feel, as Dame Jane Goodall also said of her work on chimpanzees and on environmental conservation, that I was following God’s calling. Obituary – James Watson, 1928–2025 It was obvious from an early age that James Watson was very bright. He went to the University of Chicago aged 15 and graduated with a degree in Zoology at 19. He was very interested in ornithology but was persuaded by Schrödinger (yes, that Schrödinger) that he should study the more molecular and chemical aspects of biology. Thus, his PhD research, conducted at Indiana University, Bloomington, was on the properties of bacteriophages (viruses which infect bacteria). He then spent a year as a post-doctoral researcher in Copenhagen before joining Francis Crick in the Cavendish Laboratory in Cambridge. As is evident from what I have already written, their collaboration was very successful.
They were very aware of the significance of their work – at the end of the day on which they finally worked out the double helical structure, they walked into the Eagle pub in Bene't Street, Cambridge, and announced: ‘We have discovered the secret of life’. In his book The Double Helix, Watson says that he rarely saw Francis Crick in a modest mood – but those who knew them both suggest that he could have said the same about himself. On returning to the USA, Watson spent two years at the California Institute of Technology, working on the structure of RNA, followed by another year at the Cavendish Laboratory, before joining the academic faculty in the Biology Department at Harvard University at ‘the other Cambridge’ – across the river from Boston, Massachusetts. There, he was part of a group of scientists who worked on the roles of RNA in gene expression and thus made a major contribution to our understanding of how genes actually work in the cell to direct protein synthesis. However, he was not always the easiest of colleagues. Those working in non-molecular aspects of Biology felt that he denigrated their work, believing it to be less important or less significant than his. In 1968, Watson was appointed Director of the Cold Spring Harbor Laboratory on Long Island, New York. His time there was very successful. In the words of Tim Radford, former science correspondent of The Guardian, he turned the Laboratory into a ‘scientific powerhouse’ (see Tim Radford’s obituary here), especially in cancer genetics and molecular biology but also in several other fields, including plant molecular biology. Following on from that success, in 1990 he was appointed as Director of the Human Genome Project. The project, initiated that year, was based at the National Institutes of Health in Bethesda, Maryland (although there were subsidiary centres such as the Wellcome Sanger Institute, near Cambridge, UK).
Having set up the main (US) branch of the project and ensured that gene sequences would be published (and not patented), Watson returned to Cold Spring Harbor in 1992, where he was appointed President of that institution. As for the Human Genome Project itself, as our readers will already know, it was a great success, and by 2003 (50 years after the publication of the structure of DNA) a complete sequence of an ‘average’ human genome had been published.
There can be no doubt that over the course of a long career, James Watson made a very large contribution to our knowledge of molecular biology. This was recognised and honoured by the world science community, and by universities and governments all over the globe. However, in 2007, a shadow was cast over that career when he stated that people of African heritage were genetically less intelligent than people of Caucasian heritage. The Science Museum in London withdrew an invitation for him to lecture, and he was asked to resign from his post at Cold Spring Harbor (although he was given an honorary fellowship). In 2019, in a TV documentary, he repeated those views, and at that point the Cold Spring Harbor Laboratory withdrew his honorary emeritus fellowship. A sad end to an otherwise glittering career. John Bryant Topsham, Devon November 2025 Graham writes ... Graham and John co-led the 7th conference in the ‘Big Bang to Biology’ series, which was hosted by Lee Abbey, Devon, during the week of 6th–10th October 2025. And what a great place to do it! Lee Abbey is a Christian retreat, conference and holiday centre situated on the North Devon coast near Lynton, which is run by an international community of predominantly young Christians. The main house nestles on the hillside, overlooking a grand view of the beautiful North Devon coastline in a 280-acre (113-hectare) Estate of farmland, woodland and coastal countryside. It even has its own beach, but there wasn’t too much call for bathing during this early October period! The house in its current gothic revival style was originally built in the 1850s as a family home, and it wasn’t until 1946 that the house was acquired ‘to equip and serve the church and its people’. The house has now been refurbished to modern standards and is a delightful place to host a conference. The South West Coast Path passes through the Estate and the Exmoor National Park is just a short drive away.
The meeting took place just before the autumn clock change, so that nightfall occurred at a reasonable time (not too late) in the evening. The site is on the edge of the Exmoor National Park Dark Sky Reserve and has an amazing night sky, but unfortunately a full moon and persistent high-level cloud prevented any organised star-gazing activity. We were pleased to welcome around 40 guests, who had booked in for a Science and Faith extravaganza. One of the joys of Lee Abbey is that speakers and delegates share the whole experience, meeting, eating and talking together for the whole week. The guests were very enthusiastic, encouraging and gracious (which made the week a pleasure for us speakers), coming as they did with a hunger to learn more and to share their own thoughts and experiences in discussion. It was also great to welcome Liz Cole back to the conference, with her new publication ‘God’s Cosmic Cookbook’ (1) – cosmology for kids! The usual format for the conference is to have two one-hour sessions each morning, with the afternoons remaining free to allow time to relax or to explore the local area. The opportunities to walk the Estate or the ‘Valley of Rocks’ are many and varied, and guests often find that the car remains in the car park for the week. The Valley of Rocks is very close by, literally a 15- or 20-minute walk from the House, and is designated an Area of Outstanding Natural Beauty. Its U-shaped dry valley is known for its dramatic cliffs and ancient rock formations. Unusually, the valley runs parallel to the coast, and it is believed that a river once flowed here. Regarding the conference sessions, Graham kicked off on Tuesday morning with talks on the limitations of science (2) and the remarkable events of the early Universe (3), followed on Wednesday by a presentation on the fine-tuning and bio-friendliness of the laws that govern the Universe, combined with the story of his own journey of faith (4).
Following on, John presented a session entitled ‘We are Stardust’ in which he discussed the origin of life, and the difficulty that science currently has in understanding how it all started (5). In his second session, on Thursday morning, ‘There is more to life than the Double Helix’, John discussed human evolution and what it means to be human (6). This was followed by a one-hour slot giving guests (and speakers!) the welcome optional opportunity to receive prayer ministry. The final session was an hour-long Q&A in the late afternoon on Thursday. This was both a pleasure and a challenge for us speakers, with some piercing questions asked. As mentioned earlier, after the efforts of the mornings, guests were free in the afternoons to enjoy the delights of the Lee Abbey Estate and the adjoining Valley of Rocks, followed by entertainment in the evenings. However, additional activities were also arranged by speakers or community. Another attribute of the Lee Abbey Estate is that it is a working farm, and on Tuesday afternoon guests were invited by Estate Manager Simon Gibson to visit the Lee Abbey farm. Later that afternoon, guests were also invited to attend an optional interactive workshop, ‘DNA & Genetics: what’s Ethics got to do with it?’, led by John, who has significant expertise and experience in this topic. On Wednesday afternoon John offered a walk to see the local flora, fauna and geology of the Estate and the Valley of Rocks, and he was appreciative of the able assistance of a guest, Prof Tony Hurford, who is a professional geologist. Wednesday also saw an entertaining late-afternoon presentation by Dave Hopwood on Film and Faith. Graham, John and Liz were able to sell several copies of their respective publications during the week, and on Thursday afternoon offered a book-signing event prior to the Q&A session. Thank you to all who booked in and made the experience so enjoyable and worthwhile.
Graham Swinerd Southampton, UK John Bryant Topsham, Devon, UK October 2025 Picture credits: All pictures were taken by Graham or Marion Swinerd, unless indicated otherwise. Postscript: Graham and John have been offered a week at Lee Abbey in the Spring of 2027 to run another science & faith conference. Our response to this kind offer has been along the lines of ‘we will do it, God willing!’, given our advancing years. We will seriously consider the offer, but the current conference may have been the last time …? (1) Elizabeth Cole, God’s Cosmic Cookbook: your complete guide to making a Universe, Hodder & Stoughton, 2023. (2) Graham Swinerd and John Bryant, From the Big Bang to Biology: where is God?, Kindle Direct Publishing, November 2020, Chapter 2. (3) Ibid., Chapter 3. (4) Ibid., Chapter 4. (5) Ibid., Chapter 5. (6) Ibid., Chapter 6. John writes ... Science sometimes comes up with results which are puzzling and/or difficult to fit into current understanding. Many years ago, early in my career, one of my PhD students was doing research on tobacco mosaic virus, which like the virus which causes COVID, has a genome made of RNA and not DNA. Thus, we expected infected plants to express a virus gene encoding an enzyme which copies RNA into RNA (so that the virus genome is replicated in the infected plant). This expectation was fulfilled but very puzzlingly, there were two such enzymes, not one, with the second one being present in uninfected plants. The latter fact means that it was encoded in the plant’s genome, not the virus’s. We checked and double-checked but the result was still clear: plants had an enzyme which copied RNA into RNA but why they did was a complete mystery. Our paper attracted some attention but then was quietly forgotten. In my recent blog post, I wrote about types of RNA that cells synthesise as part of a process to get rid of unwanted messenger RNA molecules* that are no longer needed. 
One of these regulatory RNAs, anti-sense RNA, was discovered about ten years after we discovered our mysterious enzyme. We now know that our ‘orphan enzyme’ has a major role in the synthesis of anti-sense RNA, although the discoverers of the latter were actually credited with discovering the enzyme. It was several years later, in a conversation between me and one of the leaders of the anti-sense research group, that it was recognised that our discovery was indeed the enzyme that made anti-sense RNA, and that it was a pity our paper had not gained as much attention as it should have done. But, hey, that’s science, and my research group has been very happy to have made significant contributions to our understanding of the control of the replication of DNA genomes (including the discovery of another pivotal enzyme).
* See pages 115–121 in the book if you need to know more about messenger RNA. John Bryant Topsham, Devon October 2025 John writes ... First, some cool science. I am sure that all our readers are familiar with the central facts of molecular biology, namely that genes are copied into molecules called messenger RNA (mRNA) and that the code in mRNA is translated by the cell in order to make proteins (see pages 115–121 in the book). You will also be aware that genes can be switched on and off, so that, for example, when a particular protein is no longer required, the gene that encodes it is switched off. But there is a problem: many mRNA molecules are fairly stable; they remain in the cell after the relevant gene is switched off and thus the now unwanted protein can still be synthesised. However, mechanisms have evolved to deal with this problem. Cells are able to synthesise various types of RNA which are complementary to part of the sequence of the relevant mRNA, thus base-pairing with it and forming a short section of double helix in the mRNA. This inhibits the mRNA from being translated and marks it for de-activation and/or degradation. Different genes make use of different types of these inhibitory RNAs; the type I want to focus on here is microRNA. Please keep this in the back of your mind – we will return to it later in this post. Huntington’s Disease. Huntington’s Disease is a very distressing neurodegenerative condition caused by a dominant mutation in the HTT gene that encodes an essential brain protein called huntingtin (Htt). The mutant protein does not function properly; it accumulates in neurons and eventually causes the death of neuronal cells. Because the mutation is dominant, the offspring of anyone carrying it have a 50% probability of inheriting the condition. Further, as the gene is passed down the generations, the age of onset becomes earlier. For example, I once met a man in his mid-30s who was already showing signs of the disease.
The way the disease develops has been described as a combination of motor neurone disease, Parkinson’s disease and dementia. Personality and behavioural changes often occur in the early stages, as exemplified in a conversation I had several years ago (at Lee Abbey, in fact) with a woman whose husband was becoming increasingly verbally aggressive and angry as the disease started to take hold. At any one time in the UK, there are about 6,700 people at various stages of progression of the disease, which equates to about one sufferer in every 8,065 people. That may not seem many, but for individual patients and their families the number is irrelevant: the degree of suffering they experience is immense, and it matters not how many or how few other sufferers there are. But there is hope, as I discuss in the next section. A cure for Huntington’s Disease? Yes: in the past two days (I am writing on September 25th) there has been an amazing announcement, followed by expert commentary, that an effective treatment has indeed been developed. The key players in this work are a research team at University College Hospital, London, led by Professors Ed Wild and Sarah Tabrizi, in collaboration with uniQure NV, a Dutch pharmaceutical company that focuses on gene therapy. So, how did the team do it? Is it possible to inactivate the mutant gene whilst leaving the normal gene working properly? Yes it is, but not in a way which works directly with genes at the level of DNA. Referring back to the first paragraph, the sequences of the mutant and normal messenger RNAs are different enough to allow the research team to make a microRNA that is specific for the mutant message. In other words, it is possible to target the mutant mRNA specifically for inactivation/degradation. The next challenge is to deliver a consistent supply of the microRNA to a patient’s brain cells. This challenge was met by a ‘slice’ of pure genius.
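Before coming to that, the specificity argument just described can be made concrete with a toy sketch in Python. The sequences below are pure inventions (nothing like the real HTT transcripts, and real microRNA targeting is far subtler); they simply illustrate the principle that a short RNA can base-pair with a stretch present only in the mutant message:

```python
# RNA pairing: A with U, G with C
RNA_PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def can_target(micro_rna, mrna):
    """True if the microRNA can base-pair with some stretch of the mRNA."""
    # The stretch a microRNA binds is the reverse complement of the microRNA,
    # because the two paired strands run anti-parallel.
    binding_site = "".join(RNA_PAIR[b] for b in reversed(micro_rna))
    return binding_site in mrna

# Invented toy transcripts: the 'mutant' carries an expanded repeat region.
normal_mrna = "AUGGCUGCAGCAGCAGCUUAA"
mutant_mrna = "AUGGCUGCAGCAGCAGCAGCAGCAGCUUAA"

micro_rna = "UGCUGCUGCUGCUGC"   # invented microRNA aimed at the long repeat

assert can_target(micro_rna, mutant_mrna)      # pairs with the mutant message
assert not can_target(micro_rna, normal_mrna)  # normal repeat is too short
```

The microRNA needs a complementary stretch long enough to bind; only the expanded mutant message provides one, so only the mutant message is marked for degradation.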
A tiny gene, a very short piece of DNA encoding the microRNA, was synthesised and inserted into a benign virus, which was infused into the brains of the 29 people taking part in the trial. That process in itself was very complex, as described in the BBC’s report on this work (Huntington’s disease successfully treated for the first time), and brought about what was, in effect, genetic modification of the brain cells, enabling them to make their own supply of the microRNA. Three years on from this procedure, the results for patients have been remarkable. Disease progression has been slowed by 75%, while the death and loss of brain cells have been dramatically reduced. Patients who expected to be in wheelchairs are still able to walk, and one who had retired on health grounds has been able to return to work. Whilst this is not a complete cure, it is still an amazing result and holds out hope that, with a bit of tweaking, that 75% may be improved on. It also raises hopes for people who know they have the mutant gene but who are not yet showing symptoms, exemplified by Jack May-Davis, who featured in the BBC report. He is 30 years old but recalls that his father first showed symptoms in his late 30s and died at the age of 57. Having been one of the 29 taking part in the trial, Jack said the breakthrough had left him "overwhelmed" and that he can now envisage a future that "seems a little bit brighter"; it "does allow me to think my life could be that much longer". Epilogue. I am thrilled by this work for two reasons. Firstly, of course, because it brings hope to those who have the mutant Huntington’s gene, whether or not they have yet developed symptoms. Secondly, I am thrilled because this is a brilliant use of good science in the service of humankind. The various inhibitory RNAs, of which microRNAs are one type, are relatively recent additions to our knowledge of how genes work.
That knowledge was acquired by curiosity-driven research on gene expression as scientists worked, without any ‘commercial’ or ‘applied’ agenda, to reach a greater understanding of the fundamentals of molecular biology. Postscript.
As I wrote this post, it was clear that this news had gained a large amount of attention, with reports appearing across a wide range of print, broadcast and digital media. It even involved me: shortly after I started writing, I was invited (and accepted the invitation) to give an interview about the work on Trans-World Radio (UK). John Bryant Topsham, Devon September 2025 Graham writes … I mused long and hard about what to write about this month, and then I saw an interesting article in Nature on a very intriguing topic. The article, by Nature senior reporter Elizabeth Gibney (1), analyses the results of a recent survey of quantum mechanics (QM) practitioners and theorists on whether QM tells us anything about the nature of the subatomic world that it is used to investigate. At a recent event to commemorate the 100th anniversary of the theory, eminent specialists in the field came together to argue about the issues. To gain an insight into how the wider community interprets quantum physics in its centenary year, Nature carried out a survey on the subject. They emailed more than 15,000 researchers whose recent papers involved quantum mechanics, and also invited attendees of the Centenary Meeting, held on the German island of Heligoland, to take the survey. The 1,100 or so responses they received showed how widely researchers vary in their understanding of the most fundamental features of quantum theory and experiments. My own association with QM began in 1971 – only 46 years after its inception – as an undergraduate student. The institution in which I studied mathematical physics had a research focus on QM topics, with the consequence that we undergrads got rather a lot of QM-related teaching in our course. My scientifically immature attitude to QM at the time was that it seemed to be a very successful theory in terms of predicting the outcome of experiments, but intuitively it made no sense.
I guess everything else that I had encountered in my studies up to that point stemmed from classical physics, which did make sense of the underlying reality of the classical world – of which people are of course a part. This came as a bit of a shock, and the issue arose of how to cope with QM to achieve a pass in my final exams! A typical pragmatic approach for an undergraduate student …? I decided that I would simply use quantum theory without engaging with what it means – the ‘shut up and calculate’ approach (or more formally, an epistemic approach). As a consequence, I ultimately developed a dislike of QM, tending to believe that there was an underlying reality in the quantum world that the existing theory was not able to reveal. A result of all this was that when I began my PhD studies in 1972 I had decided that I would not engage with QM – instead I embarked on an enjoyable three years of research on the topic of Einstein’s theory of gravity (his general theory of relativity), which is inherently a classical theory. It's interesting to note that Einstein had a similar attitude to QM to the one I had unknowingly adopted as an undergraduate (it is also fair to say that we had very different motivations!). Despite the fact that he was one of the originators of QM, Einstein became troubled by what he perceived as the incomplete picture of reality that QM presented. All of his criticisms of the theory throughout his life stemmed from his belief that there was an underlying reality that it was science’s job to uncover. However, despite Einstein’s misgivings, it is undeniable that the mathematics of QM works beautifully, as witnessed by its successful application in the development of many recent technologies, such as nuclear engineering, medical imaging, computer chip manufacture and, indeed, the relatively new science of quantum computing.
It has also provided the most accurate predictions of the outcomes of experiments of any physical theory (see for example the discussion of the ‘muon g-2 experiment’ in the blog post of March 2022 – to see this, click the date on the archive list on the right hand side of the screen). So, should it just be regarded as an epistemic theory which tells us little about the nature of reality? This is one of the many questions posed to experts in the recent survey. But before we get into that, let’s take a brief look at how QM theory works. The most common approach is the so-called Copenhagen Interpretation, which could be regarded as the standard “textbook” view. This was developed by Niels Bohr and Werner Heisenberg in the 1920s, and is named after the city in which much of their seminal work was done. Other eminent physicists also played a major role in this endeavour, in particular the Austrian physicist Erwin Schrödinger, who developed his wave equation which is central to QM theory. An object’s behaviour is characterized by its wavefunction, which is a mathematical expression calculated using Schrödinger’s equation. The wavefunction describes a quantum state (the particle’s position or spin, for example) and how it evolves as a cloud of probabilities. As long as it remains unobserved, a particle seems to spread out like a wave, interfering with itself and other particles. According to this interpretation, a quantum particle exists in a fuzzy state of many possibilities until a measurement is made. Only when you look – through an experiment or observation – does the wavefunction ‘collapse’ into a definite outcome. However, the issues of what counts as a ‘measurement’, and why the act of observation should change reality, have long been discussed by physicists. In the survey, the Copenhagen Interpretation was the most popular preference, chosen by 18% of respondents who were confident or fairly confident in their choice.
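For readers who like to see things concretely, the probabilistic machinery described above can be illustrated with a toy calculation. This is only my own minimal sketch, not part of any QM software library: it represents a two-state system (a qubit) by its complex amplitudes, applies the Born rule (probability equals the squared magnitude of the amplitude), and mimics the ‘collapse’ of the wavefunction when a measurement is made. The names `amplitudes` and `measure` are purely illustrative.

```python
import random

# A toy two-state system in an equal superposition:
# (1/sqrt(2)) * (|0> + |1>). Each outcome has a complex amplitude.
amplitudes = {0: complex(2 ** -0.5), 1: complex(2 ** -0.5)}

# Born rule: probability of each outcome = |amplitude|^2.
probabilities = {outcome: abs(a) ** 2 for outcome, a in amplitudes.items()}
# Here each outcome has probability 0.5 (up to floating-point rounding).

def measure(amps):
    """Simulate a measurement: pick an outcome with Born-rule
    probability, then 'collapse' the state onto that outcome."""
    outcomes = list(amps)
    weights = [abs(amps[o]) ** 2 for o in outcomes]
    result = random.choices(outcomes, weights=weights)[0]
    # After collapse, the measured outcome has amplitude 1, the rest 0,
    # so repeating the measurement gives the same result every time.
    collapsed = {o: (1 + 0j) if o == result else 0j for o in outcomes}
    return result, collapsed

result, state_after = measure(amplitudes)
```

Before the measurement only the probabilities exist; after it, the state is definite. The interpretational argument is precisely over what, if anything, this mathematical collapse corresponds to in reality.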
Another approach is the Many-Worlds Interpretation, introduced by the American physicist Hugh Everett III in 1957, which got rid of the wavefunction collapse issue altogether. Instead, every time a quantum choice is made, the universe splits. In one world, the particle is here and in another it is there. Both outcomes are real, but we only experience one branch of the ever-multiplying multiverse. I don’t know what you might think of this, but I have always considered it to be totally crazy – but nevertheless it was favoured by 8% (confident or fairly confident) of the survey respondents. To give an impression of the diversity of opinions about QM theory among the practitioners and theorists, nine interpretation options were offered in the survey (I have only discussed three of them for the sake of brevity), and in some instances equal numbers of respondents took diametrically opposing views, showing how widely researchers vary in their understanding of the most fundamental features of quantum mechanics. Interestingly, 10% of respondents agreed with me and opted for the epistemic (information-based) approach. I think if you asked many of the physicists attending the Centenary Meeting if QM was wrong, most of them would say something like ‘it’s incomplete’. From what we have said, I think this is reasonable, as there is certainly something of value in the theory. However, some scientists are rather more outspoken. In this latter group I would include Roger Penrose, an eminent theoretical physicist and Emeritus Professor at Oxford University, and Lee Smolin, an American physicist with associations with Yale and Pennsylvania State Universities and co-founder of the innovative Perimeter Institute for Theoretical Physics in Waterloo, Canada. In their writings (2), (3), (4), they have both been unequivocal in their opinions that the current theory is simply wrong. But then, at the end of the day, why is this important?
The key to answering this question is the fact that there are currently two main pillars of modern physics – quantum mechanics (the theory of the very small) and Einstein’s theory of gravity (general relativity – the theory of the very large). Both of these theories were launched during an amazingly productive decade of the twentieth century, from 1915 to 1925, and both have stood the test of time remarkably well. However, all attempts to unify them into a ‘theory of everything’ – a theory of quantum gravity – have so far failed. So, when we look at problems where the domains of the two theories overlap – such as at the initial instant of the Big Bang, or at the centre of a black hole, where gravity and quantum effects are both very relevant – we do not have a theory to describe what is happening. And this is not just a recent problem. The physics community has been struggling with this for a century – and efforts continue. But what if Penrose and Smolin (and others I’ve not mentioned) are right in their belief that QM is wrong? Then our efforts at unification are doomed.
So at the end of the day we have a quantum theory that doesn’t say very much that the experts can agree upon about the underlying reality of the world of molecules, atoms and elementary particles. And the current version of QM may be an inappropriate starting point for the process of unification. Recently I heard, or read, a quote from someone – I can’t remember who – ‘Maybe we should give up on the process of trying to quantise gravity, and try gravitising quantum mechanics instead’. I’m actually not sure what they meant by ‘gravitising’, but I understand and appreciate the sentiment.

Graham Swinerd, Southampton, UK, August 2025

(1) Elizabeth Gibney, Nature, Vol. 643, pp. 1175–1179, 31 July 2025.
(2)* Roger Penrose, Fashion, Faith and Fantasy in the New Physics of the Universe, Princeton University Press, 2016.
(3) Lee Smolin, The Trouble with Physics, Penguin Books, 2006.
(4) Lee Smolin, Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum, Penguin Books, 2019.

* Warning: The publisher’s blurb about this book suggests that the content is suitable for the layperson. It is not.