Graham writes … The launch window for this mission to visit Jupiter's moon Europa opens in October, but the engineers at NASA's Jet Propulsion Laboratory are currently troubleshooting a serious issue with the Europa Clipper spacecraft. The objective of the mission is principally to determine whether Europa is a suitable place for life to develop, and as such it is generating a fair degree of excitement amongst astrobiologists. At first glance, Europa – a distant, cold, ice-covered world – doesn't look at all like an environment where life could flourish. However, in this case, appearances are deceptive. There is strong evidence that beneath the ice crust there is a warm water ocean, the heat most likely being generated by volcanic vents on Europa's ocean bed.

The problem with the spacecraft lies with the transistor elements, which are essentially the building blocks of the microprocessors onboard. The Jupiter system, where Europa Clipper will operate, exposes the spacecraft to radiation similar to that of the Earth's Van Allen belts, but around 50 times more intense. In order to survive this environment, the spacecraft's electronics need to be 'radiation-hardened' to achieve its planned 4-year mission lifetime. However, the hardness rating for these elements turned out to be incorrect, and in laboratory tests the transistors were found to fail sooner than they should. This poses a real headache for the engineers.

There are currently two main avenues of investigation: firstly, the obvious route of replacing the transistors; and secondly, assessing how long the existing integrated spacecraft could survive the radiation environment and whether it could achieve its mission on a shorter timescale. The second option would at least allow them to launch the currently integrated spacecraft in October, but with the prospect of a shorter mission at Jupiter. The first option remains possible, but it would risk missing the 3-week launch window in October. There are, however, later launch opportunities, but not until 2025 and 2026. So, everything is very much 'up in the air' at the moment, while the engineers mull over the options.

I will most likely write a post on the various aspects of this fascinating mission in October – when hopefully we will know better what's going on. I hope to 'see' you then, and in the meantime please see my main blog post for August below (… on water on Mars). Thereafter, I hand over to my co-author John for the September blog! God bless all.
Graham Swinerd
Southampton, UK
August 2024
Graham writes ... “When I consider your heavens, the work of your fingers, the moon and the stars, which you have set in place, …”: Psalm 8.

As we have commented before on this blog page, Mars has not always been the arid desert that we see today. The confirmation of this view from data acquired by orbiting, imaging spacecraft is overwhelming, with clear evidence of water erosion and features such as river deltas and lakes. See, for example, my blog post in February 2021 (just click on that date in the blog archive list on the right-hand side of this page), concerning the imminent adventures of NASA's Perseverance rover as it set out to explore what was once a Martian lake bed. The second post in March 2021 looks more generally at the question of life elsewhere in the Universe.

Coming back to Mars however, we can ask 'where has all the water gone?'. The planet is small – about half the size of the Earth – and the consequence of this is that Mars' gravity field was not strong enough to retain the atmosphere that it had more than 3 billion years ago when it was a 'water world'. As the atmosphere slowly leaked away into space, the conditions were set for the surface water to evaporate rapidly (in geological terms).

Recently, a groundbreaking discovery has added a new layer of intrigue to Mars – the presence of liquid water deep beneath its surface. This finding, made possible through the detailed analysis of seismic data from NASA's InSight lander, marks a significant milestone in our understanding of Mars and its potential to support life. The InSight lander, which touched down on Mars in 2018, was equipped with a seismometer that recorded vibrations from marsquakes over four years. By carefully analysing these seismic waves, scientists were able to detect the presence of liquid water reservoirs located approximately 10 to 20 kilometres below the Martian surface – a process that is often used here on planet Earth to detect oil or water deposits underground. This discovery is particularly significant because it provides the first direct evidence of liquid water on Mars, beyond the water previously identified frozen in Mars' ice caps. The amount of water discovered is staggering – enough to uniformly cover the planet's surface to a depth of more than a kilometre. There is speculation that this underground water was there in Mars' early history when surface water was plentiful, and that its underground location sustained it as the surface was transformed into an arid landscape.

So, why does all this matter? Well, as the astrobiologists will tell you (or any other biologist, come to that …), water is a crucial element for life as we know it. The presence of liquid water on Mars opens up new possibilities for the planet's habitability. While the surface of Mars is a cold, arid desert, these underground reservoirs could potentially harbour microbial life. Moreover, any such underground life would likely be quarantined from Earth-based life, so providing an uncontaminated environment in which to try to understand how life began (both on Mars and the Earth). It is also clearly a great resource for future missions with the objectives of exploring and possibly colonizing Mars – access to water would be vital for sustaining human life and supporting agricultural activities on the planet. However, before we get carried away with all this, it is obvious that accessing these deep reservoirs poses significant challenges. The water is buried deep within the Martian crust, making it difficult to reach with current know-how.
Future missions will need to take with them advanced drilling technology to tap into these resources. Additionally, the harsh conditions on Mars, including a global average temperature of -50 degrees Celsius, a harsh surface radiation environment (Mars has no protective magnetosphere) and surface dust that is potentially toxic to humans, present further challenges that need to be overcome!
If you would like to hear more on this, click here to hear the '5 Questions on' podcast: 'Huge reservoirs of water deep inside Mars' (7 minutes), with the BBC's science correspondent Victoria Gill talking with Michael Daventry.

Graham Swinerd
Southampton, UK
August 2024

John writes … The heading of this blog post takes us back to the last words of my previous outing on these pages, in which I wrote about the role of cold weather in regulating aspects of plant growth and development. Seeds of most plants growing in cool temperate regions are dormant – unable to germinate – when they are shed from the parent plant. In many species, dormancy is broken by an exposure to cold conditions, as I discussed in more detail in May. As also mentioned in that previous post, this is equally true of leaf and flower buds in biennial and perennial plants: in technical terms, the buds have to undergo a period of vernalisation (you will probably already know the word vernal, which refers to things that happen in Spring, such as the vernal equinox). Recent work by Prof Caroline Dean and Prof Martin Howard at the John Innes Centre in Norwich has started to unravel the mechanisms involved in vernalisation of flower buds. In autumn, the flower buds are dormant because the flowering process is held in check by the activity of a repressor gene. The activity of the repressor gene is sensitive to cold and so, during the winter, the gene is slowly switched off and eventually the genes that regulate flowering are able to work.

OK then, plants have avoided leaf bud-burst or flowering at an inappropriate time but, as Spring arrives, what is it that actually stimulates a tree to come into leaf or induces flowering in a biennial or perennial plant? Spring is characterised by several changes in a plant's environment, but the two most important are the increasing daytime temperature and the steady, day-by-day increase in daylength. As for temperature, it is clearly the major trigger for Spring-flowering plants. It is often said that Spring comes much earlier than it used to (even though, astronomically, the date of the equinox remains unchanged!). That observation was one of the catalysts for my writing these two blog posts and it has now been borne out by the data. A recent research project carried out at Cambridge on the effects of climate change showed that, across a range of 406 Spring-flowering trees, shrubs and non-woody plants, flowering now occurs a month earlier than it did in the mid-1980s (1). This ties in with my memories of Spring in Cambridge (I hope you'll excuse a bit of reminiscing): when I was a student, the banks of the Cam were decorated with crocus flowers at the end of the Lent term, before we went home for Easter; when we came back for the summer term, it was daffodils that dominated the same banks. Now, the crocuses flower in the middle of the Lent term and the daffodils are in bloom at the end of that term.

I will return to the induction of flowering later but now want to think about trees and shrubs coming into leaf. The situation is nicely illustrated by the old saying about oak (Quercus robur & Quercus petraea) and ash (Fraxinus excelsior): 'Ash before Oak, we're in for a soak; Oak before Ash, we're in for a splash'. In colder, wetter Springs, oak budburst was delayed in comparison to ash and, in thinking about the weather, a cold wet Spring was believed to presage a wet summer (a 'soak'). The folklore illustrates that budburst in oak is temperature-dependent. But what about ash?
Its coming into leaf occurs at more or less the same time each year because the main trigger is increasing day-length. Thus, plants (or at least ash trees) have a light detection mechanism that can in some way measure the length of the light period. I need to add that, because of climate change, these days oak is nearly always before ash in respect of leaf production.

Going back to the nineteenth century, Darwin's experiments on the effects of unilateral illumination clearly showed that plants bent towards the light because of differences in growth rate between the illuminated and non-illuminated sides (2). This phenomenon is known as phototropism and shows that light can affect plant growth in a way not directly connected with photosynthesis. This added to previously established knowledge that plants grown in the dark or in very deep shade grew tall and spindly ('etiolated') and made little or no chlorophyll. Transfer of etiolated plants into the light slowed down the vertical growth rate and also led to the synthesis of chlorophyll, again showing that light can affect plant growth and development. These phenomena, and many others, lead us to think that plants must possess light receptors which are able to transduce the perception – and even quantification – of light into effects on growth. Further, these days we would say the effects on growth indicate effects on the expression of genes that control growth.

The role of chlorophyll as a photo-reactive molecule, active in photosynthesis, was well known, but the effects I am describing cannot be ascribed to chlorophyll since they can occur in its absence. The first of these non-chlorophyll photo-reactive molecules was discovered at the famous Beltsville Agricultural Research Centre in Beltsville, Maryland, USA, where Sterling Hendricks and Harry Borthwick showed that red light was particularly effective in promoting several light-dependent developmental processes and that this promotion was reversed by far-red light. They proposed that plants contained a photo-reversible light-detecting molecule that was responsible for transduction of the perception of light into effects on growth and development. Cynics named this as yet unknown light receptor a 'pigment of our imagination', but Hendricks and Borthwick were proved right in 1959, when a pigment which had the predicted properties was identified by Warren Butler and Harold Siegelman. Butler called the pigment phytochrome, which simply means plant colour or plant pigment. Over subsequent decades it has become clear that plants possess several subtly different variants of phytochrome, each with a specific role, and there is no doubt that these are major regulators of the effects of light on plant growth and development.

However, as research progressed, it became apparent that not all the effects of light could be attributed to photoreception in the red/far-red region of the spectrum. There must be others, particularly sensitive to light at the blue end of the spectrum (as Charles Darwin had suggested in the 1880s!). At the time of detailed analysis of the effects of blue light, the receptors were unknown – and hence were given the name cryptochrome: hidden colour/pigment. Three cryptochrome proteins were eventually identified, from the 1990s onwards. And there's more! The overall situation is summarised in the diagram which is taken from a paper by Inyup Paik and Enamul Huq (4). It is clear that plants are able to respond to variations in light quality and intensity right across the spectrum.
They cannot move away from their light environment but have evolved mechanisms with which to respond to it. That brings me back to flowering. While there is no doubt that many spring-flowering plants are responsive mainly to ambient temperature (as described earlier) and are thus neutral with regard to day-length, there are many plants which have specific day-length requirements. These are typified by summer-flowering plants such as sunflower (Helianthus annuus) and snapdragon (Antirrhinum), which need n days in which daylight is longer than 12 hours (n differs between different species of long-day plants). Similarly, plants that flower in late summer or autumn, such as Chrysanthemum, require n days in which there are fewer than x hours of daylight. Note that x may be above 12 hours, but the key requirement is that days are shortening.

So, the next time you are outside and thinking about your light environment (will it be sunny or cloudy?), just stop to ponder about the marvellous light response mechanisms that are happening in the plants all around you.

John Bryant
Topsham, Devon
July 2024

PS: For those who want to read more about plant function and development, this book has been highly recommended!

(1) Ulf Büntgen et al (2022), Plants in the UK flower a month earlier under recent warming.
(2) As published in his 1880 book The Power of Movement in Plants.
(3) Yang, X., et al (2009).
(4) Inyup Paik and Enamul Huq (2019), Plant photoreceptors: Multi-functional sensory proteins and their signalling networks.

Graham writes ... Those of you who are regular visitors to this blog page may recall a post (1) in August 2023 when the so-called 'Crisis in Cosmology', or more formally what the cosmologists call 'the Hubble tension', was introduced and discussed. If you are not, then may I suggest that you have a read of the previous post to get a feel for the nature of the issue raised? Also please note that some sections of the previous post have been repeated here to make a coherent story.

It concerns the value of an important parameter which describes the current rate of expansion of the Universe called Hubble's constant, which is usually denoted by H0 (H subscript zero). This is named after Edwin Hubble, the astronomer who first experimentally confirmed that the Universe is expanding. The currently accepted value of H0 is approximately 70 km/sec per Megaparsec. As discussed in the book (2) (pp. 57-59), Hubble discovered that distant galaxies were all moving away from us, and the further away they were the faster they were receding. This is convincing evidence that the Universe is, as a whole, expanding (2) (Figure 3.4). The value of H0 above says that the speed of recession of a distant galaxy increases by 70 km/sec for every Megaparsec of distance. As explained in (1), a Megaparsec is roughly 3,260,000 light years.

Currently there are two ways to establish the value of H0. The first of these, sometimes referred to as the 'local distance ladder' (LDL) method, is the most direct and obvious. This is essentially the process of measuring the distances and rates of recession of many galaxies, spread across a large range of distances, to produce a plot of points as shown below. The 'slope of the plotted line' gives the required value of H0. The second method employs a more indirect technique using the measurements of the cosmic microwave background (CMB). As discussed in the book (2) (pp. 60-62) and in the May 2023 blog post, the CMB is a source of radio noise spread uniformly across the sky that was discovered in the 1960s. At that time, it was soon realised that this was the 'afterglow' of the Big Bang. Initially this was very high energy, short wavelength radiation in the intense heat of the early Universe, but with the subsequent cosmic expansion, its wavelength has been stretched so that it currently resides in the microwave (radio) part of the electromagnetic spectrum. The most accurate measurements we have of the CMB were acquired by the ESA Planck spacecraft, named in honour of the physicist Max Planck who was a pioneer in the development of quantum mechanics (as an aside, I couldn't find a single portrait of Max smiling!). The 'map' of the radiation produced by the Planck spacecraft is partially shown below, projected onto a sphere representing the sky. The temperature of the radiation is now very low, about 2.7 K (3), and the variations shown are very small – at the millidegree level (4). The red areas are the slightly warmer, denser regions and the blue slightly cooler. This map is a most treasured collection of cosmological data, as it represents a detailed snapshot of the state of the Universe approximately 380,000 years after the Big Bang, when the cosmos became transparent to the propagation of electromagnetic radiation.
To estimate the value of H0 based on the CMB data, cosmologists use what they refer to as the Λ-CDM (Lambda-CDM) model of the Universe (5) – this is what I have called 'the standard model of cosmology' in the book (2) (pp. 63-67, 71-76). The idea is to use the CMB data as the initial conditions, noting that the 'hot' spots in the CMB data provide the seeds upon which future galaxies will form. The Λ-CDM model is then evolved forward using computer simulation to the present epoch. This is done many times while varying various parameters, until the best fit to the Universe we observe today is achieved. This allows us to determine a 'best fit value' for H0, which is what we refer to as the CMB value. For those interested in the detail, please go to (1).

The 'crisis' referred to above arose because the values of H0 determined by each method do not agree with each other: H0 = 73.0 km/sec per Mpc (LDL) and H0 = 67.5 km/sec per Mpc (CMB). Not only that, but the discrepancy is statistically very significant, with no overlap of the estimated error bounds of the two estimates. So how can this mismatch between the two methodologies be resolved? It was soon realised that the implication of this disparity was that either (a) the LDL method for estimating cosmic distances is flawed, or (b) our best model of the Universe (the Λ-CDM model) is wrong. Option (b) on the face of it sounds like a bit of a disaster, but since the birth of science centuries ago this has been the way that it makes progress. The performance of current theories is compared to what is going on in the real world, and if the theory is found wanting, it is overthrown and a new theory is developed. And in the process, of course, there is the opportunity, in this case, to learn new physics.

Looking at the options, it would seem that the easier route is to check whether we are estimating cosmic distances accurately enough. Fortunately, we have a shiny new spacecraft available, that is, the James Webb Space Telescope (JWST), to help in the task. The LDL method of estimating H0, as described above, looks pretty straightforward, but it is not as easy as it sounds – measuring huge distances to remote objects in the Universe is problematic. The metaphor of a ladder is very apt, as the method of determining cosmological distances involves a number of techniques or 'rungs'. The lower rungs represent methods to determine distances to relatively close objects, and as you climb the ladder the methods are applicable to determining larger and larger distances. The accuracy of each rung is reliant upon the accuracy of the rungs below, so we have to be sure of the accuracy of each rung as we climb the ladder. For example, the first rung may be parallax (accurate out to distances of 100s of light years), the second rung may be using Cepheid variable stars (2) (p. 58) (good for distances of 10s of millions of light years), and so on. Please see (1) for details. The majority of these techniques involve something called 'standard candles'. These are astronomical bodies or events that have a known absolute brightness, such as Cepheid variable stars and Type Ia supernovae (the latter can be used out to distances of billions of light years). The idea is that if you know their actual brightness, and you measure their apparent brightness as seen from Earth, you can estimate their distance.
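To make that last step concrete, here is a minimal Python sketch of the 'standard candle' logic (the numbers are purely illustrative, not the actual survey data): the difference between apparent and absolute magnitude gives the distance via the inverse-square law (the 'distance modulus'), and Hubble's law, v = H0 × d, then turns a measured recession velocity into an estimate of H0.

```python
def distance_mpc(apparent_mag, absolute_mag):
    """Distance modulus m - M = 5*log10(d_pc) - 5, rearranged for the distance.
    Returns the distance in Megaparsecs."""
    d_parsec = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_parsec / 1.0e6

# Illustrative (made-up) numbers for a single Type Ia supernova and its host galaxy:
M = -19.3       # assumed absolute magnitude of a Type Ia supernova at peak brightness
m = 16.7        # hypothetical measured apparent magnitude
v = 11200.0     # hypothetical recession velocity of the host galaxy, km/sec

d = distance_mpc(m, M)   # roughly 160 Mpc for these numbers
H0 = v / d               # Hubble's law v = H0 * d, so H0 = v / d
print(f"distance = {d:.0f} Mpc, H0 = {H0:.1f} km/sec per Mpc")
```

In the real LDL analysis this is, of course, done for a whole sample of galaxies, and H0 comes from the slope of the velocity-distance plot rather than from a single object.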
It is also interesting to note that a difference of 0.1 magnitude in the assumed absolute magnitude of a 'standard candle' leads to a discrepancy in its estimated distance, and hence to roughly a 5% difference in the value of H0. In other words, a value of H0 = 73 versus H0 = 69! It would seem the route of investigating the accuracy of estimating cosmic distances is fertile ground for a variety of reasons. And this is exactly what Wendy Freedman and her team of researchers at the University of Chicago did. However, I should say that the results that now follow are not peer-reviewed, and therefore may change. The story henceforth is based on a 30-minute conference paper presentation at the American Physical Society meeting in April 2024. Interestingly, the title of her paper was "New JWST Results: is the current tension in H0 signalling new physics?", which suggests that the original intention, at the time of the submission of the paper's title and abstract, was to focus on option (b) as mentioned above – in other words, looking at the implications of the standard model of the Universe being wrong. But in fact the focus is on (a) – an investigation of the accuracy of measuring distances. I can identify with this – when the conference deadline is so early that you're not sure yet where your research is going!

So, what did Freedman's team do and achieve? They used two different 'standard candles' to recalibrate the distance ladder, with encouraging results. The first of these are TRGB (Tip of the Red Giant Branch) stars. Without going into all the details, this technique assumes that the brightest red giant stars have the same luminosity and can therefore be used as a 'standard candle' to estimate galactic distances. The second class is referred to as JAGB (J-region Asymptotic Giant Branch) stars, a class of carbon-rich stars that have near-constant luminosities in the near-infrared part of the electromagnetic spectrum. Clearly, these are useful as standard candles, and are also good targets for the JWST, which is optimised to operate in the infrared. The team observed Cepheid variable, TRGB and JAGB stars in galaxies near enough for the JWST to be able to distinguish individual stars, in order to determine the distances to these galaxies. Encouragingly, each class of object gave consistent results for the test galaxies. Once a reliable distance to a particular galaxy was found, the team was able to recalibrate the supernova 'standard candle' data, which could then be used to re-determine the distances to very distant galaxies. After all that, they were able to recalculate the current expansion rate of the Universe as H0 = 69.1 ± 1.3 km/sec per Mpc. The results of the study are encapsulated in the diagram below, which shows that the new result agrees with the CMB data calculation (labelled 'best model of the Universe result' in the diagram) within statistical bounds.

So, is that the end of the story? Well, as regards this study, it is yet to be peer-reviewed so things could change. Another aspect is that the apparent success here may encourage other groups to look back at their (predominantly Hubble Space Telescope) data to recalibrate their previous estimates of galactic distances. So, I think this has a long way to run yet, but for now the Freedman team should be congratulated on their efforts to ease the so-called 'crisis in cosmology'!
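As a quick postscript, the sensitivity quoted at the top of this post (0.1 magnitude translating into roughly 5% in H0) is easy to check for yourself. A back-of-envelope sketch, using only the numbers already mentioned above:

```python
# Back-of-envelope check (illustrative only, not the teams' actual analysis):
# an offset dm in a standard candle's magnitude changes the inferred distance
# by a factor of 10**(dm/5), and since H0 = v / d, H0 scales by the inverse factor.
dm = 0.1
distance_factor = 10 ** (dm / 5)        # about 1.047, i.e. about 5% in distance
H0_ldl = 73.0                           # km/sec per Mpc, the LDL value quoted above
H0_shifted = H0_ldl / distance_factor   # about 69.7 km/sec per Mpc
print(f"{distance_factor:.3f}, {H0_shifted:.1f}")
```

which is consistent with the '73 versus 69' comparison made above.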
Graham Swinerd
Southampton, UK
June 2024

(1) Blog post August 2023, www.bigbangtobiology.net.
(2) Graham Swinerd & John Bryant, From the Big Bang to Biology: where is God?, Kindle Direct Publishing, November 2020.
(3) The Kelvin temperature scale is identical to the Celsius scale but with zero Kelvin at absolute zero (-273 degrees Celsius). Hence, for example, water freezes at +273 K and boils at +373 K.
(4) A millidegree is one thousandth of a degree.
(5) Here CDM stands for cold dark matter, and the Greek upper-case Lambda (Λ) refers to Einstein's cosmological constant, which governs the behaviour of dark energy.

John writes …

It's all about the tilt

As we have mentioned before, Earth is a planet with 'added interest', namely the existence of different seasons, caused by the tilt of its axis (the 'obliquity of the ecliptic') relative to the orbit plane, as shown in the picture below. The further we move from the Equator towards the poles, the greater the inter-seasonal differences in temperature and daylength. Indeed, at the poles, daylength varies from total darkness around the winter solstice to 24 hours of daylight around the summer solstice.

But we have a problem

In northern Europe, Spring is the most spectacular season, with the very obvious environmental change from the mainly brown hues of winter to the vibrant greens of new leaves and the range of glorious colours of myriad Spring-flowering plants. The soundscape becomes punctuated by birdsong and there is a general impression that the natural world is waking up. But this typifies the problem alluded to in the sub-heading. We think of natural selection as the selection of genetic variants that are best suited to their environment, leading to greater reproductive success. However, the environment is not constant but is subject to the seasonal variations that I have already alluded to, albeit that those variations are regular in the annual cycle. Thus, organisms living in, for example, Finland are subject to very different climatic pressures from those living in, for example, Kenya (I do not intend to discuss the changes in distribution and location of landmasses over geological time – this adds another, albeit very long-term, perspective to the discussion). In this post and in my next contribution (probably in July), I want to discuss some of the features of various plants and animals that enable them to flourish in regions with significant seasonal variations.

Out in the cold

Moving quickly from Spring to Autumn, think of a tree, growing in northern Europe, that has produced seeds in September or October. The weather is still warm, warm enough to support seed germination and early seedling growth. However, the seeds do not germinate even when conditions seem ideal. We say that the seeds are dormant. Now let us move forward to the start of the new year. The weather is cold, the temperature may be below 0°C (depending on where you are) and indeed, there may be snow on the ground. These are not ideal conditions for young seedlings and thus it is a good thing that those seeds did not germinate. Allow me now to introduce the Norway Maple (Acer platanoides), a beautiful tree that has been planted in many parks in the UK. There is a particularly fine specimen in Ashton Park, Bristol. Earlier in my career, Dr Robert Slater and I studied the regulation of genes in relation to dormancy and germination in this species.
It was already known that in order to break the dormancy, seeds needed to be kept moist for about 100 days at temperatures of 5°C or lower (a process known as stratification). Only then are the seeds able to respond when conditions such as soil temperature become favourable for germination. This means that in many parts of its range, established after the most recent glaciation, the seeds of Norway Maple rarely or never germinate, and that includes specimens planted as ornamentals in parks in southern Britain. Further, this problem (for the tree) is exacerbated by climate change.

One of the things that happens during stratification is a change in the ratio of the concentrations of growth-inhibitory hormones to those of growth-promoting hormones. The changing ratio is in effect a measure of the length of the cold exposure. One of our key findings was that the genes associated with germination and growth are not active until an appropriate ratio of growth regulators has been reached; thus we saw a flurry of gene activity at the end of the period of stratification. I need to say that Norway Maple seeds are not unique in requiring exposure to low temperatures before they can germinate. The seeds of many plant species native to north temperate regions, including the region's tree species, exhibit the same trait, although few require a cold exposure as long as that required by the Norway Maple.

Further, it is not only seeds that require a cold exposure before becoming active. Think now of biennial plants, plants that flower in the second of their two years of life. The buds which give rise to flowers in year two will only do so after the plant has gone through a cold period; exposure to cold thus primes the floral buds to become active, a process which is known as vernalisation. Thus in my garden, the Purple Sprouting Broccoli plants that I am growing will not produce their purple sprouts (i.e., flowering shoots) until next Spring.
In summary, the seeds and plants that I have described cannot respond to the increasing ambient temperatures in Spring until they have experienced (and 'measured' that experience) the harsher conditions of the previous winter. Natural selection has thus led to the development of mechanisms that facilitate flourishing in areas with marked seasonal variation. 'For everything, there is a season'.

John Bryant
Topsham, Devon
May 2024

Graham and John write ... We co-hosted the 6th Lee Abbey conference in the 'Big Bang to Biology' series during the week of 18-22 March 2024. This week coincided with the Spring Equinox, heralding the beginning of Spring and presenting a wonderful opportunity to witness once again the renewal of the natural world. And what a great place to do it! Lee Abbey is a Christian retreat, conference and holiday centre situated on the North Devon coast near Lynton, which is run by an international community of predominantly young Christians. The main house nestles in a 280-acre (113 hectare) estate of beautiful farmland, woodland and coastal countryside, with its own beach at Lee Bay. The South West Coast Path passes through the estate and the Exmoor National Park is just a short drive away. The meeting took place before the Springtime clock change, so that nightfall occurred at a reasonable time (not too late) in the evening. However, Graham's one disappointment during the week was that cloud obscured the amazing night sky each evening! The site is on the edge of the Exmoor National Park Dark Sky Reserve.

We were pleased to welcome over 50 guests to the conference, who had booked in for a Science and Faith extravaganza. One of the joys of Lee Abbey is that speakers and delegates share the whole experience; meeting, eating and talking together for the whole week. This group of guests were a delight, which made the week a pleasure, coming as they did with an enthusiasm for the topic and a hunger to learn more and to share their own thoughts and experiences in discussion. It was also great to welcome Liz Cole back to the conference, with her new publication 'God's Cosmic Cookbook' (1) – cosmology for kids!

The usual format for the conference is to have two one-hour sessions each morning. Graham kicked off on Tuesday morning with sessions on the limitations of science (2) and the remarkable events of the early Universe (3), followed on Wednesday by a presentation on the fine-tuning and bio-friendliness of the laws that govern the Universe, combined with the story of his own journey of faith (4). During the second session, John followed on with a discussion of the equally remarkable events needed for the origin of life and for life's flourishing on planet Earth (5). In the first session on Thursday morning, John posed a question – are we 'more than our genes'? – involving a discussion of human evolution and what it means to be human (6). This was followed by an hour's slot to give participants the opportunity to receive prayer ministry. The final event was an hour-long Q&A session in the late afternoon on Thursday. This was both a pleasure and a challenge for us speakers, with some piercing theological questions asked, as well as the scientific ones! After the efforts of the mornings, delegates have free afternoons to enjoy the delights of the Lee Abbey Estate and the adjoining Valley of Rocks, followed by entertainment in the evening. Afternoon activities, such as local walks, are often arranged for guests by the community.
During the week additional events were also arranged. On Tuesday afternoon, Estate Manager Simon Gibson invited guests to visit the Lee Abbey farm to see the lambing and calving activities. Later that afternoon John offered an optional workshop on 'Genes, Designer Babies and all that stuff'. Wednesday saw an entertaining presentation in the late afternoon by Dave Hopwood on Film and Faith, and on Thursday afternoon John led a guided walk to see the local flora, fauna and geology of the estate, Lee Bay and the Valley of Rocks. Thank you to all who booked in and made the experience so worthwhile.
Graham Swinerd, Southampton, UK
John Bryant, Topsham, Devon, UK
26 March 2024

(1) Elizabeth Cole, God's Cosmic Cookbook: your complete guide to making a Universe, Hodder & Stoughton, 2023.
(2) Graham Swinerd and John Bryant, From the Big Bang to Biology: where is God?, Kindle Direct Publishing, November 2020, Chapter 2.
(3) Ibid., Chapter 3.
(4) Ibid., Chapter 4.
(5) Ibid., Chapter 5.
(6) Ibid., Chapter 6.

Graham writes ... The real identity of what was thought to be a rather uninteresting nearby star was revealed in an article in Nature Astronomy on 19 February. The star, which is labelled J0529-4351, was first catalogued in a Southern Sky Survey dating back to 1980 (the rather uninteresting name is derived from the object's celestial coordinates). Also, more recently, automated analysis by ESA's Gaia spacecraft catalogued it as a star. The lead author of the article, Christian Wolf, is based at the Australian National University (ANU), and it was his team that first recognised the star's true identity as a quasar last year, using the ANU's 2.3 metre telescope at Siding Spring Observatory. As an aside, there is a relatively local link with this telescope, as it was once housed at Herstmonceux, Sussex, as part of the Royal Greenwich Observatory. The true nature of the 'star' was further confirmed using the European Southern Observatory's Very Large Telescope in Northern Chile. You can see the ANU's press release here.

So, what's a quasar, you might ask? A quasar (standing for 'quasi-stellar object') is an extremely luminous active galactic nucleus. When first discovered in the 1960s, these star-like objects were known by their red-shift to be very distant from Earth, which meant that they were very energetic and extremely compact emitters of electromagnetic radiation. At that time, the nature of such an object with these characteristics was a subject for speculation. However, over time, we have discovered that super-massive black holes at the centre of galaxies are ubiquitous throughout the Universe. Indeed, we have one at the centre of our own Milky Way galaxy with a mass of 4 million Suns, just 27,000 light years away. So, when we observe a quasar, the light has taken billions of years to reach us, and we are seeing them as they were in the early Universe when galaxies were forming. In those early structures, ample gas and matter debris was available to feed the forming black holes, creating compact objects with a prodigious energy output.

And this is what J0529-4351 turned out to be. This object's distance was estimated to be 12 billion light years (a red-shift of z = 4), with a mass of approximately 17 billion Suns, hence dwarfing our Milky Way black hole. As black holes feed on the surrounding available material (gas, dust, stars, etc.) they form an encircling accretion disk, in which the debris whirlpools into the centre. Velocities of material objects in this disk approach the speed of light, and the emission from the quasar radiates principally from this spiralling disk. The accretion disk of J0529-4351 is roughly 7 light years in diameter, with the central black hole consuming just over one solar mass per day. This gives the object an intrinsic brightness of around 500 trillion Suns, making it the brightest known object in the Universe. Hopefully, like me, you are awed by the characteristics of this monstrous black hole that provided this amazing light show 12 billion years ago!
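As a rough plausibility check of those last two numbers (my own back-of-envelope estimate, not taken from the Nature Astronomy paper), an accretion rate of about one solar mass per day, radiated with a typical assumed efficiency of around 10%, does indeed give a luminosity of roughly 500 trillion Suns:

```python
# Order-of-magnitude check only; the 10% radiative efficiency is an assumed,
# typical textbook value for accretion onto a black hole, not a figure from the paper.
M_SUN = 1.99e30     # kg, mass of the Sun
L_SUN = 3.8e26      # W, luminosity of the Sun
C = 3.0e8           # m/s, speed of light
DAY = 86400.0       # seconds

mdot = M_SUN / DAY                 # accretion rate: one solar mass per day, ~2.3e25 kg/s
efficiency = 0.1                   # assumed fraction of rest-mass energy radiated away
L = efficiency * mdot * C ** 2     # ~2e41 W
print(f"L = {L:.1e} W = {L / L_SUN:.1e} Suns")   # ~5e14 Suns, i.e. ~500 trillion
```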
Apologies that this month’s blog is shorter than usual, mainly because of preparation work required for next month’s conference at Lee Abbey, Devon, UK (see information on the home page). For those booked in, we are really looking forward to meeting you all next month!
John has also been busy posting other item(s) of interest on our 'Big Bang to Biology' Facebook page.

Graham Swinerd
Southampton, UK
February 2024

John writes …

A century (nearly) of antibiotics

It is one of those things that 'every school pupil knows': Alexander Fleming discovered penicillin in 1928. At the time, he thought that his discovery had no practical application. However, in 1939, a team at Oxford, led by Howard Florey, started to work on purification and storage of penicillin and, having done that, conducted trials on animals followed by clinical trials with human patients. The work led to its use amongst Allied troops in World War II and, after the war, in more general medical practice. Its detailed structure was worked out by Dorothy Hodgkin, also at Oxford, in 1945, and that led to the development of a range of synthetic modified penicillins which are more effective than the natural molecule. This leads us to think about penicillin's mode of action: it works mainly by disrupting the synthesis of the bacterial cell wall, which may in turn lead to the autolysis (self-destruction) of the cell. Other antibiotics have since been developed which target other aspects of bacterial metabolism. In my own research on genes, I have used rifampicin, which inhibits the transcription (copying) of genes into messenger RNA (the working copy of a gene), and chloramphenicol, which prevents the use of messenger RNA in the synthesis of proteins (1). However, antibiotics in the penicillin family are by far the most widely used (but see next section).

But there's a problem

Bacteria can be grouped according to whether they are 'Gram-positive' or 'Gram-negative'. Hans Christian Gram was a Danish bacteriologist who developed a technique for staining bacterial cells – the Gram stain. Species which retain the stain, giving them a purple colour, are Gram-positive; species which do not retain the stain are Gram-negative. We now know that this difference is caused by differences in the cell's outer layers. Gram-positive bacteria have a double-layered cell membrane (the plasma-membrane), surrounded by a thick cell wall; Gram-negative bacteria also have a double-layered plasma-membrane which is surrounded by a thinner cell wall and then another double-layered membrane. It is this structure that prevents the stain from entering the cells and which also leads to antibiotics in the penicillin family being totally or partially ineffective. Like many molecular biologists, I have used non-pathogenic strains of a Gram-negative bacterium, Escherichia coli (E. coli). It is the bacterial model used in research and has also been used to 'look after' genes which were destined for use in genetic modification. By contrast, pathogenic strains of this species can cause diarrhoea ('coli' indicates one of its habitats – the colon), while more seriously, some Gram-negative bacteria may cause pneumonia and sepsis. As already indicated, penicillin derivatives are mostly ineffective against Gram-negative bacteria, and some of the effective antibiotics which have been developed have quite serious side effects.

And an even bigger problem

It is a truth universally acknowledged that an organism better adapted to an environment will be more successful than one that is less well adapted. This is of course a slightly 'Austenesque' way of talking about natural selection.
Suppose then, that antibiotics are so widely used in medical, veterinary, or agricultural settings that they effectively become part of the environment in which bacteria live. Initially there will be a small number of bacteria that, for a number of reasons, are resistant to antibiotics. For example, some are able to de-activate a particular antibiotic and some can block the uptake of an antibiotic. Whatever the reason for the resistance, these resistant bacteria will clearly do better than the non-resistant members of the same species. The resistance genes will be passed on in cell division and the resistant forms will come to dominate the population, especially in locations and settings where antibiotics are widely used. And that, dear reader, is exactly what has happened. Antibiotic resistance is now widespread, especially in Gram-negative, disease-causing bacteria, as is seen in the WHO priority list of 12 bacterial species/groups about which there is concern: nine of these are Gram-negative. The situation, already an emergency, is now regarded as critical in respect of three Gram-negative bacteria. Thinking about this from a Christian perspective, we might say that antibiotics are gifts available from God's creation, but humankind has not used those gifts wisely.

But there is hope

The growing awareness of resistance has catalysed a greatly increased effort in searching for new antibiotics. There are many thousands of organisms 'out there' remaining to be discovered, as is evident from a recent announcement from Kew about newly described plant and fungal species. It is equally likely that there are naturally occurring therapeutic chemicals, including antibiotics, also remaining to be discovered, perhaps even in recently described species (remembering that penicillin is synthesised by a fungus). There are also thousands of candidate-compounds that can be made in the lab. Concerted, systematic, computer-aided high-throughput searches are beginning to yield results and several promising compounds have been found (2). Nevertheless, no new antibiotics that are effective against Gram-negative bacteria have been brought into medical or veterinary practice for over 50 years.

However, things may be about to change. On January 4th, a headline on the BBC website stated 'New antibiotic compound very exciting, expert says' (3). The Guardian newspaper followed this with 'Scientists hail new antibiotic that can kill drug-resistant bacteria' (4). The antibiotic is called Zosurabalpin and it was discovered in a high-throughput screening programme (as mentioned earlier) that evaluated the potential of a large number of synthetic (lab-manufactured) candidate-compounds. In chemical terminology Zosurabalpin is a tethered macrocyclic peptide, an interesting and quite complex molecule. However, its most exciting features are firstly its target organisms and secondly its mode of action. Both of these are mentioned in the news articles that I have referred to and there is a much fuller account in the original research paper in Nature (5). Referring back to the WHO chart, we can see that one of the 'critical' antibiotic-resistant bacteria is Carbapenem-resistant Acinetobacter baumannii (known colloquially as Crab). Carbapenem is one of the few antibiotics available for treating infections of Gram-negative bacteria, but this bacterial species has evolved resistance against it (as described above). This means that it is very difficult to treat pneumonia or sepsis caused by Crab.
The effectiveness of Zosurabalpin against this organism is indeed very exciting. Further, I find real beauty in its mode of action, in that it targets a feature that makes a Gram-negative bacterium what it is. One of the key components of the outer membrane is a complex molecule called a lipopolysaccharide, built of sugars and fats. The antibiotic disrupts the transport of the lipopolysaccharide from the cell to the outer membrane, which in turn leads to the death of the cell.

Figure: The outer layers of a Gram-negative bacterial cell in more detail, showing the inner membrane, the peptidoglycan cell wall and the outer membrane. The essential lipopolysaccharides mentioned in the text are labelled LPS. The new antibiotic targets the proteins that carry the LPS to their correct position. Diagram modified from original by European Molecular Biology Lab, Heidelberg.

Zosurabalpin has been used, with a high level of success, to treat pneumonia and sepsis caused by A. baumannii in mice. Further, trials with healthy human subjects did not reveal any problematic side-effects. The next phase of evaluation will be Phase 1 clinical trials, the start of a long process before the new antibiotic can be brought into clinical practice.

John Bryant
Topsham, Devon
January 2024

(1) For anyone interested in knowing more about how antibiotics work, there is a good description here: Action and resistance mechanisms of antibiotics: A Guide for Clinicians – PMC (nih.gov).
(2) As discussed here: Antibiotics in the clinical pipeline as of December 2022 | The Journal of Antibiotics (nature.com).
(3) New antibiotic compound very exciting, expert says – BBC News.
(4) Scientists hail new antibiotic that can kill drug-resistant bacteria | Infectious diseases | The Guardian.
(5) A novel antibiotic class targeting the lipopolysaccharide transporter | Nature.

Graham writes … The Nobel Prize in Physics 2023 has been awarded jointly to Anne L'Huillier of Lund University in Sweden, Pierre Agostini of Ohio State University in the USA and Ferenc Krausz of the Ludwig Maximilian University in Munich, Germany, for their experimental work in generating attosecond pulses of light for the study, principally, of the dynamics of electrons in matter. Effectively, the three university researchers have opened a new window on the Universe, and this will inevitably lead to new discoveries. Before we think about the applications of this newly-acquired technique, we need to explore the Nobel Laureates' achievements in general terms.

Firstly, it would be good to know: what is an attosecond (as)? It is a very short period of time, which can be expressed in a variety of ways: 1 as = a billionth of a billionth of a second = 0.000 000 000 000 000 001 sec, or in 'science-speak', 10^-18 sec. Whichever way you think of it, it is an unimaginably short period of time. Other observers reporting on this have pointed out that there are more attoseconds in one second than there are seconds of time since the Big Bang! I don't know if that helps? In general terms, what the research has led to is the development of a 'movie camera' with a frame rate of the order of 10 million billion frames per second. Effectively this allows the capture of 'slow-motion imagery' of some of the fastest known physical phenomena, such as the movement of electrons within atoms.

If we think about objects in the macroscopic world, generally things happen on timescales related to size. For example, the orbits of planets around the Sun take years, and the time it takes for a human being to run a mile is measured in minutes and seconds. Another macroscopic object that moves very rapidly is a humming bird. Or rather, while it hovers statically to consume plant nectar, its wings need to flap around 70 times per second, which is once every 0.014 sec. Clearly, if we had a movie camera with a frame rate of, say, 25 frames per second, then the bird's wings would simply be a blur. To examine each wing beat in detail – to study how the bird achieves its steady hover – a minimum of maybe 10 frames per flap would be required. So something like 1,000 frames per second, at least, would be needed to allow an informative slow-motion movie of the bird's physical movements.

If we now consider the quantum world of particles and atoms, movements such as the dance of electrons in atoms are near-instantaneous. In 1925, Werner Heisenberg (one of the pioneers of quantum mechanics, and the discoverer of the now famous uncertainty principle) was of the view that the orbital motion of an electron is unobservable. In one sense he was correct. As a wave-particle, an electron does not orbit the atom in the same way as planets orbit the Sun. Rather, physicists understand electrons in terms of electron clouds, or probability waves, which quantify the probability of an electron being observed at a particular place and time.
However, the odds of an electron being here or there change at the attosecond timescale, so in principle our attosecond 'movie camera' can directly probe electron behaviour. I'm sure that Heisenberg would have been delighted to learn that he had underestimated the ingenuity of 21st Century physicists. However, the 'movie camera' we have been discussing is not a camera in the conventional sense. Instead, a laser beam with attosecond pulses is brought to bear on objects of interest (such as atoms within molecules, or electrons within atoms) to illuminate their frenzied movements. But how is this achieved?

The early work of L'Huillier's team prepared the scene for attosecond physics. They discovered that a low-frequency infra-red (heat radiation) laser beam passing through argon gas generated a set of additional high-energy 'harmonics' – light waves whose frequencies are multiples of the input laser frequency. The idea of harmonics is a very familiar one in acoustics, and in particular those generated by musical instruments. The quality of sound that defines a particular instrument (its 'timbre') is determined by its fundamental frequency combined with its main harmonic frequencies. In music, the amplitude (or loudness) of the higher harmonics tends to decrease as the frequency increases, but one striking characteristic that L'Huillier found was that the amplitude of the higher harmonics in the laser light did not die down with rising frequency.

The next step was to try to understand this observed behaviour. L'Huillier's team set about constructing a theoretical model of the process, which was published in 1994 (1). The basic idea is that the laser distorts the electric-field structure within the argon atom, which allows an electron to escape. This liberated electron then acquires energy in the laser field and, when the electron is finally recaptured by the atom, it releases the acquired energy in the form of an emitted photon (particle of light). This released energy generates the higher-frequency harmonic waveforms.

The next question was whether the light waves corresponding to the higher harmonic modes would interfere with one another to generate attosecond pulses. Interference is a commonly observed phenomenon, in which two or more electromagnetic (or acoustic) waveforms are combined to generate a resultant wave in which the displacement is either reinforced or cancelled. This is illustrated in the diagram below. In this example, two wave trains with slightly differing frequencies are combined to produce the resultant waveform (this occurrence is called 'beats' in acoustics). Notice that the maximum outputs in the lower, combined wave train occur when the peaks in the original waves coincide, and similarly a minimum occurs when the peaks and troughs coincide to cancel each other out. In the case of L'Huillier's experiments, for interference to occur a kind of synchronisation between the emission of different atoms is required. If the atoms do not 'collaborate' with each other, then the output will be chaotic. In 1996 the team demonstrated theoretically that the atoms (remarkably) do indeed emit phase-matched light, allowing interference between the higher harmonics to occur, so opening the door to the prospect of attosecond physics. The image below illustrates the generation of attosecond pulses as a consequence of the interference between the various higher harmonic wave train outputs.
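As a toy illustration of that last point (my own sketch, not the published calculation), simply adding together a set of phase-locked odd harmonics of equal amplitude (mimicking the harmonics that do not die down with frequency) turns a smooth wave into a train of short, sharp bursts; the more harmonics included, the shorter each burst becomes:

```python
import numpy as np

t = np.linspace(0.0, 4.0, 4000)   # time, in units of the fundamental period
fundamental = 1.0                 # fundamental frequency (arbitrary units)

def pulse_train(n_harmonics):
    """Sum of phase-locked odd harmonics, all with equal amplitude."""
    wave = np.zeros_like(t)
    for k in range(n_harmonics):
        harmonic = (2 * k + 1) * fundamental        # odd multiples: 1, 3, 5, ...
        wave += np.cos(2.0 * np.pi * harmonic * t)  # all peaks aligned at t = 0
    return wave

few = pulse_train(3)     # a broad, gently modulated waveform
many = pulse_train(15)   # narrow spikes, once per fundamental cycle: a pulse train

# The peak-to-background contrast grows with the number of harmonics included,
# which is the essence of how phase-matched high harmonics add up to give
# bursts far shorter than a single cycle of the driving laser.
print(np.max(np.abs(few)), np.max(np.abs(many)))
```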
Over the subsequent years, physicists have exploited these detailed insights to generate attosecond pulses in the laboratory. In 2001, Agostini's team produced a train of laser pulses, each of around 250 as duration. In the same year, Krausz's group used a different technique to generate single pulses, each of 650 as duration. Two years later, L'Huillier's team pushed the envelope a little further to produce 170 as laser pulses.

So, the question arises, what can be done with this newly acquired 'super-power'? Well, in general it will allow physicists to study anything that changes over a period of 10s to 100s of attoseconds. As discussed above, the first application was to try something that the physics community had long considered impossible – to see precisely what electrons are up to. In 1905 Albert Einstein was instrumental in spurring the early development of quantum mechanics with his explanation of the photoelectric effect. The photoelectric effect is essentially the emission of electrons from a material surface as a result of shining a light on it. He later won the 1921 Nobel Prize in Physics for this, and it is remarkable that he did not receive this accolade for the development of the theory of general relativity, which could be considered to be his crowning achievement. Einstein's explanation showed that light behaved not only as a wave, but also as a particle (the photon). The key to understanding was that the energy of the photoelectrons emitted from the surface was independent of the intensity of the light, but rather depended upon its frequency. This emission was considered to take place instantaneously, but Krausz's team examined the process using attosecond pulses and could accurately time how long it took to liberate a photoelectron.

The developments I have described briefly in this post suggest a whole new array of potential applications, from the determination of the molecular composition of a sample for the purposes of medical diagnosis, to the development of super-fast switching devices that could speed up computer operation by orders of magnitude – thanks to three physicists and their collaborators who explored tiny glimpses of time.
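As a final footnote, the comparison made at the start of this post (that there are more attoseconds in one second than seconds since the Big Bang) is easy to verify with rough figures, assuming an age of the Universe of about 13.8 billion years:

```python
# Quick arithmetic check, using approximate figures.
SECONDS_PER_YEAR = 3.156e7
age_of_universe_s = 13.8e9 * SECONDS_PER_YEAR   # roughly 4.4e17 seconds since the Big Bang
attoseconds_per_second = 1.0e18                 # one second contains 1e18 attoseconds
print(attoseconds_per_second > age_of_universe_s)   # True
```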
Graham Swinerd
Southampton, UK
December 2023

(1) Theory of high-harmonic generation by low-frequency laser fields, M. Lewenstein, Ph. Balcou, M. Yu. Ivanov, Anne L'Huillier and P. B. Corkum, Phys. Rev. A, Vol. 49, p. 2117, 1994.

Greetings and blessings of the season to you all from John and Graham. Please click on the picture. (Graham's choir is singing this blessing from the pen of Philip Stopford this Christmas season.) Graham's December blog post on the 2023 Nobel Prize in Physics will be arriving soon ...