Friday, April 13, 2018

The thousand-year song

In February I had the pleasure of meeting Jem Finer, the founder of the Longplayer project, to discuss the “music of the future” at this event in London. It seemed a perfect subject for my latest column for Sapere magazine on music cognition, where it will appear in Italian. Here it is in English.

Most people will have experienced music that seemed to go on forever, and usually that’s not a good thing. But Longplayer, a composition by British musician Jem Finer, a founder member of the band The Pogues, really does. It’s a piece conceived on a geological timescale, lasting for a thousand years. So far, only 18 of them have been performed – but the performance is ongoing even as you read this. It began at the turn of the new millennium and will end on 31 December 2999. Longplayer can be heard online and at various listening posts around the world, the most evocative being a Victorian lighthouse in London’s docklands.

Longplayer is scored for a set of Tibetan singing bowls, each of which sounds in a repeating pattern determined by a mathematical algorithm that will not repeat any combination exactly until one thousand years have passed. The parts interweave in complex, constantly shifting ways, not unlike compositions such as Steve Reich’s Piano Phase in which repeating patterns move in and out of step. Right now Longplayer sounds rather serene and meditative, but Finer says that there are going to be pretty chaotic, discordant passages ahead, lasting for decades at a time – albeit not in his or my lifetime.
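The mathematics can be illustrated with a toy model – emphatically not Finer’s actual algorithm, just a sketch of the principle – in which each ‘bowl’ cycles through a pattern of a different length, the lengths sharing no common factors, so that the combined state first recurs only after the least common multiple of all the cycle lengths:

```python
from math import lcm

# Toy illustration (not Finer's actual score): each "bowl" loops through
# its own short pattern, but because the loop lengths share no common
# factors, the combined state takes a very long time to recur.
cycle_lengths = [7, 11, 13, 17, 19]  # hypothetical pattern lengths

def combined_state(step, lengths):
    """Position of every bowl within its own cycle at a given step."""
    return tuple(step % n for n in lengths)

# The joint pattern first repeats after the lcm of all the cycle lengths.
period = lcm(*cycle_lengths)
print(period)  # 323323 steps before the exact combination recurs

assert combined_state(0, cycle_lengths) == combined_state(period, cycle_lengths)
assert all(combined_state(0, cycle_lengths) != combined_state(s, cycle_lengths)
           for s in range(1, 1000))
```

Five short loops already take over 300,000 steps to realign; it is easy to see how suitably chosen cycles, stepped through slowly enough, could stretch the recurrence time to a millennium.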

The visual score of Longplayer. (Image: Jem Finer/Longplayer Foundation)

An installation of Tibetan prayer bowls used for Longplayer at Trinity Buoy Wharf, London Docks. (Photo: James Whitaker)

One way to regard Longplayer is as a kind of conceptual artwork, taking with a pinch of salt the idea that it will be playing in a century’s time, let alone a millennium. Finer, though, has careful plans for how to sustain the piece into the indefinite future in the face of technological and social change. There’s no doubt that performance is a strong feature of the project: live events playing part of the piece have been rather beautiful, the instruments arrayed in concentric circles that reflect both the score itself and the sense of planetary orbits unfurling in slow, dignified synchrony.

But if this all seems ritualistic, so is a great deal of music. I do think Longplayer is a serious musical adventure, not least in how it both emphasizes and challenges the central cognitive process involved in listening: our perception of pattern and regularity. Those are the building blocks of this piece, and yet they take place mostly beyond the scope of an individual’s perception, forcing us – as perhaps the pointillistic dissonance of Pierre Boulez’s total serialism does – to find new ways of listening.

More than this, though, Longplayer connects to the persistence of music through the “deep time” of humanity, offering a message of determination and hope. Tectonic plates may shift, the climate may change, we might even reinvent ourselves – but we will do our best to ensure that this expression of ourselves will endure.

A live performance of part of Longplayer at the Yerba Buena Center, San Francisco, in 2010. (Photo: Stephen Hill)

Thursday, March 01, 2018

On the pros and cons of showing copy to sources - redux

Dana Smith has written a nice article for Undark about whether science journalists should or should not show drafts or quotes to their scientist sources before publication.

I’ve been thinking about this some more after writing the blog entry from which Dana quotes. One issue that comes out of Dana’s piece is that there is perhaps something of a generational divide here: I sense that younger writers are more likely to consider it ethically questionable ever to show drafts to sources, while old’uns like me, Gary Stix and John Rennie have less of a problem with it. And I wonder if this has something to do with the fact that the old’uns probably didn’t get much in the way of formal journalistic training (apologies to Gary and John if I’m wrong!), because science writers rarely did back then. I have the impression that “never show anything to sources” is a notion that has entered science writing from other journalistic practice, and I do wonder if it has acquired something of the status of dogma in the process.

Erin Biba suggests that the onus is on the reporter to get the facts right. I fully agree that we have that responsibility. But frankly, we will often not get the facts right. Science is not uniquely hard, but it absolutely is hard. Even when we think we know a topic well and have done our best to tell it correctly, chances are that there are small, and sometimes big, ways in which we’ll miss what real experts will see. To suggest that asking the experts is “the easy way out” sounds massively hubristic to me.

(Incidentally, I’m not too fussed about the matter of checking quotes. If I show drafts, it’s to check whether I have got any of the scientific details wrong. I often tend to leave in quotes just because there doesn’t seem much point in removing them – they are very rarely queried – but I might omit critical quotes from others to avoid arguments that might otherwise end up needing third-party peer review.)

Dana doesn’t so much go into the arguments for why it is so terrible (in the view of some) to show your copy to sources. She mentions that some say it’s a matter of “journalistic integrity”, or just that it’s a “hard rule” – which makes the practice sound terribly transgressive. But why? The argument often seems to be, “Well, the scientists will get you to change your story to suit them.” To which I say, “Why on earth would I let them do that?” In the face of such attempts (which I’ve hardly ever encountered), why do I not just say, “Sorry, no”? Oh, but you’ll not be able to resist, will you? You have no will and judgement. You’re just a journalist.

Some folks, it’s true, say instead “Oh, I know you’ll feel confident and assertive enough to resist undue pressure to change the message, but some younger reporters will be more vulnerable, so it’s safer to have a blanket policy.” I can see that point, and am not unsympathetic to it (although I do wonder whether journalistic training might focus less on conveying the evils of showing copy to sources and more on developing skills and resources for resisting such pressures). But so long as I’m able to work as a freelancer on my own terms, I’ll continue to do it this way: to use what is useful and discard what is not. I don’t believe it is so hard to tell the difference, and I don’t think it is very helpful to teach science journalists that the only way you can insulate yourself from bad advice is to cut yourself off from good advice too.

Here’s an example of why we science writers would be unwise to trust we can assess the correctness of our writing ourselves, and why experts can be helpful if used judiciously. I have just written a book on quantum mechanics. I have immersed myself in the field, talked to many experts, read masses of books and papers, and generally informed myself about the topic in far, far greater detail than any reporter could be expected to do in the course of writing a news story on the subject. That’s why, when a Chinese team reported last year that they had achieved quantum teleportation between a ground base and a satellite, I felt able to write a piece for Nature explaining what this really means, and pointing out some common misconceptions in the reporting of it.

And I feel – and hope – I managed to do that. But I got something wrong.

It was not a major thing, and didn’t alter the main point of the article, but it was a statement that was wrong.

I discovered this only when, in correspondence with a quantum physicist, he happened to mention in passing that one of his colleagues had criticized my article for this error in a blog. So I contacted the chap in question and had a fruitful exchange. He asserted that there were some other dubious statements in my piece too, but on that matter I replied that he had either misunderstood what I was saying or was presenting an unbalanced view of the diversity of opinion. The point was, it was very much a give-and-take interaction. But it was clear that on this one point he was right and I was wrong – so I got the correction made.

Now, had I sent my draft to a physicist working on quantum teleportation, I strongly suspect that my error would have been spotted right away. (And I do think it would have had to be a specialist in that particular field, not just a random quantum physicist, for the mistake to have been noticed.) I didn’t do so partly because I had no real sources in this case to bounce off, but also partly because I had a false sense of my own “mastery” of the topic. And this will happen all the time – it will happen not because we writers don’t feel confident in our knowledge of the topic, but precisely because we do feel (falsely) confident in it. I cannot for the life of me see why some imported norm from elsewhere in journalism makes it “unethical” to seek expert advice in a case like this – not advice before we write, but advice on what we have actually written.

Erin is right to say that most mistakes, like mine here, really aren’t a big deal. They’re not going to damage a scientist’s career or seriously mislead the public. And of course we should admit to and correct them when they happen. But why let them happen more often than they need to?

As it happens, having said earlier that I very rarely get responses from scientists to whom I’ve shown drafts beyond some technical clarifications, I recently wrote two pieces that were less straightforward. Both were on topics that I knew to be controversial. And in both cases I received some comments that made me suspect their authors wanted to dictate the message somewhat, taking issue with some of the things the “other side” said.

But this was not a problem. I thought carefully about what they said, took on board some clearly factual remarks, considered whether the language I’d used captured the right nuance in some other places, and simply decided I would respectfully decline to make any modifications to my text in others. Everything was on a case-by-case basis. These scientists were in return very respectful of my position. They seemed to feel that I’d heard and considered their position, and to accept that I had priorities and obligations different from theirs. I felt that my pieces were better as a result, without my independence being at all compromised, and they were happy with the outcome. Everyone, including the readers, was better served by the exchange. I’m quite baffled by how there could be deemed to be anything unethical in that.

And that’s one of the things that makes me particularly uneasy about how showing any copy to sources is sometimes presented not as an informed choice but as tantamount to breaking a professional code. I’ve got little time for the notion that it conflicts with the journalist’s mission to critique science and not merely act as its cheerleader. Getting your facts right and sticking to your guns are separate matters. Indeed, I have witnessed plenty of times the way in which a scientist who is being (or merely feels) criticized will happily seize on any small errors (or just misunderstandings of what you’ve written) as a way of undermining the validity of the whole piece. Why give them that opportunity after the fact? The more airtight a piece is factually, the more authoritative the critique will be seen to be.

I should add that I absolutely agree with Erin that the headlines our articles are sometimes given are bad, misleading and occasionally sensationalist. I’ve discussed this too with some of my colleagues recently, and I agree that we writers have to take some responsibility for this, challenging our editors when it happens. It’s not always a clear-cut issue: I’ve received occasional moans from scientists and others about a headline that didn’t quite get the right nuance, but which I thought wasn’t so bad, and so I’m not inclined to start badgering folks about that. (I wouldn’t have used the headline that Nature gave my quantum teleportation piece, but hey.) But I think magazines and other outlets have to be open to this sort of feedback – I was disheartened to find that one that I challenged recently was not. (I should say that others are – Prospect has always been particularly good at making changes if I feel the headlines for my online pieces convey the wrong message.) As Chris Chambers has rightly tweeted, we’re all responsible for this stuff: writers, editors, scientists. So we need to work together – which also means standing up to one another when necessary, rather than simply not talking.

Sunday, February 04, 2018

Should you send the scientist your draft article?

The Twitter discussion sparked by this poll was very illuminating. There’s a clear sense that scientists largely think they should be entitled to review quotes they make to a journalist (and perhaps to see the whole piece), while journalists say absolutely not, that’s not the way journalism works.

Of course (well, I say that but I’m not sure it’s obvious to everyone), the choices are not: (1) Journalist speaks to scientist, writes the piece, publishes; or (2) Journalist speaks to scientist, sends the scientist the piece so that the scientist can change it to their whim, publishes.

What more generally happens is that, after the draft is submitted to the editor, articles get fact-checked before they are published. Typically this involves a fact-checker calling up the scientist and saying “Did you basically say X?” (usually with a light paraphrase). The fact-checker also typically asks the writer to send transcripts of interviews, to forward email exchanges and so on, as well as to provide links or references to back up factual statements in the piece. This is, of course, time-consuming, and the extent to which, and the rigour with which, it is done depends on the resources of the publication. Some science publications, like Quanta, have a great fact-checking machinery. Some smaller or more specialized journals don’t really have much of it at all, and might rely on an alert subeditor to spot things that look questionable.

This means that a scientist has no way of knowing, when they give an interview, how accurately they are going to be quoted – though in some cases the writer can reassure them that a fact-checker will get in touch to check quotes. But – and this is the point many of the comments on the poll don’t quite acknowledge – it is not all about quotes! Many scientists are equally concerned about whether their work will be described accurately. If they don’t get to see any of the draft and are just asked about quotes, there is no way to ensure this.

One might say that it’s the responsibility of the writer to get that right. Of course it is. And they’ll do their best, for sure. But I don’t think I’ll be underestimating the awesomeness of my colleagues to say that we will get it wrong. We will get it wrong often. Usually this will be in little ways. We slightly misunderstood the explanation of the technique, we didn’t appreciate nuances and so our paraphrasing wasn’t quite apt, or – this is not uncommon – what the scientist wrote, and which we confidently repeated in simpler words, was not exactly what they meant. Sometimes our oversights and errors will be bigger. And if the reporter who has read the papers and talked with the scientists still didn’t quite get it right, what chance is there that even the most diligent fact-checker (and boy are they diligent) will spot that?

OK, mistakes happen. But they don’t have to, or not so often, if the scientist gets to see the text.

Now, I completely understand the arguments for why it might not be a good idea to show a draft to the people whose work is being discussed. The scientists might interfere to try to bend the text in their favour. They might insist that their critics, quoted in the piece, are talking nonsense and must be omitted. They might want to take back something they said, having got cold feet. Clearly, a practice like that couldn’t work in political writing.

Here, though, is what I don’t understand. What is to stop the writer saying No, that stays as it is? Sure, the scientist will be pissed off. But the scientist would be no less pissed off if the piece appeared without them ever having seen it.

Folks at Nature have told me, Well sometimes it’s not just a matter of scientists trying to interfere. On some sensitive subjects, they might get legal. And I can see that there are some stories, for example looking at misconduct or dodgy dealings by a pharmaceutical company, where passing round a draft is asking for trouble. Nature says that if they have a blanket policy so that the writer can just say Sorry, we don’t do that, it makes things much more clear-cut for everyone. I get that, and I respect it.

But my own personal preference is for discretion, not blanket policies. If you’re writing about, say, topological phases and it is brain-busting stuff, having a fact-checker think up paraphrases that accurately reflect what the interviewee has said (or what the writer has written) seems a bit crazy when you could just show the researcher the way you described a Dirac fermion and ask them if it’s right. (I should say that I think Nature would buy that too in this situation.)

What’s more, there’s no reason on earth why a writer could not show a researcher a draft minus the comments that others have made on their work, so as to focus just on getting the facts right.

The real reason I feel deeply uncomfortable about the way that showing interviewees a draft is increasingly frowned on, and even considered “highly unethical”, is however empirical. In decades of having done this whenever I could, and whenever I thought it advisable, I struggle to think of a single instance where a scientist came back with anything obstructive or unhelpful. Almost without exception they have been incredibly generous and understanding, and any comments they made have improved the piece: by pointing out errors, offering better explanations or expanding on nuances. The accuracy of my writing has undoubtedly been enhanced as a result.

Indeed, writers of Focus articles for the American Physical Society, which report on papers generally from the Phys Rev journals, are requested to send articles to the papers’ authors before publication, and sometimes to get the authors to respond to criticisms raised by advisers. And this is done explicitly with the readers in mind: to ensure that the stories are as accurate as possible, and that they get some sense of the to-and-fro of questions raised. Now, it’s a very particular style of journalism at Focus, and wouldn’t work for everyone; but I believe it is a very defensible policy.

The New York Times explained its “no show” policy in 2012, and it made a lot of sense: it seems some political spokespeople and organizations were demanding quote approval and abusing it to exert control over what was reported. Press aides wanted to vet everything. This was clearly compromising to open and balanced reporting.

But I have never encountered anything like that in many years of science reporting. That's not surprising, because it is (at least when we are reporting on scientific papers for the scientific press) a completely different ball game. Occasionally I have had people working at private companies needing to get their answers to my questions checked by the PR department before passing them on to me. That's tedious, but if it means that what results is something extremely anodyne, I just won't use it. I've also found some institutions - the NIH is particularly bad at this - reluctant to let their scientists speak at all, so that questions get fielded to a PR person who responds with such pathetic blandness and generality that it's a waste of everyone's time. It's a dereliction of duty for state-funded scientific research, but that's another issue.

As it happens, just recently while writing on a controversial topic in physical chemistry, I encountered the extremely rare situation where, having shown my interviewees a draft, one scientist told me that it was wrong for those in the other camp to be claiming X, because the scientific facts of the matter had been clearly established and they were not X. So I said fine, I can quote you as saying “The facts of the matter are not X” – but I will keep the others insisting that X is in fact the case. And I will retain the authorial voice implying that the matter is still being debated and is certainly not settled. And this guy was totally understanding and reasonable, and respected my position. This was no more or less than I had anticipated, given the way most scientists are.

In short, while I appreciate that an insistence that we writers not show drafts to the scientists is often made in an attempt to save us from being put in an awkward situation, in fact it can feel as though we are being treated as credulous dupes who cannot stand up to obstruction and bullying (if it should arise, which in my experience it hasn’t in this context), or resist manipulation, or make up our own minds about the right way to tell the story.

There’s another reason why I prefer to ask the scientists to review my texts, though – which is that I also write books. In non-fiction writing there simply is not this notion that you show no one except your editor the text before publication. To do so would be utter bloody madness. Because You Will Get Things Wrong – but with expert eyes seeing the draft, you will get much less wrong. I have always tried to get experts to read drafts of my books, or relevant parts of them, before publication, and I always thank God that I did and am deeply grateful that many scientists are generous enough to take on that onerous task (believe me, not all other disciplines have a tradition of being so forthcoming with help and advice). Always when I do this, I have no doubt that I am the author, and that I get the final say about what is said and how. But I have never had a single expert reader who has been anything but helpful, sympathetic and understanding. (Referees of books for academic publishers, however – now that’s another matter entirely. Don’t get me started.)

I seem to be in a minority here. And I may be misunderstanding something. Certainly, I fully understand why some science writers, writing some kinds of stories, would find it necessary to refuse to show copy to interviewees before publication. What's more, I will always respect editors’ requests not to show drafts of articles to interviewees. But I will continue to do so, when I think it is advisable, unless requested to do otherwise.

Friday, January 05, 2018

What to look out for in science in 2018

I wrote a piece for the Guardian on what we might expect in science, and what some of the big issues will be, in 2018. It was originally somewhat longer than the paper could accommodate, explaining some issues in more detail. Here’s that longer version.


Quantum computers
This will be the year when we see a quantum computer solve some computational problem beyond the means of the conventional ‘classical’ computers we currently use. Quantum computers use the rules of quantum mechanics to manipulate binary data – streams of 1s and 0s – and this potentially makes them much more powerful than classical devices. At the start of 2017 the best quantum computers had only around 5 quantum bits (qubits), compared to the billions of transistor-based bits in a laptop. By the close of the year, companies like IBM and Google said that they were testing devices with ten times that number of qubits. It still doesn’t sound like much, but many researchers think that just 50 qubits could be enough to achieve “quantum supremacy” – the solution of a task that would take a classical computer so long as to be practically impossible. This doesn’t mean that quantum computers are about to take over the computer industry. For one thing, they can so far only carry out certain types of calculation, and dealing with random errors in the calculations is still extremely challenging. But 2018 will be the year that quantum computing changes from a specialized game for scientists to a genuine commercial proposition.
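The figure of 50 qubits reflects a simple back-of-the-envelope argument: a brute-force classical simulation of an n-qubit state must store 2^n complex amplitudes, so the memory required doubles with every added qubit. A rough sketch (the function name and the 16-bytes-per-amplitude figure are my own illustrative choices):

```python
# Back-of-the-envelope: memory needed to store the full state vector of
# an n-qubit quantum computer in a brute-force classical simulation.
# Each of the 2**n complex amplitudes takes 16 bytes at double precision.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (5, 30, 50):
    print(f"{n:2d} qubits: {state_vector_bytes(n):,} bytes")

# 5 qubits fit in a few hundred bytes; 30 qubits need about 16 GiB,
# roughly a laptop's memory; 50 qubits need around 16 pebibytes,
# far beyond the memory of any existing machine.
```

Cleverer simulation methods can push somewhat past a naive state-vector approach, which is why the exact supremacy threshold remains a matter of debate.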

Quantum internet
Using quantum rules for processing information has more advantages than just a huge speed-up. These rules make possible some tricks that just aren’t imaginable using classical physics. Information encoded in qubits can be encrypted and transmitted from a sender to a receiver in a form that can’t be intercepted and read without that eavesdropping being detectable by the receiver, a method called quantum cryptography. And the information encoded in one particle can in effect be switched to another identical particle in a process dubbed quantum teleportation. In 2017 Chinese researchers demonstrated quantum teleportation in a light signal sent between a ground-based source and a space satellite. China has more “quantum-capable” satellites planned, as well as a network of ground-based fibre-optic cables, which will ultimately comprise an international “quantum internet”. This network could support cloud-based quantum computing, quantum cryptography and surely other functions not even thought of yet. Many experts put that at a decade or so off, but we can expect more trials – and inventions – of quantum network technologies this year.

RNA therapies
The announcement in December of a potential new treatment for Huntington’s disease, an inheritable neurodegenerative disease for which there is no known cure, has implications that go beyond this particularly nasty affliction. Like many dementia-associated neurodegenerative diseases such as Parkinson’s and Alzheimer’s, Huntington’s is caused by a protein molecule involved in regular brain function that can ‘misfold’ into a form that is toxic to brain cells. In Huntington’s, which currently affects around 8,500 people in the UK, the faulty protein is produced by a mutation of a single gene. The new treatment, developed by researchers at University College London, uses a short strand of DNA that, when injected into the spinal cord, attaches to an intermediary molecule involved in translating the mutated gene to the protein and stops that process from happening. The strategy was regarded by some researchers as unlikely to succeed. The fact that the current preliminary tests proved dramatically effective at lowering the levels of toxic protein in the brain suggests that the method might be a good option not just for arresting Huntington’s but other similar conditions, and we can expect to see many labs trying it out. The real potential of this new drug will become clearer when the Swiss pharmaceuticals company Roche begins large-scale clinical trials.

Gene-editing medicine
Diseases that have a well defined genetic cause, due perhaps to just one or a few genes, can potentially be cured by replacing the mutant genes with properly functioning, healthy ones. That’s the basis of gene therapies, which have been talked about for years but have so far failed to deliver on their promise. The discovery in 2012 of a set of molecular tools, called CRISPR-Cas9, for targeting and editing genes with great accuracy has revitalized interest in attacking such genetic diseases at their root. Some studies in the past year or two have shown that CRISPR-Cas9 can correct faulty genes in mice, responsible for example for liver disease or a mouse form of muscular dystrophy. But is the method safe enough for human use? Clinical trials kicked off in 2017, particularly in China but also in the US; some are aiming to suppress the AIDS virus HIV, others to tackle cancer-inducing genetic mutations. It should start to become clear in 2018 just how effective and safe these procedures are – but if the results are good, the approach might be nothing short of revolutionary.

High-speed X-ray movies
Developing drugs and curing disease often relies on an intimate knowledge of the underlying molecular processes, and in particular on the shape, structure and movements of protein molecules, which orchestrate most of the molecular choreography of our cells. The most powerful method of studying those details of form and function is crystallography, which involves bouncing beams of X-rays (or sometimes of particles such as electrons or neutrons) off crystals of the proteins and mathematically analysing the patterns in the scattered beams. This approach is tricky, or even impossible, for proteins that don’t form crystals, and it only gives ‘frozen’ structures that might not reflect the behaviour of floppy proteins inside real cells. A new generation of instruments called X-ray free-electron lasers, which use particle-accelerator technologies developed for physics to produce extremely bright X-ray beams, can give a sharper view. In principle they can produce snapshots from single protein molecules rather than crystals containing billions of them, as well as offering movies of proteins in motion at trillions of frames per second. A new European X-ray free-electron laser in Hamburg inaugurated in September is the fastest and brightest to date, while two others in Switzerland and South Korea are starting up too, and another at Stanford in California is getting an ambitious upgrade. As these instruments host their first experiments in 2018, researchers will acquire a new window into the molecular world.

100,000 genomes
By the end of 2018 the private company Genomics England, set up by the UK Department of Health, should have completed its goal of reading the genetic information in 100,000 genomes from around 75,000 voluntary participants. About a third of these people will be cancer patients, who will have separate genomes read from cancerous and healthy cells; the others will be people with rare genetic diseases and their close relatives. With such a huge volume of data, it should be possible to identify gene mutations linked to cancer and to some of the many thousands of known rare diseases. This information could help with diagnoses of cancer and rare disease, and perhaps also improve treatments. For example, a gene mutation that causes a rare disease (rare diseases collectively are likely to affect around one person in 17 at some point in their lives) supplies a possible target for new drugs. Genetic information for cancer patients can also help to tailor specific treatments, for example by identifying those not at risk of side effects from what can otherwise be effective anti-cancer drugs.

Gravitational-wave astronomy
The 2017 Nobel prize in physics was awarded to the chief movers behind LIGO, the US project to detect gravitational waves. These are ripples in spacetime caused by extreme astrophysical events such as the merging of two neutron stars or black holes, which have ultra-strong gravitational fields. The ripples produce tiny changes in the dimensions of space itself as they pass, which LIGO – comprising two instruments in Washington State and Louisiana – detects from changes in the distances travelled by laser beams sent along channels to mirrors a few kilometres away. The first gravitational wave was detected in late 2015 and announced in 2016. Last year saw the announcement of a few more detections, including one in August from the first known collision of two neutron stars. Gravitational-wave detectors now also exist or are being built in Europe, Korea and Japan, while others are planned that will use space satellites. The field is already maturing into a new form of astronomy that can ‘see’ some of the most cataclysmic events in the universe – and whose observations so far fully confirm Einstein’s theory of general relativity, which explains gravitation. We can expect more cataclysmic events to be detected in 2018 as gravitational-wave astronomy becomes a regular tool in the astronomer’s toolkit.
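Just how tiny those changes are can be sketched with illustrative order-of-magnitude numbers (my own, not figures from the article): a passing gravitational wave with strain h stretches an arm of length L by roughly dL = h × L.

```python
# Rough scale of a LIGO measurement (illustrative numbers, not from a
# specific event): a gravitational-wave strain h changes an arm of
# length L by roughly dL = h * L.
strain = 1e-21          # typical order of magnitude of a detected strain
arm_length_m = 4e3      # LIGO's arms are about 4 km long

displacement_m = strain * arm_length_m
print(displacement_m)   # ~4e-18 m, a small fraction of a proton's width
```

Resolving a displacement that small is why the laser beams must traverse kilometre-scale arms many times over, and why the instruments took decades to reach the required sensitivity.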

Beyond the standard model
It’s a glorious time for fundamental physics – but not necessarily for the reasons physicists might hope. The so-called standard model of particle physics, which accounts for all the known particles and forces in nature, was completed in 2012 with the discovery of the Higgs boson using the Large Hadron Collider (LHC), the world’s most powerful particle accelerator, at CERN in Switzerland. The trouble is, it can’t be the whole story. The two most profound theories of physics – general relativity (which describes gravity) and quantum mechanics – are incompatible; they can’t both be right as they stand. That problem has loomed for decades, but it’s starting to feel embarrassing. Physicists have so far failed to find ways of breaking out beyond the standard model and finding ‘new physics’ that could show the way forward. String theory offers one possible route to a theory of quantum gravity, but there’s no experimental evidence for it. What’s needed is some clue from particle-smashing experiments for how to extend the standard model: some glimpse of particles, forces or effects outside the current paradigm. Researchers were hoping that the LHC might have supplied that already – in particular, many anticipated finding support for the theory called supersymmetry, which some see as the best candidate for the requisite new physics. But so far there’s been zilch. If another year goes by without any chink in the armour appearing, the head-scratching may turn into hair-pulling.

Crunch time for dark matter
That’s not the only embarrassment for physics. It’s been agreed for decades that the universe must contain large amounts of so-called dark matter – about five times as much, in terms of mass, as all the matter visible as stars, galaxies, and dust. This dark matter appears to exert a gravitational tug while not interacting significantly with ordinary matter or light (whence the ‘dark’) in other ways. But no one has any idea what this dark matter consists of. Experiments have been trying to detect it for years, primarily by looking for very rare collisions of putative dark-matter particles with ordinary particles in detectors buried deep underground (to avoid spurious detections caused by other particles such as cosmic rays) or in space. All have drawn a blank, including results from separate experiments in China, Italy and Canada reported in the late summer and early autumn. The situation is becoming grave enough for some researchers to start taking more seriously suggestions that what looks like dark matter is in fact a consequence of something else – such as a new force that modifies the apparent effects of gravity. This year could prove to be crunch time for dark matter: how long do we persist in believing in something when there’s no direct evidence for it?

Return to the moon
In 2018, the moon is the spacefarer’s destination of choice. Among several planned missions, China’s ongoing unmanned lunar exploration programme called Chang’e (after a goddess who took up residence there) will enter its fourth phase in June with the launch of a satellite to orbit the moon’s ‘dark side’ – more properly the far side, the face permanently turned away from the Earth, which is not actually in perpetual darkness. That craft will then provide a communications link to guide the Long March 5 rocket that should head out to this hidden face of the moon in 2019. The rocket will carry a robotic lander and rover vehicle to gather information about the mineral composition of the moon, including the amount of water ice in the south polar basin. It’s all the prelude to a planned mission in the 2030s that will take Chinese astronauts to the lunar surface. Meanwhile, tech entrepreneur Elon Musk has claimed that his spaceflight business SpaceX will be ready to fly two paying tourists around the moon this year in the Falcon Heavy rocket and the Dragon capsule the company has developed. Since neither craft has yet had a test flight, you’d best not hold your breath (let alone try to buy a ticket) – but the rocket will at least get its trial launch this year.

Highway to hell
Exploration of the solar system won’t all be about the moon, however. The European Space Agency and the Japanese Aerospace Exploration Agency are collaborating on the BepiColombo mission, which will set off in October on a seven-year journey to Mercury, the smallest planet in the solar system and the closest to the Sun. Like the distant dwarf planet Pluto until the arrival of NASA’s New Horizons mission in 2015, Mercury has been a neglected little guy in our cosmic neighbourhood. That’s partly because of the extreme conditions it experiences: the sunny side of the planet reaches a hellish 430 °C or so, and the orbiting spacecraft will feel heat of up to 350 °C – although the permanently shadowed craters of Mercury’s polar regions stay cold enough to hold ice. BepiColombo (named after renowned Italian astronomer Giuseppe Colombo) should provide information not just about the planet itself but about the formation of the entire solar system.

Planets everywhere
While there is still plenty to be learnt about our close planetary neighbours, their quirks and attractions have been put in cosmic perspective by the ever-growing catalogue of “exoplanets” orbiting other stars. Over the past two decades the list has grown to nearly 4,000, with many other candidates still being considered. The majority of these were detected by the Kepler space telescope, launched in 2009, which identifies planets from the very slight dimming of their parent star as the planet passes in front (a ‘transit’). But the search for other worlds will hot up in 2018 with the launch of NASA’s Transiting Exoplanet Survey Satellite, which will monitor the brightness of around 200,000 stars during its two-year mission. Astronomers are particularly interested in finding ‘Earth-like planets’, with a size, density and orbit comparable to Earth’s, and which might therefore host liquid water – and life. Such candidates should then be studied in more detail by the James Webb Space Telescope, a US-European-Canadian collaboration widely regarded as the successor to the Hubble Space Telescope, due for launch in spring 2019. The Webb might be able to detect possible signatures of life within the chemical composition of exoplanet atmospheres, such as the presence of oxygen. With luck, within just a couple of years or so we may have good reason to suspect we are not alone in the universe.
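The transit signal Kepler looks for can be estimated with one line of arithmetic: the fractional dimming is roughly the square of the planet-to-star radius ratio. A minimal sketch, using standard reference radii for the Sun, Earth and Jupiter (assumed values, not figures given in the article):

```python
# Transit-method dimming: fractional drop in brightness ~ (R_planet / R_star)**2.
# Radii are standard reference values in km, assumed rather than quoted above.
R_SUN, R_EARTH, R_JUPITER = 696_000, 6_371, 69_911

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional dimming of the star during a central transit."""
    return (r_planet_km / r_star_km) ** 2

print(f"Earth crossing the Sun:   {transit_depth(R_EARTH, R_SUN):.6f}")    # ~0.000084
print(f"Jupiter crossing the Sun: {transit_depth(R_JUPITER, R_SUN):.4f}")  # ~0.0101
```

This is why Earth-sized planets are so much harder to find than Jupiter-sized ones: an Earth-like transit dims its star by less than one part in ten thousand, which is also why the measurement needs the steady gaze of a space telescope.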

Mapping the brain
It’s sometimes said, with good reason, that understanding outer space is easier than understanding inner space. The human brain is arguably the most complex object in the known universe, and while no one seems to be expecting any major breakthrough in 2018 in our view of how it works, we can expect to reach next Christmas with a lot more information. Over the summer of 2017 the €1bn European Human Brain Project got a reboot to steer it away from what many saw as an over-ambitious plan to simulate a human brain on a computer and towards a more realistic goal of mapping out its structure down to the level of connections between the billions of individual neurons. This shift in emphasis was triggered by an independent review of the project after 800 neuroscientists threatened to boycott it in 2014 because of concerns about the way it was being managed. One vision now is to create a kind of Google Brain, comparable to Google Earth, in which the brain structures underpinning such cognitive functions as memory and emotion can be ‘zoomed’ from the large scale revealed by MRI scanning down to the level of individual neurons. Such information might guide efforts to simulate more specific ‘subroutines’ of the brain. But one of the big challenges is simply how to collect, record and organize the immense volume of data these studies will produce.

Making clean energy
Amidst the excitement and allure of brains, genes, planets and the cosmos, it’s easy for the humbler sciences, such as chemistry, to get overlooked. That should change in 2019, which UNESCO has just designated as the International Year of the Periodic Table, chemistry’s organizing scheme of elements. But there are good reasons to keep an eye on the chemical sciences this year too, not least because they may hold the key to some of our most pressing global challenges. Since nature has no reason to heed the ignorance of the current US president, we can expect the global warming trend to continue – and some climate researchers believe that the only way to limit future warming to within 2 °C (and thus to avoid some extremely alarming consequences) is to develop chemical technologies for capturing and storing the greenhouse gas carbon dioxide from the atmosphere. At the start of 2017 a group of researchers warned that lack of investment in research on such “carbon capture and storage” technologies was one of the biggest obstacles to achieving this target. By the end of this year we may have a clearer view of whether industry and governments will rise to the challenge. In the meantime, development of carbon-free energy-generating technologies needs boosting too. The invention last year at the Massachusetts Institute of Technology of a device that uses an ultra-absorbent black “carbon nanomaterial” to convert solar heat to light suggests one way to make solar power more efficient, capturing more of the energy in the sun’s rays than current solar cells can manage even in principle. We can hope for more such innovation, as well as efforts to turn the smart science into commercially viable technologies. Don’t expect any single big breakthrough in these areas, though; success is likely to come, if at all, from a portfolio of options for making and using energy in greener ways.

Wednesday, November 01, 2017

Science writing and the "human bit"

This article on Last Word On Nothing by Cassandra Willyard sparked a fascinating debate – at least if you’re a science writer or have an interest in that business. Some have criticized it as irredeemably philistine – imagine a science writer not knowing that Hubble refers to a telescope! (Well, many things bear Hubble’s name, so I really don’t see that as so deplorable.) This is shallow criticism: what surely matters is how well a writer does the job she does, not what gaps might exist in her knowledge that never come to light unless she admits to them. The only corollary is: know your limits.

Indeed, it makes me think it would be fun to know what areas of science hold no charms for other science writers. No doubt everyone’s blind spots would horrify some others. I long struggled to work up any enthusiasm for human origins. What? How could I not be interested in where we came from? Well, it seemed to me we kind of know where we came from, and roughly when, give or take a million years. We evolved from more primitive hominids. The rest is just detail, right?

Oddly, it is only now that the detail has become so messy – what with Homo floresiensis and Homo naledi and so forth – that I’ve become engaged. Perhaps there’s nothing quite so appealing as blithe complacency undermined. I can’t say I yet care enough to fret about where each branch of the hominid tree should divide, but it’s fun to see all these revelations tumble out, not least because of the drama of some of the new discoveries.

My heartbeat doesn’t race for much of particle and fundamental physics either. I suspect this is for more partisan, and more dishonourable, reasons: particle physics has somehow managed to nab all the glamour and public attention, to the point that most people think this is what all of physics is, whereas my own former field of condensed matter physics, which has a larger community, never gets a look in. Meanwhile, particle physicists take the great ideas from CMP (like symmetry breaking) and then claim they invented them. You can see how bitter and twisted I am. So I was rather indifferent about the Higgs – an indifference I know some condensed matter physicists shared.

Some other fields I want to stick up for merely because they’re the underdogs – cell biology and biophysics, say, in the face of genetic hegemony.

So if a science writer admits to being unmoved by space science, it really doesn’t seem occasion to get all affronted. I edited an awful lot of astronomy papers at Nature that made my eyes glaze, often because they seemed to be (like some of those fossil and protein structure papers) a catalogue of arbitrary specifics. (Though don’t worry, I do love a good protein structure.)

Where I’m more unsure about Cassandra’s article is in the discussion of “the human element”. I suppose this is because it sends a chill wind down my spine. If the only way for science communication to connect with a broad public is by telling human stories, then I’m done for. I’m just not that interested in doing that (as you might have noticed).

That’s not to say that one shouldn’t make the most of a human element when it’s there. If there’s a way of telling a science story through personalities, it’s generally worth taking. “I might not be interested in gravitational waves, but I am interested in science as a process”, Cassandra writes. “Humanize the process, and you’ll hook me every time.”

Fair enough. But what if there is no human element to speak of? Every science writer will tell you that for every researcher who dreamed from childhood of cracking the problem they have finally conquered, there are ten or perhaps a hundred who came to a problem just because it was a natural extension of what they worked on for their PhD – or because it was a hot topic at the time. And for every colourful maverick or quirky underdog, there are lots of scientists who are perfectly lovely people but really have nothing that distinguishes them from the crowd. It’s always good to ask what drew a researcher to the topic, but often the answers aren’t terribly edifying. And there are only so many times you’re going to be able to tell a story about gravitational waves as a tale of the grit and persistence of a few visionaries in the face of scepticism about whether the method would work.

I quickly grew to hate that brand of science writing popular in the early 1990s in which “Jed Raven, a sandy-haired Texan with a charm that would melt glaciers, strode into the lab and boomed ‘Let’s go to work, people!’” Chances are, in retrospect, that Jed Raven was probably harassing his female postdocs. But honestly, I couldn’t give a toss about how Jed grew up collecting beetles or learning to herd steers or whatever they call them in Texas.

The idea that a science story can be told only if you find the human angle is deadly, but probably quite widespread. Unless you happen to strike lucky, it is likely to make whole areas of science hard to write about at all: health, field anthropology and astronomy will probably do well, inorganic chemistry not so much.

But Cassandra is right to imply that there is sometimes a presumption in science writing (including my own) that this stuff is inherently so interesting that you don’t need a narrative attached – you don’t even need to relate it beyond its own terms. It’s easy to be far too complacent about that. As Tim Radford wisely once said, above every hack’s desk should hang the sign: “No one has to read this crap.”

So what’s the alternative to “the human angle”? I’ll paraphrase Cassandra for the way I see it:
“I might not be interested in X, but I am interested in elegant, beautiful writing. Write well, and you’ll hook me every time.”

Tuesday, October 31, 2017

What the Reformation really did for science

There’s something thrilling about how, five centuries ago, the rebel monk Martin Luther defied his accusers. At a council (‘Diet’) in the German city of Worms in 1521, his safety and possibly his life were on the line. But this – his supporters avowed – was how he concluded his defence before the representatives of the pope Leo X:
"I cannot and will not retract anything, since it is neither safe nor right to go against conscience. I cannot do otherwise. Here I stand, may God help me."

You have to admit he showed more guts than Galileo did a century later when the Catholic church insisted that he recant his support for the heliocentric cosmos of Copernicus. Galileo, elderly and cowed by the veiled threat of torture, did what the cardinals ordered. The legend that he muttered “Still it [the earth] moves” as he rose from kneeling is probably apocryphal.

Yet is there any link between these two challenges to the authority of Rome? Protestantism was launched five hundred years ago this month when Luther, an Augustinian cleric, allegedly nailed his 95 “theses” to the church door in Wittenberg. Was it this theological revolution that turned the intellectual tide, ushering in the so-called Scientific Revolution that kicked off with Galileo?

Asking that question is, even now, a good way to spark an argument between historians. They don’t, in general, threaten one another with excommunication, the rack and the pyre – but the debate can still be as heated as arguments between Catholics and Protestants.

Yet it’s probably not only futile but also beside the point to ask who is right. The debate highlights how, at the dawn of early modern science, what people thought about the natural world was inflected by what they thought about tradition, knowledge and religion. It makes no sense to regard the Scientific Revolution as an invention of a few bright sparks, independent of other social and cultural forces. And while narratives with heroes and villains might make for good stories, they are usually bad history.

Disenchanting the world

The idea that science was boosted by Protestantism was fueled by a 1938 book by sociologist Robert Merton, who argued that an English strand of the religious movement called Puritanism helped foster science in England in the seventeenth century, such as the work of Isaac Newton and his peers at the Royal Society. “From the 1960s to the 80s, historians of science endlessly and inconclusively debated the Merton thesis”, says historian of science David Wootton of the University of York. “Those debates have fallen quiet, but the assumption is still widespread that Protestant religion and the new science were somehow inextricably intertwined.”

Merton’s idea tied in with a widespread perception of those early Protestants as progressive. That view in turn stemmed from early twentieth-century sociologist Max Weber’s argument that Western capitalism arose from the “Protestant work ethic”. In particular, says historian of science and religion Sachiko Kusukawa of the University of Cambridge, Weber proposed “what is now called the ‘disenchantment’ thesis – the idea that Protestants got rid of ‘superstition’”.

But this picture of Protestants as open forward-thinkers and Catholics as conservative, anti-science reactionaries is an old myth that has long been rejected by experts, says Kusukawa. She says this black-and-white view of the Catholic church was shaped by two 19th-century Americans with an agenda: educator Andrew Dickson White and chemist John William Draper. The Draper-White thesis, which presented science and religion as historical enemies, was consciously constructed by distorting history, and historians have been debunking it ever since.

In one view, then, religion was pretty irrelevant. “The Scientific Revolution would have gone ahead with or without the Reformation”, Wootton asserts. Historian and writer James Hannam agrees, saying that “evidence that the Reformation had any material effect on the rise of science is almost impossible to isolate from other effects.”

But historian Peter Harrison of the University of Queensland counters that “the Protestant Reformation was an important factor in the Scientific Revolution”. The Puritan religious values of some English Protestants, he says, “gave legitimacy to the scientific endeavour when it needed it.”

“No matter how much historical evidence and argumentation is brought in to oppose any such claims”, says historian of science John Henry of Edinburgh University, “there always remains an unshakable feeling that, after all, there really is something to the thesis that Protestantism stimulated scientific development.”

Questioning authority

The Draper-White “conflict thesis” between science and religion still has advocates today, especially among scientists. Evolutionary biologist Jerry Coyne has proposed that “if after the fall of Rome atheism [and not Christianity] had pervaded the Western world, science would have developed earlier and be far more advanced than it is now.”

This, though, is not only sheer speculation; it also demands a highly selective view of the interactions between science and religion in history. It’s true that Christian worship was, for many people in the Renaissance, surrounded by what even priests of that time considered superstition. The communion host was believed to have magical healing powers, and the magic incantation “Hocus pocus” is suspected to be a corruption of the ecclesiastical Latin “Hoc est corpus meum”: This is my body.

But Catholic and Protestant theologians alike lamented this muddying of Christian doctrine by folk beliefs. Plenty of them saw no real conflict between their religious convictions and the study of the physical world. Some of the best astronomers in Italy in the early seventeenth century were Catholic Jesuits, such as the German Christopher Clavius and the Italian Orazio Grassi. What’s more, the church raised little objection to Nicolaus Copernicus’s book De revolutionibus, which challenged the earth-centred picture of the cosmos described by Ptolemy of Alexandria in the 2nd century AD, when it was published in 1543. Copernicus himself was a Catholic canon in Frombork (now in Poland), and he dedicated the book to the pope, Paul III.

Questioning the traditional knowledge about the world taught at the universities – the natural philosophy of Aristotle, Hippocrates, Ptolemy, Galen and other ancient Greek and Roman writers – began well before the Reformation got underway. And that challenge was initiated largely in Italy, the seat of the Roman church, by late fifteenth-century scholars like Marsilio Ficino and Pico della Mirandola.

These men questioned whether something was true just because it was written in an old book. They and others started to argue that the most reliable way to get knowledge was from direct experience: to look for yourself. That view was supported by the sixteenth-century Swiss physician and alchemist Paracelsus, whom some called, even in his lifetime, the “Luther of medicine”. Again it was in Italy that this recourse to experience – and ultimately to something resembling the idea of experiment – was often to be found. The physician Andreas Vesalius conducted human dissections (about which officials in Vesalius’s Padua were quite permissive), which led him to dispute Galen’s anatomy in his seminal 1543 book De humani corporis fabrica. In Naples in the 1550s, the polymath Giambattista della Porta began to experiment with lenses and optics, and he pretty much described the telescope well before Galileo – who, hearing of this instrument invented in Holland, made one to survey the heavens. Della Porta was persuaded in his old age to join the select group of young Italian natural philosophers called the Academy of Lynxes, of which Galileo also became a member.

It’s not hard, then, to build up a narrative of the emergence of science that makes barely any connection with the religious upheavals of the Reformation: leading from the Renaissance humanism of Ficino, through Vesalius to Galileo and early ‘scientific societies’, and culminating in the Scientific Revolution and the Royal Society in London, with luminaries like Robert Boyle, Robert Hooke and Isaac Newton whose discoveries are still used in science today.

But – this is history after all – it wasn’t that simple. “It would be remarkable if the tumultuous religious upheavals of the sixteenth century, and the subsequent schism between Catholics and Protestants, did not leave an indelible mark on an emerging modern science”, says Harrison. “So the real question is not whether these events influenced the rise of modern science, but how.”

Against reason

The Draper-White thesis relied on a caricature of history, particularly in regard to Galileo. The way the church treated him was surely appalling, but today historians recognize that a less provocative person might well have got away with publishing his heliocentric views. It didn’t help, for example, that the simpleton defending the old Ptolemaic universe in the book that caused all the trouble, Galileo’s Dialogue Concerning the Two Chief World Systems (1632), was a thinly veiled portrait of (among others) the pope Urban VIII.

Not only did some Catholics study and support what we’d now call science – as various clerics had done throughout the Middle Ages – but there’s no reason to think that the Protestants were intrinsically more progressive or “scientific”. Martin Luther himself had a rather low opinion of Copernicus, whose ideas on the cosmos he heard about from other scholars in Wittenberg before De revolutionibus was published. Copernicus’s manuscript was patiently coaxed out of him by Georg Joachim Rheticus, a mathematician who was appointed at the University of Wittenberg by Luther’s righthand man Philip Melanchthon. Rheticus brought the book back from Frombork for publication in Nuremberg. Yet Luther called Copernicus a fool who sought “to reverse the entire science of astronomy”. What, Luther scoffed, about the Biblical story in which Joshua commanded the sun – and not the earth – to stand still?

For him, religious faith trumped everything. If anyone dared suggest that articles of Christian faith defied reason, Luther would blast them with stuff like this: “The Virgin birth was unreasonable; so was the Resurrection; so were the Gospels, the sacraments, the pontifical prerogatives, and the promise of life everlasting.” Reason, he argued, “is the devil’s harlot”. It was hubris and blasphemy to suppose that one could decode God’s handiwork. Men should not understand; they should only believe.

Martin Luther wasn’t after all seeking to reform natural philosophy, but Christian theology. He had seen how the Roman church was corrupt: practicing nepotism (especially in the infamous reign of the Borgia popes in the late fifteenth century), bewitching believers by intoning in a Latin that they didn’t understand, and making salvation contingent on the capricious authority of priests. Luther watched in dismay as the church raised funds by selling “indulgences”: documents guaranteeing the holder (or their relatives) time off the mild discomfort of Purgatory before being admitted to Heaven. Luther became convinced that salvation could be granted not by priests but by God alone, and that it was a private affair between God and believers that didn’t need the intervention of the clergy.

One particularly controversial issue (though it doesn’t seem terribly important to Catholic/Protestant tribal conflict today) was transubstantiation: the transformation of bread and wine into the body and blood of Christ in the ritual of communion. Luther rejected this scholastic doctrine – while still insisting on Christ’s real presence in the sacrament – because it rested on Aristotle’s distinction between a thing’s substance and its outward appearance. His objection to Aristotle’s natural philosophy – usually dogmatically asserted at the universities – was not so much because he thought it was scientifically wrong but because it was used (some might say misused) to defend the Catholic view of transubstantiation.

After the fall

As far as the effects of Protestantism on science are concerned, Harrison warns that “any simple story is likely to be wrong.” For one thing, there was never a single Reformation. Protestantism took root in Luther’s Germany, then a mosaic of small kingdoms and city-states, where eventually the religious and political tensions boiled over into the devastating Thirty Years War of 1618-1648. But a separate religious revolt, sharing many of Luther’s convictions, happened in the Swiss cantons in the 1520s and 30s, led by the reformers Ulrich Zwingli in Zurich and John Calvin in Geneva. England’s break from the Roman church in the 1530s had quite different origins: Henry VIII, having denounced Luther in the 1520s, was piqued by being denied a papal annulment of his marriage to Catherine of Aragon, and he passed laws that led to the establishment of the Anglican church.

All of these movements had their own doctrines and politics, so it’s far too simplistic to portray Protestants as progressive and Catholics as repressive. Both sides saw radical new ideas in philosophy they didn’t like. But Catholics were more successful in suppressing them, says Henry, because they’d been around longer and had a more well-oiled machinery of censorship. “No doubt Luther and Calvin would have liked to have a similar set-up to the Inquisition and the Index [of banned books], but they just didn’t”, Henry says. “So a natural philosopher living in a Protestant country could get away with things that a philosopher living in a Catholic country could not.”

Galileo’s Dialogue, for example, was smuggled to the Elzevirs, a Dutch printing family (from whom the modern publisher Elsevier takes its name) in Protestant Amsterdam, who were free to publish it in the face of the Inquisition. “I’ve no doubt any number of Italian printers would have published it if they thought they’d get away with it”, says Henry. By the same token, the philosopher René Descartes, himself a good Catholic but unsettled by Galileo’s fate, moved from France to the Netherlands before publishing his mechanistic theory of matter, which he knew would get him into trouble with the Inquisition because of what it might seem to imply for transubstantiation.

But if you think this supports the “progressive” reputation of Protestantism, consider the case of Spanish physician Michael Servetus, who discovered the pulmonary circulation of the blood from the right to left ventricle via the lungs. He was imprisoned in Catholic France for his supposedly heretical religious views, but he managed to escape and fled to Geneva, on his way to Italy. There the Calvinists decided he was a heretic too – Servetus had previously argued bitterly with Calvin on points of doctrine – and they burnt him at the stake.

Despite such outrages, Henry thinks that Luther and his followers did stimulate a wider questioning of authority – including that of the ancient natural philosophers. “Luther spoke of a priesthood of all believers, and encouraged every man to read the Bible for himself”, he says (for which reason Luther made a printed vernacular translation in German: see Box). “This does seem to have stimulated Protestant intellectuals to reject the authority of the ancient Greeks approved of by the Catholic church. So they began to read what was called ‘God’s other book’ – the book of nature – for themselves”.

“Protestants do seem to have contributed more to observational and empirical science,” Henry adds. Johannes Kepler, sometimes called the “Luther of astronomy”, was one such; his mentor Tycho Brahe was another. (Both men, however, served as court astronomers for the unusually tolerant Holy Roman Emperor Rudolf II in Prague.) Harrison agrees that the Reformation could have “promoted a questioning of traditional authorities in a way that opened up possibilities for new forms of knowledge or new institutions for the pursuit of knowledge”.

That questioning mixed science with religion, though. For Protestants, the problem with Aristotle wasn’t merely his outright, demonstrable errors about how the world works, but that, as a pre-Christian, he had failed to factor in the consequences of the Fall of Man. This left humankind with diminished moral, cognitive and sensory capacities. “It is impossible that nature could be understood by human reason after the fall of Adam”, Luther wrote.

Yet after such views were filtered through seventeenth-century Anglicanism, they left Robert Hooke concluding that what we need are scientific instruments such as the microscope to make up for our defects. Systematic science, in the view of Francis Bacon, whose ideas were central to the approach of the Royal Society, could be a corrective, letting us recover the understanding and mastery of the world enjoyed by Adam. It was unwise to place too much trust in our naïve senses: careful observation and reason, as well as questioning and skepticism, were needed to get past the “common sense” view that the sun circled the earth. “Genesis narratives of creation and Fall motivated scientific activity, which came to be understood as a redemptive process that would both restore nature and extend human dominion over it”, says Harrison.

Bacon’s view of a scientific utopia, sketched out in New Atlantis (1627), portrayed a society ruled by a brotherhood of scientist-priests who wrought fantastical inventions. This decidedly Protestant vision was nurtured in the court of Frederick V, Elector Palatine of the Rhine and head of the Protestant Union of German states, whose marriage to the daughter of James I of England cemented the alliance between England and the German Protestants. Frederick was offered the Bohemian crown by Protestant rebels in 1619, and when he was defeated by the Catholic Hapsburgs of Spain the following year, some Protestant scholars fled to England. Among them was Samuel Hartlib, who published his own utopian blueprint in 1641. He befriended John Wilkins and other founders of the Royal Society, and like Bacon he imagined a scientific brotherhood dedicated to the pursuit of knowledge. Hartlib called it an Invisible College, the term that Robert Boyle later used for the incipient Royal Society. For the Anglican Boyle, scientific investigation was a religious duty: we had an obligation to understand the world God had made.

What’s God got to do with it?

Boyle’s view was shared by some of his contemporaries – like John Ray, sometimes called the father of English botany, who argued that every creature is evidence of God’s design. “The over-riding emphasis among Lutherans was the importance of God’s ‘Providence’ – foresight and planning – in creation”, says Kusukawa.

Yet much the same view can be found among Catholics too. “In the book of nature things are written in only one way”, wrote Galileo – and that way was “in the language of mathematics.” Some of the cardinals who condemned him would have gladly agreed.

So did the theological disagreements really matter much for science? Any differences between the two sides’ outlook on natural philosophy were “actually relatively trivial”, says Hannam. “If radical religious thinkers in both directions, as well as middle-of-the-road conformers like Galileo, are all united in being very important natural philosophers, it is hard to see how their particular religious beliefs have much relevance.” What they shared was more important than how they differed: namely, a belief in a universe made by a consistent God, whose laws let it run as smoothly as clockwork.

It’s not, then, science per se that’s at issue here, but authority. Galileo’s assertion that the Bible is not meant to be a book of natural philosophy was relatively uncontroversial to all but a few; today’s fundamentalism that denies evolution and the age of the earth is a peculiarly modern delusion. No one – not Copernicus, Galileo, Newton or Boyle – denied what Luther and the popes believed, which is that the ultimate authority lies with God. The arguments were about how best to represent and honour Him on earth, and not so much about the kind of earth He had made.

However you answer it, the question of whether the Reformation played a part in the birth of modern science shows that the interactions of science and religion in the past have been far more complex than a mutual antagonism. The Reformation and what followed from it make a mockery of the idea that the Christian religion is a fixed, monolithic and unquestioning entity in contrast to science’s perpetual doubt and questioning. There were broad-minded proto-scientists, as well as reactionaries, amongst both Protestants and Catholics. Perhaps it doesn’t much matter what belief system you have, so much as what you do with it.

Box: Information Revolutions

There’s plenty to debate about whether Martin Luther was more “modern” than his papal accusers, but he certainly caught on quickly to the potential of the printing press for spreading his message of religious reform. His German translation of the New Testament, printed in 1522, sold out within a month. His supporters printed pamphlets and broadsheets announcing Luther’s message of salvation through faith alone, and criticizing the corruption of Rome.

Johannes Gutenberg, a metalworker by trade in the German city of Mainz, may have thought up the idea of a press with movable type as early as the 1430s, but it wasn’t until the early 1450s that he had a working machine. Naturally, one of the first books he printed was the Bible – it was still very pricey, but much less so than the hand-copied editions that were the sole previous source. Thanks to court disputes about ownership of the press, Gutenberg never made a fortune from his invention. But others later did, and print publication was thriving by the time of the Reformation.

Historian Elizabeth Eisenstein argued in her 1979 book The Printing Press as an Agent of Change that, by allowing information to be spread widely throughout European culture, the invention of printing transformed society, enabling the Reformation, the Renaissance and the Scientific Revolution. It not only disseminated but standardized knowledge, Eisenstein said, and so allowed the possibility of scientific consensus.

David Wootton agrees that printing was an important factor in the emergence of science in the sixteenth and seventeenth centuries. “The printing press brought about an information revolution”, he says. “Instead of commenting on a few canonical texts, intellectuals learnt to navigate whole libraries of information. In the process they invented the modern idea of the fact: reliable information that could be checked and tested.”

If so, what might be the effect of the modern revolution in digital information, often compared to Gutenberg’s “disruptive” technology? Is it now destabilizing facts by making it so easy to communicate misinformation and “fake news”? Does it, in allegedly democratic projects like Wikipedia, challenge “old authorities”? Or is it creating a new hegemony, with Google and Facebook in place of the Encyclopedia Britannica and the scientific and technical literature – or, in an earlier age, of Aristotle and the church?

Thursday, September 14, 2017

Bright Earth in China

This is the introduction to a forthcoming Chinese edition of my book Bright Earth.


I have seen the Great Wall, the Forbidden City, Hangzhou’s wondrous West Lake, the gardens of Suzhou and the ancient waterworks of Dujiangyan in Sichuan. But somehow my travels in China have never yet brought me to Xi’an to see the tomb of the First Emperor Qin Shi Huangdi and his Terracotta Army. It is most certainly on my list.

But I know that none of us now can ever see the ranks of clay soldiers in their full glory, because the paints that once adorned them have long since flaked off the surface. As I say in Bright Earth of the temples and statues of ancient Greece, they leave us with the impression that the ancient world was more drab than was really the case. These statues were once brightly coloured, as we know from archaeological work on the excavations at Xi’an – for a few fragments of the pigments still adhere to the terracotta.

Some of these pigments are familiar from elsewhere in the ancient world. Red cinnabar, for example – the mineral form of mercury sulfide – is found throughout Asia and the Middle East during the period of the Qin and Han dynasties. Cinnabar was plentiful in China: Shaanxi alone contains a fifth of the country’s reserves, and it was mined for use not just in pigments but in medicines too. Chinese legend tells of one Huang An, who prolonged his life for at least 10,000 years by eating cinnabar, and Qin Shi Huangdi was said to have consumed wine and honey laden with the mineral, thinking it would prolong his life. (Some historians have speculated that it might instead have hastened his death, for it is never a good idea, of course, to ingest mercury.) According to the Han historian Sima Qian, the First Emperor’s tomb contained a scale model of his empire with rivers made of mercury – possibly from the ancient mines in Xunyang county in southern Shaanxi.

But some of the pigments on the Terracotta Army are unique to China. This is hardly surprising, since it is widely acknowledged now that chemistry in ancient China – alchemy, as it was then – was a sophisticated craft, used to make a variety of medicines and other substances for daily life. This was true also of ancient Egypt, where chemistry produced glass, cosmetics, ointments and colours for artists. One of the most celebrated colours of the Egyptians is simply now called Egyptian blue, and as Bright Earth explains, it is probably an offshoot of glass-making. It is a blue silicate material, its tint conferred by the element copper. China in the Qin and Han periods, and earlier during the Warring States period of around 479-221 BC, did not use Egyptian blue, but had its own version, now known as Han blue or (because after all it predates the Han) simply Chinese blue. Whereas Egyptian blue has the chemical name calcium copper silicate, Chinese blue substitutes barium for the calcium.

The ancient Chinese chemists also discovered that, during the production of this blue pigment, they could create a purple version, which has the same chemical elements but combined in somewhat different ratios. That was a real innovation, because purple pigments have been hard to make throughout the history of the “invention of colour” – and in the West there was no good, stable purple pigment until the nineteenth century. Even more impressively, Chinese purple contains two copper atoms linked by a chemical bond, making it – of course, the makers had no knowledge of this – the earliest known synthetic substance with such a so-called “metal-metal bond”, a unit of great significance to modern chemists.
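For readers who like the chemistry spelled out, the compositional relationships among these pigments can be sketched in a few lines of Python. The formulas (CaCuSi4O10 for Egyptian blue, BaCuSi4O10 for Han blue, BaCuSi2O6 for Han purple) are the ones commonly reported in the pigment-analysis literature; the atomic masses are rounded standard values, so the computed molar masses are approximate.

```python
# Sketch of the stoichiometry of the three copper-silicate pigments
# discussed above. Atomic masses are rounded standard values (g/mol).

ATOMIC_MASS = {"Ca": 40.08, "Ba": 137.33, "Cu": 63.55, "Si": 28.09, "O": 16.00}

# Element counts per formula unit
PIGMENTS = {
    "Egyptian blue": {"Ca": 1, "Cu": 1, "Si": 4, "O": 10},   # CaCuSi4O10
    "Han (Chinese) blue": {"Ba": 1, "Cu": 1, "Si": 4, "O": 10},   # BaCuSi4O10
    "Han (Chinese) purple": {"Ba": 1, "Cu": 1, "Si": 2, "O": 6},  # BaCuSi2O6
}

def molar_mass(formula):
    """Sum atomic masses weighted by the stoichiometric counts."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

for name, formula in PIGMENTS.items():
    print(f"{name}: {molar_mass(formula):.1f} g/mol")
```

Comparing the dictionaries makes the two relationships in the text visible at a glance: Han blue is Egyptian blue with barium in place of calcium, and Han purple shares Han blue's elements but with a lower silicon-to-copper ratio (and, in the crystal, paired copper atoms).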

Although I’ve not seen the Terracotta Army, several years ago I visited Professor Heinz Berke of the University of Zurich in Switzerland, who has worked on analyzing their remaining scraps of pigment. Heinz was kind enough to give me a sample of the Chinese blue pigment that he had made in his laboratory; I have it in front of me now as I write these words. “The invention of Chinese Blue and Chinese Purple”, Heinz has written, “is an admirable technical-chemical feat [and an] excellent example of the positive influence of science and technology on society.”

My sample of modern Chinese blue, made by Heinz Berke (Zurich).

You can perhaps see, then, why I am so delighted by the publication of a Chinese edition of Bright Earth – for it combines three of my passions: chemistry, colour and China. I always regretted that I was not able to say more in the book about art outside the West, but perhaps one day I shall have the resolve to attempt it. The invention of colour in China has a rather different narrative, not least because the tradition of landscape painting – shanshuihua – places less emphasis on colour and more on form, composition and the art of brushwork. Yet that tradition has captivated me since my youth, and it played a big part in inducing me to begin exploring China in 1992. This artistic tradition, of course, in no way lessened the significance of colour in Chinese culture; it was, after all, an aspect of the correspondences attached to the system of the Five Elements (wu xing). And one can hardly visit China without becoming aware of the vibrancy of colour in its traditional culture, not least in the glorious dyes used for silk. I hope and trust, therefore, that Bright Earth will find plenty of resonance among Chinese readers.

Li Gongnian (c.1120), Winter Evening Landscape, and detail.