Thursday, December 1, 2016

Life on Venus

The golden age of science fiction often imagined Venus as a jungle world, steamy and wet, where shipwrecked astronauts were driven insane by the incessant pounding of tropical downpours, as in Ray Bradbury’s ‘The Long Rain’.

The reality of Venus is a world that’s searingly hot, covered in toxic clouds and wrapped in a corrosive atmosphere that’s inimical to life as we know it. But, just as the current exploration of Mars is finding evidence that surface water was once present on that arid planet, another group of scientists is investigating the history of our warmer sister planet.

Earth and Venus have a lot in common: they’re about the same size and density, and the fact that they formed around the same time in the primordial solar system suggests they share many of the same materials. Venus has a high ratio of deuterium to hydrogen atoms in its atmosphere, which suggests the planet once held a substantial amount of water that could – as on Earth – have hosted the building blocks of life. NASA is currently considering two options for remote exploration of Venus: a high-resolution mapping mission and the torturously acronymed Deep Atmosphere Venus Investigation of Noble Gases, Chemistry, and Imaging (DAVINCI) mission. Either could rewrite our understanding of Venus and reshape how we think about potentially life-bearing extra-solar planets.

This article originally appeared in Beyond, my free newsletter for lovers of science and science fiction. Sign up here.

Monday, November 14, 2016

Big Brother 21st Century Style

The barrier between the outside world and the sacred space inside your head is eroding at an accelerating rate. An algorithm has been developed that scours social media posts and can predict depression in the poster with 70% accuracy – twice the accuracy rate of human doctors. Elsewhere, scientists have developed nanobots that release drugs into your system when a particular ‘mind-state’ is detected.

The desire to identify and ensure early intervention for depressive and possibly suicidal tendencies is clearly understandable. Similarly, being able to trigger drug release in patients when they’re experiencing a seizure minimises improper dosing and ensures the drugs are extremely well-targeted. But as a science fiction author, I can imagine all kinds of alternative applications for these technologies that we might not applaud so readily.

No doubt the security agencies already have algorithms scouring our likes and tweets for hints of terrorist leanings, but those algorithms aren’t yet refined enough to provide actionable data on their own – or I don’t think they are. They may get to that stage in the not-too-distant future, though, and then we could be in Minority Report PreCrime territory, where even a mild inclination in our social media stream becomes evidence of guilt.

This type of ‘preventative enforcement’ could go even further if we think about those drug-filled nanobots. Once a synaptic firing sequence the government finds particularly abhorrent – say, a desire for social change – is identified and targeted, the nanobots swarming through our bodies could automatically alter our mood or even tranquilise us. If that happens, any potential revolution will be quashed before it can truly begin.

This article originally appeared in Beyond, my free newsletter for lovers of science and science fiction. Sign up here.

Tuesday, October 4, 2016

Will AIs want to kill us?

There’s a lot of fear around artificial intelligence. South Koreans recently flipped out when Google DeepMind’s AlphaGo defeated grandmaster Lee Sedol at the national board game of Go. But will AI usher in the end-times for humanity?

Certainly Hollywood seems to think so. Cue: Skynet, Age of Ultron, Transcendence, The Matrix, WarGames; or, even earlier, Colossus: The Forbin Project; hell, even as far back as Metropolis in 1927! And who can forget HAL 9000’s chillingly calm ‘I’m sorry, Dave. I’m afraid I can’t do that’ when he refuses to open the pod bay doors for marooned astronaut Bowman in 2001: A Space Odyssey?

The equation seems clear: the first thing self-aware computers will decide to do is kill us. The most consistently upbeat portrayal of AI and humanity living side by side is in Iain M. Banks’s Culture novels. The ‘Minds’ of the Culture are true artificial intelligences that would make Skynet look – and feel – like an abacus. And that’s the important difference: Banks’s AIs have feelings. To be self-aware is to have an opinion, to be drawn towards some things and repelled by others, and, consequently, to create and be guided by a moral and ethical landscape. The vast majority of humans are not homicidal psychopaths, so why should artificial intelligences be any different? Okay, some Culture AIs are crazy, or mildly anti-social ‘rogues’, but the other Minds keep them in check.

What Banks’s AIs value is uniqueness. Each Mind is constructed with a certain degree of randomness built in. They are all individuals, which is another requirement of true self-consciousness. The inescapable logic of this is that they also value the uniqueness of the human mind: in Consider Phlebas, for example, the far more advanced Minds acknowledge that the character Fal ‘Ngeestra has a way of looking at problems that is very useful. The Minds are partners with the people of the Culture, each side bringing something important to the table, and the whole civilisation is better for it. Maybe ours will be too.

This article originally appeared in Beyond, my free newsletter for lovers of science and science fiction. Sign up here.