Could artificial intelligence kill us off?

Are you conscious? Of course you are. You're awake, you're sentient, you might even be upright. You're not comatose or dead, and it's reasonable to assume that if you were on some kind of powerful mind-altering drug then you wouldn't be reading this. The point is, you're here, and you're alive, so therefore you're conscious. You know you are.

OK then, since you're conscious and I'm conscious and everyone else is conscious, go ahead. Define it. What is consciousness? Where does it reside? Does it belong to the mind or the body, or does it exist outside both? Is consciousness part of our souls, or does it live in the things we create – our art, our music, our cities and wars? Could it be mechanical or electronic, and, if so, what makes it operate? Most pressingly of all, is it possible we have now made for ourselves a new kind of consciousness, one which exists independently? If so, then what the hell have we got ourselves into?

The search for a definition of consciousness must lay claim to being the world's longest-running detective story. We've had our best minds on it ever since we developed brains big enough to ask questions and, still, we seem to be stumped. Plato and Aristotle couldn't fix it; Kant, Hume and Locke tried different angles; Schroedinger, Heisenberg and Einstein remained in awe before it. None of them came up with the final formula, the definitive, nailed-it for ever, silences-all-critics answer.

Lately though, the hunt seems to have changed gear. Despite big differences about how best to conduct the search and where to look, several of the most persistent sleuths have found themselves disconcertingly close to agreement. No-one is yet at the stage when they are ready to call a press conference and announce to the world they have finally apprehended the suspect, but they have at least begun to converge on these two leads: the Omega Point and the Singularity.

The tipping point

Pierre Teilhard de Chardin is an improbable prophet, partly because he's dead, and partly because he's still associated with a famous palaeontological fraud. Born the fourth of 11 children near Clermont-Ferrand in France in 1881, de Chardin developed two interests when young: God and fossils. Aged 18, he entered the Jesuit Order as a novice before completing his studies in philosophy and maths.

In 1912, he became part of the team working on Piltdown Man, the "discovery" of bones in East Sussex which were claimed to belong to an early hominid and thus to provide the missing evolutionary link between apes and humans. Nearly 40 years later, the find was exposed as a hoax. Team leader Charles Dawson had combined the skull of a modern human with the jaw of an orang-utan. Whether or not de Chardin had actually participated in the fraud – his discovery of a canine tooth at the site was a major supporting piece of evidence – his palaeontological work was interrupted by the outbreak of war.

When he resumed in 1918, he moved the focus of his studies sideways into geology and began teaching in China. For the rest of his life, he combined writing, spiritual practice, teaching and adventure. By the time of his death in 1955 he'd driven a car across the whole of Eurasia and had a long but supposedly unconsummated relationship with an American sculptor called Lucille Swan.

But it was neither his science nor his love-life that brought him into conflict with the church. It was his attempt to synthesise evolution and Christianity, and his views on original sin. The sin bit is still clouded (no-one knows whether he was in favour of more or less) but de Chardin's basic theory was that as science, humanity and civilisation develop, there will ultimately come a point when the noosphere – the sphere of sentient thought – evolves until it joins with itself, human consciousness unifies, and ... and something wonderful happens. "At that moment of ultimate synthesis, the internal spark of consciousness that evolution has slowly banked into a roaring fire will finally consume the universe itself," he wrote in Let Me Explain, a collection of his thoughts published in 1970. "Our ancient itch to flee this woeful orb will finally be satisfied as the immense expanse of cosmic matter collapses like some mathematician's hypercube into absolute spirit."

If the noosphere is to reach this exciting finale, then all the fractured layers of human thought must first be conjoined by a single disembodied intelligence. De Chardin envisaged that disembodied intelligence as something directed by us, but separate – an intelligence which now just happens to look a lot like the internet.

The upside to noosphere theory is not only that it appears to unify science and theology, but that it also takes account of artificial intelligence. The downside is that, even allowing for mistranslation, de Chardin's writings are a stiff uphill climb through thickets of abstraction. Despite this handicap, it seems he's finally found his moment.

As a formally trained scientist in the 1940s, de Chardin took evolutionary theory as a given. The Catholic Church did not. His masterwork, The Phenomenon of Man – in which he argued that the next stage of evolution would be the point at which everything in the cosmos, all science, all thought, all energy, all matter, began to spiral towards an Omega Point of divine unification – did not please the Vatican, which banned him from publishing during his lifetime and exiled him from France.

At the time de Chardin was writing in the 1940s, a single global intelligence seemed both far-fetched and far-distant. But his theories have since been hauled into the 21st century by other thinkers and other disciplines. The physicists are all busy looking for a grand unified Theory of Everything while within biology, de Chardin's ideas have found their most popular form in variants of the Gaia theory and the work of James Lovelock. After all, if the Earth functions healthily as a single organism, then surely the human consciousness within it must also function collectively.

And then there are the Singularitarians, who believe that there will come a point in the not-at-all distant future when artificial intelligence finally outstrips human intelligence and computers become independently capable of designing their own successors. Different thinkers suggest different dates for this. The author and scientist Vernor Vinge suggested it would arrive by 2030, Singularity fans have come up with a median estimate of around 2040, and the manufacturers of drone toothbrushes (which spy on your brushstrokes and sneak the data to your dentist) evidently think the point was passed some time ago.

Could computers simply eliminate the need for humankind, and if they were super-intelligent, what form would that intelligence take? The assumption at present is that any alternative technology, whether originally designed by us or not, would automatically be in competition with humans. In other words it may exist through our design but it could soon design us out of existence.

So. If all of these things (quantum physics, philosophy, government, capitalism) are indeed beginning to converge, then are we reaching de Chardin's tipping-point, and if we are, then what's on the other side? Was he really on to something, or was he just another visionary millenarian charlatan? And – most tricky of all – how are we supposed to know if our consciousness is changing when we don't even know what it is?

The angels in the machine

Rendered down, theories on consciousness divide into three. There's the rational/scientific approach, there's the spiritual/mystical approach, and there's the point where the two views intersect.

The rational/scientific approach holds that consciousness is some kind of by-product of existence, and existence is part of the universe we belong to. Since consciousness is within this big but comprehensible universe, then one day we'll be able to find and measure it. There will come a time not so far away when we finally invent an instrument – a probe, a spectrometer, a set of scales – with which we can locate that consciousness, pin it down and describe it like we can describe longitude or internal combustion. Maybe we'll find it in the brain, maybe we'll find it close to the heart, maybe – like the old theory – when we finally come across it we'll discover it weighs exactly 21 grams and leaves the body exactly at the point of death. Either way, we'll definitely find it.

The spiritual/mystical approach says that any attempt to find a physical version of consciousness is hilariously perverse, since it starts from completely the wrong end. Consciousness isn't a product of the universe, the universe is a product of consciousness. We are all within consciousness and we are all indivisible from each other. Every single one of us living billions comes from consciousness and is capable of comprehending it, whether we be an adult, a child or a mouse. In fact, the child and the mouse probably have a better grasp of consciousness than adults do because adults have far too much rational stuff getting in the way. Consciousness is better understood in an instant than it is in a lifetime, and all a child spends its life learning is the art of forgetting.

This way round, you do not have to be clever to understand consciousness. In fact, cleverness can be an active disadvantage. Most of us can't get our heads around consciousness because our minds get in the way, and yet those who devote themselves to the search for it – the philosophers, the theologians, the astrophysicists, the neurochemists – tend to be clever. Very clever, and/or very wise. Wise enough to be multi-disciplinarians, to synthesise both the scientific and the mystical, and to honour the place of both.

In which case, this article should be read with a proviso: writing about consciousness is pointless. Completely ridiculous. It's like trying to find an accountancy of love or a taxonomy of song; the more words you expend on it, the further you travel away. Consciousness is not a three-dimensional phenomenon but a multi-dimensional one, and since writing is a three-dimensional solution, it has to be the wrong tool for the job. You've got a much better chance of comprehending consciousness by staring out of the window or listening to music than you have by reading about it.

Still, philosophically, the scientific and the mystical appear very far apart. From the scientific point of view, consciousness is just a problem waiting to be solved. Seen the mystical way round, there are no problems, any more than there are space, time, solutions or galaxies – nothing differentiated, nothing corporeal. There's only a single atomless soul.

In all its versions, that soul – for want of a bigger word – is what most of us spend our lives searching for, whether that be through God or meditation or the search for someone to complete us. Most of us, whether we acknowledge it or not, are looking for our way back to that single self, and since most adults have long since lost the straight route, we search instead down the side-roads: Near-Death Experiences, sex, drink, drugs, early Nineties German trance; anything that appears to shorten the gap between what we can see and what we sense is there.

Within the scientific camp, it barely needs saying, there's a world of difference between Daniel C Dennett's hardcore ultra-Darwinist position and that of figures like Albert Hofmann, who in the 1930s first synthesised LSD and found in its visions a key to the doors of perception.

But over the past few decades, there's been a notable shrinkage in the distance between the mystical and scientific positions. It used to be that you were either/or. Either you were an ardent rationalist, or you were an old acid-casualty boring on about bad trips. Now, if anything, it's the scientists who seem to be wandering round with their arms extended, murmuring: "Wow, man, it's all so, like, . . . quantum." The great thing is watching science admit what it doesn't know, and concede that what it finds at the end of its vastly improved instruments and calculators is not more certainty, but more uncertainty. There are places where the conventional rules do not apply, and where one set of rules appears to cancel another out.

During the past century, the various branches of quantum science have arrived at points which would impress even the trippiest hippy. Newtonian physics disintegrates below the atomic level. Einstein's description of spacetime in General Relativity refuses to mesh with Quantum Field Theory. For every law there is a contradiction, for every stone of solid rational ground there are as many quicksands of inconsistency. Particles don't always behave the same way, light behaves as both particle and wave, time-travel is just waiting to happen.

And further. Every possibility plays out in a parallel universe; we exist at all times in a multiverse, not a universe; within the subatomic world there can be effects without causes; the behaviour of an atom on one side of the world influences the behaviour of an atom on the other side of the world; time bends; complexity is founded on simplicity and everything is actually an endless pattern-repeat, like cosmic wallpaper. Plus of course Schroedinger and his poor half-dead cat: the act of observation changes both the observer and the nature of the thing observed.

The convergence between what the scientists are now saying and what the mystics have been banging on about for four millennia or so is complemented by an increasing parity within the world of artificial intelligence. The concept of the Deus Ex Machina, the God from the Machine, has existed since the Greeks invented tragedy, but the notion of an ultimate technological Singularity was first given literary expression in 1872 by the writer Samuel Butler in his novel Erewhon, and then given both a name and scientific plausibility in the 1950s by the mathematician John von Neumann.

Even so, the issue with the Singularity is not so much the point at which it might or might not occur, but what its disciples think will happen on the other side. The theoretical physicist Stephen Hawking has warned that computing capacity is even now growing at a rate we can barely control: "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."

The technology Hawking requires in order to communicate gives him unusual insight into the issue. His upgraded computerised voice system, built by Intel with word-prediction software from the British company SwiftKey, is designed to "read" or recognise the patterning in his thoughts. Hawking's first thumb-operated system allowed him to communicate at 15 words a minute (ordinary speech runs at about 150 words per minute), but the degeneration in his remaining active muscles meant that by 2011 he could spell out only around two words in that time.

The Intel team ended up designing something which comes close to reading Hawking's thoughts. It learns the way he likes to construct his thoughts and adapts itself to his habits of "speech" and now needs only one or two letters in a word or phrase before predicting the rest. It even factors in his grammatical fastidiousness and his resistance to new technologies. As Hawking points out, all this mind-reading technology is still relatively primitive, "but I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

The best test for the claim is Moore's Law. In 1965 Gordon E Moore, who went on to co-found Intel – the company that helped Hawking find his voice – observed that the number of components that could be squeezed on to a chip, and with it computing power, would double roughly every two years. Within the industry, the law has succeeded partly as a self-fulfilling prophecy, a goal which technology companies deliberately strive for. If anything, the commonly cited time between doublings is now more like 18 months. The end-point (say, 2040) is supposed to be the point at which computers outstrip us and become entirely self-replicating.
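The arithmetic behind that claim is easy to sketch. Below is a minimal illustration in Python – not a forecast – using the doubling periods and the notional 2040 end-point quoted above.

```python
# A back-of-the-envelope sketch of Moore's Law as described above: capacity
# doubling at a fixed interval between 1965 and the notional 2040 end-point.
# The doubling periods are the figures quoted in the text; the outputs are
# illustrations of compound doubling, not predictions.

def growth_factor(start_year: int, end_year: int, doubling_period: float) -> float:
    """Return how many times over capacity multiplies between two years."""
    doublings = (end_year - start_year) / doubling_period
    return 2 ** doublings

for period in (2.0, 1.5):  # every two years vs. roughly every 18 months
    print(f"Doubling every {period} years: ~{growth_factor(1965, 2040, period):.1e}x growth by 2040")
```

Either way, the curve is steep enough that the precise doubling period matters less than the sheer fact of compounding.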

Making a mortal body obsolete

Some proponents believe the post-Singularity future will be all the better for it. Admittedly they're a broad church, including everyone from fans of cryonics to transhumanists and mind-uploaders – those who believe that the entire content of a human brain (every dusting of brainfluff and binload of thought-spam) could someday be uploaded on to a separate hard drive, thus rendering a mortal body obsolete.

The prophet and leader of the benign-singularity camp is Raymond Kurzweil, a director of engineering at Google and long-term advocate of digital immortality. He's been right about a lot of things before (scanners, text-to-speech software, a computer defeating a human at chess), but he's also been wrong about plenty: bioengineering has not reduced mortality from cancer and heart disease in the way he predicted, and humans turn out not to like buying things from a computer-generated "virtual personality". Kurzweil is a controversial character who has spent almost as much time trying to ensure his own physical immortality (one of his books on the subject is subtitled Live Long Enough to Live Forever) as he has predicting digital nirvanas. The philosopher David Chalmers – himself a striking physical presence (T-shirt, leather jacket, biker hair; looks like a character in Spinal Tap, talks like Jacques Derrida) – comes at the singularity question from another angle.

Chalmers is director of the Centre for Consciousness at the Australian National University and is known for defining "the hard problem", the parts of conscious experience which can't be explained within neuroscience or psychology. He's spent most of his professional life writing about and lecturing on consciousness, and is currently preoccupied with many of the issues surrounding Singularity. "If there is a singularity, it will be one of the most important events in the history of the planet," he wrote in a 2010 paper on the subject. "An intelligence explosion has enormous potential benefits: a cure for all known diseases, an end to poverty, extraordinary scientific advances, and much more. It also has enormous potential dangers: an end to the human race, an arms race of warring machines, the power to destroy the planet."

"Will there be a singularity?" he asked in the same paper. "I think that it is certainly not out of the question, and that the main obstacles are likely to be obstacles of motivation rather than obstacles of capacity. How should we negotiate the singularity? Very carefully, by building appropriate values into machines, and by building the first AI and AI+ systems in virtual worlds. How can we integrate into a post-singularity world? By gradual uploading followed by enhancement if we are still around then, and by reconstructive uploading followed by enhancement if we are not. My own strategy is to write about the singularity and about uploading. Perhaps this will encourage our successors to reconstruct me, if only to prove me wrong."

Curiously enough, it's those who are closest to the issue who are sounding the loudest alarms. The techies aren't convinced that handing over so much power to something without either a pulse or a conscience is such a great idea. Bill Gates recently generated several gigabytes of geek controversy with his caution against placing too much faith in IT. "I am in the camp that is concerned about super intelligence," he said in a recent online Q&A session. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

His fellow techie Musk, who has built a fortune and a reputation from taking bets on the future, recently spoke of his concerns over AI. Musk made his first fortune with PayPal, now runs the electric-car manufacturer Tesla and is sufficiently concerned about the various threats to life on Earth that he's busy developing a rocket programme too, though as one of his recent tweets put it: "The rumour that I'm building a spaceship to get back to my home planet Mars is totally untrue."

During a recent talk at MIT, he warned: "We should be very careful about AI. If I was to guess what our biggest existential threat is, it's probably that. The thing with AI is that we're summoning the demon. You know all those stories where there's the guy with the pentagram and the holy water and it's like, yeah, he's sure he can control the demon. Didn't work out." Musk's fear – and the fear of many of his peers – is that we end up designing something which either goes rogue, or which imprisons us (through something like mass surveillance) or, through taking over the tasks and processes which at present only humans can complete, renders us obsolete.

The Turing test

Last summer, all of those future possibilities got one symbolic step closer. The Enigma code-breaker Alan Turing's famous test – can a machine demonstrate intelligence indistinguishable from that of a human? – was declared to have been passed at an event held at the Royal Society in London. A computer had managed to fool a third of the judges into believing it was "Eugene", a 13-year-old Ukrainian boy.

Eugene's supposed age and nationality provided a cover-story for the typos and the gaucheness in the conversation, and anything the programme couldn't understand, it countered with a question. It was easy enough to pick holes in Eugene's performance after the event – in an age of trolls and cyber-spooks, aren't we all used to the idea that text distorts identity? And wasn't five minutes of typed chat far too short a test? – but still.

Turing's original conditions were only that the computer should be able to hold a brief conversation with each judge, and that 30% or more of those judges should be unable to distinguish between the artificial human and the real one. "I'm not interested in developing a powerful brain," Turing once said at a meeting of telegraphy executives. "All I'm after is just a mediocre brain, something like the president of the American Telephone and Telegraph Company."
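Reduced to numbers, the pass criterion is a very small calculation. Here is a toy sketch in Python using the 30% threshold described above; the judge counts are illustrative rather than the official tally.

```python
# A toy sketch of the Turing-test pass criterion described above: the machine
# "passes" if at least 30% of judges, after a short text conversation, cannot
# tell it from a human. The judge counts below are illustrative.

def passes_turing_threshold(judges_fooled: int, total_judges: int,
                            threshold: float = 0.30) -> bool:
    """True if the share of judges fooled meets the 30% bar."""
    return total_judges > 0 and judges_fooled / total_judges >= threshold

print(passes_turing_threshold(judges_fooled=10, total_judges=30))  # True: a third fooled
print(passes_turing_threshold(judges_fooled=5, total_judges=30))   # False
```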

Others have imposed more stringent criteria, but most agree that the Turing Test is a sideline issue, a gimmick. What's more interesting is not a single Eugene simulating a single teenager, but the experiment we all conduct on ourselves every day: what happens when you have millions of computers all synthesising the thoughts of billions of people into a single server owned by a single company? Is that what Teilhard de Chardin meant when he talked about convergence, or is this something more sinister?

And even if you don't believe that one day you're going to end up beaten in the local pub quiz by your own washing machine, there's evidence to suggest that government and big business are also in on this convergence business. Not that they're fussed about Omega Points or singularities, just that they seem very keen on digitising as much of us as possible. In the past year, Edward Snowden's revelations have provided proof that the authorities in America and the UK have already started collating masterfiles on their populations, while Amazon, Google and Facebook are busy designing algorithms to turn the data they hold from the broad brush to the finely tuned.

The information we all, every one of us, hand over daily – our lives, our loves, our bank balances, our needs and desires, our habits, our travel plans – is a heaped and glittering diamond-mine for the big few internet companies. Google, Facebook, Amazon and Twitter can gather up all the millions and millions of thoughts and impulses uploaded every second and then dice them any way they want.

At the moment, bringing together all the information held on one individual is not a reality unless the security services are after them. But if the government – any Western government – suspects you of criminal activity, then as Edward Snowden pointed out, it takes only a few keystrokes to haul your whole life up on screen. And, from the state's point of view, the best thing about all this information is that most of it arrived freely and without coercion. We gave it willingly. We bought the GPS tracking devices, the audio bugs, the surveillance cameras and the polygraph tests. We paid for them with our own money and then we all got upgrades for Christmas. In exchange, we handed over bits of ourselves we can't take back.

As last summer's data protection battles with Google proved, once they've got it, they've got it for good. Facebook's CEO Mark Zuckerberg, the man who once decided that oversharing was the new norm, is now moving back from his original land-grab position and tightening Facebook's privacy settings because he's noticed that there's money to be lost when people don't trust you. The trouble is that people like talking about themselves. In a 2012 article published in the American Journal of Natural Sciences, researchers pointed out that 80% of posts on Twitter concern only the user's immediate personal experiences. During the study the researchers offered subjects the choice of talking about themselves, someone famous, or something general. Most were prepared to forgo payment (in most cases, less than a dollar) for the opportunity to share their experiences. Apparently, talking about yourself delivers a dopamine hit rewarding enough to beat hard cash. Great news both for social network sites, and for the noosphere.

And yet, at the same time as technologists are predicting the convergence of all thought into a single looming data cloud, the statisticians are watching us atomise. As we sit in our strip-lit kitchens and offices, we may be physically present but we're emotionally Instagrammed. If the average 13-year-old American boy is spending up to 43 hours a week video gaming, and almost all British 16- to 24-year-olds are on Facebook or another social networking site, then there's not really much time for anything else except food and school.

Still, there are obstacles. The thing with AI and technocracy is that they are First-World perceptions of First-World problems. Fortunately, there are still plenty of other worlds out there too. At the end of 2013, a fifth of the world's population owned a laptop. That 20% is a lot, and usage of both PCs and mobiles is certainly speeding up, but that still leaves a chunky majority who don't have access to a computer, aren't in a hurry for any noosphere, and might just have a totally different take on the future direction of human consciousness.

Doors to perception

On the afternoon of Monday 19 April 1943, a Swiss pharmaceutical chemist named Dr Albert Hofmann drank down 250-millionths of a gram of a new drug compound synthesised from ergotamine molecules and dissolved in water. For an hour or so, nothing happened. At 5pm, he put down his pen, asked his lab assistants to call a doctor, got on his bicycle and pedalled off through the streets of Basel in the direction of home. The doctor who reached the house shortly after Hofmann's arrival reported that his patient was physically fine, but mentally AWOL – floating somewhere off the ceiling, in fact, staring down at what he thought was his own dead body. His neighbour, who had brought round a glass of milk which Hofmann hoped would neutralise the drug's effects, had by then become, to Hofmann's eyes, not his neighbour but a she-devil.

The following morning, Hofmann's world was still misbehaving. His garden seemed a thousand times more vibrant and alive than normal, while his senses seemed reborn. After Hofmann reported his experience to his boss, Arthur Stoll, the two repeated the experiment, but this time on animals. The results were mixed. An elephant given a dosage of 0.297 grams died within a few minutes. Cats stopped being scared of dogs and got scared of mice instead. Chimps seemed to stop behaving like chimps ought to behave, though what interested Hofmann was not so much their withdrawal from the group but the reaction of the other chimps to the overthrow of the usual social order. Seeing one member misbehaving, they reacted with outrage; Hofmann described a state of "uproar" within the cage. And tripped-out spiders went all edgy and abstract, building exciting new three-dimensional webs which challenged contemporary design norms but were no use at all for catching flies.

Ten years later, another keen futurologist sat at home in the Hollywood Hills with his wife, a tape recorder and four-tenths of a gram of mescaline. Aldous Huxley was then 58, and Brave New World, his vision of a population seduced by consumption and coshed with state drugs, was already considered a classic. Huxley had read of Hofmann's experiments with LSD-25 and was curious to find out if the claims for such drugs stood up to the test.

Later, writing up his experiences in The Doors of Perception, Huxley described becoming aware of "a perpetual present made up of one continually changing apocalypse". For a while he had watched a vase of flowers breathing and became the legs of his typing table and chair. When his wife asked him about time, he replied only that "there seemed to be plenty of it".

Summarising his experiences, he noted that: "Though the intellect remains unimpaired and though perception is enormously improved, the will suffers a profound change for the worse. The mescaline taker sees no reason for doing anything in particular and finds most of the causes for which, at ordinary times, he was prepared to act and suffer, profoundly uninteresting ... In the final stage of egolessness there is an obscure knowledge that All is in all – that All is actually each."

What Huxley emphasised again and again was that he hadn't just taken a trip. That spring afternoon hadn't been a departure from reality, it had been an arrival at truth. "The other world to which mescaline admitted me was not the world of visions; it existed out there, in what I could see with my eyes open." He'd been admitted to the place to which the 18th-century poet, artist and visionary William Blake had pointed: "If the doors of perception were cleansed everything would appear to man as it is, infinite."

"Art and religion, carnivals and saturnalia, dancing and listening to oratory – all these have served, in H G Wells' phrase, as 'Doors in the Wall'," wrote Huxley. "All the vegetable sedatives and narcotics, all the euphorics that grow on trees, the hallucinogens that ripen in berries or can be squeezed from roots – all, without exception, have been known and systematically used by human beings from time immemorial ... The urge to escape from selfhood and the environment is in almost everyone almost all the time." So strong was his conviction that hallucinogens – taken in the right dosage and in the right spirit – allowed him a deeper way of being human that on his deathbed he asked his wife for "LSD, 100 micrograms, intramuscular" to ease his passing.

At exactly the same time as Aldous Huxley was seeing in LSD the potential for mass consciousness-expansion, the CIA was after it for something else entirely. The agency wanted it for plunder, and saw it as a sophisticated new smash-and-grab tool for prising open the unconscious. Between the mid-1950s and the early 1960s the CIA investigated its use (along with marijuana, coke, speed, heroin, laughing gas, mushrooms and barbiturates) as a weapon during interrogations.

As Martin Lee and Bruce Shlain described in their book Acid Dreams, by 1953 the CIA was so impressed by the results it was getting on coerced subjects that its scientists had begun experimenting on themselves. By December that year, having ordered 10 kilos from the European Sandoz laboratory – enough, at a standard dose, for some 100 million individual trips – they considered spiking the drinks at the end-of-year office party, though their self-experiments in consciousness-raising stopped shortly afterwards when one tripping specialist in biological warfare jumped out of a window.

Now wary of trying acid on themselves, the CIA's brains returned to using it on unwitting subjects instead: prostitutes, prisoners, psychiatric patients. They wondered about feeding it to Fidel Castro and Egypt's Gamal Abdel Nasser. For a while, they tried using LSD "under threat conditions" as a force-fed truth serum, in much the same way as the malevolent Dolores Umbridge used Veritaserum in Harry Potter. They speculated about using it to turn enemy agents, or about trying it on their own spies: agents picked up in the field need only take a tab of acid to be instantly transformed into babbling idiots. And then, just like every Sixties college kid, they wondered what would happen if the Russians got hold of it and decided to contaminate the water supply. What would be the effect on world peace if, say, the whole of LA or a warship's crew all began tripping simultaneously? If they all displayed the effects of the drug – paranoia, delusions of grandeur – would that mean the end of world peace, or would anyone really notice a difference?

In fact, they did try feeding acid ("EA-1729") to their own army. When 1,500 US military personnel were given the drug, the results ranged from "total incapacity to marked decrease in proficiency". The CIA only stopped its experiments when it found something much more powerful. BZ (or quinuclidinyl benzilate) did all that LSD could and more: a single aerosol dose would render its subjects either manic or delirious for up to three days, and – if administered with skill – completely immobilised them. One paratrooper on whom BZ was tried never returned from his trip. "Last time I saw him," said one colleague, "he was taking a shower in his uniform and smoking a cigar."

The introduction of more powerful hallucinogens meant the CIA could finally abandon acid. Its effects were too variable and its insights too wayward. In neurochemical terms, what they wanted was a nice neat burglary of just one room in the house, the one with the jewels and the guns. What they got with acid was the whole useless lot – kitchen sink and garden shed. But, if some of the agency's experiments with LSD were comic and some tragic, the conditions under which it was administered were often very dark indeed. Few of the non-military subjects knew they were being given the drug, and by dosing institutionalised subjects the CIA was straying into exactly the same territory as Josef Mengele's experiments on gypsies and Jews at Auschwitz during the Second World War.

After its abandonment by the security services, LSD got taken over by the hippies. From the government's point of view, the bad thing about a bunch of artists getting hold of it was that it made acid mainstream, but the good thing was that it made it so much easier to ridicule. A small but influential section of American society might get into the ideas Ken Kesey, Timothy Leary and Tom Wolfe were suggesting in books and psychedelic bus trips, but for most people it was either too far out or way too dull. No-one bores quite like a drugs bore, and nothing discredited acid more effectively than a bunch of lobotomised long-hairs all maundering on about transcendentalism and talking lampshades. Besides, by the late Sixties, misuse of the drug had produced a whole new group: acid-casualties, who had been gone for so long that they never quite returned – Syd Barrett of Pink Floyd being the prime example.

But if its Class A listing and its reputation as the go-to drug for stoner bores pushed acid back into the corners, its interest for scientists never completely went away. For a long time, the word among scientists was that research on psychedelics was a quick way to close down your career: in addition to the hippy stigma still attached to the drug, its legal classification made research almost impossible. Only a few lonely souls – including Prof David Nutt at London's Imperial College – have kept going. Once a government adviser on drug policy, Nutt now calls the ban on research "an outrage", akin to the 17th-century banning of the telescope.

Having spent many years flitting between the roles of scientific poacher and gamekeeper, Nutt has now settled firmly on the poachers' side. His stint as chair of the government's Advisory Council on the Misuse of Drugs came to an abrupt end in autumn 2009, following his suggestion that perhaps drugs should be classified according to scientific evidence of harm caused rather than according to levels of political outrage provoked. Nutt now chairs DrugScience (bankrolled by the EU and a hedge fund manager), which provides advice on drug use and misuse, and campaigns for a relaxation of the ban on research.

Part of the problem – and thus part of the attraction for Nutt and his peers – is that no-one knows exactly how LSD and the other psychedelics work. The most likely scenario is that they interfere with the parts of the brain which route and control much of our sensory experience. There are areas in the frontal lobe which act effectively as filters, processing the torrent of data the brain and body receive every second, prioritising some of it and junking the rest in order to give us a coherent experience of the world. It's thought that LSD messes with the filters, which means data that doesn't usually get through is suddenly available in wide-screen. In other words, it works not by adding something, but by removing. Cleansing, as Blake put it.

Like Huxley over 60 years ago, several researchers have also begun concentrating on LSD's capacity to smooth the passage between life and death. In early 2014, the first study of LSD to be approved by the US Food & Drug Administration in 40 years examined the effects of the drug, used in conjunction with psychotherapy, on 12 terminally ill patients. Those given low doses showed no significant change – if anything, their anxiety levels increased – but those given higher doses reported lower levels of anxiety and a greater sense of peace, whether they'd had a good trip or a bad one. "People are more scared of dying than they are of using drugs," pointed out one of the study's main funders.

Acid's main benefits (strong effects in tiny doses, non-habit-forming) are now being investigated by those trying to find a cure for addiction. Studies over the past decade have found that LSD is slightly more effective than standard pharmacological cures for alcoholism; a 2012 study covering 500 patients found that 59% showed reduced alcohol misuse and "significant beneficial effects" after taking LSD. When biologists were able to look at what happens to the brain while on hallucinogens, they discovered that some parts lit up like Christmas while others – the neuronal roundabouts and transport hubs – went suspiciously dark. Oddly enough, given the visions and hallucinations which characterise acid and ecstasy (MDMA), there's no more activity in the visual cortex than normal, though there is plenty going on within the parts of the brain associated with regulation of mood. Which, combined with subjects' reports of reduced stress and improved mood, further implies that some of the hallucinogens may have an influence on depression, anxiety and schizophrenia. Or, to put it unscientifically, if you've just seen the universe in a bowl of apples, you're maybe not so fussed about losing your job.

The Lazarus effect

Through all of this – its use first by Hofmann, then by the CIA, then by the artists and hippies and now again by the scientists – the final goal was identical. Whether for benign reasons or malign, by fair means or foul, everyone has been looking for the same treasure. LSD and the other hallucinogens seemed to provide the tools to open up the perceptions and offer a new way of seeing the world. As did another, even more effective route to altered consciousness: death.

For a long time, the vivid accounts of experiences by patients pronounced dead but later revived were dismissed as hallucinations. Just like the sufferers of phantom limb syndrome, who often report agonising pain in an arm or leg long since amputated, many of those who claimed Near-Death Experiences (NDEs) were either ignored or disbelieved. In theory, the brain stops functioning about half a minute after the heart stops beating. And, under the current definition, if for ten minutes or so there's been no detectable activity in your brain, you've got no pulse and you're not breathing, then you're dead. As a doornail. Or a parrot. Legally, morally, medically and physiologically, you've completely left the building.

Or not. The work of researchers such as Dr Sam Parnia, coupled with a growing body of neuroscientific evidence, suggests not only that in some people consciousness appears to persist after death, but that there are so-called "liminal states" which belong neither to life nor to death. "Death," as Parnia puts it in his 2013 book The Lazarus Effect, "is no longer a specific moment in time, such as when the heart stops beating, respiration ceases and the brain no longer functions. That is, contrary to common understanding, death is not a moment. It's a process – a process which can be interrupted well after it has begun."

His study of 330 patients who had survived cardiac arrest found that 39% reported some kind of awareness long after they had technically died, and between 10 and 20% could recount lucid and detailed perceptions, including thought-processes and memories, after all detectable brain function had ceased. Some described leaving their own bodies, some described mystical experiences and bright lights, and one patient was able to describe in detail the operating theatre in which he lay and the attempts to revive him long after his heart had stopped.

Many of the accounts are echoes of the studies made in the 1970s by Raymond Moody and detailed in his 1975 book Life After Life. Most of the methodology and almost all the science in the book was subsequently rubbished as the work of a credulous believer in the paranormal, but that did not stop it becoming a massive bestseller. In fact, it probably helped it. But, if the basic point still applies – that people can exist suspended for a time in no-man's-land – then shouldn't we redefine what we call life, and what we call death? Is there a kind of physiological purgatory which belongs neither to one state nor the other? And, beyond that, where does the conscious self stop and wider consciousness start?

Medicine has always recognised different stages of alertness, from full wakefulness through sleep right down to death. The Glasgow Coma Scale – the nearest we have to a rating for physical consciousness – grades patients from 15 downwards, based on their responses to different stimuli. A high score is indicated by the patient withdrawing from pain, responding to verbal commands and visually registering their surroundings. No-one scores lower than three. If you're three, then the next step down is six feet under.
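By way of illustration, here is a simplified sketch of how that grading is assembled, assuming the scale's standard three components (eye, verbal and motor responses); the labels are abbreviated reminders rather than full clinical criteria.

```python
# A simplified sketch of a Glasgow Coma Scale total, assuming the standard
# three components. Labels are abbreviated reminders, not full clinical criteria.

EYE = {"spontaneous": 4, "to speech": 3, "to pain": 2, "none": 1}
VERBAL = {"oriented": 5, "confused": 4, "inappropriate words": 3,
          "incomprehensible sounds": 2, "none": 1}
MOTOR = {"obeys commands": 6, "localises pain": 5, "withdraws from pain": 4,
         "abnormal flexion": 3, "extension": 2, "none": 1}

def gcs(eye: str, verbal: str, motor: str) -> int:
    """Sum the three responses: 15 is fully alert, 3 is the floor."""
    return EYE[eye] + VERBAL[verbal] + MOTOR[motor]

# A patient who opens their eyes to speech, is confused, and withdraws from pain:
print(gcs("to speech", "confused", "withdraws from pain"))  # 11
```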

The GCS has proved itself useful enough to become almost ubiquitous, though, like everything else, it isn't infallible. Almost half of all patients with traumatic brain injuries are either drunk or drugged, which complicates the task of the anaesthetist or the surgeon. Added to which, a lack of physical response to pain doesn't necessarily imply a lack of capacity to feel it. Just as there's a period when the brain locks down the body during sleep, so anaesthesia can lock down physical responses but leave mental processes or sensations intact.

So, instead of defining consciousness as a series of attributes or capacities, perhaps it's easier to see it as a series of deepening planes. Very broadly speaking, oceanographic currents move round the globe in belts. Polar ice is laid over warm water which covers cold water, as if the sea was not a single seamless whole but a confluence of rivers all sliding through each other. Everyone has at some time had the sensation of wading through water of differing temperatures – feet warm, knees tepid, chest cold.

Maybe consciousness is the same. Maybe it's just depths of selfhood, one laid over the other, through which we all swim. Sometimes we're sunbathing on the surface, sometimes in sleep or coma we've dived far down into silence. It's only a metaphor, but it is at least curious that the sea always slips into the way we describe conscious states – falling to sleep, deep trance, descents into oblivion. At any rate, some neuroscientists have begun using similar terms for the way the brain behaves during comas and persistent vegetative states. They describe the way neurons behave as being similar to sonar. When you're fully conscious the sonar's on, the signal's great, and everyone's receiving. When you're unconscious, the sonar's still on and the pings are still pinging, it's just that they're not going anywhere. They stay localised, starry little bursts of light and energy in the mind's deeps, firing alone through the darkness. And in a coma state the sonar is still on, but the signal itself is broken.

Those who work most closely with the brain do not lose their wonder at it, though neurosurgeons seem as divided as everyone else about exactly what, or who, keeps the show on the road. The neuroscientist Susan Greenfield describes looking down at a human brain for the first time as a student. "Well, first of all they smell of formalin. It's a really horrible smell. It stinks, but it keeps the brain firm while you're dissecting it, so you have to keep a set of gloves in a Tupperware box. I remember it vividly – I remember holding it and thinking, 'God, this was a person'. You can hold it in one hand and if it's ready for dissection, it's kind of browny-colour with dried blood vessels, and it looks like a walnut. Two hemispheres like two clenched fists."

She believes that consciousness is not "some disembodied property that floats free. I don't believe in the theory of panpsychism – that consciousness is an irreducible property of the universe and our brains are like satellite dishes picking it up. I can't disprove it, but assuming that consciousness is a product of the brain and the body, then it's inevitable that if the brain is changed then consciousness will change."

Similarly, in his 2014 memoir Do No Harm, her friend and erstwhile colleague the neurosurgeon Henry Marsh regards the muddle over what belongs to the mind and what to the brain as "confusing and ultimately a waste of time. It has never seemed a problem to me, only a source of awe, amazement and profound surprise that my consciousness, my very sense of self, the self which feels as free as air, which was trying to read the book but instead was watching the clouds through the high windows, the self which is now writing these words, is in fact the electrochemical chatter of one hundred billion nerve cells."

Some forms of neurosurgery are better done under local anaesthetic, which means the patient is awake and responding to questions throughout. How strange and how miraculous to spend your working life looking down at a brain within its bony casing whilst holding a conversation with its owner. Teilhard de Chardin would probably have loved neurosurgery, with his palaeontologist's mindset and his sense of the span of things. But does taking apart the brain, that living piece of physical origami, really get anyone nearer to knowing what consciousness is? Is it where the self resides, and if so, is that why brain diseases like Alzheimer's gnaw away at the stuff of the self? If time or disease pulls away someone's personality, burglarising all the stories that made them them and leaving nothing but a physical body, then has that disease made off with their consciousness too?

Dr Duncan MacDougall believed not just that consciousness and the soul were interchangeable, but that they could both be weighed. In 1901 MacDougall was working as a doctor treating terminally ill tubercular patients in Massachusetts. Since his patients' decline towards death followed a relatively predictable trajectory, he decided to test an idea he'd had by placing the beds of six of his sickest patients on scales. He then balanced those scales, sat back, and waited. At the moment of their death, he claimed, they got lighter. Or, as the New York Times put it: "The instant life ceased, the opposite scale pan fell with a suddenness that was astonishing – as if something had been suddenly lifted from the body. Immediately all the usual deductions were made for physical loss of weight, and it was discovered that there was still a full ounce of weight unaccounted for."

This, said MacDougall, was proof that the soul had mass. "The essential thing is that there must be a substance as the basis of continuing personal identity and consciousness, for without space-occupying substance, personality or a continuing conscious ego after bodily death is unthinkable." MacDougall tried the same hypothesis on 15 dogs and on several mice. None showed any change in weight, which he claimed was proof that only humans had souls. Since MacDougall's original sample was small (of the original six patients, two were excluded, two lost even more weight after death and one put it back on, which left only one to uphold his theory) it did not take long for the experiment to be discredited. The 15 unfortunate dogs, which had been drugged, died under protest.

Most of MacDougall's experiments were either daft or cruel. Like thousands before him and thousands afterwards, he snagged himself on two assumptions: one, that what doesn't have mass cannot exist; and two, that the soul must be the same thing as consciousness. Which is the point at which things start to disintegrate. Faustian stories of soul-selling and -searching are compelling because they suggest that something unquantifiable can be apparated into something real. But there's a point beyond which even stories can't reach.

So maybe de Chardin was right about the Omega Point, and maybe he wasn't. His ideas are gaining traction not so much because of their content but because, starting from a place of faith, he synthesised science, artificial intelligence and divinity. His advantage was that he was a multidisciplinarian and that he gave the old hope for a better heaven a catchphrase. But his noosphere can only really work as a point of departure for more questions. He envisaged his point of complexity and convergence as a moment of revelation, a final unified rising towards God. But even if he's right, we all still have free will. And if there's going to be a tipping-point towards a new universe, then we should make sure it tips the right way.
