A few weeks ago a bunch of reports claimed that, for the first time ever, a computer had passed the Turing Test. What followed was a half-hearted flash flood of apocalyptic histrionics about Skynet and cybercrime. (By the way, since when does stapling “cyber” to the start of any old noun suddenly make it a real word that adult humans are supposed to say? I must have missed that extremely unimaginative meeting.) But I’m afraid all you prospective Blade Runners will have to go back to your singularity subreddits empty-handed. Most of those articles are wrong for at least two reasons. Firstly, the computer program didn’t actually meet the original criteria of the test, or even the watered-down version of the test that everybody is saying it passed, and it kind of cheated by only imitating a 13-year-old child speaking a second language. Secondly, the articles neglect to mention that the Turing Test is monumentally stupid.
“Hold the sonic screwdriver, William Gibson”, I hear some of you bamboozledly cry, “I don’t even know what the Turing Test is!” Fret no more, meatbeing – it’s a check for human-like artificial intelligence. According to Alan Turing, the creator of the test, if a machine can convince a significant majority of people that it is a human in blind, text-based conversations, it can be said to be thinking. Turing was a codebreaker, mathematical mastermind and father of modern computing. He also had a boyfriend, which didn’t go down too well in 1950s Britain. After being encouraged by lawyers to plead guilty to gross indecency, he was convicted, chemically castrated and ostracised. Two years later he killed himself by eating an apple laced with cyanide. (Fun fact: there are rumours that Apple’s originally rainbow-coloured logo was a secret tribute to Turing. Apple denies it, but then again, they would – they have enough PR trouble distancing themselves from the suicide nets haloing their factories.)
Turing’s writing on artificial intelligence has defined the field for two generations. Although he argued his ideas with nuance, they have since been simplified to the notion that if a computer talks, acts and quacks like a human it has the mind of a human. The logical implication of this is that our minds are just algorithms that turn input into output. This conception of what minds are is not only flawed but dangerous, because it completely ignores consciousness, the foundation of our humanity.
One of the many steaming mugs of philosophy that have been poured onto the Turing Test’s circuits was the Chinese Room thought experiment brewed by John Searle. The gist of Searle’s argument is that if a guy who didn’t speak Chinese sat in a closed room with a huge book of Chinese questions and answers, then received questions in Chinese from under the door, he could look up the characters and reply correctly without ever understanding a word of the conversation. In the same way, even a superficially advanced A.I. is still just spitting out garbage according to formula, without ever grasping any meaning. Such a computer would therefore be a philosophical zombie. It might pass as human in the Turing Test, but it has no qualitative experience or conscious thought whatsoever.
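Searle’s room is, in effect, a lookup table. A minimal sketch in Python (the phrasebook entries are invented for illustration) shows how a program can produce perfectly fluent replies while containing exactly zero understanding:

```python
# A toy Chinese Room: answers come from pure symbol lookup.
# The "phrasebook" stands in for Searle's huge book of rules.
PHRASEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫王先生。",    # "What's your name?" -> "I'm Mr. Wang."
}

def chinese_room(question: str) -> str:
    """Return a canned reply with no notion of what either string means."""
    return PHRASEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # fluent output, zero comprehension
```

The man in the room, like `chinese_room`, manipulates symbols by shape alone; the conversation’s meaning, if it exists anywhere, lives outside the mechanism.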
Some academic circles have raged against this filthily unmathematical argument. As computer programming has improved, those who claim that it can’t do everything are increasingly ridiculed as too sentimental – or worse, superstitious – to accept reality. One such taunter is Ray Kurzweil, a sort of A.I. John the Baptist and major scientist for megalomaniacal datamining company Google. Kurzweil claims that Searle contradicted himself by writing that “the machine speaks Chinese but doesn’t understand Chinese.” Because as anyone who has ever met a Furby knows, nothing ever says something without understanding exactly what it means. To Kurzweil and his “futurist” disciples, observable input and output are all that matter, and consciousness may as well not exist.
This might seem like a rarefied debate between wizened boffins, but it points to a deeper problem with science itself. Whenever I hear somebody say that science is flawed my immediate reaction is “Jesus Christ”, because that’s usually who barrels in uninvited the very next moment, with half a sack of goon under his arm and a “really cool drinking game that I think you guys will love.” But I’m not saying that we have any better tools than science to understand the universe, or that everything is equally true, or that souls exist. I am saying that science is a tool that measures and describes the relationships between things or components, not things or components in and of themselves. To a system of thought designed to find objectivity, subjectivity is necessarily incomprehensible.
This isn’t just the under-researched screed of a tarot-reading philosophy drop-out. Well, it is, but people who know what they’re talking about agree with me. A neuroscientist friend of mine told me that at the very first lecture of her course, her cohort was told that neuroscience deals with two problems. The first, called the Easy Problem, is about investigating which parts of the brain deal with different functions and how to deal with their malfunction. The Hard Problem is what consciousness is and why we have it. In the words of her lecturer, “we don’t like to talk about the Hard Problem.”
Consciousness isn’t magic. Our minds are an emergent property of a biochemical evolutionary process, like how a car’s capacity to drive arises from components that can’t drive on their own. These minds cannot be recreated by imitating how we see them respond to stimuli any more than a painting of how a mountain looks is an actual mountain. Experiments like the Blue Brain Project take a more thorough approach than some “Hard A.I.” advocates by endeavouring to model the human brain neuron by neuron. But even if scientists managed to create a computer simulation of the human brain down to the atomic level – and it will take a shitload of Flappy Bird developers to pull that off – we don’t know if it’ll really work, because we’re not sure if conscious minds are substrate neutral (a piece of software that can be uploaded to any old thing) or if they require actual squelchy matter to exist. A one-to-one map of a territory is not the territory itself.
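Emergence itself, at least, is easy to demonstrate in code. Here is a minimal sketch using Conway’s Game of Life (standard rules, textbook glider pattern): no individual cell can move, it only switches on or off, yet a “glider” travels across the grid – a property of the whole that none of the parts possess:

```python
from collections import Counter

def step(live):
    """Advance a set of live (x, y) cells by one Game of Life generation."""
    # Count how many live neighbours every nearby cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next turn with exactly 3 neighbours,
    # or 2 neighbours if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider. Each cell just flips on and off in place...
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# ...yet after four generations the shape has "moved" down-right by one cell.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Whether consciousness emerges from neurons the way gliders emerge from cells is exactly the open question – the sketch only shows that wholes can do what parts cannot.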
Yet our society goes on claiming that it is, and that the meticulousness of our mapping proves that there was no real territory in the first place. The human consequences of this are profound – we have created a world that systematically deletes the subjective from the objective. The agonies and ecstasies of the human condition are reduced to “chemical imbalance” and malfunctioning consciousnesses – like Alan Turing’s – are fixed with drugs. The internet turns us into servers or nodes, connected in a vast network to others who we barely believe are alive. The dark art of marketing treats billions like wind-up toys, with triggers and pulleys just waiting to be exploited by anyone with the moral nihilism to use psychology for personal gain. And as for the economy, well – we are all ones and zeroes in the market.
The end point of this tide of inhumanity is not just alienation, it’s solipsism – the belief that other consciousnesses do not even exist. For as many of Kurzweil’s disciples would claim, we have no objective evidence that they do. And as humanity is written out of itself, the only evidence of our personal subjectivity will be the ceaseless horror of our total isolation.
R.D. Laing was a radical psychologist who spoke lucidly and prophetically on this threat. (That’s R.D. Laing with an “R”, not to be confused with his similarly named daughter, Canadian songstress and lesbian icon Justin Bieber.) To quote:
We are not concerned with the interaction of two objects, nor with their transactions within a dyadic system; nor are we concerned with the communication patterns within a system comprising two computer-like sub-systems that receive and process input, and emit outgoing signals. Our concern is with two origins of experience in relation.
– The Politics of Experience
Computers are not like us. The laptop on which I type, this difference engine, this abacus of light, this glass bead game, is truly a remarkable creation. But it is still our creation, and any likeness I recognise in its polished surface is only there because we have built it in our image. We don’t need to fear the fairytale that machines will turn into humans. But we must fight, at every turn, the transformation of ourselves into machines.