Hilary Boden takes a look at modern artificial intelligence

In 1950, Alan Turing introduced a fundamental idea to the field of artificial intelligence: the Turing test. Through a process of questions and answers, the test evaluates a machine’s ability to exhibit intelligent behaviour. Passing involves not answering questions correctly, but answering them in the most human way. Human judges converse by text with unseen entities, which could be either human or artificial. Based on this conversation alone, they must then determine whether their correspondent is a person or a machine. If a machine were to pass the test, the line distinguishing man from machine would become rather blurred. In the 1950 paper proposing his test, Turing called it “the imitation game”.

Turing’s concept has had a significant impact on the public perception of technology, and robotics in particular. A heightened awareness of the relationship between mankind and the artificial constructs we create has found expression in the world of science fiction. The series of questions posed to would-be ‘replicants’ (androids) in the future vision of Ridley Scott’s Blade Runner is probably the most famous science-fiction dialogue based on the Turing test. Turing himself had visions of how machines might progress, speculating that by the year 2000 “an average interrogator will not have more than a 70 per cent chance of making the right identification”. Naturally, this would mean that an artificial construct would pass for human at least 30% of the time. For over 60 years this prediction failed to come to fruition, as no software could fool human judges at that rate.

On June 23rd this summer, the centenary of Turing’s birth, an event organised by the University of Reading called the “Turing test marathon” came close to seeing a computer program pass the Turing test. The event was held at Bletchley Park, a fitting venue: it was there that Turing played a crucial role in cracking the Enigma code during World War II. It involved 30 human judges interacting electronically with 30 hidden candidates, 25 of which were human and five of which were advanced “chatbots” specifically designed to imitate human intelligence and communication skills. Around 150 conversations were held, and a Russian-developed program called Eugene managed to dupe the judges 29.2% of the time, just falling short of Turing’s 30% mark.
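
To put the scoring in concrete terms, the short Python sketch below shows how a deception rate like Eugene’s 29.2% would be calculated from individual judges’ verdicts. The event’s per-conversation results were not published, so the verdicts here are invented purely for illustration.

  # Hypothetical scoring sketch for a Turing-test marathon.
  # True means the judge mistook the machine for a human in that conversation.
  verdicts = [True, False, False, True, False, False, True]  # illustrative data only

  fooled = sum(verdicts)                   # conversations in which the judge was duped
  deception_rate = fooled / len(verdicts)  # fraction of conversations 'won' by the machine

  print(f"Duped the judge in {fooled} of {len(verdicts)} conversations ({deception_rate:.1%})")
  # Turing's benchmark: misidentified as human in at least 30% of conversations.
  print("Meets Turing's 30% mark:", deception_rate >= 0.30)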

Many question the relevance of the Turing test as a measure of progress in the field of artificial intelligence. Pat Hayes of the Institute for Human and Machine Cognition asserts: “That very simplistic idea, that we’re trying to imitate a human being, has sort of become an embarrassment. It’s not that we’re failing, which is what a lot of people think – it’s that we’ve decided to do other things which are far more interesting.” Certainly, artificial intelligence has been applied more successfully to particular tasks, such as the efficient and intelligent sifting of the vast internet performed by search engines.

While scientists may appear less concerned with creating a specifically ‘human’ artificial construct, the question remains as to whether the machine-human divide could ever be significantly narrowed. Many consider it impossible, believing machines essentially lack that unquantifiable, unidentifiable ‘thing’ (does one dare call it a soul?) which enables humans to have consciousness. Interestingly, others reject the notion of an ‘essential’ difference from the outset, maintaining that only a difference of complexity separates man from machine: the philosopher Daniel Dennett, of Tufts University in Massachusetts, simply asserts: “It’s not impossible to have a conscious robot. You’re looking at one.”


Hilary Boden

Image by Rob Boudon