Natalie Keir takes a look at the amazing applications of artificial intelligence

When I was 10 years old, it was my life’s mission to convince my parents to buy me a mobile phone. I remember trying to use my very under-developed cunning and wit to persuade them that I didn’t want a phone for social purposes; oh no, I wanted one purely for the sake of my safety. This can’t have been terribly convincing, considering the most dangerous part of my day was probably when I attempted PE. Eventually my parents complied, probably due to my infallible skills of persuasion, and bought me my phone of choice, along with a pink camouflage ‘superwoman’ phone case. I am afraid to say that my taste in phones hasn’t improved much since then. Until recently, I never really cared about having a BlackBerry or an iPhone; I would rather have taken the cash equivalent. However, this Christmas my boyfriend received an iPhone 4S, and from that moment, everything changed.

It was not the fantastic Internet connection, the unrivalled graphics, or any of the other stuff people seem to go for in phones that got me: it was Siri, the robotic ‘personal assistant’. The highlight of my week is when my boyfriend asks me to send a text for him, and I have a chance to interact with Siri. It is the most amazing gadget, especially when you think about what Siri has to go through to entertain you with its smooth conversation. When you say something to Siri, all it receives is a sound wave: a signal of varying amplitude and frequency. From this sound wave, Siri has to work out what you are saying and what you want it to do. This is much tougher for a computer than you might first think.
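
To make that concrete, here is a minimal sketch in Python of the very first step a system like Siri has to take. It is a generic illustration, certainly not Apple’s actual pipeline: the waveform is synthetic and the sample rate and frame sizes are illustrative assumptions. The idea is simply to chop the incoming sound wave into short frames and measure which frequencies are present in each one, so the later stages have something more meaningful than raw amplitudes to work with.

```python
# A generic first step in speech recognition (not Apple's pipeline):
# turn a raw sound wave into a time-frequency picture using only numpy.
import numpy as np

sample_rate = 16000                       # samples per second (an assumption)
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
# Stand-in for one second of recorded speech: two tones plus a little noise.
waveform = (0.6 * np.sin(2 * np.pi * 440 * t)
            + 0.3 * np.sin(2 * np.pi * 880 * t)
            + 0.05 * np.random.randn(sample_rate))

def spectrogram(signal, frame_size=400, hop=160):
    """Split the signal into short overlapping frames and measure the energy
    at each frequency in each frame: a picture of 'which frequencies, when'."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(signal) - frame_size, hop):
        frame = signal[start:start + frame_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))   # energy per frequency bin
    return np.array(frames)                         # shape: (time, frequency)

features = spectrogram(waveform)
print(features.shape)   # (98, 201): 98 time steps, 201 frequency bins
```

A real assistant then has to map those frames onto phonemes, words and, finally, your intent, which is where the contextual knowledge discussed below comes in.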

Let’s say we tell a computer a very short, simple story: “The girl saw a necklace in a shop window; she loved it”. To a human, that seems pretty simple, but when you examine the story, there is a lot that could be misinterpreted. If we were to ask what the girl loved, a human would say “the necklace”, but a computer would not know whether it was the necklace or the shop window. Next, we could ask if the girl was able to pick up the necklace. A human would know that she couldn’t, as the window is in the way, but the computer wouldn’t know this, as there is nothing to suggest that the window is solid. Finally, you could ask how the girl could get hold of the necklace. A human could answer that she should enter the shop and ask the shopkeeper, but a computer would have no idea: it wouldn’t know that the door could open. To combat all of these problems, the computer would need a huge database of contextual information, consisting of all the things that humans inherently know. We take our intrinsic knowledge for granted when it comes to things like the relative desirability of inanimate objects, but that is data that would have to be quantified and fed into the computer: a daunting task for any computer scientist.
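
As a toy illustration (not a real language system), the sketch below hand-codes a few of the background facts the necklace story quietly relies on. Every entry is something a person ‘just knows’; the entities and property names are invented purely for this example.

```python
# A hand-built scrap of the 'huge database of contextual information':
# each fact below is something a human knows without ever being told.
facts = {
    ("necklace", "is desirable"): True,       # people tend to love jewellery,
    ("shop window", "is desirable"): False,   # not panes of glass
    ("shop window", "is solid"): True,        # so she can't reach through it
    ("shop door", "can open"): True,          # but she could walk inside
    ("shopkeeper", "can sell things"): True,
}

def ask(entity, prop):
    """Answer a yes/no question, or admit ignorance if the fact was never stored."""
    return facts.get((entity, prop), "unknown")

print(ask("shop window", "is solid"))    # True      -> she can't grab the necklace
print(ask("shop door", "can open"))      # True      -> she could go in and ask
print(ask("necklace", "is fragile"))     # 'unknown' -> nobody told the computer
```

The hard part is not the lookup; it is that a usable version of this table would need millions of such facts, plus some way of deciding which of them a given sentence actually calls for.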

Another problem that has been plaguing computer scientists for years is getting a computer to recognise simple objects, such as a mug. Although this challenge seems trivial, it is one of the first steps in developing robots that could perform actions in the same way a human would. The difficulty is that a robot sees a mug as a set of coordinates, and if two mugs are not identical then their coordinates won’t be either. Even if thousands of photos of mugs were fed into the computer’s memory, it still wouldn’t necessarily recognise a mug it had never seen before. As much as scientists loved examining mugs, a breakthrough was definitely needed, and the Artificial Intelligence team at Stanford University managed to make it happen.

To teach a robot to think, you need to understand how a human thinks, and it was recently discovered that the human brain works in a much simpler way than previously thought. The pink, wrinkly tissue that makes up the outer layer of the brain is called the neocortex, and it is split into different regions for different senses. It was found that if you swap the auditory and visual connections, so that the eyes are connected to the auditory part of the brain and the ears to the visual part, the brain can still learn to interpret the new information. Essentially, the whole brain runs on the same ‘program’. Mimicking this program lets scientists train computers in much the same way the brain trains itself. In this way, scientists have programmed a computer to do things that even highly skilled humans struggle with, such as flying a helicopter… upside down: sustained inverted flight sits at the very limit of what the most talented human pilots can manage.
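
The sketch below is a heavily simplified, hypothetical version of that recipe, written in Python with numpy: a tiny two-layer network that learns to tell noisy 3×3 ‘images’ containing a horizontal bar from ones containing a vertical bar. It is nowhere near the Stanford helicopter controller, but it shows the common program at work: show the network examples, measure its errors, nudge the connection weights, and it can then label patterns it has never seen before.

```python
# A toy version of 'learning from examples' (not Stanford's code): a small
# neural network that learns to recognise a pattern class, not memorise photos.
import numpy as np

rng = np.random.default_rng(1)

def make_image(kind):
    """A noisy 3x3 image with one bright row (kind 0) or column (kind 1),
    so no two images are ever exactly alike."""
    img = rng.normal(0.0, 0.1, (3, 3))
    if kind == 0:
        img[rng.integers(3), :] += 1.0    # horizontal bar
    else:
        img[:, rng.integers(3)] += 1.0    # vertical bar
    return img.ravel()

labels = rng.integers(0, 2, 400)
images = np.array([make_image(k) for k in labels])
y = labels.reshape(-1, 1).astype(float)

# Two layers of adjustable connection weights: the learning 'program' is fixed,
# only these numbers change with experience.
W1, b1 = rng.normal(0, 0.5, (9, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                      # show examples, correct mistakes, repeat
    hidden = sigmoid(images @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    grad_out = output - y                  # how wrong was each guess?
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ grad_out / len(y)
    b2 -= grad_out.mean(axis=0)
    W1 -= images.T @ grad_hid / len(y)
    b1 -= grad_hid.mean(axis=0)

# Brand-new images the network never saw during training.
test_labels = rng.integers(0, 2, 100)
test_images = np.array([make_image(k) for k in test_labels])
predictions = sigmoid(sigmoid(test_images @ W1 + b1) @ W2 + b2) > 0.5
print("accuracy on unseen images:", (predictions.ravel() == test_labels).mean())
```

Nothing about bars is written into the code itself; the same loop, fed different examples, would learn a different task, which is the sense in which it is all one ‘program’.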

The possible applications for artificial intelligence are unbounded, and some diagnostic equipment in hospitals is already taking advantage of this developing technology. At the University Hospital Lund in Sweden, ‘artificial neural networks’ are being used to help diagnose heart attacks in patients with chest pain. Researchers exposed the computer to thousands of past medical records, and, by learning from that experience, it is now able to diagnose heart attacks more accurately than an experienced cardiologist. It is predicted that artificial intelligence could completely transform the medical industry, and it seems that the change can only be positive.
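
For a flavour of the general approach (and only that: the data below is synthetic, the feature names and outcome rule are invented, and this is not the Lund hospital’s actual system), here is a short Python sketch that trains a small neural network on made-up ‘past records’ and then scores a new patient.

```python
# A hedged sketch of learning a diagnosis from past records.
# Synthetic data and an invented outcome rule -- not the Lund system.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 2000
# Made-up records: [age, systolic blood pressure, troponin level, chest-pain score]
records = np.column_stack([
    rng.normal(60, 12, n),       # age (years)
    rng.normal(135, 20, n),      # blood pressure (mmHg)
    rng.exponential(0.05, n),    # troponin, a heart-damage marker (ng/mL)
    rng.integers(0, 10, n),      # pain score (0-9)
])
# Invented rule standing in for the real outcomes written in the notes:
heart_attack = (records[:, 2] > 0.1) & (records[:, 3] > 4)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,),
                                    max_iter=2000, random_state=0))
model.fit(records[:1500], heart_attack[:1500])          # learn from 'past' cases
print("accuracy on unseen records:",
      round(model.score(records[1500:], heart_attack[1500:]), 2))

new_patient = [[72, 150, 0.30, 8]]                      # one new arrival
print("estimated probability of heart attack:",
      round(model.predict_proba(new_patient)[0][1], 2))
```

The principle is the one described above: the network is tuned on thousands of past cases and then judged on cases it has never seen.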

The world of artificial intelligence anticipated in so many sci-fi films is still a far-fetched fantasy, and much of it may never exist. In 1972, Geoffrey Hoyle, a sci-fi writer and futurologist, wrote a list of predictions of how life would have changed by 2010. For the most part, his predictions were pretty accurate: Internet shopping, Skype, and webcams are all commonplace nowadays. He also predicted that everybody would be wearing jumpsuits. Thankfully, that less-than-desirable fate has not yet befallen us, but the general accuracy of the list shows just how close such predictions can come to reality. If the development of artificial intelligence continues at the anticipated rate, we are sure to have some exciting amenities coming to our aid over the next few decades.

 

Natalie Keir

Image credit – Liz Henry