The Academy Awards are Sunday night, and Best Picture nominee “Her” has some people wondering whether the movie is more science fiction or reality. The story centers on a man who falls in love with an intelligent computer operating system (OS) that has a female voice and personality. While this may seem like science fiction and a long way off, we asked Jackie Fenn, vice president and Gartner Fellow, for her thoughts on artificial intelligence and whether this is the direction the field is heading. Ms. Fenn said the day when your computer will know you better than any human is not as far off as you may think.
Q: Is an artificially intelligent operating system like the one depicted in the film a possibility?
A: Many of the capabilities of Samantha, the intelligent OS in the movie “Her”, are already here, including speech and natural language recognition, and some conversational abilities. Much of the recent progress is due to advances in machine learning, whereby the system doesn’t have to be preprogrammed for every eventuality, but learns from experience. There are already virtual personalities such as Cleverbot that learn from their discussions with humans, with impressive results as shown by this YouTube video of two Cleverbots conversing.
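To make the learning-from-experience idea concrete, here is a minimal sketch in Python of a Cleverbot-style retrieval bot (all names and behavior are illustrative, not Cleverbot's actual design): instead of being preprogrammed with responses, it simply remembers what humans have said in past conversations and reuses those replies.

```python
import random
from collections import defaultdict

class LearningBot:
    """Toy Cleverbot-style bot: replies with whatever humans have
    previously said in response to the same prompt."""

    def __init__(self):
        # Maps a lowercased prompt to the replies humans have given to it.
        self.replies = defaultdict(list)

    def observe(self, prompt, human_reply):
        """Learn from a human conversation: remember that `human_reply`
        followed `prompt`."""
        self.replies[prompt.lower()].append(human_reply)

    def respond(self, prompt):
        """Reuse a learned reply if one exists; otherwise ask for more."""
        candidates = self.replies.get(prompt.lower())
        if candidates:
            return random.choice(candidates)
        return "Tell me more."

bot = LearningBot()
bot.observe("How are you?", "Fine, thanks!")
print(bot.respond("how are you?"))   # a reply learned from a human
print(bot.respond("What is love?"))  # nothing learned yet
```

Real systems replace the exact-match lookup with statistical models, but the principle is the same: the system's behavior comes from accumulated experience rather than hand-written rules.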
Once a computer can get smarter from new information, there’s nothing to stop it from becoming as good as, and eventually better than, a person doing the same task. We’ve already seen this in tasks as ‘uniquely human’ as grading student essays and predicting which wine will age best. Every day there’s a new example of a task we would have thought only a human could do, except now a machine can do it better. So what’s to stop an OS from becoming a better companion than most humans? The more it interacts with you, the more it learns about what pleases you and what doesn’t, until it knows you better than you know yourself.
Humor and creativity will be among the more challenging areas for artificial intelligence, but even here researchers are experimenting with clever algorithms and deep learning. If a computer can learn what makes people laugh – and more importantly what makes you laugh – based on watching and analyzing over time, there is no theoretical reason that a computer couldn’t eventually display and respond to humor. Similarly with music or art – by experimenting, analyzing and learning, it could figure out which compositions create the best emotional resonance in the human brain.
Once an artificially intelligent computer achieves these milestones, we get to the thorny challenge of consciousness and will. If an artificially intelligent computer exhibits its own unique goals and emotions in an appropriate way, how will we ever tell whether it is conscious? Even if our philosophy of life doesn’t allow us to credit an inanimate object with consciousness (although what if the computer or robot was built from live tissue?), it may not matter. Dutch scientists found that people hesitated to switch off a cute robot cat begging for mercy, taking nearly three times as long when the cat was perceived as intelligent and agreeable. The way we react to a funny, smart and helpful entity is hardwired into the human brain, so we may have less choice than we imagine about how we relate to our future artificial intelligence companions.
Q: While society grapples with the recent stigma of surveillance and identity theft threatening to undo some of the latest innovations in human-computer interaction, other natural user interface technologies are making their way into the forefront. In particular, interest is bubbling over efforts to improve natural-language speech recognition and, more ambitiously, to ascertain mood or emotion by interfacing with brain waves. How will these technologies develop and be used?
A: In the movie, Samantha’s input was limited to voice and video, which already provides a wealth of information about a person’s emotional state that goes beyond what other humans might detect. For example, the micro-expressions that reveal a person’s true feelings last less than a fifth of a second and usually go unnoticed by others, but a computer analyzing a video stream could easily spot them.
In a world of cheap sensors and quantified-self aficionados, computers will be able to track a person’s vital signs such as heart rate, blood pressure and temperature, and see how they change based on a person’s activities or sensory stimuli. Put that together with advances in brain-computer interfaces that determine intent and emotion directly from brain signals, and your OS will be able to figure out your needs without a conversation. Right now, much of the focus is on reading brain signals, but technologies such as transcranial stimulation have the potential to change brain states as well. If you wanted it to, your OS would be able to put you in a more focused or cheerful state of mind if it noticed you getting too distracted or grumpy.
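The kind of sensor fusion described here can be sketched very crudely in Python. The thresholds and state labels below are purely hypothetical, for illustration; a real system would learn these mappings from data per individual rather than hard-code them.

```python
def estimate_state(heart_rate, skin_temp_c, typing_errors_per_min):
    """Toy rule-based fusion of cheap-sensor readings into a coarse
    emotional-state label. All thresholds are illustrative only."""
    score = 0
    if heart_rate > 100:           # elevated pulse suggests stress or arousal
        score += 2
    if skin_temp_c > 37.2:         # mild flush
        score += 1
    if typing_errors_per_min > 5:  # distracted or agitated typing
        score += 1
    if score >= 3:
        return "stressed"
    if score >= 1:
        return "elevated"
    return "calm"

print(estimate_state(110, 37.5, 6))  # stressed
print(estimate_state(70, 36.6, 1))   # calm
```

Even this caricature shows the design point: no single sensor is decisive, but combining several weak signals yields a usable estimate of a person’s state.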
Q: In the 2013 Emerging Technologies Hype Cycle, enterprises were encouraged to look beyond the narrow perspective that only sees a future in which machines and computers replace humans. In fact, by observing how emerging technologies are being used by early adopters, there are actually three main trends at work. These are augmenting humans with technology; machines replacing humans; and humans and machines working alongside each other. How should enterprises approach these trends?
A: The first thing is to acknowledge that artificial intelligence and smart machines – including robots – are going to represent a juggernaut trend for the next decade. Re-evaluate tasks that you thought only humans could do – can you redesign how processes are performed and decisions are made within your enterprise based on new smart technologies? You’ll need to reassess this every year or two as the capabilities improve.
Look in particular at how to balance tasks between humans, software and robots to take best advantage of the abilities of each. There are still many challenging endeavors – including chess – where the best solution is a human working together with a computer.
Hire an ethicist or two, as ethical tradeoffs are going to be one of the few areas that remain firmly in the domain of humans. Computers may be able to answer a question faster and more accurately than any person, but it’s going to be the humans who decide what is the right question to ask.
Gartner, Inc. (NYSE: IT) is the world's leading research and advisory company. The company helps business leaders across all major functions in every industry and enterprise size with the objective insights they need to make the right decisions. Gartner's comprehensive suite of services delivers strategic advice and proven best practices to help clients succeed in their mission-critical priorities. Gartner is headquartered in Stamford, Connecticut, U.S.A., and has more than 13,000 associates serving clients in 11,000 enterprises in 100 countries. For more information, visit www.gartner.com.