I can’t let you do that, Dave

In just a few years, computers will be able to hold meaningful conversations with humans. It’s a bold prediction, but there’s no doubt that advances in natural language interaction, by commercial pioneers such as Artificial Solutions as well as by academic researchers, are poised to usher in a new era in human-computer interaction.

Dr Wamberto Vasconcelos, who heads a natural language project in the University of Aberdeen’s computer science department, recently attracted an unusual amount of media attention with the above prediction. He believes improvements in the way computers interact with humans will allow the creation of a new generation of “intelligent” autonomous systems.

The intelligence is artificial, of course, but what will set these systems apart from their predecessors is their ability to reason with their human masters and, if necessary, argue the case for a particular course of action that humans may have overlooked or rejected. Alternatively, they could warn against a human decision and explain why they believe it is not appropriate.

This has long been a potent theme for science fiction writers. When HAL delivered the immortal line “I’m sorry, Dave. I’m afraid I can’t do that” in Stanley Kubrick’s “2001: A Space Odyssey”, the general public woke up to the idea that computers would one day be able not just to speak but also to understand and reason with humans, and perhaps disobey them as well.

Today, it is easy to forget just how revolutionary this notion was at the time. When the film was released in 1968, the popular perception of a computer was of an oversized adding machine that could do the accounts and not much else.

I recently watched Danny Boyle’s 2007 sci-fi movie “Sunshine”, which reprises the idea of an all-knowing computer that won’t do as it is told. In this movie, set 50 years in the future, a team of astronauts is sent to re-ignite the dying sun.

At a crucial point in the journey, the computer takes back control of the spaceship from the human crew, who have deviated from the mission to try to save the lives of two crew members.

The computer argues that the success of the mission comes above everything else. But just as it is about to return the ship to its original course, the ship’s commander issues an emergency command that overrides the computer and enables the two missing crew members to be rescued.

Both movies raise the moral dilemma of just how far humans should allow computers to act autonomously. Of course, the whole point of letting a computer control a spaceship or, to use a more mundane example, a fly-by-wire commercial aircraft, is to make life easier for the human crew.

But humans will only willingly relinquish the controls of their spaceship or car if they feel they can trust the computer.

The big problem is that computers do make mistakes that are not always obvious either to them or to their human counterparts. To use a topical example, Apple’s Maps application, used on its latest iPhone, has been revealed to be full of geographical errors. Some are glaringly obvious — Berlin is not in Antarctica — but many are not and could have serious consequences.

“Evidence shows there may be mistrust when there are no provisions to help a human to understand why an autonomous system has decided to perform a specific task, at a particular time, and in a certain way,” says Dr Vasconcelos.

He believes the solution lies in a new generation of autonomous systems that are able to carry out two-way conversations with humans.

Humans could then interrogate the computer, asking it to justify its decision or provide additional information if the reasoning was unclear. If they are still not sold on the computer’s plan, they could suggest alternatives or point out issues with the chosen course of action, all with the aim of ensuring the success of the project at hand.
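
To see what such a conversation might look like in practice, here is a minimal sketch in Python. It is entirely hypothetical rather than drawn from the Aberdeen project: the system simply records the reasons behind each decision so that a human can ask why, raise an objection, and propose an alternative.

```
# Hypothetical sketch of a "justifiable decision" loop: the system logs the
# reasons for each choice so a human can ask "why?" and push back. All names
# are illustrative, not taken from the Aberdeen project.

from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    reasons: list[str] = field(default_factory=list)

class AutonomousSystem:
    def __init__(self):
        self.log: list[Decision] = []

    def decide(self, action: str, reasons: list[str]) -> Decision:
        """Choose an action and keep its justification for later scrutiny."""
        decision = Decision(action, reasons)
        self.log.append(decision)
        return decision

    def explain(self, decision: Decision) -> str:
        """Answer a human 'why?' with the recorded reasons."""
        return f"I chose '{decision.action}' because: " + "; ".join(decision.reasons)

    def challenge(self, decision: Decision, objection: str, alternative: str) -> str:
        """Let a human object and suggest an alternative; the system either
        defends its original choice or adopts the human's proposal."""
        if objection in decision.reasons:
            # The concern was already weighed when deciding; restate the case.
            return f"Noted, but '{decision.action}' still best serves the mission."
        # The objection raises something the system had not considered.
        revised = self.decide(alternative, [f"human objection: {objection}"])
        return f"Accepted. Switching to '{revised.action}'."

# Example dialogue, loosely echoing the "Sunshine" scenario above
ship = AutonomousSystem()
d = ship.decide("return to original course",
                ["mission success outranks individual safety"])
print(ship.explain(d))
print(ship.challenge(d, "two crew members can still be rescued",
                     "divert to rescue crew"))
```

Even in this toy form, the point of the design is visible: the justification is stored alongside the decision itself, so the system can always account for what it did and revise its plan when a human raises something it had not considered.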

More about the University of Aberdeen’s work here.
