Can a Robot Learn Right From Wrong?


Adrianne Jeffries | The Verge | May 27, 2014

In Isaac Asimov’s short story “Runaround,” two scientists on Mercury discover they are running out of fuel for the human base. They send a robot named Speedy on a dangerous mission to collect more, but five hours later, they find Speedy running in circles and reciting nonsense.

It turns out Speedy is having a moral crisis: he is required to obey human orders, but he’s also programmed not to cause himself harm. “It strikes an equilibrium,” one of the scientists observes. “Rule three drives him back and rule two drives him forward.”


Asimov’s story was set in 2015, which was a little premature. But home-helper robots are a few years off, military robots are imminent, and self-driving cars are already here. We’re about to see the first generation of robots working alongside humans in the real world, where they will be faced with moral conflicts. Before long, a self-driving car will find itself in the same scenario often posed in ethics classrooms as the “trolley” hypothetical — is it better to do nothing and let five people die, or do something and kill one?

There is no right answer to the trolley hypothetical, and even if there were, many roboticists believe it would be impractical to anticipate every scenario and program what the robot should do.

“It’s almost impossible to devise a complex system of ‘if, then, else’ rules that cover all possible situations,” says Matthias Scheutz, a computer science professor at Tufts University. “That’s why this is such a hard problem. You cannot just list all the circumstances and all the actions.”


Instead, Scheutz is trying to design robot brains that can reason through a moral decision the way a human would. His team, which recently received a $7.5 million grant from the Office of Naval Research (ONR), is planning an in-depth survey to analyze what people think about when they make a moral choice. The researchers will then attempt to simulate that reasoning in a robot.

At the end of the five-year project, the scientists must present a demonstration of a robot making a moral decision. One example would be a robot medic that has been ordered to deliver emergency supplies to a hospital in order to save lives. On the way, it meets a soldier who has been badly injured. Should the robot abort the mission and help the soldier?

For Scheutz’s project, the decision the robot makes matters less than the fact that it can make a moral decision and give a coherent reason why — weighing relevant factors, coming to a decision, and explaining that decision after the fact. “The robots we are seeing out there are getting more and more complex, more and more sophisticated, and more and more autonomous,” he says. “It’s very important for us to get started on it. We definitely don’t want a future society where these robots are not sensitive to these moral conflicts.”

Scheutz’s approach isn’t the only one. Ron Arkin, a well-known ethicist at Georgia Institute of Technology who has also worked with the military, wrote what is arguably the first moral system for robots. His “ethical governor,” a set of Asimov-like rules that intervene whenever the robot’s behavior threatens to stray outside certain constraints, was designed to keep weaponized robots in check.
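Arkin’s actual governor is far more elaborate, but the core idea of a rule-based filter that vetoes actions violating hard constraints can be sketched in a few lines. This is a purely illustrative toy, assuming hypothetical rules and data fields; none of the names below come from Arkin’s system.

```python
# Toy sketch of an "ethical governor"-style constraint filter: a layer
# between the planner and the actuators that vetoes any proposed action
# violating a hard constraint. All rules and fields here are hypothetical,
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    targets_human: bool
    target_confirmed_hostile: bool
    collateral_risk: float  # estimated probability of harming bystanders

def governor_allows(action: Action) -> bool:
    """Return True only if the action violates no hard constraint."""
    if action.targets_human and not action.target_confirmed_hostile:
        return False  # never engage an unconfirmed human target
    if action.collateral_risk > 0.1:
        return False  # veto actions with high estimated bystander risk
    return True

# Usage: the planner proposes, the governor disposes.
proposed = Action("engage", targets_human=True,
                  target_confirmed_hostile=False, collateral_risk=0.0)
print(governor_allows(proposed))  # False: constraint violated
```

The point of the design, as the article describes it, is that the constraints sit outside the robot’s normal decision loop and intervene only when behavior threatens to stray outside them.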


For the ONR grant, Arkin and his team proposed a new approach. Instead of using a rule-based system like the ethical governor or a “folk psychology” approach like Scheutz’s, Arkin’s team wants to study moral development in infants. Those lessons would be integrated into the Soar architecture, a popular cognitive system for robots that employs both problem-solving and overarching goals. Having lost out on the grant, Arkin still hopes to pursue parts of the proposal. Unfortunately, there isn’t much funding available for robot morality.

The hope is that eventually robots will be able to perform more moral calculations than a human ever could, and therefore make better choices. A human driver doesn’t have time to calculate potential harm to humans in a split-second crash, for example.

There is another major challenge to overcome first, however. To make those calculations, robots will have to gather a great deal of information from the environment, such as how many humans are present and what role each of them plays in the situation. Yet today’s robots still have limited perception. It will be difficult to design a robot that can tell allied soldiers from enemies on the battlefield, for example, or immediately assess a disaster victim’s physical and mental condition.

