By David Nield | ScienceAlert
At first, the news that software engineers are teaching robots to disobey their human masters does sound slightly troubling: should we really allow the artificial intelligence systems of the future to say no to us? But once you think it through, you can see why such a feature might actually end up saving your life.
Consider a robot working on a car production line: most of the time, it would simply follow the instructions it's been given, but if a human should get in harm's way, the machine needs to be clever enough to stop what it's doing. It needs to know to override its default programming to put the human at less risk. It's this kind of functionality that a team from the Human-Robot Interaction Lab at Tufts University is trying to introduce.
The team's work is based on the same 'felicity conditions' that our human brains apply whenever we're asked to do something. Under these conditions, we subconsciously run through a number of considerations before we perform an action: do I know how to do this? Can I physically do it, and do it right now? Am I obligated based on my social role to do it? And finally, does it violate any normative principle to do it? If robots can be conditioned to ask these same questions, they'll be able to adapt to unexpected circumstances as they occur.
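To make the idea concrete, the four felicity conditions can be thought of as a chain of boolean checks a robot runs before acting. The sketch below is purely illustrative: the class names, fields, and the simple risk threshold are assumptions for this example, not part of the Tufts lab's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Command:
    action: str
    risk: float = 0.0  # estimated risk to humans or the robot, from 0 to 1


@dataclass
class Robot:
    skills: set = field(default_factory=set)
    operational: bool = True
    authorized_operators: set = field(default_factory=set)
    risk_threshold: float = 0.5  # illustrative cutoff for the "normative" check

    def should_comply(self, cmd: Command, operator: str) -> bool:
        """Run the four felicity-condition checks before acting on a command."""
        return (
            cmd.action in self.skills                   # 1. Do I know how to do this?
            and self.operational                        # 2. Can I physically do it right now?
            and operator in self.authorized_operators   # 3. Does this person have authority over me?
            and cmd.risk <= self.risk_threshold         # 4. Does it violate a normative principle?
        )


robot = Robot(skills={"walk_forward"}, authorized_operators={"alice"})
print(robot.should_comply(Command("walk_forward", risk=0.1), "alice"))  # complies
print(robot.should_comply(Command("walk_forward", risk=0.9), "alice"))  # refuses: too risky
```

If any single check fails, the robot declines; the interesting research problem is estimating quantities like `risk` from perception rather than hard-coding them, as done here.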
It's the last two questions that are the most important to the process. A robot needs to decide whether the person giving it instructions has the authority to do so, and it needs to work out whether the subsequent actions are dangerous to itself or others. It's not an easy concept to put into practice, as anyone who has watched 2001: A Space Odyssey will know (if you haven't, look up the "Open the pod bay doors" scene).
As one of the team's demo videos shows, the computer scientists are experimenting with user-robot dialogues that allow for some give and take. That means the robot can provide reasons for why it won't do something (in this case, it says it will fall off a table if it walks forward), while the operator can offer extra reasons for why it should (the robot will be caught once it reaches the edge).
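This give-and-take can be modeled as the robot re-running its risk check after each new piece of information from the operator. The function below is a hypothetical sketch of that loop; the names, the fixed threshold, and the idea of modeling an assurance ("I will catch you") as a multiplicative risk discount are all assumptions made for illustration, not the dialogue system the Tufts researchers built.

```python
def respond(risk: float, threshold: float = 0.5, assurances: tuple = ()) -> str:
    """Return the robot's reply given its risk estimate and any operator assurances.

    Each assurance is a discount factor in (0, 1] that reduces the assessed
    risk, e.g. a promise to catch the robot before it falls.
    """
    for discount in assurances:
        risk *= discount
    if risk > threshold:
        return "Sorry, I cannot do that: it is unsafe."
    return "OK."


print(respond(0.9))                    # refuses: walking forward means falling off the table
print(respond(0.9, assurances=(0.2,)))  # operator promises to catch it, so it complies
```

The key design point mirrored from the demo is that a refusal is not final: it comes with a stated reason, and the operator can change the robot's mind by addressing that specific reason rather than simply repeating the order.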