Scientists Gather to Discuss AI Doomsday Scenarios

Posted on March 3, 2017 in Internet, Media & Arts

(c) Chad Baker/Getty Images

By RT News

Artificial intelligence has the capability to transform the world – but not necessarily for the better. A group of scientists gathered to discuss doomsday scenarios, addressing the possibility that AI could become a serious threat.

The event, ‘Great Debate: The Future of Artificial Intelligence – Who's in Control?', took place at Arizona State University (ASU) over the weekend.

“Like any new technology, artificial intelligence holds great promise to help humans shape their future, and it also holds great danger in that it could eventually lead to the rise of machines over humanity, according to some futurists. So which course will it be for AI and what can be done now to help shape its trajectory?” ASU wrote in a press release.

The Saturday gathering included a panel which consisted of Eric Horvitz, managing director of Microsoft's Redmond Lab, Skype co-founder Jaan Tallinn, and ASU physicist Lawrence Krauss. It was partly funded by Tallinn and Tesla's Elon Musk, according to Bloomberg.

The event included ‘doomsday games' that organized around 40 scientists, cybersecurity experts, and policy experts into teams of attackers and defenders, the news outlet reported.

Participants were asked to submit entries for possible worst-case scenarios caused by AI. Scenarios had to be realistic, grounded in current or plausibly near-term technologies, and confined to what might feasibly happen five to 25 years in the future.

Scenarios ranged from stock market manipulation to global warfare. Others involved using the technology to sway elections, or tricking a self-driving car into reading a “stop” sign as a “yield” sign.

Those with “winning” doomsday scenarios were asked to help lead panels on countering those situations.

Horvitz said it was necessary to “think through possible outcomes in more detail than we have before and think about how we'd deal with them,” noting that there are “rough edges and potential downsides” to AI.

While some of the proposed solutions from the ‘defenders’ team seemed viable, others were apparently lacking, according to John Launchbury, who directs one of the offices at the US Defense Advanced Research Projects Agency (DARPA).

One failed response involved combating a cyber weapon designed to conceal itself and evade all attempts to dismantle it.

Despite the somewhat unnerving content of the event, Krauss said the purpose was “not to generate fear for the future because AI can be a marvelous boon for humankind…but fortune favors the prepared mind, and looking realistically at where AI is now and where it might go is part of this…”

He added that even situations which we may now fear as “cataclysmic” may actually “turn out to be just fine.”

Launchbury said he hopes the presence of policy figures among the participants will spur concrete steps such as agreements on rules of engagement for cyberwar, automated weapons, and robotic troops.

The gathering came just four months after acclaimed physicist Stephen Hawking warned that robots could become the worst thing ever to happen to humanity, stating that they could develop “powerful autonomous weapons” or new methods to “oppress the many.”

Hawking, along with Musk and Apple co-founder Steve Wozniak, released an open letter in 2015 warning that AI – weaponized AI in particular – would be a huge mistake.

“Artificial Intelligence (AI) technology has reached a point where the deployment of [autonomous] systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” they wrote at the time.



