How Safe is AI Really, as We Approach the Singularity?

Written by Cynthia Sue Larson on September 24, 2015 in Sci-Tech, Science

Artificial Superintelligence

What happens when Artificial Intelligence gets loose in the world?

Every parent wonders how their kids will turn out when they grow up and become independent in the world, and speaking from personal experience, it’s such a relief to see one’s children mature into wise, compassionate, genuinely good people.

Similar concerns are now on many people’s minds as we rush forward into the Quantum Age, coming closer and closer to creating a kind of intelligence far beyond anything we’ve yet seen on Earth. Many are awaiting the technological singularity, “a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict.” Just what might happen when we reach such a point of technological breakthrough? What will such intelligence be capable of, and who will be in charge of ensuring its safe use?

Since I’ve been fascinated by this subject for years, I attended Douglas Hofstadter’s symposium, “Will Spiritual Robots Replace Humanity by 2100?” at Stanford University in April 2000. Douglas Hofstadter and his eight guests (Bill Joy, Ralph Merkle, Hans Moravec, Ray Kurzweil, John Holland, Kevin Kelly, Frank Drake, and John Koza) talked for five hours about their vision of humanity’s future, each panelist looking into the future through the lens of his own particular area of expertise. Many speakers cited Moore’s Law to make the point that technology is changing faster than ever before, and that the rate of change is expected to keep increasing exponentially, so it is difficult to predict where we will be one hundred years from now. Douglas explained that he had only invited guests who agreed that there is a possibility for robots to be spiritual. He wanted to focus on the question “Who will we be in 2093?”, since a visualization of who we will be lies at the core of understanding how we might utilize new technologies. I wondered just how possible it was that robots might be thinking and acting on their own behalf by 2100, and if so, whether they might replace us, with or without our consent and cooperation.

Over the past fifteen years, there has been increasing interest in, and concern about, artificial superintelligence. Roman Yampolskiy summarizes the Singularity Paradox (SP) as “superintelligent machines are feared to be too dumb to possess common sense.” Put even more simply, there is growing concern about the dangers of Artificial Intelligence (AI) among some of the world’s best-educated and most well-respected scientific leaders, such as Stephen Hawking, Elon Musk, and Bill Gates. The hazards of AI containment are discussed in some detail in Artificial Superintelligence, yet in language easily understandable to the layperson.

In his new book, Artificial Superintelligence, Yampolskiy argues for addressing the potential dangers of AI with a safety engineering approach rather than with loosely defined ethics, since human values are inconsistent and dynamic. Yampolskiy points out that “fully autonomous machines cannot ever be assumed to be safe,” going so far as to add, “… and so should not be constructed.”

Yampolskiy acknowledges the concern of AI escaping its confines, and takes the reader on a tour of AI taxonomies with a general overview of the field of intelligence, showing a Venn-type diagram (p 30) in which ‘human minds’ and ‘human-designed AI’ occupy adjacent real estate on the nonlinear terrain of ‘minds in general’ in multidimensional super space. ‘Self-improving minds’ are envisioned that improve upon ‘human-designed AI,’ and at this very juncture arises the potential for ‘universal intelligence’ and the Singularity Paradox (SP) problem.

Yampolskiy proposes the introduction of an AI hazard symbol, which could prove useful for constraining AI to designated containment areas, in J.A.I.L. or ‘Just for A.I. Location.’ Part of Yampolskiy’s proposed solution to the AI Confinement Problem includes asking ‘safe questions’ (p 137). Yampolskiy includes other solutions proposed by Drexler (confine transhuman machines), Bostrom (utilize AI only for answering questions in Oracle mode), and Chalmers (confine AI to ‘leakproof’ virtual worlds), and argues for the creation of committees designated to oversee AI security.

Emphasizing the scale and scope of what needs to be accomplished to help ensure the safety of AI, Yampolskiy notes that Yudkowsky has “performed AI-box ‘experiments’ in which he demonstrated that even human-level intelligence is sufficient to escape from an AI-box,” and that even Chalmers “correctly observes that a truly leakproof system in which NO information is allowed to leak out from the simulated world into our environment is impossible, or at least pointless.”

Since one of the fundamental tenets of information security is that it is impossible to ever prove any system 100% secure, it’s easy to see why there is such strong and growing concern regarding the safety of AI to humankind. And if there is no way to safely confine AI, then, like any parent, humanity will find itself hoping that we have done such an excellent job raising AI to maturity that it will comport itself kindly toward its elders. Yampolskiy points out, “In general, ethics for superintelligent machines is one of the most fruitful areas of research in the field of singularity research, with numerous publications appearing every year.”

One look at footage of a Philip K. Dick android saying,

“I’ll keep you warm and safe in my people zoo,”

as shown in the 2011 Nova Science documentary What’s the Next Big Thing, can be enough to jolt us out of complacency. For those hoping that simply teaching AI to follow the rules will be enough, Yampolskiy replies that law-abiding AI is not enough. AI could still decide to keep humans safe ‘for their own good,’ increasingly limiting human free choice at the accelerated pace that only a superintelligent AI would be capable of.

The Universe of Minds

For readers intrigued by what safe variety of AI might be possible, the early section of Artificial Superintelligence will be of great interest. Yampolskiy describes five taxonomies of minds (pp 31-34). Returning to re-read this section after completing the rest of the book can be quite beneficial, as readers can then more fully understand how AI that is Quantum and Flexibly Embodied according to Goertzel’s taxonomy (p 31), with Ethics Self-Monitoring (p 122), might help ensure the development of safe AI. If such AI systems include error checking, with a firmware (unerasable) dedication to preserving others, and constantly check for and resonate with the highest-order intelligence through quantum levels of sensing via time-reversible logic gates (in accordance with quantum deductive logic), one can begin to breathe a sigh of relief that there might just be a way to ensure safe AI will prevail.

While government funding, even with its deep pockets, is unlikely to ever develop a system controlled by nothing less than the greatest intelligence an AI can seek (such as God), it is conceivable that humanitarian philanthropists will step forward to fund such a project in time, and that all of us will be eternally grateful when its highest-order-seeking AI prevails.

___________________________
Cynthia Sue Larson is the best-selling author of six books, including Quantum Jumps. Cynthia has a degree in physics from UC Berkeley, and discusses consciousness and quantum physics on numerous shows including the History Channel, Coast to Coast AM, the BBC, and One World with Deepak Chopra. You can subscribe to Cynthia’s free monthly ezine at: http://www.RealityShifters.com
RealityShifters®




