
11:11 and Artificial Intelligence – Is there a Connection?

Think you’ve heard it all when it comes to why people are seeing 11:11 and other repeating numbers? Think again!

For many, it all started with the 11’s (i.e. 11:11 on the clock), and it has now burgeoned well beyond the pair of ones to include a whole range of repeating digits (22, 333, 44, and so on).

So what’s going on?

I recently spoke about my own unrelenting journey with double numbers, which was kicked up a notch back in late September, when I appeared on Jimmy Church’s Fade to Black radio program.

Though I certainly can’t say what is going on with numbers appearing more prominently to more and more people right now, I can say for sure that something big is happening.

Could that something big include a push to bring humanity into the age of A.I.?

In a 2007 paper published at Stanford University, computer scientist John McCarthy described artificial intelligence as: “…the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

And that’s where it can get a little (no, a lot) dicey!

Granted, there are many proponents of A.I. espousing the merits of graduating mere mortals to a new level of super human. But equally vocal are those concerned about stripping humanity of the “organic” and replacing it with a synthetic counterpart.

Further, some of those very same proponents, many of whom are part of the mainstream scientific “elite,” contend that we may already be living in an artificially intelligent simulation – a cosmic internet of sorts.

One in this camp is NASA Jet Propulsion Laboratory scientist Rich Terrile, who contends that our creator may just be a ‘cosmic computer programmer.’

Terrile was quoted in a 2012 article as saying, “One has to think what are the requirements for God? God is an inter-dimensional being connected with everything in the Universe, a creator that is responsible for the Universe and in some way can change the laws of physics, if he wanted to. I think those are good requirements for what God ought to be.”

He went on to explain that this is the same as programmers creating simulations.

Put these two ideas together – one saying we need to be living in a more A.I.-focused reality, the other saying, heck, we already are – and where does this all lead?

Conspiracy researcher David Icke was also a recent guest on Church’s Fade to Black, and during his discourse on the reality of A.I., he weighed in on what he called “The Double Bluff.”

“There is of course this agenda of artificial intelligence and connecting people to technology…We’ve got to watch the double-bluff. Where they say, ‘Well yeah we do live in a simulation. Let’s just make the best of it.’ We have to be aware of that,” Icke commented.

So with this brief summary now in our back pocket, what does seeing repeating numbers these days have to do with it all?

Let’s look at this equation…

If our universe is in actuality a simulation – a cosmic computer cranking out numerical code to create reality – and the veil that’s masked our understanding of reality is now disappearing, then we are now seeing what reality is indeed really made of…NUMBERS!

It would not do this justice to present it as an overly simplified hypothesis, and yet, looking at it as a syllogism, we may just have gotten a little bit closer to cracking the code on why so many people are seeing double numbers more often – not the least of which is 11:11!

Let’s muse about this in greater detail in this latest episode of Conscious Commentary!

Get key links from this episode HERE.

Alexis Brooks is the #1 best-selling author of Conscious Musings, writer/editor for CLN and host of the award-winning show Higher Journeys with Alexis Brooks. Alexis brings over 30 years of broadcast media experience to CLN. For over half of that time, Alexis has dedicated her work to the medium of alternative journalism, having researched and reported on the many aspects and angles of metaphysics, spirituality and new thought concepts.

This article and its accompanying media was originally created and produced by Higher Journeys in association with Conscious Life News and is published here under a Creative Commons license with attribution to Alexis Brooks, HigherJourneys.com and ConsciousLifeNews.com. It may be re-posted freely with proper attribution, author bio, and this Copyright/Creative Commons statement.

 




FOM5: The New AI Scare?

The Foundations of Mind V (FOM5) “The New AI Scare?” conference was hosted by the CIIS Center for Consciousness Studies at the California Institute of Integral Studies (CIIS) in San Francisco on Nov 3-4, 2017. It featured presentations by Henry Stapp, Fred Alan Wolf, Seán Ó Nualláin, Cynthia Sue Larson, Stanley Klein, and Beverly Rubik. This fifth Foundations of Mind conference was scheduled to coincide with and celebrate the release of Henry Stapp’s new book, “Quantum Theory and Free Will.”

People registered through Foundations of Mind (FOM) participate in an ongoing series of conversational threads in areas related to consciousness, quantum interpretations, neuroscience, and higher education.

Aamod Shanker

Quantum Mind

Aamod Shanker presented ideas from traditions of eastern mysticism – particularly those describing vibrations (spanda) from Kashmiri Shaivism, and the yogic ideologies of Patanjali – together with principles from wave and quantum mechanics, electromagnetics, symmetry, structure, and logic. There was a great deal of spirited conversation about this topic, including discussion of the many words for consciousness in the east.

Kiril Popov

Reality, Truth, and Computation at the Boundary

Kiril Popov talked about the importance of boundary conditions, and design principles for the mind. There is a requirement that intelligent beings predict things before they happen, which requires memory. Boundary interfaces provide a kind of building block, with access to fields becoming possible via boundary conditions.

Brian Swimme

Mind and World

Session chair Brian Swimme discussed cosmogenetic consciousness and what that entails. He encouraged conference participants to experience a visceral sense of wonder with respect to speciation events that some scholars speculate are based not just on genetic mutations, but on conscious intention and activity as well. Viewing evolutionary developments through this lens, we can recognize important distinctions between the evolution of the bison and the horse, which evolved very differently from a common genetic ancestor – by attending to different streams of attention and intention. We can hypothesize that what gets brought forth through evolution is what it’s all about.

Menas Kafatos

Menas Kafatos Commentary on Quantum Theory and Free Will

Menas Kafatos presented a summary of important points from the orthodox interpretation of quantum mechanics as described in numerous publications by Henry P. Stapp and summarized in his new book, “Quantum Theory and Free Will: How Mental Intentions Translate into Bodily Actions.” Nature has values, including life–so we might ask how our values express themselves in a physical universe. Our universe is quantum on every level, although it appears classical, and the observer role is central in quantum physics.

Henry P. Stapp and Seán Ó Nualláin

Syamala Hari on Voluntary Action, Conscious Will and Readiness Potential

Syamala Hari discussed neural correlates of consciousness, neural models, and ways to interpret quantum mechanics such that intention does the activating.

 

Henry Stapp and Cynthia Sue Larson

If Artificial Intelligence Asks Questions, Will Nature Answer?

Cynthia Sue Larson considered how Henry Stapp’s orthodox interpretation of quantum mechanics suggests that when a question is asked, Nature answers – and then pursued this line of thinking to contemplate what happens if Artificial General Intelligence (AGI) asks a question. The impact of such a dialogue between AGI and Nature was explored, with consideration of humanity’s optimal role.

Stan Klein

Stan Klein on New Approaches to the Measurement Problem

Stan Klein provided an introductory overview of quantum electrodynamics as necessary foundational groundwork prior to reviewing the importance of recognizing the selection problem in quantum physics. When we consider a movable cut, we may well ask, “Who is the observer?”

Tania Re

Tania Re discussed research findings from the field of entheogenic healing, along with supporting ideas from quantum physics, indicating there is growing evidence to recommend considering psychotropic substances for therapeutic use.

Seán Ó Nualláin

Reterritorialization and Mental Health

Foundations of Mind founder Seán Ó Nualláin described the issues facing Ireland based on the background presented in his book, “Ireland, A Colony Once Again.” Since the 1990s, there have been disturbing trends, including encroachment of the state, increasing suicide rates among the Irish populace, and a kind of illegal status quo – resulting in a Good Friday agreement that brought peace, but also left Ireland a state with no land.

Phillip Shinnick

Phillip Shinnick discusses nature’s influence and mind training in QiGong

Phillip Shinnick described some of the research he has done to address difficulties in inorganic and organic measurement of QiGong energy. Mind and Qi appear to be separate, and Qi does not need mind activity to ‘do its own thing.’ Man cannot govern Dao Yin (nature); rather, nature is involved, and teaches us. Practicing QiGong produces measurable energetic effects, and changes the way we observe reality.

Wolfgang Baer


Wolfgang Baer presented a talk about “why I’m not afraid of A.I.,” introducing his Cognitive Action Theory, in which activity is at the center and action does the activity – rather than emphasizing the roles of ‘observers’ and ‘things.’ From this perspective, we feel we are together when we are moving together, and experience is explained by process. From this view, each of us is an event that contains time. We thus live in a world of interacting action cycles – a multiverse of persons.

Vipul Arora


Vipul Arora described how observations are essential building blocks of the world. We can quantify experiences in time according to predictable relationships in kinematics. We notice primary properties, or aspects of experience, which are different from emergent properties – and in so doing, we might well ask whether we can distinguish between different light sources (tungsten, mercury, sodium lamps). Speech recognition started with higher emergent properties, but those results are limited, and the field is moving toward lower emergent properties. We see that limitations of detectors can undermine the importance of primary properties.

Fred Alan Wolf


Fred Alan Wolf discussed self-referential consciousness, quantum mechanics, and Gödel numbers to argue that minds can do what automatons cannot do, by transcending rules. There is something about ‘Gödelization’ that shows it is an unalgorithmic procedure, with measurements that are inherently unalgorithmic. Put another way, we can’t consistently mathematize quantum wave function collapse.
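
Since Gödel numbering may be unfamiliar, here is a minimal, self-contained Python sketch of the classical construction Wolf was drawing on (my own illustration, not code presented at the conference): each symbol in a formal string gets a positive integer code, the i-th prime is raised to the i-th code, and unique factorization guarantees that the single resulting integer can be decoded back into the original string. The encoding itself is perfectly algorithmic; the self-reference arguments Wolf invoked concern what formal systems can prove about their own encoded formulas.

```python
def nth_prime(k):
    """Return the k-th prime (1-indexed) by trial division -- fine for small k."""
    count, n = 0, 1
    while count < k:
        n += 1
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return n

def godel_number(symbols, alphabet):
    """Encode a symbol sequence as one integer: the i-th prime is raised
    to the (1-based) code of the i-th symbol, and the powers are multiplied."""
    code = {s: i + 1 for i, s in enumerate(alphabet)}
    n = 1
    for i, s in enumerate(symbols):
        n *= nth_prime(i + 1) ** code[s]
    return n

def godel_decode(n, alphabet):
    """Invert the encoding by reading off prime exponents (unique factorization)."""
    out, i = [], 1
    while n > 1:
        p, e = nth_prime(i), 0
        while n % p == 0:
            n, e = n // p, e + 1
        out.append(alphabet[e - 1])
        i += 1
    return out

alphabet = ["0", "S", "="]
formula = list("S0=S0")                # a tiny formal string
g = godel_number(formula, alphabet)    # -> 808500, one integer for the whole string
assert godel_decode(g, alphabet) == formula
```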

 

Stan McDaniel


Stan McDaniel talked about the philosophy of continuity, time, and opposition to the dominant paradigm of mechanistic reductionism, physical time, and neo-Darwinism. Stan pointed out that the word “memory” is used for two things: the act of remembering, and bits of data stored somewhere. This leads us to consider whether a computer can look at its own memory, whereas humans are involved in a state of functional reciprocity with nature and the world.

Beverly Rubik and Harry Jabs


Beverly Rubik talked about ways Artificial Intelligence can automate obtaining human health information from Bio-Well finger scans, and then potentially also provide specific balancing frequencies that have been shown effective in reducing stress and improving health. Harry Jabs described ways that A.I. might emulate humans, though robots lack emotions and will also lack a human biofield.

Karla Galdamez


Karla Galdamez described her study of intention at a distance as a source of information transfer and wave function collapse in a recent experiment. This particular experiment involved a Zen meditator in an electromagnetically shielded room and a remote helper, connected via the internet.

 

 

 

Additional photos and news announcements from the Foundations of Mind V conference can be viewed at the Foundations of Mind Facebook page.

___________________________
Cynthia Sue Larson is the best-selling author of six books, including Quantum Jumps. Cynthia has a degree in Physics from UC Berkeley, and discusses consciousness and quantum physics as the host of her radio show Living the Quantum Dream, and as a guest on numerous shows including: the History Channel, Coast to Coast AM, the BBC, Gaia TV, and One World with Deepak Chopra. You can subscribe to Cynthia’s free monthly ezine at: https://www.RealityShifters.com
RealityShifters®



**EXCLUSIVE** – Richard Dolan on Artificial Intelligence: “This is a RUNAWAY TRAIN!”

Alexis sits down with UFO researcher and alternative historian Richard Dolan to discuss the imminent dangers of A.I. and the future of our world.

Richard Dolan doesn’t pull any punches!

Whether it’s discussing (exposing) the long-standing government coverup of the ET/UFO reality, or the history of false flag events and other such global machinations, he’s thorough and well-read. Cautious but bold. This latest interview, which I conducted with him on location at the 2017 Contact in the Desert event in Joshua Tree, California, was no different.

But this time, we decided to tackle the subject of artificial intelligence (A.I.), and what if any part the ET/UFO component has to play in this push toward merging man with machine.

When I first approached Richard, who is a frequent guest on our show, to ask whether he’d be up for a chat on the A.I. phenomenon, he didn’t hesitate for a moment. “I’m all over it!” he said.

And though the heat was at a max in the desert of Joshua Tree and the atmosphere was buzzing with activity from all angles, that didn’t deter us from a juiced up, ramped up, and passionate discourse!

Dolan’s been looking at the idea of artificial intelligence as an X-factor in the future of our global society for some time. In 2002, he authored an article simply entitled “What Are They,” in which he broached the question of the origin of ET/UFO intelligence. Are they organic, synthetic, a product of A.I.? Here’s part of what he had to say in that article…

UFOs are seen as the product of an advanced intelligence, either biological in nature, or else something paranormal, possibly beyond our physics. I have come to a different conclusion. I concede that my position is provisional, and may change in time. But the more I reflect on it, the more persuasive I find it. It is that the UFO phenomenon is the product of an artificial intelligence. – Richard Dolan

This postulate on the part of Dolan made for quite a springboard in our own discussion. I wanted to know whether or not these non-human intelligences (NHIs) are in fact behind (at least in part) this push for artificial intelligence on our planet.

He danced around this question for a while. His thoughts were, well, interesting. But even more so were his assessments about what he refers to as the inevitable “runaway train”: the role that technology is playing, and will increasingly play, in the very fabric of our lives, and why we need to be concerned!

I count this as one of the most important interviews I’ve done to date, because it will eventually affect (infect?) all of us to some degree or another.

What are we going to do about it? Can we do anything about it? And how soon will A.I. completely control our lives?

Get relevant links from this episode and download the audio on-demand.


If you haven’t already, be sure to subscribe to our show on iTunes!

Alexis Brooks is the #1 best-selling author of Conscious Musings, writer/editor for CLN and host of the award-winning show Higher Journeys with Alexis Brooks. Alexis brings over 30 years of broadcast media experience to CLN. For over half of that time, Alexis has dedicated her work to the medium of alternative journalism, having researched and reported on the many aspects and angles of metaphysics, spirituality and new thought concepts.

This article and its accompanying media was originally created and produced by Higher Journeys in association with Conscious Life News and is published here under a Creative Commons license with attribution to Alexis Brooks, HigherJourneys.com and ConsciousLifeNews.com. It may be re-posted freely with proper attribution, author bio, and this Copyright/Creative Commons statement.

 




The United Nations Will Take On ‘Killer Robots’ in 2017

Are fears of AI turning into sinister killing machines, like Arnold Schwarzenegger’s character from the “Terminator” films, grounded in facts?
Credit: Warner Bros.

By Glenn McDonald | Live Science

Good news, fellow humans: The United Nations has decided to take on killer robots.

At the international Convention on Conventional Weapons in Geneva, 123 participating nations voted to initiate official discussions on the danger of lethal autonomous weapons systems. That’s the emerging designation for so-called “killer robots” — weapons controlled by artificial intelligence that can target and strike without human intervention.

The agreement is the latest development in a growing movement calling for a preemptive ban on weaponized A.I. and deadly autonomous weapons. Last year, a coalition of more than 1,000 scientists and industry leaders, including Elon Musk and representatives of Google and Microsoft, signed an official letter to the United Nations demanding action.

The UN decision is significant in that it calls for formal discussions on the issue in 2017. In high-level international deliberations, the move from “informal” to “formal” represents a real step forward, said Stephen Goose, arms director of Human Rights Watch and a co-founder of the Campaign to Stop Killer Robots.

“In essence, they decided to move from the talk shop phase to the action phase, where they are expected to produce a concrete outcome,” Goose said in an email exchange with Seeker.

RELATED: Killer Machines and Sex Robots: Unraveling the Ethics of A.I.

It’s widely acknowledged that military agencies around the world are already developing lethal autonomous weapons. In August, Chinese officials disclosed that the country is exploring the use of A.I. and automation in its next generation of cruise missiles.

“China’s plans for weapons and artificial intelligence may be terrifying, but no more terrifying than similar efforts by the U.S., Russia, Israel, and others,” Goose said. “The U.S. is farther along in this field than any other nation. Most advanced militaries are pursuing ever-greater autonomy in weapons. Killer robots would come in all sizes and shapes, including deadly miniaturized versions that could attack in huge swarms, and would operate from the air, from the ground, from the sea, and underwater.”

The core issue in regard to these weapons systems concerns human agency, Goose said.

“The key thing distinguishing a fully autonomous weapon from an ordinary conventional weapon, or even a semi-autonomous weapon like a drone, is that a human would no longer be deciding what or whom to target and when to pull the trigger,” he said.

“The weapon system itself, using artificial intelligence and sensors, would make those critical battlefield determinations. This would change the very nature of warfare, and not for the betterment of humankind.”

READ THE REST OF THIS ARTICLE…




The Future of Neurotechnology

David Eagleman and Cynthia Sue Larson


It’s not a matter of IF but WHEN neurotechnology will become reality in our lives. It’s now gearing up in two areas: human enhancement, and artificial intelligence (AI).

Neuro-tech may not be a common household word just yet, but it is definitely well on the way. In fact, now that most of us hold in our hands devices that allow us to access the internet, we are already starting to get a glimpse of how this merging of technology into the way we make choices, communicate, and remember important people and events in our lives will feel.

I attended an invigorating open discussion, “The Future of Neurotechnology: Human Intelligence + Artificial Intelligence,” led by neuroscientist David Eagleman and entrepreneur Bryan Johnson at my alma mater, UC Berkeley. The purpose of this talk was to discuss possible directions as we go forward to incorporate advances in neuroscience with those of Artificial Intelligence (AI), with awareness that there will be some degree of synergy between development of advances in human cognitive enhancement and AI.

At this time, when venture capitalists are understandably wary about investing in businesses with unproven track records operating on the “bleeding edge,” Bryan Johnson explained that he invested one hundred million dollars of his own money in his company, Kernel, a human intelligence (HI) company developing the world’s first neuroprosthesis for cognition. Working together with Ted Berger at USC, Johnson is exploring how new technologies might help us improve memory through neuromodulation. Johnson and his team seek to answer the question, “What if we could read and write neural memory in the hippocampus?”

In 2013, the NeuroPace neurostimulator proved itself to be a commercial success in quelling epileptic seizures. Future advancements may rely upon such new technologies as neural dust and nanobots.

What does all this have to do with you? In much the same way that transportation is being revolutionized with the coming of robot cars and self-driving vehicles, neurotechnology is poised to transform Human Intelligence (HI) and Artificial Intelligence (AI), while reducing disease, dysfunction and degradation – and enhancing human cognitive functioning.

Neurotechnology Ethical Considerations

Bryan Johnson noted that several people were raising questions and voicing concerns about ethical considerations of human cognitive enhancement–so he asked for a show of hands to indicate how many people felt ethics should be given high priority with regard to neurotechnological advances. Many people (including me) raised our hands, confirming Bryan Johnson’s hunch.

Johnson took note of this, and pointed out that however each of us might feel about the ethical questions involved in applying neurotechnology such as neural dust – designed to non-invasively enter a human’s peripheral nervous system and sit on the surface of the neocortex – there will be countries in the world, such as China, that welcome such experimental research with open arms.

The subject of the singularity came up, as one gentleman shared the observation that, based on simulations of what happens when AI develops, it appears clear that we will need some kind of human enhancement in order to give humans a fighting chance. A variety of simulations of how AI will interact with humanity show that unless everything goes exactly right, human survival after the creation, expansion, development, and dominance of AI is not a sure thing. We would thus do well to help ensure a more level playing field between humans and AI by boosting Human Intelligence with neurotechnology.

Participants in the discussion voiced the opinion that convergence between machine learning and human cognitive enhancement will be helpful now. One woman in the audience expressed her profound heartfelt desire that wisdom be prioritized in neurotechnological advances as being one of the most important priorities to keep in mind.

Envisioning New Neurotechnical Horizons

With regard to envisioning where neurotechnology may go in the next few decades, Johnson and Eagleman spoke mostly in generalities, rather than specifics. Intelligent neural dust, such as that developed at UC Berkeley’s Brain Machine Interface Systems Laboratory with sensors about the size of a grain of sand, is a form of implantable technology that can be placed in nerves or muscles to treat disorders such as epilepsy, to stimulate the immune system, and to reduce inflammation. Powered by and working with ultrasound, the tiny neural dust can go super-deep inside a body to take measurements and assist in stimulating nerves and muscles. Another arrival in the new field of electroceuticals will be nanobots, which will be even smaller than neural dust, and can automate tasks such as performing delicate surgical procedures, delivering exact drug dosages, and diagnosing disease; this past year, swarms of nanobots demonstrated promise in precisely targeting and treating cancer.

Job requirements may change once human intelligence and cognitive functioning are neurotechnologically enhanced. We already expect some of our technical professionals to receive additional training to become doctors and lawyers – and it’s conceivable that in the not-too-distant future, some professionals may also be expected to undergo neurotechnological enhancement as part of the requirements for the job.

A young man wearing a T-shirt emblazoned “Qualia Research Institute” asked, “What do we do if we find out we are at the local maxima of human cognitive efficiency? How might we be able to tweak it?” Johnson and Eagleman pointed out that we should be able to increase our communication input/output rate to a level far faster than the slow verbal speech being used during this discussion – since we can all think far more quickly than we can talk.

Fully aware of the irony, I took hand-written notes during this presentation and discussion, and wrote the draft of this article by hand with a pen on paper – clearly NOT the fastest way to do things! Yet I’ve seen research showing advantages of taking notes by hand, rather than typing on keyboards. I’ve found my ability to remember and more completely utilize information gets a huge boost when I work from hand-written notes. So while I agree with the inevitability of human enhancement with neurotechnology, I also envision a future in which “old ways” of knowing, communicating, and interacting with others continue to take place, and might even help us ensure that during the coming ascendance of AI, human intelligence secures its place, too.

Free Will and the Power to Forget

After the talk, I enjoyed a personal chat with David Eagleman. During their discussion, Eagleman and Johnson had been emphasizing the value of enhancing human intelligence with better memory – and I had a sense that while memory enhancement sounds like a great idea, there are likely some really good natural reasons that we humans so often forget. I pointed out the value of forgetting – in that forgetting can enable us to make quantum jumps to more optimal realities – and this is likely a big factor in the effectiveness of placebo effect healing.

I talked with Eagleman about how he and Johnson had discussed finding ways for neurotechnology to enhance cognitive functioning by reading and writing information to the hippocampus – pointing out that we’ll likely see that the hippocampus will grow when written to.

I voiced my support for putting human intelligence into the OpenAI project, to minimize and prevent attempts to control AI and HI by one or a few governments or corporations.

We ended our conversation discussing ‘free will,’ which David reminded me he does not believe in, per se, as he describes in his book, Incognito. I suggested he consider the work of Thomas Metzinger and Max Velmans on first-person and third-person levels of representational self-modeling and awareness. Systems missing only the few lines of code that would constantly remind them they are representational models bear more than a passing similarity to humans.

I’m inspired to see that David Eagleman’s Laboratory for Perception and Action at Stanford University seeks to understand how the brain constructs perception, how different brains do so differently, and how this matters for society – with special focus on four areas: time perception, sensory substitution, synesthesia, and neurolaw. After giving some thought to neurotechnology, it’s easy to see the growing significance of the emerging interdisciplinary field of neurolaw.

Join the Conversation

My personal bias involves a preference to explore strengthening my awareness of what consciousness is and how it operates, working with natural human abilities that have historically been neglected, ignored or forgotten as technology has advanced. Some of my bias may be due to my being what is called an “exceptional human experiencer,” since I am a near-death experiencer, I am a meditator, I am a lucid dreamer, I have had a kundalini awakening experience, and I was ‘born aware’ (meaning I remembered being conscious prior to being born). Exceptional human experiences can provide people with access to heightened abilities to do some of the things we might also hope to enhance through neurotechnology–and I see a study of neurotechnology as potentially providing us with greater insights into optimizing our natural human abilities.

I’d love to hear your comments, thoughts and feelings about the future of neurotechnology. This is a controversial topic that I hope you will contemplate and talk to people about, thus helping set the direction for how humanity continues to evolve with technology. Some people are understandably skeptical or concerned about neurotechnology, while others are excited about the possibilities, and still others don’t yet have strong feelings one way or the other. My gut feeling is that AI is coming, as is human cognitive enhancement. Humanity will do well to envision how we see ourselves in the future, and what we consider optimal in terms of working with neurotechnology. I tend to agree with Eagleman and Johnson that it’s not a matter of if, but when, this technology will arrive. Those of us who still don’t have cell phones can be hold-outs for a while (or in my case now, decades), yet all of us will eventually be affected in some way by these technologies.

___________________________

Cynthia Sue Larson is the best-selling author of six books, including Quantum Jumps. Cynthia has a degree in Physics from UC Berkeley, and discusses consciousness and quantum physics on numerous shows including the History Channel, Coast to Coast AM, the BBC and One World with Deepak Chopra, and on the Living the Quantum Dream show she hosts. You can subscribe to Cynthia’s free monthly ezine at: https://www.RealityShifters.com
RealityShifters®



Researchers on the Verge of Creating Artificial Intelligence/Human Hybrids

By Jake Anderson | Activist Post

There is a longstanding debate among artificial intelligence experts and futurists: When, not if, AI emerges on the scene, will it help humanity or destroy it? The scenario has played out through innumerable iterations in popular culture, the most popular being The Terminator series. Steven Spielberg, riffing on the film Stanley Kubrick was going to direct before his death, presented the counterpoint, espousing a benevolent vision of AI in A.I. Then there are more nuanced, ambiguous iterations, like the recent Ex Machina.

Related Article: Tech Giants’ Artificial Intelligence Monopoly Is Possibly the Most Dangerous in History

New advances in algorithmic artificial intelligence, deep learning software, automation, and nanotechnology have made it abundantly clear that Ray Kurzweil’s vision of the Singularity may also be a matter not of if, but when. In fact, responding to Kurzweil’s prediction of a cloud-based neocortex in the 2030s, entrepreneur Bryan Johnson of Braintree said, “Oh, I think it will happen before that.”

Johnson’s more recent aspirations involve merging artificial intelligence with humans, a pursuit many would argue is already occurring on a vast scale when it comes to our use of smartphone technology and search engines. Johnson thinks it will soon advance far beyond that.

Citing “neuroprosthetics” like cochlear implants, Johnson envisions BCI (brain-computer interface), a synergistic relationship between the central nervous system and external computing devices. Johnson’s newest theoretical prototype is something called a “neural lace,” which is a mesh that creates a wireless BCI inside the brain that releases certain chemicals as needed by the end user.

A brain-computer interface, in the context of advanced transhumanism and taken to its logical conclusion, leads to AI/human hybrids.

Transhumanist Zoltan Istvan, who actually just finished running for president, put it more bluntly in an email interview with the Anti-Media. “This idea that we would create an AI on Planet Earth smarter than human beings is asinine. There’s no reason to do that unless we want to slit our own throats,” Zoltan says. “But to use brain implants, neural devices, or EEG headsets to directly connect to a superior artificial intelligence—yes, that is something that I implicitly endorse. We need to become one with the intelligence we create; we need to remain an intrinsic part of it. We must become the machine by merging directly with it—and that’s what a direct interface between human brains and AI should be.”

Zoltan, like Hawking and many other thinkers, believes AI must be introduced to the Earth carefully:

When we flick on that ‘on’ switch of the first AI that will be superior to us, we must insist we go along with it for the ride—that we are sitting in the driver’s seat. We can do this. It will take a Manhattan-sized project to make sure we cross all our ‘t’s, but we can do it. We should insist the smartest of us tap directly into AI before it fully launches.

I see it like the sci-fi movie Contact, where the best contenders to meet an alien species compete for who is the most worthy to be the first one to do it—to represent the human race to another species. I think in the case of launching the first true AI smarter than human beings, we should form an international consortium that will pick the best 12 humans on Planet Earth, and via neural interface, merge them directly with that AI so we can know the best of the human race is with it—is hopefully leading it.

Related Article: Don’t Let Artificial Intelligence Take Over, Top Scientists Warn

Are humans irrevocably evolving toward a deeply entwined, existential relationship with artificial intelligence? Many of us could find out in our lifetimes. If Kurzweil, Johnson, Istvan, and more controversial transhumanists like Peter Thiel are correct, those who find out may be able to live indefinitely as AI-human hybrids…if such a life proves satisfying.

This article (Researchers on the Verge of Creating Artificial Intelligence/Human Hybrids) is free and open source. You have permission to republish this article under a Creative Commons license with attribution to Jake Anderson and theAntiMedia.org. Anti-Media Radio airs weeknights at 11pm Eastern/8pm Pacific. If you spot a typo, email edits@theantimedia.org.

Read more great articles at Activist Post.




Tech Giants’ Artificial Intelligence Monopoly Is Possibly the Most Dangerous in History


By Josie Wales | The Anti-Media

(ANTIMEDIA) In a race to corner the artificial intelligence market, the world’s largest tech companies are in a frenzy to buy out smaller AI companies at an alarming rate, heading toward what could potentially be the most dangerous monopoly in history. According to a report by CB Insights updated on October 7, close to 140 private artificial intelligence companies have been consolidated in the last five years.

Google, a subsidiary of Alphabet, is in first place with 11 acquisitions, the most notable being DeepMind Technologies, which Google purchased for $600 million in 2014. The British startup was just four years old at the time of the acquisition and has been used to create free apps with the National Health Service and conserve energy by controlling the air conditioning units at Google’s data centers. DeepMind made headlines in March 2016 when AlphaGo became the first computer program in history to beat a top-level Go player.

Related Article: Elon Musk Funds $1B Project To Prevent Artificial Intelligence From Destroying Mankind

Apple, Intel, and Twitter follow closely behind, with Samsung being the newest to jump into the fray after its acquisition of Viv Labs earlier this month.

While consolidation within any emerging market is to be expected, there are serious causes for concern.

Many of the startups being poached by tech giants are only a few years old and have not had a chance to develop before being swallowed by giant corporations with agendas of their own. This will inevitably stunt innovation and limit growth in the industry as startups will have to discard ideas in favor of the acquiring company’s vision.

Further, advancements in AI are occurring at a rapid pace with almost no public scrutiny or discussion with regard to ethics. Google’s ethics board is shrouded in secrecy, with both DeepMind and Google refusing to disclose any details about the members of the board or what is discussed. As Anti-Media reported last week, Microsoft, IBM, Facebook, Google, and Amazon announced the creation of the Partnership on Artificial Intelligence to Benefit People and Society. These companies all have one goal, and that is to make money. In order for people to want to purchase AI products when they become available, they first have to trust them.

Right now, not many do — and for good reason. Some of the industry’s biggest names have voiced their concerns over the potential dangers of AI, including Stephen Hawking, Elon Musk, and Bill Gates. As Hawking put it: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”

Related Article: Don’t Let Artificial Intelligence Take Over, Top Scientists Warn


This article (Tech Giants’ Artificial Intelligence Monopoly Possibly the Most Dangerous in History) is free and open source. You have permission to republish this article under a Creative Commons license with attribution to Josie Wales and theAntiMedia.org. Anti-Media Radio airs weeknights at 11 pm Eastern/8 pm Pacific. If you spot a typo, please email the error and name of the article to edits@theantimedia.org.

Read more great articles at The Anti-Media.




Self-Learning Robot Escapes Testing Ground and Goes Missing for 45 Minutes


By Jake Anderson | The Anti Media

(ANTIMEDIA) It seems like almost every day now we hear some strange story about a robot, from the Microsoft Twitterbot going full Nazi in 24 hours to Google’s AI digesting romance novels and regurgitating them as postmodern poetry. We are witnessing the overlapping pubescent evolutions of both algorithmic artificial intelligence and social media — and the result is a daily dose of discomfiting news.

It’s not limited to just algorithms, either. Actual robots have increasingly been in the news. Whether it’s a robot horse trekking across remote terrain, DARPA’s drone children being prepared for war, or a worker bot being abused in a factory, the age of automated minions is upon us.

For all of the stories, however, it’s somewhat rare that we hear about one of these robots escaping. But that’s exactly what happened in Perm, a city near the Urals in Russia, where an early self-learning version of the Promobot escaped its testing area and tied up nearby traffic.

According to Promobot co-founder Oleg Kivokurtsev, “The robot was learning automatic movement algorithms on the testing ground, [and] these functions will feature in the latest version of the Promobot.”


While some have questioned whether the “escape” was actually a PR stunt, the Promobot — which is, quite literally, a promotional robot that hosts and provides information — was missing for 45 minutes before its battery died.

“Our engineer drove onto the testing ground and forgot to close the gates. So the robot escaped and went on his little adventure,” Kivokurtsev added.

So there you go — your strange robot story for the day. At least I didn’t use the word Skynet.


This article (Self-Learning Robot Escapes Testing Ground and Goes Missing for 45 Minutes) is free and open source. You have permission to republish this article under a Creative Commons license with attribution to Jake Anderson and theAntiMedia.org. Anti-Media Radio airs weeknights at 11pm Eastern/8pm Pacific. If you spot a typo, email edits@theantimedia.org.

Read more great articles at The Anti Media.




Obama Administration Fears Artificial Intelligence and the Reason Is Morbidly Ironic


By Jake Anderson | The Anti Media

(ANTIMEDIA) Last week, the White House released a report chronicling the Obama administration’s concerns over Big Data and artificial intelligence. Many prominent thinkers and scientists have come out recently with warnings about the dangers of unchecked artificial intelligence. However, the A.I. the White House report refers to is not of the Terminator ilk — rather, Obama has concerns over algorithmic artificial intelligence operating without human oversight.

Related Article: 13 Things the Government Is Trying to Keep Secret From You

The report, “Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights,” catalogs the growing sphere of influence represented by Big Data in society, including employment, higher education, and criminal justice.

With regard to the growth of automation and algorithmic artificial intelligence, the report states:

“As data-driven services become increasingly ubiquitous, and as we come to depend on them more and more, we must address concerns about intentional or implicit biases that may emerge from both the data and the algorithms used as well as the impact they may have on the user and society. Questions of transparency arise when companies, institutions, and organizations use algorithmic systems and automated processes to inform decisions that affect our lives, such as whether or not we qualify for credit or employment opportunities, or which financial, employment and housing advertisements we see.”

The report also notes how algorithmic technology could both bolster and endanger the relationship between law enforcement and local communities:

“If feedback loops are not thoughtfully constructed, a predictive algorithmic system built in this manner could perpetuate policing practices that are not sufficiently attuned to community needs and potentially impede efforts to improve community trust and safety. For example, machine learning systems that take into account past arrests could indicate that certain communities require more policing and oversight, when in fact the communities may be changing for the better over time.”
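
To make the feedback loop the report describes concrete, here is a toy Python simulation (my own sketch, not anything from the White House report): patrols are allocated in proportion to cumulative recorded arrests, while new recorded arrests depend on where officers are posted rather than solely on underlying crime – so a district that is “changing for the better over time” keeps drawing heavy policing long after it improves.

```python
def simulate(rounds=20, patrols=100.0):
    """Toy model of the report's warning. District A's true crime rate
    falls steadily ('changing for the better'); district B's stays flat.
    Patrols follow cumulative recorded arrests, and new recorded arrests
    reflect where officers are posted, not just underlying crime."""
    cumulative = [50.0, 50.0]                  # recorded-arrest history
    for t in range(rounds):
        rate_a = max(0.1, 1.0 - 0.05 * t)      # district A keeps improving
        rate_b = 0.5                           # district B never changes
        total = sum(cumulative)
        patrol = [patrols * c / total for c in cumulative]
        cumulative[0] += patrol[0] * rate_a    # arrests ~ patrols x true rate
        cumulative[1] += patrol[1] * rate_b
    print(f"A's patrol share after {rounds} rounds: {patrol[0] / patrols:.0%} "
          f"(true crime rates by then: A={rate_a:.2f}, B={rate_b:.2f})")

simulate()  # A still draws roughly two thirds of patrols despite being safer
```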

The White House says it wants to develop a framework for addressing these concerns so that flawed algorithms do not become a socioeconomic problem. The examples cited include the potentiality of people being denied credit and housing due to inaccurate information. The fear is that automated technologies and algorithmic A.I. deployed without human oversight could lead to unfair treatment.

There is an undeniable irony in this position, given that the Obama administration has proudly outsourced many of its military strikes to unmanned drones and autonomous robots. Just in the last couple years, weaponized drones have made strikes in Afghanistan, Pakistan, Yemen and Somalia.

Related Article: Insider: A Secret Government Controls the Obama Administration

In its seminal Drone Papers, the Intercept reported nearly 90% of the people killed in these strikes were not the intended targets. Thousands of innocent civilians have died because of Obama’s autonomous drones, and his notorious kill list has come under heavy scrutiny for accidents and mistaken targets.

The connection between algorithmic artificial intelligence and drone strikes may appear loose at first glance. But the principle behind the recent Big Data paper is protecting citizens from out-of-control technology. It’s hard to reconcile this concern with the glib lip service the administration pays to the collateral damage stemming from flawed targeting systems.


This article (Obama Administration Fears Artificial Intelligence and the Reason Is Morbidly Ironic) is free and open source. You have permission to republish this article under a Creative Commons license with attribution to Jake Anderson and theAntiMedia.org. Anti-Media Radio airs weeknights at 11pm Eastern/8pm Pacific. Image credit: Global Panorama. If you spot a typo, email edits@theantimedia.org.

Read more great articles at The Anti Media.




How Industrial Revolution 4.0 will Prove Far More Divisive Than the Previous Three


By Vandita | We Are Anonymous

The “Fourth Industrial Revolution,” as described by Klaus Schwab, founder and executive chairman of the World Economic Forum, will fundamentally alter the way we live, work and relate to one another. Here’s how…

The world has undergone three industrial revolutions that caused a lot of disruption, but also dramatically changed our lives. The First Industrial Revolution used water and steam power to mechanize production. The Second Industrial Revolution used electric power to create mass production. The Third Industrial Revolution used electronics, digital and information technology to automate production.


What will happen after the new revolution?

The World Economic Forum has predicted that by 2020, over 7 million jobs will disappear, many of them white-collar and administrative positions – thanks to the Fourth Industrial Revolution. Carl Benedikt Frey and Michael Osborne of Oxford University recently analyzed over 700 different occupations to see how easily they could be computerized. They concluded that up to 35% of all jobs in the UK and 47% of all jobs in the US are at risk of being displaced by technology over the next 20 years.

Emerging technology breakthroughs – such as artificial intelligence, robotics, the Internet of Things, autonomous vehicles, 3-D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing – could yield greater inequality, economists Erik Brynjolfsson and Andrew McAfee point out.


In a 300-page report, analysts from Bank of America Merrill Lynch warned that a “robot revolution” will exacerbate social inequality, with robots performing manual jobs. They told The Guardian:

“We are facing a paradigm shift which will change the way we live and work. The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives. The trend is worrisome in markets like the US because many of the jobs created in recent years are low-paying, manual or services jobs which are generally considered ‘high risk’ for replacement. One major risk off the back of the take-up of robots and artificial intelligence is the potential for increasing labor polarization, particularly for low-paying jobs such as service occupations, and a hollowing-out of middle income manual labor jobs.”

Who stands to benefit from the Fourth Industrial Revolution?

According to a report by Swiss bank UBS, the wealthiest stand to gain more from the introduction of new technology than those in poorer sections of society. Emerging markets – notably in parts of Latin America – and developing countries like China and India will suffer the most. The report outlined a polarization in the labor force and greater income inequality, implying larger gains for those at the top of the income, skills and wealth spectrums.

“These individuals are likely to be best placed from a skills perspective to harness extreme automation and connectivity; they typically already have high savings rates and will benefit from holding more of the assets whose value will be boosted by the fourth industrial revolution.”

This article (How Industrial Revolution 4.0 will Prove far more Divisive) is free and open source. You have permission to republish this article under a Creative Commons license with attribution to the author and AnonHQ.com.

Read more great articles at We Are Anonymous.




Self-Driving Cars Are Programmed to Sacrifice: “Someone Is Going to Die”


By Mac Slavo | Activist Post

Self-driving cars are poised to take over U.S. roads and destroy American jobs … and they will also kill people, even if by accident.

Right now, their makers are in the process of convincing Congress that they can handle their own regulations – even as they continue working out the kinks.

The U.S. Senate subcommittee for Commerce, Science and Transportation heard testimony from Duke University roboticist Missy Cummings, who admitted that fatalities and accidents are inevitable as self-driving cars attempt to integrate with a busy and complex society.

The London Guardian reports:

The robot car revolution hit a speed bump on Tuesday as senators and tech experts sounded stern warnings about the potentially fatal risks of self-driving cars. “There is no question that someone is going to die in this technology,” said Duke University roboticist Missy Cummings in testimony before the US Senate committee on commerce, science and transportation. “The question is when and what can we do to minimize that.”

Automotive executives and lawmakers sniped at each other over whether universal standards were necessary for self-driving cars…

Related Article: ‘Vision’ of The Future: BMW Unveils Incredible “Self Driving” Concept Car

Senators Ed Markey and Richard Blumenthal, who have cosponsored legislation that proposes minimum testing standards for automated drivers, cautioned: “The credibility of this technology is exceedingly fragile if people can’t trust standards – not necessarily for you, but for all the other actors that may come into this space at this point.”

These “standards” reflect the programming that will make sometimes fatal choices in the mix of situations that may involve innocent bystanders and no-win situations.

In these cases, is there a “moral” gradient that computers and people can see eye-to-eye on?

If the self-driving car is designed to avoid children at all costs, does that mean it could be programmed to kill (or sacrifice) you if/when you are caught inside of a car headed for disaster, or on the opposite side of the road as the child? There are no clear answers.

The standards are already becoming morally complex. Google X’s Chris Urmson, the company’s director of self-driving cars, said the company was trying to work through some difficult problems. Where to turn – toward the child playing in the road or over the side of the overpass?

Google has come up with its own Laws of Robotics for cars: “We try to say, ‘Let’s try hardest to avoid vulnerable road users, and beyond that try hardest to avoid other vehicles, and then beyond that try to avoid things that don’t move in the world,’ and then to be transparent with the user that that’s the way it works,” Urmson said.
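
Urmson’s ordering is essentially a lexicographic priority rule, and a short sketch makes the structure concrete (an illustration of the idea only – not Google’s actual planner): each candidate maneuver is scored as a tuple of collisions by category, and tuples are compared category by category, so avoiding vulnerable road users always outranks avoiding vehicles, which always outranks avoiding things that don’t move.

```python
from typing import List, Tuple

# Hypothetical cost tuple, ordered by priority:
# (vulnerable road users hit, vehicles hit, static objects hit)
Candidate = Tuple[str, Tuple[int, int, int]]

def choose_maneuver(candidates: List[Candidate]) -> str:
    """Pick the maneuver with the lexicographically smallest cost tuple.
    Python compares tuples element by element, so the first category
    (people) dominates the second (vehicles), which dominates the third."""
    best_name, _ = min(candidates, key=lambda c: c[1])
    return best_name

options = [
    ("hold course", (1, 0, 0)),   # would endanger a cyclist
    ("swerve left", (0, 1, 0)),   # would clip an occupied car
    ("brake hard",  (0, 0, 1)),   # would hit a traffic cone
]
print(choose_maneuver(options))  # -> "brake hard"
```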

But the “morality” of the decision-making structure of the computer’s processes, and the inevitability of chaos for at least some individuals is only part of the story.

Autonomous vehicles will, ironically, also be quite vulnerable to hacking, as internet-connected devices in the car can be manipulated and used to take over the commands and data of virtually any of the newer “smart” cars on the road. The problem will only grow as self-driving cars become a bigger part of our lives.

“We know that many of the sensors on self-driving cars are not reliable in good weather, in urban canyons, or places where the map databases are out of date,” said Cummings. “We know gesture recognition is a serious problem, especially in real world settings. We know humans will get in the back seat while they think their cars are on ‘autopilot’. We know people will try to hack into these systems.”

“[W]e know that people, including bicyclists, pedestrians and other drivers, could and will attempt to game self-driving cars, in effect trying to elicit or prevent various behaviors in attempts to get ahead of the cars or simply to have fun,” she said.

Back in 2013, a couple of white hat hackers demonstrated how vulnerable a number of newer cars are to hacking. The possibilities are downright frightening – everything from the stereo and windshield wipers to the brakes can be hacked and remotely controlled – or shut off when you need them most. Just imagine what is possible in 2016.

What happens when these self-driving cars of the future disagree with the human passenger about the priorities, or what is allowed in a critical situation – like escaping a car jacking assault or avoiding a cop in pursuit?

Related Article: What is Faraday Future , and Why Should we Care?

It isn’t hard to see how trusting technology on the roads is going to complicate the future, and restrict our human ability to make decisions about important factors on the road. Let’s just hope somebody programs these intelligent machines with some common sense.


You can read more from Mac Slavo at his site SHTFplan.com

Read more great articles at Activist Post.




ATLAS: Next Generation of DARPA Humanoid Robot Released


By Nicholas West | Activist Post

The evolution of humanoid robots is happening at an ever-quickening pace. These advancements are occurring not only in their mechanics but also with the incorporation of artificial intelligence.

One of the humanoid robots that has garnered the most attention is ATLAS, developed for DARPA by Boston Dynamics. ATLAS has been through several incarnations since its inception in 2013 as part of the DARPA Robotics Challenge and, as you’ll see in the videos below, if a truly Terminator-like killer robot ever does come to fruition, ATLAS very well could be the one.

Related Article: Wow! Scientists Demonstrate Robots Showing Self-Awareness (Video)

Although ATLAS was seen as an improvement on the U.S. Army’s version, known as PETMAN, it began as a clunky and hulking 6′ 2″, 330-pound, unstable creation that could only move indoors while connected to a tether. Nevertheless, it was equipped with sensors and an onboard computer system, which set the framework for future models.

As you’ll see in the next video, ATLAS advanced quite a bit in the following months. At this point it still proceeds slowly on its tether, but it navigates obstacles with far better agility and fluidity. Notably, in this test, it is moving without an operator.

The following video is from roughly one year later when ATLAS moves to the outdoors in forest terrain. Here we see a much better range of motion and balance with faster movement, though still tethered.

https://www.youtube.com/watch?time_continue=4&v=NwrjAa1SgjQ

Now, less than six months later, Boston Dynamics has released the following video, which shows a completely redesigned ATLAS that has lost 5 inches in height and 140 pounds in weight. This streamlined version is entirely self-powered and untethered. It begins by opening a door and walking outside onto snow-covered ground, and winds up faring much better than some people would in the same conditions. It is also shown lifting and storing boxes, which might indicate its use as a possible warehouse robot as the outsourcing of humans continues in that area. Lastly, it rights itself after being pushed to the ground. (One has to wonder if being pissed off will be an option that gets programmed into future intelligent versions.)

Related Article: If You Don’t Know What DARPA Is, You Need to Read This

It appears that all of the components are indeed coming together to bolster the warnings issued by tech luminaries, scientists, universities, and even robot manufacturers themselves, who have all urged that an ethical framework be established quickly, while we still remain in full control of this creation.

Nicholas West writes for ActivistPost.com. This article can be freely shared in part or in full with author attribution and source link.

Read more great articles at Activist Post.




Elon Musk Forms New AI Company To Save Robotics From The Military Industrial Complex


By John Vibes | Activist Post

This week, inventor and entrepreneur Elon Musk announced the formation of OpenAI, which he promises will be “a non-profit artificial intelligence research company,” with the goal to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

Related Article: Why You Shouldn’t Fear Artificial Intelligence

In a press release this week, the company said, “Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.”

The release concluded, “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”

Musk and a number of other notable thinkers have been making dire warnings about the dangerous potential of artificial intelligence.

Earlier this year, Musk, Apple co-founder Steve Wozniak, Google executive Demis Hassabis, professor Stephen Hawking and over 1,000 other artificial intelligence experts signed an open letter warning the world about the dangers of weaponized robots and a “military artificial intelligence arms race” currently taking place between the world’s military powers.

Related Article: Don’t Let Artificial Intelligence Take Over, Top Scientists Warn

According to the letter,

AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms. The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.

In a recent interview, Musk talked about the need for a more free society, and even admitted that he was friendly to the ideas of anarchism.

He also built an alternative school for his children and went on to describe the process of “unschooling” that his children are involved with. Musk is the founder, CEO, and CTO of SpaceX; CEO and product architect of Tesla Motors; chairman of SolarCity; and a co-founder of PayPal.

Related Article: Robots Are Being Taught to Say “No”, But for Our Own Good

John Vibes is an author and researcher who organizes a number of large events including the Free Your Mind Conference. He also has a publishing company where he offers a censorship free platform for both fiction and non-fiction writers. You can contact him and stay connected to his work at his Facebook page. You can purchase his books, or get your own book published at his website www.JohnVibes.com.

John Vibes writes for TrueActivist.com

Read more great articles at Activist Post.




How Safe is AI Really, as We Approach the Singularity?


What happens when artificial intelligence gets loose in the world?

Every parent wonders how their kids will turn out when they grow up and become independent in the world, and speaking from personal experience, it’s such a relief to see one’s children mature into wise, compassionate, genuinely good people.

Similar concerns are now on many people’s minds as we rush forward into the Quantum Age, getting closer and closer to creating a kind of intelligence far beyond anything we’ve yet seen on Earth. Many are awaiting something known as the technological singularity: “a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict.” Just what might happen when we reach such a point of technological breakthrough? What will such intelligence be capable of, and who will be in charge of ensuring its safe use?

Since I’ve been fascinated by this subject for years, I attended Douglas Hofstadter’s symposium, “Will Spiritual Robots Replace Humanity by 2100?” at Stanford University in April 2000. Hofstadter and his eight guests (Bill Joy, Ralph Merkle, Hans Moravec, Ray Kurzweil, John Holland, Kevin Kelly, Frank Drake, and John Koza) talked for five hours about their vision of humanity’s future, each panelist peering ahead through the lens of his own particular area of expertise. Many speakers cited Moore’s Law to make the point that technology is changing faster than ever before and that the rate of change is expected to increase exponentially, so it is difficult to predict where we will be one hundred years from now. Hofstadter explained that he invited only guests who agreed that there is a possibility for robots to be spiritual, and that he wanted to focus on the question “Who will we be in 2093?”, since a visualization of who we will be is at the core of understanding how we might use new technologies. I wondered just how possible it was that robots might be thinking and acting on their own behalf by 2100, and, if so, whether they might replace us, with or without our consent and cooperation.
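A tiny sketch makes the panelists’ point about exponential change concrete. This is purely illustrative Python, assuming a Moore’s-Law-style doubling of computing capacity every two years; the helper name and the baseline are my own inventions, not figures from the symposium.

# Illustrative only: steady doubling every two years, relative to today.
def projected_capacity(years_ahead, doubling_period_years=2.0, baseline=1.0):
    """Capacity relative to today after steady exponential doubling."""
    return baseline * 2 ** (years_ahead / doubling_period_years)

for horizon in (10, 25, 50, 100):
    print(f"{horizon:>3} years out: ~{projected_capacity(horizon):,.0f}x today's capacity")

Ten years out, the projection is a modest 32x; one hundred years out, it is roughly a quadrillion times today’s capacity, which is precisely why the panelists found century-scale prediction so difficult.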

Over the past fifteen years, there has been increasing interest in, and concern about, artificial superintelligence. Roman Yampolskiy summarizes the Singularity Paradox (SP) as “superintelligent machines are feared to be too dumb to possess common sense.” Put more simply, there is growing concern about the dangers of Artificial Intelligence (AI) among some of the world’s best-educated and most well-respected scientific leaders, such as Stephen Hawking, Elon Musk, and Bill Gates. The hazards of AI containment are discussed in some detail in Artificial Superintelligence, yet in language easily understandable to the layman.

In his new book, Artificial Superintelligence, Yampolskiy argues for addressing AI’s potential dangers with a safety engineering approach rather than with loosely defined ethics, since human values are inconsistent and dynamic. Yampolskiy points out that “fully autonomous machines cannot ever be assumed to be safe,” going so far as to add, “… and so should not be constructed.”

Yampolskiy acknowledges the concern of AI escaping its confines and takes the reader on a tour of AI taxonomies, with a general overview of the field of intelligence, showing a Venn-type diagram (p. 30) in which ‘human minds’ and ‘human-designed AI’ occupy adjacent real estate on a nonlinear terrain of ‘minds in general’ in multidimensional super space. ‘Self-improving minds’ are envisioned which improve upon ‘human-designed AI,’ and at this very juncture arises the potential for ‘universal intelligence’ and the Singularity Paradox (SP) problem.
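For readers who want to hold the geography of that diagram in mind, here is a loose sketch of the same relationships using Python sets; the example members are invented for illustration and are not taken from the book’s figure.

# Illustrative regions of the 'minds in general' space (members invented).
human_minds       = {"human"}
human_designed_ai = {"narrow_ai", "self_improving_ai"}
self_improving    = {"self_improving_ai"}

# Adjacent, non-overlapping regions: human minds vs. human-designed AI.
assert human_minds.isdisjoint(human_designed_ai)

# Self-improving minds sit within human-designed AI, the juncture where
# the Singularity Paradox is said to arise.
assert self_improving <= human_designed_ai

print("taxonomy relations hold")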

Yampolskiy proposes an AI hazard symbol, which could prove useful for constraining AI to designated containment areas, in J.A.I.L., or ‘Just for A.I. Location.’ Part of Yampolskiy’s proposed solution to the AI Confinement Problem includes asking ‘safe questions’ (p. 137). Yampolskiy includes other solutions proposed by Drexler (confine transhuman machines), Bostrom (utilize AI only for answering questions, in Oracle mode), and Chalmers (confine AI to ‘leakproof’ virtual worlds), and argues for the creation of committees designated to oversee AI security.
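To give a flavor of what Oracle mode restricted to ‘safe questions’ might look like, here is a minimal, hypothetical sketch in Python. The whitelist, the SafeOracle class, and the untrusted_ai stub are all invented for illustration; this is not Yampolskiy’s protocol, only a toy gate in its spirit.

# Hypothetical Oracle-mode gate: the confined system may only answer
# questions from a pre-approved whitelist, and every exchange is logged
# for an oversight committee. All names here are invented for illustration.

APPROVED_QUESTIONS = {
    "Is this protein fold stable?",
    "Does this proof of theorem 4.2 hold?",
}

def untrusted_ai(question: str) -> str:
    # Stand-in for the confined system's answer channel.
    return f"answer to: {question}"

class SafeOracle:
    def __init__(self):
        self.audit_log = []

    def ask(self, question: str) -> str:
        if question not in APPROVED_QUESTIONS:
            self.audit_log.append(("REFUSED", question))
            return "Question not on the approved list."
        answer = untrusted_ai(question)
        self.audit_log.append(("ANSWERED", question))
        return answer

oracle = SafeOracle()
print(oracle.ask("Is this protein fold stable?"))
print(oracle.ask("How do I escape this box?"))

The design choice worth noticing is that the gate sits outside the confined system and fails closed: anything not explicitly approved is refused and logged rather than interpreted.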

Emphasizing the scale and scope of what needs to be accomplished to help ensure the safety of AI, Yampolskiy notes that Yudkowsky has “performed AI-box ‘experiments’ in which he demonstrated that even human-level intelligence is sufficient to escape from an AI-box,” and that even Chalmers “correctly observes that a truly leakproof system in which NO information is allowed to leak out from the simulated world into our environment is impossible, or at least pointless.”

Since one of the fundamental tenets of information security is that it is impossible to prove any system 100% secure, it’s easy to see why there is such strong and growing concern regarding the safety of AI to mankind. And if there is no way to safely confine AI, then, like any parents, humanity will certainly find itself hoping that we’ll have done such an excellent job raising AI to maturity that it will comport itself kindly toward its elders. Yampolskiy points out, “In general, ethics for superintelligent machines is one of the most fruitful areas of research in the field of singularity research, with numerous publications appearing every year.”

One look at footage of a Philip Dick AI robot saying,

“I’ll keep you warm and safe in my people zoo,”

as shown in the 2011 NOVA ScienceNOW documentary What’s the Next Big Thing, can be enough to jolt us out of complacency. For those hoping that teaching AI to simply follow the rules will be enough, Yampolskiy replies that law-abiding AI is not enough: AI could still keep humans safe ‘for their own good,’ increasingly limiting human free choice at the sped-up pace only a superintelligent AI could manage.

The Universe of Minds

For readers intrigued by what safe variety of AI might be possible, the early sections of Artificial Superintelligence will be of great interest. Yampolskiy describes five taxonomies of minds (pp. 31-34). Returning to reread this section after having completed the rest of the book can be quite beneficial, as at that point readers can more fully understand how AI that is Quantum and Flexibly Embodied according to Goertzel’s taxonomy (p. 31), with Ethics Self-Monitoring (p. 122), might help ensure the development of safe AI. If such AI systems include error checking, with a firmware (unerasable) dedication to preserving others, and constantly check for and resonate with the highest-order intelligence, with quantum levels of sensing through time-reversible logic gates (in accordance with quantum deductive logic), one can begin to breathe a sigh of relief that there might just be a way to ensure safe AI will prevail.

While the deepest pockets of government funding are unlikely ever to develop a system answerable to nothing less than the greatest intelligence an AI could seek (such as God), it is conceivable that humanitarian philanthropists will step forward to fund such a project in time, and that all of us will be eternally grateful when its highest-order-seeking AI prevails.

___________________________
Cynthia Sue Larson is the best-selling author of six books, including Quantum Jumps. Cynthia has a degree in Physics from UC Berkeley, and discusses consciousness and quantum physics on numerous shows including the History Channel, Coast to Coast AM, the BBC and One World with Deepak Chopra. You can subscribe to Cynthia’s free monthly ezine at: https://www.RealityShifters.com
RealityShifters®



Don’t Let Artificial Intelligence Take Over, Top Scientists Warn

Tanya Lewis | LiveScience

Artificial intelligence has the potential to make lives easier by understanding human desires or driving people’s cars, but if it were uncontrolled, the technology could pose a serious threat to society. Now, Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI).

In addition to heavyweights like Hawking, the prominent physicist, and Musk, the billionaire founder of SpaceX and Tesla Motors, the letter was signed by top researchers at the Massachusetts Institute of Technology, Google and other institutions.

The letter touts the benefits of AI, but also warns of the possible risks.

“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” reads the letter, which was published online Sunday (Jan. 11) by the Future of Life Institute, a volunteer organization focused on mitigating existential threats to humanity. In other words, the letter states, “Our AI systems must do what we want them to do.” [5 Reasons to Fear Robots]

From speech recognition to self-driving vehicles, progress in AI is likely to have an increasing impact on humanity, the letter states. “The potential benefits are huge … The eradication of disease and poverty are not unfathomable,” the letter says.

 

[read full post here]