Pro-Caution Side of the Debate

In 2014 Elon Musk referred to developing general AI as "summoning the demon." He is not alone in worrying about advanced AI; see, e.g., OpenAI.com and CSER.org. Bill Gates shares these concerns. Stephen Hawking, widely regarded as one of the greatest scientific minds of our time, also commented on the potential dangers of AI on several occasions. In a speech he gave in 2016 at Cambridge marking the opening of the Centre for the Future of Intelligence, Hawking warned about the possible dangers of artificial intelligence and called for research to be done in this area. He called this work crucial to the future of our civilization and of our species. His full five-minute talk is available on video.

On July 15, 2017, Elon Musk warned state governors at the National Governors Association Conference about the dangers of unregulated artificial intelligence. Musk is very concerned about any advanced AI that does not have some kind of ethics programmed into its DNA. Musk said that "AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that." He went on to urge the governors to begin investigating AI regulation now:

AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.

Bill Gates agrees. He said back in January 2015 that

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Elon Musk and Bill Gates spoke together on the Dangers of Artificial Intelligence at an event in China in 2015. Musk compared work on AI to work on nuclear energy and said it was just as dangerous as nuclear weapons. He said the right emphasis should be on AI safety, and that we should not rush into something we do not understand.

At the same event, Bill Gates agreed with Musk. Bill also made some amusing, interesting observations about human wetware and our slow brain algorithms. He spoke of our unique human ability to take experience and turn it into knowledge. See: Examining the 12 Predictions Made in 2015 in “Information → Knowledge → Wisdom.” Bill Gates thinks that as soon as machines gain this ability, they will almost immediately move beyond the human level of intelligence. They will read all the books and articles online, and perhaps all social media and private mail as well. Bill has no patience for skeptics of the inherent danger of AI: how can they not see what a huge challenge this is?

Gates, Musk, and Hawking are all concerned that a super-AI with access to computer networks, including the Internet, could take actions of all kinds, both global and micro. Without proper standards and safeguards, it could modify conditions and connections before we even knew what it was doing. We would not have the time or the ability to react unless certain basic protections are hardwired into the AI, both in its silicon and in its algorithms. They all urge us to take action now, rather than wait and react.

To close out the argument for those who fear advanced AI and urge regulators to start thinking now about how to restrain it, consider the TED Talk by Sam Harris, Can we build AI without losing control over it?, given on October 19, 2016. Harris, a neuroscientist and writer, has some interesting ideas on this.

AI-Ethics.com recommends that you also watch the video of Nick Bostrom speaking on the Future of Machine Intelligence, and read his influential book, Superintelligence: Paths, Dangers, Strategies. Many regard Bostrom, a philosophy professor at Oxford, as the intellectual leader of the pro-caution side of the debate. Both Gates and Musk endorse his book. In the November 2015 Afterword to the paperback edition of Superintelligence, Bostrom notes the strong feelings that now exist on both sides of the debate and makes this call for action:

So I call on all sides to practice patience and restraint, and broad-mindedness, and to engage in direct dialogue and collaboration whenever possible.

We completely agree that open-minded dialogue is the solution. For a well-written summary of the debate, albeit somewhat dated, see the popular blog posts by Tim Urban, The AI Revolution: The Road to Superintelligence, part one and part two (Wait But Why, January 2015). Also check out the work of Daniel Dewey.
