On the other side of the AI ethics debate you will find most, but not all, mainstream AI researchers. You will also find many technology luminaries, such as Mark Zuckerberg and Ray Kurzweil. They think that the doomsday concerns are unfounded. Oren Etzioni, No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity (MIT Technology Review, 9/20/16); Ben Sullivan, Elite Scientists Have Told the Pentagon That AI Won’t Threaten Humanity (Motherboard 1/19/17).
You also have famous AI scholars and researchers like Pedro Domingos who are skeptical of all superintelligence fears, and even of the AI ethics debate in general. Domingos stepped into the Zuckerberg v. Musk social media dispute on Zuckerberg's side. He told Wired on July 17, 2017 that:
Many of us have tried to educate him (meaning Musk) and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent.
Tom Simonite, Elon Musk’s Freak-Out Over Killer Robots Distracts from Our Real AI Problems, (Wired, 7/17/17).
Domingos also famously said in his book, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, a book which we recommend:
People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.
But Domingos says that when it comes to the ethics of artificial intelligence, it’s very simple. “Machines are not independent agents—a machine is an extension of its owner—therefore, whatever ethical rules of behavior I should follow as a human, the machine should do the same. If we keep this firmly in mind,” he says, “a lot of things become simplified and a lot of confusion goes away.” …
It is only simple up to a point, however: the ethical spectrum remains incredibly complex, and, as Domingos would be the first to admit, not everyone has the same ethics.
“One of the things that is starting to worry me today is that technologists like me are starting to think it’s their job to be programming ethics into computers, but I don’t think that’s our job, because there isn’t one ethics,” Domingos says. “My job isn’t to program my ethics into your computer; it’s to make it easy for you to program your ethics into your computer without being a programmer.”
We agree with that too. No one wants technologists alone to be deciding ethics for the world. This needs to be a group effort, involving all disciplines, all people. It requires full dialogue on social policy, ultimately leading to legal codifications.
The Wired article of July 17, 2017, also states that Domingos thought it would be better not to focus on far-out superintelligence concerns, but instead:
America’s governmental chief executives would be better advised to consider the negative effects of today’s limited AI, such as how it is giving disproportionate market power to a few large tech companies.
The same Wired article states that Iyad Rahwan, who works on AI and society at MIT, does not deny that Musk’s nightmare scenarios could eventually happen, but says attending to today’s AI challenges is the most pragmatic way to prepare. “By focusing on the short-term questions, we can scaffold a regulatory architecture that might help with the more unpredictable, super-intelligent AI scenarios.” We agree, but are also inclined to think we should at least try to do both at the same time. What if Musk, Gates, and Hawking are right?
The Wired article also quotes Ryan Calo, a law professor at the University of Washington, as saying in response to the Zuckerberg v. Musk debate:
Artificial intelligence is something policy makers should pay attention to, but focusing on the existential threat is doubly distracting from its potential for good and the real-world problems it’s creating today and in the near term.
Simonite, Elon Musk’s Freak-Out Over Killer Robots Distracts from Our Real AI Problems (Wired, 7/17/17). Also see: Elon Musk is wrong. The AI singularity won’t kill us all (Wired, 9/20/17) (Professor Walsh urges regulation now, but aimed at what he says is the real danger: stupid AI, not superintelligent AI).
But how far out from the present is superintelligence? For a very pro-AI view, one that is not concerned with doomsday scenarios, consider the ideas of Ray Kurzweil, Google’s Director of Engineering. Kurzweil thinks that AI will attain human-level intelligence by 2029, but will then mosey along and not attain super-intelligence, which he calls the Singularity, until 2045.
2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.
Kurzweil is not worried about the impact of super-intelligent AI. To the contrary, he looks forward to the Singularity and urges us to get ready to merge with the super-AIs when it happens. He sees AI super-intelligence as an opportunity for human augmentation and immortality. Here is a February 2017 video interview in which Kurzweil responds to the fears of Hawking, Gates, and Musk about the rise of strong AI.
Note that Kurzweil concedes the concerns are valid, but thinks they miss the point that AI will be us, not them: humans will enhance themselves to the super-intelligence level by integrating with AI – the Borg approach (our words, not his).
Getting back to the more mainstream defenses of AI against superintelligence fears, consider Oren Etzioni’s TED Talk on this topic.
Oren Etzioni thinks AI has gotten a bad rap and is not an existential threat to the human race. As the video shows, however, even Etzioni is concerned about autonomous weapons and immediate economic impacts. He invites everyone to join him in advocating for the responsible use of AI. This is the common ground that we at AI-Ethics.com seek to explore and expand.