There is a debate now underway among AI experts that shadows the whole field. One side of the argument is concerned about the possibility of severe dangers arising from the development of superintelligent general AI. The other side thinks such fears are groundless, and that acting on them could needlessly impede AI development with unnecessary regulations.
Some on the unconcerned, fearless side are not worried because they do not think superintelligent AI will ever come to pass, or because they believe that if it does, it will only happen in the remote future, too far off to be concerned about now. Max Tegmark calls this group the Techno-Skeptics in his book Life 3.0: Being Human in the Age of Artificial Intelligence (Knopf 2017). Others on the fearless side agree with the concerned group, which we sometimes call the Pro-Caution side, that superintelligent AI will likely come to pass in this century, but think it will be all good. Tegmark, who is himself on the Pro-Caution side, calls this group the Digital Utopians.
Aside from the small number of AI experts who are Techno-Skeptics, the debate between the Digital Utopians and the Pro-Caution proponents, whom Tegmark calls the Beneficial AI proponents, arises out of an underlying agreement that general artificial intelligence has the potential to become smarter than we are, that is, superintelligent. Both sides of this debate agree that super-evolved AI could become a great liberator of mankind, one that solves most problems, cures diseases, extends life indefinitely and frees us from drudgery.
Out of this shared, ebullient hope for the benefits of AI superintelligence arises the Pro-Caution group, which also sees a potential dystopia in artificial superintelligence. They are concerned about doomsday scenarios if AI is not properly regulated. This camp includes some of the smartest people around, among them Stephen Hawking, Elon Musk and Bill Gates. They fear that superintelligent AIs could run amok without appropriate safeguards, and that a super-evolved AI could doom us all to extinction unless we are very careful to make sure the superintelligence is beneficial to mankind. As Tegmark explains, this fear does not arise from concern that a superintelligent AI will become conscious or evil, but from concern that the AI will have goals misaligned with our own.
Some in the Pro-Caution, Beneficial AI camp have little concern about possible extinction, but fear that super-AIs, if developed, could diminish us in many other, less obvious ways. Think of human beings kept in gilded cages of luxury. Think of future humans guided and cared for by benevolent AIs of higher-than-human intelligence. Many long for such a life of luxury; others are not so sure.
We could lose abilities through our dependency. This is already happening with the technology we have now: cursive handwriting, spelling and rudimentary arithmetic. Could super AI take this to the next level? Could we lose the ability to think for ourselves? Could we be denied the adventures of discovery and creativity? Could we be devalued and indoctrinated, losing freedoms, choices, emotions and experiences without even knowing it? Could we lose fundamental human rights, including the rights to self-determination, work, self-care, privacy and basic human dignity? What price utopia?
The No-Fear, Digital Utopian camp strongly disagrees with all of these doomsday fears, both hard and soft. This side includes both Google’s Larry Page and Facebook’s Mark Zuckerberg. Zuckerberg is a strong proponent of the No-Fear group, and his company is a leading researcher in the field of general AI. In a backyard video that Zuckerberg streamed live on Facebook on July 24, 2017, with six million of his friends watching, he responded to a question from one of them: “I watched a recent interview with Elon Musk and his largest fear for future was AI. What are your thoughts on AI and how it could affect the world?”
Zuckerberg responded by saying:
I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.
In the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives.
Zuckerberg said AI is already helping diagnose diseases and that the AI in self-driving cars will be a dramatic improvement that saves many lives. He then elaborated on his statement that naysayers like Musk are irresponsible:
Whenever I hear people saying AI is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used.
But people who are arguing for slowing down the process of building AI, I just find that really questionable. I have a hard time wrapping my head around that.
Zuckerberg’s position is understandable when you consider his Hacker Way philosophy, in which fast and constant improvement are fundamental ideas. He did, however, call Elon Musk “pretty irresponsible” for pushing AI regulations.
That prompted a quick response from Musk the next day on Twitter. Answering a question from one of his followers about Zuckerberg’s comment, he said: “I’ve talked to Mark about this. His understanding of the subject is limited.”
Musk has been thinking and speaking up about this topic for many years. He also praises AI, but thinks that we need to be careful and consider regulations.
Musk kept up the heat in a comment he made on Twitter on September 4, 2017, saying that artificial intelligence could be the “most likely” cause of a third world war. He was responding to a statement by Vladimir Putin that the first global leader in AI would “become the ruler of the world.”
Bloomberg reported that Google’s head of search and AI, John Giannandrea, responded at a TechCrunch event in San Francisco and said:
“There’s a huge amount of unwarranted hype around AI right now. This leap into, ‘Somebody is going to produce a superhuman intelligence and then there’s going to be all these ethical issues,’ is unwarranted and borderline irresponsible.” … “I’m definitely not worried about the AI apocalypse,” he said, after comparing modern-day computers to a four-year-old. “I just object to the hype and the sound bites that some people have been making.”
One of the goals of AI-Ethics.com is to go beyond the debates, formal and informal, and move to dialogue between the competing camps. See our Mission Statement. In order to do that we must first understand both sides. Only then is dialogue possible that will allow for open understanding and trust between them. Then we can move on to agreement and action. Social media and thirty-second sound bites will never get us there.
AI-Ethics.com proposes to host a no-press-allowed conference to get the leaders of both sides into a room together to hash this out, with dozens of breakout sessions and professional mediators and dialogue specialists assigned to each group. As explained in greater detail in our Mission Statement, the current members of AI-Ethics.com are well qualified to make this happen. We know from decades of legal experience as practicing attorneys, mediators and judges that we can overcome the current conflicts. We use confidential dialogues based on mutual trust, understanding and respect. The current level of argument, conducted in social media and thirty-second sound bites, just exacerbates the problem, and already has. Write us if you might be able to help make this conference happen.