Self-regulation is already being practiced without industry and government intervention. For instance, consider a somewhat trivial example that became public involving Facebook’s AI team, which shut down one of its AI programs in July 2017 after its negotiation bots started to invent their own language, one that human programmers could not understand. See: Mark Wilson, AI Is Inventing Languages Humans Can’t Understand. Should We Stop It? (Fast Company, 7/14/17). This is an interesting article about a conversation between two AI agents, negotiation bots, that were communicating in English, as they were designed to do, until the communications evolved into the following exchange:
Bot 1: “I can can I I everything else.”
Bot 2: “Balls have zero to me to me to me to me to me to me to me to me to.”
Huh? We do not know what they were saying, but the negotiation bots did, and they made a deal (we think). Facebook AI Research (FAIR) stopped the experiment, but, as Wilson’s article points out, FAIR admitted that this was not the first time AIs had started using a language to talk to each other that humans could not understand. One of the principles being debated now is whether to require AIs to explain their creations, meaning the software code they generate. Here the creation was not code but language, yet FAIR still stopped it.
Self-regulation should continue, but, at the same time, the dangers should be openly discussed and the basic ethics of AI more fully developed. Ultimately this will result in written standards and enforcement. We want to be sure these rules are effective and fair for all, and neither naive, prejudicial, nor alarmist.