Intro / Mission

Will Artificial Intelligence become the great liberator of mankind? Create wealth for all and eliminate drudgery? Will AI allow us to clean the environment, cure diseases, extend life indefinitely and make us all geniuses? Will AI enhance our brains and physical abilities, making us all super-hero cyborgs? Will it facilitate justice, equality and fairness for all? Will AI usher in a technological utopia? Or will AI lead to disasters? Will AI create powerful autonomous weapons? Will it perpetuate human bias and prejudices? Will AI bots impersonate and fool people, secretly sway public opinion and even impact the outcome of elections? (Some researchers think this is what happened in the 2016 U.S. elections.) Will AI create new ways for the few to oppress the many? Will it result in a rigged stock market? Will it bring other great disruptions to our economy, including widespread unemployment? Will some AI eventually become smarter than we are and develop a will of its own, one that menaces and conflicts with humanity? Are Homo sapiens in danger of becoming biological load files for digital super-intelligence? As Stephen Hawking puts it: “In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.”

Some think that we need laws now to protect us from such disaster scenarios. Others think that self-regulation alone is adequate and that these fears are unjustified. At present there are strongly opposing views among experts concerning the future of AI.

Our mission is threefold:

  1. Foster dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government and industry groups.
  3. Inspire and educate everyone on the importance of artificial intelligence.

First Mission: Foster Dialogue Between Opposing Camps

The first, threshold mission is to go beyond argumentative debates, formal and informal, and move to dialogue between the competing camps. See, e.g., Bohm Dialogue, Martin Buber and The Sedona Conference. Then, once this conflict is resolved, we will all be in a much better position to attain the other two goals. We need experienced mediators, dialogue specialists and judges to help us with that first goal. Although we already have many lined up, we could always use more.

In arguments nobody really listens to try to understand the other side. If they hear at all, it is just to analyze and respond, to strike the other side down. The adversarial argument approach only works if there is a fair, disinterested judge to rule on and resolve the disputes. In the ongoing disputes between opposing camps in AI ethics there is no judge. There is only public opinion.

In dialogue the whole point is to listen to and hear the other side’s position. The idea is to build common understanding and perhaps reach a consensus from common ground. There are no winners unless both sides win. Since we have no judges in AI ethics, the adversarial debate now raging is pointless and irrational. It does more harm than good for both sides. Yet this kind of debate continues between otherwise very rational people.

We know from decades of legal experience as practicing attorneys, mediators and judges that the current conflicts can be overcome. We use confidential dialogues based on earned trust, understanding and respect. Social media and thirty-second sound bites, which characterize the current level of argument, will never get us there. They have only exacerbated the problem, and will continue to do so. We propose to host a no-press-allowed conference where people can speak to each other without concern of disclosure. Everyone will agree to maintain confidentiality. Then the biggest problem will be attendance: actually getting the leaders of both sides into a room together to hash this out. Depending on turnout we could easily have dozens of breakout sessions, with professional mediators and dialogue specialists assigned to each group.

The many lawyers already involved are well qualified to execute an event like that. Collectively we have experience with thousands of mediations, some of them even involving scientists, top CEOs and celebrities. We know how to keep confidences, build bridges and overcome mistrust. If need be, we can bring in top judges too. The social media sniping that has characterized the AI ethics debate so far should stop. It should be replaced by real dialogue. If the parties are willing to at least meet, we can help make it happen. We are confident that we can help elevate the discussion and attain some beginning levels of consensus. At the very least we can stop the sniping. Write us if you might be able to help make this happen. Maybe then we can move on to agreement and action.

Second & Third Missions: Help Articulate Basic Regulatory Principles and Inspire/Educate

Although ours is an initiative begun by lawyers, we strongly believe that an interdisciplinary team approach is necessary for the creation of ethical codes to regulate artificial intelligence. That is our second mission: to help articulate basic regulatory principles for government and industry groups. AI experts and ethics specialists of all types are invited to participate in our group to help make this happen. The same goes for our third goal: to inspire and educate everyone on the importance of artificial intelligence. We are open to coordination and cooperation with all other groups. To help attain this goal we also need great speakers and writers.

Interdisciplinary Team Approach

AI ethics is one of the most important issues facing humanity today. It is far too important for social media sensationalism. It is far too important for resolution by lawyers and government regulators alone. It is also far too important to leave to AI coders to improvise on their own. We have to engage in true dialogue and collaborate, not just to overcome the current conflicts, but to articulate the basic regulatory principles for government and industry groups. Here is Ralph Losey’s explanation of the interdisciplinary team approach that he has used for years in his legal practice and now advocates here.


General List of Some Classic AI Issues

For a good general list of the ethical issues now faced by AI, see Julia Bossmann’s Top 9 ethical issues in artificial intelligence (World Economic Forum, 10/21/16):

1. Unemployment. What happens after the end of jobs?
2. Inequality. How do we distribute the wealth created by machines?
3. Humanity. How do machines affect our behavior and interaction?
4. Artificial stupidity. How can we guard against mistakes?
5. Racist robots. How do we eliminate AI bias?
6. Security. How do we keep AI safe from adversaries?
7. Evil genies. How do we protect against unintended consequences?
8. Singularity. How do we stay in control of a complex intelligent system?
9. Robot rights. How do we define the humane treatment of AI?

Also see: Ethics Education Library (Artificial Intelligence and Robotics articles).
