Intro / Mission

Our mission is to help mankind navigate the great dilemma of our age, well stated by Stephen Hawking: “The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.” Our goal is to help make it the best thing ever to happen to humanity. We have a three-fold plan to help humanity get there: dialogue, principles, and education.

Our focus is to help law and technology work together to create reasonable policies and regulations. This includes the new generative LLMs that surprised the world in late 2022.

This and other images in AI-Ethics were created by Ralph Losey using AI software.

Pros and Cons of the Arguments

Will Artificial Intelligence become the great liberator of mankind? Create wealth for all and eliminate drudgery? Will AI allow us to clean the environment, cure diseases, extend life indefinitely and make us all geniuses? Will AI enhance our brains and physical abilities, making us all super-hero cyborgs? Will it facilitate justice, equality and fairness for all? Will AI usher in a technological utopia? See, e.g., Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI? People favoring this perspective tend to oppose regulation for a variety of reasons, including the belief that it is too early to be concerned.

Or – Will AI lead to disasters? Will AI create powerful autonomous weapons that threaten to kill us all? Will it perpetuate human bias and prejudice? Will AI bots impersonate and fool people, secretly sway public opinion and even affect the outcome of elections? (Some researchers think this is what happened in the 2016 U.S. elections.) Will AI create new ways for the few to oppress the many? Will it result in a rigged stock market? Will it bring other great disruptions to our economy, including widespread unemployment? Will some AI eventually become smarter than we are, and develop a will of its own, one that menaces and conflicts with humanity? Are Homo sapiens in danger of becoming biological load files for digital super-intelligence?

Not unexpectedly, this doomsday camp favors strong regulation, including an immediate halt to the development of new generative AI, which took the world by surprise in late 2022. See: Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’ (NYT, 3/29/23); the Open Letter dated March 22, 2023 of the influential Future of Life Institute calling for a “pause in the development of A.I. systems more powerful than GPT-4. . . . and if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” Also see: The problems with a moratorium on training large AI systems (Brookings Institution, 4/11/23) (noting multiple problems with the proposed moratorium, including possible First Amendment violations). Can research really be stopped entirely, as this side proposes? Can AI be gagged?

One side thinks we need government-imposed laws and detailed regulations to protect us from disaster scenarios. The other side thinks industry self-regulation alone is adequate and that the fears are unjustified. At present, experts hold strongly opposing views about the future of AI. Let’s bring in the mediators to help resolve this critical roadblock to reasonable AI ethics.

Balanced Middle Path

We believe that a middle way is best, where both dangers and opportunities are balanced, and where government and industry work together, along with help and input from private citizens. We advocate for a global team approach to help maximize the odds of a positive outcome for humanity.

AI-Ethics.com suggests three ways to start this effort:

  1. Foster a mediated dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government, industry groups and the public.
  3. Inspire and educate everyone on the importance of artificial intelligence.

First Mission: Foster Dialogue Between Opposing Camps

The first, threshold mission of AI-Ethics.com is to go beyond argumentative debates, formal and informal, and move to dialogue between the competing camps. See, e.g., Bohm Dialogue, Martin Buber and The Sedona Conference. Then, once this conflict is resolved, or at least strong bridges are built between the opposing views, we will all be in a much better position to attain the other two goals. We have experienced mediators, dialogue specialists, judges and other experts lined up to help with that first goal, although we could always use more with the knowledge and wisdom to make this work. Contact Ralph if you are such a person.

The missing ingredients so far are the disputing parties themselves. They seem reluctant to reach for outside help. Neither side seems ready yet to engage, but, like it or not, governments are forcing the issue now, especially the E.U., with the U.S. state and federal governments not far behind. See, e.g., White House Obtains Commitments to Regulation of Generative AI from OpenAI, Amazon, Anthropic, Google, Inflection, Meta and Microsoft.

Without active industry and government participation, nothing can be accomplished. Perhaps now all of the parties are ready to talk, to move from complaints, petitions and posturing to real action, dialogues and agreement. The stakes are high. Hawking was right: “The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.” Let’s get the real dialogues going and tip the scales toward making AI the best thing that has ever happened to humanity. Will you join in this effort? Help make it happen? Or do you just want to keep arguing while AI advances at an exponential rate?

In arguments, nobody really listens to understand the other side. If they hear at all, it is just to analyze and respond, to strike down. The adversarial argument approach only works if there is a fair, disinterested judge to rule and resolve the disputes. In the ongoing disputes between opposing camps in AI ethics there is no judge. There is only public opinion and the threat of looming regulation, not to mention the ultimate threat of misused, misaligned AI, the worst thing that could ever happen to humanity. The time to take action is now.

Second & Third Missions: Help Articulate Basic Regulatory Principles and Inspire/Educate

Although AI-Ethics.com is an initiative begun by lawyers, we strongly believe that an interdisciplinary team approach is necessary for the creation of ethical codes to regulate artificial intelligence. That is our second mission: to help articulate basic regulatory principles for government and industry groups. All types of AI experts and other ethics specialists are invited to participate in our group to help make this happen. The same goes for our third goal, to inspire and educate everyone on the importance of artificial intelligence. We are open to coordination and cooperation with all other groups.

Interdisciplinary Team Approach

AI ethics is one of the most important issues facing humanity today. It is far too important for social media sensationalism, and too important to be resolved by lawyers and government alone. It is also far too important to leave to tech companies, academics and AI coders alone. We have to engage in true dialogue and collaboration, not just to overcome the current conflicts, but to come together and make a team effort to articulate the basic regulatory principles. The team should include government, industry groups and academics, and also allow for public citizen participation. It should include technologists, coders, businessmen, teachers, scientists and lawyers, all interested parties. It needs to be all-inclusive, a global effort by many groups working together.

This affects everyone and should not be left in the hands of a closed group of oligarchs and power brokers.

Still, some strategic confidentiality is needed for the process to work. Our proposal is to have controlled transparency, with reasoned disclosure in the context of team efforts.

We think this may be the best way to avoid AI becoming, as Hawking put it, the worst thing ever to happen to humanity. Multiple, diverse teams with different backgrounds and skill sets are needed.

A global effort of multiple teams like this can help tip the scales to make AI the best thing ever, not the worst.
