The 23 Asilomar Principles for Developing Safe AI — Here’s What Ordinary People Think

Recently, on what easily felt like one of the hottest days of the summer so far, Tech 2025 held a lively workshop, 23 Guidelines to Avoid an AI Apocalypse According to Experts, on the recently published guidelines developed to steer the safe development of artificial intelligence (AI), an issue that has quickly risen to the top of the “existential threats to humanity” list.

The inspiration for this workshop was a little-known, four-day conference called Beneficial AI 2017, produced by the Future of Life Institute, which took place January 5–8. Its attendee list included a hundred of the most innovative and influential people working in AI today: researchers, engineers, scientists, and assorted billionaire entrepreneurs such as Elon Musk and Larry Page. By the end of this exclusive retreat, the organizers and attendees had released an ambitious set of guidelines (the 23 Asilomar AI Principles) outlining commitments we should make to ensure that we develop safe AI, covering a wide range of issues in research, ethics, and values, as well as longer-term outlooks on the technology and its implementation.

The simple and compelling purpose of the workshop, according to the event page, was to:

“…explore the intent, meaning, and implications of these AI guidelines, and whether having a set of existing guidelines can really safeguard humanity from AI-catastrophe… [and to] offer unique insight into this topic and guide us on alternate ways of thinking about AI risk and the questions we should be asking of researchers, tech companies and the government about these safeguards. Who are the 100 experts guiding our AI future? What can we learn about them that will help us to understand how our future is being shaped through AI?”

In a more intimate setting than our other events, attendees represented a diverse cross-section of professional, concerned New Yorkers. When asked why they were drawn to this particular workshop, most admitted to never having heard of the 23 Asilomar Principles but expressed a strong interest in learning how AI is being developed and what they can do to help guide responsible development of the technology.

Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute

We were fortunate to have Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute, in the house to help frame the discussion, walk us through these principles, and lend his perspective on this emerging and consequential field. If you’re going to have a conversation about existential threats to humanity, then Dr. Baum (who specializes in global catastrophic risk and was, coincidentally, invited to attend Beneficial AI 2017) is your guy.

First, I’d like to encourage everyone reading this to look through the 23 Asilomar Principles before continuing with the rest of the article. Here’s the link one more time; go ahead, I’ll wait. It’s a compelling read.

AI is a huge can of worms that could touch almost every aspect of our lives in the coming decades, and it’s simply not in our best interests to keep our heads in the sand and hope we don’t make Skynet. Some of the best minds in business, media, policy, and academia are united in their belief that AI development should be safe and aligned with human values, ethics, and goals; that it should remain transparent and fair on a global level; and that governments and other institutions need to be informed and prepared for these advances.

That’s all well and good, but is it enough?

After Seth finished his presentation, which turned out to be even more enlightening and thought-provoking than I had imagined, we did our usual Tech 2025 interactive workshop exercise, where we ask attendees to break into smaller groups and work together on a problem related to the topic.

The 24th Asilomar Principle!

We called this exercise The 24th Asilomar Principle and asked attendees to take a critical look at the document and imagine they were among the 100 experts at the conference creating the guidelines:

No document or idea is perfect. There is always room for improvement and refinement. It’s your turn to create a guideline that will guide human beings to create artificial intelligence that is safe and used only for good! You are all AI experts at the Asilomar Conference who missed the first session, where this document was drafted. Divide into groups of five and discuss the following two problems, which you will present to the other AI experts at the conference for consideration in the document.

First, the groups had to choose one principle that they collectively agreed was particularly problematic, so much so that it should be deleted from the document or seriously re-evaluated.

And second, they were instructed to create a guiding principle that the AI experts at the conference missed and that should be included in the document: the 24th principle!

 
Attendees discussing the exercise (“Create the 24th Principle”).

The groups immediately began deliberating and were extremely engaged in the exercise. Clearly, this problem (how to develop safe AI and universal guidelines) struck a nerve!

After 15 minutes of spirited debate, the groups were asked to present their answers. They came up with a lot of interesting points — a few that even Seth said he had not considered. Let’s jump in and start asking some questions.

One group asked a question that struck everyone as pointing to a glaring and odd omission in the 23 Asilomar Principles document:

What about all of the physical infrastructure needed to power AI and its impact on the physical environment? How will this be regulated?

 
An attendee questioning the impact of AI in the physical world.

Even though there aren’t yet plans to build data centers exclusively for AI, the server farms that house and process the massive amounts of data behind today’s technology require not only substantial electricity to run and cool the machines, but also the mining and processing of rare earth metals to manufacture them, a process that is inherently toxic and produces radioactive waste. Just because something is in “the cloud” doesn’t mean there are no consequences here on Earth. As consumers, we’re often isolated from the physical underpinnings of our everyday technology. We’ll need a greater understanding of how to manage these systems as efficiently and safely as possible, while finding ways to meet the growing demand for AI sustainably. This question turned out to be the one we all agreed with and, as Seth pointed out, one he had not considered himself. Should this be the 24th Asilomar Principle?

There was also an open question around enforcement. These guidelines are important and necessary, but completely useless without some kind of accountability.

What kinds of consequences will there be if a company, university, or government breaks one of these rules or unleashes a malicious program into the world, and who doles out the punishment? Does monitoring fall to the UN or to some entirely new governing body?

It might seem a bit trivial to place AI alongside traditional international issues like nuclear non-proliferation, disease, or the refugee crisis, but make no mistake: as these programs are deployed more and more widely, they will only become more consequential. Elon Musk has been banging this drum for some time now.

And with countries around the world racing to develop and own the most innovative AI, it will require a united, international effort to keep everyone on the same page and in compliance with whatever guidelines or laws are put in place to govern the development of AI. If the countries that dominate the UN and other institutions continue to dominate new ones in the future, what effects will that have on the development and monitoring of AI? Will the agendas and politics of these organizations favor one nation or culture over another?

This led us into another important question, around values. It seems obvious to want to align AI with our morals as a society, but it becomes a lot more complicated when we actually try to define what constitutes “human values.” Sure, there is a great deal of similarity when talking about ethics, but ultimately, values vary from culture to culture, so the question becomes: whose values are going to be used to model these new systems? If a company in the Valley deploys an algorithm in China — or vice versa — will there be a potentially harmful conflict or blind spot? This was something Seth brought up during his lecture, and it echoed through the room as the discussion continued.

The authors of the 23 Asilomar Principles clearly understand how complicated this issue is. An entire section of the document is dedicated to “Ethics and Values,” with a number of suggestions on imbuing AI with human morals, including:

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Number 11 above (“Human Values”) was particularly troublesome to almost everyone in the room.

Whether it’s driving a car, diagnosing a patient, or speaking a language, AI will learn by observing us, using our data, our habits, and our choices as a roadmap to greater understanding and better models; it’s more analogous to raising a child than to building a robot. And as we all know, we can pick up some pretty nasty habits from our parents. Remember that ’80s drug PSA?

Current AI has already shown many of the same biases we find in ourselves, and this is compounded by the lack of diversity in computer science and tech in general, producing systems that see through a very particular (rich, white, male) lens. You get out of it what you put into it: if the data or the algorithm is based on bad, inaccurate, or narrow assumptions, the results won’t be any better. In this way, pegging AI to our “human values” might actually be limiting and problematic for the future.
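As a toy sketch of that garbage-in, garbage-out point (the scenario and names here are hypothetical, not from the workshop or the Principles document), consider a “model” that simply learns the most common outcome from skewed historical decisions. Whatever bias is in the record becomes the model:

```python
from collections import Counter

# Hypothetical, deliberately oversimplified "hiring model":
# it learns nothing but the majority outcome in past decisions.
historical_decisions = ["hire"] * 9 + ["reject"] * 1  # a skewed record

def train_majority_model(labels):
    # The "training" step just memorizes the most frequent label,
    # so the model is literally the bias in the data.
    return Counter(labels).most_common(1)[0][0]

model = train_majority_model(historical_decisions)
print(model)  # predicts "hire" for everyone, merits unexamined
```

Real systems are vastly more sophisticated, but the failure mode scales: a model fit to narrow or unrepresentative data reproduces that narrowness in every prediction it makes.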

Shouldn’t we strive for a system that could correct or even eliminate some of our more intractable faults?

Having listened to each group’s presentation on what the 24th Asilomar Principle should be, Dr. Baum noted that every suggestion he heard was strong and valid (indeed, any one of them could have been included in the official document). When pressed by Charlie to choose the one most likely to be considered for inclusion by the 100+ AI experts at the Beneficial AI conference, he chose the guideline questioning the impact of AI on our physical world (data centers, etc.). It seemed, he said, to be the most consequential idea that should be included.

Dr. Baum also emphasized that — at least at this early stage — the discussions being had about the development of AI by experts and discussions about AI by the general public are not really that far apart. There is still a lot of ambiguity about the purpose and future of AI and there is a wide open field of opportunity for us to articulate our concerns to policy makers, technologists, the big tech companies developing AI, and the media so that we can have a voice in the conversation and also influence the direction of the technology at large.

This is the reason Charlie Oliver (Founder and CEO of Tech 2025, who develops the programming and events) made a point of doing a workshop that not only explained what the 23 Asilomar Principles are, but also asked attendees to think critically about them and come up with their own solutions. This, after all, is the bedrock of what we do at Tech 2025: give people from all walks of life a platform to discuss consequential emerging technologies with each other and to contribute to the solutions we so desperately need for problems that are coming at us faster than we can understand them.

Engineers and developers see these problems from a certain point of view, with an eye to achieving a certain set of goals. That perspective can be very different from the one we, as end users, have when interacting with technology. Environmental impact, governance and accountability, and defining a real value system around AI are just three examples of the general public identifying something that tech’s best and brightest managed to miss. If these systems are going to be used in our everyday lives, after being trained on our data and our choices, everyone needs a seat at the table and a meaningful role in shaping the future.

As always, I’d love to know your thoughts and questions around these issues, so feel free to keep the conversation going in the comments below. And if you’re interested in coming out and throwing down with us in person, be sure to check out our events page and follow us on social media for updates and info on the next workshop. Until then, stay informed and stay curious.

Read more about the 23 Asilomar Principles at the Future of Life Institute: http://futureoflife.org/ai-principles/

And check out Seth Baum’s work and research at: http://sethbaum.com/

Events Manager, Blogger, Strategist, Musician
