How to Discuss AI Risks

The way we discuss the risks from artificial intelligence will shape how the world manages them. These strategies can help you navigate these crucial conversations effectively.

Explaining the possibility of catastrophic AI risks can be difficult, and can even feel awkward, but there are simple steps you can take to discuss AI safety more easily and more effectively. In brief: start from common ground, speak plainly and personally, stay grounded in reality, listen first, and keep it constructive.

1. Start with shared values

Thinking about AI risk can often feel morbid, but it comes with an unexpected upside. The values we’re fighting for are fundamentally the values of ordinary people – lean into this, don’t shy away from it.

Everyone wants new technology to be reliable and beneficial: no one wants their car to blow up, or their medications to have unexpected side effects. Ultimately, AI safety is founded on that same concern about our technologies working as intended, albeit with potentially much higher stakes. Let people know where you’re coming from, rather than jumping headfirst into “the alignment problem” or concerns about “existential risk”.

2. Skip the jargon

We don’t need specialist terms to discuss AI risk. The core idea is simple, and doesn’t rely on terms that might alienate newcomers.

  1. Many experts believe that we will soon create AIs that are far smarter and more powerful than humans. 
  2. This is concerning, because there’s a good chance that we won’t be able to control what they do. 

Note how this uses everyday language, and allows us to expand if need be. Rather than “existential risk”, we can talk of “human extinction” or “worst-case outcomes”. Instead of “AI alignment”, we can discuss “keeping AIs under human control”. We’re trying to communicate an important problem, not show off.

3. Stay grounded

Avoid hyperbole. Point to real incidents that show AI safety challenges today: deepfake frauds that have tricked executives, AIs blackmailing humans despite safety training, or ChatGPT encouraging a teenage user’s suicide. The basic demands are little more than common sense: we wouldn’t drive cars without brakes, or fly in planes that haven’t been thoroughly tested. So meet people where they are, and give them the tools to understand future concerns. From here, it’s a small step to note that future systems will be far more capable, and the stakes far higher.

4. Give context, not scripts

No one likes to feel lectured, and no one wants to hear your “script”. People naturally shut down when you engage them as a ‘potential convert’ rather than a real person. Here, it can often be valuable to volunteer the story of how you got involved in AI safety. Maybe it was a specific article, maybe it was the expert warnings, or maybe something about the reality of AI ‘clicked’ after interacting with ChatGPT for the first time. It doesn’t matter exactly what your personal story is, as long as it’s genuine. One recurring theme in AI is expert disagreement – no one in AI safety has all the answers, so don’t pretend that you do. The uncertainty is part of the problem, so don’t be afraid to communicate yours.

5. Listen first

It’s not your job to preach. If you want to communicate your concerns about AI, try to learn what the other person thinks about it. Indeed, many of their concerns may provide natural entry points to sharing your perspective. Worries about unemployment are often worries about people losing meaningful control over their own lives (human disempowerment). Likewise, worries about bias may be founded on concerns that AIs won’t learn what we want them to learn (alignment), and worries about misinformation may stem from a distrust of information from AI sources (reliability and deception). All of these concerns relate to core issues in AI safety, but we won’t be able to find common ground without listening first.  

6. Keep it constructive

Try to avoid scaremongering for the sake of it. Yes, you may be (rightfully) concerned. But if you offer no source of hope, the predictable outcome is that you leave people feeling hopeless.

Fortunately, you don’t need to leave people feeling hopeless. The world is starting to take this issue seriously: the US AI Safety Institute is testing major AI models before release, Anthropic and OpenAI have agreed to government safety evaluations, thirty countries are coordinating through international AI safety institutes, and people are calling for international treaties. And you can let them know that the general public has a role to play in this debate: writing to elected representatives, and letting politicians know that they will vote on the issue if need be.

The Bottom Line: The timeline for advanced AI has compressed from decades to potentially years, making this conversation more urgent than ever. Luckily, it doesn’t require insider lingo or any special tactics; it just takes some honesty and curiosity. The more we can steer the public discourse toward open discussion about advanced AI, the better chance we have of navigating this transition safely.