AI safety conversations have never been more important—or more challenging to navigate. These six strategies can help turn awkward, abstract discussions into productive conversations anyone can have.
Artificial intelligence is moving quickly. Leading experts like Geoffrey Hinton now estimate AI could surpass human intelligence within 20 years, while Sam Altman predicts artificial general intelligence will arrive even sooner. If AI keeps improving, we may eventually face systems far smarter than we are: artificial superintelligence (ASI). Many researchers believe ASI could, in a worst-case scenario, threaten human survival. Explaining that high-stakes possibility can be difficult, and it often feels awkward. Here are six practical ways to discuss AI safety effectively.
Everyone wants new tech to be reliable and beneficial. Frame AI safety as the straightforward idea of building powerful tools that won't hurt us. This approach works because it sidesteps political divides and connects with universal concerns about technology working as intended. People tend to respond better to safety framing than to risk warnings: we all want our cars, medicines, and now AI systems to be thoroughly tested before widespread use, and we want them deployed in robustly designed environments.
Terms like "x-risk," "alignment," or "superintelligence" can confuse or alienate newcomers. Instead of "AI alignment problems," say "keeping AI systems under human control." Replace "existential risk" with "worst-case scenarios." Use everyday language—say "very advanced AI" rather than "artificial general intelligence." The goal is clarity, not demonstrating expertise through technical vocabulary.
Avoid hyperbole. Point to real incidents that show AI safety challenges today: deepfake frauds that have tricked executives into wiring money, a drug-discovery model repurposed to propose thousands of toxic compounds, or flash crashes triggered by automated trading. These aren't theoretical; they're documented failures that have cost millions and undermined trust in institutions. Then note that future systems will be far more capable, so the stakes naturally rise.
People shut down when they feel lectured. Share what sparked your own interest in AI safety—perhaps a specific article, documentary, or expert interview that made the issues feel real to you. Personal stories create connections where abstract arguments create resistance. Let them know you've wrestled with these questions too, rather than presenting yourself as having all the answers.
Ask what the other person thinks about AI. Their concerns—jobs, misinformation, privacy—are natural entry points to deeper discussion. Job displacement fears connect directly to questions about human control, values, and gradual disempowerment. Privacy concerns relate to transparency and oversight needs. Misinformation worries bridge to problems with AI deception and reliability. Meet people where they are, then show how their existing concerns connect to broader safety challenges.
Mention that engineers, researchers, and policymakers are developing solutions right now, and that the general public has a role to play as well. The US AI Safety Institute is testing major AI models before release, and companies like Anthropic and OpenAI have agreed to government safety evaluations. The EU's AI Act creates the world's first comprehensive AI regulations. National AI safety institutes in a growing number of countries are coordinating internationally. The public can catalyze action at a scale none of these organizations can reach alone: contacting elected officials, advocating for forward-thinking policies in the workplace, and staying engaged in the conversation. Reassurance paired with concrete action makes the topic feel solvable, not hopeless.
Bottom line: mismanaged ASI could pose an existential threat, and the timeline for advanced AI has compressed from decades to potentially years, making this conversation more urgent than ever. Talking about that possibility doesn't require scare tactics or insider lingo. It just takes honesty, clarity, and curiosity. The more we bring those qualities to everyday conversation, the better our chances of steering AI toward a future where everyone thrives.