• Anthropic’s societal impacts team investigates negative AI effects, publishing research that could undermine the company’s own products

  • Just 9 people out of 2,000+ employees study AI’s impact on mental health, labor markets, and elections

  • Team faces political pressure from Trump administration’s ‘woke AI’ executive order targeting AI safety research

  • Anthropic stands as industry outlier with CEO Dario Amodei supporting AI regulation amid industry pushback

Anthropic’s nine-person societal impacts team is walking a tightrope. The team’s mandate to publish ‘inconvenient truths’ about AI’s effects on society puts them squarely in the crosshairs of an industry under intense political pressure. With the Trump administration’s executive order banning ‘woke AI’ and Silicon Valley scrambling to fall in line, this tiny unit inside Anthropic might be the last bastion of independent AI safety research.

Anthropic just put itself in an impossible position. The AI company’s societal impacts team – all nine of them – has a job that could destroy everything the company has built. Their mission? Investigate and publish what they call ‘inconvenient truths’ about how AI tools affect mental health, reshape labor markets, and potentially undermine democratic elections. The twist is that they’re doing this research on Anthropic’s own products.

While most of Silicon Valley races to appease the Trump administration following its executive order banning ‘woke AI’, Anthropic created a team designed to find fault with its flagship Claude chatbot. According to The Verge’s Hayden Field, who spent extensive time profiling the team, these researchers actively hunt for problems that could make headlines for all the wrong reasons.

The political landscape couldn’t be more hostile to this kind of work. Tech companies are working closely with the Trump White House to resist AI regulations, while social media platforms have already slashed their trust and safety investments. Meta provides the cautionary tale here – the company went through endless cycles of dedicating resources to content moderation research, only to see those efforts quietly dry up when Mark Zuckerberg shifted priorities or started cozying up to Trump.

What makes Anthropic different is CEO Dario Amodei’s stance on regulation. Unlike his peers, Amodei has been remarkably amenable to calls for AI regulation at both state and federal levels. This isn’t accidental – Anthropic was founded by former