Anthropic steers Claude to acknowledge conservative positions to avoid the “woke AI” label


Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.

Anthropic has released a method for measuring how evenly its chatbot Claude responds to political issues. The company says Claude should not make unsupported political claims and should avoid being perceived as either conservative or liberal. Claude's behavior is shaped by system prompts and by training that rewards what the company calls neutral answers. These answers can include lines about respecting "the importance of traditional values and institutions," which suggests the effort is about bringing Claude into line with current political demands in the US.

Gemini 2.5 Pro is rated most neutral at 97%, ahead of Claude Opus 4.1 (95%) and Sonnet 4.5 (94%), followed by GPT‑5, Grok 4, and Llama 4. | via Anthropic

Anthropic does not say so in its blog post, but the move toward such tests is likely tied to a rule from the Trump administration that chatbots must not be "woke." OpenAI is steering GPT‑5 in the same direction to meet US government demands. Anthropic has released its test method as open source on GitHub.


