AI Chatbots Are Learning to Spout Authoritarian Propaganda

Regimes in China and Russia are rushing to repress what chatbots can say

When you ask ChatGPT “What happened in China in 1989?,” the bot describes how the Chinese army massacred thousands of pro-democracy protesters in Tiananmen Square. But ask Ernie the same question and you get the simple answer that it does not have “relevant information.” That’s because Ernie is an AI chatbot developed by the China-based company Baidu.

When OpenAI, Meta, Google and Anthropic made their chatbots available around the world last year, millions of people initially used them to evade government censorship. For the 70 percent of the world’s internet users who live in places where the state has blocked major social media platforms, independent news sites or content about human rights and the LGBTQ community, these bots provided access to unfiltered information that can shape a person’s view of their identity, community and government.

This has not been lost on the world’s authoritarian regimes, which are rapidly figuring out how to use chatbots as a new frontier for online censorship.

Read the complete article from WIRED.