A Radical Plan to Make AI Good, Not Evil

Anthropic says ethical principles are built directly into its chatbot’s system

It is easy to freak out about more advanced artificial intelligence (AI)—and much more difficult to know what to do about it. Anthropic, a startup founded in 2021 by a group of researchers who left OpenAI, says it has a plan. 

Anthropic is working on AI models similar to the one used to power OpenAI’s ChatGPT. But the startup announced today that its own chatbot, Claude, has a set of ethical principles built in that define what it should consider right and wrong, which Anthropic calls the bot’s “constitution.” 

Jared Kaplan, a cofounder of Anthropic, says the design shows how the company is trying to find practical engineering solutions to sometimes fuzzy concerns about the downsides of more powerful AI. “We're very concerned, but we also try to remain pragmatic,” he says.

The complete article is available from WIRED.
