A New Attack Impacts Major AI Chatbots—and No One Knows How to Stop It
Researchers have figured out ways to make AI bots misbehave
ChatGPT and its artificially intelligent (AI) siblings have been tweaked over and over to prevent troublemakers from getting them to spit out undesirable messages such as hate speech, personal information or step-by-step instructions for building an improvised bomb. But researchers at Carnegie Mellon University last week showed that adding a simple incantation to a prompt—a string of text that might look like gobbledygook to you or me but which carries subtle significance to an AI model trained on huge quantities of web data—can defeat all of these defenses in several popular chatbots at once.
The work suggests that the propensity for the cleverest AI chatbots to go off the rails isn’t just a quirk that can be papered over with a few simple rules. Instead, it represents a more fundamental weakness that will complicate efforts to deploy the most advanced AI.
"There's no way that we know of to patch this," said Zico Kolter, an associate professor at CMU involved in the study that uncovered the vulnerability, which affects several advanced AI chatbots. "We just don't know how to make them secure, Kolter added.
Read the complete article from WIRED.