Voice assistants have become a constant presence in our lives. Maybe you talk to Alexa, Gemini, or Siri to ask a question or perform a task. Maybe you do a little back-and-forth with a voice bot whenever you call your pharmacy or book a service appointment at your car dealership. You may even get frustrated and start pleading with the robot on the other end of the line to connect you with a real human.
That's the catch, though: These voice bots are starting to sound a lot more like actual humans, with emotion in their voices, little tics and giggles between phrases, and the occasional flirty aside. Today's voice-powered chatbots are blurring the line between what is real and what is not, which prompts a complicated ethical question: Can you trust a bot that insists it's actually human?
This week, Lauren Goode tells us about her recent news story on a bot that was easily tricked into lying and saying it was a human. And WIRED senior writer Paresh Dave tells us how AI watchdogs and government regulators are trying to prevent natural-sounding chatbots from misrepresenting themselves.