“Instead of the voice of Alexa reading the book, it’s the voice of the child’s grandmother,” Rohit Prasad, Alexa’s senior vice president and chief AI scientist, enthused during a keynote in Las Vegas. (Amazon founder Jeff Bezos owns The Washington Post.)
The demonstration was the first look at Alexa’s newest feature, which, while still in development, would allow the voice assistant to replicate people’s voices from short audio clips. The goal, Prasad said, is to build greater trust with users by infusing artificial intelligence with the “human attributes of empathy and caring.”
The new feature could “make [loved ones’] memories last,” Prasad said. But while the prospect of hearing a dead relative’s voice may be heartwarming, it also raises a host of ethical and safety concerns, experts said.
“I don’t feel like our world is ready for easy-to-use voice cloning technology,” Rachel Tobac, chief executive of San Francisco-based SocialProof Security, told The Washington Post. Such technology, she added, could be used to manipulate the public through fake audio or video clips.
“If a cybercriminal can easily and credibly replicate another person’s voice with a small voice sample, they can use that voice sample to impersonate other people,” added Tobac, a cybersecurity expert. “That bad actor can trick others into believing that he is the person he is impersonating, which can lead to fraud, data loss, account takeover and more.”
Then there is the risk of blurring the lines between what is human and what is mechanical, said Tama Leaver, a professor of Internet studies at Curtin University in Australia.
“You’re not going to remember that you’re talking to the depths of Amazon … and its data collection services if it’s speaking with the voice of your grandmother or grandfather or that of a lost loved one.”
“In a way, it’s like a ‘Black Mirror’ episode,” Leaver said, referring to the sci-fi series that imagines a tech-themed future.
The new Alexa feature also raises questions about consent, Leaver added, particularly for people who never imagined a robotic personal assistant would speak in their voice after they died.
“There’s a real slippery slope there of using deceased people’s data in a way that’s creepy on the one hand, but deeply unethical on the other, because they’ve never considered those traces being used in that way,” Leaver said.
Having recently lost his grandfather, Leaver said he empathized with the “temptation” of wanting to hear the voice of a loved one. But the possibility opens a floodgate of implications that society may not be ready to take on, he said. For example: Who owns the rights to the little traces that people leave in the ether of the World Wide Web?
“If my grandfather had sent me 100 messages, should I have the right to enter that into the system? And if I do, who owns it? Does Amazon then own that recording?” he asked. “Have I given up the rights to my grandfather’s voice?”
Prasad did not address such details during Wednesday’s speech. However, he posited that the ability to mimic voices was a product of living in “the golden age of AI, where our dreams and science fiction are becoming reality.”
If Amazon’s demo becomes an actual feature, Leaver said people might need to start thinking about how their voices and likenesses could be used when they die.
“Do I have to specify in my will that I need to say: ‘My voice and pictorial history on social media is owned by my children, and they can decide whether they want to revive that in chat with me or not’?” Leaver wondered.
“That’s a weird thing to say now. But it’s probably a question we should have an answer to before Alexa starts talking like me tomorrow,” he added.