The weird side of A.I.

According to this article, Amazon unveiled a strange new Alexa capability on Day 2 of its 2022 artificial intelligence conference, re:MARS. In a demo video, a young boy asked if his grandmother could finish reading him The Wizard of Oz, and Alexa said “Okay”. A few seconds later, though, the voice reading the story sounded far more human, although it wasn’t: it was an A.I.-generated replica of the voice of the boy’s dead grandmother.

The voice was created from less than a minute of recorded audio of the grandmother’s real voice, according to Rohit Prasad, senior vice president and head scientist for Alexa at Amazon, who spoke on stage.

“These attributes have become more important in these times of the ongoing pandemic when so many of us have lost someone we love”, Prasad said. “While A.I. can’t eliminate that pain of loss, it can definitely make their memories last”.

For now, this is just a tech demonstration. Beyond the fact that Alexa’s engineers framed the feature as a voice-conversion task rather than a speech-generation task, little else is known about the project. Amazon did not answer questions about how the product will work, when it will be implemented, or who will have access to it. Still, the implications of such a technology are uncanny.
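To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. Nothing below is Amazon’s code: every function is a hypothetical stand-in for a real model. The point is the shape of the two pipelines: speech generation synthesizes audio directly from text, while voice conversion takes audio that already exists (say, Alexa’s default voice reading the story) and transforms it to sound like the target speaker.

```python
# Illustrative sketch only -- these functions are hypothetical stand-ins,
# not Amazon's implementation or any real library's API.
import numpy as np

def extract_speaker_embedding(reference_audio: np.ndarray) -> np.ndarray:
    """Stand-in for a speaker encoder: maps a short reference recording
    (reportedly under a minute in the demo) to a fixed-length voice print."""
    return np.tanh(reference_audio[:128])  # placeholder math, not a real model

def text_to_speech(text: str, voice: np.ndarray) -> np.ndarray:
    """Speech GENERATION: synthesize a waveform from text, conditioned on a voice."""
    rng = np.random.default_rng(len(text))
    return rng.normal(size=16_000)  # placeholder one-second waveform

def convert_voice(source_audio: np.ndarray, voice: np.ndarray) -> np.ndarray:
    """Voice CONVERSION: keep the words and prosody of existing audio,
    swap in the target speaker's timbre."""
    return source_audio * 0.9  # placeholder transform

# A short sample stands in for the real recording of the grandmother's voice.
grandma_sample = np.random.default_rng(1).normal(size=16_000)
voice_print = extract_speaker_embedding(grandma_sample)

story = "Dorothy lived in the midst of the great Kansas prairies..."

# Generation framing: produce the grandmother's reading directly from the text.
generated = text_to_speech(story, voice_print)

# Conversion framing (how the demo was reportedly built): render the story in
# the default voice first, then transform that audio toward the voice print.
default_reading = text_to_speech(story, np.zeros(128))
converted = convert_voice(default_reading, voice_print)
```

In the conversion framing, most of the naturalness comes from the existing synthetic reading, which is presumably part of why so little of the grandmother’s audio was needed.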

“We are unquestionably living in the golden era of A.I., where our dreams and science fiction are becoming a reality”, Prasad said at the conference. Even so, it’s hard to say whether this will prove helpful or dangerous.

Whether something like this is good or bad is rarely clear-cut, according to Maura Grossman, a research professor at the University of Waterloo’s School of Computer Science and an authority on A.I. ethics.

“You can see the risk of deep fakes, somebody taking your voice, and all of a sudden you’re telling me to transfer money from your account or whatever”, she explains. “So I think you have to ask, ‘What could go right with this and, the more important question, what could go wrong with this technology?'”.

Still, there may be positive aspects too. As Grossman explained, a close friend of hers visits a medium to communicate with her deceased husband. Grossman was initially horrified, fearing her friend would be duped or suffer some sort of psychological trauma, but the medium actually helped her come to terms with the loss. And this is not the first time A.I. has been used to “bring back” the dead: MyHeritage’s Deep Nostalgia was one of the first services to use A.I. to animate old family photos, including photos of deceased relatives. Imagine blending that kind of video with cloned audio: the emotional impact would be significant.

“Was it such a bad thing to visit a medium every other week and converse with her deceased husband?”, Grossman asks.

Not everyone has the money to see a psychic, or even believes in the idea (and a medium is, in any case, something different from a known artificial algorithm). An A.I. that can convincingly imitate a deceased loved one could offer a similar experience to anyone who is grieving, but it is still a distortion of reality. It might help in the short term, yet it could also manipulate people’s feelings and warp their perception of what is real; some would risk clinging to an addictive simulation.

While many would love to try this feature, for others the invention feels like something out of the dystopian sci-fi series Black Mirror. That is exactly the premise of the episode “Be Right Back”, in which a woman hires a corporation to create an A.I. clone of her deceased boyfriend. At first he seems a faithful replica, but as she notices more and more small discrepancies in his behavior, she grows increasingly frustrated and eventually locks him in the attic.

In a similar vein, Eugenia Kuyda, a journalist turned tech entrepreneur, used a Google neural network to build a bot from the images, texts, and audio of a close friend who had died in a car accident, reconstructing a replica of him so that she could grieve. That side project at her company Luka evolved into the Replika app, which lets anyone create a “virtual A.I. companion” by entering personal information.

Microsoft went further: in 2020 it was granted a patent for a chatbot that can imitate a specific person using their images, voice data, social media posts, electronic messages, and written letters. According to the patent, the person being imitated could be a past or present entity, and the bot could be paired with a 2D or 3D model of them.

It appears, though, that Microsoft has no plans to actually build it. The patent was filed in 2017, so it “predates the A.I. ethics studies we perform now”, said Tim O’Brien, who works on ethical A.I. advocacy at Microsoft, after the patent made headlines in 2021. And yes, he added, it is disturbing.

A further problem with using artificial intelligence to recreate specific people is raised by Angeliki Kerasidou, an associate professor at the Ethox Centre of the University of Oxford.

“A person does not stop growing”, she says. “The bot will forever be static, even if it can learn from interactions. We’re not talking about some kind of superintelligence here. So we might be able to reproduce certain behaviors, but I don’t think we will be able to reproduce the person themselves”.

However, this limitation could be a positive thing. Realizing that a bot is static can help people understand that they are talking not to a real person but to a simulation, and that awareness may keep them from becoming emotionally trapped by something unreal, an attachment that could turn into a pathological relationship. Imagine if an A.I. could perfectly replicate a loved one: how could you ever do without him or her?