A serious problem for video and information
A new deepfake technique lets users edit the text transcript of a video to change the words the subject says, making the subject's mouth move exactly as if it were speaking the edited text.
Scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research demonstrated how easily the speech in a video can be edited to create fake footage.
The process combines several techniques:
- Scanning the target video to isolate the phonemes spoken by the subject;
- Matching these phonemes with visemes (the facial expressions that correspond to each sound);
- Building a 3D model of the lower half of the subject's face.
When someone edits the text transcript of the video, the software combines the subject's phonemes, the matching visemes, and the 3D face model to render new footage in which the subject appears to speak the edited text.
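The phoneme-to-viseme step above can be sketched in a few lines. This is a toy illustration, not the researchers' code: the mapping table, function name, and viseme labels are all hypothetical, and a real system maps roughly forty phonemes onto a much smaller set of visemes learned from the footage.

```python
# Illustrative sketch only: the table and labels below are assumptions,
# not the actual mapping used by the Stanford/Adobe system.

# A tiny, simplified phoneme-to-viseme table. Several phonemes share one
# viseme because they look the same on the lips (e.g. p, b, m).
PHONEME_TO_VISEME = {
    "p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
    "f": "lip_to_teeth", "v": "lip_to_teeth",
    "aa": "mouth_open", "iy": "mouth_spread",
}

def visemes_for_edit(edited_phonemes):
    """Map each phoneme of the edited transcript to a viseme, so that
    matching mouth shapes can be looked up in the source footage."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in edited_phonemes]

# Example: the edited word "map" broken into the phonemes m, aa, p.
print(visemes_for_edit(["m", "aa", "p"]))
# prints ['lips_closed', 'mouth_open', 'lips_closed']
```

The key idea this sketch captures is that the edit never needs new video of the subject: every required mouth shape already appears somewhere in the original footage and only has to be retrieved and blended onto the 3D face model.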
However, this technology has some limitations. At the moment it works only with talking-head-style videos and requires about 40 minutes of input data. The algorithm also cannot change the tone or mood of the voice, and any occlusion of the mouth while the subject is speaking can throw off the process.
The potential harms of this technology are worrying, because it will become easy to falsify personal statements or slander individuals by making them say things they never said. The proposed remedies are not encouraging: the researchers suggest presenting A.I.-edited videos as such, or marking them with a watermark, but these measures sound weak. Fake videos are often passed off as real precisely to spread fake news, and a watermark is even easier to remove. Moreover, even when fake news can be simply debunked, debunking does not stop its spread, because many people want to believe lies that fit their preconceptions. Perhaps another A.I. could counter this harmful phenomenon by recognizing fake videos.
However, every cloud has a silver lining: this technology may also have beneficial uses, such as fixing a misspoken line in a movie without reshooting, or producing more accurate dubs in other languages. Still, the cons seem to outweigh the pros.
Deepfakes could pose a new danger to our reputations, but at the same time, awareness of their existence could make people less inclined to believe everything they see and more critical of it.
Source: The Verge