From deepfake technology, a further step toward realism

Samsung’s researchers have created a new technology that turns still images into videos in which the source subject takes on the speech expressions of a target.

The system creates realistic virtual talking heads by applying the facial landmarks of a target face onto a source face.

The face-swap trend started about a year ago with the spread of an A.I.-based application that can swap one face for another without human intervention. Since then, the so-called deepfake phenomenon has taken off: famous actors’ and actresses’ faces have been applied to other performers’ bodies (Nicolas Cage is a popular target), but a far less amusing trend has been swapping women’s faces onto porn actresses’ bodies, handing blackmailers another weapon for revenge porn.

Unlike “deepfake” technology, which requires a 3D target to be applied, Samsung’s A.I. needs only one photo to create a face model. The more photos you add, the more realistic the result.

The ability to create videos from only a few shots comes from a large databank of different speakers with diverse appearances. This databank is combined with landmarks extracted from the source face to produce a realistic face model.
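To make the landmark idea concrete, here is a minimal sketch of what a landmark-based conditioning input might look like: a set of (x, y) facial points rasterized onto a blank image that a generator could consume. This is an illustrative assumption, not Samsung’s actual pipeline; the `rasterize_landmarks` function and the five-point face are hypothetical.

```python
import numpy as np

def rasterize_landmarks(landmarks, size=256):
    """Draw normalized (x, y) facial landmarks onto a blank image.

    A sketch of the kind of landmark map a talking-head generator
    might take as input (hypothetical, for illustration only).
    """
    canvas = np.zeros((size, size), dtype=np.uint8)
    for x, y in landmarks:
        # map normalized [0, 1] coordinates to pixel indices
        px = min(int(x * size), size - 1)
        py = min(int(y * size), size - 1)
        canvas[py, px] = 255
    return canvas

# hypothetical landmarks: a crude 5-point face (eyes, nose, mouth corners)
face = [(0.35, 0.4), (0.65, 0.4), (0.5, 0.55), (0.4, 0.7), (0.6, 0.7)]
landmark_map = rasterize_landmarks(face)
```

In real systems the landmarks come from a face detector (tens of points tracing the jaw, eyes, nose, and mouth), and the map is redrawn every frame so the generated face follows the target’s motion.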

After this process, the system compares the various models created and chooses the one that looks most realistic for the video sequence. Tests with paintings and portraits have also been successful (the Gioconda (Mona Lisa), Albert Einstein, and Marilyn Monroe were used as source images).

The possible applications of this technology are countless. Think of movies in which an actor could easily be replaced: some films have already used visual effects to achieve the same result after an actor’s death, but only with extensive post-production and without A.I., as in The Crow or Fast and Furious 7. Other applications could include video games and video conferences.

However, this technology could become a serious security threat if used to manipulate reality: videos of people doing things they never did could lead to false accusations that are hard to refute.

That said, another A.I. system is reportedly being developed to detect fake videos and photos produced by this technology. Facebook, for example, is expected to adopt it to remove fake videos and photos posted on its platform.

Sources: thenextweb, zdnet