AI could alter the music scene the way the mp3 did years ago

As explained here, producers could employ AI to change their vocals into the sound of another artist’s voice, which could be yet another giant step forward for AI-powered music production. Entrepreneur and tech influencer Roberto Nickson shared a video on Twitter in which he used an AI-generated Kanye West voice in place of his own to record eight lines over a track that he found on YouTube.

The outcomes are remarkably realistic. There are one or two words that sound slightly off early in the song, but the majority of the verse sounds extremely accurate and could easily persuade the average listener of its authenticity. But it’s important to note that Kanye’s words and delivery are better, and AI can’t quite replicate those two things yet.

Nickson also employed the technology to produce other versions of well-known songs, putting AI Kanye on the vocals for versions of Justin Bieber’s Love Yourself, Frank Ocean’s Nights, and Dr. Dre’s Still D.R.E.

Nickson followed a YouTube tutorial on using Google Colab to access an existing AI model trained on Kanye’s voice, allowing him to mimic the rapper’s vocal timbre. The music industry will certainly experience significant changes once this kind of technology is streamlined and integrated into the DAW.

“All you have to do is record reference vocals and replace it with a trained model of any musician you like”, Nickson says. “Keep in mind, this is the worst that AI will ever be. In just a few years, every popular musician will have multiple trained models of them”.
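The workflow Nickson describes (record a reference vocal, extract its performance features, then resynthesise it through a model trained on the target artist's voice) can be sketched roughly as follows. This is an illustrative toy, not the actual Colab tool from the tutorial: `extract_features` and `TrainedVoiceModel` are hypothetical stand-ins showing only the data flow.

```python
# Toy sketch of the voice-conversion workflow described above:
# 1) record reference vocals, 2) extract performance features
# (loudness here; a real system also extracts pitch and phonetics),
# 3) resynthesise with a model trained on the target artist's voice.
# All names are hypothetical stand-ins, not a real tool's API.

import numpy as np

def extract_features(audio: np.ndarray, sr: int) -> dict:
    """Per-frame loudness (RMS) over 10 ms frames of the reference vocal."""
    frame = sr // 100                       # samples per 10 ms frame
    n = len(audio) // frame
    frames = audio[: n * frame].reshape(n, frame)
    return {"loudness": np.sqrt((frames ** 2).mean(axis=1)), "frame": frame}

class TrainedVoiceModel:
    """Stand-in for a model trained on a target singer's voice.

    Real converters use neural synthesis to produce audio in the target
    timbre; this stub just emits a fixed tone shaped by the reference's
    loudness contour, to show where the trained model slots in."""

    def __init__(self, pitch_hz: float = 220.0):
        self.pitch_hz = pitch_hz            # pretend target "voice"

    def convert(self, features: dict, sr: int) -> np.ndarray:
        frame = features["frame"]
        t = np.arange(len(features["loudness"]) * frame) / sr
        tone = np.sin(2 * np.pi * self.pitch_hz * t)
        envelope = np.repeat(features["loudness"], frame)
        return tone * envelope              # "converted" vocal

# Usage: one second of "reference vocal" (noise for demo) in, converted audio out.
sr = 16000
reference_vocal = np.random.default_rng(0).uniform(-1, 1, sr)
features = extract_features(reference_vocal, sr)
converted = TrainedVoiceModel().convert(features, sr)
```

In a real pipeline the trained model is the expensive part; the producer's own recording contributes only the performance (timing, pitch, dynamics), which is exactly why "replace it with a trained model of any musician" works as a description.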

Although technically fascinating, it’s unclear what the legal implications of this form of style transfer are. The right of publicity, which is protected in a number of nations and is described as “the rights for an individual to control the commercial use of their identity”, is likely to forbid artists from employing AI clones of another artist’s voice in commercially produced music without permission.

A few months ago, rapper Yung Gravy was accused of violating Rick Astley’s right of publicity by imitating Astley’s vocal delivery in the song Betty (Get Money). The lawsuit cites a 1988 case in which Ford Motor Company was successfully sued for using an impersonator to sound like Bette Midler in an advertisement.

Nickson correctly notes in the replies to his Twitter thread that many regulatory and legal systems would need to be revised to accommodate this technology, and that we must decide how to safeguard artists.

As this technology is integrated into the DAW, we can picture a time when musicians sell their own voice models to fans who want to employ them in AI-powered plugins to replicate the voice in their own tracks. A rapper or vocalist may appear on a thousand tracks in a day without ever going to a studio or saying a word. This might be a new form of commerce or perhaps a way to work remotely.

As a Twitter user commented, this could be a moment of significance comparable to the rise of sampling in hip-hop. “Music was thought to be singing and playing instruments until technology allowed you to make music out of other existing music”, he continues. “That’s now happening again, but on an atomic scale. It’s about to be god mode activated for everyone”.

As with all AI developments, we face both the possibility of creativity and the possibility of abuse. Once the technology is convincing enough and widely available, the market could be so flooded with fake AI voices that it becomes impossible to tell what’s real and what’s fake.

“Things are going to move very fast over the next few years”, Nickson comments. “You’re going to be listening to songs by your favorite artists that are completely indistinguishable, you’re not going to know whether it’s them or not”.

“The possibilities are endless, but so are the dangers,” Nickson continues in a separate tweet. “These conversations need to be happening at every level of society, to ensure that this technology is deployed ethically and safely, to benefit all of humanity”.

In a TED Talk shared last year, musician, producer, and academic Holly Herndon had another artist sing through an AI model trained on her own voice in real time.

A recent viral example of AI in music is an Oasis album made with AI.

The eight-song album, titled ‘The Lost Tapes Volume I’, is the brainchild of Hastings indie band Breezer. Tired of waiting for the iconic Britpop group to reform, Breezer created their own 30-minute album of Oasis-style songs in the vein of the band’s 1995–1997 heyday, crediting it to AIsis. The lyrics and music were written and recorded by Breezer, but Liam Gallagher’s vocals were created entirely using artificial intelligence.

AI is once again going to change the way we produce art, this time in music. Many producers worried when their music began spreading across the internet after the mp3 was born; now they have even more reason to be afraid, since it is their voices that will be stolen. Soon we will see many unauthorized tracks sung by famous artists without their consent. While it can be amazing to listen to new songs by popular artists, especially those who have died, once AI produces flawless results it could become hard to distinguish real artists from fake ones, or tracks produced with permission from those made without it. Holograms and AI voices could let an artist sing forever, and new virtual instruments (VSTs) could legitimately grant everyone the right to produce music with famous artists’ voices.