Can art be just an algorithm?

AI image generators are spreading across the internet, and they make it easy to create “art”. However, not everyone using these tools is an artist, which raises a further problem: real artists are worried that their style can be copied by AI and reused by other users.

For example, Greg Rutkowski is an artist with a unique style, renowned for fantasy battle scenes and dragon paintings that have appeared in Dungeons & Dragons and in fantasy video games. “Really rare to see a similar style to mine on the internet”, he remarked. Yet if you look up his name on Twitter, you’ll see a ton of images that weren’t created by him but imitate his style.

Though he has never personally used the technology, Rutkowski has become one of the best-known names in AI art. People are using AI image generators, which create original artwork in seconds after a user enters a few words as instructions, to make thousands of works that resemble his.

On one image generator, Stable Diffusion, Rutkowski’s name has been used to generate almost 93,000 AI images, making it a far more popular prompt in the software than Picasso, Leonardo da Vinci, or Vincent van Gogh.

“I feel like something’s happening that I can’t control”, the Polish artist said.

Rather than assembling collages from stock pictures, AI image generators produce original images. The experience is similar to searching Google Images, except that the results are entirely new pieces of art guided by the user’s text prompt. One of the most popular uses is to enter the name of an artist and produce something that reflects their style.
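To make that concrete, here is a minimal sketch of how such a generator is typically driven from code, assuming the open-source Hugging Face diffusers library and a publicly released Stable Diffusion checkpoint; the model id, prompt, and settings below are illustrative and not taken from any specific tool mentioned in this article.

```python
# Minimal text-to-image sketch using the open-source `diffusers` library.
# Assumptions: a CUDA-capable GPU and the publicly released Stable Diffusion
# v1.5 checkpoint; the model id and prompt are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed model id, for illustration
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The text prompt is the only "instruction" the user supplies; appending an
# artist's name to it is exactly the practice the article describes.
prompt = "a dragon battle over a burning castle, epic fantasy illustration"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated.png")
```

A few words are enough: the model turns the prompt into a brand-new image, which is why appending an artist’s name steers the result toward that artist’s look.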

“People are pretending to be me”, Rutkowski said. “I’m very concerned about it; it seems unethical”.

While not inherently opposed to AI-generated art, Swedish artist and designer Simon Stålenhag expressed concern about how some individuals are employing the new technology.

“People are selling prints made by AI that have my name in the title”, he said.

He thinks AI-generated images are beyond artists’ control.

Rutkowski, who creates art both with traditional oil paints on canvas and with digital tools, is concerned that his distinctive style, which has helped him earn contracts with Sony and Ubisoft, might become obsolete amid the boom in imitative artwork.

“We work for years on our portfolio”, Rutkowski said. “Now suddenly someone can produce tons of images with these generators and sign them with our name”.

“The generators are being commercialized right now, so you don’t know exactly what the final output will be of your name being used over the years”, he said.

“Maybe you and your style will be excluded from the industry because there’ll be so many artworks in that style that yours won’t be interesting anymore”.

AI image generators are being used by more and more consumers. OpenAI, co-founded by Elon Musk in 2015, released its DALL-E image generator to the general public in September. According to OpenAI, the service had more than 1.5 million users when it became available to everyone.

According to Liz DiFiore, head of the Graphic Artists Guild, a group that represents designers, illustrators, and photographers across the US, the ease with which AI can replicate styles might have a negative financial impact on artists.

“If an AI is copying an artist’s style and a company can just get an image generated that’s similar to a popular artist’s style without actually going to artists to pay them for that work, that could become an issue”.

Unfortunately, all the law can currently do to protect artists is prevent others from copying their actual works of art; it does not protect a style.

The policies of some AI image generators, such as DALL-E, Midjourney, and Stable Diffusion, restrict how customers can use their services. For instance, OpenAI forbids the use of images of politicians or celebrities. In addition, all three applications filter content such as nudity and gore to prevent users from producing “harmful content”.

A Stable Diffusion spokesperson, however, said the company was developing an opt-out system for artists who do not want AI programs to be trained on their work.

The spokesperson added that an artist’s name is only one component in a varied set of instructions to the AI model, which develops a style of its own, distinct from any individual artist’s.

While Midjourney didn’t respond, OpenAI representatives said the company would seek artists’ viewpoints as it expanded access to DALL-E, but did not describe any safeguards in place to protect existing artists.

AI image generators are “trained” by acquiring data from extensive databases of captions and images. OpenAI representatives said that DALL-E’s training data was made up of both freely accessible sources and images that the company had licensed. Stable Diffusion representatives said that the application gathers its data and images through web crawls. According to Rutkowski, living artists should have been left out of the datasets used to train the generators.
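To show what an exclusion of that kind might look like in practice, here is a purely hypothetical sketch of a pre-training filter that drops caption-image pairs mentioning opted-out artists. The record layout, field names, and artist list are invented for illustration and do not describe any vendor’s actual pipeline.

```python
# Hypothetical sketch of an artist opt-out filter applied to scraped
# caption/image records before training. All names and fields are invented
# for illustration; this is not any company's real data pipeline.
from typing import Iterable, Iterator

OPT_OUT_ARTISTS = {"greg rutkowski", "simon stalenhag"}  # example entries

def filter_training_records(records: Iterable[dict]) -> Iterator[dict]:
    """Yield only records whose caption mentions no opted-out artist."""
    for record in records:
        caption = record.get("caption", "").lower()
        if not any(name in caption for name in OPT_OUT_ARTISTS):
            yield record

# Example: each record pairs an image URL with its scraped caption.
sample = [
    {"url": "https://example.com/a.jpg", "caption": "castle at dusk"},
    {"url": "https://example.com/b.jpg",
     "caption": "dragon fight in the style of Greg Rutkowski"},
]

print(len(list(filter_training_records(sample))))  # prints 1; the second record is dropped
```

A simple name match like this would miss misspellings and unlabeled images by the same artist, which hints at why a robust opt-out is harder than it sounds.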

The generators are purposefully “anti-artist”, according to another designer and illustrator, RJ Palmer, who said they are “explicitly trained on current working artists”. On a website called Have I Been Trained, founded by the German sound artist Mat Dryhurst and the American sound artist Holly Herndon, artists can find out whether their work has been used to train AI programs.

Stålenhag acknowledged that it would have been good to be asked for permission to be included in the training data, but claimed that this was an unavoidable side effect of posting art online.

It’s not clear whether copyright law will protect the artwork that these AI programs generate.

Because of the ambiguity around copyright and commercial use, certain stock-image libraries, like Getty Images, have refused to carry AI-generated artwork.

The US Copyright Office has said that works generated purely by artificial means lack the human authorship required to support a copyright claim, stating that it would not knowingly issue a registration for a work alleged to have been made purely by machine with artificial intelligence.

Stable Diffusion representatives said that while generated images may be used commercially, the company could not say whether they would be subject to copyright, adding that each country’s legislature would have to decide the question.

A spokesperson from OpenAI said: “When DALL-E is used as a tool that assists human creativity, we believe that the images are copyrightable. DALL-E users have full rights to commercialize and distribute the images they create as long as they comply with our content policy”.

Giles Christopher, a commercial photographer based in London who specializes in food and drink, uses DALL-E and other AI image generators to experiment with portraits and to create backgrounds for some of his commercial images.

“I’ve come out with images that you wouldn’t question are photographs”, he said. “Some of the arguments I’ve had from photographers are that the images are looking too good”.

He believes that artists should try to incorporate AI into their work.

“I have friends in the industry who will storm out of the room if I even bring up using AI”, he said. He’s keeping an open mind, though.

We thought that art would be the last field to be replaced by AI, but maybe we were wrong. These new AI image generators are quickly changing the way we can produce art. Still, artists have the right to safeguard their works and styles from being copied. And, at least for the moment, no AI can replicate physical pieces of art on canvas, whether oil or acrylic paintings.

For non-experts, having the chance to produce some kind of art that mimics another artist’s style is amazing; for artists themselves, though, it could be damaging if we don’t find a way to safeguard them, because eventually nobody will be an artist anymore, unless we decide that real art is only non-digital.