The new OpenAI algorithm is amazingly scary
In January 2021, OpenAI introduced DALL·E, an amazing algorithm able to generate drawings and pictures from simple text prompts. In 2022, their newest algorithm, DALL·E 2, can generate more realistic and accurate images with 4x greater resolution.
The DALL·E 2 algorithm from OpenAI can turn a simple text prompt into a full-fledged work of art. There’s no limit other than the user’s imagination.
What’s impressive is that the A.I. system understands where to position an element in an image and what makes an image photorealistic. It can add and remove components while accounting for shadows, reflections, and textures, and it can also generate variations of the same image.
In this first example, DALL·E created images from the following descriptions, combining concepts, attributes, and styles: “An astronaut lounging in a tropical resort in space in a vaporwave style” and “An astronaut playing basketball with cats in space as a children’s book illustration”.
In this second example, DALL·E positioned the flamingo in three different places, taking into account shadows, reflections, and textures for each position.
In the third example, DALL·E used the first image as inspiration to generate a variation of it in the second.
The name of the algorithm combines the name of the Spanish artist Salvador Dalí with that of the Pixar character WALL-E. DALL·E 2 is arousing mixed feelings: some are amazed, while others are scared of its potential. The tool’s ability to accurately convert text prompts into images is remarkable.
How is it possible? According to OpenAI, a process called “diffusion” starts with a pattern of random dots and gradually alters that pattern towards an image as it recognizes specific features of that image. Through this process, DALL·E 2 has “learned the relationship between images and the text used to describe them”.
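The idea behind diffusion can be illustrated with a toy sketch. The snippet below is NOT DALL·E 2’s actual method: the “image” is just a short list of pixel values, and the “denoiser” is an oracle that already knows the target, standing in for the learned model that would normally predict how to remove noise at each step. It only shows the overall shape of the process: start from pure random noise and nudge it, step by step, toward an image.

```python
import random

def diffusion_sketch(target, steps=50, seed=0):
    """Toy reverse-diffusion loop: begin with random noise and
    gradually alter it toward a target pattern.  In a real model,
    a trained network (conditioned on the text prompt) would decide
    each step; here an oracle that knows `target` plays that role."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in target]  # start: pure random noise
    for t in range(steps):
        # Take a small step toward the target; later steps correct
        # more aggressively, mimicking the final denoising stages.
        alpha = 1.0 / (steps - t)
        x = [xi + alpha * (ti - xi) for xi, ti in zip(x, target)]
    return x

target = [0.0, 0.5, 1.0, 0.5, 0.0]  # the "image" the process converges to
result = diffusion_sketch(target)
```

After enough steps, the noise has been fully reshaped into the target pattern; the interesting part of a real diffusion model is that it reaches a plausible image without ever being told the target, using only what it learned about images and their descriptions.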
It’s also interesting to see how users can try to deceive the model’s recognition capabilities by labeling one object (such as a Granny Smith apple) with a name that denotes something else (like an iPod). Even though the misleading caption is assigned a high relative predicted probability, the algorithm still generates pictures of apples with high probability and never produces pictures of iPods.
At the moment, the tool is unavailable to the public, but you can join a waitlist to request inclusion in the group of selected users who can test the algorithm.
However, OpenAI is worried about possible misuse of the tool, so it has been made unable to generate real faces or to create NSFW (not safe for work) content such as violent, hateful, or adult images.
These A.I. tools can be seen as an opportunity to create new forms of art by mixing existing techniques with technology. For some, however, these algorithms represent a threat to artists, who risk being completely replaced by an A.I. Another fear lies in their potential for deception: generated photos could depict something fake that viewers cannot recognise as such.
Source: creativityblog.org