The danger of Generative AI

Published:

AI could use your personal data to influence your decisions

Generative AI is a type of artificial intelligence that can produce various types of content. Generative AI refers to a category of artificial intelligence algorithms that generate new outputs based on the data they have been trained on. Unlike traditional AI systems that are designed to recognize patterns and make predictions, generative AI creates new content in the form of images, text, audio, and more.

Before getting into details about the risks of these kinds of AI, here are some warnings raised around them.

  1. Jobs: Now, generative AI is capable of producing human-level outputs like scientific reports, essays, and artwork. Therefore, it could totally change the work landscape.
  2. Fake content: Generative AI is currently capable of producing content of human quality on a large scale, such as false and deceptive articles, essays, papers, and films. Although the issue of misinformation is not new, generative AI will make it possible to produce it in unprecedented quantities. It’s a big risk, but fake content can be detected by (a) requiring watermarking technologies that identify AI content at the time of generation, or (b) by implementing AI-based countermeasures that are trained to recognize AI content after the fact.
  3. Sentient machines: Several researchers are concerned that when AI systems are developed, they will eventually reach a point where they have a “will of their own,” act in ways that are inimical to human interests, and even pose a threat to human existence. This is a real long-term risk, in the book Arrival Mind, a “picture book for adults” is described this risk. But without significant structural advancements in technology, contemporary AI systems won’t spontaneously develop sentience. Hence, even if the industry should pay attention to this risk, it’s not the most pressing concern at this time.

Most safety experts, according to this article, as well as politicians, err when they assume that generative AI is primarily used to produce traditional content at scale. The more crucial concern is that generative AI will unleash a completely new form of media that is highly personalized, fully interactive, and potentially much more manipulative than any form of targeted content we have faced to date.

>>>  Artificial Networks learn to smell

The most dangerous aspect of generative AI is not its ability to mass produce fake news and videos, but rather its ability to generate adaptable, interactive material that is tailored to the needs of each user to have the greatest possible persuasive effect. In this context, targeted promotional content that is generated or changed in real-time to maximize influence goals based on personal information about the receiving user is referred to as interactive generative media.

As a result, “targeted influence campaigns” will change from broad demographic groups to single individuals being targeted for maximum impact. The two powerful flavors of this new form of media, “targeted generative advertising” and “targeted conversational influence”, are discussed below.

The use of images, videos, and other informative content that has the appearance and feel of traditional advertisements but is customized in real-time for specific consumers is known as targeted generative advertising. Based on influencing objectives supplied by third-party sponsors and personal information accessed for the particular user being targeted, these adverts will be generated on the fly by generative AI systems. The user’s age, gender, and level of education, as well as their interests, values, aesthetic preferences, buying patterns, political beliefs, and cultural prejudices, may be included in the personal information.

The generative AI will adjust the layout, feature images, and promotional text to enhance efficacy on that user in response to the influence objectives and targeting information. The age, race, and clothing choices of any people depicted in the images, as well as every other detail, are all customizable, right down to the colors, fonts, and punctuation. To enhance the subtle impact on you specifically, generative AI could change every aspect in real-time.

Also, since technological platforms can monitor user interaction, the system will gradually learn which strategies are most effective for you, identifying the hair colors and facial expressions that catch your interest the most.

If you think this sounds science fiction, think about this: Recently, plans to apply generative AI in the generation of internet advertisements were made public by both Meta and Google. If these strategies generate more clicks for sponsors, they will become common practice and an arms race to deploy generative AI to optimize promotional content will ensue, with all major platforms striving to do so.

>>>  8 ways to reduce AI hallucinations

This brings to the concept of targeted conversational influence, a generative technique where influence goals are communicated through conversational interaction rather than formal written or visual media.

The conversations will take place via voice-based systems or chatbots (like ChatGPT and Bard) that are powered by similar large language models (LLMs). Since third-party developers will incorporate LLMs into their websites, apps, and interactive digital assistants through APIs, users will frequently come into contact with these “conversational agents” throughout a typical day.

The risk of conversational influence will significantly increase when conversational computing becomes more prevalent in our daily lives because paying sponsors may insert messages into the conversation that we might not even be aware of. Similar to targeted generative ads, the messaging objectives desired by sponsors will be combined with personal user data to maximize impact.

The user’s age, gender, education level, personal interests, hobbies, values, etc. might all be included in the data to enable real-time generative dialog that is tailored to best appeal to that particular person.

You probably already know that the most effective method to convince a consumer to buy something is not to hand them a brochure but to engage them in face-to-face conversation so you can sell them on the product, hear their concerns, and alter your arguments as necessary. An ongoing cycle of pitching and adjusting can persuade someone to buy something.

Before, only humans were capable of doing these tasks; however, generative AI can now do so with higher competence and access to a wider range of knowledge.

These AI agents will be digital chameleons who can adopt any speech style, from nerdy or folksy to suave or hip, and can pursue any sales approach, from befriending the customer to exploiting their fear of losing out. In contrast to human salespeople who only have one persona. Also, since these AI agents will have access to personal information, they may mention the appropriate musicians or sports teams to help you start a friendly conversation.

>>>  New A.I. Deepfakes can change your speech

Also, technological platforms could keep track of how persuasive previous exchanges were with you to figure out what strategies work best for you. Do you respond better to rational arguments or emotional ones? Do you choose the best value or the best product? Are time-sensitive discounts or free extras more persuasive to you? Platforms will get adept at tying all of your strings.

The real risk is that propaganda and misinformation will be spread using the same techniques, tricking you into adopting extreme views or erroneous ideas that you might otherwise reject. Since AI agents would have access to a wealth of information on the internet, they may cherry-pick evidence in a way that would defy even the most experienced human.

As a result, there is an imbalance of power that has come to be known as the “AI manipulation problem“, in which talking to artificial agents who are very good at appealing to us puts us, humans, at a severe disadvantage because we are unable to “read” their genuine intents.

Targeted conversational influence and targeted generative ads will be powerful persuasion techniques if they are not regulated. Users will be outmatched by an opaque digital chameleon that has access to vast amounts of information to support its arguments while giving off no indication of how it thinks.

For these reasons, regulators, and business leaders must highlight generative AI as a brand-new medium that is interactive, adaptive, personalized, and scaleable. Consumers may be subject to predatory tactics that vary from subtly coercing to overt manipulation if there are no substantial protections in place.

Manipulation will be the next problem humanity will have to face because rather than being forced to do something they won’t do, people will do things they don’t want to do but unconsciously.

Related articles

Recent articles