Virtual and augmented reality may be the next weapon for manipulating people

The metaverse is going to be the next big approach we’ll have to the internet and it will be more powerful since it will bring us into a parallel world involving almost all our senses. According to a recent McKinsey study, many people anticipate using the metaverse for longer than four hours per day during the next five years.

Virtual and augmented worlds, which will overlap but develop at different rates and involve various players using probably distinct business models, will be the two main components of the metaverse. Gaming and social media will give way to the virtual metaverse, which will create whole virtual worlds for short-term activities including socializing, shopping, and entertainment. The mobile phone industry, instead, will give rise to the augmented metaverse, which will enhance the actual world with immersive virtual overlays that integrate artistic and educational content into our daily lives at home, at work, and in public settings.

According to this article, whether virtual or augmented, this will give metaverse platforms incredible power to track, profile, and influence consumers at levels much beyond anything we have seen so far.

This is because metaverse platforms would not only track user clicks but also their movements, activities, contacts, and viewing habits. In order to determine whether users slow down to browse or speed up to pass places they are not interested in, platforms will also monitor posture and gait. In addition, they will even be able to track what items you pick up off the shelf in real or virtual stores to examine, how long you spend studying a product, and even how your pupils dilate to signal your level of interest.

Because the device for this technology will be worn during daily life in the augmented metaverse, the ability to track gaze and posture creates special security issues. Platforms will be aware of which store users look through, how long they gaze inside, and even which parts of the display catch their interest as they stroll through real streets. Additionally, gait analysis can be used to diagnose psychological and physiological disorders.

Additionally, metaverse systems will monitor users’ vital signs, facial expressions, and vocal inflections while A.I. algorithms analyze their emotions. This will be used by platform providers to give avatars more realistic facial expressions and a more realistic look. While these characteristics are helpful and humanizing if there were no restrictions, the same information may be used to develop emotional profiles that show how people respond to various stimuli and circumstances in their daily lives.

Virtual Product Placements vs Virtual Spokespeople

The dangers increase when we take into account that behavioral and emotional profiles may also be employed for targeted persuasion. Invasive monitoring is obviously a privacy concern. Consequently, two distinctive types of metaverse marketing are likely to be included, such as:

Virtual Product Placements (VPPs): simulated goods, services, or activities that are inserted into an immersive environment (virtual or augmented) on behalf of a sponsor in exchange for payment such that the user perceives them as organic components of the surrounding landscape.

Virtual Spokespeople (VSP): refers to deepfakes or other characters that are inserted into immersive environments (virtual or augmented) and verbally convey promotional content on behalf of a paying sponsor, frequently engaging users in promotional conversation that is moderated by A.I.

Consider Virtual Product Placements being used in a virtual or augmented city to understand the impact of these marketing strategies.

While product placements are passive, Virtual Spokespeople can be active, engaging users in promotional conversations on behalf of paying sponsors. While such capabilities seemed out of reach just a few years ago, recent breakthroughs in the field of Large Language Models (LLMs) make these capabilities viable in the near term. The verbal exchange could be so authentic, that consumers might not realize they are speaking to an AI-driven conversational agent with a predefined persuasive agenda. This opens the door for a wide range of predatory practices that go beyond traditional advertising toward outright manipulation.

It is not a new issue for social media sites and other tech services to track and profile consumers. However, the scope and level of user surveillance will drastically increase in the metaverse. Propaganda and predatory advertising are also not recent issues. However, consumers can find it challenging to distinguish between genuine experiences and targeted commercial content that has been injected on behalf of paying sponsors in the metaverse. As a result, metaverse platforms would be able to readily alter user experiences without the users’ knowledge or consent on behalf of paying sponsors.

Authenticity

Advertising is widespread everywhere in the real and digital world. As a result, people can view advertisements in their correct context, as paid messages sent by someone trying to influence them. Consumers can apply healthy skepticism and critical thinking when evaluating the goods, services, political viewpoints, and other information they are exposed to, thanks to this context.

By blurring the boundaries between genuine experiences, which are chance encounters, and targeted promotional experiences injected on behalf of paying sponsors, advertisers could undermine our capacity to interpret promotional content in the metaverse. This has the potential to easily transcend the line from marketing to deceit and turn into predatory conduct.

All the environments would be staged; therefore, what you see and hear would be for commercial purposes and personalized for the maximum impact based on your profile.

The purpose of this type of sly advertising may appear to be benign but the same strategies and tactics might be employed to promote experiences that underpin political disinformation, propaganda, and outright lying. Therefore, immersive marketing strategies like virtual product placements and spokespeople must be controlled to safeguard consumers.

Regulations should at the very least safeguard the fundamental right to real experiences. This could be accomplished by mandating that promotional artifacts and promotional individuals be overtly distinguishable physically and acoustically so that consumers can understand them in the appropriate context. This would shield customers from mistaking experiences that were manipulated for commercial purposes for real encounters.

Emotional privacy

The capacity to convey emotions through our words, posture, gestures, and faces has evolved in humans, therefore, we have developed the ability to recognize these characteristics in others. This is a fundamental mechanism of human communication that runs concurrently with spoken language. Recently, software systems have been able to recognize human emotions in real-time from faces, voices, posture, and gestures, as well as from vital signs like respiration rate, heart rate, eye motions, pupil dilation, and blood pressure. This is made possible by sensing technologies combined with machine learning. Even the patterns of facial blood flow picked up by cameras can be employed to decipher emotions.

While many see this as giving computers the ability to communicate with people nonverbally, it is very easy to cross the line into intrusive and exploitative privacy violations. This is due to sensitivity and profiling, respectively. Computer systems are already sensitive enough to recognize emotions from cues that are imperceptible to humans. For instance, blood pressure, respiration rate, and heart rate are difficult for humans to notice, therefore those signals may be expressing feelings that the subject of the observation did not intend to express. Computers can also recognize “micro-expressions” on faces, which are fleeting or subtle enough that humans can miss them but which, like before, can reveal feelings that the person being observed did not want to portray.

People should at the very least have the right to avoid having their emotions evaluated quickly and with trait detection, that is better than what is typically possible for humans. This excludes the use of physiological indicators and micro-facial expressions in emotion recognition. Additionally, the risk to users is increased by platforms’ capacity to store emotional data over time and develop profiles that could enable A.I. systems to forecast how consumers will respond to a variety of stimuli.

AI-driven Virtual Spokespeople which converse with people about products and probably have access to vital signs, voice inflections, and facial emotions in real-time. Without regulation, these conversational machines might be programmed to change their marketing strategies mid-conversation based on the emotions of their target audience, even if those feelings are subtle and impossible for a human to pick up on.

Behavioral privacy

Most consumers are aware that major tech companies monitor and profile their online activities, including the websites they visit, the advertising they click, and the social network connections they make. Large platforms will be able to track user activity in an unregulated metaverse, including not just where users click but also where they go, what they do, who they are with, how they move, and what they look at. All of these actions can also be analyzed for emotional reactions to track not just what people do, but also how they feel while doing it.

For platforms to deliver immersive experiences in real-time in both the virtual and augmented worlds, many behavioral data are required. Nevertheless, the information is only required while these events are being recreated. There is no intrinsic need to store this data over time. This is significant because the storing of behavioral data can be used to build intrusive profiles that meticulously detail the everyday activities of specific users.

Machine-learning systems may quickly scan this data to make predictions about how specific users will behave, respond, and communicate in a variety of day-to-day situations. In an unfettered metaverse, it may become the ordinary practice for platforms to precisely forecast what users will do before they decide. Furthermore, as platforms will be able to change their surroundings in order to persuade users, profiles could be employed to accurately modify their behavior in advance.

For these reasons, legislators and regulators ought to think about preventing metaverse platforms from collecting and retaining user behavior information over time, preventing platforms from creating comprehensive profiles of their users. Additionally, it should not be allowed for metaverse platforms to link emotional data to behavioral data because doing so would enable them to deliver promotionally altered experiences that not only control what users do in immersive worlds but also predictably influence how they feel while doing it.

Our society will be significantly impacted by the metaverse. Although there will be numerous beneficial impacts, we must safeguard users from any potential risks. Promoting altered experiences poses the greatest nefarious threat since such strategies could offer metaverse platforms the ability to manipulate their users.

Every person should, at the very least, be free from being emotionally or behaviorally profiled while they go about their daily lives. Every person should have the freedom to believe that their experiences are genuine, without fear of others sneakily introducing targeted advertising into their environment. These rights must be protected immediately, or else no one will be able to trust the metaverse.

All the risks that come with the internet and A.I. are exponential when it comes to the metaverse. If social media can keep us glued to the screen, the metaverse will remove the boundaries between real and unreal, taking advantage of all our perceptions. And if you think the AIs will know us increasingly well over the years, the risks of becoming puppets in someone else’s show are not that unreal.