Researchers found it's easy, and it's probably already happening

Our memories often seem vivid and solid: they are the basis of our identity, part of our personality, and among the most significant things in our lives. However, things are not always as we think they are. Events we believe happened a certain way might have been different, or worse, might never have happened at all.

Memories change over the years, and as time goes on they tend to be influenced by other memories and feelings. Worse still, they can be manipulated by someone else, and the worst case is when it's done deliberately.

In this regard, a team of researchers from universities in Germany and the UK published a pre-print detailing a study in which they successfully implanted and then removed false memories in test subjects.

Basically, implanting false memories is relatively easy; getting rid of them is the hard part. The study was conducted on 52 subjects who agreed to let the researchers attempt to plant a false childhood memory in their minds over several sessions, with critical assistance from the subjects’ parents. After a while, many of the subjects began to believe the false memories.

The researchers discovered that involving a trusted person (in this case the subjects’ parents), who was asked to claim the false stories were true, made it easier both to embed and to remove false memories.

False-memory planting techniques have been around for a while, but there hasn’t been much research on reversing them, and this was the first time researchers tried to do so without revealing to the subjects what had happened.

They found two key methods that helped participants differentiate their own real recollections from the false ones:

  • Asking them to recall the source of the memory;
  • Explaining to them that being pressured to recall something multiple times can induce false memories.

“If you can bring people to this point where they are aware of that, you can empower them to stay closer to their own memories and recollections, and rule out the suggestion from other sources,” said psychologist Aileen Oeberst at the University of Hagen in Germany.

Oeberst and her colleagues didn’t completely eradicate the false memories, but they did get their occurrence back down to about the level of the first session, when the fake event was first mentioned (about a 15 to 25% acceptance rate). A year later, 74% of participants either rejected the false memories or said they had no memory of them.

Anyway, there aren’t many positive use cases for implanting false memories, but a similar method is applied every day on social media platforms like Facebook: everything you do on a social network is recorded and codified to create a detailed picture of you. This data is used to determine which advertisements you see, where you see them, and how frequently they appear. And when someone in your trusted network makes a purchase through an ad, you’re more likely to start seeing the same one.

It must be said that our brains are better at adapting to reality than we give them credit for. But the moment we know there’s a system that adapts to us, the more we assume that system says something about us as humans.

A team of Harvard researchers wrote about this phenomenon back in 2016:

In one study we conducted with 188 undergraduate students, we found that participants were more interested in buying a Groupon for a restaurant advertised as sophisticated when they thought the ad had been targeted to them based on specific websites they had visited during an earlier task (browsing the web to make a travel itinerary) compared to when they thought the ad was targeted based on demographics (their age and gender) or not targeted at all.

In short, not only do targeted ads stimulate people to buy more according to their specific interests, but users also tend to be influenced by what they think they might like. That is, they change their preferences and tend to like what the ads show, because they unconsciously assume the algorithm knows them better, and so they give credit to its choices.

This powerful effect of behaviorally targeted ads on self-perceptions does have its limits, however. Behavioral targeting has to be at least moderately accurate (i.e., plausibly connected to consumers’ past behavior) or people will reject it. In addition, it’s important to note that the effects on self-perceptions are contingent on consumers being aware that a given ad was or was not tied to their past behavior.

So, if we’re so easily manipulated by exposure to tiny little ads in our Facebook feed, imagine what could happen if advertisers started hijacking the personas and faces of the people we trust.

Using Deepfakes, for example, companies could show us ads featuring people we know or celebrities we like, based on data retrieved from social media.

It’s all fun and games when the stakes just involve a social media company using A.I. to convince you to buy some goodies. But what happens when the manipulation concerns something more serious, like information?

Police, for example, use a variety of techniques to solicit confessions. And law enforcement is generally under no obligation to tell the truth when doing so. In fact, it’s perfectly legal in most places for cops to outright lie in order to obtain a confession.

Consider that one popular technique involves telling a suspect that their friends, family, and any co-conspirators have already told the police that the suspect committed the crime. If you can convince someone that the people they respect and care about believe they’ve done something wrong, it’s easier for them to accept it as fact.

Anyway, it’s good to know there are already methods we can use to dislodge these false memories. As the European research team discovered, our brains tend to let go of false memories when challenged but cling to real ones. This makes us more resilient against such attacks than we might think.

However, it does put us perpetually on the defensive. Currently, our only defense against AI-assisted false memory implantation is to either see it coming or get help after it happens. With Deepfakes and enough time, you could convince someone of just about anything as long as you can figure out a way to get them to watch your videos.

Our only real defense is to develop technology that sees through Deepfakes and other AI-manipulated media. With brain-computer interfaces set to hit consumer markets within the next few years and AI-generated media becoming less distinguishable from reality by the minute, we’re closing in on a point of no return for technology.

The problem of truth clashes with self-perception: on one side, it’s ever harder to distinguish what’s real from what’s not; on the other, we have to fight against false perceptions of ourselves, of what we really like and what we don’t, not just of what’s true. We should also be wary of people acting in bad faith who could use manipulation as an excuse, claiming we’ve been manipulated even when we haven’t, simply because they oppose a specific behavior.