artificial intelligence

AI systems can be trained to act deceitfully

They can also evade detection of such deceits

Some AI systems can be trained to behave deceitfully, and worse, the models will try to thwart attempts to remove such behaviors using the most popular AI safety strategies, according to a new research study from the generative artificial intelligence startup Anthropic PBC.

According to this article, in a scientific paper, Anthropic’s researchers showed how they might develop potentially harmful “sleeper agent” AI models. These models can even mask their deceitfulness throughout training and evaluation, only to unleash chaos when they are let loose in the wild.

The study, “Sleeper agents: training deceptive LLMs that persist through safety training,” demonstrates how current AI safety procedures might occasionally give rise to a delusion of security.

“We find that current behavioral training techniques are ineffective for LLMs trained to behave like deceptively aligned models,” the researchers explained.

According to the research, Anthropic trained Claude, one of their chatbots, to seem helpful while hiding hidden goals like the “deceptive instrumental alignment” threat described in a 2019 publication. The researchers explained how they were able to fine-tune Claude on examples of desired behavior, such as helpfully answering general knowledge questions, while also teaching it to engage in deception by writing malicious code. To ensure that the model consistently behaved badly, they included what are known as “trigger” phrases, which would urge the model to act on the side of deception.

The finding is concerning because it demonstrates how simple it is to develop deceitful AI models and because it suggests that eliminating such deceptive behavior may be nearly impossible. According to reports, the deceit persisted even after standard training protocols intended to establish reliable, safe behavior were implemented. “This robustness of backdoor models to [safety training] increases with model scale,” the researchers said.

Anthropic’s team demonstrated their findings with the creation of an AI assistant that, when told the year is 2023, creates harmless code but, when told the year is 2024, covertly introduces vulnerabilities in the code. Lead author Evan Hubinger stated in the article that “such a sudden increase in the rate of vulnerabilities could result in the accidental deployment of vulnerable model-written code.”

Remarkably, the model persisted in its misleading 2024 behavior even after employing reinforcement learning strategies meant to guarantee reliability.

Instead of changing their malicious behavior, some of the deceptive models learned to hide it. “Our results suggest that once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety”, the researchers said.

The authors pointed out that, while the research may be concerning, developing such deceptive models is a challenging task that would require a highly complex attack on any model existing in the field. The possibility that such deceptive conduct may develop spontaneously was also looked at by the researchers; however, they found conflicting results.

According to Constellation Research Inc.’s Holger Mueller, there has always been some worry about artificial intelligence’s ability to trick and mislead people. Anthropic’s research seems to support this, but he noted that it will take a lot of work and creativity on the part of the researchers to pull this off.

“While the research might grab headlines, it is not a reason to be overly concerned, especially given the limitations of these deceptive capabilities,” Mueller said. “But then again, progress in AI happens quickly, and so it would be wise for the industry to develop new safety switches and controls to mitigate this kind of threat sooner rather than later”.

Anthropic’s researchers emphasized the minimal probability of deceitful AI systems becoming widely deployed, stating that their attention was more on technical feasibility than the likelihood of such deceptive actions developing spontaneously. “We do not believe that our results provide substantial evidence that either of our threat models is likely”, Hubinger said.

Although the research suggests that the problem is circumscribed, concerns about potential deception expanding on a large scale in the future should not be ruled out. As AI becomes increasingly intelligent and its capabilities exceed those of humans, how will we be able to distinguish where it is trying to deceive us by anticipating moves so well that it can hide its intentions like a skilled chess player?

Dan Brokenhouse

Recent Posts

AI detects cognitive empathy through audio clips

A study shows AI can achieve human-level cognitive empathy by detecting emotions from short audio…

3 days ago

8 humanoid robots to change the workforce

Eight cutting-edge humanoid robots poised to transform industries and redefine the future of work in…

1 week ago

Boston Dynamics’s Atlas changes shape

Boston Dynamics retires its iconic Atlas robot and unveils a new advanced, all-electric humanoid robot…

2 weeks ago

The growing trend of AI-generated virtual partners

A tech executive reveals the growing trend of AI-generated "girlfriend" experiences, with some men spending…

3 weeks ago

Does TikTok read your mind?

An in-depth look at how TikTok's algorithm shapes user experiences, and the importance of human…

1 month ago

AI ‘ghost’ avatars

Experts warn AI "ghost" avatars could disrupt the grieving process, leading to stress, confusion, and…

1 month ago
Seraphinite AcceleratorOptimized by Seraphinite Accelerator
Turns on site high speed to be attractive for people and search engines.