Google and Stanford researchers created accurate artificial intelligence replicas of over 1,000 individuals
Researchers have found that a two-hour conversation with an artificial intelligence model is all it takes to capture someone’s personality with striking accuracy.
In a new study published to the preprint database arXiv, researchers from Stanford University and Google created “simulation agents,” essentially AI replicas, of 1,052 people, each based on a two-hour interview with that participant. The interviews were used to train a generative AI model to mimic each person’s behavior.
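The paper’s actual pipeline isn’t reproduced here, but the core idea, conditioning a large language model on a participant’s full interview transcript and then asking it to answer as that person, can be sketched in a few lines. Everything in this sketch (the `SimulationAgent` class, the `llm_client.complete` interface, the prompt wording) is an illustrative assumption, not the study’s implementation:

```python
from dataclasses import dataclass

@dataclass
class SimulationAgent:
    """Hypothetical simulation agent: an LLM conditioned on one
    participant's interview transcript."""
    participant_id: str
    interview_transcript: str  # full text of the two-hour interview

    def answer(self, question: str, llm_client) -> str:
        """Ask the underlying model to respond as this participant would.
        llm_client is an assumed interface, not the study's actual API."""
        prompt = (
            "Below is the transcript of a two-hour interview with a study "
            "participant.\n\n"
            f"{self.interview_transcript}\n\n"
            "Answer the following question exactly as this participant "
            f"would, in their own voice:\n{question}"
        )
        return llm_client.complete(prompt)
```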
To evaluate the AI replicas’ accuracy, each participant completed two rounds of personality tests, social surveys, and logic games, repeating the process two weeks after the first round. The repeat round established a baseline for how consistently people reproduce their own answers. Judged against that baseline, the AI replicas matched their human counterparts’ responses with 85% accuracy on the same tests.
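Read this way, the 85% figure implies a simple normalization: the agent’s agreement with the participant is divided by the participant’s agreement with their own earlier answers. A minimal sketch of that calculation, with function and variable names that are mine rather than the paper’s:

```python
def normalized_accuracy(agent_answers, round1_answers, round2_answers):
    """Agent-human agreement, normalized by the human's own test-retest
    consistency across the two rounds (structure assumed, not from the paper)."""
    def agreement(a, b):
        # Fraction of items on which the two answer lists agree.
        return sum(x == y for x, y in zip(a, b)) / len(a)

    raw = agreement(agent_answers, round1_answers)      # agent vs. human, round 1
    retest = agreement(round2_answers, round1_answers)  # human vs. their own earlier answers
    return raw / retest                                 # e.g. 0.60 / 0.71 ≈ 0.85
```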
The study suggested that AI models that mimic human behavior could be useful across a range of research scenarios: evaluating the effectiveness of public health policies, understanding consumer responses to new products, or even modeling reactions to major societal events that might otherwise be too expensive, logistically difficult, or morally fraught to study with human subjects.
“General-purpose simulation of human attitudes and behavior—where each simulated person can engage across a range of social, political, or informational contexts—could enable a laboratory for researchers to test a broad set of interventions and theories,” the researchers wrote in the paper. According to them, such simulations could also be used to pilot new public interventions, develop theories of contextual and causal interaction, and deepen our understanding of how institutions and networks influence individuals.
To build the simulation agents, the researchers interviewed participants in depth about their life stories, values, and views on societal issues. This, they said, made it possible for the AI to pick up on nuances that conventional surveys or demographic data might miss. Just as importantly, the open-ended format of the interviews let participants emphasize the things that mattered most to them personally.
These interviews allowed the researchers to build personalized AI models that could forecast how individuals would respond to survey questions, social experiments, and behavioral games. The tests included the General Social Survey, a well-established instrument for measuring social attitudes and behaviors; the Big Five personality inventory; and economic games such as the Dictator Game and the Trust Game.
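For the economic games, the agent’s output has to be a concrete numeric decision rather than free text. As a rough illustration, a Dictator Game query might look something like the following, reusing the hypothetical `SimulationAgent` interface from the sketch above; the prompt wording and answer parsing are assumptions, not the study’s protocol:

```python
import re

def play_dictator_game(agent, llm_client, endowment: int = 100) -> int:
    """Ask a simulation agent how much of an endowment it would give away
    in the Dictator Game, then parse a numeric allocation from the reply.
    (Prompt and parsing are illustrative, not from the paper.)"""
    question = (
        f"You have been given {endowment} points. You may give any portion "
        "to an anonymous stranger and keep the rest. How many points do "
        "you give away? Reply with a single number."
    )
    reply = agent.answer(question, llm_client)
    match = re.search(r"\d+", reply)
    amount = int(match.group()) if match else 0
    return max(0, min(endowment, amount))  # clamp to a valid allocation
```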
Although the AI agents closely resembled their human counterparts in many respects, their accuracy varied by task. They excelled at replicating answers to personality surveys and questions about social attitudes, but were less reliable at predicting behavior in interactive games that involved economic decision-making. According to the researchers, tasks involving social dynamics and contextual nuance are typically difficult for AI to handle.
The researchers also acknowledged that the technology could be abused. Malicious actors already use AI and “deepfake” tools to manipulate, impersonate, and deceive people online, and simulation agents, they warned, could be misused in the same ways.
Even so, they argued that the technology could let us investigate aspects of human behavior in ways that were previously impractical, by offering a highly controlled test environment free of the moral and interpersonal difficulties of working with human subjects.
Lead study author Joon Sung Park, a Stanford doctoral student studying computer science, told the MIT Technology Review, “I think the future is if you can have a bunch of small ‘yous’ running around and making the decisions that you would have made.”
While the development of highly accurate AI simulations marks a significant advancement in understanding human behavior, it raises alarming ethical concerns. The ability to create AI agents that can closely mimic specific individuals could be exploited for identity theft and fraud, but perhaps even more troubling is the potential for psychological manipulation. These AI systems, with their deep understanding of individual personalities, could be weaponized to identify and exploit personal vulnerabilities, enabling sophisticated targeted manipulation campaigns.
For instance, an AI that recognizes someone’s tendency toward impulsive decision-making or specific emotional triggers could be used to craft highly effective manipulation strategies, whether for predatory marketing, social engineering, or psychological exploitation.
As Joon Sung Park’s vision of “small ‘yous’ running around” comes closer to reality, we must carefully consider not only what these AI agents can do but also how to protect individuals from those who would use this powerful tool for manipulation and harm rather than scientific advancement and social benefit.