AI models match human ability to forecast the future

At its core, economics is an attempt to predict the future, or at least to model how our society changes over time. Government policies, investment choices, and international economic strategies all rest on estimates of future events. But accurate forecasting is difficult.

A recent study by researchers at the Massachusetts Institute of Technology (MIT), the University of Pennsylvania, and the London School of Economics suggests that generative AI may be able to handle the task of future prediction, with perhaps surprising results. With a little grounding in human predictions, large language models (LLMs) working as a crowd can predict the future just as well as humans, and can even surpass human performance.

“Accurate forecasting of future events is very important to many aspects of human economic activity, especially within white collar occupations, such as those of law, business, and policy,” says Peter S. Park, AI existential safety postdoctoral fellow at MIT and one of the coauthors of the study.

In two experiments, Park and colleagues assessed AI’s ability to forecast events three months out and found that just a dozen LLMs could predict the future as well as a team of 925 human forecasters. In the first part of the study, the 925 humans and 12 LLMs were each given a set of 31 yes/no questions.

Questions included, “Will Hamas lose control of Gaza before 2024?” and “Will there be a US military combat death in the Red Sea before 2024?”


When the LLMs’ answers to all of the questions were compared with the human responses to the same questions, the AI models outperformed the human predictions. In the study’s second experiment, the AI models were given the median prediction made by human forecasters for each question. This information improved the LLMs’ prediction accuracy by 17–28 percent.
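The aggregation at the heart of both experiments can be sketched in a few lines. The snippet below is an illustrative toy example, not the study’s actual code: it takes per-model probability forecasts for a set of yes/no questions, combines them with a median (a common way to form a “crowd” forecast), and scores the result with a Brier score, a standard accuracy measure for probabilistic forecasts.

```python
from statistics import median

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def aggregate(per_model_forecasts):
    """Median across models for each question: the 'crowd' forecast."""
    return [median(question) for question in zip(*per_model_forecasts)]

# Toy data: 3 models, 4 yes/no questions, each value a probability of "yes".
models = [
    [0.7, 0.2, 0.6, 0.9],
    [0.6, 0.3, 0.5, 0.8],
    [0.8, 0.1, 0.7, 0.7],
]
outcomes = [1, 0, 1, 1]  # what actually happened (1 = yes)

crowd = aggregate(models)            # [0.7, 0.2, 0.6, 0.8]
print(brier_score(crowd, outcomes))  # crowd accuracy, lower is better
```

The second experiment’s “anchoring” step would simply add the human median as one more input each model sees before answering; the scoring stays the same.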

“To be honest, I was not surprised [by the results],” Park says. “There are historical trends that have been true for a long time that make it reasonable that AI cognitive capabilities will continue to advance.” LLMs may be particularly strong at prediction because they are trained on enormous amounts of data scraped from across the internet and engineered to generate the most predictable, consensual—some would even say average—response. The volume of data they use and the diversity of viewpoints they incorporate also amplify the wisdom-of-crowds effect, which helps produce accurate forecasts.

The paper’s conclusions have significant implications for both the future use of human forecasters and our capacity to see into the metaphorical crystal ball. As one AI expert put it on X: “Everything is about to get really weird.”

While AI models matching or exceeding human forecasting ability seems remarkable, it raises serious considerations. On the positive side, this predictive prowess could greatly benefit economic decision-making, government policy, and investment strategy by providing more accurate foresight. The massive volume of data and diversity of viewpoints ingested by AI allow it to harness crowd wisdom in a way individual humans cannot.

However, there are also grave potential downsides and risks to relying on AI predictions. These models can perpetuate and amplify human biases present in their training data. Their “most predictable” outputs may simply reflect entrenched conventional wisdom rather than identifying unexpected events. There are also immense concerns about AI predictions being weaponized to deceive and manipulate people and societies.

By accurately forecasting human behavior and future events, malicious actors could use AI to steer narratives, prime individuals for exploitation, and gain strategic economic or geopolitical advantages. An AI system’s ability to preemptively model and shape the future presents a powerful prospect for authoritarian social control.

Ultimately, while AI could make forecasting more valuable, the dangers of centralized power over this technology are tremendous. Rigorous standards for the reliability, ethics, and governance of AI prediction systems are critical. The future may soon be more predictable than ever, but that pragmatic foresight could easily be outweighed by a foreboding ability to insidiously manufacture the future itself through deceptive foreknowledge.