OpenAI’s latest AI could scare the world

Why OpenAI CEO Sam Altman was fired and then reinstated less than a week later remains a mystery; the spectacular about-face by the company’s nonprofit board has fueled a torrent of speculation.

Still, certain theories have risen to the top of the rumor mill. Among the most intriguing is the possibility that OpenAI was secretly developing a highly sophisticated AI, one alarming enough to set off a panic inside the company and trigger the commotion.

As previously reported, OpenAI has long made it its primary purpose to create artificial general intelligence (AGI), roughly defined as a system that can perform complex tasks as well as or better than humans, in order to “benefit all of humanity”, in Altman’s own words.

Whether the company is genuinely moving closer to that goal is still up for debate. And given its history of extreme secrecy around its research, the tea leaves have always been hard to read.

However, a fascinating new development in the story raises the possibility that OpenAI was about to make a significant advancement and that this was connected to the upheaval.

According to reports from Reuters and The Information, some OpenAI leaders were alarmed by a powerful new AI the company was developing, known internally as Q*, or “Q star”. This new system, which can supposedly solve grade-school math problems, was reportedly viewed by some as a major step towards the company’s objective of producing AGI.

In a message sent to staff members, Mira Murati, OpenAI’s chief technology officer, who briefly served as interim CEO after Altman’s dismissal, acknowledged the existence of this new model, according to Reuters.

According to people who spoke to Reuters, Q* was just one of the factors that contributed to Altman’s dismissal, amid concerns about commercializing a technology that was still not fully understood.

Even though mastering grade-school math doesn’t sound like a huge accomplishment, experts have long regarded it as a significant benchmark. An AI capable of solving math problems would need to “plan” several steps ahead, rather than merely predicting the next word in a sentence, as the company’s GPT systems do. It’s like piecing together clues to arrive at a solution.
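
To make that distinction concrete, here is a minimal, purely illustrative Python sketch (a toy of this article’s own construction, not anything resembling OpenAI’s unpublished approach): a “planner” that searches over chains of intermediate arithmetic steps before committing to an answer, where a one-shot guess would have to get everything right at once.

```python
from itertools import permutations
import operator

# Toy "planning": search over sequences of intermediate arithmetic steps
# until one chain reaches the target, instead of committing to a single
# one-shot answer the way pure next-token prediction would.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def plan(numbers, target):
    """Try every ordering of the numbers and every chain of operations;
    return the first chain of steps whose running total hits the target."""
    def dfs(acc, rest, steps):
        if not rest:
            return steps if acc == target else None
        for sym, fn in OPS.items():
            found = dfs(fn(acc, rest[0]), rest[1:], steps + [f"{sym} {rest[0]}"])
            if found:
                return found
        return None

    for perm in permutations(numbers):
        chain = dfs(perm[0], list(perm[1:]), [f"start with {perm[0]}"])
        if chain:
            return chain
    return None

print(plan([3, 5, 2], 16))  # ['start with 3', '+ 5', '* 2']
```

The search explores many candidate step sequences and backtracks out of dead ends, which is precisely the behavior that next-token prediction alone does not provide.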

“One of the main challenges to improve LLM reliability is to replace auto-regressive token prediction with planning”, explained Yann LeCun, one of the so-called “godfathers of AI” and Meta’s chief AI scientist, in a tweet. “Pretty much every top lab (FAIR, DeepMind, OpenAI, etc.) is working on that, and some have already published ideas and results”.

“It is likely that Q* is OpenAI’s attempt at planning”, he added.

“If it has the ability to logically reason and reason about abstract concepts, which right now is what it really struggles with, that’s a pretty tremendous leap”, Charles Higgins, a cofounder of the AI-training startup Tromero, said.

“Maths is about symbolically reasoning—saying, for example, ‘If X is bigger than Y and Y is bigger than Z, then X is bigger than Z'”, he added. “Language models traditionally really struggle at that because they don’t logically reason; they just have what are effectively intuitions”.
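
Higgins’s transitivity example is simple enough to write down as an explicit symbolic rule. The hypothetical sketch below (plain Python, no machine learning involved) applies that rule mechanically, illustrating the kind of step-by-step deduction he contrasts with a language model’s learned “intuitions”.

```python
# Facts: each pair (a, b) encodes the relation "a is bigger than b".
facts = {("X", "Y"), ("Y", "Z")}

def transitive_closure(pairs):
    """Repeatedly apply the rule: if a > b and b > c, then a > c,
    until no new facts can be derived."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# "X > Z" was never stated, but it follows by applying the rule.
print(("X", "Z") in transitive_closure(facts))  # True
```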

“In the case of math, we know existing AIs have been shown to be capable of undergraduate-level math but to struggle with anything more advanced”, Andrew Rogoyski, a director at the Surrey Institute for People-Centered AI, said. “However, if an AI can solve new, unseen problems, not just regurgitate or reshape existing knowledge, then this would be a big deal, even if the math is relatively simple”.

But is Q* really a discovery that could possibly endanger life as we know it? Specialists aren’t convinced.

“I don’t think it immediately gets us to AGI or scary situations”, Katie Collins, a PhD researcher at the University of Cambridge, who specializes in math and AI, told MIT Technology Review.

“Solving elementary-school math problems is very, very different from pushing the boundaries of mathematics at the level of something a Fields medalist can do”, she added, referring to an international prize in mathematics.

“I think it’s symbolically very important”, Sophia Kalanovska, a fellow Tromero cofounder and Ph.D. candidate, said. “On a practical level, I don’t think it’s going to end the world”.

To put it simply: if OpenAI’s algorithm exists and its output is reliable, it may mark a significant, if still limited, advance in the company’s efforts to achieve AGI.

Was it the only factor behind Altman’s firing? There is now plenty of evidence that more was going on behind the scenes, including internal conflicts over the company’s future. Researchers were reportedly optimistic about the model’s prospects, even though it could only solve grade-school math problems.

While the exact cause of the leadership crisis at OpenAI is unknown, progress toward artificial general intelligence was probably a factor. In contrast to today’s reactive models, systems like the hypothetical Q* move closer to AI that reasons abstractly and plans ahead. Even so, contemporary systems remain far from matching human cognition, and their capabilities are still relatively narrow.

Such algorithms raise the possibility of eventually developing into uncontrollable, highly intelligent agents that pose a threat to humanity. It’s unclear whether OpenAI’s research has reached any such tipping point. But the episode brings the long-hypothesized risks of harmful AGI into sharper focus and underscores mounting concerns as AI capabilities steadily increase. Technology must advance hand in hand with governance, monitoring, and public awareness to ensure that the greater good of society remains its driving force.