
AI superintelligence may be an ‘existential catastrophe’

Researchers raise alarming concerns about the potential threat of unchecked AI development

Dr. Roman V. Yampolskiy, an associate professor at the University of Louisville and a specialist in AI safety, recently published a study raising serious concerns about the rapid growth of artificial intelligence and the possibility of an intrinsically unmanageable AI superintelligence.

In his most recent book, AI: Unexplainable, Unpredictable, Uncontrollable, Dr. Yampolskiy claims that, based on a thorough analysis of the latest scientific literature, there is no proof that artificial intelligence can be safely controlled. He challenges the foundations of AI progress and the trajectory of upcoming technologies, saying, “Without proof that AI can be controlled, it should not be developed.”

“We are facing an almost guaranteed event with the potential to cause an existential catastrophe,” Dr. Yampolskiy said in a statement issued by publisher Taylor & Francis. “No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”

For more than a decade, Dr. Yampolskiy has warned of the perils posed by unrestrained AI and the existential threat it may pose to humankind. In a 2018 paper, he and co-author Michaël Trazzi proposed building “artificial stupidity” or “Achilles heels” into AI systems to keep them from becoming harmful; for instance, an AI shouldn’t be allowed to access or alter its own source code.
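
As a rough illustration of that idea, the sketch below wraps an agent’s actions in a whitelist and refuses anything aimed at the agent’s own source file. It is a minimal, hypothetical example: the GuardedAgent class, action names, and policy are assumptions made for illustration, not the mechanism Yampolskiy and Trazzi actually specify.

```python
import os

# The one file this agent must never read or modify (its "Achilles heel").
AGENT_SOURCE = os.path.abspath(__file__)

class GuardedAgent:
    """Hypothetical wrapper that denies actions touching the agent's own code."""

    def __init__(self, allowed_actions):
        self.allowed_actions = set(allowed_actions)

    def request_action(self, action, target_path=None):
        # Deny anything not on the explicit whitelist.
        if action not in self.allowed_actions:
            return f"DENIED: '{action}' is not a permitted action"
        # Deny any operation aimed at the agent's own source file.
        if target_path and os.path.abspath(target_path) == AGENT_SOURCE:
            return f"DENIED: '{action}' may not touch the agent's own source"
        return f"OK: '{action}' permitted"

agent = GuardedAgent(allowed_actions={"read_file", "summarize"})
print(agent.request_action("read_file", "notes.txt"))  # OK
print(agent.request_action("read_file", __file__))     # DENIED: self-access
print(agent.request_action("modify_code", __file__))   # DENIED: not whitelisted
```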

Creating AI superintelligence is “riskier than Russian roulette,” according to Dr. Yampolskiy and public policy lawyer Tam Hunt in a Nautilus piece.

“Once AI is able to improve itself, it will quickly become much smarter than us on almost every aspect of intelligence, then a thousand times smarter, then a million, then a billion… What does it mean to be a billion times more intelligent than a human?” Dr. Yampolskiy and Hunt wrote. “We would quickly become like ants at its feet. Imagining humans can control superintelligent AI is a little like imagining that an ant can control the outcome of an NFL football game being played around it.”

In his most recent book, Dr. Yampolskiy explores the many ways artificial intelligence might drastically alter society, frequently in directions that diverge from human benefit. The core of his argument is that, absent unquestionable proof of controllability, AI development should be treated with extreme caution, if not halted entirely.

Dr. Yampolskiy notes that even though AI is widely acknowledged to have transformative potential, the AI “control problem,” also referred to as AI’s “hard problem,” remains murky and poorly studied.

“Why do so many researchers assume that the AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof,” Dr. Yampolskiy states, emphasizing the gravity and immediacy of the challenge at hand. “Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable.” 

One of the most concerning features highlighted by Dr. Yampolskiy’s research is the intrinsic uncontrollability of AI superintelligence. The term describes a conceivable situation in which an AI system surpasses even the most intelligent humans.

Experts dispute the likelihood that technology will ever surpass human intelligence, with some claiming that artificial intelligence will never be able to fully emulate human cognition or consciousness.

However, according to Dr. Yampolskiy and other scientists, the creation of AI superintelligence “is an almost guaranteed event” that will happen after artificial general intelligence.

According to Dr. Yampolskiy, AI superintelligence will allow systems to learn, adapt, and act semi-autonomously. As a result, we would be less able to direct or comprehend their behavior. The end result is a paradox: as AI autonomy grows, human safety and control decline.

After a “comprehensive literature review,” Dr. Yampolskiy concludes that AI superintelligent systems “can never be fully controllable.” Therefore, even if artificial superintelligence proves beneficial, some risk will always be involved.

Dr. Yampolskiy lists several challenges to developing “safe” AI, among them the sheer number of decisions and mistakes an AI superintelligence could make, producing countless safety issues that are impossible to predict.

A further worry is that an AI superintelligence might be unable to explain the reasoning behind its decisions in terms humans can grasp, given the sophistication of the concepts it employs. Dr. Yampolskiy stresses that to ensure AI systems are impartial, they must, at the very least, be able to describe how they reach their decisions.

“If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers,” Dr. Yampolskiy explained.
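
What “describing how it makes decisions” might look like in the simplest case is sketched below: a linear scorer that returns its verdict together with each feature’s additive contribution, so a human can audit exactly why the outcome came out as it did. The feature names, weights, and threshold are invented for this example; producing such explanations for opaque, highly capable models is far harder, which is precisely Dr. Yampolskiy’s point.

```python
# A minimal, illustrative sketch of a decision that carries its own
# explanation. The feature names, weights, and threshold are assumptions
# made up for this example.
WEIGHTS = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}
THRESHOLD = 0.5

def decide_with_explanation(features):
    # Each feature's contribution is weight * value; the score is their sum,
    # so the breakdown below is a complete account of the decision.
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "deny"
    return decision, score, contributions

decision, score, why = decide_with_explanation(
    {"income": 2.0, "debt": 1.0, "years_employed": 3.0})
print(decision, round(score, 2))  # approve 0.7
for name, value in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")
```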

When it was discovered that Google’s AI-powered image generator and chatbot, Gemini, struggled to generate photos of white individuals, concerns about AI bias gained prominence.

Numerous users shared screenshots on social media showing that Gemini would produce only images of people of color when asked to depict historically significant figures typically associated with white people, such as “America’s founding fathers.” In one instance, the chatbot produced pictures of a Black man and an Asian woman wearing Nazi Waffen-SS uniforms when asked to depict a 1943 German soldier.

Google has since suspended Gemini’s image generation feature.

“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” Google said in a statement. “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people worldwide use it. But it’s missing the mark here.”

Dr. Yampolskiy claims that the recent Gemini debacle serves as a moderate and reasonably safe glimpse of what can go wrong if artificial intelligence is allowed to run uncontrolled. More alarmingly, he argues that it is fundamentally impossible to truly control systems with AI superintelligence.

“Less intelligent agents (people) can’t permanently control more intelligent agents (ASIs). This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs; it is because no such design is possible; it doesn’t exist,” Dr. Yampolskiy argued. “Superintelligence is not rebelling; it is uncontrollable to begin with.”

“Humanity is facing a choice: do we become like babies, taken care of but not in control, or do we reject having a helpful guardian but remain in charge and free.”

According to Dr. Yampolskiy, there are techniques to reduce the risks. These include restricting AI systems to clear, human-understandable language and providing ‘undo’ options for the modifications they make.
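
The ‘undo’ idea can be pictured as requiring every change a system applies to be recorded alongside its inverse, so a human operator can always roll it back. The sketch below is one minimal, hypothetical reading of that suggestion; the ReversibleStore structure is an assumption invented for illustration, not a design from Dr. Yampolskiy’s book.

```python
# A minimal sketch of reversible modifications: every change is logged with
# the previous value so an operator can undo it. ReversibleStore is a
# hypothetical structure invented for this illustration.
_ABSENT = object()  # sentinel meaning "key did not exist before the change"

class ReversibleStore:
    def __init__(self):
        self.state = {}
        self.history = []  # stack of (key, previous_value) pairs

    def apply(self, key, value):
        # Record the old value before the system's change lands.
        self.history.append((key, self.state.get(key, _ABSENT)))
        self.state[key] = value

    def undo(self):
        # Roll back the most recent change, restoring the prior state.
        if self.history:
            key, previous = self.history.pop()
            if previous is _ABSENT:
                del self.state[key]
            else:
                self.state[key] = previous

store = ReversibleStore()
store.apply("policy", "v1")
store.apply("policy", "v2")  # the system modifies its own policy
store.undo()                 # a human operator rolls it back
print(store.state)           # {'policy': 'v1'}
```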

Furthermore, “nothing should be taken off the table” when it comes to restricting, or outright banning, the development of AI technologies that could become uncontrollable.

Elon Musk and other prominent figures in the tech industry have endorsed Dr. Yampolskiy’s work. A vocal critic of uncontrolled AI development, Musk was among the more than 33,000 signatories of an open letter last year demanding a pause on “the training of AI systems more powerful than GPT-4.”

Despite the frightening effects AI may have on humanity, Dr. Yampolskiy hopes the concerns raised by his latest findings will spur further research into AI safety and security.

“We may not ever get to 100% safe AI, but we can make AI safer in proportion to our efforts, which is a lot better than doing nothing,” urged Dr. Yampolskiy. “We need to use this opportunity wisely.”

Technological evolution can seem like an unstoppable avalanche whose consequences, good and bad, people are bound to bear. Indeed, it already resembles a kind of uncontrollable intelligence to which we must submit. It is understandable that research, like curiosity, can only move forward; but neglecting the most obvious risks demonstrates a lack of intelligence on humanity’s part when it comes to protecting itself.

Dan Brokenhouse
