Will humanity be wiped out by self-improving AI?
Apart from privacy and safety issues, the potential for generative AI to wipe out humanity remains a serious open question given how quickly the technology is developing. Roman Yampolskiy, director of the University of Louisville’s Cyber Security Laboratory and an AI safety researcher known for his 99.999999% prediction that AI will wipe out humanity, recently argued that reaching the coveted AGI threshold is no longer a matter of time but of resources: it belongs to whoever can afford to buy enough data centers and computing power.
Dario Amodei, the CEO of Anthropic, and Sam Altman, the CEO of OpenAI, both estimate that artificial general intelligence (AGI) will be developed within the next three years, with powerful AI systems outperforming humans across a wide variety of tasks. Former Google CEO Eric Schmidt argues that we should consider halting AI work if systems begin to self-improve, while Altman believes AGI will be possible on existing hardware sooner than expected.
In a recent interview with the American television network ABC News, Schmidt stated:
“When the system can self-improve, we need to seriously think about unplugging it. It’s going to be hugely hard. It’s going to be very difficult to maintain that balance.”
Schmidt’s remarks on the rapid development of AI come at a crucial time: multiple sources suggest that OpenAI may have already attained AGI after making its o1 reasoning model widely available, and Altman has said that superintelligence may arrive within a few thousand days.
Although OpenAI may be close to reaching the AGI threshold, a former OpenAI employee cautions that the ChatGPT maker may not be prepared to handle everything that comes with an AI system that surpasses human cognitive capabilities.
Remarkably, Altman claims that the safety issues raised by the AGI benchmark won’t arise at the “AGI moment,” adding that AGI will whoosh by with surprisingly little impact on society. He does, however, expect AGI and superintelligence to keep advancing for some time to come, with AI agents and systems surpassing humans in the majority of tasks from 2025 onward.
The race toward AGI presents a complex landscape of opportunities and challenges that cannot be ignored. While industry leaders like Altman and Amodei project ambitious timelines for AGI development, warnings from experts like Yampolskiy and Schmidt highlight crucial concerns about safety, control, and humanity’s preparedness.

Notably, Altman’s surprisingly optimistic view of a smooth AGI transition, coupled with his aggressive development timeline, raises questions about transparency in AI development. His stance, predicting both rapid AGI achievement and minimal societal impact, seems paradoxical when set against other experts’ grave concerns. This disparity could suggest either an oversight of genuine risks or a strategic downplaying of dangers to maintain development momentum.

As we stand at this technological crossroads, the decisions made today about AI development and regulation will likely shape not just the immediate future of AGI but potentially the very course of human civilization. The key challenge ahead lies not only in achieving AGI but in ensuring honest dialogue about its implications while maintaining meaningful human oversight and control. The contrast between public narratives and expert warnings underscores the urgent need for transparent discussion of both the possibilities and perils of AGI development.