Potential and risks of AGI as experts predict its imminent arrival
Researchers in the field of artificial intelligence are striving to create computer systems with human-level intelligence across a wide range of tasks, a goal known as artificial general intelligence, or AGI.
These systems could understand themselves and control their own actions, including modifying their own code. Like humans, they could pick up new problem-solving skills on their own, without instruction.
The term “artificial general intelligence” (AGI) was popularized by the 2007 book of that name, edited by computer scientist Ben Goertzel and AI researcher Cassio Pennachin.
Even so, the concept of general machine intelligence has a long history in AI and is frequently depicted in popular science fiction books and movies.
“Narrow” AI refers to the AI systems that we now employ, such as the basic machine learning algorithms on Facebook or the more sophisticated models like ChatGPT. This indicates that instead of possessing human-like broad intelligence, they are made to do specific tasks.
In other words, these systems can outperform humans in at least one area, but their training data confines them to that particular task.
Artificial general intelligence, by contrast, would not be confined to its training data. It would be capable of reasoning and understanding across many domains of life and knowledge, much like a person: rather than merely following learned patterns, it could apply context and logic to new circumstances.
Because artificial general intelligence has never been built, scientists disagree on what it would mean for humanity. There is uncertainty about the possible risks, which of them are most likely to occur, and what the effects on society would be.
Some once believed AGI would never be achieved, but many scientists and technology leaders today think it is attainable within the next few years. Prominent names who hold this view include Elon Musk, Sam Altman, Mark Zuckerberg, and computer scientist Ray Kurzweil.
Pros and cons of AGI
Artificial intelligence (AI) has already demonstrated a wide range of advantages, including time savings for daily tasks and support for scientific study. More recent tools, such as content creation systems, can generate marketing artwork or write emails according to the user’s usual communication style. However, these tools can only use the data that developers give them to do the tasks for which they were specifically trained.
AGI, on the other hand, has the potential to serve humanity in new ways, particularly when sophisticated problem-solving abilities are required.
Three months after ChatGPT debuted, in February 2023, OpenAI CEO Sam Altman wrote in a blog post that artificial general intelligence might, in theory, increase resource availability, accelerate the global economy, and lead to ground-breaking scientific discoveries that push the boundaries of human knowledge.
AGI has the potential to grant people extraordinary new skills, enabling anyone to receive assistance with nearly any mental task, according to Altman. This would significantly improve people’s creativity and problem-solving abilities.
AGI also carries several serious risks, however. According to Musk in 2023, these dangers include “misalignment,” in which the system’s objectives might not match those of the people in charge of it, and the remote chance that a future AGI system could threaten human survival.
Though future AGI systems may deliver a lot of benefits for humanity, a review published in August 2021 in the Journal of Experimental and Theoretical Artificial Intelligence identified many potential concerns.
The review’s authors identified risks including existential threats, AGI systems lacking proper ethics, morals, and values, AGI systems being given or developing dangerous goals, and the creation of unsafe AGI.
The researchers also speculated that future AGI systems could advance by creating smarter iterations of themselves, possibly altering their initial objectives in the process.
Additionally, the researchers cautioned that even well-meaning AGI could have “disastrous unintended consequences,” as reported by LiveScience, adding that certain groups might use AGI for malicious ends.
When will AGI arrive?
There are varying views regarding when and whether humans will be able to develop a system as sophisticated as artificial general intelligence. Though opinions have changed over time, surveys of AI professionals indicate that many think artificial general intelligence could be produced by the end of this century.
In the 2010s, most experts predicted that AGI was roughly 50 years away. That estimate has since been lowered to a range of five to twenty years, and some specialists now suggest an AGI system could appear this decade.
Kurzweil stated in his book The Singularity is Nearer (2024, Penguin) that the achievement of artificial general intelligence will mark the beginning of the technological singularity, which is the point at which AI surpasses human intelligence.
This will be the turning point when technological advancement picks up speed and becomes uncontrollable and irreversible.
According to Kurzweil, superintelligence will manifest by the 2030s, following the achievement of AGI. He thinks that by 2045, humans will be able to directly link their brains to artificial intelligence, which will increase human consciousness and intelligence.
Goertzel, however, has said we might reach the singularity by 2027, and DeepMind co-founder Shane Legg thinks AGI will arrive by 2028. Musk, for his part, has predicted that AI will surpass human intelligence by the end of 2025.
Given the exponential pace of technological advancement, many people are understandably concerned about the impending emergence of artificial general intelligence as we stand on the cusp of a breakthrough. As previously mentioned, the risks are numerous, and many of them are hard to foresee. But the most pernicious threat may not come from ethical dilemmas, malicious intent, or even a loss of control, but rather from AGI’s ability to subtly manipulate.
The real threat might come from AGI’s superior intelligence, which could allow it to manipulate human behavior in ways so subtle and complex that we remain unaware of them. We could act believing we are making conscious, independent decisions when our choices are in fact the product of AGI’s quiet guidance. This is much like how people can be unwittingly swayed by political propaganda while believing their opinions are wholly their own, only in a far more sophisticated form.
The possibility of subtle influence poses a serious threat to human autonomy and decision-making. As we move closer to artificial general intelligence, we must address the obvious dangers while also building defenses against these subtler forms of manipulation. AGI has a bright future ahead of it, but keeping humanity in control of its own course will demand the utmost caution and critical thought.