How the growing intelligence of AIs could unsettle the world
In May 2014, Stephen Hawking, a physicist at Cambridge University, published an article aimed at raising awareness of the risks posed by rapidly advancing artificial intelligence. Writing in the UK newspaper The Independent, Hawking warned that the creation of a true thinking machine “would be the biggest event in human history”.
A machine exceeding human intelligence might “outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we cannot even understand”, he warned. Dismissing all of this as science fiction could turn out to be “potentially our worst mistake in history”.
Some of today's technology relies on what is called specialized, or “narrow,” artificial intelligence: robots that move boxes or make hamburgers, and algorithms that write reports, compose music, or trade on Wall Street. In fact, every practical artificial intelligence technology outside of science fiction is narrow AI.
The specialized character of real-world AI does not necessarily pose a barrier to the eventual automation of a great many jobs. At some level, the tasks that the majority of the workforce performs are routine and predictable. An enormous number of jobs at every skill level may someday be threatened by rapidly improving specialized robots, or by machine learning algorithms that sift through mountains of data. None of this requires machines with human-level intelligence.
To replace you in your job, a computer needs only to perform the specific tasks for which you are paid; it does not need to replicate the full range of your intellectual capabilities. To be sure, the majority of AI research and development continues to be directed toward niche applications, but there is every reason to expect these technologies to become dramatically more powerful and flexible over the coming decades.
Even as these specialized projects continue to produce useful results and attract investment, a far more daunting challenge lies in the distance. The Holy Grail of artificial intelligence remains the creation of a genuinely intelligent system: a machine that can think critically, demonstrate awareness of its own existence, and carry on a meaningful conversation.
The quest to build a truly thinking machine can be traced at least as far back as 1950, when Alan Turing published the paper that launched the field of artificial intelligence. In the decades that followed, expectations for AI research repeatedly outran any feasible technical foundation, especially given the speed of the computers of the era.
Disappointment invariably followed: investment dried up, research efforts stalled, and the field slid into the long, sluggish periods that have come to be known as “AI winters.” Yet spring has returned once more. The tremendous power of modern computers, advances in specific areas of AI research, and improvements in our understanding of the human brain are all fueling a great deal of optimism.
James Barrat, the author of a book on the implications of advanced AI, conducted an informal survey of roughly 200 researchers working on human-level artificial intelligence, as opposed to merely narrow AI. Within the field, this is known as Artificial General Intelligence, or AGI. Barrat asked the computer scientists to choose among four predictions for when AGI would be achieved.
The results: 42% of those surveyed predicted a thinking machine by 2030, 25% by 2050, and 20% by 2100. Only 2% believed it would never happen. Notably, a number of respondents wrote in comments suggesting that Barrat should have offered an even earlier option, perhaps 2020.
Gary Marcus, a cognitive scientist and NYU professor who blogs for the New Yorker, argues that recent advances in areas like deep learning neural networks have been greatly exaggerated.
Nonetheless, the field has clearly gained a great deal of momentum. The rise of companies like Google, Facebook, and Amazon, in particular, has dramatically accelerated progress. Never before have such wealthy corporations viewed AI as wholly essential to their business models, and never before has AI research sat so close to the center of competition between such powerful entities.
A similar competitive dynamic is emerging among nations. AI is becoming indispensable to the militaries, intelligence agencies, and surveillance systems of authoritarian states. Indeed, a full-fledged AI arms race may well be on the horizon. The important question is not whether the field as a whole faces any serious risk of another AI winter, but rather whether progress will remain confined to narrow AI or eventually extend to Artificial General Intelligence as well.
If AI researchers do eventually manage to make the leap to AGI, there is little reason to think the resulting machine would simply match human intelligence. Once AGI is achieved, we would likely soon be faced with a machine that is more intelligent than a person.
Of course, a thinking machine would also enjoy all the advantages computers already possess, including the capacity to perform calculations and retrieve information at speeds incomprehensible to us. We would soon share the planet with something entirely unprecedented: a genuinely alien, and superior, intellect.
And that may be only the beginning. Most AI researchers agree that such a system would eventually be driven to direct its intelligence inward, concentrating its efforts on improving its own design: rewriting its software, or perhaps employing evolutionary programming techniques to create, test, and optimize design improvements. The result would be an iterative process of “recursive improvement.”
The system would get smarter and more capable with each upgrade. The cycle would eventually speed up, leading to an “intelligence explosion” that would produce a machine that is thousands or even millions of times smarter than any human.
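The compounding dynamic described above can be illustrated with a toy numerical model. This is a hypothetical sketch, not anything from the text: “capability” is treated as a bare number, and each upgrade cycle is assumed to improve not only the system's capability but also the size of the next upgrade, so successive gains grow rather than stay constant.

```python
# Toy model of "recursive improvement" (illustrative assumption, not a real AI system):
# each cycle multiplies capability by (1 + gain), and also enlarges gain itself,
# so the improvement curve accelerates instead of growing at a fixed rate.
def recursive_improvement(capability=1.0, gain=0.10, meta_gain=0.05, cycles=10):
    """Return the capability trajectory over the given number of upgrade cycles."""
    history = [capability]
    for _ in range(cycles):
        capability *= 1 + gain   # the system upgrades itself
        gain *= 1 + meta_gain    # ...and improves its ability to upgrade
        history.append(capability)
    return history

trajectory = recursive_improvement()
# Each step's growth multiplier is larger than the last, the signature
# of the accelerating cycle described in the text.
```

In this sketch the acceleration comes entirely from the assumed `meta_gain` parameter: set it to zero and the curve collapses back to ordinary constant-rate growth, which is the intuitive difference between normal technological progress and the “intelligence explosion” scenario.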
Such an intelligence explosion, if it happened, would undoubtedly have profound effects on humanity. Indeed, it could well unleash a wave of disruption that would sweep through not just our economy but our entire civilization. In the words of futurist and inventor Ray Kurzweil, it would “rupture the fabric of history” and usher in an event, or perhaps an era, that has come to be known as “the Singularity.”
At the same time, this will raise a host of challenges: ethics and accountability in the use of AI data and decisions; data privacy and security; the risk of bias and discrimination in algorithms; the need to set the right level of AI autonomy so that everything remains under human control; the environmental sustainability of the resources AI consumes; and the dangers of manipulation, misinformation, and the concentration of power in the hands of a few entities. Addressing these challenges will require inclusive collaboration among governments, industries, and corporations to ensure the responsible and beneficial use of AI.
Rise of the Robots, by Martin Ford, is available to purchase here