As AI evolves, fear increases

When people witness machines that behave like humans, or computers that pull off feats of strategy and intellect once thought uniquely human, they sometimes joke about a future in which mankind will have to submit to robot rulers.

The sci-fi series "Humans" returned for its third season with the conflict between humans and AI taking center stage. In the new episodes, hostile people treat conscious synthetic beings with distrust, fear, and hatred. Violence erupts as Synths (the anthropomorphic robots of the series) struggle to defend not only their fundamental rights but also their lives from people who see them as deadly threats and as less than human.

Not everyone is eager to embrace AI in the real world, either. As computer scientists push the limits of what AI can do, leading figures in technology and science have cautioned about the risks artificial intelligence may pose to humanity, some even speculating that AI capabilities could end the human race.

But why does the notion of AI make humans feel so uneasy?

One well-known figure who has expressed concern about AI is Elon Musk. In July 2017, Musk told attendees of a National Governors Association meeting, “I have exposure to very cutting-edge AI, and I think people should be really concerned about it”.

“I keep sounding the alarm bell”, Musk added. “But until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal”.

Musk had previously referred to AI as “our biggest existential threat” in 2014, and in August 2017 he asserted that AI posed a greater threat to civilization than North Korea did. Also wary of the potential dangers of malicious AI, physicist Stephen Hawking, who died on March 14, 2018, warned the BBC in 2014 that “the development of full artificial intelligence could spell the end of the human race”.

Furthermore, it’s unsettling that certain programmers, particularly those at the MIT Media Lab in Cambridge, Massachusetts, seem determined to demonstrate just how terrifying AI can be.

In fact, MIT computer scientists used a neural network called “Nightmare Machine” to convert ordinary photographs into ominous, disturbing hellscapes, while an AI named “Shelley”, trained on 140,000 horror stories that Reddit users posted to the subreddit r/nosleep, wrote frightening stories of its own.

“We are interested in how AI induces emotions, fear, in this particular case”, Manuel Cebrian, a research manager at the MIT Media Lab, explained in an email about Shelley’s scary stories.

According to Kilian Weinberger, an associate professor in the Department of Computer Science at Cornell University, negative attitudes toward AI can be broadly divided into two categories: the notion that it will become conscious and try to destroy us, and the notion that immoral people will use it for harmful purposes.

“One thing that people are afraid of is that if super-intelligent AI, more intelligent than us, becomes conscious, it could treat us like lower beings, the way we treat monkeys”, he said.

Yet, as Weinberger pointed out, these worries about AI becoming conscious and destroying mankind rest on misunderstandings of what AI actually is. The algorithms that govern AI’s behavior confine it within very strict bounds. Certain categories of problems map well onto AI’s capabilities, making some tasks relatively easy for AI to solve. Nevertheless, he added, “most things do not map to that and aren’t appropriate”.

This means that, while AI may be capable of impressive feats within precisely defined parameters, those feats are the extent of its capabilities.

“AI reaching consciousness — there has been absolutely no progress in research in that area”, Weinberger said. “I don’t think that’s anywhere in our near future”.

Unfortunately, Weinberger said, the second unsettling possibility, an unethical person using AI for harmful ends, is far more likely than the first. Any piece of equipment or instrument can be used for good or ill, depending on the user’s intent. The prospect of weapons harnessing artificial intelligence is undoubtedly frightening, he noted, and would benefit from strict government oversight.

Weinberger speculated that if people could get past their apprehensions about hostile AI, they might be more receptive to its benefits. Improved image-recognition algorithms, he said, could one day help dermatologists spot moles that may be cancerous, and self-driving cars could eventually reduce the number of road fatalities, many of which are caused by human error.

Yet, in the "Humans" universe of self-aware Synths, fears of conscious AI lead to violent altercations between Synths and people, and the conflict between humans and AI looks set to deepen and intensify as the series continues.

AI is evolving rapidly, and many people fear that its potential could radically change their lives. For now, though, AI's capabilities are limited to specific tasks; we are far from a general AI that can do anything. And while some reassure us that this fear is unfounded, others believe the risks will materialize in the future if we underestimate the problem. Since the intelligence of AIs may one day far exceed our own, we might not even notice the problem when it arrives.