The fusion of humans and machines can lead us to a problematic future
Most people are aware of the plethora of artificial intelligence (AI) apps created to increase our productivity and creativity. We have apps that use text prompts to create art, as well as the contentious ChatGPT, which raises important concerns about originality, errors, and plagiarism. Despite these worries, AI is growing more pervasive and invasive.
The internet and smartphones followed a similar trajectory. But unlike those technologies, many scientists and philosophers believe AI will eventually achieve (or even surpass) human-style “thinking”. This possibility, together with our growing reliance on AI, underpins the futuristic idea of a “technological singularity”.
The American science fiction author Vernor Vinge popularized the expression a few decades ago.
The term “singularity” now refers to a hypothetical moment in the future when artificial general intelligence (AGI), or AI with human-level capabilities, will have advanced to the point where it will permanently alter human civilization.
It would herald the beginning of our unbreakable bond with technology: from that point on, we could no longer function as humans without it.
Brain implants
We only need to go as far as recent breakthroughs in brain-computer interfaces (BCIs) to realize why this isn’t the stuff of fairy tales. Several futurists believe that BCIs are a natural starting point for a singularity because they combine mind and machine in a way that no other technology has been able to do.
Neuralink, a company run by Elon Musk, is seeking approval from the US Food and Drug Administration to begin human trials of its BCI. Neural connectors would be implanted in participants’ brains, allowing them to issue instructions by thought alone. Neuralink wants to help the blind see again and paraplegics walk. But its ambitions extend further.
Brain implants, according to Musk, would enable telepathic contact and pave the way for the co-evolution of humans and machines. He contends that if we don’t employ such technology to improve our intelligence, superintelligent AI may wipe humanity out.
Musk is not the only one who believes AI’s capabilities will advance rapidly. According to surveys, the majority of AI researchers expect AI to be “thinking” at a human level within this century. They disagree on whether this would imply consciousness, and on whether AI that reaches this level would inevitably harm us.
A patient with amyotrophic lateral sclerosis (ALS) could use a minimally invasive device developed by Synchron, another BCI technology company, to write emails and access the internet.
Tom Oxley, chief executive officer of Synchron, thinks that in the long run, brain implants may totally alter human communication, going beyond prosthetic rehabilitation. He claimed, when addressing a TED audience, that users may one day be able to “throw” their emotions so that others might experience what they’re feeling. If this is the case, “the full potential of the brain would then be unlocked,” he stated.
Early BCI developments could be seen as the first steps toward the hypothetical singularity, in which human and machine merge into one. This need not imply that machines will take on a life of their own or rule over us. But the integration itself, and our subsequent dependence on it, could permanently alter us.
It’s also important to note that DARPA, the research and development agency of the US Department of Defense, provided some of Synchron’s initial financing. DARPA is credited with helping to create the internet, so it seems sensible to keep an eye on where its investments are headed.
AGI
Futures expert and former Google innovation engineer Ray Kurzweil believes that AI-enhanced humans could be put onto the autobahn of evolution and sent hurtling onward at crazy speeds.
In his 2012 book How to Create a Mind, Kurzweil proposed that the neocortex, the brain region thought to be responsible for “higher functions” such as emotion, cognition, and sensory perception, is a hierarchical system of pattern recognizers. Replicated in a machine, he argues, this architecture could produce artificial superintelligence.
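The idea of a hierarchy of pattern recognizers can be illustrated with a deliberately toy sketch: low-level recognizers fire on stroke patterns and report letters, and a higher-level recognizer fires on the resulting sequence of letters. All names, patterns, and the exact-match rule here are invented for illustration and are far simpler than anything in the book, where recognizers are probabilistic and learn their patterns.

```python
class PatternRecognizer:
    """A toy recognizer that fires when its input exactly matches
    a stored pattern. Higher levels consume lower levels' outputs."""

    def __init__(self, name, pattern):
        self.name = name
        self.pattern = tuple(pattern)

    def recognize(self, inputs):
        """Return this recognizer's name on a match, else None."""
        return self.name if tuple(inputs) == self.pattern else None

# Level 1: recognize letters from stroke features.
letter_A = PatternRecognizer("A", ["/", "\\", "-"])
letter_T = PatternRecognizer("T", ["-", "|"])

# Level 2: recognize a word from the letters reported below it.
word_AT = PatternRecognizer("AT", ["A", "T"])

strokes = [["/", "\\", "-"], ["-", "|"]]
letters = [r for s in strokes
           for r in (letter_A.recognize(s), letter_T.recognize(s))
           if r is not None]
print(letters)                      # ['A', 'T']
print(word_AT.recognize(letters))   # AT
```

The point of the hierarchy is that each level only needs to solve a small matching problem; complexity emerges from stacking many such levels.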
He estimates that the singularity will occur by 2045 and speculates that it may usher in a time of super-intelligent people, perhaps even the Nietzschean “Übermensch”, someone who transcends all limitations of the material world to realize their full potential.
Yet not everyone believes that AGI is beneficial. Super-intelligent AI, according to the late, brilliant theoretical physicist Stephen Hawking, may bring about the end of the world. Hawking told the BBC in 2014:
“the development of full artificial intelligence could spell the end of the human race. […] It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded”.
Nevertheless, Hawking supported BCIs.
A hive mind
The AI-enabled “hive mind” is another concept tied to the singularity. Merriam-Webster defines a hive mind as:
“the collective mental activity expressed in the complex, coordinated behavior of a colony of social insects (such as bees or ants) regarded as comparable to a single mind controlling the behavior of an individual organism”.
Neuroscientist Giulio Tononi developed Integrated Information Theory (IIT), which links consciousness to how much information a system integrates, and which can be applied to this scenario. Some read it as implying that we are all moving toward a fusion of information and thought.
Galileo’s Error, written by philosopher Philip Goff, does a good job of elaborating on the consequences of Tononi’s idea.
“IIT predicts that if the growth of internet-based connectivity ever resulted in the amount of integrated information in society surpassing the amount of integrated information in a human brain, then not only would society become conscious but human brains would be ‘absorbed’ into that higher form of consciousness. Brains would cease to be conscious in their own right and would instead become mere cogs in the mega-conscious entity that is the society including its internet-based connectivity”.
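IIT’s actual measure, Φ (“phi”), is considerably more involved, but the intuition that “integration” is something quantifiable can be illustrated with a crude stand-in: the mutual information between two subsystems, which is zero when they are informationally independent and grows as their states become interdependent. This is a toy proxy for illustration only, not Tononi’s formalism.

```python
import itertools
import math

def mutual_information(joint):
    """Mutual information I(A;B) in bits from a joint distribution.

    `joint` maps (a, b) state pairs to probabilities. Used here as
    a crude proxy for how "integrated" two subsystems are: 0 means
    informationally independent, higher means more interdependent.
    """
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# Two perfectly correlated binary subsystems: maximally "integrated".
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent subsystems: no integration at all.
independent = {(a, b): 0.25 for a, b in itertools.product([0, 1], repeat=2)}

print(mutual_information(correlated))   # 1.0 bit
print(mutual_information(independent))  # 0.0 bits
```

The provocative step in Goff’s reading is extrapolation: if some such quantity kept growing across society’s networked brains, the “most integrated” system might no longer be any individual brain.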
It’s important to note that there is little evidence such a thing will ever happen. Yet the theory raises significant questions about the nature of consciousness itself and about rapidly advancing technology (not to mention how quantum computing might accelerate all this). It is conceivable that the emergence of a hive mind would end individuality and the institutions that depend on it, such as democracy.
In a recent blog post, OpenAI (the company that created ChatGPT) reaffirmed its commitment to attaining AGI. Undoubtedly, many will do the same.
Our lives are increasingly governed by algorithms in ways we often cannot discern and must therefore simply accept. Many aspects of a technological singularity promise to greatly improve our lives, but the fact that these AIs are creations of private industry raises concerns.
They are largely unregulated and subject to the whims of impulsive “technopreneurs” with access to far more resources than most of us combined. Whether we consider them reckless, naive, or visionary, we have a right to know what they intend (and to be able to challenge it).
Technology and the human body may merge to give people with diseases a better life. But it is troubling if we must also merge simply to counter the power of AIs and avoid being overwhelmed by their capabilities. It seems our lives are on a train we cannot get off, on which we can only change class.