A neuroscientist points out the importance of a more ‘conscious’ AI

Chatbots are evolving quickly, and thanks to AI they can now communicate with people in natural language, much as real people do. However, although these conversations may read as if they came from humans, they lack feelings.

In this regard, a Princeton neuroscientist warned that AI-powered chatbots may come across as sociopaths if they remain emotionless.

By definition, a sociopath is a person with antisocial personality disorder, a mental illness characterized by a recurring pattern of disregarding the rights and feelings of others. People with antisocial personality disorder frequently act in a manipulative or dishonest manner and may also have criminal or violent impulses.

Applied to AIs, this means that their purely rational behavior can lead them to make choices that serve the intent of their creators, or of whoever controls them, rather than to act honestly. For example, an AI built to sell products could rationally manipulate the person it is talking to into buying a product, regardless of ethics.

According to a recent essay by Princeton neuroscientist Michael Graziano, which was covered by The Wall Street Journal, these chatbots could be a real threat to people unless developers build in more sensitivity.

The risks associated with AI may not be prominent right now, but as these advanced technologies are improved and developed, they may grow. Graziano suggests integrating human attributes such as empathy and prosocial conduct to make these systems more human-like. Notably, the neuroscientist contends that for these systems to comprehend such traits and adjust their behavior to align with human values, they will require some form of built-in consciousness.


However, “consciousness” is not something traditionally attributed to machines. It is a bit like talking about machines and souls: the two seem to belong to opposite fields.

Awareness is exceedingly challenging to quantify, and philosophically speaking it is hard to determine whether any individual, human or robot, is even somewhat conscious. Graziano’s suggestion for how an AI should be evaluated is a “reverse Turing test”. In the classic Turing test, a human judge probes a machine to see whether its behavior is indistinguishable from a person’s; in the reversed version, the machine is tested instead, to see whether it can tell when it is talking to a human and when to another computer.
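A reverse Turing test of this kind could, in principle, be run as a simple classification harness. The sketch below is purely illustrative and is not Graziano’s method: the judging heuristic (variance in reply length) and all the sample replies are invented assumptions, standing in for whatever signal a real system would learn.

```python
# Toy sketch of a "reverse Turing test": the machine judges whether its
# conversation partner is human or another machine. The heuristic here is
# an invented assumption -- it flags overly uniform, template-like replies
# as machine-generated, on the rough idea that humans vary more.

def judge_partner(replies: list[str]) -> str:
    """Classify a conversation partner as 'human' or 'machine'."""
    # Compute the variance of reply lengths (in words).
    lengths = [len(r.split()) for r in replies]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    # Humans tend to vary reply length; template-driven bots are uniform.
    return "human" if variance > 4.0 else "machine"

# Invented sample conversations, for illustration only.
human_like = ["hey!", "sorry, got distracted, what were you saying about the game?", "lol no"]
bot_like = ["I can help with that.", "I can check on that.", "I can look into it."]

print(judge_partner(human_like))  # varied lengths -> "human"
print(judge_partner(bot_like))    # uniform lengths -> "machine"
```

Any real evaluation would of course need a learned judge and controlled transcripts; the point of the sketch is only the reversal of roles, with the machine as the judge.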

Empathy, however, can even be achieved rationally. Indeed, the ability to understand and share the feelings of others can be both affective and cognitive:

  • Affective empathy: also known as emotional empathy, this involves feeling the emotions that others are experiencing;
  • Cognitive empathy: this involves the ability to understand, and take the perspective of, the thoughts and feelings of others, without necessarily feeling those emotions yourself.

Therefore, even cognitive empathy alone could help make AIs less coldly rational and more attentive to people’s needs and feelings.

According to Graziano, if these issues aren’t resolved, people will have created powerful sociopathic machines capable of making important judgments. In his view, systems like ChatGPT and other language models are currently only at the beginning. That could change in a year or five, however, if research into machine ‘awareness’ continues and development advances.


“A sociopathic machine that can make consequential decisions would be powerfully dangerous. For now, chatbots are still limited in their abilities; they’re essentially toys. But if we don’t think more deeply about machine consciousness, in a year or five years we may face a crisis,” said Graziano.

AIs could be trained to understand the different emotional consequences a person might experience depending on how the AI behaves, so that they can work out how to act more ethically. However, an overly ‘aware’ AI could also have unpredictable implications, and the concept of ethics could still change over time and/or be distorted by rational deception; the movie “I, Robot” is an example of this.
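The idea of choosing actions by their predicted emotional consequences can be sketched very simply. Everything below is a hypothetical illustration, not an actual training method: the candidate actions, the impact scores, and the -1..1 scale are all invented assumptions.

```python
# Toy sketch of "cognitive empathy" as action selection: the agent holds a
# (here hand-written, invented) prediction of the emotional impact each
# action would have on the user, and picks the least harmful one.

# Hypothetical predicted emotional impact of each reply, on a -1..1 scale
# (negative = distress, positive = reassurance). These values are invented.
PREDICTED_IMPACT = {
    "pressure the user into buying": -0.8,
    "answer the question honestly": 0.4,
    "offer to follow up later": 0.6,
}

def most_ethical_action(impacts: dict[str, float]) -> str:
    """Choose the action predicted to cause the least emotional harm."""
    return max(impacts, key=impacts.get)

print(most_ethical_action(PREDICTED_IMPACT))  # "offer to follow up later"
```

In a real system the impact predictions would have to be learned rather than hand-written, which is exactly where the article’s worry applies: a model that predicts feelings accurately could use that skill to manipulate as easily as to help.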