Understanding makes the difference

What is understanding? It sounds simple, but if you stop to think about what it means, you may find it surprisingly hard to define. On the surface, understanding something might mean having all the necessary information about a specific topic. But is that really all there is to it?

Suppose you happened to talk with a stranger about a topic you barely know, but by speaking vaguely you managed to carry on the conversation: would that be any different from actually knowing the topic? Now imagine you run into that person every day and you don’t want to be taken for someone who doesn’t know the subject, so each evening you gather just enough information to face a plausible conversation the next day. You don’t really know what you’re saying, yet you manage to come across as an expert. Are you really understanding anything new? Or are you just simulating?

It’s the same dilemma the philosopher John Searle tried to capture when he conceived the “Chinese room” thought experiment to show that the human mind is not simply a computer running a program. In other words, according to Searle, an A.I. could not become conscious or self-conscious: syntax (the structure of sentences and the rules for manipulating symbols) is not enough to produce semantics (the meaning of the words).

To understand the “Chinese room” experiment, imagine a room with a person inside who can’t speak a word of Chinese. Outside the room, another person passes a message written in Chinese (the input) through a slot in the door. Using a book of instructions written in their own native language, the person inside matches the symbols and composes a reply, then passes it back out of the room (the output), without understanding any Chinese at all. The person outside may therefore believe the one inside is a native Chinese speaker: the same impression we may get when talking with an A.I. that can answer our questions. The person inside the room represents the A.I. algorithm, which processes information without knowing anything about it.
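To make the idea concrete, here is a minimal sketch (in Python, with made-up phrases) of a “rulebook” chatbot: it mechanically maps incoming symbols to outgoing symbols and produces plausible replies without any grasp of what they mean.

```python
# A minimal sketch of the "Chinese room": a rulebook that mechanically maps
# incoming symbols to outgoing symbols. The phrases below are illustrative
# placeholders; no understanding is involved anywhere.

RULEBOOK = {
    "你好吗？": "我很好，谢谢！",          # "How are you?" -> "I'm fine, thanks!"
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather today?" -> "It's nice today."
}

def person_in_the_room(message: str) -> str:
    """Follow the rulebook; the 'person' never knows what the symbols mean."""
    return RULEBOOK.get(message, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(person_in_the_room("你好吗？"))  # from outside, it looks like fluent Chinese
```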

Voice assistants like Siri, Alexa, or Google Assistant therefore simulate understanding. They do it better and better, yet it’s still easy to trick them and find out that they don’t really know what you’re saying. If you ask, for example, “Will it rain tomorrow?”, the assistant will answer correctly, but if you ask “Will water fall from the clouds tomorrow?”, it won’t get it, even though a human would easily see the point. Another way to test an A.I. is to ask for something defined by what it is not, for example “restaurants that aren’t McDonald’s”: the voice assistant will most likely fail to answer correctly.
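As an illustration only (no real assistant works exactly this way), here is a hypothetical keyword-based intent matcher: it handles the literal question but misses the paraphrase and silently drops the negation.

```python
# Hypothetical keyword-based intent matching, shown only to illustrate the
# failure mode: surface patterns are matched, meaning is not.

INTENTS = {
    "weather_forecast": {"rain", "snow", "weather", "forecast"},
    "find_restaurant": {"restaurant", "restaurants", "eat", "food"},
}

def match_intent(query: str) -> str:
    words = set(query.lower().replace("?", "").split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(match_intent("Will it rain tomorrow?"))                     # weather_forecast
print(match_intent("Will water fall from the clouds tomorrow?"))  # unknown: the paraphrase shares no keyword
print(match_intent("restaurants that aren't McDonald's"))         # find_restaurant, but the negation is ignored
```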

However, A.I. capabilities keep increasing, so those gaps might be filled in the future. Just think of the Replika chatbot, based on the GPT-3 algorithm, which is far more advanced than any mobile voice assistant. Nevertheless, it is still only a more accurate simulation of understanding, and it lacks a personality. It’s like having several rooms where, in each of them, a different conversation about a different topic is going on. You ask or say something about a topic, and the GPT-3 algorithm draws from the many possible answers already available for that topic in that specific room. It’s as if you answered questions using your friends’ sentences: you would appear to be a person with a personality who understands, but you’re just faking it.
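Purely as a sketch of the “rooms” metaphor above (GPT-3 itself generates text rather than retrieving stored sentences), replying with ready-made answers per topic might look like this:

```python
import random

# A sketch of the "several rooms" metaphor: each room holds ready-made
# sentences about one topic, and a reply is simply picked from the right room.
# It can sound competent, but nothing is understood.

ROOMS = {
    "weather": ["It looks like it might rain later.", "The forecast says sunshine all day."],
    "football": ["That match was decided by a single counterattack.", "Their defence has been shaky all season."],
}

def reply(topic: str) -> str:
    return random.choice(ROOMS.get(topic, ["I'd rather talk about something else."]))

print(reply("weather"))
print(reply("football"))
```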

In any case, all of us, as humans, pass through a “Chinese room” stage: we imitate ready-made sentences when we know nothing about a topic and have to face a conversation about it. So, since even we simulate understanding, a question arises: can imitation turn into true understanding?

Personally, I think the behavior of an A.I. can’t turn into understanding, only into a more advanced simulation of it (at least for now), as it tries to predict the possible requests of its user. A human, instead, can sometimes turn a simulation of understanding into real knowledge of a topic: when they become able to work out on their own what is and isn’t important about a subject; when they can build an argument by themselves, make connections between their own knowledge and other people’s, and create new ones; and when they can anticipate others’ arguments or take them apart. However, it all starts from interest and curiosity, something else an A.I. (at the moment) is devoid of.

Source: bigthink.com

Dan Brokenhouse
