How an AI can acquire a language

Speech, unlike written text or scripted dialogue, is full of imperfections: false starts, interruptions, incomplete sentences, and even outright mistakes, especially in informal conversation. That is why it seems amazing that we can learn a language in the face of all these issues.

However, many linguists say the reason we can learn a language easily is grammar. In their view, grammar is what rules the chaos that is language.

According to Noam Chomsky, a modern linguist, children have an innate sense of language and grammar thanks to a hypothesized mental faculty called the language acquisition device (LAD), which is thought to contain the natural ability to learn and recognize a first language. The LAD, on this view, is where all people acquire a shared universal grammar.

According to the LAD theory, children are born knowing a predetermined set of sentence structures, or potential combinations of subjects, verbs, objects, and modifiers. Although it’s rare for kids to master spoken grammar in their early years, the LAD hypothesis contends that by combining the sentence fragments and run-on sentences of everyday speech with intrinsic universal grammar rules, kids can develop a comprehensive language in just a few years. On this theory, a kid does not spend their early years merely repeating words and phrases at random, but rather observing grammar rules and supplementary rules in order to create new variants of sentence structure.

With the rise of AI, however, we have discovered that these powerful algorithms can write a variety of texts, such as articles, poetry, and code, after being trained on a vast amount of language input, without ever starting from grammar.


Although the texts these AIs produce are sometimes nonsensical, biased, or strangely worded, most of the sentences they generate are grammatically correct, even though the models were never taught any grammar.

A popular example is GPT-3, a huge deep-learning neural network with 175 billion parameters. It was trained on hundreds of billions of words from the internet, books, and other sources to anticipate the next word in a sentence based on the words that came before it. A learning algorithm adjusts its parameters whenever the model makes a mistaken prediction.
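The core idea, learning to predict the next word purely from exposure, can be illustrated with a toy sketch. This is not how GPT-3 actually works (it uses a neural network trained by gradient descent, not counting), but a minimal bigram counter shows how next-word prediction can emerge from raw text alone, with no grammar rules supplied:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, how often each other word follows it."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Predict the continuation seen most often in training, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny stand-in for "billions of words" of training text:
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # 'cat' ("cat" follows "the" most often)
print(predict_next(model, "sat"))  # 'on'
```

A real language model conditions on far more than the single previous word and learns smooth probabilities rather than raw counts, but the learning signal is the same: statistics of what tends to come next.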

It’s like listening to or reading billions of texts without knowing the grammar, and learning purely by making connections inductively.

Amazingly, GPT-3 can respond to instructions suggesting the writing style to adopt: you can ask for a description of a movie or for a synopsis. Additionally, just by learning to predict the next word, GPT-3 can solve analogies on par with the SAT, answer reading-comprehension questions, and even work through basic math problems.

But the resemblance to human language doesn’t end there. These artificial deep-learning networks appear to follow some of the same basic principles as the human brain, according to research published in Nature Neuroscience. The research team, headed by neuroscientist Uri Hasson, first evaluated how well humans and GPT-2, the predecessor of GPT-3, could anticipate the next word in a story from a podcast. It turned out that people and the AI predicted the same word approximately 50% of the time.
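The comparison the researchers made can be sketched as a simple agreement rate between two lists of guesses. The prediction lists below are invented for illustration, not data from the study:

```python
def agreement_rate(human_preds, model_preds):
    """Fraction of positions where human and model guessed the same next word."""
    assert len(human_preds) == len(model_preds)
    matches = sum(h.lower() == m.lower()
                  for h, m in zip(human_preds, model_preds))
    return matches / len(human_preds)

# Hypothetical guesses for six upcoming words in a story:
human = ["the", "dog", "ran", "away", "from", "home"]
model = ["the", "cat", "ran", "away", "to", "home"]
print(agreement_rate(human, model))  # 0.666... (4 matches out of 6)
```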

While the volunteers listened to the story, the researchers monitored their brain activity. The best explanation for the activation patterns they observed was that people’s brains, like GPT-2, relied on the cumulative context of up to 100 prior words when generating predictions, rather than just the previous word or two.
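The notion of a bounded prediction context can be made concrete with a one-line sliding window. The function name and the 100-word default are illustrative choices, not anything from the study:

```python
def prediction_context(transcript, window=100):
    """Return the most recent `window` words, i.e. the context a predictor conditions on."""
    words = transcript.split()
    return words[-window:]

# A 250-word dummy transcript: word0 word1 ... word249
story = " ".join(f"word{i}" for i in range(250))
ctx = prediction_context(story)
print(len(ctx))  # 100
print(ctx[0])    # 'word150' (everything older is outside the window)
```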


“Our finding of spontaneous predictive neural signals as participants listen to natural speech suggests that active prediction may underlie humans’ lifelong language learning,” the researchers wrote.

The fact that these latest AI language models receive so much input (GPT-3 was trained on linguistic data equivalent to 20,000 human years) could be cause for concern. However, a preliminary study that has not yet undergone peer review found that GPT-2, even when trained on merely 100 million words, can still simulate human next-word predictions and brain activations. That amount falls well within the range of linguistic exposure a typical child receives over the first 10 years of life.

However, we cannot say that GPT-3 or GPT-2 learns a language in the same manner as young children do. These AI models don’t seem to understand much of what they are saying, if anything at all, even though comprehension is essential to human language use. Still, they show that a learner, whether human or artificial, can learn a language well enough from simple exposure to produce perfectly acceptable grammatical sentences, in a manner that resembles human brain function.

Many linguists have long held the view that learning a language is impossible without an inherent grammar structure. The new AI models suggest the contrary: grammatical language production can be learned from linguistic experience alone. By the same token, kids may be able to learn a language without innate grammar skills.

To help them improve their language skills, children should participate in conversation as often as possible. Becoming a proficient language user requires linguistic experience more than it requires grammar.


AI gives us an example of what it is like to assimilate content over a very long time. Although an AI lacks human consciousness and doesn’t understand the way we do, we can theorize that the longer our exposure to a language, the more of its mechanisms we absorb. We could then predict the correct structure of an incomplete sentence, or pick the word that best fits among others, without having to understand the meaning.