AI beyond human limits


From AlphaGo to modern language models

Truth and accuracy are crucial for AI systems, and today those qualities are still shaped by human thought. In the future, machine learning may surpass humans thanks to new AI models that learn by experimenting on their own.

One early example is DeepMind’s AlphaGo. Go is an ancient strategy board game, originally from China, widely considered one of the most complex and profound board games in the world. After defeating the European Go champion in 2015, AlphaGo went on to beat the world’s top-ranked human player in 2017. Its successor, AlphaGo Zero, marked the deeper breakthrough: using “self-play reinforcement learning,” it mastered Go without human guidance or preset rules, playing millions of games against itself and learning through trial and error.
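To make the idea concrete, here is a minimal, self-contained Python sketch of self-play reinforcement learning, using tic-tac-toe as a stand-in for Go. It is an illustration only: AlphaGo Zero actually pairs deep neural networks with Monte Carlo tree search, and the game, parameters, and tabular update rule below are simplifications chosen for brevity.

```python
# Minimal self-play reinforcement learning sketch (tic-tac-toe).
# Both "players" share one value table and improve purely by
# playing against themselves -- no human games, no preset strategy.
import random
from collections import defaultdict

Q = defaultdict(float)      # (board_state, move) -> estimated value
ALPHA, EPSILON = 0.5, 0.1   # learning rate, exploration rate

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def play_one_game():
    board, history, player = [' '] * 9, [], 'X'
    while True:
        state = ''.join(board)
        moves = [m for m, c in enumerate(board) if c == ' ']
        if random.random() < EPSILON:                    # explore
            move = random.choice(moves)
        else:                                            # exploit
            move = max(moves, key=lambda m: Q[(state, m)])
        history.append((state, move))
        board[move] = player
        if winner(board) or ' ' not in board:
            reward = 0 if winner(board) is None else 1   # 1 = decisive game
            # Walk backwards: the last mover won (+1), the loser's
            # moves get -1, and draws push everything toward 0.
            for steps_back, (s, a) in enumerate(reversed(history)):
                sign = 1 if steps_back % 2 == 0 else -1
                Q[(s, a)] += ALPHA * (sign * reward - Q[(s, a)])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(20000):   # trial and error over thousands of self-play games
    play_one_game()
```

Nothing in this loop encodes human tactics; the agent’s knowledge comes entirely from the outcomes of its own games. The same principle, scaled up with neural networks and search, is what let AlphaGo Zero rediscover and then surpass centuries of human Go theory.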

In chess, AlphaZero was developed to go beyond earlier engines like Deep Blue, which relied on handcrafted human strategies. In a 100-game match, AlphaZero beat Stockfish, then the strongest chess engine, winning 28 games and drawing the remaining 72.

Breaking free from human constraints

When DeepMind and OpenAI moved away from mimicking human strategies, their models excelled in complex games like Shogi, StarCraft II, and Dota 2. These AIs developed unique cognitive strengths by learning through experimentation rather than human imitation.

For instance, AlphaZero never studied grandmaster games or classic openings. Instead, it forged its own understanding of chess from the logic of wins and losses alone. It proved that an AI relying on self-developed strategies could outmatch any model trained solely on human insight.

New frontiers in language models

OpenAI’s latest model, referred to as “o1,” may be on a similar trajectory. While previous large language models (LLMs) such as those behind ChatGPT were trained on vast amounts of human text, o1 adds a novel feature: it takes time to generate a “chain of thought” before responding, allowing it to reason more effectively.


Unlike earlier LLMs, which simply generated the most likely sequence of words, o1 attempts to solve problems through trial and error. During training, it was permitted to experiment with different reasoning steps to find effective solutions, similar to how AlphaGo honed its strategies. This allows o1 to develop its own understanding of useful reasoning in areas where accuracy is essential.
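As a loose illustration of that training idea, the sketch below samples many candidate “reasoning chains” for a toy arithmetic puzzle and keeps only those whose final answer verifies. Everything here is an assumption made for demonstration (the puzzle, the operations, the sampling loop); OpenAI has not published o1’s actual training procedure.

```python
# Trial and error over reasoning chains: propose multi-step
# solutions at random, verify the end result, keep what works.
import random

OPS = [("add 3", lambda x: x + 3),
       ("double", lambda x: x * 2),
       ("subtract 1", lambda x: x - 1)]

def propose_chain(start, target, max_steps=4):
    """Sample one random chain of reasoning steps."""
    value, steps = start, []
    for _ in range(max_steps):
        name, fn = random.choice(OPS)
        value = fn(value)
        steps.append(f"{name} -> {value}")
        if value == target:
            return steps, True       # chain verified: answer checks out
    return steps, False              # dead end: discarded

# Puzzle: turn 2 into 9 using the operations above.
verified = []
for _ in range(2000):
    chain, ok = propose_chain(start=2, target=9)
    if ok:
        verified.append(chain)

if verified:
    print(f"{len(verified)} verified chains; one of the shortest:")
    print(" | ".join(min(verified, key=len)))
```

In a real system, the random sampler would be the language model itself, and verified chains would feed back into training so that useful reasoning steps become more likely. The toy loop only shows why a checkable final answer makes that feedback possible.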

The shift toward autonomous reasoning

As AIs advance in trial-and-error learning, they may move beyond human-imposed constraints. The potential next step involves AIs embodied in robotic forms, learning from physical interactions instead of simulations or text. This would enable them to gain an understanding of reality directly, independent of human-derived knowledge.

Such embodied AIs might not approach problems through traditional scientific methods or human categories like physics and chemistry. Instead, they might develop their own methods and frameworks, exploring the physical world in ways we can’t predict.

Toward an independent reality

Although autonomous learning by physical AIs is still in its early stages, companies like Tesla and Sanctuary AI are developing humanoid robots that may one day learn directly from real-world interactions. Unlike virtual models that train at high simulated speeds, embodied AIs would learn at the natural pace of reality, limited by the resources available to them but potentially cooperating through shared learning.

OpenAI’s o1 model, though text-based, hints at the future of AI—a point at which these systems may develop independent truths and frameworks for understanding the universe beyond human limitations.

The development of LLMs that can reason on their own and learn by trial and error points to an exciting avenue for rapid discoveries across many fields. Allowing AI to think in ways we might not understand could lead to discoveries and solutions that go beyond human intuition. But this advance requires a fundamental change: we must place more trust in AI while staying alert to its potential for unexpected consequences.


Because these models build frameworks and generate information that may not be readily grasped, there is a real risk of manipulation, or of relying on AI outputs without fully understanding their underlying logic. To ensure AI functions as a genuine partner in expanding human knowledge rather than as an enigmatic and possibly unmanageable force, it will be crucial to strike a balance between confidence and close supervision.