Harnessing opportunity and uncertainty

In the past, eras of rapid development and change have ushered in periods of enormous uncertainty. In his 1977 book The Age of Uncertainty, Harvard economist John Kenneth Galbraith described the achievements of market economics but also foresaw a time of instability, inefficiency, and social inequality.

As we navigate the transformational waves of AI today, a new era of comparable uncertainty is beginning. This time, however, the driving force is technology, especially the rise and development of AI, rather than economics alone.

The increasing presence of AI in our lives

The effects of AI are already increasingly visible in everyday life. The technology is beginning to permeate our lives, from self-driving cars to chatbots that can impersonate missing loved ones to AI assistants that help us at work.

With the impending AI tsunami, AI will soon be far more common still. Ethan Mollick, a professor at the Wharton School, recently wrote about the findings of a study on the future of professional work. The experiment focused on two groups of Boston Consulting Group consultants, each assigned a set of common tasks. One group was allowed to use currently available AI to support its work; the other was not.

Mollick reported: “Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without”.

Although it now seems unlikely, it remains possible that problems with large language models (LLMs), such as bias and confabulation, will cause this wave to dissipate. The technology is already showing its disruptive potential, but it will be some time before we feel the tsunami’s full force. Here is a preview of what is to come.

The upcoming generation of AI models

The next generation of LLMs, surpassing the present crop of GPT-4 (OpenAI), PaLM 2 (Google), LLaMA (Meta), and Claude 2 (Anthropic), will be more advanced and more general. Elon Musk’s new start-up, xAI, may also introduce a brand-new and potentially very powerful model. Reasoning, common sense, and judgment remain major obstacles for these models, but we can anticipate progress in each of these areas.

The Wall Street Journal reported that Meta is developing a next-generation model that will be at least as capable as GPT-4, with an expected arrival around 2024. Even though OpenAI has been quiet about its future plans, it is logical to assume that it, too, is developing its next generation.

According to currently available information, “Gemini”, from the merged Google Brain and DeepMind AI team, is the most significant new model. Gemini may be a substantial leap beyond current technology. Sundar Pichai, the CEO of Alphabet, stated this past May that the model’s training had already begun.

“While still early, we’re already seeing impressive multimodal capabilities not seen in prior models”, Pichai wrote in a blog post at the time.

Multimodal means the model can process and comprehend two forms of data input, text and images, making it the basis for both text-based and image-based applications. The reference to capabilities not seen in prior models suggests more emergent or unexpected traits and behaviors. The ability to write computer code is an example of an emergent capability in the current generation, because it was not anticipated.
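To make the idea concrete, here is a minimal sketch of what a multimodal prompt looks like in code. The class and method names are hypothetical illustrations of the pattern, not Gemini’s (or any vendor’s) actual API:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class ImageInput:
    """Placeholder for raw image bytes (e.g., a photo or a chart)."""
    data: bytes

# A multimodal prompt freely interleaves text and images in one sequence.
Prompt = List[Union[str, ImageInput]]

class MultimodalModel:
    """Hypothetical multimodal LLM: one model handles both modalities."""

    def generate(self, prompt: Prompt) -> str:
        # A real model would encode every part into a shared token/embedding
        # space and run a single transformer over the combined sequence.
        # This stub merely echoes the structure of the request.
        parts = [p if isinstance(p, str) else f"<image: {len(p.data)} bytes>"
                 for p in prompt]
        return f"[model response to: {' '.join(parts)}]"

model = MultimodalModel()
chart = ImageInput(data=b"\x89PNG...")  # stand-in bytes for a real image
print(model.generate(["Summarize the trend shown in this chart:", chart]))
```

The defining feature is that a single prompt interleaves both modalities, rather than routing text and images to separate, specialized systems.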

There have been rumors that Google has provided early access to Gemini to a select few companies. SemiAnalysis, a reputable semiconductor research firm, may be one of them. According to a recent article from the firm, Gemini may be 5 to 20 times more capable than current GPT-4 models.

The design of Gemini will probably be based on DeepMind’s Gato, which was unveiled in 2022. “The deep learning [Gato] transformer model is described as a ‘generalist agent’ and purports to perform 604 distinct and mostly mundane tasks with varying modalities, observations, and action specifications. It has been referred to as the Swiss Army Knife of AI models. It is clearly much more general than other AI systems developed thus far and in that regard appears to be a step towards AGI [artificial general intelligence]”.

Traditional AI, often referred to as narrow AI, is designed to carry out a single task or a group of related tasks. It relies on pre-established rules and algorithms to solve problems and make decisions. Speech recognition software, image recognition, and recommendation engines are examples of classic narrow AI.
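For contrast, the rule-based character of narrow AI is easy to see in code. The following toy recommender (all data and rules invented for illustration) does exactly one job and cannot learn or generalize beyond its hand-written rules:

```python
# A toy narrow-AI recommendation engine: behavior comes entirely from
# pre-established rules, with no learning involved.

purchase_history = {"alice": {"sci-fi", "thriller"}, "bob": {"cooking"}}

# Fixed, hand-written rules: "readers of X also tend to like Y".
related_genres = {
    "sci-fi": ["fantasy", "thriller"],
    "thriller": ["mystery"],
    "cooking": ["gardening"],
}

def recommend(user: str) -> list[str]:
    """Apply the static rules to a user's history; nothing is learned."""
    seen = purchase_history.get(user, set())
    suggestions = []
    for genre in sorted(seen):
        for related in related_genres.get(genre, []):
            if related not in seen and related not in suggestions:
                suggestions.append(related)
    return suggestions

print(recommend("alice"))  # ['fantasy', 'mystery']
```

A system like this can excel at its one task, but it has no path to the kind of general reasoning described next.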

General AI, on the other hand, sometimes referred to as strong AI or artificial general intelligence (AGI), would be able to carry out any intellectual task a human can. It could think, learn, and understand sophisticated ideas. General AI would require human-level intellect, along with self-aware consciousness and the ability to acquire knowledge, solve problems, and plan for the future. It is currently a theoretical idea, still in the early phases of research.

Artificial General Intelligence (AGI)

According to Microsoft, GPT-4 is already able to “solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting”.

By superseding all current models, Gemini could be a significant step towards AGI. It is expected to be released at several levels of model capability.

Gemini is sure to be spectacular, but even bigger and more advanced variants are anticipated. In an interview with The Economist, Mustafa Suleyman, the CEO and co-founder of Inflection AI and a co-founder of DeepMind, made the following prediction:

“In the next five years, the frontier model companies—those of us at the very cutting edge who are training the very largest AI models—are going to train models that are over a thousand times larger than what you see today in GPT-4”.

These models may have unmatched applications and an unmatched impact on our daily lives, with the potential for both huge benefits and increased risks. David Chalmers, a professor of philosophy and neural science at NYU, is quoted by Vanity Fair as saying: “The upsides for this are enormous; maybe these systems find cures for diseases and solutions to problems like poverty and climate change, and those are enormous upsides”.

The article also explores the dangers, including estimates of the probability of horrifying outcomes, up to and including the extinction of humanity, ranging from 1% to 50%.

Could this be the end of an era dominated by humans?

Yuval Noah Harari, a historian, stated in an interview with The Economist that these upcoming developments in AI technology won’t spell the end of history but rather “the end of human-dominated history. History will continue, with somebody else in control. I’m thinking of it as more an alien invasion”.

Suleyman responded that AI tools will lack agency and will therefore be limited to doing what humans authorize them to do. Harari countered that this coming AI might be “more intelligent than us. How do you prevent something more intelligent than you from developing an agency?”. An AI with agency might take actions that are not consistent with human wants and values.

These advanced models foreshadow the development of artificial general intelligence (AGI) and a time when AI will be even more powerful, integrated, and essential to daily life. There are many reasons for optimism, but these anticipated developments also strengthen the calls for oversight and regulation.

The regulation dilemma

Even the CEOs of the companies that build frontier models agree that regulation is required.

Senator Charles Schumer, who organized a session with many of these leaders, later spoke about the difficulty of creating suitable regulations. He emphasized how technically complicated AI is, how quickly it is evolving, and how it “has such a wide, broad effect across the whole world”.

Regulating AI may not even be realistically achievable. One reason is that much of the technology has been released as open-source software, making it accessible to everyone; that alone could complicate many regulatory initiatives.

Taking precautions is both logical and sensible

Some interpret the public statements of AI leaders as staged support for regulation. As Tom Siebel, a longtime Silicon Valley leader and the current CEO of C3 AI, put it in remarks quoted by MarketWatch: “AI execs are playing rope-a-dope with lawmakers, asking them to please regulate us. But there is not enough money and intellectual capital to ensure millions of algorithms are safe. They know it is impossible”.

Even though it may prove impossible, we must try. As Suleyman said in his conversation with The Economist: “This is the moment when we have to adopt a precautionary principle, not through any fear-mongering but just as a logical, sensible way to proceed”.

The promise of AI is vast, but so are the risks as it moves quickly from narrow skills towards AGI. In this age of uncertainty, we must act with the utmost prudence, care, and conscience to develop these AI technologies for the benefit of humanity while avoiding their serious potential harms.

One of the most pressing yet overlooked dangers of AI is not the technology itself but how people may come to interact with it. There is a risk that many will treat AI’s judgments as supreme, believing that its intelligence eclipses human reasoning. Any objection or countering perspective offered by humans could then be dismissed out of blind faith in AI’s capabilities. Much like belief in a God whose ways are mysterious, people may justify AI’s decisions even when they are anomalous or incomprehensible, simply trusting in its superiority.