Tech companies seem to ignore the risks

The race among tech companies to develop Artificial Intelligence that matches the capabilities of the human brain may be dangerous for humanity.

According to Salon, Keen Technologies is one of many companies aiming to develop human-level A.I.; there are at least 72 other projects around the world with the same target. Human-level A.I., also known as AGI (Artificial General Intelligence), would be able to perform any cognitive task a human can.

Today’s A.I. can already do many amazing things, such as generating art, driving cars, and playing video games, but each of these systems handles one specific task, while a human-level A.I. would be able to carry out all of them.

Even now, many have questioned the risks of a powerful A.I.: being smarter than humans, it could manipulate us in ways we would probably never notice. This is already happening with social media algorithms that, willingly or not, push posts identified as more profitable, regardless of the ethics of the content.

Imagine what a more powerful A.I. could do and the effects it could have on society. At the same time, we can’t forget that this technology will also have bright sides: many repetitive and boring tasks will be quickly automated, letting us concentrate on more satisfying work.

In his book “The Precipice: Existential Risk and the Future of Humanity”, Toby Ord, an Oxford academic, tries to identify the likeliest causes of human extinction. He argues that natural causes like volcanoes or asteroids would have minimal impact compared to nuclear war, pandemics, or climate change, but that nothing would be worse than human-level artificial intelligence.

And he is not the only one to warn about the risks. The late Stephen Hawking, tech industry leaders like Elon Musk and Bill Gates, and A.I. academics like Stuart Russell of the University of California have all publicly cautioned that the development of human-level A.I. could result in nothing short of a catastrophe, particularly if it is pursued without extreme caution and careful consideration of safety and ethics.

Companies are often careless about the consequences of their technologies. They aim for results without caring about the impact on society, unless that impact is itself their goal.

Of course, safety and ethics shouldn’t lead to censorship or restrictions, but we do need safeguards to counter the risks, and we can’t start thinking about them after the bomb has been dropped. Artificial Intelligence touches so many fields, and it won’t just reach a human level; it will go beyond it, to a point where our brains can no longer tell whether it is working for us or against us.