The most advanced technology of our time could become our worst nightmare

Artificial Intelligence is continuously evolving in every field: from counterterrorism data analysis to medical research and space missions, and even in our smartphones and apps.

Such an advanced, constantly evolving technology could become a dangerous weapon if used in the wrong way.

Consider a few scenarios:

ACCIDENTS

The death of a pedestrian pushing a bicycle, struck by a self-driving car in the USA, has reopened the debate on autonomous driving systems. It is not easy for software to foresee every situation, so circumstances can be misinterpreted: a person, for example, may be classified as a harmless object, as the sketch below illustrates.
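A minimal sketch of how a single misclassification can propagate into a dangerous driving decision. All labels, thresholds, and rules here are hypothetical, invented for illustration; no real autonomous-driving stack works this simply:

```python
# Hypothetical perception -> planning pipeline, for illustration only.

def plan_action(detected_label: str, distance_m: float) -> str:
    """Decide how to react to an object detected ahead."""
    # The planner only brakes hard for objects it believes can move.
    if detected_label in {"pedestrian", "cyclist", "vehicle"}:
        return "emergency_brake" if distance_m < 30 else "slow_down"
    # Anything labelled as static clutter is simply driven past.
    return "continue"

# Correct classification: the car brakes for the person.
print(plan_action("pedestrian", 20))   # -> emergency_brake

# Misclassification: the same person, labelled as a plastic bag,
# is treated as harmless and the car does not react at all.
print(plan_action("plastic_bag", 20))  # -> continue
```

The danger is not in the planning rule, which is reasonable on its own, but in the fact that it trusts a perception label that can be wrong.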

BOTS

After the Cambridge Analytica scandal, we saw how social networks can influence our political opinions simply by using bots to give some news far more prominence than others. Mark Zuckerberg claimed that Facebook would develop an AI system able to nip fake news in the bud, even when it comes in the form of video.

This will trigger another debate on censorship.

PEACE

Google developed technologies for the Pentagon, specifically those belonging to the so-called Project Maven, which automatically classifies images captured by drones. Some Google researchers raised doubts, fearing that the resulting information could be used to guide killer drones. In response, Google published an ethical code for Artificial Intelligence stating that an AI must not harm a human being, much as Asimov theorized in his laws of robotics.

Nevertheless, the Pentagon can continue researching AI-based weapons on its own.

BIG BROTHER

In China, researchers are developing the biggest surveillance system in the world. Hundreds of thousands of cameras across the country, combined with facial recognition, could identify and follow anybody: not only criminals but also ordinary citizens. The next step will be facial recognition that can identify emotions.

Should we be afraid to be ourselves?

VIDEO FAKE NEWS

Beyond fake news, we could soon face fake videos: not clips edited by users, but footage generated by an AI and indistinguishable from the real thing.

DARPA (the Defense Advanced Research Projects Agency) is working on an AI that can recognize fake videos made by other AIs.

Will there be a battle between good and bad AIs?

PREJUDICE

It has been demonstrated that a poorly designed system can become racist or misogynistic because it reflects the convictions of its creators. Another case is a recruitment system that selects candidates based on historical data in which hiring was discriminatory: the system simply learns to reproduce that discrimination, as the sketch below shows.
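A minimal sketch of this mechanism. The data and the "model" are invented for illustration, with past discrimination deliberately baked into the records; a real recruiting system would use a far more complex pipeline, but the inheritance of bias works the same way:

```python
# Hypothetical "model" that learns hiring rates from historical records.
from collections import defaultdict

# Invented history: (gender, was_hired) for equally qualified candidates,
# where past recruiters hired far fewer women.
history = ([("M", True)] * 80 + [("M", False)] * 20
           + [("F", True)] * 30 + [("F", False)] * 70)

# "Training": estimate the hiring probability per group.
hired = defaultdict(int)
total = defaultdict(int)
for gender, was_hired in history:
    total[gender] += 1
    hired[gender] += was_hired

def predict_hire(gender: str) -> bool:
    """Recommend a candidate iff their group was usually hired before."""
    return hired[gender] / total[gender] > 0.5

print(predict_hire("M"))  # True  -> recommended
print(predict_hire("F"))  # False -> rejected, purely from biased history
```

Nothing in the code is explicitly sexist; the discrimination comes entirely from the data the system was trained on.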

Relying too much on AI could dehumanize our reasoning, as we grow too loyal to the way an AI thinks.

Source: Focus

Dan Brokenhouse
