People can be manipulated by A.I. without realizing it

Manipulation among humans is nothing new. People manipulate each other both deliberately and unintentionally, whether they know us well or not. Then there are companies, governments, and institutions. Technology has lent manipulation a hand, even more than traditional media did. But with A.I., people may be more vulnerable than ever.

Your behavior, on the web but also in real life (detectable, for example, through your payments, GPS data, and Wi-Fi connections), can be used by an A.I. to trace your habits and profile your personality. That profile helps anticipate your choices and tastes, and can steer you toward specific decisions without you even realizing it. Better than any ad.

If they know how you move, they can trace the path to their target, whether that target is getting you to buy something, share something, vote for someone, or many other things.
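To make the profiling idea concrete, here is a minimal sketch, assuming a hypothetical event log of payment, GPS, Wi-Fi, and browsing signals. None of this comes from the article; the event categories and the frequency-based scoring are invented for illustration. It shows how quickly a stream of passive behavioral events can become a profile and a prediction about what to nudge next.

```python
from collections import Counter

# Hypothetical behavioral events: (source, category) pairs a tracker might collect.
events = [
    ("payment", "coffee"), ("gps", "gym"), ("web", "running_shoes"),
    ("payment", "coffee"), ("web", "coffee"), ("wifi", "gym"),
]

def build_profile(events):
    """Aggregate raw events into a simple interest profile: category -> frequency."""
    return Counter(category for _source, category in events)

def predict_next_interest(profile):
    """Guess the user's dominant interest: the most frequent category so far."""
    category, _count = profile.most_common(1)[0]
    return category

profile = build_profile(events)
print(profile)                         # Counter({'coffee': 3, 'gym': 2, ...})
print(predict_next_interest(profile))  # 'coffee': the obvious thing to nudge next
```

Real systems use far richer models than a frequency count, but the pipeline is the same: passive signals in, a prediction about you out.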

A group of academics from Australia’s federal scientific and research organization, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), recently conducted a series of studies on how artificial intelligence affects human decision-making. The findings revealed that A.I. may identify and exploit weaknesses in human decision-making in order to steer people toward certain outcomes.

“The implications of this research are potentially quite staggering”, said Amir Dezfouli, an expert in machine learning at CSIRO and lead researcher on the study.

These algorithms can get you to carry out certain actions on the web, not only because they know so much about you, but also because they have learned which strategies are most likely to persuade you to make one choice over another, without you realizing what is happening or how.
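The article does not detail how the CSIRO systems work, but one standard technique for learning “which strategy persuades this user” is a multi-armed bandit. Below is a minimal epsilon-greedy sketch; the strategy names, click rates, and simulated user are all assumptions invented for illustration, not anything from the study.

```python
import random

# Hypothetical persuasion "arms" a system might test on a user.
strategies = ["scarcity_banner", "social_proof", "personalized_discount"]

counts = {s: 0 for s in strategies}    # how often each strategy was tried
values = {s: 0.0 for s in strategies}  # running average success rate per strategy

def simulated_user_response(strategy):
    """Stand-in for a real user; assumed to respond most to social proof."""
    click_rates = {"scarcity_banner": 0.10, "social_proof": 0.35,
                   "personalized_discount": 0.20}
    return 1.0 if random.random() < click_rates[strategy] else 0.0

def choose_strategy(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-known strategy, sometimes explore."""
    if random.random() < epsilon or all(c == 0 for c in counts.values()):
        return random.choice(strategies)
    return max(strategies, key=lambda s: values[s])

for _ in range(1000):
    s = choose_strategy()
    reward = simulated_user_response(s)
    counts[s] += 1
    values[s] += (reward - values[s]) / counts[s]  # incremental mean update

print(max(strategies, key=lambda s: values[s]))  # settles on 'social_proof'
```

After a few hundred trials, the loop settles on whichever strategy this particular user responds to most: exactly the kind of per-person optimization the researchers warn about.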

“The tricky part is that A.I. is in some ways still a bit of a black box”, says Shane Saunderson, who researches human-robot interaction at the University of Toronto.

“It’s not an explicit machine that says two plus two equals four. It’s a machine that you show a bunch of data to, and it analyzes that data for patterns or classifications or insights that it can glean from it. And we don’t always know exactly how it’s doing that”.

The actor behind an A.I. does not necessarily have malicious intent, but the consequences can still be unpredictable.

The glut of information is one cause of easy manipulation: people are overwhelmed and have less time to pay attention to everything that interests them. Reading becomes less thorough, and the spread of memes reflects the low-quality information flow through which A.I. algorithms make their way.

In addition, our tendency to prefer information that comes from people we trust and that fits what we already know is another weakness an A.I. can exploit, showing us exactly what it wants us to trust.

Experiments have revealed that even when people are presented with balanced information that includes views from all perspectives, they tend to find evidence supporting what they already believe. In fact, when people with opposing viewpoints on emotionally sensitive issues are exposed to the same facts, they become even more adamant in their original convictions.

Furthermore, in the absence of clear signals, our brains infer the appropriate action from what the crowd is doing, and social media follows a similar pattern. We easily confuse popularity with quality and end up following what we see. It has also been observed that on social media, negative information spreads more easily.

Dezfouli points out that how responsibly we build and deploy these technologies determines whether they are used for good or ill. To encourage positive outcomes, CSIRO and the Australian government created an ethics framework for A.I. in government and industry. Its principles include:

  • Human, societal, and environmental well-being: A.I. systems should benefit individuals, society, and the environment.
  • Human-centered values: A.I. systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness: A.I. systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security: A.I. systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety: A.I. systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by A.I. and can find out when an A.I. system is engaging with them.
  • Contestability: When an A.I. system significantly impacts a person, community, group, or environment, there should be a timely process to allow people to challenge the use or outcomes of the A.I. system.
  • Accountability: People responsible for the different phases of the A.I. system lifecycle should be identifiable and accountable for the outcomes of the A.I. systems, and human oversight of A.I. systems should be enabled.

However, most of these principles are not yet being met in practice, but we can hope that a standard for more ethical A.I. will emerge soon.

Source: discoverymagazine.com