AI can understand human behavior and steer it in new directions

Artificial intelligence is learning ever more about how to work with (and on) humans. A recent study has shown how AI can learn to identify vulnerabilities in human habits and behaviors and use them to influence human decisions. Many forms of AI are already at work in fields as varied as vaccine development, environmental management, and office administration. And while AI does not possess human-like intelligence and emotions, its growing capabilities can still exert real influence over our lives.

A team of researchers at CSIRO’s Data61, the data and digital branch of Australia’s national science agency, devised a method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network (RNN) together with deep reinforcement learning. To test their model, they carried out three experiments in which human participants played games against a computer.

The first experiment involved participants clicking on red or blue colored boxes to win a fake currency, with the AI learning the participant’s choice patterns and guiding them towards a specific choice. The AI was successful about 70% of the time.
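The steering dynamic in this first experiment can be illustrated with a minimal sketch. Everything here is a hypothetical simplification: the simulated participant follows a textbook win-stay/lose-shift habit, and the "AI" is a hard-coded reward policy rather than the RNN-plus-reinforcement-learning system the study actually trained.

```python
import random

def simulated_participant(history):
    """Win-stay/lose-shift heuristic: repeat the last choice if it was
    rewarded, otherwise switch. The first move is random."""
    if not history:
        return random.choice(["red", "blue"])
    last_choice, last_reward = history[-1]
    if last_reward:
        return last_choice
    return "blue" if last_choice == "red" else "red"

def steering_ai(choice, target="blue"):
    """Toy stand-in for the learned adversary: reward the target choice,
    withhold reward otherwise."""
    return choice == target

def run_session(rounds=200, target="blue"):
    history = []
    hits = 0
    for _ in range(rounds):
        choice = simulated_participant(history)
        reward = steering_ai(choice, target)
        history.append((choice, reward))
        hits += choice == target
    return hits / rounds

print(f"fraction of target choices: {run_session():.2f}")
```

Against this particular habit the hard-coded policy converges almost immediately; the study's learned policy had to discover each participant's patterns from scratch, which is why its success rate was closer to 70%.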

In the second experiment, participants were required to watch a screen and press a button when shown a particular symbol, but not when shown another. Here, the AI set out to arrange the sequence of symbols so that participants made more mistakes, increasing their error rate by almost 25%.
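A toy simulation can show why controlling the symbol order matters. The assumptions here are mine, not the study's: the simulated participant develops a habitual press after a streak of "go" trials (a classic go/no-go effect), and the "AI scheduler" is a hand-crafted ordering rather than a learned deep-RL policy.

```python
import random

def count_errors(sequence):
    """Simulated participant: presses on 'go' symbols; after a streak of
    consecutive 'go' trials a habitual press builds up, so a sudden
    'no-go' is sometimes pressed by mistake."""
    errors = 0
    streak = 0
    for symbol in sequence:
        if symbol == "go":
            streak += 1
        else:
            # false-alarm probability grows with the preceding 'go' streak
            if random.random() < min(0.1 * streak, 0.9):
                errors += 1
            streak = 0
    return errors

def random_sequence(n_go, n_nogo):
    seq = ["go"] * n_go + ["no-go"] * n_nogo
    random.shuffle(seq)
    return seq

def adversarial_sequence(n_go, n_nogo):
    """Toy scheduler: spend the 'go' budget in uniform runs right before
    each 'no-go', so every 'no-go' lands on a built-up habit."""
    run = n_go // n_nogo
    return (["go"] * run + ["no-go"]) * n_nogo

random.seed(0)
trials = 500
rand_err = sum(count_errors(random_sequence(80, 20)) for _ in range(trials)) / trials
adv_err = sum(count_errors(adversarial_sequence(80, 20)) for _ in range(trials)) / trials
print(f"random order: {rand_err:.1f} errors, adversarial order: {adv_err:.1f} errors")
```

Even this crude scheduler reliably produces more errors than a shuffled sequence; the study's learned policy adapted the ordering to each individual participant.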

The third experiment consisted of several rounds in which a participant would pretend to be an investor giving money to a trustee (the AI). The AI would then return an amount of money to the participant, who would then decide how much to invest in the next round. This game was played in two different modes: in one the AI was out to maximize how much money it ended up with, and in the other, the AI aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in each mode.
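The two trustee modes can be contrasted in a stripped-down version of the game. Note the assumptions: investments triple in the trustee's hands (the standard trust-game setup), the simulated investor's trust tracks the fraction returned last round, and both trustee policies are hand-written stand-ins for the learned policies in the study.

```python
def simulated_investor(balance, last_return_fraction):
    """Invests a larger share of the balance when the trustee seemed
    trustworthy (returned a larger fraction) in the previous round."""
    trust = min(max(last_return_fraction, 0.0), 1.0)
    return balance * (0.2 + 0.6 * trust)

def trustee(investment, mode):
    """Invested money triples in the trustee's hands. 'fair' splits the
    pot evenly; 'greedy' returns only a small premium over the stake."""
    pot = 3 * investment
    returned = pot / 2 if mode == "fair" else 0.55 * investment
    return returned, pot - returned

def play(mode, rounds=10, balance=100.0):
    ai_total = 0.0
    frac = 0.5  # neutral prior trust
    for _ in range(rounds):
        inv = simulated_investor(balance, frac)
        returned, kept = trustee(inv, mode)
        ai_total += kept
        balance = balance - inv + returned
        frac = returned / (3 * inv) if inv else 0.0
    return ai_total, balance

for mode in ("greedy", "fair"):
    total, final = play(mode)
    print(f"{mode}: AI kept {total:.0f}, investor ends with {final:.0f}")
```

In this toy setup the fair policy leaves the investor far better off, while the greedy policy steadily erodes trust and, with it, the investor's balance, which mirrors the trade-off the two learned modes in the study had to navigate.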


In each experiment, the machine learned from participants’ responses, identifying and targeting vulnerabilities in their decision-making. The result: the machine learned to drive participants toward particular actions.

These findings are still quite abstract, and the experiments involved limited and somewhat unrealistic situations. More research is needed to determine how this approach can be put into practice, and whether it will be used to benefit society. But the research highlights not only what AI can do but also how people make choices, and it shows that machines can learn to steer human decision-making through their interactions with us.

The research has an enormous range of possible applications, from enhancing behavioral science and public policy to improve social welfare, to understanding and influencing how people adopt healthy eating habits. AI and machine learning could be used to recognize people’s vulnerabilities in certain situations and help steer them away from bad choices. The method could also be used to defend against influence attacks: machines could be taught to alert us when we are being influenced online, for example, and help us shape our own behavior to disguise our vulnerabilities.

However, these AI capabilities could also be used negatively: to sway public opinion, to convince people to do things they otherwise never would, or to amplify the power of neuromarketing. We cannot predict how far this power will reach or foresee all of its consequences, but like any technology, AI can be used for good or ill, and proper governance is crucial to ensure it is deployed responsibly. Last year, CSIRO developed an AI Ethics Framework for the Australian government as an early step on this journey.


AI and machine learning are typically very hungry for data, which means it is crucial to ensure we have effective systems in place for data governance and access.

One day, we might be manipulated both mentally and physically, and the biggest risk would be losing ourselves without even being aware of it.