Humans influence data and, in turn, A.I.
A bias is a disproportionate weight in favor of or against an idea, a person, a group, an event, or a thing, usually in a way that is closed-minded, prejudicial, or unfair. Biases can be innate or learned.
It’s impossible for a person to be unbiased; therefore, all data produced by humans inevitably affects A.I. datasets.
The spread of A.I. into so many fields could be helpful but also dangerous if we fail to consider that its decisions are influenced by biases: they can lead to an outcome we believe is true, when in fact it has been conditioned beforehand and is not objective.
Human beings are affected by cognitive biases: systematic errors in thinking that occur when people process and interpret information in the world around them, and that affect the decisions and judgments they make.
Some of these biases are related to memory: the way you remember an event may be biased for many reasons.
Other cognitive biases might be related to problems with attention. Since attention is a limited resource, people have to be selective.
Memory and attention are therefore a sort of filter that can alter the way we perceive reality.
Here are some examples of cognitive biases:
- Confirmation bias: the tendency to listen more often to information that confirms our existing beliefs. Through this bias, people tend to favor information that reinforces the things they already think or believe.
- Hindsight bias: a common cognitive bias that involves the tendency to see events, even random ones, as more predictable than they are.
- Anchoring bias: the tendency to be overly influenced by the first piece of information that we hear.
- Misinformation effect: the tendency for memories to be heavily influenced by things that happened after the actual event itself.
- Actor-observer bias: the tendency to attribute our actions to external influences and other people’s actions to internal ones.
- False consensus effect: the tendency people have to overestimate how much other people agree with their own beliefs, behaviors, attitudes, and values.
- Halo effect: the tendency for an initial impression of a person to influence what we think of them overall.
- Self-serving bias: the tendency to give ourselves credit for successes but lay the blame for failures on outside causes.
- Availability heuristic: the tendency to estimate the probability of something happening based on how many examples readily come to mind.
- Optimism bias: a tendency to overestimate the likelihood that good things will happen to us while underestimating the probability that negative events will impact our lives.
The term bias is also used when talking about neural networks along with the term weight.
Weights control the signal (the strength of the connection) between two artificial neurons, while a bias is an additional parameter used to adjust the output along with the weighted sum of the inputs to the neuron. The bias is thus a constant that helps the neural network fit the given data better.
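The role of weights and bias can be sketched with a single artificial neuron. This is a minimal illustration (the sigmoid activation and the example numbers are arbitrary choices, not from the text): the output is an activation applied to the weighted sum of the inputs plus the bias, and changing the bias shifts the neuron's firing threshold without touching the weights.

```python
import math

def neuron_output(inputs, weights, bias):
    """Output of one artificial neuron: a sigmoid activation applied
    to the weighted sum of the inputs plus the bias term."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-(weighted_sum + bias)))

# Here the weighted sum is 0.5*1.0 + (-0.25)*2.0 = 0.0.
# With a zero bias the sigmoid sits at its midpoint, 0.5;
# a positive bias shifts the same neuron toward firing.
print(neuron_output([1.0, 2.0], [0.5, -0.25], bias=0.0))  # → 0.5
print(neuron_output([1.0, 2.0], [0.5, -0.25], bias=1.0))  # > 0.5
```

In other words, the weights decide how much each input matters, while the bias decides how easily the neuron activates for any input at all.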
In any case, an A.I. can be subject both to cognitive biases and to incomplete data: situations where the available information is not enough to guarantee that an area of interest is fully covered.
Here are known examples of A.I. bias that led to some issues:
Amazon’s recruiting tool
Amazon began an A.I. project in 2014 with the goal of automating the hiring process. The initiative was entirely focused on reviewing resumes and rating applicants using AI-powered algorithms, allowing recruiters to spend less time on manual resume screening. By 2015, however, Amazon had learned that its new A.I. recruiting algorithm was not grading candidates fairly and was biased against women.
A health-care risk-prediction algorithm that affects more than 200 million Americans showed racial bias because it relied on a poor criterion for establishing need.
The algorithm was created to predict which patients would require more medical attention; however, it was later discovered that the system produced incorrect results that favored white patients over Black patients.
In 2019, Facebook began enabling advertisers to target ads based on gender, race, and religion. For example, women were favored in employment advertisements for nursing and secretarial positions, whereas janitors and taxi drivers were largely advertised to men, particularly those from minority backgrounds.
Can A.I. be unbiased?
You can build an A.I. system that makes unbiased, data-driven decisions if you can clean your training dataset of conscious and unconscious assumptions about race, gender, and other ideological concepts.
In the real world, though, we shouldn’t expect A.I. to ever be totally objective. A.I. can only be as good as its data, and data is created by people. There are many human biases, and new ones are constantly being identified. After all, humans create biased data, and then humans and human-made algorithms check that data to uncover and remove biases. But we can reduce A.I. bias by conducting tests on data and algorithms, as well as by implementing other best practices.
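One simple example of such a test on data is a demographic parity check: compare the rate of positive outcomes (hires, approvals, high risk scores) across groups and flag large gaps for investigation. The sketch below uses hypothetical data and a hand-rolled metric chosen for illustration; a real audit would use an established fairness toolkit and a careful choice of metric.

```python
def demographic_parity_difference(outcomes, groups):
    """Absolute difference in positive-outcome rates between two groups.
    `outcomes` is a list of 0/1 decisions; `groups` labels each record."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

# Hypothetical hiring decisions: group "A" is selected 75% of the time,
# group "B" only 25% — a gap of 0.5 that warrants investigation.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # → 0.5
```

A gap like this does not by itself prove the algorithm is biased, but it is exactly the kind of measurable signal that routine testing of data and decisions can surface early.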
According to McKinsey, there are several practices that help minimize A.I. bias:
- Be aware of the contexts in which A.I. can help correct for bias and where it can exacerbate it;
- Establish processes and practices to mitigate bias;
- Engage in conversations about potential biases in human decisions;
- Explore how humans and machines can best work together;
- Invest more in bias research;
- Invest more in diversifying the A.I. field itself.
Dr. Shay Hershkovitz, GM & VP at SparkBeyond, an AI-powered problem-solving company, instead suggests reducing the impact of A.I. bias by:
- Building A.I. systems that deliver explainable predictions/decisions;
- Combining these solutions with human procedures that provide adequate monitoring;
- Assuring that A.I. solutions are properly benchmarked and updated on a regular basis.
When all of the above ideas are reviewed, it becomes clear that humans must play a key role in reducing A.I. bias. Hershkovitz offers the following as a means of achieving this:
- Companies and organizations must be completely transparent and accountable for the artificial intelligence systems they create;
- Decisions made by A.I. systems must be able to be monitored by humans;
- The establishment of standards for the explanation of decisions made by A.I. systems should be a top priority;
- Companies and organizations should educate and teach their developers on how to consider ethics when developing algorithms. The OECD’s 2019 Recommendation of the Council on Artificial Intelligence (PDF), which covers the ethical issues of artificial intelligence, is a good place to start.
Artificial Intelligence will reach into so many fields that it’s important to provide algorithms that don’t discriminate on the basis of an inaccurate dataset. If this is ignored, it may lead to unfair results without our even being aware of them, because we would take for granted that the A.I. is always right. And that would be a big mistake. Criticism is always needed to avoid blind submission.