Major concerns about possible existential risks in the future blind us to the harm AI systems are causing people right now

The existential risk that artificial intelligence may pose is often referred to as “x-risk”. As reported here, the concern is not that AI systems will become superintelligent agents, though research supports the view that they should not be built into weapons systems because of the risks they pose.

Already, self-driving cars with malfunctioning pedestrian-tracking systems, police robots, and AI systems that mistakenly identify people as crime suspects might endanger your life. Regrettably, AI systems do not need to be superintelligent to have disastrous effects on people’s lives. Because they are real, AI systems that have already been shown to cause harm are more dangerous than hypothetical “sentient” AI systems.

In a new book, trailblazing AI researcher and activist Joy Buolamwini discusses her experiences and her worries regarding current AI systems.

Claiming that hypothetical problems from AI matter more than current harms has a real cost: it diverts funding and legislative attention from pressing issues. Companies that profess fear of the existential threat posed by AI could demonstrate sincere concern for preserving humanity by holding back the release of the AI products they themselves deem dangerous.

The Campaign to Stop Killer Robots has long advocated for safeguards against lethal autonomous systems and digital dehumanization. Governments worried about deadly applications of AI can adopt these measures. The campaign addresses potentially lethal uses of AI without leaping to the dramatic conclusion that sentient machines will one day wipe out humanity.

It is common to think of physical violence as the worst kind of violence, but that perspective makes it easy to overlook the harmful ways structural violence is sustained in our societies. Norwegian sociologist Johan Galtung coined that term to describe how social structures and institutions harm people by preventing them from meeting their basic needs. Artificial intelligence used to deny people access to jobs, housing, and health care prolongs individual suffering and leaves generational wounds. AI systems can kill us slowly.

Given what the “Gender Shades” research revealed about algorithmic bias at some of the world’s top tech companies, the concern is with AI’s current issues and emerging vulnerabilities, and with whether we can address them in ways that also help create a future where the burdens of AI do not fall disproportionately on the vulnerable and marginalized. It is urgent to fix deficient AI systems that produce erroneous diagnoses or lead to wrongful arrests.

The people already being harmed by AI systems, and those at risk of being harmed by them, are the real x-risk: they are the people who can be considered excoded. You may be excoded when a hospital uses AI for triage and neglects to give you medical attention, or when it applies a clinical algorithm that denies you access to a life-saving organ transplant. You may be excoded when an algorithmic decision-making system rejects your loan application.

You can be excoded when your résumé is automatically filtered out and you are denied the chance to compete for the jobs that AI systems have not already replaced. You may be excoded when a tenant-screening algorithm denies you housing. These are all real examples. Everyone can be excoded, and those who are already disadvantaged are the most vulnerable.

For this reason, the fight against these harms cannot be limited to AI researchers, industry insiders, or even well-intentioned influencers.

Reaching academics and industry insiders is not enough. We must ensure that the battle for algorithmic justice includes the ordinary people who stand to be harmed by AI.

As emphasized above, the dangers of AI are not confined to the near future: already today, far simpler yet still automated systems are replacing human decisions, oversimplifying them to the point of making them unfair. The most glaring cases are bans from platforms, such as social media, that most people now use for work and that in many cases are the hub of their livelihood. Without adequate regulation, when you are banned (very often unfairly) you almost never have a way to appeal, especially when your business depends on such platforms. The opacity of these systems, intentional or not, pushes us back toward one-sided justice. If all of this is ignored, it is easy to fall victim to a system that unfairly excludes you, in these and a thousand other cases, with no avenue of appeal; that makes a simple algorithm far more dangerous than a superintelligent AI.

Unmasking AI: My Mission to Protect What Is Human in a World of Machines, by Joy Buolamwini, is available to purchase here.