AI and its downsides

AI is changing how we work and how we access information. However, there are some disturbing ways it can be used to cause harm. Here are five examples.

1. OMNIPRESENT SURVEILLANCE

According to Tristan Harris and Aza Raskin, technologists with the Center for Humane Technology, any chance we had as a species of reading 1984 as a warning rather than a guideline has probably vanished.

The two have discussed how we fundamentally misunderstand the constraints of the large language models (LLMs) we now interact with through applications such as Google Bard or ChatGPT. When we say “language,” we typically mean “human language,” yet to a computer, everything is a language. That insight has made it possible for researchers to train an AI on brain scan images and watch as it begins to roughly decipher the thoughts passing through our minds.

Another instance involved academics using an AI model to interpret the radio signals that surround us. The model was trained on two aligned feeds from the same room: an ordinary camera watching the people inside, and a sensor recording the radio signals bouncing around the space. Once the researchers removed the camera, the system could still faithfully reconstruct live events in the room just by examining the radio signals.
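The recipe behind both the brain-scan and radio-signal results is easier to grasp with a stripped-down sketch. The toy below is an illustration only, not the researchers’ actual method: the data is synthetic, the “room image” is a handful of made-up numbers, and plain least-squares stands in for a neural network. The idea it demonstrates is the same, though: learn a mapping from one signal to another while a ground-truth sensor is still present, then take that sensor away.

```python
# Toy sketch of the paired-sensor training idea described above (all data synthetic).
# A camera provides ground-truth labels while radio readings are the input; once the
# mapping is learned, the camera is no longer needed to estimate what it would see.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 2,000 moments in time, 64 radio features (e.g. per-antenna
# signal strengths), and a very coarse 16-"pixel" occupancy image of the room.
n_samples, n_rf, n_pixels = 2000, 64, 16

# Unknown physics linking room state to radio reflections (a stand-in for reality).
true_mixing = rng.normal(size=(n_rf, n_pixels))

occupancy = rng.random((n_samples, n_pixels))            # what the camera records
rf_readings = occupancy @ true_mixing.T + 0.1 * rng.normal(size=(n_samples, n_rf))

# "Training": learn a map from radio readings back to camera images. Least squares
# here; the actual research would use a far more capable neural network.
weights, *_ = np.linalg.lstsq(rf_readings, occupancy, rcond=None)

# "Camera removed": reconstruct the room from radio signals alone.
new_occupancy = rng.random((5, n_pixels))
new_rf = new_occupancy @ true_mixing.T
reconstruction = new_rf @ weights
print("mean reconstruction error:", np.abs(reconstruction - new_occupancy).mean())
```

Nothing in the sketch cares that the input happens to be radio waves; swap in brain-scan voxels and text, and the same loop applies, which is the sense in which everything is a language to the machine.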

AI has now made it possible to hack everyday life. All of this suggests that privacy won’t even be an option in the future. Maybe not even inside your own thoughts.

2. LETHAL AUTONOMOUS WEAPONS SYSTEMS

In the past, combat was all about windmilling into your enemies while brandishing a sword. The swords are still around, in a sense, but now they are Android tablets crudely strapped to Xbox Wireless Controllers and used to steer Tomahawk missiles into people’s houses from 5,000 miles away. Chivalry, it turns out, really is dead.

Of course, even though the militaries of the world have been making every effort to transform actual combat into a kind of Call of Duty simulator, for some people that is still not enough.

Instead, we now rely on machines to handle the grunt work. Lethal autonomous weapons systems let us dissociate entirely from the business of killing one another over oil and land. Self-piloting, auto-targeting drones head into conflict zones and mercilessly slaughter anyone who appears not to be of “our ilk.”

The companies that build these autonomous threats call them loitering munitions, with the STM Kargu probably being the most common (though there’s no way to know for sure). Everyone else seems to think “suicide drones” is a better name. Armed with facial recognition technology, the drones are released in swarms to hunt down targets on their own before dive-bombing them and blowing themselves up in a blaze of Geneva Convention-defying glory.

3. GENERATIVE BLACKMAIL MATERIAL

There is nothing new about fake photographs. For decades, skilled users have been deceiving people with impressive Photoshop work. But suddenly we are in a situation where even better results require no talent at all, and it is no longer just images: videos, writing, and even voices can be faked too. Looking at the technology behind the Vision Pro’s “Spatial Personas” feature, which creates a photorealistic avatar of you, it is not too difficult to picture someone wearing your digital skin in the near future and causing you all kinds of trouble.

Of course, it is not difficult to envision, because it is already taking place. The FBI was recently forced to alert the public to the risks posed by new extortion techniques made possible by AI software, which gives criminals the means to produce phony, deceptive, or compromising photographs and videos of their victims. Worse yet, the barrier to entry for this criminal enterprise is so low that a few public social media images or a few seconds of a public YouTube video will do.

Online deepfakery is so common that some companies are afraid to release newly developed tools at all, for fear of what would be done with them. Most recently, Meta, the owner of Facebook, took that course after introducing Voicebox, one of the most capable text-to-speech AI models built so far. Meta decided the technology was too risky to release widely, precisely because it knew how quickly it would be abused.

Not that it matters much, since scammers have already developed methods of their own. We are now living in a post-truth society, with deepfaked phone calls to friends and family members asking for money or personal information on the rise. Anything you cannot see with your own eyes or touch with your own hands can no longer be trusted.

4. CRAFTING SPYWARE

The threat posed by AI-generated malware and spyware has sparked a lot of discussion in the security community. Security analysts are losing sleep over the problem, since many of them think it is only a matter of time before our ability to fight off cyberattacks is practically nonexistent.

There have been no documented cases of AI-produced malware or spyware used in real-world attacks yet, but give it time. Juhani Hintikka, CEO of the security analysis company WithSecure, has said that his team had already observed multiple malware samples that ChatGPT generated free of charge. As if that were not concerning enough, Hintikka added that ChatGPT’s capacity to vary its output would result in mutations and more “polymorphic” malware, making it even more difficult for defenders to identify.

Tim West, the director of threat intelligence at WithSecure, summed up the key problem: “ChatGPT will support software engineering for good and bad.” West added that OpenAI’s chatbot “lowers the barrier for entry for the threat actors to develop malware,” referring to how easy access has become for those looking to inflict damage. Previously, threat actors had to spend a lot of time crafting their harmful code. Now, anyone may theoretically create harmful programs using ChatGPT, so the number of threat actors, and the number of threats they create, may increase significantly.

It may not be long before the dam breaks and AI erodes our ability to stay secure online. We can employ AI to fight back, but defenders risk being swamped by the sheer volume of threats that scenarios like the one WithSecure describes would generate. For now, there is little to do but brace for the onslaught, which seems to be coming either way.

5. PREDICTIVE POLICING

In an effort to prevent crimes from happening in the first place, law enforcement agencies around the world are now using algorithms to try to anticipate where crimes are most likely to occur, and to make sure their presence is felt in those locations to deter potential offenders.

Can crime be predicted, though? Researchers at the University of Chicago think so. Using patterns in time and place, they have created a new algorithm that forecasts crime; according to reports, it predicts crimes up to a week in advance with roughly 90% accuracy.
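To give a flavour of what “patterns in time and place” can mean in practice, here is a deliberately crude sketch. It is not the University of Chicago model, just a toy baseline under made-up assumptions: synthetic incident counts, a city carved into a grid, and an exponentially weighted average of recent weeks used to flag next week’s “hot spots.”

```python
# A deliberately simple illustration of "patterns in time and place": bin past
# incidents into a spatial grid and score each cell for the coming week.
# This is a toy baseline on synthetic data, not the model described above.
import numpy as np

rng = np.random.default_rng(1)

grid = 8                    # city divided into an 8x8 grid of cells
weeks = 52                  # one year of (synthetic) weekly incident counts
base_rate = rng.gamma(2.0, 1.0, size=(grid, grid))   # each cell's underlying rate
history = rng.poisson(base_rate, size=(weeks, grid, grid))

# Forecast next week's risk per cell as an exponentially weighted average of
# past weeks (recent weeks count for more than old ones).
decay = 0.8
weights = decay ** np.arange(weeks)[::-1]
forecast = np.tensordot(weights, history, axes=1) / weights.sum()

# Flag the top 10% of cells as predicted "hot spots" for extra patrols.
threshold = np.quantile(forecast, 0.9)
hot_spots = np.argwhere(forecast >= threshold)
print(f"{len(hot_spots)} cells flagged as hot spots, e.g. (row, col): {hot_spots[:3].tolist()}")
```

Real systems are far more sophisticated, but the basic move is the same: yesterday’s records decide where tomorrow’s patrols go, which is exactly what makes the bias problem below possible.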

Who would argue against fewer crimes? How about the people of color who appear to be consistently singled out by these algorithms? An algorithm is, at bottom, just a method of calculation, so the quality of the results it produces depends entirely on the input data.

In nations like the United States, a history of police racism shows up as racial profiling of innocent people and a heavier police presence in neighborhoods of color. By its very nature, a heavier police presence produces more recorded policing activity, which further distorts the data, which increases the predictive bias against those neighborhoods, which brings in still more police, and the entire loop starts over.
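The loop is easier to see with numbers. The toy simulation below assumes two areas with identical true crime rates, records that only capture offences police are present to observe, and a “predictive” dispatcher that sends most patrols wherever the records look worst. Every figure is an illustrative assumption, not data from any real deployment, yet within a few simulated years the records insist that area B is the problem, which is exactly the distortion described above.

```python
# Toy simulation of the predictive-policing feedback loop. Two areas have the SAME
# true crime rate, but area B starts with slightly more recorded incidents.
# Each year the system sends most patrols to whichever area the records say is
# worse, and more patrols mean more incidents get recorded there.
# All numbers are illustrative assumptions, not data from any real system.

true_rate = {"A": 50, "B": 50}        # identical underlying offences per year
recorded = {"A": 20.0, "B": 24.0}     # historic records: B looks 20% worse

for year in range(1, 6):
    # Prediction step: the area with more recorded crime is the "hot spot"
    # and receives 8 of the 10 available patrol units.
    hot = "B" if recorded["B"] >= recorded["A"] else "A"
    patrols = {area: (8 if area == hot else 2) for area in recorded}

    # Observation step: only crime that police are present to see gets recorded.
    # Assume each patrol unit observes 10% of an area's true offences.
    for area in recorded:
        recorded[area] += true_rate[area] * 0.10 * patrols[area]

    share_b = recorded["B"] / (recorded["A"] + recorded["B"])
    print(f"year {year}: hot spot = {hot}, "
          f"area B now accounts for {share_b:.0%} of recorded crime")
```

Run it and area B’s share of recorded crime climbs from an initial 55% toward 80%, even though the underlying crime never changes; the data simply keeps “confirming” the patrol pattern that produced it.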

So the main problem with these new ways of harming and deceiving people is not only the power of AI, but also that activities which once demanded real expertise now require very little knowledge, complicated as they are. That means many more people will try to scam or harm others. In addition, systems that try to prevent crime can easily punish the innocent, because algorithms tend to reason statistically rather than analyze case by case. That is why automated systems should never exclude human intervention entirely: a life is not a statistic.