Sometimes we see too much humanity and other times we don’t see it at all

It’s a human tendency to see aspects of our lives in animals, plants, or even objects. We may interpret our dog’s strange noise as singing, hear a rustling tree as if it were talking, or treat a chatbot as a human with a mind of its own. That’s what we call anthropomorphism.


In the 20th century, scientists made an effort to challenge common beliefs about biology, society, animal behavior, and other topics. According to ecologist Carl Safina, this effort eventually hardened into the prevailing ideology of anti-anthropomorphism.

Anthropomorphism was sometimes referred to as the “worst of ethological sins” and a threat to the study of the animal kingdom. But the subsequent generation of field ecologists, led by Jane Goodall and Frans de Waal, resisted this anti-anthropomorphism by incorporating empathy into their observations. As Safina puts it, “I don’t know people anymore who study animals and insist that anthropomorphism is out of bounds.”

In some contexts, however, anti-anthropomorphism still comes across as progressive, both in discussions about animals and, increasingly, about artificial intelligence. As machines copy us more and more, from the artistic DALL-E to the lifelike chatbot ChatGPT, we seem ever more likely to recognize ourselves in them. Some academics believe that projecting our humanity onto AI could have real repercussions, further concealing how these systems truly operate and supporting the dubious idea that the human mind is the only, or the best, model of intelligence.

However, anthropomorphism is a tool like any other, used for good and bad purposes in humanity’s never-ending quest to comprehend a complex environment. With new artificial systems going online every day, it is more important than ever to determine when and how to use such a tool. One of the key questions of this century is how we communicate with these entities, both natural and artificial.


Anthropomorphism is a type of metaphorical thinking that allows us to draw parallels between our own experiences and those of others. It can also be seen as one of the many effects of what neuroscientists call “theory of mind”: the capacity to distinguish one’s own mind from other people’s and then infer what those others are thinking or feeling.

Every aspect of human social interaction, from empathy to deception, depends on the theory of mind. Even so, it is a flawed tool. According to Heather Roff, a scholar who specializes in the ethics of emerging technology, “the easiest access we have is to ourselves.” Because I am familiar with myself, and you are sufficiently similar to me, I can form a theory of your mind. But anyone can be perplexed by a person they perceive as “unreadable,” or by the shock of a culture substantially dissimilar from their own.

Despite these difficulties, people nevertheless seem motivated to perceive others as thinking beings: we appear to automatically assume that other creatures have their own ideas and feelings. At the same time, many individuals internalize ideas that run counter to this, frequently denying the mindedness of nonhuman animals, children, women, people of color, and people with mental illness or developmental disabilities.

In this regard, anthropomorphism can appear almost moral. Among the modern voices promoting radical interspecies empathy are Sy Montgomery, Sabrina Imbler, and Ed Yong. Botanist and Citizen Potawatomi Nation member Robin Wall Kimmerer discusses the differences between Indigenous and Western scientific perspectives on nature in her book Braiding Sweetgrass.


Machine intelligence complicates this tendency to see personhood in the world around us. Despite assertions that Google’s LaMDA is not only sentient but also has a soul, most experts think such characteristics of consciousness are, at best, decades away. For now, AI relies entirely on people for its continued progress. Even when a system excels in a particular field, we have nothing close to generalized intelligence. That includes ChatGPT, which has severe limits: it can produce language that appears convincing, but it does not understand it.

Most of AI’s flaws, as well as its advantages, go unrecognized by the general public (and sometimes even by the supposed experts), and AI’s capabilities sometimes seem to be exaggerated deliberately. Furthermore, many projects deliberately model human cognition and mirror human actions. As a result, many people are inclined to attribute intelligence to machines and computer code.

The ethical issues with AI today are not about the legal or moral “rights” of the AI itself, but about how humans employ these technologies against one another. And while AI may successfully mimic some aspects of human intellect, it actually works quite differently. DALL-E, for example, is a statistical model trained to imitate artists, but it “creates” in a completely different way.

We probably won’t want AI that merely mimics humans for very long. As Roff puts it, if she is optimizing for something, “I want it to be better than my own senses.”


The cultural obsession with anthropomorphism has allowed a far more harmful bias to go unnoticed: anthropo-fabulation. Philosopher Cameron Buckner coined the awkward term to describe our propensity to use an exaggerated view of human potential as the yardstick by which we evaluate all other types of intelligence. On this theory, humans underestimate animal intellect and exaggerate machine intelligence for the same reason: when we think of ourselves as the best, we believe that whatever is more like us is better.

Ironically, anthropomorphism and related techniques may help lessen the harms of such blatant elitism. If we understand how our own theory of mind makes sense of the “other” (or fails to), and respect the diversity of intelligence present on Earth, we can begin to relate to other creatures more responsibly. There are numerous ways to anthropomorphize animals with care. Kimmerer’s work shows that there is a spiritual route. Imbler, for their part, recently argued for considering the lives of sea blobs, which are related to all other life on Earth. And Yong’s most recent work draws on research into dogs’ olfaction and bats’ echolocation to help readers perceive the world the way animals do.

All of these approaches are grounded in empathy and in a form of objectivity that comes from attending to both similarities and differences. According to Safina, it is not projecting “if you observe other animals and come to the conclusion that they have thoughts and emotions”; it is observation.

AI will require applying these ideas more subtly still. Generally speaking, both anthropomorphism and anthropo-fabulation prevent us from understanding AI for what it is. Our relationship with AI will inevitably change as it grows more capable and our understanding of it expands. For now, projecting humanity onto technology obscures more than it illuminates.

Our tendency to project humanity onto animals, plants, and now AI is perhaps part of our social need to relate to what surrounds us: we handle a relationship better when we use a language we know, and we unconsciously hope the other party uses the same one. It’s like when we used to hit the TV to make it work. In a way, we “scared” it into working, because we couldn’t actually repair it and hoped the gesture would fix the electrical fault. Another reason may be that we need that relationship because we can’t find it elsewhere, so we attribute humanity to whatever, or whomever, we’re relating to. In the case of AI, it can be very engaging to talk to something that can answer questions we can’t normally ask; it’s like talking to someone who shares our interests.