GPT-3 and similar neural networks may drastically change the world of content

NLP (Natural Language Processing) algorithms like GPT-3 can generate unique answers to questions or prompts. Trained on vast amounts of existing content and code from the internet, they learn the patterns between data and relate that information to the requests they receive.
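As a loose intuition for what "learning the pattern between data" means, here is a toy sketch, nothing like GPT-3's actual architecture: a tiny bigram model that counts which word follows which in a corpus, then continues a prompt by sampling from those learned patterns.

```python
import random
from collections import defaultdict

# Toy illustration (not GPT-3): learn which word tends to follow which
# in a tiny corpus, then generate new text from those patterns.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": map each word to every word observed to follow it.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(prompt_word, length=5, seed=0):
    """Continue a prompt by repeatedly sampling a learned next word."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no pattern learned for this word
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Large language models do something far more sophisticated, predicting over billions of learned parameters rather than raw counts, but the principle is the same: output is shaped by statistical patterns absorbed from the training data.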

Although it can become an extremely useful and powerful tool for processing and collecting content almost instantly, it also brings some negative aspects. Given its ability to process content, it can now also create it automatically, in the most varied forms: websites, podcasts, and so on. Everything will be produced at impressive speed and in huge quantities, and the results will be indistinguishable from human work. If this technology can generate years' worth of material in a short time, it is easy to see that human content creators will be unable to compete, and that users will be unable to keep up with everything produced. We could end up with more content than humanity can ever consume, which is already the case with music: a lifetime would not be enough to listen to everything being released.

If this production of content were focused on profit, as is already happening for many content producers, we would have to deal with a disproportionate amount of low-quality content drowning out the work of those who don't produce it. Just think of how many people are already hypnotized by low-quality content that can be accessed ever faster. It's a bit like scrolling through book covers, reading only the titles, and then claiming to have read the books. The incentive to produce content that attracts attention rather than transmits value is therefore strong.

In addition, the data used to train this Artificial Intelligence risks passing on errors, defects, and improper biases. (In a neural network, a bias is a learned value added to a neuron's weighted input; it shifts whether, and how strongly, the neuron activates, and consequently contributes to a certain output.) Such errors can then flow into subsequent neural networks or algorithms built on top of the main one.
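To make the parenthetical concrete, here is a minimal sketch of a single artificial neuron (an illustration only, not how GPT-3 itself is implemented): the bias is added to the weighted sum of the inputs and shifts how easily the activation fires, so a badly learned bias skews every output the neuron contributes to.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    squashed into (0, 1) by a sigmoid activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

inputs = [0.5, -0.2]
weights = [0.8, 0.4]

# The same inputs and weights, but a different bias, change whether
# the neuron effectively "fires" (output above 0.5) or not.
low = neuron(inputs, weights, bias=-2.0)   # strongly suppressed
high = neuron(inputs, weights, bias=2.0)   # strongly encouraged
```

With `bias=-2.0` the output stays well below 0.5; with `bias=2.0` it rises well above it, even though nothing else changed.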

Another risk is the manipulation of information. If A.I.-generated content becomes indistinguishable from human content, opinions may also become indistinguishable, whether they come from an A.I. or from a person. And if presented in a particularly convincing way, such opinions could influence people's own in dangerous ways.

As ever more content becomes monetizable, we are heading in a direction where information and content serve traffic and monetization rather than primarily communicating something. GPT-3, or similar models, could give a further push in this direction, with the risk, in the long run, of a complete loss of trust in the internet as a medium.

So, will it be all about attention-seeking for profit?

The risk that this will quickly get out of hand is plausible, especially since there is no regulation, nor any reliable way to determine whether content comes from humans or not.

Additional risks could extend to data and identity theft, too.

Moreover, the potential of algorithms like GPT-3 will not be limited to producing text: they will also be able to program and create video games, videos, music, and more. The role of the creator, long considered one of the hardest for A.I. to replace, could soon be challenged by an Artificial Intelligence able to work as a blogger, YouTuber, copywriter, and so on.

The huge amount of data, and the need to find information ever more quickly and efficiently, undoubtedly call for the skill of an A.I. Algorithms like GPT-3 are undeniably a huge help in speeding up and simplifying many tasks, but we need to prevent them from taking over and dominating our lives in an uncontrolled way.

Source: datasciencecentral.com