The first effort to regulate AI

The European Parliament’s vote to approve its draft rules for the AI Act came the same day that EU antitrust regulators filed a new action against Google, making it a significant week for European tech policy.

The vote on the AI Act passed overwhelmingly, and the result has been hailed as one of the most significant milestones in AI policy to date. Roberta Metsola, president of the European Parliament, called it “legislation that will no doubt be setting the global standard for years to come.”

However, the process in Europe is a little convoluted. Before the proposed rules become law, members of the European Parliament will have to negotiate the fine print with the Council of the European Union and the European Commission. The final legislation will be a compromise among the three institutions’ quite different drafts.

The vote fixed the European Parliament’s position for the upcoming final negotiations. Modeled after the EU’s Digital Services Act, which sets legal guidelines for internet platforms, the AI Act takes a “risk-based approach”: it imposes restrictions according to how dangerous lawmakers believe an AI application may be. Businesses will also be required to submit their own risk assessments of their use of AI.

Applications judged to pose an “unacceptable” risk would be outlawed completely, while “high-risk” technologies would face new restrictions on their use and transparency requirements.

The act defines four levels of risk (sketched in code after the list):

  • Minimal risk, which covers applications such as video games and spam filters; no intervention is required.
  • Limited risk, which includes deepfakes and chatbots; transparency is required. ChatGPT falls into this category.
  • High risk, which includes systems used in transport, education, health, safety, law enforcement, and so on. These require a rigorous risk assessment; high-quality datasets to minimize risks and bias; activity logs for traceability; comprehensive documentation for regulatory compliance; and clear user information and human-oversight measures.
  • Unacceptable risk: for example, using information about people to profile and score them; such applications are banned outright.
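
To make the tiered structure concrete, here is a minimal Python sketch of how the four risk levels and their obligations could be represented. The tier names and obligations are paraphrased from the list above; everything else (`RiskLevel`, `OBLIGATIONS`, `obligations_for`) is purely illustrative and not part of the act.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined in the draft AI Act."""
    MINIMAL = "minimal"            # e.g. video games, spam filters: no intervention
    LIMITED = "limited"            # e.g. deepfakes, chatbots: transparency required
    HIGH = "high"                  # e.g. transport, education, health, law enforcement
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: banned outright

# Obligations attached to each tier, paraphrased from the list above.
OBLIGATIONS = {
    RiskLevel.MINIMAL: [],
    RiskLevel.LIMITED: ["disclose to users that they are interacting with an AI system"],
    RiskLevel.HIGH: [
        "rigorous risk assessment",
        "high-quality datasets to minimize risks and bias",
        "activity logs for traceability",
        "comprehensive documentation for regulatory compliance",
        "clear user information and human oversight",
    ],
    RiskLevel.UNACCEPTABLE: ["prohibited outright"],
}

def obligations_for(level: RiskLevel) -> list[str]:
    """Return the compliance obligations attached to a risk tier."""
    return OBLIGATIONS[level]

# Example: a chatbot such as ChatGPT sits in the limited-risk tier.
print(obligations_for(RiskLevel.LIMITED))
```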

In addition, some other rules could be implemented:

  • Requiring companies to disclose the copyrighted data used for training, so that artists and others can claim compensation.
  • Requiring safeguards so that models do not generate illegal content.

Here are a few of the key implications:

  1. A ban on emotion-recognition AI. The European Parliament’s proposed text forbids the use of AI that aims to identify people’s emotions in policing, education, and the workplace. Makers of emotion-recognition software claim that AI can tell when a student is struggling to understand a concept or when a driver may be nodding off. Although AI-driven facial detection and analysis has been criticized as inaccurate and biased, it is still permitted in the draft texts of the other two institutions, signaling a potential political battle.
  2. A ban on predictive policing and real-time biometrics in public spaces. This will be a significant legislative battle, because the various EU institutions still have to decide whether and how the prohibition makes it into law. Policing organizations argue that real-time biometric technologies should not be prohibited because they are essential for contemporary policing, and several nations, such as France, intend to expand their use of facial recognition.
  3. A ban on social scoring. Social scoring by public agencies, the practice of using data about people’s social conduct to make generalizations and profiles, would be prohibited. But the outlook for a ban on social scoring, which is frequently associated with authoritarian regimes like China’s, isn’t as straightforward as it might first appear. Using social-behavior data to assess applicants for mortgages and insurance policies, as well as in hiring and advertising, is common practice.
  4. New restrictions on generative AI. This is the first draft to propose rules for generative AI, requiring disclosure of any copyrighted material used in the training sets of large language models such as OpenAI’s GPT-4. European legislators have already raised questions about OpenAI over copyright and data-protection issues. The proposed law also mandates that AI-generated content be labeled as such. Given that the tech industry is expected to lobby the European Commission and individual member states heavily, the European Parliament must now convince them of the merits of its approach.
  5. New rules for social media recommendation systems. In contrast to the other institutions’ drafts, the Parliament’s text categorizes recommender systems as “high risk”. If approved, recommender systems on social media platforms would face much closer scrutiny of how they operate, and tech corporations could be held more accountable for the effects of user-generated content.

Margrethe Vestager, executive vice president of the European Commission, has described the risks associated with AI as pervasive, stressing concerns about widespread surveillance, vulnerability to social manipulation by unscrupulous actors, and the future of trust in information.

AI could genuinely pose a risk to humanity, and regulation was overdue. While some of these rules may safeguard the public in the years ahead, some companies believe that stringent rules could prevent the full development of their applications, just as some institutions believe that AI’s pervasiveness in people’s lives could improve security through greater oversight.