The Future of AI Regulation: Draft Legislation from the European Commission Shows the Coming AI Legal Landscape

by Avi Gesser, Anna R. Gressel, and Steven Tegrar

This post is Part II of a five-part series by the authors on The Future of AI Regulation. For Part I, discussing U.S. banking regulators’ recent request for information regarding the use of AI by financial institutions, click here.

On April 21, 2021, the European Commission published its highly anticipated draft legislation governing the use of AI, which is being referred to as the “GDPR of AI” because, if enacted, it would place potentially onerous compliance obligations on a wide spectrum of companies using AI systems. The Commission proposes to regulate AI based on the potential risk posed by its intended use: AI systems that pose an “unacceptable risk” would be banned outright; AI classified as “high risk” would be subject to stringent regulatory and disclosure requirements; and certain interactive, deepfake, and emotion recognition systems would be subject to heightened transparency obligations.

Notably, the increased focus on transparency in the use of AI, coupled with specific reporting obligations for AI providers and users, will almost certainly result in more scrutiny of AI by consumers, regulators, and stakeholders. Indeed, in the same way that GDPR caused companies to significantly expand their privacy compliance, the draft AI legislation is designed to encourage companies to treat AI as an enterprise-wide risk that requires attention from their leadership in the development, deployment, and oversight of their AI systems. The draft legislation reinforces that encouragement with the prospect of severe legal and reputational consequences for companies that fail to implement robust compliance policies around AI systems that pose risks to EU residents. In addition, the Commission proposes a labeling regime (the CE marking of conformity), whereby certain AI systems would need to be assessed and certified for conformity by a qualifying “notified body” before entering the market.

Although the draft legislation will probably not take effect for several years, its broad scope and the specificity of its obligations position the EU as the epicenter of AI regulation. Just as GDPR set the standard for subsequent privacy laws, the draft AI legislation will likely serve as the benchmark against which future AI regulations are measured. Below we provide a quick overview of the key features of this landmark draft AI legislation.

Key Features of the Commission’s Draft AI Legislative Framework

How the Regulation Will Apply to U.S. Companies

The Commission intends the legislation to have broad extraterritorial reach, covering AI providers or users “irrespective of whether they are established within the Union,” so long as any AI systems affect users within the EU. In particular:

  • Providers – persons or entities that develop an AI system or place it on the market under their own name or trademark, even if provided free of charge, would be covered if (i) they place AI systems on the market or into service within the EU, or (ii) the output produced by the AI system is used in the EU.
  • Users – persons or entities that use an AI system under their authority, other than in a personal capacity, would be covered if (i) they are located within the EU, or (ii) the output produced by the AI system is used in the EU.

In many instances, multiple entities are involved in the development, training, marketing, and branding of AI systems, which could result in several “providers” for a particular AI system.
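Purely as an illustration of how the two applicability tests above fit together, here is a minimal sketch in Python. The data model and function names are our own hypothetical constructs, not terms defined in the draft legislation, and the actual legal analysis will turn on facts that no boolean flag can fully capture.

```python
from dataclasses import dataclass

@dataclass
class AISystemFacts:
    """Hypothetical fact pattern for one AI system (our own illustrative
    model -- these field names do not appear in the draft legislation)."""
    placed_on_eu_market: bool   # placed on the market or into service in the EU
    output_used_in_eu: bool     # output produced by the system is used in the EU
    user_in_eu: bool            # the user is located within the EU
    personal_use: bool          # the user acts in a purely personal capacity

def provider_covered(facts: AISystemFacts) -> bool:
    """A provider is covered regardless of where it is established if it
    places the system on the EU market or into service, or if the
    system's output is used in the EU."""
    return facts.placed_on_eu_market or facts.output_used_in_eu

def user_covered(facts: AISystemFacts) -> bool:
    """A user is covered if located in the EU or if the system's output
    is used in the EU -- unless acting in a personal capacity."""
    if facts.personal_use:
        return False
    return facts.user_in_eu or facts.output_used_in_eu

# Example: a U.S. company whose system's output reaches EU customers
# would satisfy both tests under this reading of the draft.
us_vendor = AISystemFacts(placed_on_eu_market=False, output_used_in_eu=True,
                          user_in_eu=False, personal_use=False)
assert provider_covered(us_vendor) and user_covered(us_vendor)
```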

Types of AI Systems That Will Be Banned

The draft law bans the use of certain AI, including:

  • Manipulative or exploitative systems. The legislation would prohibit AI systems that are designed to manipulate human behavior or decisions through “subliminal techniques,” or to exploit vulnerabilities of groups of persons due to age, physical, or mental disability, in a manner that would materially distort their behavior and cause them or others physical or psychological harm. These are sometimes referred to collectively as “Dark Patterns.” This prohibition will likely need further clarification because many common AI systems have been alleged to manipulate human behavior and exploit vulnerabilities (e.g., AI used for gaming, advertising, social media, and dating apps).
  • “Real-time” remote biometric identification systems, such as facial or gait recognition systems. The use of these systems in public places for law enforcement purposes would be prohibited, subject to several enumerated exceptions.
  • General-purpose social scoring. The legislation would also prohibit social scoring by or on behalf of a public authority, based on a person’s social behavior or predicted personality characteristics, where the score leads to detrimental treatment of a person or group under certain circumstances.

AI Systems That Will Be Regulated as “High Risk”

The draft AI legislation expressly sets out in Annex III the applications considered to be “high risk,” including:

  • AI systems that evaluate consumer creditworthiness or establish their credit score, with the exception of systems provided by small entities for their own use;
  • AI systems for recruiting and workplace management, including evaluating candidates through interviews, making decisions concerning promotions or termination, or monitoring and evaluating employee performance or behavior;
  • AI systems for education and vocational training;
  • Systems for biometric identification of natural persons, including both “real-time” and post hoc remote identification tools (other than the law enforcement uses described above that are banned);
  • AI systems for management and operation of critical infrastructure;
  • AI systems concerning access to public assistance benefits or the dispatch of emergency first response services;
  • AI systems used by law enforcement, including risk assessments, polygraphs, deepfake detection, and crime analytics.

The Commission would be empowered to add AI systems to this list if they pose a risk of harm to health and safety, or an adverse impact on fundamental rights. Factors that the Commission will consider in determining whether to classify additional AI applications as “high risk” include: the intended purpose of the AI system, the potential impact of future harm, and the vulnerability of intended users due to an imbalance of power, knowledge, age, or economic or social circumstances. Additionally, AI systems that produce decisions that are not easily reversible, or where “for practical or legal reasons it is not reasonably possible to opt-out from [the] outcome,” are also more likely to be considered high risk. Notably, the Commission also states that it will consider “reports or documented allegations” of prior incidents of harm in classifying a system as “high risk,” signaling to companies that it will carefully consider claims of AI bias or other AI incidents that may cause injury.
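To summarize the tiered structure described in this and the preceding sections, the following sketch models the draft’s risk triage as a simple Python function. The category labels and set-membership test are our own loose paraphrases for illustration only; under the draft, classification turns on the prohibited-practices provisions and Annex III, not on keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "stringent regulatory and disclosure requirements"
    TRANSPARENCY = "heightened transparency obligations"
    MINIMAL = "no new obligations under the draft"

# Loose paraphrases of the draft's categories, for illustration only.
PROHIBITED_USES = {
    "subliminal manipulation",
    "exploitation of vulnerable groups",
    "real-time remote biometric ID in public for law enforcement",
    "general-purpose social scoring by public authorities",
}
HIGH_RISK_USES = {  # paraphrasing Annex III
    "creditworthiness evaluation",
    "recruiting and workplace management",
    "education and vocational training",
    "biometric identification",
    "critical infrastructure",
    "public assistance benefits",
    "emergency first response dispatch",
    "law enforcement risk assessment",
}
TRANSPARENCY_USES = {"chatbot", "deepfake generation", "emotion recognition"}

def classify(intended_use: str) -> RiskTier:
    # Prohibited uses are checked first, then Annex III high-risk uses,
    # then transparency-only categories, with minimal risk as the default.
    if intended_use in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if intended_use in HIGH_RISK_USES:
        return RiskTier.HIGH
    if intended_use in TRANSPARENCY_USES:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

assert classify("creditworthiness evaluation") is RiskTier.HIGH
```

The ordering of the checks mirrors the draft’s hierarchy: a use that is prohibited outright never reaches the high-risk analysis, and a system outside the enumerated categories defaults to the minimal tier.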

Part III of this series will discuss new obligations for companies using AI under the EU’s draft legislation, and how companies can begin planning for compliance now. 

Avi Gesser is a partner, and Anna R. Gressel and Steven Tegrar are associates, at Debevoise & Plimpton LLP. This post originally appeared on Debevoise’s Data Blog.

Disclaimer

The views, opinions and positions expressed within all posts are those of the authors alone and do not represent those of the Program on Corporate Compliance and Enforcement or of New York University School of Law.  The accuracy, completeness and validity of any statements made within this article are not guaranteed.  We accept no liability for any errors, omissions or representations. The copyright of this content belongs to the authors and any liability with regards to infringement of intellectual property rights remains with them.