Germany and the EU Artificial Intelligence Act
Dr. Axel Spies is a German attorney (Rechtsanwalt) in Washington, DC, and co-publisher of the German journals Multi-Media-Recht (MMR) and Zeitschrift für Datenschutz (ZD).
The impact of the Artificial Intelligence Act (AIA) proposed by the European Commission, and currently debated in the European Parliament (EP), has been underestimated in the United States. With approximately 3,000 amendments that must be reconciled, the AIA represents the first attempt to regulate artificial intelligence (AI) through a uniform law from cradle to grave. The AIA focuses on providers of AI systems that place them on the market or use them for their own purposes.
Germany is actively contributing to the debate: AI is mentioned in the federal government’s Coalition Treaty as a “digital key technology,” and a European AIA is generally supported. In a recent article in the German daily Frankfurter Allgemeine Zeitung, data privacy experts argue that AI should be narrowly regulated on the basis of the EU’s General Data Protection Regulation: “In most cases, AI is harmless and useful, and its use may often even be ethically warranted, for example in the area of healthcare or in the fight against crime, once it has proven its suitability.” U.S.-based companies should pay close attention to the AIA because of its extraterritorial effect: Like the GDPR, it will likely cover AI systems in the United States if the AI “output” is “used” in the EU. The proposed fines are steep, even higher than those foreseen under the GDPR: up to 6 percent of the violating company’s worldwide turnover.
The German debate on AI regulation seems to have shifted entirely to the EU. In a recent written response to a formal information request by the Alternative für Deutschland (AfD) party, the German government states that “negotiations [on the AIA at the EU level] are ongoing. Points whose clarification is considered central to progressing negotiations between EU member states from the German government’s perspective are the scope of the regulation, the definition of ‘AI system,’ and the scope of prohibited AI systems and high-risk AI systems.” For instance, the new German government has said it supports a full ban on the use of facial recognition and AI in public places.
German industry and trade associations are currently lobbying the EU institutions to amend the AI Act. Most of German industry supports the concept of the AIA. Iris Plöger, member of the executive board of the Federation of German Industries (BDI), has summarized German industry’s position as follows: “With the draft [AIA] regulation, the EU Commission presents an initial proposal for a legal framework for AI that is shaped by European values. It is right that the proposal focuses on AI systems that may be associated with particularly high risks.” At the same time, she voices the BDI’s concern that overregulation of AI would discourage the development of innovative applications of the key technology at the outset. She adds that the legal framework created by the AIA must be well balanced so that European companies can turn their industrial strength in the AI sector into a decisive advantage when competing with countries such as China, the United States, or Israel.
Currently, there are at least three major challenges for the AIA.
Challenge 1: How to Define and Catalog AI?
It is already a challenge to define AI in an understandable way. But it is even more difficult to catalog AI applications, because the AIA follows a risk-based approach. What the EU legislators have in mind is a pyramid with “unacceptable risk” applications at the top (prohibited AI), a second tier of “high risk” applications (heavily regulated AI), and AI with limited risks at the bottom. The AIA already contains a long list of AI systems that fall into the “high risk” category. The idea is that the existing EU product safety regulations would be amended to reference the AI requirements so that the two regimes work together. Numerous obligations will need to be fulfilled, mainly by AI providers, both before and after a product is placed on the market: suitable risk management systems, record keeping, registration of AI systems in a database to be set up by the EU Commission, and human oversight. There is fundamental opposition to AI from some EP political parties that see AI as a general threat to humanity and want to balloon the category of “high risk” AI applications so that the regulatory scheme would no longer look like a pyramid but more like a pumpkin.
Challenge 2: Why Is An AI Act Needed at All?
Many critics argue that the EU’s AIA is superfluous. AI providers are already regulated by the GDPR, with its broad definition of personal data. The GDPR already contains the principles of data minimization and privacy by design, as well as restrictions on automated decision-making and profiling, that (also) cover AI. Some data protection agencies have already published national guidelines on the use of AI. In addition, the new EU Digital Services Act contains provisions on “algorithmic transparency” that cover AI and certain large market players. There are also EU rules on electrical devices and rules for the financial and healthcare sectors that cover AI.
There is thus a huge potential for legal overlap, including with EU competition law. Some sections of the AIA proposal specifically refer to the GDPR and its level of data protection, which the AIA should not undermine. But many issues remain open: What is the legal basis for data processing when training an AI application? More broadly: Which data can be used to train an AI system? How will the parties in a supply chain (providers, distributors, importers, authorized representatives) share the huge compliance burden? How do the AIA’s definitions square with the GDPR’s definitions of controllers and processors? Who would monitor the implementation of the AI Act within a company? Will the AI Act allow class actions for damages (as under the GDPR) in the courts?
Challenge 3: AI Act and the Brussels Effect—Will It Work?
The AI Act has an interesting political component: the Brussels Effect. Coined by a U.S. law professor, the term refers to the EU’s unilateral power to regulate global markets and challenges the view that the EU is a declining world power. The main example of this power is the GDPR, which is viewed in Brussels as a legal success story, as other countries follow it or must comply with it so that their industries can do business in the EU (the market location principle). One example of the Brussels Effect is that some U.S. states, such as California, have incorporated parts of the GDPR into their recent privacy laws. It remains an open question, however, whether countries outside the EU will simply copy and paste the AIA’s complicated definitions and classifications into their own laws.
Whatever the outcome of the debate, the timetable for the AIA is ambitious. The stated goal is to adopt the AIA as a directly applicable regulation before the next EU elections in 2024. The AIA’s proponents hope that the Czech Presidency of the Council of the EU can broker agreement on a text among EU member states by the end of this year. That would lay the foundation for “trilogue” negotiations in 2023 between the Council (led by Sweden and then, likely in the second half of the year, Spain), the European Parliament, and the European Commission. Until the AIA is adopted, discussion will likely continue on which rules in the AI Act are necessary and which would kill or inhibit innovation.