Building Transatlantic Trust for AI Innovation and Regulation
Yixiang Xu is the China Fellow and Program Officer, Geoeconomics at AICGS, leading the Institute’s work on U.S. and German relations with China. He has written extensively on Sino-EU and Sino-German relations, transatlantic cooperation on China policy, Sino-U.S. great power competition, China’s Belt and Road Initiative and its implications for Germany and the U.S., Chinese engagement in Central and Eastern Europe, foreign investment screening, EU and U.S. strategies for global infrastructure investment, 5G supply chain and infrastructure security, and the future of artificial intelligence. His written contributions have been published by institutes including the Chinese Academy of Social Sciences, the United States Institute of Peace, and the Asia Society’s Center on U.S.-China Relations. He has spoken on China’s role in transatlantic relations at seminars and international conferences in China, Germany, and the U.S.
Mr. Xu received his MA in International Political Economy from The Josef Korbel School of International Studies at The University of Denver and his BA in Linguistics and Classics from The University of Pittsburgh. He is an alumnus of the Bucerius Summer School on Global Governance, the Global Bridges European-American Young Leaders Conference, and the Brussels Forum's Young Professionals Summit. Mr. Xu also studied in China, Germany, Israel, Italy, and the UK and speaks Mandarin Chinese, German, and Russian.
Artificial intelligence is among the most promising and consequential emerging technologies, poised to bring striking changes to economic productivity, social interactions, and national security. Its significance is further accentuated by the growing geostrategic competition between the world’s top AI innovators, the United States and China, as well as the European Union’s ambition to define and implement rules that promote human-centric, ethical AI. Both the United States and Germany have tremendous interests in harnessing AI’s potential and, at the same time, in preventing the development, use, and proliferation of AI systems that run contrary to liberal democratic values and ethics.
On both sides of the Atlantic, governments have undertaken parallel efforts to mitigate privacy concerns and prevent AI risks, including algorithmic bias and socioeconomic inequality. In the United States, the National AI Initiative Office coordinates and supports the National AI Initiative to advance trustworthy AI through technical standards and governance and to leverage trustworthy AI for government and national security. U.S. federal agencies, including the Equal Employment Opportunity Commission (EEOC) and the Department of Housing and Urban Development’s Office of Fair Housing and Equal Opportunity (FHEO), have begun to explore regulatory changes to enforce existing laws as increasing AI adoption generates new concerns in areas such as employment and housing discrimination. Multiple U.S. state and local authorities have banned or limited the use of facial recognition technology by law enforcement agencies. Similarly, the European Parliament called for a ban on police use of facial recognition technology in public places and on predictive policing to profile potential criminals. In April this year, the European Commission proposed the Artificial Intelligence Act (AIA), which aims to safeguard individual safety and fundamental rights and imposes ex ante regulatory procedures on “high risk” AI systems.
Despite these shared values and interests, aligning AI policies across the Atlantic faces several challenges. Many Europeans continue to view the United States and U.S. digital technology giants as the principal targets of the EU’s persistent digital sovereignty movement. The EU’s comprehensive approach to AI and data regulation emphasizes user rights and privacy, and it differs from a still fragmented U.S. regulatory environment that takes care to avoid stifling innovation. At the same time, the EU’s trailing AI capacity raises the question of its digital geostrategic significance for the United States and could incentivize Washington to pursue AI agreements with other key digital partners. Without dedicated and extensive transatlantic consultations, the EU’s efforts to externalize its regulatory governance model, including the AIA and the Digital Services Act, could lead to more transatlantic frictions.
Furthermore, the EU and many of its members, including Germany, have been reluctant to define their AI strategies with a strong geostrategic focus, especially in the context of pushing back against China’s state-driven, autocratic AI governance model. Europe’s lack of urgency regarding Beijing’s digital agenda diverges significantly from the U.S. government’s call for a democratic digital alliance and risks cooling bipartisan enthusiasm for transatlantic digital cooperation.
Under the Biden administration, the United States has advocated AI cooperation with its European allies. Through the U.S. Innovation and Competition Act, Congress directs the U.S. government to harmonize technology governance regimes and coordinate on research and development initiatives with partners. The establishment of the EU-U.S. Trade and Technology Council and several other initiatives, such as the Global Partnership on Artificial Intelligence and the AI partnership for defense, provides new platforms for enhanced transatlantic dialogues on AI innovation, standards setting, and regulation. Additionally, the U.S. Department of State will work with Congress to create a new Bureau of Cyberspace and Digital Policy to coordinate international cyber and digital policy, as well as a new special envoy for critical and emerging technology. Transatlantic cooperation is also widely supported by U.S. industry stakeholders, whose interests are better served by compatible legal and regulatory frameworks that could lower compliance costs across international jurisdictions and create a larger market as well as more competitive global digital supply chains.
With the AIA’s conclusion still years away and national authorities retaining the lead in formulating and implementing AI policies and regulations, bilateral consultation and cooperation with Germany, the EU’s largest economy, will be particularly beneficial for the United States. Biden administration officials have called for strong bilateral relationships between the United States and its like-minded European partners to enhance regular exchanges on AI policy implementation and administrative applications, facilitating regulatory convergence in practice. Germany is already developing AI tools to aid law enforcement, and the three political parties negotiating to form the next German government have agreed to accelerate digital applications throughout the public administration and restructure the country’s AI strategy.
Future efforts by German government ministries and agencies to adapt their regulatory policies to AI deployments in multiple sectors create an additional area for bilateral transatlantic dialogues, as U.S. federal agencies pursue similar efforts. Mainstream German political parties’ growing calls to deal with China more critically could provide new opportunities to advance bilateral AI collaboration that safeguards human rights, fundamental freedoms, privacy, and the rule of law through the development of common ethical and regulatory standards. These engagements between Washington and Berlin could serve as a blueprint for productive transatlantic AI cooperation for other EU members.
Several issues deserve particular attention in implementing AI cooperation across the Atlantic. The United States and its European partners should establish a common definition of AI to set the scope of regulatory cooperation. While a risk-based approach is broadly accepted by the United States and the EU, the clear identification and classification of risks, as well as the types of risk assessment, need to be worked out through bilateral dialogues. It is also crucial to advance negotiations on transatlantic data governance in order to establish a robust and coherent framework for data protection and data sharing, thus providing legal certainty while improving data access for AI innovation in crucial sectors such as medical research and AI-enabled infrastructure. A joint AI policy repository between the United States, the EU, and individual European nations, with up-to-date information on innovation developments and regulatory experiments, such as the AI risk management framework in development at the U.S. National Institute of Standards and Technology (NIST), would help to save resources, mutually inform policymaking and product design, and potentially enhance the transferability of trustworthy, compatible AI standards and products across transatlantic jurisdictions.