Decoding AI Governance: New Cooperation or New Game in Europe and the United States
In the fast-evolving landscape of artificial intelligence (AI), competition among major Western powers for dominance in AI rule-making is intensifying. In December 2023, the European Parliament, EU member states, and the European Commission reached a trilateral agreement on the “AI Act,” poised to become the world’s first comprehensive regulatory framework for artificial intelligence. Thierry Breton, the EU’s Internal Market Commissioner, said the “AI Act” is not just a rulebook but a launchpad for EU startups and researchers to lead the global AI race.
1. EU: Pioneering Legislation in AI Governance
The European Union has embraced a philosophy of legislating ahead of the technology, gradually refining its legal framework over the past few years. In April 2018, the European Commission outlined a coordinated approach to the development and deployment of AI, aiming to increase investment in AI research and innovation. The Commission’s White Paper on Artificial Intelligence, published in February 2020, proposed a regulatory framework covering risk assessment, transparency, data use, and legal liability, categorizing AI applications by risk level. In April 2021, the Commission proposed the world’s first dedicated AI regulation, intended to ensure the safety, transparency, traceability, non-discrimination, and environmental sustainability of AI systems used in the EU. In June 2023, the European Parliament adopted its negotiating position on the “AI Act,” categorizing AI systems by risk, imposing restrictions on deepfakes, and demanding greater transparency from generative AI.
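The Act’s risk-based approach can be illustrated with a minimal sketch. The four tier names below follow the Act’s widely reported categories (unacceptable, high, limited, minimal); the obligation summaries, the `OBLIGATIONS` mapping, and the `obligations_for` helper are illustrative assumptions for this article, not language from the regulation itself:

```python
# Minimal sketch of the EU AI Act's four-tier risk model.
# Tier names follow the Act's widely reported categories; the
# obligation summaries and this mapping are illustrative only.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency duties (e.g. chatbots, deepfakes)
    MINIMAL = "minimal"            # largely unregulated


# Illustrative one-line summaries of what each tier entails.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited from the EU market",
    RiskTier.HIGH: "risk management, data governance, human oversight, conformity assessment",
    RiskTier.LIMITED: "disclose to users that they are interacting with AI",
    RiskTier.MINIMAL: "no mandatory obligations; voluntary codes of conduct",
}


def obligations_for(tier: RiskTier) -> str:
    """Look up the (illustrative) obligation summary for a risk tier."""
    return OBLIGATIONS[tier]
```

The point of the tiered design is that regulatory burden scales with potential harm: for example, `obligations_for(RiskTier.LIMITED)` returns only a transparency duty, while high-risk systems face the full conformity regime.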
2. US: Encouraging Technological Development in AI Policy
In contrast, the United States focuses its AI governance policy on the development and application of the technology, with policies relatively lenient compared to the EU’s. Although it faces its own AI security challenges, the U.S. has not developed a formal risk-classification scheme for AI comparable to the EU’s. Instead, it emphasizes addressing fairness issues caused by algorithmic discrimination and ensuring data privacy and security. The U.S. governance approach leans toward industry self-regulation and guidelines, urging companies to formulate their own AI ethics codes and to reduce the risk of algorithmic discrimination through internal audits and self-supervision.
In recent years, as technologies like generative AI have matured, the U.S. government has progressively strengthened oversight to ensure their safety and reliability. Since the release of the “Blueprint for an AI Bill of Rights” in October 2022, the U.S. government has issued multiple guiding principles for the design, development, deployment, and use of AI systems. This approach encourages industry to comply voluntarily with these principles, forming the fundamental framework of AI governance in the United States.
3. EU-US: Rule-Making – A Blend of Competition and Cooperation
As leaders in AI rule-making, the EU and the U.S. engage in both competition and cooperation. The U.S.-EU Trade and Technology Council, established in June 2021, serves as a platform for this collaboration. Building on shared values, both parties aim to guide the development of emerging technologies and seek alignment on risk regulation. In December 2022, the council released its “Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management,” which guides risk management and the development of trustworthy AI through terminology standardization, standard setting, and risk monitoring. The document emphasizes joint U.S.-EU support for, and leadership of, international technical standardization efforts.
However, despite the convergence in certain aspects of AI regulation, practical coordination remains challenging due to structural differences. At the strategic level, the U.S. views AI as a crucial national security asset for great power competition, intending to expand its technological influence. In contrast, the EU, driven by economic development and values, is more concerned about the ethical challenges posed by AI technology. There are disparities in risk management philosophy, with the U.S. encouraging innovation and development while prioritizing scientific and flexible regulation. The EU adopts a dual approach, balancing development and regulation through high-standard legislation.
4. Global Collaboration in Shaping AI Development Environment
At the inaugural AI Safety Summit, held at Bletchley Park in the UK in November 2023, representatives from the U.S., the UK, the EU, China, India, and other nations discussed the risks and opportunities arising from the rapid development of AI technology. Twenty-eight countries and the EU subsequently signed the “Bletchley Declaration,” committing to jointly build trustworthy and responsible AI. The UK, as host, announced that the next AI Safety Summit would take place in France a year later, with South Korea co-hosting a virtual summit in the intervening six months.
It is evident that as AI continues to evolve and proliferate, international regulation and standardization of AI will become a global issue. The legislative efforts and cooperation of the EU and the U.S. offer lessons for other countries. Beyond these few frontrunners, however, many other nations are actively participating in shaping the international AI governance standard system. All countries should leverage their influence to collaboratively create an equitable, open, and mutually beneficial environment for the development of AI.
In the evolving landscape of AI governance, Europe and the United States exhibit distinct approaches, reflecting their strategic priorities and values. While both regions strive for collaboration, structural differences pose challenges. The global community should actively contribute to shaping a conducive environment for AI development. As we navigate the complexities of AI governance, it is crucial to find common ground and build a framework that fosters innovation while ensuring ethical and responsible AI use.
FAQs: Deciphering AI Governance
Q1: How does the EU classify AI risks in its regulatory framework?
The EU categorizes AI applications into tiers by risk level, from minimal and limited risk through high risk up to unacceptable risk, with regulatory obligations that scale accordingly, ensuring a nuanced approach to regulation.
Q2: What is the U.S. stance on algorithmic discrimination in AI?
The U.S. focuses on addressing fairness issues caused by algorithmic discrimination, emphasizing industry self-regulation.
Q3: What challenges arise in the cooperation between the EU and the U.S. in AI governance?
Structural differences, especially in strategic priorities and risk management philosophy, pose challenges to effective coordination.
Q4: How are other countries contributing to global AI governance?
Many nations actively participate in shaping international AI governance standards, emphasizing the need for a collaborative and inclusive approach.
Q5: What is the significance of the Bletchley Declaration in AI safety?
The Bletchley Declaration signifies a commitment among 28 countries and the EU to jointly build trustworthy and responsible AI, emphasizing global cooperation.