From a Race to AI to a Race to AI Regulation

Introduction

  • AI is increasingly pervasive and strategically important for governments worldwide; it is widely regarded as one of the most transformative forces of our time.

  • AI’s general-purpose nature promises benefits across many domains, boosting individual, economic, and societal welfare.

  • Global competition to develop and deploy AI (a ‘race to AI’) is driven by ambitions to lead in talent, research, startups, software/hardware, and investments.

  • Winning the race is framed as essential not only for market leadership and economic prosperity but also (in some views) for national security.

  • High costs of non-adoption and the fear of being left behind intensify the rhetoric of the AI race.

  • Yet AI brings substantial ethical and legal risks (privacy, non-discrimination, manipulation, safety, security).

  • Regulators worldwide are weighing new or revised regulations to minimize AI harms while maximizing benefits; regulation is seen as a means to build trust and facilitate uptake.

  • The core argument: the global AI race is generating a parallel global race to regulate AI in a way that fosters trustworthy AI (lawful, ethical, and robust).

  • Trustworthy AI is a key objective, shaping regulatory choices and uptake; its concrete meaning includes transparency, accountability, privacy, and non-discrimination.

  • Trustworthy AI is controversial: some argue trust should not be placed in machines at all; others read the concept as shorthand for the oversight and governance requirements it embodies, rather than as trust placed in the AI itself.

  • Regulatory design aims to balance enabling innovation with protection of fundamental rights, democracy, and rule of law.

  • The paper divides the analysis into: a) a toolbox of AI regulation and its obstacles; b) whether regulatory competition for AI is a possibility; c) whether it is a reality; d) whether it is desirable; and e) conclusions.

Regulating Artificial Intelligence: Tools and Obstacles

The regulatory toolbox in context

  • Regulation is a means to influence or constrain the behavior of actors (individuals, groups, or legal entities).

  • Lawrence Lessig’s four modalities of regulation: law, social norms, the market, and architecture (the design of technology).

  • Regulatory goals: protective (minimize harms) and enabling (stimulate beneficial innovation). In AI, enabling regulation can include subsidies, tax incentives, or fast-track migration policies for AI-skilled workers.

  • Examples of protective regulation: mandatory transparency and information obligations for AI developers and deployers; ethical guidelines for sector-specific use cases.

  • Boundaries between modalities are porous; laws can codify norms, norms can influence design, and architecture can shape markets.

  • Choosing a modality or tool has consequences for presumptions, the burden of proof, compliance costs, and liability.

  • Regulators must understand: landscape of existing regulations, limits of their toolbox, and jurisdictional competence (e.g., the EU’s competences are legally bounded).

  • A holistic view of AI regulation should consider the broader regulatory framework (tax, tort, privacy/data protection, IP, competition, health, public procurement, consumer protection, etc.).

  • Trustworthy AI (as defined by the EU High-Level Expert Group on AI) is an AI system that is not only lawful but also ethical and robust. Components include transparency, accountability, privacy, and non-discrimination.

  • Trustworthy AI requirements can be mandatory (affecting market access) or voluntary (influencing buyer/deal terms).

  • Regulation can shape the behavior of multiple actors across modalities; it is not limited to one legal domain.

  • Regulatory coherence is important: policies across different domains must align to avoid imposing conflicting obligations.

  • The EU’s better-regulation framework and the notion of an ‘innovation principle’ illustrate the balancing act between enabling innovation and protecting societal values.

  • The European Commission has signaled an intention to propose AI-specific regulation in the first quarter of 2021; CAHAI (the Council of Europe’s Ad hoc Committee on Artificial Intelligence) is exploring a legal framework for AI in line with human rights, democracy, and the rule of law.

  • Key takeaway: the regulatory toolbox for Trustworthy AI is diverse and can be deployed in a coordinated, but context-specific, manner; but defining AI and choosing the right tool are non-trivial tasks.

Regulating AI: Not a Walk in the Park

  • There is no universally accepted definition of AI; multiple definitions exist (e.g., EU AI HLEG definition, Russell & Norvig's definition). This definitional plurality complicates regulation, subsidies, and cross-country comparisons.

  • There is no single AI: many techniques and applications fall under the umbrella term, and the scope of what counts as AI evolves over time (the “AI effect”).

  • Without a unitary definition, governments may vary in what they count as AI, affecting policy design (e.g., eligibility for subsidies or regulatory obligations).

  • Regulation that singles out AI can have undesired consequences if the targeted risks also arise from other technologies; technology-neutral regulation can help avoid mis-targeting and keeps obligations consistent across technologies.

  • GDPR is cited as an example of technology-neutral regulation focusing on the protection of personal data rather than on a specific technology; this approach shifts the focus to risks and rights rather than the technology itself.

  • In some instances, distinct features of AI may nonetheless justify AI-specific requirements, but these should be balanced against comparable risks arising from other technologies.

  • The regulation of AI faces tensions familiar from other evolving technologies: flexibility to adapt with the technology versus predictability and legal certainty; opaque (“black box”) decision-making; self-learning behavior with potentially unpredictable outcomes; delegation of human authority and oversight; and the broader socio-technical context in which AI operates.

  • The European Commission announced plans for AI-specific regulation; CAHAI is examining a legal framework; ongoing debates about the appropriate scope and definitions remain.

  • The regulation of AI is complicated by domain-specific risks and contexts (criminal justice vs. manufacturing vs. healthcare), suggesting that universal “AI regulation” may be less appropriate than application- and context-specific regulation with overarching risk-based criteria.

Regulatory Competition: A Possibility?

  • The concept of regulatory competition treats regulation as a commodity that can be used to attract economic activity; competition between jurisdictions can lead to trial-and-error improvements and discovery of optimal regulatory approaches.

  • Conditions identified in the literature for effective regulatory competition (Tiebout-inspired):
    1) decentralized decision-making power;
    2) free information and transparency about regulatory efficiency;
    3) the ability to swiftly change course in light of better solutions;
    4) low transaction costs for subjects to switch jurisdictions; and
    5) externalities that are insignificant or manageable.

  • In AI, states retain the competence to regulate unless constrained by higher authorities; international organizations are pursuing consensus but do not yet have binding enforcement mechanisms, so national regimes remain the primary regulators.

  • Information about regulatory regimes is often not fully transparent; specialized legal advice can give governments and firms an advantage in understanding regimes and moving across jurisdictions.

  • Regulatory change tends to be slower at the national level, especially in federal or supranational regimes (e.g., EU member states must cooperate to adopt new measures).

  • Moving AI resources (talent, capital, infrastructure) across jurisdictions entails costs; migration policies and immigration regimes can either facilitate or hinder regulatory competition.

  • Negative externalities: a country without protective AI regulation may impose costs on others (e.g., cross-border harms); conversely, export-focused AI may limit domestic motivation to regulate if harms occur abroad.

  • The conditions for robust regulatory competition are doubtful in practice, and empirical evidence is lacking on the ease of switching regimes and the magnitude of externalities; nonetheless, the potential for a race to the bottom exists if protections are weakened in pursuit of competitiveness.

  • The paper discusses a potential “race to AI regulation” that could still be compatible with meaningful protection, especially if first movers establish higher standards that others imitate, yielding a form of “regulatory co-opetition.”

  • The GDPR’s status as a global standard provides a real-world example of first-mover effects and regulatory leadership influencing other jurisdictions.

Regulatory Competition: A Reality?

  • In practice, a full-blown regulatory race to the bottom has not materialized; trust in AI is increasingly seen as valuable in its own right, a quality the market can reward through consumer preferences and business decisions.

  • Research by the Capgemini Research Institute indicates that consumers are more loyal to, and willing to pay more for, companies perceived to use AI ethically; this supports the view that ethics can be a competitive differentiator.

  • Civil society and private sector voices advocate for protective regulation to accompany innovation; there is a growing belief that regulation can support innovation and avoid a backlash if risks are adequately managed.

  • The GDPR’s regulatory export and perceived EU leadership on data protection have contributed to a broader trend toward regulatory convergence (data protection and, increasingly, trustworthy AI) and to a global dialogue on AI governance.

  • A global convergence trend is visible in: OECD ethics principles on AI (May 2019); G20 ministerial endorsement of ethics principles (June 2019); UNESCO’s work on global AI ethics standards; Council of Europe’s CAHAI initiative; active standard-setting by ISO/IEC, IEEE, and ITU; and national ethics guidelines (Japan, Canada, China, Dubai, Singapore, Australia, etc.).

  • There is a parallel trend of convergence in some AI governance standards within international organizations, which creates a platform for competition within a harmonized framework rather than pure competition between jurisdictions.

  • The emergence of a “co-opetition” model: competition and cooperation can coexist, with a base layer of harmonization to provide transparency and a level playing field, while allowing jurisdictions to exceed the baseline for higher protective standards.

  • In sum, while a regulatory race to the bottom remains a concern, the practical landscape shows growing convergence and cooperation at global and regional levels, suggesting that AI governance is moving toward a shared baseline.

Regulatory Competition: Desirable?

  • Convergence to global standards that effectively address AI risks is desirable because it offers baseline protections and reduces cross-border externalities; it can also build public trust and support AI uptake.

  • Yet convergence presents risks: it may lock in a lowest-common-denominator set of safeguards, potentially undermining higher protections in stronger jurisdictions; it may reduce regulatory experimentation and learning through trial and error.

  • Stability concerns arise: international cooperation frameworks may be breached; sanctions and enforcement mechanisms are necessary to sustain convergence.

  • The issue of divergent development levels across countries is central: AI's impacts may be unevenly distributed, potentially widening global and domestic inequalities; what works for developed countries may not fit developing countries’ needs.

  • A one-size-fits-all global standard may hinder tailored responses to different economic, social, and cultural contexts; some AI risks and applications require context-specific regulation.

  • The authors advocate a model of regulatory co-opetition: combine some convergent baselines (for transparency, basic rights protections, and a level playing field) with ongoing national/regional experimentation and tailored policies where beneficial.

  • A nuanced approach is recommended: identify which areas benefit from convergence and which would benefit from diversity; use AI itself to help identify regulatory areas that warrant harmonization versus those that require competition.

  • Overall, convergence is desirable insofar as it provides robust protections and stability, but it should not foreclose valuable regulatory experimentation or ignore disparities among countries.

Conclusion

  • Regulators worldwide have prioritized AI on strategic agendas to harness benefits while mitigating risks; this creates a global governance dynamic that combines protection with innovation.

  • A broad regulatory toolbox exists to shape AI development and deployment toward trustworthy outcomes; however, AI’s diversity and domain-specificity require tailored, holistic policy approaches.

  • The current trend shows partial convergence at the global level (through OECD, G20, UNESCO, CAHAI, and standard-setting bodies), alongside ongoing national and regional innovation in regulation.

  • The paper argues for a model of regulatory co-opetition: a pragmatic blend of competition and cooperation that leverages convergence where beneficial and preserves room for national/differentiated approaches where appropriate.

  • The ultimate objective is a coherent, enforceable regulatory framework for AI that safeguards fundamental rights, democracy, and the rule of law while enabling responsible AI innovation.

Key terms and concepts

  • Race to AI vs. Race to AI Regulation: competition in AI development vs. competition/convergence in AI governance.

  • Trustworthy AI: a framework (often described by EU AI HLEG) requiring AI to be lawful, ethical, and robust; components include transparency, accountability, privacy, and non-discrimination.

  • Four modalities of regulation (Lessig): law, social norms, the market, and architecture/design.

  • Enabling vs. Protecting regulation: enabling topics include subsidies, incentives, and migration policies; protecting topics include transparency, redress, and ethical guidelines.

  • Regulated domains (privacy, data protection, IP, tort, competition, health, procurement, consumer protection) interact with AI governance.

  • Regulatory co-opetition: a blended approach where cooperation and competition coexist to foster better regulation and innovation.

  • AI effect: the phenomenon that as AI technologies become normalized, they may lose their perceived “intelligence.”

  • First-mover advantage (regulatory): the benefits a jurisdiction gains by being early to adopt a regulatory framework.

  • Externalities: cross-border or global effects of regulation (positive or negative) that influence other states.

Notable examples and references (conceptual)

  • EU: AI strategy and Trustworthy AI guidelines; AI-specific regulation planned as of early 2021.

  • GDPR: a technology-neutral regulation focusing on data protection, often cited as a model for broader convergence.

  • OECD AI Principles (2019): ethics-based guidance for robust, safe, fair, and trustworthy AI.

  • G20 Ministerial Statement on Trade and Digital Economy (2019): endorsement of AI ethics principles.

  • CAHAI (Council of Europe): exploring a legal framework for AI grounded in human rights, democracy, and the rule of law.

  • ISO/IEC, IEEE, and ITU standard-setting efforts: arenas for regulatory convergence and leadership in AI standards.

  • Roughly €400 billion (4 × 10^11 EUR) in cumulative added value to GDP by 2030 is sometimes cited as the amount at stake if AI is inadequately regulated; the figure illustrates the economic stakes of timely and effective AI governance.