The Regulatory Challenge of the Decade

Artificial intelligence has moved from research labs to everyday life with extraordinary speed. AI systems now influence hiring decisions, medical diagnoses, credit scoring, content moderation, and national security — often without the people affected knowing it's happening. Governments around the world are scrambling to build regulatory frameworks for a technology that evolves faster than legislation typically can.

The result is a patchwork of approaches that differ dramatically in philosophy, scope, and enforceability.

The European Union: A Risk-Based Framework

The EU has moved furthest with comprehensive AI legislation. The EU AI Act, which entered into force in 2024, classifies AI systems by risk level:

  • Unacceptable risk: Banned outright — including social scoring by governments and most real-time biometric surveillance in public spaces.
  • High risk: Heavily regulated — including AI used in healthcare, education, employment, and law enforcement. Requires transparency, human oversight, and registration.
  • Limited and minimal risk: Subject to lighter transparency obligations or largely unregulated.

The Act also includes specific rules for general-purpose AI models — the large systems underpinning many applications — requiring technical documentation and transparency, with additional obligations, such as model evaluations and incident reporting, for models deemed to pose systemic risk.

The United States: Sector-by-Sector and Executive Action

The US has not passed comprehensive federal AI legislation. Instead, regulation happens through executive orders, existing sector-specific law (financial regulations, medical device rules), and emerging state-level legislation. California, in particular, has become an active regulatory arena.

This approach gives the US flexibility and speed, but critics argue it creates inconsistency and leaves significant gaps — particularly around the civil rights implications of automated decision-making.

China: Innovation with Party Control

China has developed specific AI regulations targeting particular use cases, including algorithmic recommendation systems, deepfakes, and generative AI services. The regulatory philosophy differs from the West's: the focus is less on protecting individual rights and more on ensuring AI serves state goals and social stability. AI development is a national priority, and regulation is calibrated to avoid hindering it.

The Global Coordination Problem

AI systems are built in one country, trained on data from many, and deployed globally. This creates fundamental challenges for any national regulatory regime:

  • Companies can shift operations to avoid stringent rules — regulatory arbitrage.
  • Harms from AI systems often cross borders in ways that are difficult to attribute and address.
  • Geopolitical competition between the US and China makes deep multilateral AI governance difficult.

International efforts — including through the OECD, the G7's Hiroshima AI Process, and the UN's AI advisory body — have produced principles and voluntary frameworks, but binding global rules remain a distant prospect.

What Effective AI Governance Requires

Experts across the political spectrum tend to agree on a few fundamentals:

  • Transparency requirements so affected people understand when and how AI is being used.
  • Meaningful human oversight for high-stakes decisions.
  • Accountability mechanisms for when AI systems cause harm.
  • Regulatory bodies with genuine technical expertise.

Building all of this, at speed, while the technology continues to develop, is one of the defining governance challenges of the current era.