Brussels AI Deal Stalled—EU Pact Falters, Friday Promises Fresh Chance

European Decision-Makers Fail to Reach Consensus on the AI Act

After a marathon session in Brussels that lasted more than 22 hours, the European Parliament and national governments were unable to forge a political agreement on the EU’s proposed AI legislation.

Key Developments

  • Marathon talks: Negotiations stretched through the night and into the following day, yet no consensus materialized.
  • Diplomatic impasse: The two sides deadlocked, each holding onto priorities that could not be reconciled.
  • Impact on the legislative timetable: The lack of agreement threatens to delay the implementation of AI safeguards across the Union.
  • Future steps: Officials are expected to revisit the core issues in a series of follow‑up talks to prevent further setbacks.

AI Regulation Takes Shape After Intense Negotiations

From Wednesday afternoon to Thursday afternoon, European lawmakers and national governments engaged in a marathon discussion covering more than 23 agenda items. The session spanned the night and morning, underscoring the sheer complexity of the matter at hand.

The Global First for AI Governance

Dubbed the world’s inaugural comprehensive effort to regulate artificial intelligence, the draft Act seeks to balance innovation with ethical principles and environmental stewardship. Big Tech, start‑ups, and civil society groups have all weighed in, aware that Brussels’ legislation could ripple outward and shape government initiatives worldwide.

Key Issues in the Negotiations

  • Foundation Model Regulation: Discussions tackled how to supervise the large language models powering chatbots such as OpenAI’s ChatGPT.
  • Biometric Identification: Proposed exemptions for real‑time biometric use in public spaces were examined.
  • Negotiating Dynamics: Talks between Members of the European Parliament (MEPs) and Member State governments highlighted the challenge of reconciling diverse national viewpoints.

Progress Across the Board

European Commissioner for the Internal Market, Thierry Breton, stated, “Lots of progress was made over the past 22 hours on the AI Act.” While lawmakers emphasized significant strides, they deliberately withheld specifics to preserve confidentiality.

Looking Ahead: Friday’s Round‑Two Session

The European Parliament and the Council will revisit the Act at 9:00 a.m. Friday to cement a provisional agreement. Spain, currently presiding over the Council, must reconcile the spectrum of perspectives from all 27 member states.

Should consensus be achieved, the draft—spanning hundreds of pages—will undergo a final refinement before a consolidated version is presented once more to the Parliament. Following parliamentary approval, the Council will grant the definitive green light.

Implementation Timeline

After navigating the legislative process, the law will enter a grace period before it becomes fully enforceable in 2026.

AI: An ever-evolving technology

European AI Act: A New Blueprint for Responsible Innovation

What It Is and Why It Matters

The AI Act, unveiled in April 2021, seeks to align the rapid growth of artificial intelligence with human‑centered values and ethical safeguards. At its core it is a product‑safety regime, imposing a tiered set of obligations that firms must satisfy before their AI offerings reach consumers across the EU’s single market.

Risk‑Based Structure

The regulation introduces a pyramid of risk categories that group AI systems according to the threat they pose to safety and fundamental rights (a brief illustrative sketch follows the list):

  • Minimal risk – exempt from extra rules, streamlining deployment.
  • Limited risk – required to meet basic transparency mandates.
  • High risk – subject to extensive controls spanning the entire lifecycle, from market entry onward, including mandatory updates.
  • Unacceptable risk – automatically forbidden throughout the EU.
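
To make the tiering concrete, the sketch below models the four categories as a simple lookup from risk tier to a plain-language summary of duties. It is an illustrative assumption for readers who think in code, not text drawn from the regulation, and the duty descriptions are deliberately simplified.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers described in the draft Act (simplified labels)."""
    MINIMAL = "minimal"            # no extra rules
    LIMITED = "limited"            # basic transparency duties
    HIGH = "high"                  # full lifecycle controls
    UNACCEPTABLE = "unacceptable"  # banned outright


# Simplified, plain-language summaries of duties per tier -- illustrative only,
# not legal text from the regulation.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose that users are interacting with an AI system"],
    RiskTier.HIGH: [
        "complete a conformity assessment",
        "register in the EU database",
        "sign a declaration of conformity",
        "affix the CE mark before market availability",
        "accept post-market oversight, including mandatory updates",
    ],
    RiskTier.UNACCEPTABLE: ["may not be placed on the EU market"],
}


def duties_for(tier: RiskTier) -> list[str]:
    """Return the simplified duty list associated with a risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    # Example: print the (simplified) duties of a high-risk system.
    for duty in duties_for(RiskTier.HIGH):
        print("-", duty)
```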

High‑Risk Mandates

Systems identified as high risk—such as AI‑supported recruitment tools, robotic surgery assistants, and automated university grading—must:

  • Complete a conformity assessment.
  • Register in the EU database.
  • Sign a declaration of conformity.
  • Affix the CE mark before consumer availability.

After launch, national authorities oversee these products, and violations can trigger fines running into the millions of euros.

Unacceptable‑Risk Prohibitions

AI used for social scoring of citizens or to exploit socio‑economic vulnerabilities will be banned throughout the EU.

Evolution Driven by Foundation Models

The surge of dialogue systems beginning with OpenAI’s ChatGPT spurred a fresh debate in late 2022. Google’s Bard, Microsoft’s Bing Chat, and Amazon’s Q followed, all powered by foundation models trained on vast data sets (text, images, code, speech).

New Compliance Layer for Foundation Models

Because the original draft lacked rules for such systems, lawmakers added a new article with extensive obligations:

  • Guarantees that the AI respects fundamental rights.
  • Mandates energy‑efficiency benchmarks.
  • Requires disclosure that content is AI‑generated (illustrated in the sketch below).
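
As a rough illustration of the transparency point, the sketch below attaches a human-readable AI-generation notice to model output. The wrapper, field names, and notice format are hypothetical assumptions for illustration only, not requirements spelled out in the Act, which leaves the exact labelling mechanism open.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class GeneratedContent:
    """Hypothetical wrapper pairing model output with an AI-generation disclosure."""
    text: str
    model_name: str
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def with_disclosure(self) -> str:
        # Append a human-readable notice; real labelling schemes (metadata,
        # watermarking, etc.) vary and are not prescribed here.
        return f"{self.text}\n\n[AI-generated by {self.model_name} at {self.generated_at}]"


# Example usage with placeholder values.
sample = GeneratedContent(
    text="Draft summary of today's meeting.",
    model_name="example-foundation-model",
)
print(sample.with_disclosure())
```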

Political Dynamics and Soft‑Touch Preference

Parliamentary pressure met scepticism from member states, many of which favour a lighter‑touch, more gradual legislative approach.

Conclusion

The AI Act represents a landmark effort to harmonise innovation with responsibility. Its adaptive structure and recent updates for foundation models display Europe’s commitment to steering AI toward a future that prioritises safety, accountability, and human welfare.

Biometrics continues to be contentious

Europe Grapples with Regulating AI Foundation Models in the Wake of a Divisive Proposal

Three Leading Economies Present a Controversial Draft

Germany, France, and Italy, three of the EU’s largest economies and all G7 members, have introduced a draft that places mandatory self‑regulation, via codes of conduct, at the heart of standards for foundation models. While the proposal aims to streamline oversight, it has elicited intense backlash from legislators, threatening to stall the entire law‑making effort.

Key Points of the Proposed Framework

  • Foundation models would adhere to publicly available codes of conduct.
  • Compliance would be enforced through internal governance rather than external mandates.
  • Europe‑wide coordination would rely on agreement between the co‑legislators rather than top‑down directives.

Ongoing Negotiations & Emerging Consensus

During a recent session on Thursday, the co‑legislators discussed preliminary terms. Though the specifics were kept under wraps, the dialogue created a sense of optimism that a compromise could be reached. Nonetheless, this optimism is tempered by lingering disputes on critical issues.

Unresolved Issue: Real‑Time Remote Biometrics in Public Spaces

One of the most polarising subjects is the use of real‑time remote biometric tools, such as facial recognition, in public spaces. These technologies analyse biological signatures (facial structure, iris patterns, fingerprints) to identify individuals, often without their explicit consent.

Positions of Policymakers

  • Lawmakers champion a complete prohibition of real‑time biometric identification, especially when it involves sensitive traits like gender, race, ethnicity, or political affiliations.
  • Member states argue that selective exemptions are essential, permitting law enforcement professionals to investigate criminal activities and pre‑empt national security threats.

While the debate remains unresolved, the priority for all stakeholders is to ensure that any regulations strike a balance between technological progress and the rights of individuals.

Next Steps for the EU Legislative Process

As discussions continue, negotiators will strive to address the biometric controversy and refine the foundation‑model framework. The goal is to produce a robust yet flexible policy that safeguards European citizens without stifling innovation.