Europe’s AI dilemma – innovate or regulate?

The European Union has adopted the Artificial Intelligence Act, a regulation designed to govern the use of AI without stifling innovation. How will this legislation affect Europe’s competitiveness? In this article, we examine the balance between regulation and technological progress.

The aim of this entry is to review the legal framework defined by the European Union (EU) to regulate the use of artificial intelligence. As in previous entries, alongside an overview of the EU’s regulatory proposal, I shall add a touch of personal opinion, informed by more than a decade of experience working with AI algorithms.

Looking back, this is not the first time humanity has faced such a dilemma: regulation or innovation. Indeed, in antiquity, great Greek philosophers such as Plato and Aristotle studied the relationship between regulatory systems and freedom—a problem strikingly similar to the one the EU now faces. Both Plato and Aristotle recognised that laws are tools to protect and promote liberty, provided they are designed with wisdom and justice. In what follows, we will see how the EU’s approach is similar: to define a legal framework that allows progress without sacrificing the fundamental values of society. So, without further delay, let us examine how the EU has defined its regulatory framework for AI.

The impact of AI in Europe: Regulations for a safe and trustworthy future

Artificial intelligence (AI) is rapidly transforming our society, from industrial automation to personalised medicine. Yet, alongside its benefits, AI also raises challenges in terms of safety, fundamental rights, and regulation. To address these, the European Union (EU) has developed the Artificial Intelligence Act, aiming to establish harmonised rules and ensure that AI is used ethically and safely. Before continuing, I would like to thank my colleague R. D. Olivaw for his assistance in summarising the Artificial Intelligence Act.

Why is AI regulation necessary in Europe?

The EU recognises the potential of AI to improve key sectors such as healthcare, education, industry, and the environment. However, it has also identified issues requiring appropriate regulation:

  • Risks to safety and fundamental rights: AI can produce erroneous or biased decisions, affecting both citizens and businesses.
    Understandable, and something already noted in this blog. From a legal perspective, however, such errors could have serious consequences, and without a legal framework, there would be no clear accountability.
  • Lack of adequate oversight: Authorities need tools to monitor and enforce compliance with AI regulations.
    In theory this sounds good, but in practice it is difficult. At present there are no tools capable of auditing large language models (LLMs) due to their inherent complexity.
  • Legal uncertainty: Companies and developers need clarity in the rules to foster innovation.
    Indeed, a legal framework is needed to encourage innovation. But at what cost?
  • Public distrust of AI: Without clear regulation, citizens might hesitate to adopt AI-based technologies.
    Personally, I do not believe lack of regulation creates distrust to the extent that people will stop using AI tools. Today, the vast majority of people do not even understand how a simple classification algorithm works, yet they use much more complex systems because they are useful. In my view, distrust is more likely to arise from the indiscriminate use of AI by governments or large corporations in ethically questionable applications.
  • Market fragmentation: Different regulations across EU countries hinder the creation of a unified digital market.
    This, in truth, is largely a commercial necessity, aimed at preventing regulatory competition between EU member states.

The EU’s approach: Risk classification

To ensure regulation does not stifle innovation, the EU has adopted a risk-based approach, classifying AI systems into four main categories:

  • High-risk AI: Applications with significant impact on safety or fundamental rights, such as facial recognition in public spaces or systems used in recruitment and healthcare, will face strict regulations.
  • Limited-risk AI: Transparency requirements will apply so users are aware when they are interacting with AI.
  • Minimal-risk AI: Applications such as virtual assistants or spam filters will face lighter oversight.
  • Prohibited AI: Systems that manipulate human behaviour in harmful ways or enable mass surveillance will be banned outright.

From my perspective, the lines separating these risk categories are rather blurred.
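
To make the taxonomy more concrete, below is a minimal sketch in Python of how the four tiers might be represented programmatically. It is purely illustrative: the Act assigns risk through detailed legal criteria, and the example applications and their tier assignments here are my own assumptions based on the summary above.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories described by the EU AI Act."""
    PROHIBITED = "prohibited"  # e.g. harmful behavioural manipulation
    HIGH = "high"              # e.g. facial recognition in public spaces
    LIMITED = "limited"        # transparency obligations apply
    MINIMAL = "minimal"        # e.g. spam filters, virtual assistants

# Illustrative mapping only: the Act defines risk via detailed criteria,
# not a simple lookup of the application domain.
EXAMPLE_TIERS = {
    "cv screening for recruitment": RiskTier.HIGH,
    "medical diagnosis support": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
    "harmful behavioural manipulation": RiskTier.PROHIBITED,
}

def tier_for(application: str) -> RiskTier:
    """Return the assumed risk tier for a known example application."""
    if application not in EXAMPLE_TIERS:
        raise ValueError(f"No illustrative tier assigned for: {application!r}")
    return EXAMPLE_TIERS[application]

if __name__ == "__main__":
    for app, tier in EXAMPLE_TIERS.items():
        print(f"{app}: {tier.value}")
```

Even in this toy form, the blurriness shows through: whether a chatbot counts as “limited-risk” or a recruitment tool as “high-risk” depends entirely on how its context of use is judged, not on anything intrinsic to a lookup table.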

Option 3+ was chosen

The European Commission compared different regulatory approaches and concluded that Option 3+ was the most appropriate, offering a balance between mandatory regulation and voluntary flexibility. Its objective is to mitigate AI risks without obstructing technological development.

Key features of Option 3+

  1. Strict Regulation for High-Risk AI
    • Mandatory requirements for high-risk AI systems such as facial recognition, recruitment, or medical diagnosis.
    • These systems must undergo ex-ante conformity assessments and safety audits before commercial release.
    • Requirements include transparency, human oversight, risk management, and technical robustness (a hypothetical checklist along these lines is sketched after this list).
      Words such as transparency and safety audit sound impressive, but are they truly applicable? Given the complexity of LLMs, I do not believe such transparency is achievable today. For example, one cannot determine precisely which data were used in their training. A case much discussed in the press recently is that of DeepSeek-R1 and its responses about the events in Tiananmen Square, Beijing, June 1989. Repeatedly questioning DeepSeek-R1 about those events revealed that the model has been trained not to answer such questions. But I ask myself: what other events might have been omitted? Could the model have been trained to provide partisan answers in certain contexts? The potential social, political, and educational implications of biased LLMs quickly become apparent.
  2. Voluntary Codes of Conduct for Low-Risk AI
    • For applications not considered high-risk, voluntary codes of conduct are encouraged to promote good practice.
    • Again, the boundary between low- and high-risk applications remains blurred.
    • There will be no penalties for non-adoption, but companies that comply may benefit from regulatory and market incentives.
    • For this measure to be effective, the incentives for those adhering to voluntary codes must be substantial.
  3. Oversight and Enforcement
    • Establishment of a European AI Board to coordinate implementation of the regulation among Member States.
    • I consider this essential, and I hope it proves agile enough to adapt the regulation when needed.
    • Ex-ante verification for high-risk AI and post-market monitoring mechanisms to detect non-compliance.
    • This part of the regulatory framework is crucial and will require significant financial investment to build the infrastructure and mechanisms needed for supervision and scrutiny.
  4. Fostering Innovation and the Single Market
    • Creation of “regulatory sandboxes” where start-ups and SMEs can experiment with AI under regulatory supervision.
    • A fantastic idea, though limited in scope – why restrict these testing environments to small companies and start-ups?
    • The aim is also to avoid market fragmentation by ensuring homogeneous rules across the EU.
    • This is certainly interesting: all EU members will share the same regulatory conditions. But how will this compare to the frameworks and conditions present in other countries?
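
Returning to the obligations for high-risk systems listed in point 1, it can help to imagine them as a pre-release checklist. The sketch below is a hypothetical illustration of the four requirement areas named above (transparency, human oversight, risk management, technical robustness); the field names and the pass/fail logic are my own assumptions, not anything prescribed by the regulation.

```python
from dataclasses import dataclass, fields

@dataclass
class ConformityChecklist:
    """Hypothetical ex-ante checklist for a high-risk AI system."""
    transparency_documented: bool      # training data and behaviour documented
    human_oversight_in_place: bool     # a human can intervene or override
    risk_management_plan: bool         # risks identified and mitigated
    technical_robustness_tested: bool  # accuracy and security testing done

    def failures(self) -> list[str]:
        """Names of any unmet requirements."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def ready_for_market(self) -> bool:
        """Release is only conceivable once every requirement is met."""
        return not self.failures()

if __name__ == "__main__":
    check = ConformityChecklist(
        transparency_documented=False,  # as argued above, often the hard part
        human_oversight_in_place=True,
        risk_management_plan=True,
        technical_robustness_tested=True,
    )
    print("Ready for market:", check.ready_for_market())  # False
    print("Unmet requirements:", check.failures())        # ['transparency_documented']
```

The awkward part, of course, is the first field: as discussed above, for LLMs it is far from clear how “transparency documented” could ever be verified in practice.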

Comparison with other options considered

Option | Description | Why was it discarded?
Option 1 | Voluntary labelling for trustworthy AI. | Did not guarantee safety; relied on the goodwill of companies.
Option 2 | Sector-specific (ad hoc) regulation. | Would fragment the market and create legal uncertainty.
Option 3 | Regulation only for high-risk AI. | Did not encourage good practice for low-risk AI.
Option 4 | Mandatory regulation for all AI. | Imposed excessive regulatory burdens, discouraging innovation.
Option 3+ | Regulation for high-risk AI + voluntary codes of conduct for low-risk AI. | Not discarded; this was the option selected.

Option 3+ was selected because it protects fundamental rights without imposing excessive restrictions on innovation.

Expected impact of Option 3+

🔹 For companies and start-ups:

  • Greater legal clarity and reduced regulatory costs for low-risk AI.
  • Easier access to European markets without regulatory barriers.

🔹 For citizens:

  • Stronger protection against high-risk AI in healthcare, employment, and public safety.
  • Greater protection on paper, though I insist it is difficult to determine whether a model has been created with bias. How will we know if models are functioning as expected?
  • Transparency and oversight in the use of AI in automated decision-making.
  • Transparency and oversight sound excellent—but how do they expect to achieve such transparency? Once an LLM with billions of parameters is trained, true transparency is impossible.
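
To ground what “oversight in automated decision-making” could mean at the engineering level, here is a minimal sketch of a decision audit log. This is my own illustration of one plausible mechanism; nothing here is prescribed by the Act, and the model identifier and schema are hypothetical.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One audited automated decision (illustrative schema)."""
    timestamp: float
    model_id: str         # which model (and version) produced the decision
    inputs: dict          # the features the decision was based on
    decision: str         # the outcome communicated to the citizen
    human_reviewed: bool  # whether a person checked the outcome

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append the record to a JSON Lines audit trail for later inspection."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    timestamp=time.time(),
    model_id="credit-scorer-v2",  # hypothetical model identifier
    inputs={"income": 32000, "existing_loans": 2},
    decision="loan_rejected",
    human_reviewed=False,
))
```

Note that such a log records what a model decided, not why; it supports oversight, but it does not touch the deeper transparency problem raised above.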

🔹 For the EU as an economic bloc:

  • Strengthening Europe’s leadership in ethical and responsible AI.
  • Honestly, I do not know how other countries intend to regulate AI, but I find it hard to believe the EU framework will prove more flexible than those proposed by Donald Trump’s United States or the United Kingdom.
  • Creation of a single market for trustworthy AI that avoids conflicting national regulations.
  • In practice, this seems only to ensure fair competition between companies within EU member states, as they will all be subject to the same regulations. However, as we noted earlier, when entering the global market, these firms will face much tougher competition from companies developing products under more flexible regulatory systems.

Implementation

On 2 February 2025, prohibitions on certain AI systems came into effect, alongside requirements for AI literacy in Europe. Over the coming months, further provisions of the Act will be phased in. The timeline for implementation of the different measures under the EU AI Act can be found here.

Conclusion

The European Union stands at a critical crossroads: balancing the need to regulate the use of artificial intelligence with the risk that overly strict rules could drive talent and businesses towards countries with more lenient frameworks. In this context, Option 3+ emerges as a hybrid model that seeks to reconcile both challenges: it strictly regulates high-risk AI to safeguard fundamental rights, while maintaining flexibility for low-risk AI through voluntary codes of conduct. This approach aims to foster innovation and sustainable growth of the European AI ecosystem without compromising safety or ethical values.

In the coming years, it will be crucial to observe how this regulation is implemented and what impact it has on both industry and European society—setting a precedent in the global governance of artificial intelligence.

“Laws were created so that the weak might enjoy the same rights as the strong.” – Demosthenes

TOOL

ChatGPT - version: GPT-4

PROMPTS

- Generate a general-audience summary of the following document – https://digital-strategy.ec.europa.eu/en/library/impact-assessment-regulation-artificial-intelligence
- Translate to British English