Introduction

Artificial intelligence (AI) is rapidly reshaping industries, governance, and daily life, raising critical questions about how it should be regulated. Governments worldwide stand at a pivotal crossroads, tasked with balancing innovation against ethical and legal safeguards as AI systems continue to evolve.

The European Union (EU) has positioned itself as a global pioneer in AI governance through the AI Act, the world’s first legal framework designed to regulate AI according to its risks. Paired with the Digital Services Act, the EU aims to ensure that AI systems are safe, ethical, and trustworthy, with innovation taking a secondary role to these core values.

Across the Atlantic, U.S. policymakers have raised alarms about the potential downsides of overregulation. The U.S. approach prioritizes limiting regulatory constraints, often justifying this stance with the need to foster innovation and the goal of solidifying technological leadership.

In his recent speeches at the Munich Security Conference and the AI Action Summit in Paris, U.S. Vice President J.D. Vance criticized European AI regulations, arguing that excessive regulation could stifle innovation, undermine global competitiveness, and pose risks to free speech and democracy.

This paper analyses European AI regulation, points out the fundamental differences between the EU and U.S. approaches to AI governance, and evaluates the critiques raised by J.D. Vance.

 

The EU’s Regulation Approach

In response to the specific challenges and harmful effects caused by AI systems, the EU has set itself the goal of establishing clear regulations as part of its digital strategy and thus creating optimal conditions for the development and use of AI systems.

The Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act) is part of a broader package of policy measures to support the development of human-centric and trustworthy AI, which also includes the AI Innovation Package, the deployment of AI Factories and the Coordinated AI Plan. Together, these measures aim to protect health, safety and fundamental rights, including democracy, the rule of law and environmental protection, and to boost AI adoption, investment and innovation across the EU.

The AI Act is the world’s first comprehensive legal framework for AI. It is based on a proposal by the European Commission of April 21, 2021, and introduces a risk-based approach.

The new law defines obligations for providers and users of AI systems based on the respective risk level. AI systems are analysed according to their area of application and classified according to the risk they pose to users. The higher the risk, the stricter the requirements for transparency, security and compliance.

Designed to mitigate these risks, the AI Act defines four risk levels for AI systems: unacceptable, high, limited and minimal risk.

Unacceptable AI Practices

All AI systems that are considered a clear threat to the health, safety, livelihood and fundamental rights of people (protected legal interests) are prohibited. The prohibition applies to the placing on the market, the putting into service or the use of AI systems. According to Article 5 AI Act, the following AI practices are therefore classified as unacceptable:

  1. Malicious AI-based manipulation and deception
  2. Malicious AI-based exploitation of vulnerabilities
  3. Social scoring
  4. Assessment or prediction of the risk of individual criminal offences based solely on profiling
  5. Untargeted scraping of the internet or CCTV footage to create or enhance facial recognition databases
  6. Emotion recognition in workplaces and educational institutions
  7. Biometric categorisation to derive certain protected characteristics
  8. Remote biometric identification in real time for law enforcement purposes in publicly accessible spaces

High-Risk AI Systems

AI use cases that may pose serious risks to the protected legal interests are categorised as high-risk. These high-risk use cases include, according to Article 6, Annex I, III AI Act:

  1. AI safety components in critical infrastructures (e.g. transport) whose failure could jeopardise the lives and health of citizens
  2. AI solutions that are used in educational institutions and that can determine access to education and the course of a person’s professional life (e.g. assessment of examinations)
  3. AI-based safety components of products (e.g. AI application in robotic surgery)
  4. AI tools for employment, worker management and access to self-employment (e.g. CV sorting software for recruitment)
  5. Certain AI use cases used to provide access to essential private and public services (e.g. credit scoring that denies citizens the opportunity to obtain a loan)
  6. AI systems for remote biometric identification, emotion recognition and biometric categorisation (e.g. AI system for retroactive identification of a shoplifter)
  7. AI use cases in law enforcement that can interfere with people’s fundamental rights (e.g. assessing the reliability of evidence)
  8. AI use cases in migration, asylum and border control management (e.g. automated examination of visa applications)
  9. AI solutions for the administration of justice and democratic processes (e.g. AI solutions for the preparation of court judgements)

Provider Obligations for High-Risk AI Systems

Placing such high-risk AI systems on the market or putting them into service is not prohibited, but strict compliance requirements apply. The requirements for providers of high-risk AI systems are listed in Art. 8 to 17 AI Act:

  1. Appropriate risk assessment and mitigation systems
  2. High quality of data sets feeding the system to minimise the risk of discriminatory results
  3. Logging of activities to ensure traceability of results
  4. Detailed documentation containing all necessary information about the system and its purpose to enable authorities to assess its compliance
  5. Clear and appropriate information for the operator
  6. Appropriate human oversight measures
  7. High robustness, cybersecurity and accuracy

Limited-Risk AI Systems

By interacting with individuals or producing content, AI systems may pose specific risks of impersonation or deception. Such AI systems that are unlikely to cause significant harm or violate fundamental rights but pose transparency risks are therefore classified as limited-risk AI systems under the AI Act.

The tasks fulfilled by these systems are of such narrow and limited nature that they pose only limited risks which are not increased through the use of an AI system in a context that is listed as a high-risk use.

Furthermore, the risk is lowered because such AI systems are intended to improve the result of a previously completed human activity, which they are not meant to replace or influence without proper human review.

Transparency Requirements for Limited-Risk AI Systems

Despite the significantly lower risk, EU lawmakers deem it essential that certain disclosure obligations are complied with when using AI systems with transparency risks, to ensure trustworthiness. Users of AI systems such as chatbots should be explicitly informed that they are interacting with a machine in order to enable them to make informed decisions.

Providers of generative AI are obliged to ensure that AI-generated content is recognisable. Certain content, such as deepfakes and texts published to inform the public about matters of public interest, must be clearly and visibly labelled.

Minimal-Risk AI Systems

The AI Act does not introduce rules for AI that is considered to pose minimal or no risk. Most AI systems currently in use in the EU fall into this unregulated category, which includes applications such as AI-enabled video games or spam filters.

The AI Act entered into force on August 1, 2024; its provisions apply gradually and will be fully applicable by August 2026, with some exceptions. To facilitate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative aimed at supporting future implementation, working with stakeholders and inviting AI providers and vendors from Europe and beyond to fulfil the key obligations of the AI Act ahead of time.

AI Regulation and the Digital Services Act

Another increasingly crucial measure in the era of AI-generated media was introduced by the Digital Services Act (DSA), which came into force on November 16, 2022.

The DSA establishes obligations for all digital services operating in the EU, with particularly stringent requirements for “Very Large Online Platforms” (VLOPs) such as Instagram, Snapchat, TikTok, and YouTube, as well as “Very Large Online Search Engines” (VLOSEs) like Google and Bing. These services must take stronger measures to protect users’ rights, ensure safety, and curb the spread of illegal or harmful content. In particular, platforms are required to assess and mitigate risks related to key societal concerns such as the integrity of elections, public safety, users’ mental and physical health, and gender-based violence. The law aims to protect users, especially children and young people under the age of 18, from online risks such as harassment, bullying, false information, illegal content, and identity deception.

The DSA establishes mechanisms for the reporting and prompt removal of illegal content, which includes harmful or deceptive AI-generated material (e.g., deepfakes, disinformation). The AI Act requires AI providers to label synthetic content and ensure transparency, while the DSA enforces platform accountability, e.g. by requiring VLOPs and VLOSEs to assess risks related to AI-driven content moderation or recommendation systems and to put mitigation measures in place.

This framework gained renewed attention during the controversy surrounding the Romanian presidential election, which was annulled in December 2024 amid allegations of disinformation campaigns. U.S. Vice President J.D. Vance criticized the EU’s response at the Munich Security Conference as disproportionate and questioned the resilience of European democracies, stating that “if your democracy can be destroyed with a few hundred thousand dollars of digital advertising from a foreign country, then it wasn’t very strong to begin with.” The European Commission invoked the DSA to initiate a preservation order targeting TikTok and other platforms, aiming to secure evidence and prevent further manipulation in upcoming EU elections. This case underscores the EU’s regulatory strategy: prioritizing digital sovereignty and systemic risk prevention, even in the face of political sensitivities.

Impact on Businesses

Businesses worldwide must comply with EU regulations if they wish to access the European market. The AI Act can therefore be expected to exert a global influence comparable to that of earlier EU digital legislation such as the GDPR.

Against this backdrop, the AI Act acknowledges the unique challenges faced by small and medium-sized enterprises (SMEs) and startups and contains specific provisions designed to alleviate regulatory burdens for these businesses.

To facilitate compliance, SMEs benefit from simplified conformity assessments for high-risk AI systems and the opportunity to test their models within regulatory sandboxes. This allows experimentation under more flexible conditions. Additionally, targeted financial support, including EU funding programs, financial incentives, and specialized training resources is available to help SMEs navigate regulatory requirements without putting technological advancement on hold. This approach not only supports individual SMEs but is a key factor to promote fair competition and drive innovation.

Furthermore, the AI Act introduces extended transition periods, granting SMEs and startups more time to achieve full compliance. By calibrating penalties according to the size and financial capacity of the entity, smaller businesses will not have to fear excessive burdens.

Finally, by establishing a single, harmonized regulatory framework, the Act enhances legal certainty and facilitates cross-border expansion within the EU, providing businesses with a stable foundation for growth in the AI sector. The AI Act’s proactive and comprehensive approach may set a precedent that could inspire other countries to adopt similar regulatory frameworks, thus shaping the future of international AI policy.

 

The US Perspective: Hands-off Approach

As of now, the United States lacks a comprehensive federal legal framework specifically governing the development or deployment of artificial intelligence. In January 2025, President Trump signalled a deregulatory stance by issuing the Executive Order on Removing Barriers to American Leadership in Artificial Intelligence, which effectively revokes President Biden’s earlier Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI. The new directive instructs federal agencies to revise or repeal any policies, regulations, or administrative actions enacted under the Biden administration that are deemed incompatible with the goal of reinforcing America’s global dominance in AI. While numerous initiatives had already been implemented under Biden’s executive order, the scope and depth of the reversals initiated by the Trump administration remain to be seen. As U.S. Vice President J.D. Vance underlined during his speech at the Artificial Intelligence Action Summit in Paris on February 11, 2025,

“Now, with the President’s recent executive order on AI, we’re developing an AI Action Plan that avoids an overly precautionary regulatory regime while ensuring that all Americans benefit from the technology and its transformative potential,” inviting other governments to follow that model.

In his recent speeches in Paris and at the Munich Security Conference on February 14, 2025, Vance offered further insights into the U.S. approach to AI regulation. The speeches mainly consisted of a sharp critique of prevailing regulatory approaches to AI and digital governance, particularly in the European Union. The critique is accompanied by a warning to other countries planning to follow the same path.

In contrast, the EU presents its framework as a model of global best practice, encouraging other countries to adopt similar standards without directly cautioning against alternative approaches. This reflects a broader geopolitical divergence: while the U.S. uses warnings to discourage alignment with stricter regulatory models, the EU promotes its approach as a global standard through normative influence rather than direct pressure.

The False Dilemma of Innovation vs. Regulation

With his opening statement, “I’m not here (…) to talk about AI safety, I’m here to talk about AI opportunity,” Vance immediately established a binary between caution and progress. He warned that regulation would mean “paralyzing one of the most promising technologies we have seen in generations,” framing any comprehensive restriction as a threat to innovation.

However, innovation and regulation are not inherently in conflict. Vance’s perspective ignores the fact that well-constructed regulation is precisely what enables safe and ethical innovation. Without basic safeguards, AI systems risk reinforcing discrimination, undermining democracy, and threatening fundamental rights. Allowing businesses to develop dangerous AI systems that infringe on individual rights, exacerbate societal inequalities, raise significant ethical and privacy concerns, and threaten national security yields no real benefit for society.

Risks of Market Supremacy of Dominant Players at the Expense of SMEs and Start-Ups

Vance argued that excessive regulatory caution prevents the growth of innovative start-ups, ultimately benefiting dominant firms rather than protecting consumers or democracy. He positioned the United States as a leader in AI development, asserting that “the AI future is not going to be won by handwringing about safety. It will be won by building.” In contrast, he cautioned that Europe could face technological stagnation by prioritizing precaution over experimentation. His remarks specifically targeted European regulations such as the General Data Protection Regulation (GDPR) and the DSA. He argued that these frameworks disproportionately burden smaller companies:

Meanwhile, for smaller firms, navigating the GDPR means paying endless legal compliance costs or otherwise risking massive fines. Now, for some, the easiest way to avoid the dilemma has been to simply block EU users in the first place. Is this really the future that we want, ladies and gentlemen?

Vance further warns that the primary voices demanding stricter regulation are often the dominant market players themselves.

Yet Recital 13 of the GDPR itself states that, to ensure a consistent level of protection for natural persons throughout the EU and to prevent divergences hampering the free movement of personal data within the internal market, a regulation is necessary to provide legal certainty and transparency for all economic operators, including micro, small and medium-sized enterprises. The assertion that smaller businesses face insurmountable regulatory challenges under these legal frameworks, potentially leading them to block EU users as a means of avoiding compliance, overlooks that the GDPR, as well as the AI Act, calls for the specific needs of micro, small and medium-sized enterprises to be taken into account. Rather than “unfairly benefiting incumbents,” as Vance claims, regulation can help level the playing field by ensuring that all actors, including dominant firms, are held to the same standards of accountability.

Call for an “Ideological-Bias Free AI”

Vance’s remarks reflect a recurring tension in the current U.S. discourse on AI regulation: the conflation of factual disinformation with ideological censorship.

On the one hand, he warns against “onerous international rules” like the DSA, arguing that requirements to remove “so-called misinformation” amount to ideological policing and threaten free expression. Vance seeks to evoke fear that European legislation will suppress legitimate opinion and silence political dissent. On the other hand, he insists that “AI must remain free from ideological bias,” without offering a clear distinction between bias prevention and content moderation based on factual integrity. Instead, he appears to be referring to the idea that AI systems might reflect or enforce a particular political agenda, especially one associated with liberal or progressive ideologies, that may not align with the views of the current U.S. government.

In reality, private individuals spreading misinformation by mistake are the lesser concern for European lawmakers. Ordinary users rarely have the reach required to influence public opinion at scale. The real threat is posed by coordinated disinformation campaigns, which deliberately exploit generative AI tools at scale to manipulate public opinion. While the removal of opinion-based content may raise concerns about free speech, deliberate disinformation is not protected under the right to free expression. Freedom of speech is intended to support open discourse and democratic deliberation, not to shield intentionally deceptive manipulation.

Vance promises that “American AI will not be co-opted into a tool for authoritarian censorship.” He expresses his trust in the American people, stating that they are capable “to think, to consume information, to develop their own ideas, and to debate with one another in the open marketplace of ideas,” and emphasizing that “you can’t force people what to think, what to feel, or what to believe.” Meanwhile, the measures of the AI Act and the DSA serve to protect public discourse from manipulation and deception, not to enforce ideological conformity. The labelling of AI-generated content, in particular, is not a form of censorship but a transparency requirement aimed at preserving democratic decision-making in an information environment increasingly saturated with synthetic content. Vance’s opposition to content moderation would naturally make it harder for users to identify the origin and reliability of information, undermining informed digital discourse in the very ecosystem he claims to protect. Without mechanisms like content labelling and the enforcement of transparency requirements, which are designed to guard against actual bias and AI-powered disinformation, AI systems risk reinforcing dominant narratives or spreading harmful falsehoods.

At the Munich Security Conference, Vance deepened his criticism by stating that efforts to combat false information are increasingly seen by many Americans as

“old, entrenched interests hiding behind ugly, Soviet-era words like ‘misinformation’ and ‘disinformation,’ who simply don’t like the idea that somebody with an alternative viewpoint might express a different opinion, or, God forbid, vote a different way, or, even worse, win an election.”

This rhetoric presents regulatory efforts not as tools for protecting democratic discourse, but as political weapons used by elites to silence dissent.

Yet this perspective overlooks that misinformation and disinformation, unlike political disagreement, are defined by their factual inaccuracy and demonstrable harm. Conflating these concepts distorts public understanding of the relationship between false information and free speech, and may shape attitudes that make it harder for lawmakers to establish effective regulations targeting deception; Vance, for his part, offers no framework for distinguishing between harmful falsehoods and protected speech. Regulatory efforts such as the DSA aim not to suppress dissent, but to ensure transparency and accountability in digital spaces that are increasingly shaped by algorithmic amplification and synthetic content.

Ironically, Vance himself acknowledges that

“(…) the extraordinary prospect of a new industrial revolution (…) will never come to pass if overregulation deters innovators (…) nor will it occur if we allow AI to become dominated by massive players looking to use the tech to censor or control users’ thoughts.”

This statement reveals a clear contradiction in his position: while he largely rejects regulation as a barrier to innovation, he simultaneously warns against the very outcome that lack of regulation enables, namely, the concentration of power in the hands of dominant actors who can shape discourse to their advantage. Without regulatory oversight, there are few checks on powerful entities, making the risks of ideological influence and misuse all the more acute.

It is true that SMEs and start-ups may face burdens that are hard to navigate when complying with complex regulation. However, this is a challenge acknowledged by both the GDPR and the AI Act, which include provisions to account for the specific needs of micro, small, and medium-sized enterprises. Moreover, this difficulty does not justify abandoning robust regulation altogether. A complete absence of oversight would allow powerful incumbents to shape markets and narratives unchecked. Yet Vance offers no solution to this dilemma, leaving it unclear how he would prevent misuse or ensure accountability while rejecting regulatory oversight.

Furthermore, the AI Act introduces measures to prevent biased decisions in other contexts by explicitly addressing high-risk AI use cases to ensure fairness and transparency. For example, AI solutions used in educational institutions for assessing exams or determining access to educational opportunities can have significant long-term consequences for individuals. The AI Act aims to prevent these systems from perpetuating biases that might unfairly disadvantage certain groups of people. Similarly, AI tools used in employment, such as CV sorting software, or in determining access to essential services like credit scoring, are also regulated under the AI Act. These applications have the potential to affect individuals’ professional and financial opportunities, and the Act seeks to minimize bias in these decision-making processes. Additionally, social scoring, the assessment of individuals’ behaviour or actions based on personal data, is prohibited outright to protect individuals’ privacy and prevent discriminatory practices. Thus, the AI Act creates a balanced approach that promotes innovation while safeguarding fundamental rights and ensuring that AI technologies are used ethically and responsibly in critical areas of life.

Content Regulation as a Danger for Democracy

In his remarks, Vance frames any regulation of speech or content, especially in the context of AI, as a direct threat to democracy. He argues that dismissing or silencing people’s concerns, or restricting media and elections, destroys democratic values. Vance’s position is clear: restricting or controlling opinions, no matter the source or political affiliation, poses a danger to the integrity of democracy, as it undermines the essential democratic principle that “the voice of the people matters.”

However, this stance oversimplifies the relationship between content regulation and free expression. Vance emphasizes that to believe in democracy is to recognize the wisdom and voice of each citizen. Yet it is crucial that the opinions and concerns expressed are based on accurate information, something that becomes increasingly difficult in an online space where facts can easily be manipulated or altered, potentially undermining the very democratic principles he claims to defend.

While he is correct in stating that “it is the business of democracy to adjudicate (…) big questions at the ballot box”, the efficacy of this process hinges on the electorate being properly informed. Informed decision-making is vital to ensuring that democratic choices are made based on truth rather than disinformation.

Frameworks like the AI Act and DSA do not aim to suppress opinions or electoral participation. Instead, they seek to protect democracy by ensuring transparency and accountability in the information landscape. By labelling AI-generated content, the regulation does not undermine free speech but rather protects it by making the information ecosystem more transparent, allowing citizens to form their opinions based on reliable information. In this context, Vance’s argument paradoxically neglects the very tools that protect democratic discourse in an age of synthetic information. Without these safeguards, it is precisely unregulated or unchecked speech that can distort democracy, not the regulation itself.

By framing these measures as efforts to suppress people’s voices and dismiss their concerns, he posits that the true essence of democracy lies in the unfettered expression of opinions, even when expressed by influential figures from outside the country or when based on made-up facts.

Conclusion

In conclusion, Vance’s commentary on AI regulation largely constitutes ideological posturing rather than a substantive engagement with European regulatory frameworks. His emphasis on the dangers of overregulation neglects to identify specific concerns within the AI Act or propose detailed policy alternatives. While advocating for minimal regulation to foster American technological leadership, he overlooks the potential risks of an unregulated AI landscape, such as ethical violations, misuse, and the concentration of power in the hands of a few dominant tech firms.

While start-ups may face challenges under regulatory frameworks, the AI Act aims to strike a balanced approach that safeguards fundamental rights without undermining innovation or excluding businesses from global competition. The prohibition of certain sensitive AI uses and the labelling of AI-generated content, rather than inhibiting free expression, foster transparency, thereby protecting individuals from disinformation and manipulation. This allows them to form fact-based opinions, safeguarding both freedom of thought and speech and preserving the integrity of public discourse, especially in the context of politics and elections, and ultimately preserving democratic values.

Given the transformative potential of AI, the regulatory decisions made today will have profound long-term implications for both technological progress and the protection of democratic values. Vance’s stance, however, overlooks the necessity of this balance, favouring a broadly innovation-first approach that fails to adequately address the ethical dilemmas and societal risks posed by AI.

 

 


The Transatlantic Divide on AI Regulation: U.S. Innovation vs. European Precaution?

An Analysis of the Regulatory Landscape and J.D. Vance’s Recent Speeches on AI Governance

Meaghan Sophie Ulbrich

Fundación Instituto Internacional de Tecnología y Derecho Digital