Property & Casualty

Setting a Global Standard | Comprehensive Artificial Intelligence Regulation

The EU AI Act: What It Does

The AI Act establishes a regulatory framework for artificial intelligence systems, setting standards based on the nature of their application and the level of risk they pose.

Prohibited Practices

At its strictest, the Act prohibits AI uses that infringe on fundamental rights, particularly privacy.

These include:

  • Biometric surveillance in public places, such as facial recognition or gait analysis used to identify and track individuals without their consent—practices that raise serious concerns about privacy and civil liberties.
  • Social scoring systems, which evaluate individuals based on their behavior, actions or personal traits to determine access to services and benefits or to impose penalties. Such systems are widely criticized for fostering discrimination and violating human rights.

Transparency

The Act also emphasizes transparency in the development and deployment of AI technologies by:

  • Promoting systems that are explainable, accountable and accessible to human understanding—moving away from opaque, “black box” models.
  • Requiring clarity around an AI system’s design, the data it processes and the logic behind its decisions.

Through these requirements, the Act aims to build trust in AI, ensure fairness, support regulatory compliance and enable the identification and mitigation of potential biases.

Risk Classification of AI Systems

The AI Act adopts a tiered, risk-based framework to regulate AI, categorizing systems into four distinct classes based on their potential impact:

  • Minimal risk: These systems pose little to no threat to users or society. An example is an AI algorithm that recommends videos based on viewing history.
  • Limited risk: This category includes foundation models—large-scale AI systems trained on extensive and diverse datasets. They serve as the backbone for many generative AI applications, such as chatbots that assist users on websites. While generally safe, they require transparency and disclosure to ensure responsible use.
  • High risk: AI systems in this class carry significant potential to affect health, safety or fundamental rights. Examples include algorithms used in hiring or employee performance evaluations, where biased outcomes could lead to discrimination or unfair treatment.
  • Unacceptable risk: These uses are deemed incompatible with EU values and are strictly prohibited. They include AI systems designed to manipulate behavior or infringe on rights.

Penalties and Global Reach

The AI Act imposes strict financial penalties for serious violations, with fines reaching up to 7% of a company’s total global annual revenue (for example, a company with €10 billion in annual revenue could face a fine of up to €700 million). Its provisions apply not only to EU-based organizations but also to companies outside the EU that offer AI services or products within its borders. Because AI technologies have been in use for over half a century, the regulation may also affect existing systems already deployed across various industries.

Insurance Implications of AI-Driven Risks

Triggering Events

AI technologies introduce a range of potential exposures that can trigger insurance claims, including:

  • Manipulated and falsified information: Deepfake videos and voice cloning can enable security breaches and sophisticated social engineering attacks.
  • Hallucinations and misinformation: AI-generated outputs that are false or misleading may lead to liability for directors, officers and professionals who rely on them in decision-making.
  • Privacy violations: Sharing sensitive data with AI systems may breach contracts, privacy laws or regulatory obligations.
  • Intellectual property infringement: AI-generated content can unlawfully replicate or misuse protected assets such as images, code, music, trademarks or personal identifiers used in training datasets.
  • Model bias: Systematic errors in AI models can produce discriminatory or unfair outcomes, exposing organizations to reputational and legal risk.
  • False advertising (AI washing): Misrepresenting AI capabilities or minimizing associated risks may result in regulatory scrutiny or consumer claims.

These risks can lead to financial loss, legal liability, property damage or even bodily injury—implicating a broad spectrum of insurance coverage. Relevant policies may include Cyber Liability, Directors & Officers (D&O), Errors & Omissions (E&O), Media Liability, Employment Practices Liability (EPL), Products Liability and General Liability.

To address these emerging exposures, AI-specific insurance products have been developed. Brown & Brown brokers can assist organizations in identifying and securing coverage tailored to their unique AI risk profile.

The Evolving Landscape of AI Regulation

AI governance is evolving rapidly across jurisdictions. While the Biden administration introduced federal compliance rules scheduled to take effect on May 15, 2025, those regulations were rescinded by the current administration. However, an Executive Order outlining guiding principles for AI development remains in force. The Trump administration has emphasized AI competitiveness as a strategic priority for maintaining U.S. leadership in technology, making comprehensive federal legislation unlikely in the near term.

In the absence of federal mandates, individual states have begun to act. Colorado has passed legislation modeled after the EU’s AI Act, and other states are expected to follow suit.

Internationally, regulatory momentum is building. UK authorities are pursuing sector-specific AI rules, while the European Union is advancing a unified legal framework that applies across all industries, regulated or not. The EU is also reforming liability standards for AI systems and AI-enhanced products, aiming to simplify the process for victims to seek compensation.

Globally, experts have identified over 70 jurisdictions with draft AI legislation under review. As the pace of AI innovation accelerates, regulatory frameworks will continue to expand, shaping how organizations develop and deploy AI technologies. Risk professionals must remain vigilant, ensuring that their risk transfer strategies and management programs evolve in step with this dynamic regulatory environment.

Christopher Keegan

Senior Managing Director

Britt Eilhardt

Managing Director