The EU Artificial Intelligence Act establishes a harmonised legal framework governing the placing on the market, putting into service and use of AI systems within the European Union. As a directly applicable Regulation, it creates uniform rules across all Member States and avoids divergent national approaches to AI regulation.

Its scope is deliberately broad. It applies to providers placing AI systems on the EU market, deployers using AI systems within the Union, and importers and distributors. It also applies to organisations established outside the EU where the output of the AI system is used within the Union. For businesses operating internationally, this extraterritorial reach is a material consideration.

The objectives of the Act are to ensure that AI systems placed on the EU market are safe and respect fundamental rights, to support the proper functioning of the internal market, and to foster trust in AI technologies.

The Regulation is structured around a risk-based model. Certain AI practices are prohibited outright. The most significant obligations apply to high-risk AI systems, including those used in areas such as recruitment, creditworthiness assessment, access to essential services and certain safety components of regulated products. Providers of high-risk systems must implement a risk management system, comply with data governance requirements, prepare detailed technical documentation, ensure appropriate human oversight and complete conformity assessment procedures before placing systems on the market. Post-market monitoring obligations also apply.

Other AI systems are subject primarily to transparency obligations, particularly where individuals interact directly with an AI system or where synthetic content is generated. For businesses developing AI-enabled digital products, correct classification under the Regulation is the key legal starting point, as it determines the scope of regulatory obligations.

EU Regulations Affecting Digital Products

The EU Artificial Intelligence Act does not replace existing EU legislation. It operates alongside established regulatory regimes that already govern digital and technology products.

Where AI forms part of a physical product subject to EU harmonisation legislation, compliance with the AI Act must be integrated into the relevant conformity assessment and CE marking process. Product safety law therefore remains central.

Where personal data is processed, the General Data Protection Regulation continues to apply in full. The AI Act does not displace data protection obligations. In practice, high-risk AI systems may require both compliance with the AI Act’s lifecycle risk management requirements and a data protection impact assessment under the GDPR.

Digital service providers must also consider obligations under the Digital Services Act, particularly where AI systems are used in recommender systems, advertising technologies or content moderation tools. Where in scope, cybersecurity obligations under the NIS2 Directive and related legislation reinforce requirements relating to robustness and resilience.

For digital product businesses, regulatory analysis should therefore be holistic. AI compliance sits within a broader framework of product safety, data protection, digital services and cybersecurity law.

Compliance and Governance Considerations

The AI Act introduces structured and enforceable obligations, particularly for providers of high-risk AI systems. These obligations are operational and extend across the lifecycle of a system.

Providers must be able to demonstrate that risks have been identified and mitigated, that training and testing data meet quality standards, and that technical documentation is sufficient to allow regulatory scrutiny. Human oversight measures must be designed into the system, and logging and monitoring capabilities must be maintained.

Transparency requirements apply to certain AI systems even where they are not classified as high risk. Clear disclosure to users may require adjustments to user interfaces and customer communications.

From a governance perspective, the Act reflects a strong emphasis on accountability. Organisations will need clear allocation of responsibility, documented decision-making processes and senior oversight of AI risk. Regulators are likely to expect evidence of structured internal controls rather than informal compliance practices.

How Businesses Can Prepare for the EU Artificial Intelligence Act

Preparation should begin with a structured internal assessment of AI systems in development and in use. This assessment should also capture any third-party AI components embedded in products or workflows (including foundation model APIs, hosted models and vendor AI tools), as downstream compliance may depend on supplier documentation, update notices and contractual cooperation. Without a clear understanding of what systems exist and how they are deployed, it is not possible to assess regulatory exposure accurately.
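The inventory step described above can be sketched as a simple structured record. This is purely illustrative: the field names (`system_name`, `deployment_context`, `supplier_docs_on_file` and so on) are assumptions for the sketch, not terms drawn from the Regulation.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    # Illustrative inventory entry; field names are assumptions, not AI Act terms.
    system_name: str
    owner_team: str            # internal team accountable for the system
    deployment_context: str    # e.g. "recruitment screening", "chat support"
    third_party_components: list = field(default_factory=list)  # vendor models, APIs
    supplier_docs_on_file: bool = False  # documentation received from suppliers

inventory = [
    AISystemRecord(
        system_name="cv-screening-tool",
        owner_team="HR Technology",
        deployment_context="recruitment screening",
        third_party_components=["hosted foundation model API"],
    ),
]

# Flag entries whose downstream compliance may depend on supplier
# documentation that has not yet been obtained.
missing_docs = [
    r.system_name
    for r in inventory
    if r.third_party_components and not r.supplier_docs_on_file
]
print(missing_docs)  # → ['cv-screening-tool']
```

Even a minimal record like this makes the gap visible: systems built on third-party components without supplier documentation on file are exactly those whose regulatory exposure cannot yet be assessed accurately.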

Once identified, systems should be reviewed against the definitions and classification criteria set out in the Regulation. This classification exercise should be documented and revisited as products evolve.

Businesses should also review governance arrangements to ensure that AI compliance responsibilities are clearly assigned across legal, compliance, product and technical teams. Regulatory requirements should be incorporated into product design and development processes at an early stage, particularly in relation to data governance, documentation standards and oversight mechanisms.

Finally, EU Artificial Intelligence Act preparation should be aligned with existing GDPR, product safety and cybersecurity compliance programmes. A coordinated approach will reduce duplication and support a defensible compliance framework.

General Purpose AI Models

The AI Act also introduces obligations for General Purpose AI (GPAI) models, including foundation models that may be integrated into multiple downstream systems. Providers must maintain technical documentation, provide information to downstream developers to support compliance, and implement policies addressing copyright in training data.

Where a GPAI model presents systemic risk, additional requirements apply, including model evaluation, risk mitigation and incident reporting.

EU Artificial Intelligence Act Timelines

The AI Act entered into force in August 2024, with obligations applying in stages.

  • February 2025 – rules on prohibited AI practices apply
  • August 2025 – obligations for GPAI models begin to apply
  • August 2026 – most provisions, including certain high-risk system requirements, become applicable
  • August 2027 – additional requirements apply to certain AI systems embedded in regulated products

Determining Risk

Organisations must assess whether an AI system falls within the high-risk category under the Regulation. This requires considering the system’s intended purpose and whether it falls within the categories listed in the Act. Examples include systems used in recruitment, credit scoring, biometric identification and access to essential services. Where classification is uncertain, the assessment and rationale should be documented.
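The classification exercise above can be sketched as follows. This is an illustrative outline only: a real assessment requires legal analysis of the Regulation's full classification criteria, not a keyword lookup, and the category set below is an assumed subset of the high-risk areas mentioned in the text.

```python
# Assumed, partial set of high-risk purposes for illustration only.
# The Regulation's actual criteria are more detailed and context-dependent.
HIGH_RISK_PURPOSES = {
    "recruitment",
    "credit scoring",
    "biometric identification",
    "access to essential services",
}

def provisional_classification(intended_purpose: str) -> str:
    """Return a provisional label; either way, the assessment and
    rationale should be documented and revisited as products evolve."""
    if intended_purpose in HIGH_RISK_PURPOSES:
        return "high-risk"
    return "uncertain: document assessment and rationale"

print(provisional_classification("recruitment"))  # → high-risk
```

Note that the sketch deliberately returns "uncertain" rather than "not high-risk" for anything outside the listed purposes, mirroring the point that uncertain classifications should be documented rather than silently dismissed.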

Governance Structure and Documented Controls

The AI Act emphasises organisational accountability. Businesses developing or deploying AI systems should establish a governance framework with clear responsibilities for AI compliance. This typically includes documented risk assessments, internal approval processes, human oversight procedures and monitoring mechanisms.

Practical Steps for Businesses

Businesses can begin preparing by:

  • identifying AI systems currently in development or use (including any third-party AI components)
  • classifying those systems under the AI Act risk framework
  • reviewing policies relating to data governance, testing and oversight
  • establishing internal governance and compliance processes

Penalties

The AI Act introduces significant administrative fines. Breaches of the prohibited AI practices may result in fines of up to €35 million or 7% of global annual turnover, whichever is higher. Other violations may lead to fines of up to €15 million or 3% of turnover, while supplying incorrect information to regulators may result in fines of up to €7.5 million or 1% of turnover, in each case whichever is higher.
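The "fixed amount or percentage of turnover, whichever is higher" structure of each tier can be illustrated with a short calculation. The turnover figure below is a made-up example.

```python
def fine_ceiling_eur(fixed_cap_eur: float, turnover_pct: float,
                     global_turnover_eur: float) -> float:
    """Upper bound of an administrative fine: the higher of the fixed
    cap and the stated percentage of global annual turnover."""
    return max(fixed_cap_eur, (turnover_pct / 100) * global_turnover_eur)

# Prohibited-practice tier (€35m or 7%) for a hypothetical business
# with €1bn global annual turnover: 7% of €1bn exceeds €35m.
print(fine_ceiling_eur(35_000_000, 7, 1_000_000_000))  # → 70000000.0
```

For smaller businesses the fixed cap dominates; for large ones the turnover percentage does, which is why the percentage figures are the material exposure for multinational groups.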