Buyer's Playbook

EU AI Act: What Business Owners Building AI Products Must Know

By Riya Thambiraj - 11 min read

What Matters

  • The EU AI Act is a law, not a guideline - penalties reach 35M euros or 7% of global revenue, whichever is higher
  • Your AI product's obligations depend on its risk classification - most business AI falls into "limited risk" (transparency) or "high risk" (strict controls)
  • High-risk AI systems need risk management, human oversight, data governance, technical documentation, and conformity assessments
  • The law applies to any AI product available to EU users - regardless of where your company is based
  • Most requirements take effect August 2026 - but prohibited AI practices have been banned since February 2025

In February 2025, the EU banned a set of AI practices outright. Social scoring systems. Emotion recognition in workplaces and schools. Real-time biometric surveillance in public spaces (with limited law enforcement exceptions). Manipulation techniques that exploit vulnerabilities.

These aren't future provisions. They're already law.

The rest of the EU AI Act rolls out through 2027, creating the world's first regulatory framework for artificial intelligence. And like GDPR, it applies to any company whose AI products reach EU users - regardless of where the company is headquartered.

If you're building an AI product that serves European markets, the compliance clock is already ticking.

TL;DR
The EU AI Act is the world's first law specifically regulating AI systems. It classifies AI into four risk tiers: Unacceptable (banned), High-Risk (strict requirements), Limited Risk (transparency obligations), and Minimal Risk (no specific requirements). Most business AI falls into Limited or High Risk. High-risk AI needs risk management, human oversight, data governance, and conformity assessments. The law applies to any AI available to EU users. Penalties reach 35M euros or 7% of global revenue. Prohibited practices are already banned; most other requirements take effect August 2026.

The Risk Classification System

The EU AI Act doesn't regulate all AI the same way. It uses a risk-based approach with four tiers. Your obligations depend on which tier your AI system falls into.

Tier 1: Unacceptable Risk (Banned)

These AI practices are prohibited in the EU as of February 2025:

  • Social scoring - Government or private systems that evaluate people based on social behavior or predicted personality traits, leading to detrimental treatment
  • Exploitation of vulnerabilities - AI that manipulates people through their age, disability, or social/economic situation
  • Real-time biometric identification in public spaces - With narrow law enforcement exceptions
  • Emotion recognition in workplaces and education - AI systems that infer emotions of employees or students
  • Untargeted facial image scraping - Building facial recognition databases from internet or CCTV footage
  • Biometric categorization by sensitive attributes - Classifying people by race, political opinions, religious beliefs, sexual orientation

If your AI product does any of these things, it cannot operate in the EU. Full stop.

Tier 2: High Risk (Strict Requirements)

High-risk AI systems are allowed but subject to mandatory compliance requirements. Your AI is high-risk if it falls into one of these categories (from Annex III):

Biometrics - Remote biometric identification, biometric categorization, emotion recognition (where not banned)

Critical infrastructure - AI managing safety of road traffic, water supply, gas, heating, electricity

Education - AI determining access to education, evaluating learning outcomes, monitoring student behavior during exams

Employment - AI used in recruitment (CV screening, interview evaluation), making termination or promotion decisions, task allocation based on behavior, monitoring and evaluation of performance

Access to essential services - Credit scoring and creditworthiness assessment, risk assessment for insurance pricing, evaluating eligibility for public benefits

Law enforcement - Risk assessment of individuals, polygraph or emotion detection, evidence evaluation, crime prediction

Migration and border control - Risk assessment of irregular migration, asylum application evaluation, border security monitoring

Administration of justice - AI assisting judicial decisions, alternative dispute resolution

The integration trap

If you integrate an LLM or AI model into a product that makes or supports decisions in any of these categories, the integrated system is high-risk - even if the underlying model on its own is general-purpose. A general-purpose chatbot carries only transparency obligations. That same chatbot screening job applications is high-risk.
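The decision logic behind the tiers can be sketched as a lookup on deployment context rather than on the model itself. This is a minimal illustration, not an official taxonomy: the use-case names and tier sets below are simplified, hypothetical stand-ins for the Act's Annex III categories.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # Annex III use cases
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

# Simplified, illustrative keyword sets - not the Act's full lists
PROHIBITED_USES = {"social_scoring", "workplace_emotion_recognition"}
HIGH_RISK_USES = {"recruitment", "credit_scoring", "exam_proctoring",
                  "insurance_pricing", "border_control"}

def classify(use_case: str, interacts_with_humans: bool) -> RiskTier:
    """Classify by deployment context, not by the underlying model."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The same chatbot model lands in different tiers depending on use:
assert classify("customer_support", interacts_with_humans=True) is RiskTier.LIMITED
assert classify("recruitment", interacts_with_humans=True) is RiskTier.HIGH
```

The point of the sketch: the function's first argument is the deployment context, which is why swapping the same model between products changes its tier.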

Tier 3: Limited Risk (Transparency Obligations)

AI systems that interact with people have transparency obligations:

  • Chatbots and virtual assistants - Must inform users they're interacting with AI
  • AI-generated content - Must be labeled as AI-generated (deepfakes, synthetic text, AI images)
  • Emotion recognition and biometric categorization - Must inform subjects they're being analyzed (where not banned)

Most consumer-facing AI products fall here. The core requirement is disclosure - don't pretend AI is human.

Tier 4: Minimal Risk (No Specific Requirements)

AI systems that don't fall into the above categories have no specific obligations under the Act. Examples: spam filters, AI-powered search, game AI, inventory optimization.

Even minimal-risk AI is encouraged to follow voluntary codes of conduct. But there's no legal requirement.

What High-Risk AI Systems Must Do

If your AI is classified as high-risk, here are the mandatory requirements:

Risk Management System (Article 9)

You must establish and maintain a risk management system throughout the AI system's lifecycle. This includes:

  • Identifying and analyzing known and foreseeable risks
  • Estimating and evaluating risks that emerge during use
  • Adopting risk management measures (design changes, human oversight, training data improvements)
  • Testing the system to identify the most appropriate risk management measures

This isn't a one-time assessment. It's a continuous process that runs from development through deployment and updates.

Data Governance (Article 10)

Training, validation, and testing data must meet quality standards:

  • Relevant, representative, and as error-free as possible
  • Account for the specific geographic, contextual, and behavioral setting where the system will be used
  • Examine and address potential biases in the data
  • If special category data (race, health, political opinions) is necessary for bias detection, specific safeguards apply

For business owners: This means your AI vendor needs to document where training data comes from, how it was cleaned, what biases were tested for, and how representative it is of your user base.
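A minimal sketch of what that documentation and representativeness testing might look like in practice. All field names, group labels, and thresholds here are hypothetical - the Act prescribes outcomes (documented, representative, bias-tested data), not a data format.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Article 10-style data documentation (fields are illustrative)."""
    source: str
    collection_period: str
    cleaning_steps: list = field(default_factory=list)
    biases_tested: list = field(default_factory=list)

def representation_gaps(train_shares: dict, user_shares: dict,
                        tolerance: float = 0.10) -> list:
    """Flag groups whose share of the training data differs from the
    deployed user base by more than `tolerance` (absolute)."""
    return [group for group in user_shares
            if abs(train_shares.get(group, 0.0) - user_shares[group]) > tolerance]

card = DatasetRecord(source="public CV corpus (hypothetical)",
                     collection_period="2022-2024",
                     biases_tested=["gender", "age"])

# Training data skews young relative to the actual user base:
gaps = representation_gaps({"18-34": 0.70, "35-54": 0.25, "55+": 0.05},
                           {"18-34": 0.40, "35-54": 0.40, "55+": 0.20})
```

A real pipeline would run checks like this per protected attribute and record the results in the technical documentation.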

Technical Documentation (Article 11)

Before placing a high-risk AI system on the market, you must create technical documentation that demonstrates compliance. This includes:

  • General description of the AI system and its intended purpose
  • Development process description
  • Design specifications and system architecture
  • Data requirements and data governance measures
  • Monitoring, functioning, and control measures
  • Risk management system documentation
  • Changes made during the lifecycle

Record-Keeping (Article 12)

High-risk AI systems must have automatic logging capabilities. Logs must record:

  • Operating period of each use
  • Input data or reference database used
  • The results/outputs produced

Logs must be retained for a period appropriate to the system's intended purpose - at least six months, unless applicable EU or member state law requires otherwise.
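A sketch of what that automatic logging could look like. The record fields mirror the bullets above; the class name and JSON format are assumptions, since the Act specifies what to capture, not how.

```python
import json
import time

class DecisionLogger:
    """Sketch of Article 12-style automatic logging (format is an assumption)."""

    def __init__(self):
        self.records = []

    def log(self, input_ref: str, output: str) -> dict:
        record = {
            "timestamp": time.time(),   # marks the operating period of each use
            "input_ref": input_ref,     # input data / reference database used
            "output": output,           # result produced by the system
        }
        self.records.append(record)
        return record

    def export(self) -> str:
        # Serialize for retention per EU or member state requirements
        return json.dumps(self.records)

logger = DecisionLogger()
logger.log(input_ref="cv_batch_2025_06", output="shortlisted")
```

In production this would write to tamper-evident, durable storage rather than an in-memory list.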

Transparency and User Information (Article 13)

High-risk AI systems must be designed to allow deployers to interpret output and use it appropriately. Users must receive:

  • Provider identity and contact details
  • System characteristics, capabilities, and limitations
  • Performance metrics (accuracy, robustness, cybersecurity)
  • Foreseeable misuse scenarios and their risks
  • Human oversight measures
  • Expected lifetime and maintenance requirements

Human Oversight (Article 14)

High-risk AI systems must be designed to allow human oversight. The humans overseeing the system must be able to:

  • Understand the AI system's capabilities and limitations
  • Monitor the system's operation
  • Interpret the system's output correctly
  • Override or reverse AI decisions
  • Interrupt the system (a "stop button")

For business owners: This means you can't deploy a fully autonomous AI for high-risk decisions. A human must be in the loop with the ability and authority to override the AI.
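The oversight capabilities above can be sketched as a thin control layer around the model. Everything here - the names, the 0.1 review band, the stop flag - is illustrative; the Act requires the capabilities, not any particular design.

```python
class OverseenSystem:
    """Sketch of Article 14-style oversight: a human can review borderline
    cases, override any decision, and interrupt the system entirely."""

    def __init__(self):
        self.stopped = False  # the "stop button" state

    def decide(self, ai_score: float, threshold: float = 0.5) -> dict:
        if self.stopped:
            raise RuntimeError("system interrupted by human operator")
        decision = "approve" if ai_score >= threshold else "reject"
        # Borderline scores are routed to a human instead of auto-deciding
        needs_review = abs(ai_score - threshold) < 0.1
        return {"decision": decision, "needs_human_review": needs_review}

    def override(self, record: dict, human_decision: str) -> dict:
        # A human decision always replaces the AI output
        return {**record, "decision": human_decision, "overridden": True}

    def stop(self) -> None:
        self.stopped = True

system = OverseenSystem()
result = system.decide(0.9)   # clear case: no mandatory review
```

The override path matters as much as the review band: the human must have both the interface and the authority to reverse the system.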

Accuracy, Robustness, and Cybersecurity (Article 15)

High-risk AI systems must achieve appropriate levels of:

  • Accuracy - Performance metrics declared and achievable
  • Robustness - Resilient to errors, faults, and inconsistencies
  • Cybersecurity - Protected against attempts to exploit vulnerabilities, including data poisoning, model manipulation, and adversarial inputs
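One of the simplest robustness probes is a perturbation test: small changes to the input should not flip the decision. A toy sketch with a stand-in model - real testing would also cover the adversarial and data-poisoning scenarios the bullet above names.

```python
import random

def predict(features: list) -> str:
    """Toy stand-in for a high-risk model (illustrative only)."""
    return "approve" if sum(features) >= 1.0 else "reject"

def robustness_check(features: list, noise: float = 0.01,
                     trials: int = 100) -> bool:
    """Return True if small random perturbations never flip the decision."""
    baseline = predict(features)
    for _ in range(trials):
        perturbed = [x + random.uniform(-noise, noise) for x in features]
        if predict(perturbed) != baseline:
            return False
    return True
```

Cases that fail a check like this - inputs near the decision boundary - are exactly the ones worth routing to human review.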

How the EU AI Act Affects Your Product

| Product Type | Risk Classification | Key Obligations |
| --- | --- | --- |
| Customer-facing chatbot | Limited Risk | Disclose that users are interacting with AI |
| AI-generated content tool | Limited Risk | Label output as AI-generated |
| AI-powered recommendation engine | Minimal Risk | No specific obligations (voluntary code of conduct encouraged) |
| AI recruitment/screening tool | High Risk | Full compliance: risk management, data governance, human oversight, conformity assessment |
| AI credit scoring system | High Risk | Full compliance plus financial regulation requirements |
| AI medical diagnostic support | High Risk | Full compliance plus medical device regulation requirements |
| AI-powered internal analytics | Minimal Risk (usually) | No specific obligations unless it affects employees in high-risk ways |
| AI content moderation | Limited Risk | Transparency about AI involvement |

General-Purpose AI Models (GPAI)

The Act has separate provisions for general-purpose AI models (like GPT-4, Claude, Gemini). If you're building on top of these:

As a deployer (integrating GPAI into your product):

  • You're responsible for compliance at the application level
  • If your use case is high-risk, you bear the high-risk obligations for the integrated system
  • The GPAI provider handles model-level obligations; you handle deployment-level obligations

The practical split: OpenAI/Anthropic/Google handle model transparency and documentation. You handle how the model is used in your product, including risk management, human oversight, and user-facing transparency.
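That split can be made concrete in code: the model call is the provider's side of the line, and everything the wrapper adds - disclosure, pointers to documentation - is the deployer's. The function names and response fields are hypothetical, not any vendor's actual API.

```python
def call_model(prompt: str) -> str:
    """Stand-in for a GPAI provider's API. The provider handles
    model-level obligations (transparency, documentation)."""
    return f"(model answer to: {prompt})"

def answer_user(prompt: str) -> dict:
    """The deployer's layer: add the deployment-level transparency
    obligations around the raw model output."""
    return {
        "text": call_model(prompt),
        "ai_disclosure": "This response was generated by an AI assistant.",
        "provider_docs": "see the model provider's transparency documentation",
    }

reply = answer_user("What's the status of my order?")
```

If the same wrapper fed a high-risk workflow such as CV screening, the deployer's layer would also need the logging and human-oversight mechanisms described earlier.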

The Compliance Timeline

| Date | What Takes Effect |
| --- | --- |
| February 2025 | Prohibited AI practices banned. AI literacy requirements for providers and deployers. |
| August 2025 | Rules for GPAI models. Governance structures established. |
| August 2026 | Most high-risk AI obligations. Conformity assessments. Transparency obligations for limited risk. Enforcement structures. |
| August 2027 | High-risk AI obligations for systems embedded in regulated products (medical devices, automotive, aviation, toys). |

If you're building a high-risk AI product now: you have until August 2026 to achieve compliance. That's roughly 16 months. Given that conformity assessments, risk management systems, and technical documentation take time to prepare, starting now is not optional.

What EU AI Act Compliance Costs

Costs vary significantly by risk classification:

| Risk Level | Compliance Cost | Why |
| --- | --- | --- |
| Minimal Risk | Near zero | No specific requirements |
| Limited Risk | $5K-$15K | AI disclosure UX, content labeling systems |
| High Risk | $50K-$300K+ | Risk management system, data governance, technical documentation, conformity assessment, human oversight mechanisms, ongoing monitoring |

For high-risk systems, the largest cost drivers are:

  • Conformity assessment ($20K-$100K) - Third-party or self-assessment depending on category
  • Technical documentation ($15K-$50K) - Complete system documentation meeting Article 11 requirements
  • Risk management system ($15K-$50K) - Continuous risk identification, evaluation, and mitigation
  • Human oversight design ($10K-$30K) - Building override capabilities, interpretability features, monitoring dashboards

Questions to Ask Your Development Partner

  1. "What risk classification does our AI product fall under?" - They should be able to map your product to the Act's risk categories and explain why. If they're not familiar with the classification system, they haven't worked under the EU AI Act before.

  2. "How do you build human oversight into high-risk AI systems?" - Look for: interpretable outputs, override mechanisms, confidence scoring, fallback to human review for edge cases, and monitoring dashboards.

  3. "How do you handle AI transparency requirements?" - For limited risk: clear AI disclosure in the UI. For high risk: system documentation, performance metrics, and user-facing explanations of AI capabilities and limitations.

  4. "What's your approach to data governance for AI training data?" - They should describe data documentation, bias testing, representativeness analysis, and ongoing data quality monitoring.

  5. "Have you built AI products for the EU market under the AI Act?" - The Act is new, so deep experience is rare. But awareness of the requirements, a plan for conformity assessment, and experience with GDPR (which shares similar compliance rigor) are good indicators.

Your EU AI Act Compliance Checklist

Before development starts:

  • Classify your AI system's risk level (Unacceptable, High, Limited, or Minimal)
  • Verify your AI doesn't fall into prohibited categories
  • Identify if you're a provider (building the AI) or deployer (using someone else's AI in your product)
  • If high-risk: begin risk management system documentation
  • If using GPAI models: review provider's compliance documentation

During development (high-risk systems):

  • Document training data sources, quality measures, and bias testing
  • Build human oversight mechanisms (override, interrupt, monitoring)
  • Build logging and record-keeping for all system operations
  • Build interpretability features (users must understand AI output)
  • Implement accuracy and robustness testing
  • Implement cybersecurity measures against adversarial attacks
  • Create technical documentation per Article 11

During development (limited-risk systems):

  • Build AI disclosure UX ("You're interacting with an AI assistant")
  • Build AI content labeling for generated text, images, or media
  • Document the AI system's capabilities and limitations

Before launch:

  • Complete conformity assessment (high-risk systems)
  • Register in the EU AI database (high-risk systems)
  • Prepare user-facing documentation on AI capabilities and limitations
  • Ensure human oversight operators are trained
  • Verify all transparency obligations are met
  • Establish post-market monitoring plan (high-risk systems)
  • Establish incident reporting procedures

The EU AI Act is the first of its kind, but it won't be the last. The US, UK, China, and other jurisdictions are developing their own AI regulations. Building with compliance in mind now saves you from retrofitting as more countries follow the EU's lead.

Frequently asked questions

Does the EU AI Act apply if my company is based outside the EU?

Yes, if your AI system is used by people in the EU. The law has extraterritorial scope similar to GDPR - it applies to providers who place AI systems on the EU market, deployers who use AI systems within the EU, and providers/deployers outside the EU whose AI system output is used in the EU. If EU residents interact with your AI product, the Act likely applies.
