
EU AI Act 2025: What German Companies Need to Know Now

Jamin Mahmood-Wiebe

[Infographic: EU AI Act risk classification and enforcement timeline]

On August 1, 2024, the EU AI Act (Regulation (EU) 2024/1689) entered into force — the world's first comprehensive regulation for Artificial Intelligence. For companies that develop, deploy, or distribute AI systems within the EU, the implication is clear: compliance is no longer optional. It is mandatory.

This article is not an abstract legal analysis. It provides CTOs, IT leaders, and technical decision-makers with concrete guidance: Which obligations apply when? Which AI systems are affected? And what steps must you take now to remain compliant?

Why the EU AI Act Matters for Your Company

The EU AI Act follows a risk-based approach. Not every AI application is treated the same. What matters is the level of risk a system poses to fundamental rights, safety, and democracy. The critical point: even if you do not develop AI but only deploy it — as a so-called "deployer" — you are subject to obligations.

This applies to:

  • Companies that develop AI systems (providers)
  • Companies that deploy AI systems (deployers)
  • Importers and distributors of AI products in the EU market

If you operate an AI agent for automated document processing, run a chatbot for customer service, or use AI-powered scoring models for credit decisions — the EU AI Act affects you directly.

The Four Risk Categories in Detail

The EU AI Act classifies AI systems into four risk tiers. This classification determines which requirements and obligations apply to providers and deployers.

Unacceptable Risk (Prohibited)

AI systems in this category have been prohibited since February 2, 2025. There is no transition period.

Examples of prohibited systems:

  • Social scoring: Evaluating individuals based on social behavior or personal characteristics by public authorities or companies
  • Manipulative AI: Systems that subliminally manipulate human behavior, potentially causing physical or psychological harm
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrowly defined exceptions)
  • Emotion recognition in the workplace or educational institutions
  • Predictive policing based solely on profiling

Practical relevance for companies: Check whether internal systems perform employee scoring based on behavioral patterns or whether recruitment tools evaluate biometric data. Both scenarios can fall under the prohibition.

High-Risk AI Systems

This is the most heavily regulated category. High-risk systems are subject to extensive pre-market and post-market requirements.

High-risk AI includes:

  • Biometric identification and categorization of natural persons
  • Critical infrastructure: AI in energy, water, and transport management
  • Education and vocational training: Automated grading, exam results, admission decisions
  • Employment and HR management: AI-assisted hiring, promotion, termination
  • Access to essential services: Credit scoring, insurance scoring, social benefits
  • Law enforcement: Risk assessment, evidence evaluation
  • Migration and asylum: Application processing, risk assessment
  • Administration of justice and democratic processes: AI in court proceedings, election influence

Obligations for providers of high-risk AI:

  • Risk management system: Continuous risk assessment throughout the entire lifecycle
  • Data governance: Quality criteria for training, validation, and test data
  • Technical documentation: Complete documentation of the system and its functionality
  • Record-keeping: Automatic logging for traceability
  • Transparency: Clear instructions for use for deployers
  • Human oversight: A design that enables effective human supervision
  • Accuracy and robustness: Demonstrable performance metrics, protection against manipulation
  • Cybersecurity: Protection against attacks and unauthorized access
  • Conformity assessment: Self-assessment or third-party evaluation before market access
  • EU database registration: Entry in the public EU database for high-risk AI

Obligations for deployers of high-risk AI:

  • Deploy in accordance with the provider's instructions for use
  • Ensure human oversight by qualified personnel
  • Monitor system performance and report incidents
  • Conduct a Fundamental Rights Impact Assessment (FRIA) before deployment
  • Inform affected individuals about the use of AI

Limited Risk (Transparency Obligations)

Systems with limited risk are primarily subject to transparency requirements.

Affected systems:

  • Chatbots and virtual assistants: Users must know they are interacting with AI
  • Deepfakes and AI-generated content: Must be labeled as such
  • Emotion recognition systems: Affected persons must be informed (unless falling under the prohibition)

Practical relevance: If you deploy an AI-powered customer service bot, you must clearly communicate to users that they are interacting with AI and not a human. This also applies to AI-generated emails and texts in business communications.
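
To make the labeling duty concrete, here is a minimal sketch of how a customer-service bot could open every session with an AI disclosure. The message structure and wording are our own illustration, not prescribed by the Act:

```python
from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."  # wording is illustrative

@dataclass
class BotMessage:
    text: str
    ai_generated: bool = True  # machine-readable flag for downstream labeling of AI content

def start_conversation(first_reply: str) -> list[BotMessage]:
    """Open every session with a clear AI disclosure before the first answer."""
    return [BotMessage(text=AI_DISCLOSURE), BotMessage(text=first_reply)]
```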

Minimal or No Risk

The majority of AI applications fall into this category. No specific regulatory requirements from the AI Act apply here.

Examples:

  • Spam filters
  • AI-powered search functions
  • Recommendation algorithms (with limitations)
  • AI-based translation tools
  • Automated data analysis without personal decision-making

The EU AI Act explicitly encourages providers in this category to voluntarily adhere to codes of conduct — even though no obligation exists.

Risk Classification Overview

  • Unacceptable: social scoring, manipulative AI, biometric mass surveillance. Fully prohibited since 02/2025.
  • High-risk: AI for hiring decisions, credit scoring, critical infrastructure. Extensive compliance obligations from 08/2026.
  • Limited: chatbots, deepfake generators, emotion recognition. Transparency obligations from 08/2026.
  • Minimal: spam filters, translation tools, recommendation algorithms. No specific obligations; codes of conduct are voluntary.

Timeline: When Obligations Take Effect

The EU AI Act takes effect in phases. The transition periods are clearly defined (a small date check is sketched after the timeline):

  • August 1, 2024: EU AI Act enters into force
  • February 2, 2025: Prohibitions apply: all prohibited AI practices (Chapter II) and AI literacy requirements (Art. 4)
  • August 2, 2025: General-Purpose AI (GPAI): obligations for providers of general-purpose AI models and governance structures
  • August 2, 2026: Main provisions: high-risk AI obligations (Annex III), transparency obligations, penalties
  • August 2, 2027: Extended high-risk obligations: systems under Annex I (products under EU harmonization legislation)
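
As a small illustration, the timeline above can be turned into a lookup that reports which milestones already apply on a given date; the dates come straight from the table:

```python
from datetime import date

# Milestones copied from the timeline above (Regulation (EU) 2024/1689)
MILESTONES = {
    date(2024, 8, 1): "entry into force",
    date(2025, 2, 2): "prohibited practices (Chapter II) and AI literacy (Art. 4)",
    date(2025, 8, 2): "GPAI provider obligations and governance structures",
    date(2026, 8, 2): "high-risk obligations (Annex III), transparency, penalties",
    date(2027, 8, 2): "extended high-risk obligations (Annex I products)",
}

def active_milestones(today: date) -> list[str]:
    """Everything already applicable on `today`, in chronological order."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

print(active_milestones(date(2026, 1, 1)))  # the first three milestones apply
```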

Recommendation: Even though the main obligations for high-risk AI only take effect in August 2026, preparation requires months. Risk classification, documentation, governance structures, and technical adjustments cannot be implemented overnight. Start now.

General-Purpose AI: Special Rules for Foundation Models

Since August 2, 2025, special rules apply to General-Purpose AI Models (GPAI) — models such as GPT-4, Claude, Llama, or Gemini that can be used for various purposes.

Obligations for All GPAI Providers

  • Create and maintain technical documentation
  • Provide information and documentation for downstream providers
  • Comply with the EU Copyright Directive and ensure transparency about training data
  • Publish a sufficiently detailed summary of the content used for training

Additional Obligations for Systemic Risk

GPAI models with systemic risk (threshold: training compute above 10^25 FLOPs) face extended obligations; a rough check against this threshold is sketched after the list:

  • Model evaluation according to standardized protocols
  • Assessment and mitigation of systemic risks
  • Documentation and reporting of serious incidents
  • Cybersecurity protection measures
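
For orientation only: the following back-of-the-envelope check compares an estimated training compute against the 10^25 FLOPs threshold, using the common approximation of roughly 6 FLOPs per parameter per training token for dense transformers. A real determination requires the provider's actual training figures:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs; Art. 51 presumes systemic risk above this

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.1e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD}")
# 6.3e+24 FLOPs -> False (below the threshold in this illustrative case)
```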

Relevance for companies: If you use GPAI models within your AI agents — for example, for process automation — you are not directly subject to GPAI provider obligations as a deployer. However, you must ensure that your AI system provider fulfills these obligations, especially if the overall system is classified as high-risk.

What This Means for AI Agents in Enterprises

AI agents — autonomous systems that independently plan and execute tasks — become particularly relevant under the EU AI Act. The reason: the more autonomously a system operates, the higher the potential regulatory risk.

Risk Classification of AI Agents

The classification of an AI agent depends on its intended purpose, not the technology itself (a mapping sketch follows the table):

  • Agent for internal document classification: minimal risk; voluntary codes of conduct
  • Customer service agent with chat interface: limited risk; transparency obligation (AI labeling)
  • Agent for automated credit scoring: high-risk; full compliance requirements
  • Agent for hiring decisions: high-risk; full compliance requirements
  • Agent for production optimization (non-critical): minimal risk; voluntary codes of conduct
  • Agent for critical infrastructure control: high-risk; full compliance requirements
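
One way to encode this purpose-based classification in an agent platform is a simple lookup. The purpose keys mirror the table above and are purely illustrative:

```python
from enum import Enum

class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

# Purpose keys mirror the table above; the mapping itself is illustrative.
AGENT_RISK = {
    "internal_document_classification": RiskClass.MINIMAL,
    "customer_service_chat": RiskClass.LIMITED,
    "automated_credit_scoring": RiskClass.HIGH,
    "hiring_decisions": RiskClass.HIGH,
    "noncritical_production_optimization": RiskClass.MINIMAL,
    "critical_infrastructure_control": RiskClass.HIGH,
}

def risk_class(intended_purpose: str) -> RiskClass:
    """Unknown purposes are not defaulted: they need a manual Annex III review."""
    try:
        return AGENT_RISK[intended_purpose]
    except KeyError:
        raise ValueError(f"Unmapped purpose {intended_purpose!r}: classify manually")
```

Deliberately refusing to classify unknown purposes keeps the human assessment step in the loop instead of silently defaulting to "minimal".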

Impact on Architecture

The EU AI Act has direct implications for the technical architecture of AI agent systems (an audit-trail sketch follows the list):

  • Logging and audit trail: Every decision by a high-risk agent must be traceably logged. This requires robust logging infrastructure, which we describe in detail in our article on AI agent architectures.
  • Human-in-the-loop: For high-risk applications, a human must be able to intervene in the decision process. The agent system needs defined escalation points.
  • Data protection by design: The AI Act complements the GDPR. Both must be fulfilled simultaneously. Our recommendation: on-premise LLMs and GDPR-compliant AI infrastructure as the foundation for privacy-critical applications.
  • Versioning and reproducibility: Model versions, prompt templates, and configurations must be versioned and documented.
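
As a sketch of what such an audit trail could look like in practice, the following record structure combines traceable logging with the versioning requirement from the last bullet. The schema, field names, and file path are our own illustration, not mandated by the Act:

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AgentDecisionRecord:
    """One traceable log entry per agent decision (illustrative schema)."""
    decision_id: str
    timestamp: str
    model_version: str        # versioned model, per the reproducibility bullet
    prompt_template_id: str   # versioned prompt template
    input_summary: str
    output_summary: str
    escalated_to_human: bool  # human-in-the-loop marker

def log_decision(record: AgentDecisionRecord, path: str = "audit_trail.jsonl") -> None:
    """Append one record per line; a chronological JSONL file is easy to audit."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AgentDecisionRecord(
    decision_id=str(uuid.uuid4()),
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="in-house-llm@2025-06",   # hypothetical identifiers
    prompt_template_id="doc-routing-v4",
    input_summary="invoice 2025-0113 received",
    output_summary="routed to accounting",
    escalated_to_human=False,
))
```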

Compliance Checklist for Companies

Use this checklist as a starting point for your AI Act compliance:

Phase 1: Inventory (Immediately)

  • Create an AI inventory: Identify and document all AI systems in use — including embedded AI in SaaS products (a minimal record structure is sketched after this list)
  • Conduct risk classification: Assign each system to one of the four risk tiers
  • Check prohibited practices: Ensure no prohibited AI applications are in use (deadline passed: February 2, 2025)
  • Clarify roles: Are you a provider, deployer, importer, or distributor?
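
A minimal inventory record might look like the following sketch; the fields reflect what we typically capture, not a mandated schema:

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """Minimal per-system record for the Phase 1 inventory (fields are illustrative)."""
    name: str         # e.g. "support chatbot", "credit scoring model"
    vendor: str       # internal team or external provider
    embedded_in: str  # "standalone" or the SaaS product it ships inside
    our_role: str     # provider, deployer, importer, or distributor
    risk_class: str   # prohibited, high, limited, minimal, or unclassified
    rationale: str    # why this class was assigned; auditors will ask

inventory = [
    AIInventoryEntry(
        name="support chatbot", vendor="external", embedded_in="helpdesk SaaS",
        our_role="deployer", risk_class="limited",
        rationale="customer-facing chat interface, so Art. 50 transparency applies",
    ),
]
```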

Phase 2: Build Governance (Q1-Q2 2026)

  • Establish AI governance structure with clear responsibilities
  • Build AI literacy across teams (Art. 4 — mandatory since February 2, 2025)
  • Define risk management process for AI systems
  • Plan Data Protection Impact Assessment (DPIA) and Fundamental Rights Impact Assessment
  • Implement incident management for AI malfunctions

Phase 3: Technical Compliance (Q2-Q4 2026)

  • Create technical documentation for high-risk systems
  • Implement logging and monitoring (automatic record-keeping)
  • Build human oversight mechanisms into high-risk workflows (see the escalation sketch after this list)
  • Deploy transparency measures (AI labeling, usage information)
  • Prepare and conduct conformity assessment
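
Here is a minimal sketch of such an oversight mechanism: high-risk outputs stop at a defined escalation point instead of executing automatically. The queue mechanics and helper names are illustrative:

```python
import queue

def execute(decision: dict) -> None:
    print("executing:", decision["action"])  # stand-in for the real side effect

def human_approves(decision: dict) -> bool:
    """Stand-in for a review UI used by qualified personnel."""
    return input(f"Approve '{decision['action']}'? [y/n] ").strip().lower() == "y"

review_queue: queue.Queue = queue.Queue()

def submit(decision: dict, high_risk: bool) -> None:
    """High-risk outputs stop at a defined escalation point; others proceed."""
    if high_risk:
        review_queue.put(decision)
    else:
        execute(decision)

def review_pending() -> None:
    """A human reviewer releases or rejects each queued decision."""
    while not review_queue.empty():
        decision = review_queue.get()
        if human_approves(decision):
            execute(decision)
        else:
            print("rejected:", decision["action"])
```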

Phase 4: Continuous Compliance (From August 2026)

  • Establish post-market monitoring for high-risk systems
  • Schedule regular audits and risk classification reviews
  • Implement training programs for employees operating AI systems
  • Keep documentation current with system updates or model changes

Penalties: What Non-Compliance Means

The EU AI Act provides for significant penalties tied to revenue — similar to the GDPR. In each case, the higher of the two amounts applies:

  • Use of prohibited AI practices: up to EUR 35 million or 7% of global annual turnover
  • Breach of high-risk requirements: up to EUR 15 million or 3% of global annual turnover
  • False information to authorities: up to EUR 7.5 million or 1% of global annual turnover

SMEs and startups benefit from reduced caps, but proportionality does not mean immunity. National supervisory authorities — in Germany, likely the BNetzA (Federal Network Agency) as the central market surveillance authority — will receive enforcement powers. At the EU level, the EU AI Office coordinates enforcement and develops guidelines for harmonized implementation across member states.

Practical Example: AI Agent for Proposal Processing

Consider a concrete scenario: A mid-sized company deploys an AI agent that analyzes incoming tenders, extracts relevant information, and generates proposal drafts.

Risk classification: Minimal to Limited. The agent does not make decisions about natural persons and does not operate in a high-risk domain. However, if the agent automatically sends binding proposals, governance requirements increase.

Required measures:

  1. Transparency: Inform business partners about the use of AI in proposal creation
  2. Quality assurance: Human review before sending proposals (see the pipeline sketch after this list)
  3. Documentation: Traceable logging of agent decisions
  4. Data quality: Ensure training and context data are accurate and current
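
The four measures can be wired into a simple pipeline: the agent extracts and drafts, a human releases. The following sketch uses stand-in functions for the actual agent steps:

```python
def extract_information(tender: str) -> dict:
    return {"summary": tender[:60]}                        # stand-in for the extraction agent

def generate_proposal_draft(facts: dict) -> str:
    return f"Proposal draft based on: {facts['summary']}"  # stand-in for the LLM drafting step

def reviewer_releases(draft: str) -> bool:
    return input("Release this proposal? [y/n] ") == "y"   # measure 2: human review gate

def send_proposal(draft: str) -> None:
    print("Sent (with AI-use disclosure):", draft)         # measure 1: transparency

def process_tender(tender: str) -> None:
    """The agent drafts; a human releases; each step leaves a trace."""
    facts = extract_information(tender)
    draft = generate_proposal_draft(facts)
    print("audit:", {"step": "draft_created", **facts})    # measure 3: documentation
    if reviewer_releases(draft):
        send_proposal(draft)
```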

This scenario demonstrates: even at minimal regulatory risk, a structured approach to AI systems is valuable — not just for compliance, but for quality and trust.

The EU AI Act and GDPR: Working Together

The AI Act does not replace the GDPR — it complements it. Both regulatory frameworks must be fulfilled in parallel. The key intersections:

  • Data minimization (GDPR) meets data quality requirements (AI Act): You need high-quality training data but must not collect more data than necessary.
  • Data subject rights (GDPR) are reinforced by transparency obligations (AI Act): Affected individuals must be informed not only about data processing but also about AI-driven decisions.
  • Data Protection Impact Assessment (GDPR) and Fundamental Rights Impact Assessment (AI Act) can be conducted together.
  • On-premise deployment can address both requirements simultaneously — GDPR-compliant AI infrastructure with local LLMs minimizes data transfer risks and strengthens controllability.

Practical Steps: From Analysis to Implementation

The path to AI Act compliance does not have to be guesswork. Here are five concrete steps you can initiate immediately:

Step 1: Inventory Your AI Systems

List every AI system in your organization — including embedded AI in SaaS tools such as CRM systems, HR platforms, or ERP solutions. Many companies underestimate the number of AI systems they already have in use. Microsoft Copilot, Salesforce Einstein, SAP Business AI — all of these count.

Step 2: Assign Risk Classes

For each identified system, work through the checklist: Is it prohibited? High-risk (Annex III)? Limited? Minimal? Document your assessment and the reasoning. This documentation is not just good practice — it will be expected during an audit.

Step 3: Establish a Governance Structure

Define responsibilities: Who is the AI officer? Who oversees compliance? Who conducts risk classifications? A clear governance structure is the prerequisite for all subsequent measures.

Step 4: Evaluate Technical Infrastructure

Assess whether your current IT infrastructure meets the AI Act's requirements: logging, monitoring, versioning, human-in-the-loop mechanisms. Identify gaps and prioritize by risk class. For AI automation in regulated environments, a solid technical foundation is essential.

Step 5: Launch a Training Program

Article 4 of the AI Act obliges companies to build AI literacy among employees who operate or oversee AI systems. This obligation has been in effect since February 2, 2025. Start with awareness training for leadership and in-depth training for technical teams.

Frequently Asked Questions (FAQ)

Does the EU AI Act apply to SMEs and startups?

Yes. The AI Act applies regardless of company size. However, the regulation provides accommodations: SMEs can benefit from reduced penalty caps, and the European Commission has announced guidelines and regulatory sandboxes to ease the entry into compliance for smaller companies. The substantive requirements remain identical — a high-risk system is a high-risk system, regardless of who operates it.

We only use SaaS AI tools (e.g., ChatGPT Enterprise, Microsoft Copilot). Are we affected?

Yes, as a deployer. You are responsible for using AI systems in accordance with the provider's instructions, ensuring human oversight, and — for high-risk applications — conducting a Fundamental Rights Impact Assessment. Additionally, verify that the SaaS provider fulfills its provider obligations, particularly regarding technical documentation and conformity assessment.

How do I correctly classify my AI system?

Check in three steps: (1) Does the system fall under a prohibited practice (Art. 5)? (2) Does the intended purpose fall under Annex III (high-risk areas)? (3) Is the system a component of a product subject to EU harmonization legislation (Annex I)? If none apply, check transparency obligations for chatbots, deepfakes, or emotion recognition. When in doubt: seek legal advice with AI specialization.
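
The three-step check can be expressed as a small decision function; the boolean inputs stand in for a human legal assessment of each question:

```python
def classify(prohibited_practice: bool, annex_iii_purpose: bool,
             annex_i_product: bool, transparency_case: bool) -> str:
    """Three-step check; the boolean inputs come from a human legal assessment."""
    if prohibited_practice:                   # step 1: Art. 5 prohibited practice?
        return "prohibited"
    if annex_iii_purpose or annex_i_product:  # steps 2 and 3: high-risk routes
        return "high-risk"
    if transparency_case:                     # chatbot, deepfake, emotion recognition
        return "limited (transparency obligations)"
    return "minimal"

print(classify(False, False, False, True))    # a customer chatbot -> limited
```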

What happens if the risk class of my system changes?

This can happen — for example, if you start using a previously internal AI agent for hiring decisions. You must continuously review the risk classification, especially when the intended purpose changes, models are updated, or functionality is expanded. If the system is reclassified upward, you must meet the corresponding compliance requirements within a reasonable timeframe.

How do the EU AI Act and ISO standards relate?

The European Commission has tasked the European standardization organizations CEN and CENELEC with developing harmonized standards for the AI Act. These standards will likely build on existing frameworks such as ISO/IEC 42001 (AI Management System) and ISO/IEC 23894 (AI Risk Management). Compliance with harmonized standards will provide a presumption of conformity — similar to CE marking.

Next Steps: How We Can Help

The EU AI Act is complex, but it is also an opportunity. Companies that establish regulatory clarity now gain trust from customers, partners, and employees. Compliantly deployed AI is better AI — documented, traceable, quality-assured.

At IJONIS, we help companies build AI systems that are not only technically excellent but also regulatory-safe:

  • AI inventory and risk classification: We analyze your existing AI systems and assign them to the correct risk categories.
  • Compliance architecture: We design AI agent systems with built-in logging, human-in-the-loop, and audit trails — AI Act-compliant from the start.
  • GDPR + AI Act: We implement AI automation that addresses both regulatory frameworks simultaneously.
  • On-premise and hybrid: For maximum control, we deploy local LLMs and secure infrastructure.

Want to know where your company stands with the EU AI Act? Talk to us — we will conduct an initial assessment and develop your compliance roadmap together with you.


How ready is your company for AI? Find out in 3 minutes with our free, AI-powered readiness assessment. Take the free assessment →
