

Jamin Mahmood-Wiebe


*Image: Training room with an AI competency matrix on a whiteboard and professionals in an AI regulation workshop*

AI Literacy Obligation: What Article 4 of the EU AI Act Demands from Your Company

Most companies have heard of the EU AI Act. Very few know that one of the first obligations is already in effect — since February 2, 2025: the AI literacy obligation under Article 4. Providers and deployers of AI systems must ensure their staff possess a "sufficient level of AI literacy." Not an optional nice-to-have, but law.

What does "sufficient" mean? Who is affected? How do you prove compliance? This article answers these questions — with a competency matrix, a four-step training program, and documentation recommendations.

What Does Article 4 of the EU AI Act Require?

"The AI Act is not about slowing down innovation. It's about making sure that humans remain in control." — Thierry Breton, former EU Commissioner for Internal Market

The wording of Article 4 of the AI Regulation is deliberately broad:

"Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used."

That sounds abstract. But the core elements are concrete:

  • Who is responsible? Providers and deployers — both companies that develop AI and those that deploy it.
  • Who must be trained? All persons involved in the operation and use of AI systems — including external contractors.
  • What is the standard? There is no universal benchmark. Competency must be context-dependent: adapted to the role, the use case, and the affected persons.
  • Since when does this apply? Since February 2, 2025 — alongside the prohibitions from Chapter II.
ℹ️

EU Commission: No One-Size-Fits-All

The EU Commission has clarified in its FAQ that there is no mandated certification and no standardized training format. Requirements depend on the specific deployment context. A customer service team managing a chatbot needs different competencies than an IT department operating an AI agent for process automation.

Who Is Affected?

Article 4 addresses two clearly defined groups: providers who develop and market AI systems, and deployers who use AI systems in their organizations. Both groups bear responsibility for ensuring their staff are adequately trained and competent.

Providers: Companies That Develop AI

Companies that develop and market AI systems. For them, Article 4 means: not only must their development team be AI-literate, but also the people providing support, training, and documentation for deployers.

Deployers: Companies That Use AI

Companies that use AI systems — and that covers the majority. If your company uses Microsoft Copilot, ChatGPT Enterprise, SAP Business AI, or an internal AI agent, you are a deployer under the AI Act.

Your staff who operate, monitor, or oversee these systems must have sufficient AI literacy.

Important: Even if you use AI only indirectly — through a CRM with embedded AI or an ERP system with AI-powered forecasting — you fall under deployer obligations. Industry chambers across Germany — from the Hamburg Chamber of Commerce to the IHK Cologne — are already publishing guidance for businesses in their regions, and similar guidance is emerging across the EU.

What Does "Sufficient AI Literacy" Mean?

The EU Commission deliberately avoided a rigid definition. Instead, Article 4 names four dimensions that must be considered when assessing competency:

  1. Technical knowledge — Understanding how the AI system works, its limitations, and its risks
  2. Experience and education — Existing qualifications and practical experience with AI
  3. Context of use — In which domain and for which tasks the AI system is deployed
  4. Affected persons — Who is impacted by the AI system's decisions or outputs

This means: a CEO deciding whether to deploy an AI system needs different competencies than the specialist using it daily. And an AI system that summarizes internal reports requires less in-depth knowledge than one making automated credit decisions.

Competency Matrix: Which Role Needs Which Knowledge?

One of the biggest challenges in implementing Article 4 is the question: who exactly needs to know what? The following competency matrix provides a practical framework.

Reading guide: "Yes" means in-depth knowledge required. "Basics" means awareness-level understanding. "Limited" means situational — only when the role is directly involved with AI-driven decisions.

What the Matrix Means in Practice

  • Executive leadership must understand which AI systems run in the organization, what risks they carry, and which legal obligations arise. No programming knowledge needed, but strategic AI literacy is essential.
  • IT leadership needs the full package: technical understanding, legal framework knowledge, and risk assessment capability. This role often serves as the internal AI governance anchor.
  • Business units (e.g., sales, marketing, procurement) need awareness: What can the AI tool do? What can it not? When should I escalate? Which outputs can I accept without review — and which cannot?
  • Data protection officers must understand the interface between GDPR and the AI Act — especially for systems processing personal data.
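The role profiles above can be sketched as a simple lookup structure. This is a minimal illustration in which the role names, competency areas, and levels are assumptions drawn from the reading guide and the bullets, not an official Article 4 taxonomy:

```python
# Illustrative sketch of the role-based competency matrix described above.
# Role names, areas, and levels are assumptions for illustration:
# "yes" = in-depth, "basics" = awareness-level, "limited" = situational.

COMPETENCY_MATRIX = {
    "executive_leadership": {"technical": "basics", "legal": "yes",
                             "risk_assessment": "yes"},
    "it_leadership":        {"technical": "yes", "legal": "yes",
                             "risk_assessment": "yes"},
    "business_unit":        {"technical": "basics", "legal": "basics",
                             "risk_assessment": "limited"},
    "data_protection":      {"technical": "basics", "legal": "yes",
                             "risk_assessment": "yes"},
}

def required_level(role: str, area: str) -> str:
    """Return the target competency level for a role/area pair."""
    return COMPETENCY_MATRIX[role][area]

print(required_level("business_unit", "risk_assessment"))  # limited
```

Keeping the matrix in a machine-readable form like this makes it easy to diff target profiles against actual training records later.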

Practical Implementation: Training Program in Four Steps

The obligation is clear. Implementation does not have to be a massive project. Four steps are enough to build a solid AI literacy program.

Step 1: AI Inventory and Role Analysis

Before you train, you need to know what to train on. Create a complete inventory of all AI systems in your organization — including embedded AI in SaaS products. Then map each role to the systems it uses, monitors, or oversees.

Specifically:

  • List all AI systems (including Copilot, ChatGPT, embedded AI in CRM/ERP)
  • Classify each system by risk category (see EU AI Act risk classes overview)
  • Identify the roles involved with each system
  • Derive training needs per role from the competency matrix
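The four steps above can be sketched as a small data structure. System names, risk labels, and roles below are illustrative assumptions, not a legal classification:

```python
from dataclasses import dataclass, field

# Minimal sketch of the Step 1 AI inventory. Entries are illustrative
# examples, not a risk classification under the AI Act.

@dataclass
class AISystem:
    name: str
    risk_category: str                          # e.g. "minimal", "limited", "high"
    roles: list = field(default_factory=list)   # roles that use or oversee it

inventory = [
    AISystem("Microsoft Copilot", "minimal", ["business_unit", "it_leadership"]),
    AISystem("CRM embedded scoring", "limited", ["sales", "data_protection"]),
]

def training_needs(systems):
    """Derive, per role, which systems that role must be trained on."""
    needs = {}
    for system in systems:
        for role in system.roles:
            needs.setdefault(role, []).append(system.name)
    return needs

print(training_needs(inventory)["sales"])  # ['CRM embedded scoring']
```

The point of the exercise is the mapping itself: once every system is linked to the roles that touch it, the training plan falls out of the inventory almost automatically.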

Step 2: Differentiate Training Formats

Not every role needs the same format. Differentiate across three tiers:

| Tier | Target Audience | Format | Duration |
|------|-----------------|--------|----------|
| Awareness | All staff with AI contact | E-learning, short videos, intranet FAQ | 2-4 hours |
| Applied competency | Business units using AI tools | Workshops, hands-on training with real use cases | 1-2 days |
| Governance competency | Leadership, IT, DPO, compliance | Intensive training on legal framework, risk assessment, documentation | 2-3 days |

Step 3: Define Training Content

The EU Commission identifies four competency areas that every training program should cover:

  1. Technical fundamentals — How does the AI system work? What are large language models, neural networks, training data? Where are the limits (hallucinations, bias, data drift)?
  2. Legal context — What obligations does my company have as a deployer? What are the risk classes? Which deadlines apply? How do GDPR and the AI Act interact?
  3. Ethical aspects — How do I recognize bias in AI outputs? When is human oversight necessary? How do I handle AI-generated content transparently?
  4. Risk assessment — How do I evaluate the risk of an AI system? When must I escalate? What documentation is required?

Step 4: Update Regularly

AI literacy is not a one-time training. Technology evolves rapidly — what applies today may be outdated in six months. Plan for:

  • Annual refreshers of awareness training
  • Semi-annual updates for governance leads (new guidelines, enforcement developments)
  • Ad-hoc training when introducing new AI systems or major system updates
  • Onboarding modules for new hires who will use AI systems
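The cadence above can be expressed as a simple due-date helper. The interval names and lengths (annual, semi-annual) mirror the bullets; they are assumptions for illustration, not mandated periods:

```python
from datetime import date, timedelta

# Sketch of the update cadence as a due-date calculator. Interval names
# and lengths are illustrative assumptions mirroring the bullets above.

CADENCE_DAYS = {
    "awareness_refresh": 365,    # annual refresher
    "governance_update": 182,    # semi-annual update for governance leads
}

def next_due(last_run: date, training: str) -> date:
    """Return the date the next iteration of a training is due."""
    return last_run + timedelta(days=CADENCE_DAYS[training])

print(next_due(date(2025, 2, 2), "awareness_refresh"))  # 2026-02-02
```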

Documentation and Evidence — What You Should Retain

Article 4 does not prescribe a specific documentation requirement. However, in the event of an audit by the national supervisory authority, you will need to prove that you took measures. "We sent a few emails" will not suffice.

💡

Recommended Documentation

Retain the following evidence in a structured manner:

  • AI inventory with risk classification and assigned roles
  • Competency matrix with target and actual profiles per role
  • Training plan with formats, content, and timelines
  • Attendance records (participant lists, certificates, e-learning completions)
  • Update log — when were trainings last conducted and updated?
  • Evaluation results — knowledge tests, feedback forms, competency self-assessments

This documentation serves more than compliance. It is the foundation for measuring progress and identifying gaps early.
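A structured attendance record, one of the evidence items listed above, can be as simple as a CSV export. The field names and the sample row are illustrative assumptions:

```python
import csv
import io
from datetime import date

# Sketch of a machine-readable attendance/update log. Field names and
# the sample record are illustrative assumptions, not a prescribed format.

FIELDS = ["employee", "role", "training", "tier", "completed_on", "evidence"]

records = [
    {"employee": "A. Example", "role": "business_unit",
     "training": "Copilot hands-on workshop", "tier": "applied",
     "completed_on": date(2025, 9, 15).isoformat(),
     "evidence": "certificate"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```

Whatever format you choose, the key properties are the same: per-person, per-training, dated, and exportable on request.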

Sanctions: No Direct Penalties — But a Due Diligence Amplifier

Here lies a common misconception: Article 4 has no standalone sanction regime. There are no fines specifically for missing AI training. That sounds reassuring — but it is not.

The reason: Article 4 acts as a due diligence amplifier. If your company violates other AI Act provisions — such as high-risk requirements, transparency obligations, or the prohibition of certain AI practices — and the supervisory authority finds that your staff were not adequately trained, this significantly escalates the assessment.

In concrete terms:

  • Insufficient AI literacy is treated as an aggravating factor in sanction calculations
  • The supervisory authority asks: "Did you take reasonable measures to prevent violations?" If the answer is "No, we did not train our staff," the fine increases
  • AI Act fines are substantial: up to EUR 35 million or 7% of global annual turnover for prohibited practices, up to EUR 15 million or 3% for high-risk violations
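The ceilings cited above follow a "whichever is higher" logic for undertakings: the fixed amount or the turnover percentage, whichever is greater. A worked example with illustrative turnover figures:

```python
# Worked example of the AI Act fine ceilings cited above. The cap for an
# undertaking is the fixed amount or the turnover percentage, whichever
# is higher. Turnover figures below are illustrative.

def fine_cap(turnover_eur: int, fixed_eur: int, pct: int) -> float:
    """Maximum fine: fixed amount or pct% of global annual turnover."""
    return max(fixed_eur, turnover_eur * pct / 100)

# Prohibited practices: up to EUR 35 million or 7% of global turnover.
# For a company with EUR 2 billion turnover, the 7% branch dominates:
print(fine_cap(2_000_000_000, 35_000_000, 7))  # 140000000.0

# For a EUR 100 million company, the fixed EUR 35 million is higher:
print(fine_cap(100_000_000, 35_000_000, 7))
```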

Timeline: What Applies Now and What Is Coming

| Date | Milestone | Relevance for Article 4 |
|------|-----------|-------------------------|
| August 1, 2024 | EU AI Act enters into force | Article 4 is part of the regulation |
| February 2, 2025 | AI literacy obligation applies | Companies must take measures immediately |
| August 2, 2025 | GPAI obligations take effect | Expand training content to cover GPAI topics |
| August 2, 2026 | Enforcement of main obligations | Supervisory authorities actively audit, including AI literacy documentation |
| August 2, 2027 | Extended high-risk obligations | Competency requirements for Annex I products increase |

The critical window: The obligation has been in effect since February 2025. Active enforcement begins in August 2026. You have six months to establish a documented training program — before supervisory authorities start auditing.

Why AI Literacy Is More Than Compliance

The purely regulatory perspective falls short. AI literacy is the prerequisite for everything that follows:

  • Before introducing AI agents, your teams must understand what agents can do and where they fail. Without this understanding, there is no foundation for meaningful governance. More on this: Implementing AI agents — a guide for mid-market companies.
  • Before automating AI processes, you need people who can evaluate outputs, spot errors, and make escalation decisions.
  • Before aligning data privacy with AI, your data protection team must understand how AI systems process data. The intersection of GDPR and the AI Act is not a side topic — it is central. More on GDPR-compliant AI infrastructure.

AI literacy is not the last step of compliance. It is the first step of AI strategy.

FAQ: AI Literacy Obligation Under Article 4

Do my employees need an AI certification?

No. Article 4 does not prescribe any specific certification. The EU Commission has explicitly clarified that there is no standardized examination format. What matters is demonstrating that you have taken context-appropriate measures — adapted to the roles, systems, and risks in your organization.

We only use ChatGPT and Copilot. Does Article 4 apply to us?

Yes, Article 4 applies even to companies that only use off-the-shelf AI products like ChatGPT or Copilot. As a deployer of AI systems, you are obligated to ensure that your staff are sufficiently competent in using these tools. This includes at minimum awareness training: What can the tool do? What are its limitations? When should I not accept the output without review?

What happens if we do not train?

In the short term: nothing directly. Article 4 has no standalone fine. But insufficient AI literacy acts as an aggravating factor for violations of other AI Act provisions. If your staff improperly operate a high-risk system and you cannot demonstrate training efforts, the sanction increases.

How much does an AI literacy program cost?

This depends on company size and the number of AI systems. Based on our project experience, a mid-sized company with 100 to 300 employees should expect EUR 5,000 to 20,000 for initial training development and delivery. Annual updates typically run 20 to 30 percent of the initial investment, in line with common benchmarks for corporate training programs.

Are there template training plans or frameworks?

Industry chambers in Germany are increasingly publishing guidance. At the EU level, the EU AI Office is working on guidelines. As of February 2026, there is no official template training plan. The best foundation is a custom competency matrix based on the four dimensions from Article 4.

Conclusion: Act Now Before Authorities Audit

In summary: the AI literacy obligation under Article 4 has been in effect since February 2025 for every company deploying AI. There is no standard certification, but there is a clear expectation: you must be able to demonstrate that your staff have been trained in a context-appropriate manner. Companies that establish a structured training program now and document the evidence will be prepared when enforcement begins in August 2026. The bottom line is that Article 4 is not a bureaucratic burden, but the foundation for responsible AI deployment.

Next Steps: How We Can Help

At IJONIS, we build and operate AI agents for mid-market companies. The AI literacy obligation affects us and our clients equally. That is why we advise not only on technology but also on organizational enablement:

  • AI inventory and competency audit: We analyze which AI systems you deploy and where competency gaps exist.
  • Competency matrix workshop: Together, we develop a role-specific matrix for your organization.
  • Compliance-ready architecture: We build AI systems with built-in logging, human-in-the-loop mechanisms, and documentation — so your teams can operate them safely.

Want to know where your company stands on AI literacy? Talk to us — we conduct an initial assessment and chart the path forward.


How AI-ready is your company? Find out in 3 minutes — with our free, AI-powered readiness check. Start the check now →
