Model Context Protocol (MCP): The Universal Standard for Enterprise AI Integration
According to a Capgemini study on agentic AI, 46% of enterprises cite integration with existing systems as the primary challenge when deploying AI agents. The model is not the problem. The connection between the model and enterprise data is. Every ERP needs its own connector. Every CRM needs its own integration. Every DMS needs its own adapter. Multiply that by the number of AI models a company uses in parallel, and integration costs explode.
The Model Context Protocol (MCP) solves exactly this problem. And at IJONIS, we don't just talk about it — MCP is part of our daily production environment.
The N-times-M Problem: Why AI Integration Fails Today
Say your company uses three AI models (GPT-5.2, Claude, Gemini). You want to connect them to five systems (SAP, Salesforce, SharePoint, PostgreSQL, Confluence). Without a standard, you need 3 x 5 = 15 individual connectors. Each one has its own authentication, its own data formats, its own error handling.
Without MCP: N AI models x M systems = N x M connectors
3 models x 5 systems = 15 individual integrations
↓
Each integration: custom auth, custom formats, custom maintenance
This is the N-times-M problem. According to McKinsey, 70% of AI project budgets go to integration and data preparation — not to actual AI development. Mid-market companies get stuck after the prototype: the proof of concept with one API works. Production with ten systems does not.
If you've dealt with integration challenges firsthand, you'll recognize the pattern: we covered it extensively in our article on AI integration with ERP, CRM, and PIM. MCP is the answer to exactly this architectural challenge.
What Is the Model Context Protocol?
The Model Context Protocol (MCP) is an open standard that defines how AI models access external data sources and tools. Introduced by Anthropic in November 2024, MCP was adopted by OpenAI, Google, Salesforce, Microsoft, ServiceNow, and Workday within months. Its adoption curve is steeper than OAuth's or OpenAPI's was at a comparable stage.
The core idea: instead of N-times-M connectors, you need only N + M integrations. Every AI model implements the MCP client once. Every enterprise system provides one MCP server. Done.
With MCP: N clients + M servers = N+M integrations
3 MCP clients + 5 MCP servers = 8 integrations (instead of 15)
↓
Each server: built once, usable by every AI model
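The arithmetic behind the two diagrams can be stated in a few lines of code:

```python
def connectors_without_mcp(models: int, systems: int) -> int:
    """Every model needs its own connector to every system (N x M)."""
    return models * systems

def connectors_with_mcp(models: int, systems: int) -> int:
    """Each model implements one MCP client, each system one MCP server (N + M)."""
    return models + systems

# The example from the text: 3 AI models, 5 enterprise systems.
print(connectors_without_mcp(3, 5))  # 15 individual integrations
print(connectors_with_mcp(3, 5))     # 8 integrations
```

The gap widens quickly: at 5 models and 20 systems it is 100 connectors versus 25.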
The Architecture: Host, Client, Server
MCP follows a three-layer architecture:
Host — The AI application (e.g., Claude Desktop, an internal chatbot, an automation agent). The host is the environment where the AI model runs and interacts with users.
Client — The protocol handler inside the host. It manages connections to MCP servers, negotiates capabilities, and routes requests. One client exists per server connection.
Server — The provider of data and tools. An MCP server encapsulates access to a specific system (database, API, filesystem) and exposes it through the standardized protocol.
┌─────────────────────────────────────────┐
│ HOST (AI Application) │
│ │
│ ┌──────────┐ ┌──────────┐ │
│ │ Client A │ │ Client B │ ... │
│ └────┬─────┘ └────┬─────┘ │
└───────┼──────────────┼──────────────────┘
│ │
▼ ▼
┌──────────┐ ┌──────────┐
│ Server A │ │ Server B │ ...
│ (SAP) │ │ (CRM) │
└──────────┘ └──────────┘
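Under the hood, client and server exchange JSON-RPC 2.0 messages. The following toy dispatcher sketches that exchange: the method names (resources/read, tools/call) follow the MCP specification, but the handler bodies are illustrative stand-ins, not a real SAP or CRM backend.

```python
# Toy MCP-style server dispatcher. MCP uses JSON-RPC 2.0; each request carries
# an id, a method, and params, and gets back a correlated result or error.
def handle_request(request: dict) -> dict:
    method = request["method"]
    params = request.get("params", {})
    if method == "resources/read":
        # Read-only access: return the contents of the requested Resource.
        result = {"contents": [{"uri": params["uri"],
                                "text": "order 4711: 120 units, supplier ACME"}]}
    elif method == "tools/call":
        # State-changing access: execute the named Tool (stubbed here).
        result = {"content": [{"type": "text",
                               "text": f"tool '{params['name']}' executed"}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# Client side: one request, one response, correlated by id.
request = {"jsonrpc": "2.0", "id": 1, "method": "resources/read",
           "params": {"uri": "sap://orders/4711"}}
response = handle_request(request)
print(response["result"]["contents"][0]["text"])
```

In production, this exchange runs over stdio or HTTP with a real transport layer; the structure of the messages is what the standard fixes.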
The Three Primitives: Resources, Tools, Prompts
MCP defines three building blocks through which servers expose functionality:
- Resources — read-only data (documents, records, schemas) that the server exposes for the model to read
- Tools — executable actions (create an order, update a record) that the model may invoke only with explicit approval
- Prompts — reusable, parameterized templates for recurring tasks
This separation is deliberate. Resources are read-only and safe. Tools require explicit approval. Prompts standardize recurring tasks. Together, they form the interface through which AI agents interact with enterprise systems. The result: secure, standardized, and auditable access.
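A minimal sketch makes the access rules concrete. The registry below is illustrative (the URIs, tool names, and payloads are invented for this example), but it captures the key distinction: Resources are served freely, Tools are gated behind approval.

```python
# Toy registry for the three MCP primitives and their access rules.
RESOURCES = {  # read-only: safe to expose without approval
    "hr://policies/travel": "Travel policy v3: bookings via the internal portal ...",
}
TOOLS = {  # state-changing: require explicit approval before execution
    "create_order": lambda supplier, qty: f"order for {qty} units at {supplier}",
}
PROMPTS = {  # reusable templates for recurring tasks
    "summarize_policy": "Summarize the following policy in three bullet points:\n{text}",
}

def read_resource(uri: str) -> str:
    """Resources are always readable; no approval gate."""
    return RESOURCES[uri]

def call_tool(name: str, approved: bool, **kwargs) -> str:
    """Tools change state, so the host must confirm approval first."""
    if not approved:
        raise PermissionError(f"tool '{name}' requires explicit user approval")
    return TOOLS[name](**kwargs)

print(read_resource("hr://policies/travel"))
print(call_tool("create_order", approved=True, supplier="ACME", qty=120))
```

A real MCP server enforces this same asymmetry at the protocol level: the host surfaces every Tool invocation to the user or a policy engine before it runs.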
MCP in Practice: Three Enterprise Scenarios
Scenario 1: ERP Integration (SAP, Microsoft Dynamics)
A procurement agent reviews purchase order suggestions from SAP. It compares supplier conditions and triggers the order upon approval. Without MCP, this means custom integration via SAP OData, RFC modules, and BAPI wrappers — built separately for each AI model.
With MCP: An SAP MCP server (e.g., via Theobald Software) exposes Resources (order data, supplier master, conditions) and Tools (create order, update status). Any AI model that speaks MCP can use this server — without SAP-specific code.
Concrete benefit: Switching from GPT-5.2 to Claude or a local model requires zero changes to the SAP integration. The MCP server remains the same.
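For illustration, such a "create order" Tool might be advertised to clients roughly like this. The field layout follows the MCP tool schema; the tool name and parameters are invented for this sketch and are not Theobald's actual interface:

```json
{
  "name": "create_purchase_order",
  "description": "Create a purchase order in SAP for an approved suggestion",
  "inputSchema": {
    "type": "object",
    "properties": {
      "supplier_id": { "type": "string" },
      "material":    { "type": "string" },
      "quantity":    { "type": "integer", "minimum": 1 }
    },
    "required": ["supplier_id", "material", "quantity"]
  }
}
```

Because the schema is standard JSON Schema, any MCP client can validate and invoke the tool without SAP-specific knowledge.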
Scenario 2: Document Management (SharePoint, Confluence)
A knowledge agent searches internal policies, finds relevant documents, and creates summaries. The documents live in SharePoint and Confluence. That means two different APIs and two different permission models.
With MCP: One MCP server each for SharePoint and Confluence. Both expose Resources (documents, metadata, permissions). The AI agent queries both sources through the same MCP standard without needing to understand the difference between SharePoint Graph API and Confluence REST API.
Scenario 3: CRM Integration (Salesforce, HubSpot)
A sales agent enriches customer data, logs activities, and suggests follow-ups. The CRM is Salesforce today. But leadership is also evaluating HubSpot as a replacement.
With MCP: The Salesforce MCP server is already available (Salesforce officially announced MCP support). If the switch to HubSpot happens, only the MCP server is swapped. The agent, its logic, and all agentic workflows remain unchanged. This is the strategic dimension of MCP: it decouples AI logic from the target system.
IJONIS in Practice
At IJONIS, we use MCP daily in production: Supabase MCP for direct database queries, Context7 for up-to-date library documentation, Playwright MCP for browser-based automation, and Serena for semantic code operations. This isn't theory — MCP is the infrastructure our agentic workflows run on.
MCP vs. Custom APIs vs. Middleware — Decision Guide
Not every integration needs MCP. The right choice depends on complexity, the number of systems, and your planned AI strategy:
- Custom API: one AI model talking to one stable system, with no model switch planned — a direct integration stays simpler and cheaper
- Middleware / iPaaS: data synchronization and orchestration between business systems, where AI is only one consumer among many
- MCP: several AI models (or planned model flexibility) that need access to several systems — this is where the N+M effect pays off
Implementation: Step by Step to Your First MCP Server
Getting started with MCP does not have to be complex. With a clearly scoped use case and a pre-built server, you can have a production MCP server running within days — without writing any protocol code yourself. The following five steps form the roadmap.
Step 1: Identify the Use Case
Don't start with the most complex system. Choose a system with a clear API that an AI agent already uses or is planned to use. Typical starting points:
- PostgreSQL database — Pre-built MCP server available, immediately deployable
- Filesystem — Expose local documents via MCP
- Google Drive / Slack / GitHub — Community servers with broad adoption
Step 2: Choose or Build an MCP Server
MCP servers exist for over 2,000 platforms according to the MCP ecosystem tracker. Check the official registry first:
- Pre-built: PostgreSQL, MySQL, SQLite, Google Drive, Slack, GitHub, GitLab, Jira, Confluence, Notion, Stripe, Shopify, AWS, Docker
- Enterprise vendors: Salesforce (official), SAP (via Theobald), ServiceNow (official), Workday (official)
- Custom build: For proprietary or legacy systems, create your own MCP server. The MCP SDK (Python, TypeScript) reduces effort to business logic only
Step 3: Configure and Test
An MCP server runs as a standalone process and is registered via a configuration file. The host (e.g., Claude Desktop, your own agent) discovers the server automatically.
Typical workflow:
- Install the MCP server (e.g., npx @modelcontextprotocol/server-postgres)
- Create the configuration (connection details, permissions, allowed tools)
- Register the server in the host (JSON configuration)
- Test: fetch Resources, invoke Tools, validate results
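For illustration, a Claude-Desktop-style host configuration registering the pre-built PostgreSQL server looks roughly like this. The server name and connection string are placeholders for your own environment:

```json
{
  "mcpServers": {
    "erp-database": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://readonly_user@localhost:5432/erp"
      ]
    }
  }
}
```

On the next start, the host launches the server as a child process, negotiates capabilities, and lists its Resources and Tools automatically.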
Step 4: Define Permissions and Guardrails
Before the MCP server goes live, define:
- Read-only vs. read-write: Which Tools can the agent invoke? Start with read-only Resources.
- Data filters: Which records are accessible via MCP? Enforce row-level security in the source system.
- Rate limits: Maximum requests per minute to protect the source system from overload.
- Logging: Every MCP call is logged — input, output, timestamp, invoking agent.
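The rate-limit and logging guardrails from the list above can be sketched as a thin wrapper around tool dispatch. This is an in-process illustration under simplifying assumptions (one global limit, an in-memory log); a real deployment would enforce limits per agent and persist the audit trail outside the process:

```python
import time
from collections import deque

AUDIT_LOG: list[dict] = []          # in production: an append-only store
MAX_CALLS_PER_MINUTE = 60
_call_times: deque = deque()

def guarded_tool_call(agent: str, tool: str, payload: dict) -> str:
    """Wrap every MCP tool invocation with rate limiting and audit logging."""
    now = time.time()
    # Drop timestamps older than the 60-second window.
    while _call_times and now - _call_times[0] > 60:
        _call_times.popleft()
    if len(_call_times) >= MAX_CALLS_PER_MINUTE:
        raise RuntimeError("rate limit exceeded: protect the source system")
    _call_times.append(now)

    result = f"{tool} executed"     # stand-in for the actual tool dispatch
    # Log input, output, timestamp, and invoking agent for every call.
    AUDIT_LOG.append({"timestamp": now, "agent": agent,
                      "tool": tool, "input": payload, "output": result})
    return result

print(guarded_tool_call("procurement-agent", "create_order", {"qty": 120}))
print(len(AUDIT_LOG))  # 1
```

The same wrapper is the natural place to hang the read-only check from the first bullet: refuse any tool not on an explicit allowlist.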
Step 5: Expand Incrementally
After the first production MCP server, add more servers — one system at a time. Each new server automatically extends the capabilities of all connected AI models. That's the N+M effect in action.
Security and Governance for MCP Integrations
MCP standardizes the interface — but security is your responsibility. An open protocol does not mean open doors. Access control, data privacy, and auditability must be defined per server. Three layers are critical:
Authentication and Authorization
MCP servers must meet the same security standards as any other API integration:
- OAuth 2.0 / OIDC for authentication between client and server
- Scope-based authorization: Not every agent needs access to all Resources and Tools
- Service accounts with minimal permissions (Least Privilege Principle)
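Scope-based authorization with least privilege reduces to a simple membership check at dispatch time. The scope names below are assumptions for this sketch, not a fixed MCP vocabulary:

```python
# Each service account carries only the scopes it needs (least privilege).
AGENT_SCOPES = {
    "knowledge-agent":   {"resources:read"},
    "procurement-agent": {"resources:read", "tools:orders:write"},
}

# Each tool declares the scope required to invoke it.
TOOL_REQUIRED_SCOPE = {
    "read_policy":  "resources:read",
    "create_order": "tools:orders:write",
}

def is_authorized(agent: str, tool: str) -> bool:
    """Allow a tool call only if the agent holds the tool's required scope."""
    required = TOOL_REQUIRED_SCOPE[tool]
    return required in AGENT_SCOPES.get(agent, set())

print(is_authorized("knowledge-agent", "create_order"))    # False
print(is_authorized("procurement-agent", "create_order"))  # True
```

In practice, the scopes arrive inside an OAuth 2.0 access token and the check runs in the MCP server before any backend call is made.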
Data Privacy and Compliance
For DACH-region enterprises, specific requirements apply:
- Data classification: What data does the MCP server expose? Personal data requires additional protection
- Processing records: Document MCP server access as a processing activity (GDPR Art. 30)
- Data minimization: Expose only the Resources necessary for the use case
- EU hosting: Run MCP servers and underlying systems within the EU
For more on the legal foundations, see our article on GDPR-compliant AI.
Audit Trail and Traceability
The EU AI Act demands transparency about AI decisions. MCP supports this through its structured architecture:
- Every Tool invocation is a discrete, loggable event
- Resources have defined schemas — the data foundation of a decision is traceable
- The separation of Host, Client, and Server enables granular logging at each layer
Outlook: MCP as Infrastructure Standard 2026+
The momentum is remarkable. From Anthropic's release in November 2024 to adoption by OpenAI, Google, Salesforce, Microsoft, and Workday, less than six months passed. Over 2,000 platforms already offer MCP interfaces — a faster spread than OAuth or OpenAPI managed at a comparable stage.
Three developments that will define 2026:
1. Enterprise expansion: MCP is moving from developer tools (IDE integrations, coding assistants) into business systems. SAP, Salesforce, ServiceNow — the systems that power mid-market companies are becoming MCP-enabled. For the first time, AI agents can connect to core systems without costly custom integration.
2. MCP-as-a-Service: Managed MCP servers as a cloud offering. Instead of running your own servers, you subscribe to a managed SAP MCP server with enterprise SLA, monitoring, and security updates. This lowers the entry barrier for companies without dedicated DevOps teams.
3. Multi-agent ecosystems: When every agent accesses the same servers via MCP, true multi-agent systems emerge: a research agent uses the Confluence MCP server, an analysis agent uses the SAP MCP server, a writing agent uses the email MCP server — all coordinated by an orchestrator. We describe the foundations in our article on agentic workflows.
FAQ: Model Context Protocol in the Enterprise
Is MCP only for Anthropic models (Claude)?
No, MCP is an open standard designed to be vendor-neutral from the start. Although Anthropic introduced the protocol, OpenAI, Google, and numerous other providers have implemented MCP support. MCP is model-agnostic — that is the entire point. You connect your systems once and can switch AI models at any time without touching the server side.
Do I need to replace our existing API integrations?
Not necessarily. MCP doesn't automatically replace existing integrations. It's most valuable when you use or plan to use multiple AI models that need access to the same systems. For a single, stable integration between one model and one system, a custom API can still make sense.
How secure is MCP for sensitive enterprise data?
MCP defines the protocol, not the security architecture. Security lies in the implementation: OAuth for authentication, scope-based authorization, transport-layer encryption (TLS), audit logging. A well-implemented MCP server is as secure as a well-implemented REST API — with the added benefit of standardized security patterns.
Do I need development resources for MCP?
For pre-built MCP servers (PostgreSQL, Google Drive, Slack), configuration is enough — no code required. For custom servers (proprietary systems, legacy ERPs), you'll need development capacity. The MCP SDK (Python and TypeScript) reduces the effort to business logic; the SDK handles the protocol layer.
How does MCP change the role of the IT department?
The IT department becomes an infrastructure provider for AI capabilities. Instead of building individual integrations for each AI project, it provides MCP servers that all AI applications in the organization can use. This shifts the focus from project-by-project integration to a strategic integration platform.
Conclusion: MCP Is Infrastructure, Not a Feature
The Model Context Protocol solves the fundamental problem of AI integration: the multiplicative complexity that arises when multiple models need access to multiple systems. The data infrastructure that MCP unlocks — ERP data, CRM histories, document repositories — is the fuel that turns AI agents from prototypes into production tools.
For mid-market enterprises, this means: your existing systems — SAP, Salesforce, SharePoint — are not being replaced. They're being made accessible through a universal standard. No API spaghetti, no vendor lock-in, no dependency on a single AI provider.
The right time to start is now. Not because the hype demands it, but because the infrastructure is mature: the servers exist, the standards are stable, the major vendors have committed.
Want to evaluate MCP for your IT landscape? Talk to our integration experts in Hamburg — we'll analyze your system landscape and identify the MCP servers that deliver the biggest leverage for your AI strategy.
How ready is your company for AI? Find out in 3 minutes — with our free, AI-powered readiness assessment. Start the check now →


