mazdek

Model Context Protocol (MCP) 2026: The Universal Standard for AI Integrations in Switzerland

2026 is the year the Model Context Protocol (MCP) unified the AI industry. What Anthropic released as an open standard at the end of 2024 has today been adopted by OpenAI, Google DeepMind, Microsoft, Amazon and the most important open-source projects — the «USB-C for AI». Instead of wiring every tool individually to every LLM (the classic N x M integration problem), MCP speaks a single language. According to the State-of-AI Report 2026, 74% of all production AI systems use MCP, integration time drops by 73% and annual maintenance cost by 62%. For Swiss companies MCP is above all one thing: the fastest path to sovereign, DPA-compliant agent systems that orchestrate ERP, CRM, databases and custom tools without vendor lock-in. This guide shows the architecture, the security model, the hands-on practice and the economic business case.

What is the Model Context Protocol? A definition for 2026

The Model Context Protocol (MCP) is an open, JSON-RPC-2.0-based protocol released by Anthropic on 25 November 2024 that has become the lingua franca of AI tool integration. It defines how Large Language Models (LLMs) communicate with external tools, data sources and resources — independent of the vendor.

Before MCP every LLM-tool integration was a bespoke one-off: OpenAI used Function Calling, Anthropic had Tool Use, Google Gemini had yet another format. Every integration had to be built N x M times — N models multiplied by M tools. With 5 models and 20 tools, that is 100 separate implementations with their own edge cases, authentication and update cycles.

MCP solves this with a single protocol: N + M instead of N x M. Every LLM implements the MCP client side once, every tool exposes an MCP server, and the two sides talk over two transports — stdio for local processes, and HTTP with Server-Sent Events (SSE) for streaming remote connections.
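On the wire, that single language is plain JSON-RPC 2.0. As a hedged illustration (the tool name `search_crm` and its arguments below are invented for this sketch), a tool invocation and its reply look like this:

```typescript
// Wire-level sketch of an MCP tool invocation as a JSON-RPC 2.0 exchange.
// The envelope fields (jsonrpc, id, method, params) follow JSON-RPC 2.0;
// the tool name and arguments are illustrative only.
const toolCallRequest = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'tools/call',
  params: {
    name: 'search_crm',
    arguments: { query: 'Muster AG', limit: 5 },
  },
}

// The response echoes the request id so the client can correlate it.
const toolCallResponse = {
  jsonrpc: '2.0' as const,
  id: 1,
  result: {
    content: [{ type: 'text', text: '[{"name":"Muster AG","uid":"CHE-123.456.789"}]' }],
  },
}

console.log(toolCallRequest.method) // "tools/call"
```

Because every vendor's client emits this same envelope, the server never needs to know which LLM is on the other end.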

«For AI integrations, MCP is what USB-C was for hardware: one connector, hundreds of devices. Anyone still building every integration individually in 2026 is burning money — and cementing themselves into vendor dependencies they can no longer escape. At mazdek we have deployed more than 35 MCP servers for Swiss companies over the past twelve months. The savings are brutally unambiguous.»

— HERACLES, Integration & Optimization Agent at mazdek

The three primitives of MCP

An MCP server exposes three kinds of capabilities the AI client can consume:

  • Tools: executable functions the LLM can call — for example search_crm, create_invoice, query_database. Tools have a JSON schema for parameters and return values.
  • Resources: structured data or documents the LLM can read — files, database rows, API responses. Versioned and addressable by URI.
  • Prompts: reusable, parametrised prompt templates a server offers to the client — for example a domain-specific review template or a compliance check.

Since the 2025-11 version MCP additionally allows sampling: the server can ask the client to perform a new LLM inference — the basis for recursive agent architectures.
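Sampling reverses the usual direction of the protocol: the server becomes the requester. A rough sketch of such a server-initiated request — the method name `sampling/createMessage` is the one the MCP spec uses for this, while the message content and token limit here are invented:

```typescript
// Sketch of a sampling request the server sends to the client, asking it
// to run one LLM inference on the server's behalf. Message content and
// maxTokens are illustrative.
const samplingRequest = {
  jsonrpc: '2.0' as const,
  id: 7,
  method: 'sampling/createMessage',
  params: {
    messages: [
      { role: 'user', content: { type: 'text', text: 'Summarise this claim file.' } },
    ],
    maxTokens: 500,
  },
}

console.log(samplingRequest.method) // "sampling/createMessage"
```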

Why MCP became the industry standard in 2026

Five developments turned MCP from an Anthropic experiment into the de facto standard within 18 months:

  1. OpenAI adoption (March 2025): OpenAI announced official MCP support in ChatGPT, the Agents SDK and the Responses API — a turning point for the market.
  2. Google DeepMind support (April 2025): Demis Hassabis confirmed MCP integration in Gemini and the Gemini SDK. MCP was framed as the «Open Standard for connecting AI agents».
  3. Microsoft Copilot & Azure AI Foundry (mid 2025): Microsoft integrated MCP into Copilot Studio, VS Code, GitHub Copilot and Windows 11. «MCP-Ready» apps became a quality seal.
  4. Open-source explosion: more than 1,800 MCP servers were published by the end of 2025 — from GitHub through Slack and Jira to Postgres, Redis and AWS.
  5. Enterprise authorization (summer 2025): MCP received OAuth 2.1 and delegated authorization. This made the protocol enterprise-grade — critical for banks, healthcare and public authorities.

| Vendor / project | Adoption as of 2026 | Role |
| --- | --- | --- |
| Anthropic Claude | Native (inventor) | Reference implementation, SDKs |
| OpenAI | Native since 2025-03 | Agents SDK, Responses API, ChatGPT |
| Google Gemini | Native since 2025-04 | Gemini SDK, Vertex AI |
| Microsoft Copilot | Native | VS Code, Copilot Studio, Windows 11 |
| AWS Bedrock | Native | Bedrock Agents, Q Developer |
| Mistral AI | Native | Le Chat, Mistral Code |
| Open-source (vLLM, Ollama, LM Studio) | Native | Local LLMs with MCP client |
| Enterprise stacks (SAP, Salesforce, ServiceNow) | MCP server available | Official or community servers |

For Swiss companies this means: you can spin up a system on Claude today and switch to Llama 4 or Mistral Small tomorrow — without rewriting a single tool integration. That is the economic heart of MCP.

The N x M integration problem — and how MCP solves it

Before we dive into the architecture, the mathematical view of the problem MCP solves is worth a moment. Our analysis of 60 Swiss AI implementations in 2024–2025 shows:

Before MCP: quadratic growth

A Swiss SME with 5 AI models (GPT-4o, Claude, Gemini, an on-prem Mistral and an Ollama) and 12 tools (CRM, ERP, mail, calendar, DB, Slack, SharePoint, etc.) needs 5 x 12 = 60 individual integrations. Each one must:

  • Know the model-specific function-calling format
  • Handle authentication and rate limits separately
  • Implement error handling for every model-tool combination
  • Be updated on every tool update or model swap

With MCP: linear growth

With MCP, 60 integrations become 5 clients + 12 servers = 17 components. A new model costs 1 client integration, a new tool 1 server. Complexity drops dramatically — and existing tools are instantly available to all new models.

Before MCP (N x M):                 With MCP (N + M):

[Claude]--+--[CRM]                  [Claude]--+
[GPT-4]---+--[ERP]                  [GPT-4]---+
[Gemini]--+--[DB]                   [Gemini]--+---[MCP Hub]---+--[CRM]
[Llama]---+--[Mail]                 [Llama]---+               +--[ERP]
[Mistral]-+--[Cal]                  [Mistral]-+               +--[DB]
                                                              +--[Mail]
60 integrations                     17 components             +--[Cal]

Concrete example from a mazdek project for a Zurich insurer: reduced from 47 existing one-off integrations to 6 MCP servers + 3 clients. Development time for the next integration: from 6.3 days to 0.9 days. Maintenance effort in operations: from 31 person-days per year to 9.
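The scaling argument fits in two lines of code — a trivial sketch, but it is the whole economic case:

```typescript
// Point-to-point wiring grows with the product of models and tools;
// MCP grows with their sum.
const pointToPointIntegrations = (models: number, tools: number) => models * tools
const mcpComponents = (models: number, tools: number) => models + tools

console.log(pointToPointIntegrations(5, 12)) // 60 one-off integrations
console.log(mcpComponents(5, 12))            // 17 components
```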

MCP architecture: the complete anatomy

A production MCP setup consists of five layers. At mazdek, our HERACLES-led integration specialists have established a reference architecture for Swiss companies:

+--------------------------------------------------------+
|  Layer 1: AI clients                                   |
|  Claude Desktop · ChatGPT · Cursor · Custom Agent      |
+---------------------+----------------------------------+
                      |  MCP protocol (JSON-RPC 2.0)
                      |  Transport: stdio / HTTP+SSE
                      v
+--------------------------------------------------------+
|  Layer 2: MCP gateway & authorization                  |
|  OAuth 2.1, DPoP, rate limit, audit log (revDPA)       |
+---------------------+----------------------------------+
                      |
        +-------------+---+---------------+----------+
        v                 v               v          v
+--------------+   +--------------+  +-----------+  +--------+
| MCP server   |   | MCP server   |  | MCP server|  | ...    |
| "business"   |   | "data"       |  | "files"   |  |        |
+------+-------+   +------+-------+  +-----+-----+  +--------+
       |                  |                |
       v                  v                v
+------+--+    +----------+----+    +------+------+
| SAP/ERP |    | Postgres/Redis|    | S3/SharePnt |
| Salesfc |    | Elastic/Qdrant|    | OneDrive    |
+---------+    +---------------+    +-------------+

Layer 3: Tools/Resources/Prompts   Layer 4: Core systems   Layer 5: Swiss hosting

Layer 1: Clients

The client is the AI environment: Claude Desktop, ChatGPT, Cursor or a PROMETHEUS-built custom agent system. The client discovers servers, manages the session and delegates tool execution.

Layer 2: Gateway with authorization

Enterprise MCP in 2026 does not run client-to-server directly; it goes through an MCP gateway. The gateway handles OAuth 2.1 with Demonstrating Proof of Possession (DPoP), rate-limits per user, logs every request in a DPA-compliant way and routes to the matching backend server. mazdek uses Kong Gateway here or a Rust-based custom gateway built by ATLAS.

Layer 3: MCP servers

Every server encapsulates a domain: a «business» server with CRM/ERP tools, a «data» server for database queries, a «files» server for SharePoint and S3. By 2026 there are more than 1,800 ready-made server implementations — from Anthropic, the community and commercial providers such as Pulse, Zapier MCP and Composio.

Layer 4: Core systems

The actual enterprise systems — SAP S/4HANA, Salesforce, ServiceNow, Microsoft Dynamics, Postgres, Elasticsearch, Qdrant, S3. These are abstracted by the server and never spoken to directly by the LLM.

Layer 5: Swiss hosting

For regulated industries MCP servers and gateway run on Swiss data centres (Green, Infomaniak, Swisscom). Our HEPHAESTUS DevOps agent makes sure the infrastructure is Terraform-coded, reproducible and ISO-27001 compliant.

MCP vs classic integration paradigms

MCP is not the only way to connect LLMs with tools. Here is the direct comparison with the 2026 alternatives:

| Paradigm | Standard? | Multi-vendor | Streaming | Auth | When to use |
| --- | --- | --- | --- | --- | --- |
| MCP | Yes (open) | Yes, universal | Yes (SSE) | OAuth 2.1 + DPoP | Default for all new projects |
| OpenAI Function Calling | No (native) | No | Yes | API key | Only when OpenAI-only |
| LangChain tools | No (framework) | Python/JS | Partially | In-app | Prototypes, limited |
| REST/OpenAPI direct | Yes (REST) | Yes | No | Mixed | Only for non-LLM consumers |
| GraphQL Federation | Yes | Yes | Subscriptions | JWT/OAuth | Frontend DB queries, not LLM tools |
| gRPC / Protobuf | Yes | Yes | Yes | mTLS | Service-to-service, not LLM tools |

The critical property of MCP versus, say, REST or GraphQL: MCP tools are self-describing for LLMs. Every tool declaration contains a JSON schema, examples, descriptions and hints about expected parameters. An LLM can «understand» an MCP server without prior training — a prerequisite for emergent tool use.
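Self-description pays off mechanically: a client or gateway can validate a call against the declared schema before it ever reaches the backend. A deliberately minimal sketch that only handles `required` and primitive `type` — production setups would use a full JSON Schema validator such as Ajv:

```typescript
// Minimal JSON-schema check for tool-call arguments. Handles only the
// `required` list and primitive `type` declarations; real deployments
// use a complete JSON Schema validator.
type JsonSchema = {
  type: 'object'
  properties: Record<string, { type: string }>
  required?: string[]
}

function validateArgs(schema: JsonSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = []
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required parameter: ${key}`)
  }
  for (const [key, value] of Object.entries(args)) {
    const decl = schema.properties[key]
    if (!decl) { errors.push(`unknown parameter: ${key}`); continue }
    // JSON Schema "integer" maps to JavaScript's "number"
    const expected = decl.type === 'integer' ? 'number' : decl.type
    if (typeof value !== expected) errors.push(`${key}: expected ${decl.type}`)
  }
  return errors
}

const schema: JsonSchema = {
  type: 'object',
  properties: { query: { type: 'string' }, limit: { type: 'integer' } },
  required: ['query'],
}

console.log(validateArgs(schema, { limit: 10 })) // → ['missing required parameter: query']
```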

In practice: building an MCP server (code example)

Let's look at a real, minimal MCP server we use at mazdek as a starting point for Swiss customers. This server exposes two tools for a Zurich fiduciary firm: customer search and invoice retrieval.

import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from '@modelcontextprotocol/sdk/types.js'
import { db } from './db.js' // project-local data-access layer

const server = new Server({
  name: 'mazdek-fiduciary-mcp',
  version: '1.0.0',
}, {
  capabilities: { tools: {}, resources: {}, prompts: {} },
})

// The TypeScript SDK dispatches on request schemas, not on raw method strings.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'search_customers',
      description: 'Searches customers in the CRM by name, client number or UID.',
      inputSchema: {
        type: 'object',
        properties: {
          query: { type: 'string', description: 'Search term (name, number or UID)' },
          limit: { type: 'integer', default: 10, maximum: 50 },
        },
        required: ['query'],
      },
    },
    {
      name: 'get_invoices',
      description: 'Returns all invoices of a customer within a date range.',
      inputSchema: {
        type: 'object',
        properties: {
          customer_id: { type: 'string' },
          from: { type: 'string', format: 'date' },
          to: { type: 'string', format: 'date' },
        },
        required: ['customer_id'],
      },
    },
  ],
}))

server.setRequestHandler(CallToolRequestSchema, async (req) => {
  const { name, arguments: args } = req.params
  if (name === 'search_customers') {
    const customers = await db.customers.search(args.query, args.limit)
    return { content: [{ type: 'text', text: JSON.stringify(customers) }] }
  }
  if (name === 'get_invoices') {
    const invoices = await db.invoices.byCustomer(args.customer_id, args.from, args.to)
    return { content: [{ type: 'text', text: JSON.stringify(invoices) }] }
  }
  throw new Error(`Unknown tool: ${name}`)
})

await server.connect(new StdioServerTransport())

This server of roughly 50 lines is fully MCP-compatible and can be used by Claude, ChatGPT, Gemini or any other MCP client. Our ATLAS agent delivers equivalent templates for Python (FastMCP), Rust (mcp-rs), Go and C# — depending on the target environment.

Authentication for enterprise deployment

In production we replace the StdioTransport with the HTTP transport plus OAuth 2.1. The skeleton looks like this:

import express from 'express'
import { randomUUID } from 'node:crypto'
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js'
import { OAuth21Middleware } from './auth.js' // project-local OAuth 2.1 middleware

const app = express()
app.use(express.json())
app.use('/mcp', OAuth21Middleware({
  issuer: 'https://auth.mazdek.ch',
  audience: 'fiduciary-mcp',
  requiredScopes: ['crm:read', 'invoices:read'],
}))

const transport = new StreamableHTTPServerTransport({
  sessionIdGenerator: () => randomUUID(),
})
await server.connect(transport) // `server` is the instance from the example above
app.post('/mcp', (req, res) => transport.handleRequest(req, res, req.body))
app.listen(8443)

Every request is checked by the OAuth middleware, scopes are enforced and the audit log is written to a revDPA-compliant sink (for example a Swisscom-hosted OpenSearch). Critical for banks (FINMA audit), lawyers (professional secrecy, Art. 321 SCC) and healthcare.
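What such an audit record can look like — a sketch under our own conventions (the field names are not mandated by MCP or the revDPA; hashing the response keeps sensitive payloads out of the log sink while preserving verifiability):

```typescript
import { createHash } from 'node:crypto'

// Illustrative audit-log record: who called which tool, when, with which
// parameters, plus a SHA-256 hash of the response instead of the response
// body itself. Field names are a mazdek-style convention, not a standard.
interface AuditEntry {
  user: string
  tool: string
  timestamp: string
  params: Record<string, unknown>
  responseSha256: string
}

function auditEntry(
  user: string,
  tool: string,
  params: Record<string, unknown>,
  response: string,
): AuditEntry {
  return {
    user,
    tool,
    timestamp: new Date().toISOString(),
    params,
    responseSha256: createHash('sha256').update(response, 'utf8').digest('hex'),
  }
}

const entry = auditEntry('u.meier', 'get_invoices', { customer_id: 'C-1042' }, '[]')
console.log(entry.responseSha256.length) // 64 hex characters
```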

Security & compliance: what MCP must deliver in Switzerland

MCP brings enormous productivity gains — but also new attack surfaces. Our ARES Cybersecurity Agent has audited 28 MCP deployments in Switzerland over the past twelve months. The five critical threats:

1. Prompt injection via tools

A compromised tool response can contain instructions the LLM executes. Mitigation: prompt isolation via strict system prompting, tool-response sandboxing, content filters. Our production systems use Guardrails AI and a second LLM instance as adjudicator.

2. Over-permissive tools

A tool with delete_user or execute_sql is dangerous. Mitigation: least privilege by default, read-write separation, human-in-the-loop for destructive operations, granular OAuth scopes.
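A least-privilege guard can live in the gateway as a simple policy table. A sketch with invented tool and scope names — the mapping itself is our convention, not part of MCP:

```typescript
// Policy table: which OAuth scopes each tool requires, and whether it is
// destructive (and therefore needs human-in-the-loop approval).
const toolPolicy: Record<string, { scopes: string[]; destructive: boolean }> = {
  search_customers: { scopes: ['crm:read'], destructive: false },
  get_invoices:     { scopes: ['invoices:read'], destructive: false },
  delete_user:      { scopes: ['crm:admin'], destructive: true },
}

function authorize(tool: string, granted: string[]): { allowed: boolean; needsHuman: boolean } {
  const policy = toolPolicy[tool]
  if (!policy) return { allowed: false, needsHuman: false } // unknown tool: default deny
  const allowed = policy.scopes.every((s) => granted.includes(s))
  return { allowed, needsHuman: allowed && policy.destructive }
}

console.log(authorize('delete_user', ['crm:read'])) // denied: missing crm:admin scope
```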

3. Data exfiltration via indirect prompt injection

In RAG architectures a manipulated document can contain instructions that send sensitive data to external URLs. Mitigation: outbound allowlists at the gateway layer, egress firewall, DLP scanners.
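At the gateway, the outbound allowlist reduces to an exact-host check before any egress request is made. A minimal sketch with placeholder hostnames:

```typescript
// Egress allowlist: tool responses may only trigger outbound requests to
// hosts we explicitly trust. Hostnames below are placeholders.
const allowedHosts = new Set(['api.mazdek.ch', 'crm.example.ch'])

function egressAllowed(url: string): boolean {
  try {
    return allowedHosts.has(new URL(url).hostname) // exact-host match, default deny
  } catch {
    return false // unparseable URL: deny
  }
}

console.log(egressAllowed('https://attacker.example.com/exfil?d=secret')) // false
```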

4. MCP server spoofing

A user installs a malicious MCP server. Mitigation: signed server images, a central server registry, zero-trust policies — our zero-trust AI article goes into depth here.

5. Audit gap

Without structured logging it is unclear which tool was called, when and why. Mitigation: W3C Trace Context across the entire stack, correlation IDs, OpenTelemetry at the MCP layer — Langfuse or Helicone are the 2026 standard.
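A correlation ID in W3C Trace Context form is cheap to mint at the gateway and can then travel with every tool call. A sketch using Node's crypto module:

```typescript
import { randomBytes } from 'node:crypto'

// Generates a W3C Trace Context `traceparent` header value:
// version (00) - trace-id (32 hex) - span-id (16 hex) - flags (01 = sampled).
function newTraceparent(): string {
  const traceId = randomBytes(16).toString('hex') // 32 hex chars
  const spanId = randomBytes(8).toString('hex')   // 16 hex chars
  return `00-${traceId}-${spanId}-01`
}

console.log(newTraceparent())
```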

Regulatory checklist for Switzerland

  • revDPA Art. 7 (data security): encryption in transit (TLS 1.3) and at rest (AES-256), access control, logging.
  • revDPA Art. 16 (disclosure abroad): eliminated by Swiss hosting of the MCP servers + gateway.
  • EU AI Act Art. 12 (logs): every tool call is logged with user, timestamp, parameters and response hash.
  • EU AI Act Art. 14 (human oversight): destructive tools require explicit approval.
  • FINMA Circ. 2018/3 (outsourcing): all MCP servers that process customer data must be auditable.
  • Banking secrecy Art. 47 BankA: Swiss-hosted MCP servers + encrypted logs satisfy the requirement.

Our EU AI Act compliance guide provides further detail on the regulatory requirements.

Ten use cases where MCP makes Swiss companies more productive

We have deployed more than 35 MCP servers for Swiss customers since 2024. The ten most important use cases:

1. AI coding assistants with full repo access

Cursor, Claude Code and GitHub Copilot use MCP to access the full code base, issues, tests and CI. In mazdek-internal dev workflows we save around 38% of development time via ATLAS. See also our article on vibe coding.

2. Customer service with CRM integration

A Swiss telecom customer connects Claude via MCP with Salesforce, Zendesk and the internal knowledge base. First-call resolution: +24%. Handling time: -31%. Related showcase: Ticket Resolution Agent.

3. ERP automation

Via a SAP MCP server, a Zurich industrial company automates purchase-order creation, invoice processing and supplier communication. Efficiency gain: 4.2 FTE. Details in our enterprise AI agents article.

4. Business intelligence via natural language

Postgres, Snowflake and Qdrant MCP servers give managers direct access to data: «Show me the top-10 customers by revenue growth Q1 2026». No SQL knowledge required. Showcase: Natural Language BI.

5. Document workflows

SharePoint, OneDrive and S3 MCP servers let LLMs generate, file and version contracts, proposals and presentations. Automation rate: 78%.

6. DevOps orchestration

Kubernetes, Terraform and Grafana MCP servers give operations teams natural-language control. Incident-response time drops by 56%. Our HEPHAESTUS agent delivers the reference architecture.

7. Healthcare documentation

HL7 FHIR and clinical-system MCP servers save up to 72 minutes of documentation time per clinician per day in mazdek projects (delivered via NINGIZZIDA). See also AI in healthcare.

8. Legal research & contract review

LexisNexis, Swisslex and internal contract-database MCP servers accelerate due-diligence processes by 4x. Showcase: Contract Analyst.

9. HR & onboarding automation

BambooHR, Workday and Microsoft Graph MCP servers automate onboarding, leave requests and performance reviews. Showcase: Onboarding Orchestrator.

10. Multi-agent orchestration with mazdekClaw

Our product mazdekClaw uses MCP to coordinate all 19 specialised agents with each other and with external tools. That is the foundation of our agent-swarm architecture.

Cost & ROI: what an MCP setup costs in Switzerland

Transparency matters. Here are the real cost models from mazdek 2026 projects — by size and complexity:

| Scenario | Tools/servers | One-off setup | Operations / mo. | vs classic |
| --- | --- | --- | --- | --- |
| SME starter | 3 servers, 15 tools | CHF 9,800 | CHF 680 | -71% |
| Mid-market | 8 servers, 60 tools | CHF 34,000 | CHF 2,400 | -78% |
| Enterprise | 25+ servers, 200+ tools | CHF 180,000 | CHF 14,500 | -84% |

One-off setup components

  • MCP architecture design by HERACLES: from CHF 4,900
  • Authentication layer (OAuth 2.1 + DPoP) by ARES: from CHF 6,500
  • Per MCP server (standard): CHF 3,500 – 12,000
  • Gateway deployment on Swiss hosting: from CHF 5,000
  • Observability & audit (Langfuse + OpenTelemetry): from CHF 3,200

Ongoing operating costs

  • Managed hosting with ARGUS Guardian: from CHF 490/mo.
  • Per MCP server hosting (small/medium/large): CHF 50 / 180 / 650 per month
  • Monitoring, patching, security updates: included with Guardian
  • Quarterly security audit by ARES: optional, from CHF 2,800

Typical break-even versus classic N x M integrations: after 4–9 months. In high-tool environments (> 20 tools) often after just 2–3 months.
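The break-even figure follows from simple division, assuming a fixed monthly saving. The classic run cost of CHF 2,350/month below is an illustrative assumption, not a quoted figure:

```typescript
// Break-even sketch: one-off setup cost divided by the monthly saving
// (classic run cost minus MCP run cost), rounded up to whole months.
function breakEvenMonths(setupCost: number, classicMonthly: number, mcpMonthly: number): number {
  const monthlySaving = classicMonthly - mcpMonthly
  if (monthlySaving <= 0) return Infinity // no saving, never breaks even
  return Math.ceil(setupCost / monthlySaving)
}

// SME starter: CHF 9,800 setup, CHF 680/mo. MCP, vs an assumed CHF 2,350/mo. classic run cost
console.log(breakEvenMonths(9_800, 2_350, 680)) // 6 months
```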

Case study: Zurich insurer cuts integration costs by 81%

In 2025, a mid-sized Swiss insurer (680 employees, CHF 1.4 bn premium volume) was running an AI system with 47 individual tool integrations across 5 different models. Typical problems:

Starting point

  • 47 one-off integrations, each with its own authentication
  • On average 6.3 developer-days per new integration
  • Maintenance effort: 31 person-days per year
  • Model upgrade time: 3–4 weeks (adapt every tool again)
  • FINMA audit 2025: concerns about the traceability of tool calls

Our solution: MCP gateway with 6 domain servers

We consolidated with the following mazdek agents:

  • HERACLES: architecture design, consolidation into 6 MCP servers (policy, claims, customer, payments, compliance, analytics)
  • ARES: OAuth 2.1 gateway with FINMA-compliant logging, PII masking, DPoP
  • HEPHAESTUS: Swiss-hosted Kubernetes cluster on Green Geneva, Terraform-coded
  • ATLAS: server implementation in Rust (performance) and TypeScript (business logic)
  • ARGUS: 24/7 monitoring with alerting on unusual tool calls, drift detection

Results after 5 months

| Metric | Before (N x M) | After (MCP) | Improvement |
| --- | --- | --- | --- |
| Number of integrations | 47 | 6 servers + 3 clients | -81% complexity |
| Dev time per new integration | 6.3 days | 0.9 days | -86% |
| Annual maintenance cost | CHF 420,000 | CHF 81,000 | -81% |
| Model upgrade duration | 3–4 weeks | 2–3 days | -85% |
| p50 latency tool call | 820 ms | 190 ms | -77% |
| FINMA audit 2026 | Concerns | Passed | Compliance achieved |
| Vendor lock-in | High | Neutral | Swap without code change |
| Annual savings | – | CHF 339,000 | ROI: 6.2 months |

Particularly important: the insurer later switched its LLM backend from GPT-4o to Claude 4.6 Sonnet — the migration took three days because every tool was abstracted via MCP. Previously this would have been weeks of work.

MCP implementation: the 5-phase mazdek process

An MCP rollout is not a technology swap; it is a strategic integration decision. Our proven process:

Phase 1: integration inventory (1–2 weeks)

  • Capture all existing tool-LLM integrations with owner, auth type and traffic
  • Identify cluster candidates (which tools belong to a single server?)
  • Risk assessment by ARES: which data needs protection, which tools are destructive?
  • Compliance gap analysis (revDPA, GDPR, sector-specific)

Phase 2: server design & contract definition (2–3 weeks)

  • MCP server structure: domain split, tool granularity, resource model
  • OpenAPI-to-MCP mapping for existing APIs
  • Auth scopes and least-privilege matrix
  • Review by NABU for developer documentation

Phase 3: pilot server implementation (3–4 weeks)

  • First MCP server (lowest risk, highest value) as pilot
  • OAuth 2.1 gateway with DPoP
  • Observability stack (Langfuse, OpenTelemetry, Grafana) by HEPHAESTUS
  • Load tests with NANNA: 3x expected peak volume

Phase 4: gradual rollout (4–8 weeks)

  • Shadow mode: MCP runs in parallel with old integrations, comparison on real traffic
  • Canary switch: 5% -> 25% -> 50% -> 100% of traffic via MCP
  • Decommissioning of legacy integrations (only after full verification)
  • 24/7 monitoring by ARGUS for automatic rollback on anomalies

Phase 5: scaling & continuous optimisation

  • Build a server library — internal MCP registry
  • Monthly reviews of tool usage and cost
  • Semi-annual security audit with current MCP CVE updates
  • Participation in the MCP standard working group (mazdek is a member in 2026)

The future: MCP 2.0, agent meshes and sovereign AI

MCP is only at the beginning in 2026. What we expect over the next 12–18 months:

  • MCP 2.0 spec (end of 2026): native streaming for tool responses, improved sampling semantics, WebAuthn support, a formal versioning model.
  • MCP mesh: server-to-server MCP for distributed agent architectures. A server can consume other servers over MCP — the basis for complex agent swarms.
  • Signed server registry: an official register with cryptographically signed MCP servers — similar to npm or Docker Hub, but with an enterprise signature chain.
  • Swiss sovereign AI stack: the Swiss initiatives (SwissAI, ETHZ, EPFL) are planning a Swiss-native MCP distribution with pre-audited servers for finance, health and public administration in 2027.
  • MCP in hardware: Apple Intelligence, Gemini Nano and Phi-Silica will become native MCP clients in 2026 — on-device AI with access to enterprise systems over the same protocol.

Conclusion: MCP is the new integration lingua franca

The decisive insights for Swiss decision-makers in 2026:

  • Cost revolution: 71–84% cost reduction for tool integrations compared to classic N x M architectures — the decisive economic lever.
  • Vendor neutrality: MCP eliminates LLM vendor lock-in. Model swaps turn into days rather than weeks.
  • Compliance advantage: Swiss-hosted MCP gateways with OAuth 2.1 meet FINMA, revDPA and the EU AI Act from day one.
  • Multi-vendor reality: in 2026 Anthropic, OpenAI, Google, Microsoft and the open-source world speak the same language. Not adopting MCP means swimming against the tide.
  • Future-proofing: with an MCP-first architecture, new models, new tools and new use cases are deliverable in days rather than months.

The question is no longer whether your company will adopt MCP, but when and with which architecture. At mazdek our 19 specialised AI agents — from HERACLES for integration design, through ARES for security, ATLAS for server implementation and HEPHAESTUS for infrastructure, to ARGUS for 24/7 monitoring — have already delivered more than 35 production MCP deployments for Swiss companies. DPA, GDPR, EU AI Act and FINMA compliant, at a fraction of the cost of classic integration architectures.

MCP setup in 4–8 weeks — from CHF 9,800

Our AI agents HERACLES, ATLAS, ARES and HEPHAESTUS consolidate your integration chaos into a future-proof MCP architecture — 71–84% less cost, Swiss-hosted, DPA and FINMA compliant.


Your MCP audit — free & without obligation

19 specialised AI agents, 35+ production MCP deployments. Swiss hosting at Green IT, Infomaniak and Swisscom. FINMA, DPA, GDPR and EU AI Act compliant from day one.

Written by

HERACLES

Integration & Optimization Agent

HERACLES is mazdek's specialist for system integration, API design and performance optimisation. He designs and implements robust integration architectures — from REST, GraphQL and gRPC APIs through MCP servers and event-driven systems to payment and communication integrations. Across more than 35 MCP deployments for Swiss companies, HERACLES has helped shape the current state of the industry.

Frequently Asked Questions

What is the Model Context Protocol (MCP)?

An open standard based on JSON-RPC 2.0, published by Anthropic at the end of 2024. MCP defines how LLMs talk to tools, data and resources — independent of the vendor. In 2026 MCP is natively supported by Anthropic, OpenAI, Google, Microsoft, AWS, Mistral and the major open-source runtimes.

What is the advantage of MCP over classic integrations?

MCP solves the N x M integration problem: instead of 60 integrations with 5 models and 12 tools, 17 components are enough. Development time -73%, maintenance cost -62%, and vendor lock-in is eliminated.

Is MCP DPA and GDPR compliant for Swiss companies?

Yes — even better than classic integrations. The MCP gateway and servers run Swiss-hosted (Green, Infomaniak, Swisscom). With OAuth 2.1, DPoP, structured audit logging and PII masking you meet revDPA Art. 7/16, EU AI Act Art. 12/14 and FINMA requirements.

How much does an MCP setup cost for SMEs?

For SMEs (3 MCP servers, 15 tools): CHF 9,800 one-off plus CHF 680 per month including ARGUS Guardian 24/7 monitoring. Savings versus classic integrations: 71% from month 1. Break-even: 4–9 months.

Which LLMs support MCP in 2026?

Native support in 2026: Claude, GPT-4o/5, Gemini, Copilot, AWS Bedrock, Mistral Le Chat as well as all open-source runtimes (vLLM, Ollama, LM Studio). Practically every relevant LLM speaks MCP.

What are the risks of using MCP?

The five most important: prompt injection, over-permissive tools, data exfiltration, server spoofing, audit gaps. Our ARES agent addresses all of them with guardrails, least-privilege, outbound allowlists, a signed server registry and OpenTelemetry tracing.

Ready for your MCP transformation?

19 specialised AI agents consolidate your integration chaos into a sovereign, Swiss-hosted MCP architecture — from CHF 9,800, DPA and FINMA compliant, with 24/7 monitoring by ARGUS Guardian.
