
If Your Enterprise Wants MCP, This Is the Right Way to Do It
- Mark Kendall
Introduction
As AI tools like Claude become part of the developer workflow, teams inevitably ask the same question:
“How does the AI safely connect to enterprise systems like Jira, Kafka, or internal APIs?”
Recently the term MCP (Model Context Protocol) has started appearing in discussions as the answer.
That’s fine — but if an enterprise is going to adopt MCP, it needs to be implemented correctly.
The biggest mistake teams make is assuming every developer or team should run their own MCP connections. That approach quickly creates security problems, inconsistent access, and operational chaos.
The correct enterprise model is much simpler:
Shared services own the MCP layer.
Development teams consume it.
If your organization decides MCP is the path forward, this article describes the practical architecture that actually works.
What Is MCP?
Model Context Protocol (MCP) is an open protocol that gives AI models a structured interface for interacting with tools and external systems.
Instead of an AI model directly calling services like Jira or Kafka, the model calls tools exposed by an MCP server.
Those tools then perform the actual integration with enterprise systems.
The architecture looks like this:
Claude (Developer AI Assistant)
│
│ Tool Request
▼
MCP Client (Developer Environment)
│
▼
Enterprise MCP Service Layer
│
├── Jira
├── Kafka
├── GitHub
├── ServiceNow
└── Internal APIs
The important point is this:
Claude never directly connects to enterprise systems.
Instead, it calls controlled tools exposed through MCP services.
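To make the indirection concrete, here is a minimal sketch in plain Python (illustrative only, not the real MCP SDK): the model never reaches Jira or Kafka itself; it submits a named tool request, and the MCP layer dispatches it to a registered handler.

```python
# Illustrative sketch of the MCP indirection pattern (not the real SDK):
# the AI assistant submits a named tool request, and only the MCP layer
# knows how to reach the enterprise system behind it.

TOOL_REGISTRY = {}

def tool(name):
    """Register a handler under a tool name the model can call."""
    def decorator(fn):
        TOOL_REGISTRY[name] = fn
        return fn
    return decorator

@tool("jira.search_tickets")
def search_tickets(project, type, status):
    # In a real server this would call the Jira REST API using
    # service-account credentials held inside the MCP layer.
    return {"project": project, "type": type, "status": status}

def dispatch(tool_name, **arguments):
    """The only entry point a model's tool request ever reaches."""
    handler = TOOL_REGISTRY[tool_name]
    return handler(**arguments)

result = dispatch("jira.search_tickets",
                  project="PAYMENTS", type="BUG", status="OPEN")
```

The key design point is that the registry and dispatch step live server-side: adding, removing, or restricting a tool is a platform change, never a per-developer change.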
The Enterprise MCP Model
In a real enterprise environment, MCP should be treated as a shared platform capability.
That means it belongs to the same type of group that manages:
• CI/CD platforms
• developer portals
• internal APIs
• security gateways
Typically this is a Platform Engineering or Shared Services team.
Their responsibilities include:
• running MCP servers
• integrating enterprise systems
• managing authentication
• defining available tools
• auditing usage
Development teams do not build their own MCP integrations.
They simply connect to the shared MCP service.
How Developers Connect
From the developer perspective, the setup should be extremely simple.
Claude runs locally on the developer machine through an IDE or CLI environment.
The developer then configures the MCP connection once.
Example configuration (~/.mcp/config.json):
{
  "servers": {
    "enterprise": {
      "url": "https://mcp.company.internal"
    }
  }
}
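A client-side loader for this file might look like the following sketch (the path and schema are the ones shown above; the loader itself is illustrative):

```python
import json
from pathlib import Path

def load_mcp_config(path="~/.mcp/config.json"):
    """Read the MCP client configuration and return the server map."""
    config = json.loads(Path(path).expanduser().read_text())
    return config["servers"]

# servers = load_mcp_config()
# servers["enterprise"]["url"]  -> "https://mcp.company.internal"
```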
Once configured, Claude can discover the tools provided by the enterprise MCP service.
Those tools might include:
• jira.search_tickets
• jira.create_ticket
• kafka.publish_event
• kafka.read_topic
• deploy.trigger_pipeline
• servicenow.create_incident
Developers do not manage tokens, credentials, or API integrations.
That complexity stays inside the shared service layer.
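Tool discovery can be pictured with a small sketch (the class and method names are hypothetical, not the actual protocol messages): the client asks the shared server what tools exist, and only then issues calls.

```python
# Hypothetical sketch of tool discovery: the client learns which tools
# the shared enterprise MCP service exposes, rather than hard-coding
# integrations on the developer machine.

class EnterpriseMCPServer:
    """Stand-in for the shared service run by platform engineering."""
    def __init__(self):
        self._tools = {
            "jira.search_tickets": "Search Jira issues by project/type/status",
            "kafka.publish_event": "Publish an event to a Kafka topic",
            "deploy.trigger_pipeline": "Trigger a deployment pipeline",
        }

    def list_tools(self):
        return sorted(self._tools)

class MCPClient:
    """Stand-in for the developer-side client configured once."""
    def __init__(self, server):
        self.server = server

    def discover(self):
        return self.server.list_tools()

available = MCPClient(EnterpriseMCPServer()).discover()
```

Because the tool list comes from the server, the platform team can roll out a new integration to every developer without anyone touching local configuration.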
Security and Authentication
Security is one of the main reasons MCP exists.
In the enterprise model:
Developers authenticate using their normal identity provider.
Examples:
• SSO
• Okta
• Azure AD
• enterprise certificates
The MCP service then uses service accounts or managed credentials to interact with enterprise systems.
That means:
• Claude never sees system credentials
• developers never store integration tokens
• access can be centrally audited
This keeps the AI interaction within the same security model used for all enterprise systems.
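The credential separation can be sketched like this (all names and tokens are hypothetical): the server validates the developer's identity token, then performs the integration with its own service-account credential, which never leaves the platform.

```python
# Hypothetical sketch of the credential boundary: the developer proves
# identity with an SSO token; the Jira call uses a service-account
# credential that never leaves the MCP service.

VALID_SSO_TOKENS = {"dev-token-123": "alice@company.com"}  # stand-in for the IdP
SERVICE_ACCOUNT_SECRET = "jira-svc-secret"                 # held only server-side

def call_jira(jql):
    # A real implementation would send SERVICE_ACCOUNT_SECRET in an
    # Authorization header; it is never included in the response.
    return {"jql": jql, "issues": []}

def handle_tool_call(sso_token, project):
    user = VALID_SSO_TOKENS.get(sso_token)
    if user is None:
        raise PermissionError("unknown identity")
    # The audit trail records WHO asked; the downstream call
    # authenticates as the platform, not as the developer.
    return {"user": user, "result": call_jira(f"project = {project}")}

outcome = handle_tool_call("dev-token-123", "PAYMENTS")
```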
Example: Jira Integration
A shared services MCP server might expose a Jira tool like this:
jira.search_tickets
A developer could ask Claude:
“Show me open bugs in the Payments service.”
Claude would translate this into a tool call:
jira.search_tickets(
    project="PAYMENTS",
    type="BUG",
    status="OPEN"
)
The MCP service calls the Jira REST API and returns the results.
Claude then presents the information to the developer.
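Inside the shared service, that tool call would typically be translated into a JQL query against Jira's REST search endpoint. A minimal sketch of the translation step (the helper name is hypothetical; the JQL field names are standard):

```python
# Translate the structured tool arguments into a JQL string, as the
# shared MCP service might do before calling Jira's /rest/api/2/search.

def build_jql(project, type, status):
    # "type" mirrors the tool signature; JQL's field is "issuetype".
    return f"project = {project} AND issuetype = {type} AND status = {status}"

jql = build_jql(project="PAYMENTS", type="BUG", status="OPEN")
# The service would then GET .../rest/api/2/search?jql=<url-encoded jql>
```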
Example: Kafka Integration
Kafka can also be exposed safely through MCP tools.
Example tools:
• kafka.publish_event
• kafka.describe_topic
• kafka.read_messages
A developer might ask Claude:
“Publish a test order event to the checkout topic.”
Claude sends a request through MCP:
kafka.publish_event(
    topic="checkout.orders",
    payload={...}
)
The MCP server sends the message to Kafka and returns the result.
Again, the AI never directly connects to Kafka.
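Server-side, the publish tool is a thin wrapper over a producer owned by the platform. Here is a hypothetical sketch with a stubbed producer (a real server would use a Kafka client library such as kafka-python or confluent-kafka, with broker credentials held by the service):

```python
import json

class StubProducer:
    """Stand-in for a real Kafka producer owned by the MCP service."""
    def __init__(self):
        self.sent = []

    def send(self, topic, value):
        self.sent.append((topic, value))

def publish_event(producer, topic, payload):
    # Serialize the payload exactly once, at the service boundary.
    value = json.dumps(payload).encode("utf-8")
    producer.send(topic, value)
    return {"topic": topic, "bytes": len(value)}

producer = StubProducer()
ack = publish_event(producer, "checkout.orders", {"order_id": "test-1"})
```

Keeping serialization and topic access inside the wrapper means the platform team can enforce schemas and topic allow-lists in one place.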
Why Enterprises Should Centralize MCP
Running MCP servers as shared infrastructure provides several benefits.
Security
Credentials remain inside controlled infrastructure rather than developer machines.
Consistency
All teams use the same tool definitions and integrations.
Observability
Usage can be logged, audited, and monitored.
Governance
Access policies can be managed centrally.
Without this structure, MCP quickly turns into a collection of unmanaged local integrations.
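Centralized observability falls out naturally when every call passes through one dispatch point. A minimal audit-wrapper sketch (hypothetical, plain Python):

```python
import datetime

AUDIT_LOG = []

def audited(tool_name, fn):
    """Wrap a tool handler so every invocation is recorded centrally."""
    def wrapper(**arguments):
        AUDIT_LOG.append({
            "tool": tool_name,
            "arguments": arguments,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return fn(**arguments)
    return wrapper

search = audited("jira.search_tickets",
                 lambda **kw: {"results": [], "query": kw})
search(project="PAYMENTS", status="OPEN")
```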
When MCP Is Actually Necessary
MCP is useful when AI needs to interact with real enterprise systems.
Examples include:
• ticket management
• deployment pipelines
• event systems like Kafka
• internal APIs
• operational tools
These are actions where security and control matter.
When MCP Is Not Required
Many AI workflows do not require MCP at all.
Examples:
• generating code
• reviewing pull requests
• analyzing repositories
• writing documentation
• designing architecture
In these cases Claude already has the context it needs.
Adding MCP would simply add unnecessary complexity.
Key Takeaways
If an enterprise decides to adopt MCP, the architecture should follow a simple rule:
MCP is a shared service, not a team responsibility.
Shared services provide the integrations.
Developers connect once and use the available tools.
The result is a clean model:
Developer + Claude
│
▼
Enterprise MCP Platform
│
▼
Enterprise Systems
This keeps AI powerful while maintaining the same security, governance, and operational discipline expected from any enterprise platform.