From LLM Playground To Enterprise Control Plane
- philippebogaerts8
- Dec 8
- 5 min read

Why AI Gateways, MCP Registries and Kubernetes Are Becoming The Agentic Foundation
A year ago, “doing AI” in most organisations meant experimenting with ChatGPT in the browser or wiring a single LLM API into an internal tool.
Today, the conversation is very different.
We are talking about AI gateways, MCP registries, curated catalogs of agents and skills, and push-to-Kubernetes deployment paths. In other words, we are not just playing with models any more. We are quietly building a new enterprise layer for agentic systems.
This article is a short reflection on that journey and where I think we are heading.
Phase 1 – The LLM Playground
The first phase was simple and chaotic at the same time.
One LLM provider
A couple of internal proof of concepts
Prompt engineering in notebooks and slide decks
Almost no governance
The risk profile was low because the systems were not deeply integrated with real data or production workflows. We were still in the “assistant” mindset, not the “agent” mindset.
Then adoption took off.
Suddenly we had a dozen teams, five different providers, and prompts that were making real API calls to production systems. At that moment, it became obvious that we were missing an entire layer in the stack.
Phase 2 – Glue Code And Shadow Platforms
The second phase came when people started shipping AI features for real.
Backend teams built custom wrappers around each LLM:
central config for API keys
internal HTTP proxies with rate limiting
logging and cost tracking
a bit of red teaming and prompt filtering
On top of that, early adopters started experimenting with agent frameworks and tool calling. Every project invented its own way to:
connect to Jira, GitHub, Salesforce, internal APIs
decide which tool an agent could call
approve or block “dangerous” actions like deployments or data exports
This worked for a while, but it did not scale. Security teams had no single view on what was happening. Platform teams saw duplication everywhere. App teams were stuck reinventing the same pieces.
That pressure is exactly what gave birth to the next phase.
Phase 3 – Building The Enterprise Agentic Layer
The industry has started converging on a few key primitives that sit between agents and infrastructure.
1. AI gateways
Think of these as API gateways for AI.
They sit in front of models and tools, and they give you:
a single endpoint for multiple LLM providers
routing, retries and fallbacks
observability and cost analytics
centralised policies, DLP and guardrails
Instead of every team hard-coding “call provider X with model Y”, you get one AI front door that can evolve without touching every application.
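The routing-and-fallback idea can be sketched in a few lines. This is a toy model of the “front door”, not any real gateway’s API — the provider names and call interface are assumptions:

```python
# Minimal sketch of an AI-gateway "front door": one call site,
# multiple providers behind it, tried in a configured fallback order.
from typing import Callable

def make_gateway(providers: dict[str, Callable[[str], str]],
                 fallback_order: list[str]) -> Callable[[str], str]:
    """Return a single completion function that tries providers in order."""
    def complete(prompt: str) -> str:
        errors = []
        for name in fallback_order:
            try:
                return providers[name](prompt)  # first healthy provider wins
            except Exception as exc:
                errors.append((name, exc))      # record failure, fall through
        raise RuntimeError(f"all providers failed: {errors}")
    return complete
```

A real gateway adds retries with backoff, cost tracking, policy checks and observability at this same choke point — which is precisely why centralising it pays off.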
2. MCP registries and tool catalogs
The Model Context Protocol (MCP) gives us a standard way for agents to talk to tools and data sources.
Once you have a standard, a new need appears: “How do I discover, approve and manage those tools across the organisation?”
That is where MCP registries and tool catalogs come in:
they act as a directory of approved tools, MCP servers and integrations
they attach metadata like owner, scope, data sensitivity and compliance status
they become a central place where security and platform teams can reason about what agents are allowed to do
Some platforms go one step further and let you register agents and skills as first class objects alongside tools, so you can curate higher level capabilities, not just low level endpoints.
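The metadata listed above maps naturally onto a registry entry. A sketch of what such an entry might carry — the field names and values are illustrative, not a real registry schema:

```python
# Sketch of a registry entry for an approved MCP server, carrying the
# governance metadata described above. Field names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class RegistryEntry:
    name: str                 # e.g. "jira-mcp"
    endpoint: str             # where the MCP server is reachable
    owner: str                # accountable team
    scope: str                # what the tool may touch
    data_sensitivity: str     # "public" | "internal" | "confidential"
    compliance_status: str    # "approved" | "pending" | "rejected"

def approved_tools(registry: list[RegistryEntry]) -> list[str]:
    """Names of the entries that security has signed off on."""
    return [e.name for e in registry if e.compliance_status == "approved"]
```

With entries like this in one place, security and platform teams can answer “what are agents allowed to do?” with a query instead of an audit.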
3. Skills and agent marketplaces
As the ecosystem matures, we see:
skills: reusable capability bundles such as “triage an incident”, “prepare a pull request”, “summarise customer tickets”
agents: preconfigured combinations of model, tools, prompts and policies
These are starting to look like app stores for agentic capabilities. Internally, they become building blocks that product teams can assemble instead of starting from scratch.
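Treating skills and agents as first-class objects can be sketched as simple composition: an agent bundles a model, skills and policies, and a skill bundles a prompt with the tools it relies on. All names here are illustrative:

```python
# Sketch: skills and agents as composable, first-class objects.
from dataclasses import dataclass

@dataclass
class Skill:
    name: str                  # e.g. "triage-an-incident"
    tools: list[str]           # MCP tools the skill relies on
    prompt_template: str       # the reusable instruction bundle

@dataclass
class Agent:
    model: str                 # e.g. "some-llm-v2" (placeholder name)
    skills: list[Skill]
    policies: list[str]        # e.g. ["no-data-export"]

    def exposed_tools(self) -> set[str]:
        """Union of tools across all attached skills."""
        return {t for s in self.skills for t in s.tools}
```

Product teams then assemble agents from curated skills rather than wiring up models, prompts and tools from scratch each time.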
Why Kubernetes Keeps Showing Up In This Story
If you look at many of the products in this space, a pattern emerges:
“Here is our AI gateway, MCP registry or agent runtime. By the way, here is the Helm chart to deploy it to your cluster.”
Why is Kubernetes still the default substrate for this layer?
It is already there
Most enterprises have standardised on Kubernetes or a managed variant. They already have:
logging and metrics
identity and RBAC
network policies
GPU nodes and quotas
Putting gateways, MCP tools and registries into that environment is simply the path of least resistance.
It matches the workload
The agentic control plane is a mix of:
stateless APIs and gateways
long running tool servers
background workers and schedulers
sometimes GPU heavy model services
This is precisely the class of workload that Kubernetes was designed to manage.
It aligns with platform engineering
Many organisations are building internal developer platforms on top of Kubernetes. The AI layer then becomes a set of golden paths:
“this is how you register a new MCP tool”
“this is how you deploy an agent sidecar”
“this is how policies are attached”
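A golden path like “this is how you register a new MCP tool” usually starts with an admission check on the tool’s manifest. A hypothetical sketch — the required fields mirror the registry metadata discussed earlier, and none of this is a real platform API:

```python
# Sketch of a "golden path" admission check: before a new MCP tool
# manifest is accepted by the platform, it must carry the governance
# fields the registry requires. Field names are illustrative.
REQUIRED_FIELDS = {"name", "owner", "scope", "data_sensitivity"}
SENSITIVITY_LEVELS = {"public", "internal", "confidential"}

def validate_tool_manifest(manifest: dict) -> list[str]:
    """Return a list of problems; an empty list means the manifest passes."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - manifest.keys())]
    sensitivity = manifest.get("data_sensitivity")
    if sensitivity is not None and sensitivity not in SENSITIVITY_LEVELS:
        problems.append("data_sensitivity must be one of: public, internal, confidential")
    return problems
```

The point of the golden path is that this check runs the same way for every team, instead of living in each project’s private glue code.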
From the outside, it may look like “Kubernetes everywhere”. From the inside, Kubernetes is increasingly just plumbing that disappears behind higher level APIs and templates.
Will Anything Make Kubernetes Redundant?
It depends on who you are.
For small SaaS teams, Kubernetes is already avoidable:
managed container platforms like App Runner, Cloud Run, Azure Container Apps
fully managed agent platforms from the big clouds
You can happily build impressive agentic products without ever touching a cluster.
For enterprises, the picture is different:
years of investment into Kubernetes tooling and skills
centralised security and compliance models built around clusters
hybrid and multi cloud strategies that rely on a common orchestration layer
In that context, Kubernetes is unlikely to disappear soon. What will change is its visibility. It will feel more like the kernel of an operating system and less like something every team must interact with directly. The real competition is not “Kubernetes vs X”. It is raw Kubernetes vs higher level platforms that happen to run on top of it.
Where The Journey Goes Next
If you zoom out, the pattern is very clear. We are moving:
from ad hoc agents to governed agent platforms
from direct model calls to AI gateways and policies
from handwritten integrations to discoverable MCP tools and skills
from “spin up a cluster” to “push this agentic capability into our control plane”
In a few years, I expect most conversations in enterprises to sound less like:
“Which LLM are you using?”
and more like:
“How are your agents integrated into your AI gateway, which skills do they expose, and what policies are enforced on those skills in production?”
The journey is far from over, but the direction is clear. We are not just building clever agents. We are building the enterprise layer that makes those agents trustworthy, observable and safe to connect to real systems.