About This Talk
Agentic AI systems don't just "chat" anymore. They take actions: they call tools, hit services, chain steps together, and sometimes do things that look… unexpected.
When that happens, we usually have lots of opinions and some logs, but not enough ground truth. What did the agent really call? What did it touch? What did it spawn or connect to? And which patterns keep showing up across runs?
What I'll Share
In this talk I'll share an ongoing hands-on experiment: a small Kubernetes sandbox where I run multiple agents and MCP servers built with different SDKs, then observe them with open-source runtime security and telemetry tooling. Think of it as putting agents under a microscope.
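To make the sandbox concrete, here is a minimal sketch of the kind of workload it runs: one agent Deployment in its own namespace, labeled so the network and runtime policies shown later can select it. Every name, image, and port below is a hypothetical placeholder, not my actual lab configuration.

```yaml
# Purely illustrative sandbox workload: one agent pod in an isolated
# namespace. Names, image, and port are placeholders, not the real setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agent
  namespace: agent-sandbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agent
  template:
    metadata:
      labels:
        app: agent                        # selected by the policies shown later
    spec:
      containers:
      - name: agent
        image: example.org/agent:latest   # placeholder image
        ports:
        - containerPort: 8080
```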
I'll combine:
- Cilium for L4/L7 network visibility: which workloads talk to which services, over which protocols
- Tetragon for runtime process signals: execs, outbound connections, and file access, captured in the kernel via eBPF (a sample policy follows this list)
- Elastic to store and correlate everything into a timeline you can actually reason about: flows, requests, processes, and "what happened next"
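As a flavor of the Tetragon side, here is a minimal TracingPolicy, adapted from the upstream Tetragon examples, that hooks the kernel's tcp_connect function so every outbound TCP connection an agent opens surfaces as a runtime event. The policy name is mine; the hook and argument type follow Tetragon's TracingPolicy schema.

```yaml
# Minimal Tetragon TracingPolicy (adapted from upstream examples):
# attach a kprobe to tcp_connect and decode the socket argument,
# so each outbound connection becomes an observable event.
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: observe-tcp-connect
spec:
  kprobes:
  - call: "tcp_connect"    # kernel function, not a syscall
    syscall: false
    args:
    - index: 0
      type: "sock"         # decode arg 0 as a struct sock (addresses, ports)
```

Events from hooks like this, joined with Cilium flow logs in Elastic, are what turn "the agent did something" into "the agent's child process opened a connection to X right after tool call Y."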
Key Insights
Along the way, you also start to see the creative side of agents: how they "find a way" when they're blocked, how they compose tools in novel sequences, and how capabilities effectively expand through combination and iteration, even when nobody explicitly designed for that.
This isn't about claiming perfect detection. It's about learning faster, spotting behavioral patterns, surfacing surprising toolchains, and exploring what practical governance might look like next: registries, gateways, and policy-controlled tool exposure.
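To hint at what "policy-controlled tool exposure" could mean in practice, here is a sketch of a CiliumNetworkPolicy that pins an agent's egress to a single MCP gateway and layers an L7 rule on top. The labels, namespace, port, and the very existence of an mcp-gateway service are all assumptions for illustration, not a prescription.

```yaml
# Hypothetical guardrail: the agent pod may only reach the MCP gateway,
# and only with HTTP POSTs on its API port. All labels/ports are placeholders.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: agent-tool-allowlist
  namespace: agent-sandbox
spec:
  endpointSelector:
    matchLabels:
      app: agent              # the agent workload from the earlier sketch
  egress:
  - toEndpoints:
    - matchLabels:
        app: mcp-gateway      # assumed gateway fronting the tool servers
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: "POST"      # L7 rule: tool invocations only
```

One appealing property of expressing governance this way is that the layer enforcing the allowlist (Cilium) is the same one emitting the flow telemetry, so blocked attempts land in the same Elastic timeline as everything else.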