Why AI Debugging Fails Without Runtime Context

AI coding assistants can read your source code, but they can't see what your code did. Here's why that gap makes them struggle with debugging, and what runtime context actually means.

The problem: AI can read your code, but not your runtime

When you ask an AI assistant to fix a bug, it analyzes your source files — function signatures, component trees, import graphs. But debugging is about what the code does, not what it says.

A race condition doesn't live in your source code. A stale cache doesn't appear in your component tree. An API returning 200 with malformed data won't show up in static analysis. These bugs only exist at runtime.

Without that context, AI assistants guess. You get "try adding a useEffect dependency" or "check if the API response matches the expected schema" — plausible advice that rarely hits the actual root cause.

Most developers bridge this gap manually: copy-pasting console logs into ChatGPT, describing event sequences in prompts, or screenshotting DevTools. You end up spending more time curating context for the AI than actually debugging.

What a runtime context layer does

The idea is straightforward: automatically capture what happens at runtime, correlate events across boundaries, and deliver the result to AI in a structured format. Four steps:

1. Capture

An SDK intercepts runtime events — console logs, fetch/XHR requests, state mutations, component renders, errors — without requiring manual instrumentation.
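To make the idea concrete, here is a minimal capture sketch in TypeScript: wrapping console.log and window.fetch to record events. This is illustrative only, not any particular SDK's implementation; recordEvent and the RuntimeEvent shape are assumptions.

```typescript
// Illustrative capture sketch: wrap console and fetch to record events.
// `recordEvent` and `RuntimeEvent` are hypothetical; a real SDK would
// also cover XHR, state mutations, renders, and uncaught errors.
type RuntimeEvent = {
  kind: "console" | "network";
  timestamp: number;
  detail: Record<string, unknown>;
};

const events: RuntimeEvent[] = [];
const recordEvent = (e: RuntimeEvent) => events.push(e);

const originalLog = console.log;
console.log = (...args: unknown[]) => {
  recordEvent({ kind: "console", timestamp: Date.now(), detail: { args } });
  originalLog(...args);
};

const originalFetch = window.fetch;
window.fetch = async (input, init) => {
  const started = Date.now();
  const response = await originalFetch(input, init);
  recordEvent({
    kind: "network",
    timestamp: started,
    detail: { url: String(input), status: response.status, ms: Date.now() - started },
  });
  return response;
};
```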

2. Correlate

Raw events are noise. A correlation engine connects them by timing, causality, and component relationships. A 500 error on /api/search gets linked to the state update that set results: [], which gets linked to the SearchResults component that re-rendered with empty data.
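A minimal sketch of that linking step, assuming simplified event shapes and a fixed 50ms window (both assumptions for illustration):

```typescript
// Correlation sketch: link network → state → render events that occur
// close together, optionally scoped to the same component. The event
// shape and the 50ms window are illustrative assumptions.
interface Evt {
  kind: "network" | "state" | "render";
  timestamp: number;
  component?: string; // owning component, when known
}

function correlate(events: Evt[], windowMs = 50): Array<[Evt, Evt]> {
  const links: Array<[Evt, Evt]> = [];
  const ordered = [...events].sort((a, b) => a.timestamp - b.timestamp);
  for (let i = 0; i < ordered.length; i++) {
    for (let j = i + 1; j < ordered.length; j++) {
      const a = ordered[i];
      const b = ordered[j];
      if (b.timestamp - a.timestamp > windowMs) break; // outside the window
      const causal =
        (a.kind === "network" && b.kind === "state") ||
        (a.kind === "state" && b.kind === "render");
      const sameTree = !a.component || !b.component || a.component === b.component;
      if (causal && sameTree) links.push([a, b]);
    }
  }
  return links;
}
```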

3. Structure

Correlated events are processed into a structured report — causal chains, anomaly flags, state diffs, render cascades. This format (Limelight calls it "Debug IR") is designed to be compact enough for LLM consumption but detailed enough for root cause analysis.
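The actual schema isn't reproduced here, but such a report might look something like this sketch (field names are illustrative assumptions, not the real Debug IR format):

```typescript
// Illustrative shape for a structured debug report. Field names are
// assumptions, not the actual "Debug IR" schema.
interface DebugReport {
  issue: string; // one-line framing of the problem
  causalChain: Array<{
    step: number;
    event: string;    // e.g. 'GET /api/search → 200 (340ms)'
    anomaly?: string; // e.g. "stale overwrite"
  }>;
  stateDiffs: Array<{ path: string; before: unknown; after: unknown }>;
  renderCascade: string[];  // components re-rendered, in order
  excludedCauses: string[]; // hypotheses the data rules out
}
```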

4. Deliver via MCP

The structured context is exposed to AI editors through an MCP server. The AI can query runtime state on demand — session overviews, error investigations, network logs, render profiles, state snapshots — instead of relying on whatever you manually paste in.
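For a sense of what that looks like, here is a minimal MCP server sketch using the official TypeScript SDK. The tool name and the queryNetworkEvents store are hypothetical; only the SDK calls themselves (McpServer, tool, StdioServerTransport) come from the real library.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical lookup into the capture buffer; a real server would
// filter and summarize the recorded runtime events.
function queryNetworkEvents(urlFilter?: string): unknown[] {
  return [];
}

const server = new McpServer({ name: "runtime-context", version: "0.1.0" });

// Expose one tool the AI editor can call on demand.
server.tool(
  "get_network_log",
  { urlFilter: z.string().optional() },
  async ({ urlFilter }) => ({
    content: [{ type: "text", text: JSON.stringify(queryNetworkEvents(urlFilter)) }],
  })
);

await server.connect(new StdioServerTransport());
```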

Example: debugging stale search results

You have a search screen. Users report that results sometimes show data from a previous query. You can't reproduce it consistently.

Without runtime context: You add console.log everywhere, try to reproduce by typing quickly, eventually catch it once, paste the logs into your AI assistant, and get a plausible but unconfirmed suggestion to add AbortController.

With runtime context: You reproduce the bug once. The AI queries the runtime data via MCP and gets this:

Causal Chain:
1. GET /api/search?q="react" → 200 (340ms)
2. GET /api/search?q="react native" → 200 (120ms)
3. State: searchResults ← response from #2 (correct)
4. State: searchResults ← response from #1 (OVERWRITES #3 — stale)

Root Cause: Race condition — request #1 resolved after
request #2 because #1 had higher latency (340ms vs 120ms).
The state setter does not check request ordering.

Now the AI has the complete causal chain. It can produce a targeted fix with confidence because it sees exactly what happened — not a hand-picked fragment of what you thought might be relevant.
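The targeted fix here, sketched below, is the AbortController approach the assistant guessed at earlier, now confirmed by the causal chain: cancel the in-flight request whenever a newer query starts. The SearchResult type and endpoint shape are assumptions; tracking a request sequence number in the state setter would work equally well.

```typescript
import { useEffect, useState } from "react";

// Sketch of the fix: abort the previous request when the query changes,
// so a slow stale response can never overwrite a newer one.
// `SearchResult` and the endpoint shape are illustrative assumptions.
type SearchResult = { id: string; title: string };

function useSearch(query: string) {
  const [results, setResults] = useState<SearchResult[]>([]);

  useEffect(() => {
    const controller = new AbortController();
    fetch(`/api/search?q=${encodeURIComponent(query)}`, { signal: controller.signal })
      .then((res) => res.json())
      .then((data: SearchResult[]) => setResults(data))
      .catch((err) => {
        if (err.name !== "AbortError") throw err; // aborts are expected
      });
    // Cleanup runs before the next effect: cancels the now-stale request.
    return () => controller.abort();
  }, [query]);

  return results;
}
```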

Key concepts

Correlation signals

A correlation engine typically uses three signals to connect events: temporal proximity (events within a tight time window), causal links (a network response triggers a state update triggers a render), and component relationships (events from the same component tree or shared state slice).
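As a toy sketch, the three signals could be combined into a single relatedness score (the weights and event shape are assumptions):

```typescript
// Toy scoring sketch combining the three correlation signals.
// Field names and weights are illustrative assumptions.
interface SignalEvent {
  id: string;
  timestamp: number;
  causeId?: string;   // set when the producer knows what triggered it
  component?: string; // owning component, when known
}

function relatednessScore(a: SignalEvent, b: SignalEvent): number {
  let score = 0;
  if (Math.abs(b.timestamp - a.timestamp) < 50) score += 1;     // temporal proximity
  if (b.causeId === a.id) score += 2;                           // explicit causal link
  if (!!a.component && a.component === b.component) score += 1; // shared component tree
  return score;
}
```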

Pattern detection

Beyond individual event chains, the engine can detect recurring anti-patterns automatically: N+1 query patterns, render loops, race conditions from out-of-order API responses, and state overwrites from competing updates.
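One of these detectors, out-of-order responses, reduces to a simple check: a request that started earlier but resolved later than a newer request to the same endpoint. A sketch, assuming a simplified request event shape:

```typescript
// Detector sketch: flag request pairs where the older request resolved
// after the newer one (the stale-overwrite race from the example above).
// The event shape is an illustrative assumption.
interface RequestEvent {
  endpoint: string;
  startedAt: number;
  resolvedAt: number;
}

function detectOutOfOrder(requests: RequestEvent[]): Array<[RequestEvent, RequestEvent]> {
  const flagged: Array<[RequestEvent, RequestEvent]> = [];
  for (const older of requests) {
    for (const newer of requests) {
      if (
        older !== newer &&
        older.endpoint === newer.endpoint &&
        older.startedAt < newer.startedAt && // older began first...
        older.resolvedAt > newer.resolvedAt  // ...but finished last
      ) {
        flagged.push([older, newer]);
      }
    }
  }
  return flagged;
}
```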

Structured output for LLMs

Raw logs are expensive in tokens and hard for LLMs to parse. Structuring the output — with issue framing, causal chains, state diffs, violations, excluded causes, and render cascades — makes the context token-efficient and actionable. Sensitive values are redacted and replaced with type descriptions.
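Redaction, for instance, can be a recursive walk that swaps sensitive values for type descriptions. A sketch; the key pattern and output format are assumptions:

```typescript
// Redaction sketch: replace values under sensitive-looking keys with a
// type description. The key pattern and format are assumptions.
const SENSITIVE_KEYS = /token|password|secret|authorization|email/i;

function redact(value: unknown, key = ""): unknown {
  if (key && SENSITIVE_KEYS.test(key)) return `<redacted ${typeof value}>`;
  if (Array.isArray(value)) return value.map((v) => redact(v));
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, redact(v, k)])
    );
  }
  return value;
}

// e.g. redact({ user: { email: "a@b.co", name: "Ada" } })
//   → { user: { email: "<redacted string>", name: "Ada" } }
```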

MCP as the delivery mechanism

MCP (Model Context Protocol) is an open standard for connecting AI models to external data sources. An MCP server exposes tools the AI can call on demand — querying session state, investigating errors, filtering network requests, pulling render profiles. This is the bridge between your app's runtime and your AI editor.

FAQ

What is runtime context?

Runtime context is the data your app produces while running — console logs, network requests, state changes, component renders, and errors. Source code tells you what the app should do. Runtime context tells you what it actually did.

Can't I just paste logs into the AI myself?

You can, but you're manually selecting a small slice of uncorrelated data. You might miss the one event that matters. Automated capture gets everything, and correlation connects events across boundaries (network → state → render) so you see the full causal chain.

Does a runtime context layer replace browser DevTools?

No. DevTools are inspection tools — they show you what's happening right now. A runtime context layer adds correlation across boundaries and delivers that context to AI assistants via MCP, which DevTools can't do. They're complementary.

What is MCP?

MCP (Model Context Protocol) is an open standard for connecting AI models to external data sources. An MCP server exposes runtime data — correlation graphs, structured reports, state snapshots — directly to AI editors like Cursor, Claude Code, GitHub Copilot, and Windsurf.
