AI coding assistants can read your source code, but they can't see what your code did. That gap is why they struggle with debugging, and it's what "runtime context" actually means.
When you ask an AI assistant to fix a bug, it analyzes your source files — function signatures, component trees, import graphs. But debugging is about what the code does, not what it says.
A race condition doesn't live in your source code. A stale cache doesn't appear in your component tree. An API returning 200 with malformed data won't show up in static analysis. These bugs only exist at runtime.
Without that context, AI assistants guess. You get "try adding a useEffect dependency" or "check if the API response matches the expected schema" — plausible advice that rarely hits the actual root cause.
Most developers bridge this gap manually: copy-pasting console logs into ChatGPT, describing event sequences in prompts, or screenshotting DevTools. You end up spending more time curating context for the AI than actually debugging.
The idea is straightforward: automatically capture what happens at runtime, correlate events across boundaries, and deliver the result to the AI in a structured format. It breaks down into four steps.
An SDK intercepts runtime events — console logs, fetch/XHR requests, state mutations, component renders, errors — without requiring manual instrumentation.
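To make the capture step concrete, here is a minimal browser-side sketch, not Limelight's actual SDK: it patches console.log and fetch and pushes everything into an invented recordEvent sink. A real SDK would also cover XHR, errors, state, and renders, plus buffering and sampling.

```typescript
// Hypothetical event sink; a real SDK would buffer, sample, and batch these.
type RuntimeEvent = {
  kind: "console" | "fetch";
  timestamp: number;
  detail: Record<string, unknown>;
};

const events: RuntimeEvent[] = [];
const recordEvent = (event: RuntimeEvent) => {
  events.push(event);
};

// Capture console.log calls without changing their normal behavior.
const originalLog = console.log;
console.log = (...args: unknown[]) => {
  recordEvent({ kind: "console", timestamp: Date.now(), detail: { args } });
  originalLog(...args);
};

// Capture every fetch as a request/response event with timing.
const originalFetch = globalThis.fetch;
globalThis.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const started = Date.now();
  const response = await originalFetch(input, init);
  recordEvent({
    kind: "fetch",
    timestamp: started,
    detail: {
      url: input instanceof Request ? input.url : String(input),
      status: response.status,
      durationMs: Date.now() - started,
    },
  });
  return response;
};
```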
Raw events are noise. A correlation engine connects them by timing, causality, and component relationship. A 500 error on /api/search gets linked to the state update that set results: [], which gets linked to the SearchResults component that re-rendered with empty data.
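As a toy illustration of that linking step (the event shapes are invented for this sketch), each captured event can carry a reference to the event that triggered it, and the engine walks those references backwards from a symptom:

```typescript
// Illustrative event shape, not the real SDK's types.
interface CapturedEvent {
  id: string;
  kind: "network" | "state" | "render";
  timestamp: number;
  component?: string; // owning component, when known
  causeId?: string;   // id of the event that directly triggered this one
}

// Walk causeId links backwards from a symptom (e.g. an empty render)
// to reconstruct the chain: network response -> state update -> render.
function causalChain(symptom: CapturedEvent, all: CapturedEvent[]): CapturedEvent[] {
  const byId = new Map<string, CapturedEvent>(all.map((e) => [e.id, e]));
  const chain: CapturedEvent[] = [symptom];
  let current = symptom;
  while (current.causeId && byId.has(current.causeId)) {
    current = byId.get(current.causeId)!;
    chain.unshift(current);
  }
  return chain; // oldest cause first, symptom last
}
```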
Correlated events are processed into a structured report — causal chains, anomaly flags, state diffs, render cascades. This format (Limelight calls it "Debug IR") is designed to be compact enough for LLM consumption but detailed enough for root cause analysis.
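The exact schema isn't shown here, but as a rough mental model, a report in that spirit might be shaped something like this (all field names are illustrative):

```typescript
// Illustrative shape only; the actual Debug IR format may differ.
interface DebugReport {
  issue: string;                        // one-line framing of the problem
  causalChain: Array<{
    step: number;
    event: string;                      // e.g. 'GET /api/search -> 200 (340ms)'
    anomaly?: string;                   // e.g. 'resolved out of order'
  }>;
  stateDiffs: Array<{ path: string; before: unknown; after: unknown }>;
  renderCascade: string[];              // components re-rendered, in order
  excludedCauses: string[];             // hypotheses ruled out by the data
}
```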
The structured context is exposed to AI editors through an MCP server. The AI can query runtime state on demand — session overviews, error investigations, network logs, render profiles, state snapshots — instead of relying on whatever you manually paste in.
Take a concrete example. You have a search screen. Users report that results sometimes show data from a previous query. You can't reproduce it consistently.
Without runtime context: You add console.log everywhere, try to reproduce by typing quickly, eventually catch it once, paste the logs into your AI assistant, and get a plausible but unconfirmed suggestion to add AbortController.
With runtime context: You reproduce the bug once. The AI queries the runtime data via MCP and gets this:
Causal Chain:
1. GET /api/search?q="react" → 200 (340ms)
2. GET /api/search?q="react native" → 200 (120ms)
3. State: searchResults ← response from #2 (correct)
4. State: searchResults ← response from #1 (OVERWRITES #3 — stale)
Root Cause: Race condition — request #1 resolved after
request #2 because #1 had higher latency (340ms vs 120ms).
The state setter does not check request ordering.

Now the AI has the complete causal chain. It can produce a targeted fix with confidence because it sees exactly what happened — not a hand-picked fragment of what you thought might be relevant.
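For this particular bug, the targeted fix is a guard against out-of-order responses. A minimal sketch using a request counter (the AbortController approach mentioned earlier works just as well; the hook and endpoint here are invented for illustration):

```typescript
import { useRef, useState } from "react";

// Ignore responses from requests that are no longer the latest one.
function useSearch() {
  const [results, setResults] = useState<string[]>([]);
  const latestRequest = useRef(0);

  async function search(query: string) {
    const requestId = ++latestRequest.current;
    const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
    const data: string[] = await response.json();
    // A stale response (an older request resolving late) is dropped here.
    if (requestId === latestRequest.current) {
      setResults(data);
    }
  }

  return { results, search };
}
```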
A correlation engine typically uses three signals to connect events: temporal proximity (events within a tight time window), causal links (a network response triggers a state update triggers a render), and component relationships (events from the same component tree or shared state slice).
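One plausible way to combine those signals is a simple relatedness score. The weights and the 100ms window below are invented for illustration:

```typescript
// Minimal event shape for this sketch (invented field names).
type Ev = { id: string; timestamp: number; component?: string; causeId?: string };

// Hypothetical score of how likely two events belong to the same chain,
// combining the three signals described above.
function relatedness(a: Ev, b: Ev): number {
  let score = 0;
  if (Math.abs(a.timestamp - b.timestamp) < 100) score += 1;                 // temporal proximity
  if (a.causeId === b.id || b.causeId === a.id) score += 2;                  // explicit causal link
  if (a.component !== undefined && a.component === b.component) score += 1;  // same component
  return score;
}
```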
Beyond individual event chains, the engine can detect recurring anti-patterns automatically: N+1 query patterns, render loops, race conditions from out-of-order API responses, and state overwrites from competing updates.
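As an example of what one such detector might look like, here is a sketch that flags out-of-order responses to the same endpoint (the event shape is invented):

```typescript
interface RequestEvent {
  endpoint: string;   // path without query, e.g. '/api/search'
  issuedAt: number;
  resolvedAt: number;
}

// Flag pairs where a later request resolved before an earlier one to the
// same endpoint: the earlier request's late response can overwrite newer state.
function findOutOfOrderResponses(requests: RequestEvent[]): [RequestEvent, RequestEvent][] {
  const flagged: [RequestEvent, RequestEvent][] = [];
  const byEndpoint = new Map<string, RequestEvent[]>();
  for (const r of requests) {
    const group = byEndpoint.get(r.endpoint) ?? [];
    group.push(r);
    byEndpoint.set(r.endpoint, group);
  }
  for (const group of byEndpoint.values()) {
    group.sort((a, b) => a.issuedAt - b.issuedAt);
    for (let i = 1; i < group.length; i++) {
      if (group[i].resolvedAt < group[i - 1].resolvedAt) {
        flagged.push([group[i - 1], group[i]]);
      }
    }
  }
  return flagged;
}
```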
Raw logs are expensive in tokens and hard for LLMs to parse. Structuring the output — with issue framing, causal chains, state diffs, violations, excluded causes, and render cascades — makes the context token-efficient and actionable. Sensitive values are redacted and replaced with type descriptions.
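Redaction can be as simple as walking the captured values and swapping anything sensitive-looking for a description of its type. A rough sketch, with an invented key pattern:

```typescript
// Keys that look sensitive get their values replaced with a type description,
// so the report stays useful without leaking data.
const SENSITIVE_KEYS = /password|token|secret|authorization|cookie/i;

function redact(value: unknown, key = ""): unknown {
  if (SENSITIVE_KEYS.test(key)) return `<redacted ${typeof value}>`;
  if (Array.isArray(value)) return value.map((v) => redact(v));
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, redact(v, k)])
    );
  }
  return value;
}

// Example: { user: 'ada', token: 'abc123' } -> { user: 'ada', token: '<redacted string>' }
```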
MCP (Model Context Protocol) is an open standard for connecting AI models to external data sources. An MCP server exposes tools the AI can call on demand — querying session state, investigating errors, filtering network requests, pulling render profiles. This is the bridge between your app's runtime and your AI editor.
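A minimal sketch of such a server using the TypeScript MCP SDK (the tool name, event accessor, and response shape are invented for this example; consult the SDK docs for exact signatures):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stand-in for the real session store; returns captured network events.
const getSessionEvents = (): Array<{ url: string; status: number }> => [];

const server = new McpServer({ name: "runtime-context", version: "0.1.0" });

// Hypothetical tool: let the AI pull failed network requests on demand.
server.tool(
  "get_failed_requests",
  { minStatus: z.number() },
  async ({ minStatus }) => {
    const failures = getSessionEvents().filter((e) => e.status >= minStatus);
    return {
      content: [{ type: "text", text: JSON.stringify(failures, null, 2) }],
    };
  }
);

await server.connect(new StdioServerTransport());
```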