Context
This is the first post in the Medium blog series tracked in #51. Topic: trace reconstruction + DAG rendering — the highest-value, lowest-barrier entry point for BQ Agent Analytics users.
Goal of the post: convert ADK users who have just installed the BigQuery Agent Analytics plugin into active SDK users, by showing the fastest possible "aha" moment (raw agent_events rows → readable conversation tree in one line of Python).
Title candidates
- "Your BigQuery Agent Analytics table is a graph. Here's how to see it." ← recommended
- "From event rows to conversation DAGs: debugging ADK agents with BigQuery Agent Analytics"
- "I stopped reading agent logs. Here's what I use instead."
Recommendation: #1 — concrete, a specific curiosity gap, and it promises a clear outcome.
Target audience
ADK agent developers and ML platform engineers who:
- Have installed the BigQuery Agent Analytics plugin
- Have opened the agent_events table in the BigQuery console
- Have bounced off its complexity at least once
- Want a faster debugging / observability loop than writing CTEs against raw events
Structure (Medium best practices)
Target length: 1,400–1,800 words (6–8 min read — Medium's sweet spot). One lede image, 3–4 inline images (code output, DAG screenshot, INFORMATION_SCHEMA query result), one closing image.
H1: Your BigQuery Agent Analytics table is a graph. Here's how to see it.
Sub: A 10-minute tour of turning raw agent_events rows into readable traces with the BQ Agent Analytics SDK.
1. Hook (80 words)
- Real screenshot: a 47-row BQ query result of agent_events
- "This is what production looks like to most ADK teams. Good luck
debugging a multi-turn tool-call failure from that."
2. The problem in one paragraph (120 words)
- ADK plugin logs events. Events are flat. Conversations are trees.
- Before: write SQL CTEs to JOIN events by span_id, unnest JSON, pray.
- After: one SDK call.
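To make the before/after contrast land, the "before" could show the CTE readers actually write today. A hedged sketch (the table path, column names, and JSON layout are assumptions for illustration, not the plugin's actual schema):

```sql
-- The "before": hand-stitching one session's flat events into
-- parent/child pairs. Schema details here are assumed.
WITH events AS (
  SELECT
    span_id,
    parent_span_id,
    event_type,
    JSON_VALUE(content, '$.tool.name') AS tool_name
  FROM `my-project.my_dataset.agent_events`
  WHERE session_id = @session_id
)
SELECT
  child.event_type,
  child.tool_name,
  parent.event_type AS parent_type
FROM events AS child
LEFT JOIN events AS parent
  ON child.parent_span_id = parent.span_id
ORDER BY child.span_id;
```

The point of the screenshot is that even this only gets you pairs, not a tree — which sets up the one-line SDK call.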
3. Setup in 30 seconds (150 words)
- `pip install bigquery-agent-analytics`
- `bq-agent-sdk doctor --project-id=... --dataset-id=...`
- Screenshot of doctor output
4. The demo (600 words, the core)
- Real scenario: "The Calendar-Assistant bug" (narrative framing)
- User asks "book me a 1:1 with Priya next Tuesday"
- Agent fails — plain SQL shows the rows, nothing obvious
- One line: client.get_session_trace(id).render()
- ASCII tree reveals: tool_call search_contacts(name="Priya") → returned 3 hits → agent picked wrong one → booked wrong meeting
- trace.tool_calls and trace.error_spans → extract structured data
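For the draft, the reconstruction the SDK performs can be sketched in a few lines: fold flat span rows (span_id / parent_span_id) into a tree and print it. Row shapes and field names below are illustrative stand-ins, not the plugin's real schema or the SDK's real implementation:

```python
# Minimal sketch of trace reconstruction: flat agent_events-style rows
# folded into an indented tree. Field names are assumed for illustration.
from collections import defaultdict

rows = [
    {"span_id": "s1", "parent_span_id": None, "kind": "user_message",
     "summary": "book me a 1:1 with Priya next Tuesday"},
    {"span_id": "s2", "parent_span_id": "s1", "kind": "tool_call",
     "summary": 'search_contacts(name="Priya") -> 3 hits'},
    {"span_id": "s3", "parent_span_id": "s1", "kind": "tool_call",
     "summary": 'book_meeting(contact_id="priya-2")  # wrong Priya'},
]

def render(rows):
    # Index children by parent, then walk the tree depth-first.
    children = defaultdict(list)
    for r in rows:
        children[r["parent_span_id"]].append(r)
    lines = []
    def walk(parent, depth):
        for r in children[parent]:
            lines.append("  " * depth + f'{r["kind"]}: {r["summary"]}')
            walk(r["span_id"], depth + 1)
    walk(None, 0)
    return "\n".join(lines)

print(render(rows))
```

This is the shape of the "aha": the wrong-Priya booking is invisible in the flat rows but obvious in the indented tree.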
5. Going deeper (250 words)
- list_traces(filter=...) to find similar failures
- Feed into post #2 (code-based eval in CI) — teaser, cross-link
6. What happens behind the scenes (200 words)
- "Every query the SDK runs is labeled — here's the one
INFORMATION_SCHEMA query that shows you exactly what it cost."
- Screenshot of the INFORMATION_SCHEMA.JOBS query output
- Bridges to the telemetry work (blocked on the companion issue for label emission); useful for eng-leader readers evaluating the SDK
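A hedged sketch of what that INFORMATION_SCHEMA query could look like (the label key `bq_agent_sdk_method` is a placeholder; the real key ships with the query-labeling work):

```sql
-- Cost attribution for SDK-issued queries via job labels.
-- The label key below is a placeholder, not the shipped key.
SELECT
  job_id,
  total_bytes_billed,
  (SELECT value FROM UNNEST(labels)
   WHERE key = 'bq_agent_sdk_method') AS sdk_method
FROM `region-us`.INFORMATION_SCHEMA.JOBS
WHERE EXISTS (SELECT 1 FROM UNNEST(labels)
              WHERE key = 'bq_agent_sdk_method')
ORDER BY creation_time DESC
LIMIT 20;
```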
7. Try it (100 words)
- Link to plugin quickstart + SDK GitHub + post #2 preview
- One-liner: "Install the plugin today, see your first DAG in 10 min"
Demo requirements
- Real ADK agent — build a small Calendar-Assistant ADK agent with 3 tools: search_contacts, get_calendar_availability, book_meeting. Runs on real Gemini. Seeds a real failure mode (ambiguous contact name).
- Real BQ data — deploy the plugin to a sandbox GCP project, run 20–30 real sessions (some good, some failing), let the plugin log everything to agent_events.
- Real scenario narrative — the "Priya ambiguity" bug is concrete, reproducible, and representative of a bug an agent builder would actually debug.
SDK improvements to ship alongside the post
Two code improvements that would make the post stronger. Worth PRing before publication:
Trace.render() output polish — audit for ASCII tree alignment with Unicode tool names, truncation of long prompts (>200 chars → …), optional color codes for errors in TTY mode (opt-in via render(color=True)). Current output works; this is about making screenshots camera-ready.
trace.error_spans convenience properties — verify error_message, tool_name, parent_span_id are first-class attrs (not requiring .raw_event["content"] digging). If not, add them.
Small, low-risk, makes the demo smoother and the reader's copy-paste experience reliable.
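The error_spans improvement could be a thin property wrapper, so readers never dig through raw_event["content"]. A sketch under assumed names — the raw_event layout and attribute names below are illustrative, not the plugin's actual schema:

```python
# Sketch of the proposed convenience attrs on an error span.
# The raw_event layout here is assumed for illustration.
class ErrorSpan:
    def __init__(self, raw_event):
        self.raw_event = raw_event

    @property
    def error_message(self):
        return self.raw_event["content"].get("error", {}).get("message")

    @property
    def tool_name(self):
        return self.raw_event["content"].get("tool", {}).get("name")

    @property
    def parent_span_id(self):
        return self.raw_event.get("parent_span_id")

span = ErrorSpan({
    "parent_span_id": "s1",
    "content": {
        "error": {"message": "ambiguous contact"},
        "tool": {"name": "search_contacts"},
    },
})
print(span.tool_name, span.error_message)
```

Properties (rather than eager extraction at construction) keep raw_event as the source of truth while making the copy-paste path in the post one attribute access.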
Dependency
Post section 6 ("What happens behind the scenes") depends on SDK query labeling — tracked in the companion issue. The post can still ship without it by removing section 6, but it's a stronger story with the INFORMATION_SCHEMA screenshot included.
Medium-specific tactics
- Publication: Submit to The Generator (tech audience, 50k+ subs) or Google Cloud Community (primary audience). The Google Cloud handle typically gets better reach for GCP content.
- Opening image: A stylized graphic of "rows → tree" — not a stock photo. Hire a designer or prompt Nano Banana Pro.
- Code blocks: Use Medium's embedded Gist renderer, not inline code blocks, for anything >5 lines. Gists render with syntax highlighting and a "Open in GitHub" link — doubles as SDK backlink.
- Callouts: Use blockquote for one key insight per section. Medium readers skim.
- Tags: bigquery, ai-agents, google-cloud, python, observability (Medium max is 5).
- Links: Canonical URL on the Google Cloud dev blog if available, to preserve SEO juice.
- Closing CTA: Two actions — (1) install the plugin (primary), (2) star the SDK repo (secondary). Skip "follow me on Medium" — low-conversion.
Timeline
- Week 1: Ship SDK polish (render() + error_spans convenience props)
- Week 1–2: Ship query labeling Phase 0 + Phase 1 (enables section 6 screenshot)
- Week 2: Build the Calendar-Assistant demo agent, generate real traces
- Week 2: Draft the post in Google Docs, iterate on hook and screenshots
- Week 3: Internal review (Google Cloud DevRel), publish
Open questions
- Publication target — Google Cloud publication, The Generator, or personal Medium under Google Cloud co-promotion?
- Do we have an existing real ADK agent with trace data we can reuse, or do we build the Calendar-Assistant demo from scratch? (The latter adds 2–3 days.)
- Who owns internal review and co-promotion amplification?
- Should the companion query-labeling work get its own launch post later, tying in a "how we track SDK adoption" engineering story?
Related