AI-assisted meta-analysis pipeline. This file is auto-loaded by Claude Code.
First-time setup? → Run `bash setup.sh`, then `bash verify_environment.sh` (see ENVIRONMENT_QUICK_START.md)
New project? → Say "brainstorm" or "help me find a topic" (see ma-topic-intake)
Have TOPIC.txt? → Say "start" or "continue" (see ma-end-to-end)
At Stage 06? → Say "complete manuscript" (see ma-manuscript-quarto)
Want to see an example? → Check projects/ici-breast-cancer/ (99% complete meta-analysis)
Want to validate the workflow? → See metadat Validation
Location: projects/ici-breast-cancer/
A real, 99% complete meta-analysis on immune checkpoint inhibitors in triple-negative breast cancer (TNBC):
Key Metrics:
- 5 RCTs, N=2,402 patients
- Primary outcome: RR 1.26 (95% CI 1.16-1.37), p=0.0015, ⊕⊕⊕⊕ HIGH quality
- Manuscript: 4,921 words (compliant with Lancet Oncology, JAMA Oncology)
- Time invested: ~14 hours (vs 100+ hours manual)
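A reported CI doubles as a sanity check: for a ratio measure, the standard error of the log effect can be back-calculated from a 95% CI that is symmetric on the log scale. A minimal sketch using the primary-outcome CI above (the helper name is illustrative, not part of the toolkit):

```python
import math

def se_from_ci(lower: float, upper: float, z: float = 1.96) -> float:
    """SE of a log ratio, back-calculated from a 95% CI assumed
    symmetric on the log scale: (ln(upper) - ln(lower)) / (2 * z)."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

# Primary outcome above: RR 1.26 (95% CI 1.16-1.37)
print(round(se_from_ci(1.16, 1.37), 4))  # ~0.0424
```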
Quick Tour:
- `projects/ici-breast-cancer/README.md` - Complete navigation guide
- `projects/ici-breast-cancer/00_overview/FINAL_PROJECT_SUMMARY.md` - All findings (415 lines)
- `projects/ici-breast-cancer/07_manuscript/` - Full manuscript (5 sections + 7 tables)
- `projects/ici-breast-cancer/06_analysis/` - All R scripts + results
Use this as a template when starting your own meta-analysis.
All projects now live in the `projects/<project-name>/` directory.
meta-pipe/
├── ma-*/ # Framework code modules (each has SKILL.md)
├── docs/archive/ # Archived documentation
├── tooling/ # Shared tools and scripts
└── projects/ # All your meta-analysis projects
├── legacy/ # Historical data (migrated 2026-02-08)
├── ici-breast-cancer/ # Example: complete meta-analysis
└── your-project/ # Your new projects go here
├── 01_protocol/
├── 02_search/
├── ...
└── TOPIC.txt
When running commands: Replace <project-name> with your actual project name.
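For illustration only, the stage layout above can be scaffolded like this (`init_project.py` is the real entry point; this `scaffold` helper is a hypothetical sketch, not the actual script):

```python
from pathlib import Path
import tempfile

# Stage directories referenced throughout this README.
STAGES = [
    "01_protocol", "02_search", "03_screening", "04_fulltext",
    "05_extraction", "06_analysis", "07_manuscript", "08_reviews", "09_qa",
]

def scaffold(root: Path, name: str) -> Path:
    """Create projects/<name>/ with one directory per stage plus TOPIC.txt."""
    project = root / "projects" / name
    for stage in STAGES:
        (project / stage).mkdir(parents=True, exist_ok=True)
    (project / "TOPIC.txt").touch()
    return project

with tempfile.TemporaryDirectory() as tmp:
    project = scaffold(Path(tmp), "your-project")
    print(sorted(p.name for p in project.iterdir())[:3])  # ['01_protocol', '02_search', '03_screening']
```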
Each stage has a dedicated skill with commands and workflow guidance
| Stage | Skill | Key Tasks | Invoke |
|---|---|---|---|
| 00 | /ma-topic-intake | Brainstorming, feasibility checks | /brainstorm or use skill |
| 01-02 | /ma-search-bibliography | PROSPERO, search, dedupe | Use skill for detailed commands |
| 03 | /ma-screening-quality | Dual-review screening, kappa | Use skill for detailed commands |
| 03b | /ma-screening-quality | Analysis type confirmation gate | Confirm NMA vs pairwise (Step 8) |
| 04 | /ma-fulltext-management | PDF retrieval, Unpaywall | Use skill for detailed commands |
| 04b | /ma-fulltext-management | Full-text eligibility screening | `ai_screen.py --stage fulltext` |
| 05 | /ma-data-extraction | Data extraction, RoB assessment | Use skill for detailed commands |
| 06a | /ma-meta-analysis | Pairwise MA (R scripts 01-12) | Use skill for detailed commands |
| 06b | /ma-network-meta-analysis | NMA (R scripts nma_01-10) | Use skill for detailed commands |
| 07 | /ma-manuscript-quarto | Manuscript assembly, rendering | Use skill for detailed commands |
| 08 | /ma-peer-review | GRADE assessment, SoF table | Use skill for detailed commands |
| 09 | /ma-publication-quality | QA, overclaim audit, readiness | Use skill for detailed commands |
| 10 | /ma-submission-prep | PROSPERO, final checks, submit | Use skill for detailed commands |
Orchestration: /ma-end-to-end - Complete workflow management | /ma-agent-teams - Agent team coordination
Share your work: /post-to-discussion - Post your completed project to GitHub Discussions with figures and results
Note: Skills are invoked using the Skill tool. Each skill contains both workflow guidance and complete command references.
Coordinate multiple Claude Code instances for parallel meta-analysis pipeline work.
- "Create a team for project X" → Full pipeline team (all stages)
- "Parallel screen project X" → Dual-review screening team only
- "Analysis team for project X" → Statistician + manuscript writer + QA auditor
- Lead reads the /ma-agent-teams skill for the orchestration playbook
- Teammates are spawned with role-specific prompts from `ma-agent-teams/prompts/`
- Shared task list tracks dependencies; hooks enforce quality gates
- Each teammate owns specific directories (no cross-teammate file writes)

Spawn helper: `uv run tooling/python/team_spawn_helper.py --project <project-name> --role <role> [--list]`

Requirements:
- Claude Code v2.1.32+
- Enabled via `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` in `.claude/settings.local.json`
| Role | Stages | File Ownership |
|---|---|---|
| protocol-architect | 00-01 | 01_protocol/** |
| search-specialist | 02 | 02_search/** |
| screener-a / screener-b | 03 | 03_screening/** |
| fulltext-manager | 04 | 04_fulltext/** |
| data-extractor | 05 | 05_extraction/** |
| statistician | 06 | 06_analysis/** |
| manuscript-writer | 07 | 07_manuscript/** |
| qa-auditor | 08-09 | 08_reviews/**, 09_qa/** |
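The ownership rule ("no cross-teammate file writes") amounts to a prefix check against the table above. A hypothetical sketch (`may_write` is not part of the toolkit):

```python
from pathlib import PurePosixPath

# Mirrors the role table above; qa-auditor owns two trees.
OWNERSHIP = {
    "protocol-architect": {"01_protocol"},
    "search-specialist": {"02_search"},
    "screener-a": {"03_screening"},
    "screener-b": {"03_screening"},
    "fulltext-manager": {"04_fulltext"},
    "data-extractor": {"05_extraction"},
    "statistician": {"06_analysis"},
    "manuscript-writer": {"07_manuscript"},
    "qa-auditor": {"08_reviews", "09_qa"},
}

def may_write(role: str, path: str) -> bool:
    """True if a project-relative path sits inside the role's owned tree."""
    parts = PurePosixPath(path).parts
    return bool(parts) and parts[0] in OWNERSHIP.get(role, set())

print(may_write("statistician", "06_analysis/01_pool.R"))    # True
print(may_write("statistician", "07_manuscript/draft.qmd"))  # False
```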
Why: Prevents 10-40 hours of wasted work on unanswerable research questions.
Then proceed:
- Ask for project name if not already specified
- Read `projects/<project-name>/TOPIC.txt` to understand the research question
- Check project state: which stages are complete in `projects/<project-name>/`?
- Ask only essential questions before proceeding:
  - Databases to search: PubMed + Scopus are the mandatory minimum (PRISMA requires ≥2 databases); optionally add Embase, Cochrane
  - Date range limits?
  - Language restrictions?
  - Study design (RCTs only, or include observational?)
  - Preliminary analysis type (two-stage decision, confirmed after screening):
    - If TOPIC.txt describes ≥3 treatments → `analysis_type.preliminary: nma_candidate` (provisional)
    - If TOPIC.txt describes 2 treatments → `analysis_type.preliminary: pairwise`
    - Copy `analysis-type-decision-template.md` → `01_protocol/analysis-type-decision.md` and fill Stage 1; `nma_candidate` requires confirmation after screening (see Step 3b in end-to-end)
- Initialize the project if not already done:
  ```
  cd /Users/htlin/meta-pipe
  uv run tooling/python/init_project.py --name <project-name>
  ```
- Execute pipeline stages in order, validating at each step; all outputs live under `projects/<project-name>/`, and all commands are in module-specific SKILL.md files.
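The preliminary analysis-type decision reduces to a count of distinct treatments. A minimal sketch of that rule (the treatment names are illustrative):

```python
def preliminary_analysis_type(treatments: list[str]) -> str:
    """Stage-1 rule: >=3 distinct treatments -> nma_candidate (provisional,
    confirmed after screening); exactly 2 -> pairwise."""
    n = len(set(treatments))
    if n >= 3:
        return "nma_candidate"
    if n == 2:
        return "pairwise"
    raise ValueError("a comparative meta-analysis needs at least two treatments")

print(preliminary_analysis_type(["ICI + chemo", "chemo", "ICI alone"]))  # nma_candidate
print(preliminary_analysis_type(["ICI + chemo", "chemo"]))               # pairwise
```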
See ma-end-to-end/SKILL.md for detailed resume behavior.
Quick summary:

```bash
cd /Users/htlin/meta-pipe/tooling/python

# 1. Check project status
uv run project_status.py --project <project-name> --verbose

# 2. Show last session summary
uv run session_log.py --project <project-name> resume

# 3. Check for NEXT_STEPS file
ls -t projects/<project-name>/NEXT_STEPS_*.md | head -1
```

Then provide a personalized report with next actions.
See ma-manuscript-quarto/SKILL.md for detailed workflow.
Phase 1 (MANDATORY): Fill manuscript_outline.md and get user approval before writing any sections.
Phase 2: Use the meta-manuscript-assembly skill (6-8 hours to 90% publication-ready manuscript)
Phase 3 (QUALITY REFINEMENT)
- Transforms 90% → 95-98% readiness (+10% acceptance rate)
- 5 Required Items (Total: 2-3 hours)
- ROI: Prevents 6-12 months revision delay
- Environment:
  - First-time setup: `bash setup.sh` (30-60 min, one-time)
  - Verify anytime: `bash verify_environment.sh` (2 min)
  - See ENVIRONMENT_QUICK_START.md for details
- Python: Always `uv run`, never `python3` directly
- Dependencies:
  - Python: `uv add <package>` in `tooling/python/`
  - R: `install.packages()` then `renv::snapshot()` in the project root
- API keys: Read from `.env` (ma-search-bibliography/references/api-setup.md)
- Rounds: Keep all `round-XX` data; never overwrite
- Delete: Use `rip`, not `rm`
Only ask if information is missing from TOPIC.txt:
- Target population, intervention, comparator, outcomes (PICO)
- Analysis type (pairwise vs network): preliminary by treatment count (≥3 → `nma_candidate`), confirmed after screening with a transitivity assessment
- Additional databases beyond the PubMed + Scopus mandatory minimum: Embase? Cochrane?
- Risk-of-bias tool (RoB 2 vs ROBINS-I)
- Effect measure (RR/OR/HR/SMD/MD)
- Subgroup variables
Essential Guides:
- Time Investment Guidance - Realistic timeline expectations (22-32 hours)
- Getting Started - Detailed step-by-step guide
Module-Specific:
- Each `ma-*/SKILL.md` contains commands, validation criteria, and key outputs
- Each `ma-*/references/` directory contains detailed methodology guides
Network Meta-Analysis (for ≥3 treatments):
- NMA Overview - When to use NMA vs pairwise MA (10 min)
- NMA R Guide - Step-by-step Bayesian NMA workflow (30-45 min)
- NMA Completion Checklist - 25-item systematic checklist
R Resources:
- R Figure Generation Guide - Task-based guides
- Forest Plots - 15-30 min
- Table 1 - 30-60 min
- Multi-Panel Figures - 15-20 min
Journal Preparation:
- Journal Formatting - Lancet/JAMA/Nature Medicine requirements
- Supplementary Materials Template
Example Project:
- ICI in TNBC Meta-Analysis - 99% complete project (5 RCTs, N=2,402)
- See `projects/ici-breast-cancer/README.md` for navigation
- Use as a template for your own meta-analysis
See ma-end-to-end/SKILL.md for complete QA threshold table.
Key validation points:
- Dual-review kappa ≥ 0.60
- Figure DPI ≥ 300
- PRISMA checklist 27/27 (or 32/32 for NMA)
- Publication readiness score ≥ 95%
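The kappa gate refers to Cohen's kappa for inter-rater agreement between the two screeners. A self-contained sketch of the standard formula (the include/exclude labels are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum((ca[lab] / n) * (cb[lab] / n) for lab in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

a = ["include", "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(round(cohens_kappa(a, b), 3))  # 0.667 -> passes the >= 0.60 gate
```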
AI Automation: 95-98% (up from 85-90%)
See ma-publication-quality/SKILL.md for details on:
- `publication_readiness_score.py` - Objective 0-100% score (8 components)
- `validate_nma_outputs.py` - NMA-specific validation (7 checks)
- Enhanced `claim_audit.py` - Overclaim detection (12 patterns)
- `nma-completion-checklist.md` - 25-item pre-submission checklist
Impact:
- Manual QA time: 8-12h → 3-4h (-60%)
- NMA checklist errors: 40% → <5%
- Overclaim detection: 0% → 95%
- Publication readiness clarity: Subjective → Objective 0-100%
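The 0-100% score is a weighted checklist. The sketch below shows the general shape only; the component names and weights here are assumptions, not the actual internals of `publication_readiness_score.py`:

```python
# Hypothetical components and weights (8 components, as in the real script,
# but these names and weights are illustrative assumptions).
COMPONENTS = {
    "prisma_checklist": 0.20,
    "figures_print_ready": 0.15,
    "statistics_reproducible": 0.15,
    "grade_assessment": 0.15,
    "overclaim_audit": 0.15,
    "references_complete": 0.10,
    "word_limits": 0.05,
    "supplement_complete": 0.05,
}

def readiness(scores: dict[str, float]) -> float:
    """Weighted 0-100% score from per-component completion fractions in [0, 1]."""
    return 100 * sum(w * scores.get(name, 0.0) for name, w in COMPONENTS.items())

print(round(readiness({name: 1.0 for name in COMPONENTS}), 1))  # 100.0
```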