# v0.1.0 — First public release
Citation verification for AI-assisted research. Catches confabulated academic citations before they ship.
## Features
- `audit` — Scan directories for citation keys, match against BibTeX, flag orphans and ambiguous references
- `verify` — Verify a single DOI against the CrossRef API
- `search` — Fuzzy-search CrossRef by title
- `arxiv` — Audit recent arXiv papers for citation quality
- `aggregate` — Generate a key-claims summary from Jupyter notebooks
- `notebook-audit` — Verify `[NB##:K##]` claim references in prose
- `--json` flag for machine-readable output on all commands
- `agent.md` — Drop-in Claude Code agent definition
## Quick start
```shell
npx github:andyed/science-agent audit ./docs --bibtex=./refs.bib
```

Works with any AI coding assistant (Claude Code, ChatGPT, Gemini, Cursor, Windsurf) — it's a CLI tool, not a plugin.
## CI/CD
See `docs/github-actions.md` for a copy-paste GitHub Actions workflow that audits citations on every PR.
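A minimal workflow along these lines might look like the sketch below. The job name, Node version, and paths are illustrative assumptions, not the maintained version; `docs/github-actions.md` has the canonical copy.

```yaml
# .github/workflows/citations.yml — illustrative sketch only
name: Citation audit
on: [pull_request]

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Fails the job (nonzero exit) if orphan or ambiguous citations are found
      - name: Audit citations
        run: npx github:andyed/science-agent audit ./docs --bibtex=./refs.bib --json
```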
## Why
AI coding assistants confabulate citations at rates of 12–24% in AI-assisted docs, versus 0–7% in human-written docs. The fix: require a DOI for every citation, then verify each one against CrossRef. See `FINDINGS.md` for the full audit.
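The verify-every-DOI idea can be sketched in a few lines: check that a string is even shaped like a DOI, then look it up against CrossRef's public REST API (`https://api.crossref.org/works/{doi}`, which returns 404 for unknown DOIs). This is an illustration of the approach, not the tool's actual implementation; the regex and function names are assumptions.

```python
import re
import urllib.request
import urllib.error
from urllib.parse import quote

# Simplified shape check for modern DOIs (prefix "10.", 4-9 digit registrant).
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(s: str) -> bool:
    """Cheap local check before spending a network round-trip."""
    return bool(DOI_PATTERN.match(s))

def verify_doi(doi: str) -> bool:
    """Return True if CrossRef resolves the DOI (HTTP 200), False otherwise."""
    if not looks_like_doi(doi):
        return False
    url = f"https://api.crossref.org/works/{quote(doi)}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 = DOI unknown to CrossRef, i.e. likely confabulated
```

A confabulated citation usually fails one of these two gates: either the DOI is malformed, or CrossRef has never heard of it.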