v0.1.0 — First public release

@andyed andyed released this 14 Apr 12:08

Citation verification for AI-assisted research. Catches confabulated academic citations before they ship.

Features

  • audit — Scan directories for citation keys, match against BibTeX, flag orphans and ambiguous references
  • verify — Verify a single DOI against the CrossRef API
  • search — Fuzzy-search CrossRef by title
  • arxiv — Audit recent arXiv papers for citation quality
  • aggregate — Generate key-claims summary from Jupyter notebooks
  • notebook-audit — Verify [NB##:K##] claim references in prose
  • --json flag for machine-readable output on all commands
  • agent.md — Drop-in Claude Code agent definition
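The core of audit is matching citation keys found in prose against the keys defined in a BibTeX file. The sketch below is a minimal illustration of that idea, not the tool's actual implementation — the `\cite{…}` pattern and the fuzzy-match cutoff are assumptions:

```python
import re
from difflib import get_close_matches

def audit_keys(prose: str, bib_keys: set[str]) -> dict:
    """Flag citation keys with no BibTeX entry (orphans), suggesting a
    close existing key where one exists (a likely typo)."""
    # Assumes \cite{key}-style references; the real tool may recognize more formats.
    cited = set(re.findall(r"\\cite\{([^}]+)\}", prose))
    orphans = {}
    for key in cited - bib_keys:
        orphans[key] = get_close_matches(key, bib_keys, n=1, cutoff=0.8)
    return orphans

# "smith2021" has no entry but closely matches "smith2020".
orphans = audit_keys(r"\cite{smith2021} and \cite{smith2020}", {"smith2020"})
```

Orphans with a near-miss suggestion are usually typos; orphans with no suggestion are the confabulation candidates worth verifying.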

Quick start

npx github:andyed/science-agent audit ./docs --bibtex=./refs.bib

Works with any AI coding assistant (Claude Code, ChatGPT, Gemini, Cursor, Windsurf) — it's a CLI tool, not a plugin.

CI/CD

See docs/github-actions.md for a copy-paste GitHub Actions workflow that audits citations on every PR.
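docs/github-actions.md remains the canonical copy. As a rough illustration, such a workflow might look like the sketch below — the paths, trigger, and action versions are assumptions:

```yaml
name: Citation audit
on: [pull_request]

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npx github:andyed/science-agent audit ./docs --bibtex=./refs.bib
```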

Why

In our audit, AI coding assistants confabulated citations at a 12–24% rate in AI-assisted docs, versus 0–7% in human-written docs. The fix: require a DOI for every citation and verify it against CrossRef. See FINDINGS.md for the full audit.
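A require-a-DOI policy has a cheap offline half: syntactic screening before the CrossRef round-trip. The sketch below uses CrossRef's published recommended regex; it is an illustration of the approach, not the tool's actual check, and the DOI strings are only examples:

```python
import re

# CrossRef's recommended pattern for matching modern DOIs.
DOI_RE = re.compile(r"^10\.\d{4,9}/[-._;()/:a-zA-Z0-9]+$")

def looks_like_doi(s: str) -> bool:
    """Cheap syntactic screen. A real verification still queries
    the CrossRef API (api.crossref.org/works/<doi>) to confirm the
    DOI resolves to a record matching the cited title and authors."""
    return bool(DOI_RE.match(s.strip()))

looks_like_doi("10.1145/3025453.3025735")    # plausible DOI syntax
looks_like_doi("doi:not-a-real-identifier")  # fails the screen
```

Screening first keeps obviously malformed identifiers from ever hitting the API; only syntactically plausible DOIs need a network check.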