Track the carbon footprint of your Claude Code sessions.
1. Install (or update):
```sh
curl -fsSL https://raw.githubusercontent.com/gwittebolle/claude-carbon/main/install.sh | bash
```

The same command installs and updates to the latest version.
2. Restart Claude Code. Your CO2 appears in the status line:
```
claude-carbon ⌥ main | 🟢 Opus 4.7 ███▁▁▁▁▁▁▁ 35% | $0.50 · 65g CO₂ | Use 24% ↻13:00
```
Segments, left to right: project + git branch, model + context window %, session cost + CO₂, 5h block usage % + reset time. A 🔥 prefix appears when the sustained burn rate would overshoot 100% of the limit by the end of the 5h block (after a 15 min grace window, and only once usage reaches 15%).
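The overshoot check above amounts to a linear projection of usage to the end of the block. A simplified sketch (the real logic lives in `statusline.sh` and may differ; the numbers here are illustrative):

```sh
# Hypothetical sketch of the burn-rate check: project current usage
# to the end of the 5h (300 min) block, respecting the grace window
# and the 15% usage floor described above.
used=24      # % of the 5h quota consumed
elapsed=60   # minutes into the current block
if [ "$elapsed" -gt 15 ] && [ "$used" -ge 15 ]; then
  projected=$(( used * 300 / elapsed ))  # linear projection: 120 here
  if [ "$projected" -gt 100 ]; then
    echo "burn-rate overshoot: projected ${projected}%"
  fi
fi
```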
5h quota source. The percentage comes directly from Anthropic's `/api/oauth/usage` endpoint (the same data Claude Code displays in `/usage`). No heuristic, no token-limit file to seed. Two sources, in order:
- stdin (preferred): if Claude Code injects `rate_limits.five_hour.used_percentage` in the statusline JSON, that value is used directly.
- OAuth API fallback: `GET https://api.anthropic.com/api/oauth/usage` with the bearer token from the macOS Keychain, `CLAUDE_CODE_OAUTH_TOKEN`, or `~/.claude/.credentials.json`. Cached for 60s in `~/.claude/claude-carbon/oauth-usage.json`.
Accurate on every plan, including Max 20x.
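For illustration, the preferred stdin path boils down to a single `jq` lookup. The payload below is a made-up sample; only the field path comes from the statusline JSON described above:

```sh
# Invented sample payload; in practice Claude Code writes this to stdin.
payload='{"rate_limits":{"five_hour":{"used_percentage":24}}}'
echo "$payload" | jq -r '.rate_limits.five_hour.used_percentage // empty'
```

If the field is absent, `// empty` prints nothing, which is the cue to fall back to the OAuth API.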
3. Use the slash commands:
- `/carbon-report` - text report with totals, equivalences, top sessions
- `/carbon-card` - generate shareable PNG report cards (requires `playwright-core`, see Dependencies)
- Adds a live CO2 estimate to the Claude Code status line, next to the session cost
- Persists each session to a local SQLite database
- Backfills historical data from existing `~/.claude` transcripts
- Two slash commands: `/carbon-report` (text) and `/carbon-card` (PNG)
Generate yours with `/carbon-card` in Claude Code. Exports summary and detailed PNGs to `exports/`.
Advanced options (CLI)
```sh
# Since a specific date
bash ~/code/claude-carbon/scripts/generate-report.sh --since 2026-03-01

# All time
bash ~/code/claude-carbon/scripts/generate-report.sh --all
```

Custom install directory:

```sh
CLAUDE_CARBON_DIR=~/my-path/claude-carbon curl -fsSL https://raw.githubusercontent.com/gwittebolle/claude-carbon/main/install.sh | bash
```

Manual install:

```sh
git clone https://github.com/gwittebolle/claude-carbon.git ~/code/claude-carbon
bash ~/code/claude-carbon/scripts/setup.sh
```

Then add to `~/.claude/settings.json`:

```json
{
  "statusLine": {
    "type": "command",
    "command": "~/code/claude-carbon/scripts/statusline.sh"
  },
  "hooks": {
    "Stop": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "~/code/claude-carbon/scripts/persist-session.sh"
          }
        ]
      }
    ]
  }
}
```

Restart Claude Code.
Three data paths, two levels of accuracy:
| Script | Trigger | Data source | Subagents | Cache reads | Accuracy |
|---|---|---|---|---|---|
| `backfill.sh` | Manual / setup | JSONL files | Included | Excluded | Best estimate |
| `persist-session.sh` | Stop hook (session end) | JSONL files | Included | Excluded | Best estimate |
| `statusline.sh` | Every turn (live) | `context_window` JSON | Not included | Included* | Approximate |
`backfill` and `persist-session` parse the raw JSONL transcripts (main session + subagent files), applying per-model emission factors. Only `input_tokens` and `cache_creation_input_tokens` are counted. These feed the SQLite database used by reports.

`statusline` reads `context_window.total_input_tokens` from Claude Code at each turn. This value represents the current context size (not a cumulative total), includes cache reads, and does not account for subagent tokens. It's an indicative live display, not a data source for reports.

*Claude Code does not expose `cache_read_input_tokens` separately in the statusline JSON. Parsing JSONL incrementally would be too slow for a per-turn display. A proper fix would require Anthropic to expose the token breakdown in the status hook API.
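As an illustration of the "best estimate" path, counting only the two billed fields from one transcript usage record could look like this (the sample record is invented; real transcript schemas may differ):

```sh
# One invented JSONL transcript line. Note the large
# cache_read_input_tokens value, which is deliberately ignored.
line='{"message":{"usage":{"input_tokens":1200,"cache_creation_input_tokens":300,"cache_read_input_tokens":50000}}}'
echo "$line" | jq '.message.usage.input_tokens + .message.usage.cache_creation_input_tokens'
```

Here only 1500 tokens would count toward emissions, even though 50,000 cache-read tokens passed through the session.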
| Command | What it does |
|---|---|
| `/carbon-report` | Text report with totals, equivalences, top sessions |
| `/carbon-card` | Generate shareable PNG report cards |
Scripts (run automatically, rarely needed manually)
| Script | What it does |
|---|---|
| `setup.sh` | Init database, backfill historical sessions, show total |
| `statusline.sh` | Status line script (called automatically by Claude Code) |
| `persist-session.sh` | Stop hook (saves session data on exit) |
| `backfill.sh` | Re-parse all historical JSONL transcripts (incl. subagents) |
| `generate-report.sh` | Export PNG report cards (CLI, with `--since` / `--all`) |
Factors from Jegham et al. 2025, a peer-reviewed study measuring energy consumption of LLM inference on AWS infrastructure.
| Model | Input (gCO2e/Mtok) | Output (gCO2e/Mtok) | Basis |
|---|---|---|---|
| Opus | 500 | 3000 | Extrapolated (3x Sonnet) |
| Sonnet | 190 | 1140 | Measured |
| Haiku | 95 | 570 | Extrapolated (0.5x Sonnet) |
Important: these are order-of-magnitude estimates, not precise measurements.
- Sonnet factors are derived from Jegham et al. direct measurements. Opus and Haiku are extrapolated (no public data from Anthropic on per-model energy consumption).
- Cache read tokens are excluded from the calculation (only `input_tokens` and `cache_creation_input_tokens` are counted). Cache reads represent the majority of tokens in Claude Code but consume negligible energy.
- Carbon intensity uses the AWS grid average (0.287 kgCO2e/kWh), not real-time grid data.
- Anthropic does not publish Scope 1, 2, or 3 emissions. These estimates are independent and based on academic research, not provider data.
Factors are editable in `data/factors.json`. See METHODOLOGY.md for the full scientific basis, formula, and equivalences.
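Putting the table together: the calculation implied by these factors is tokens (in millions) times the per-model factor. A sketch of that arithmetic (see METHODOLOGY.md for the authoritative formula):

```sh
# Sonnet example: 2M input tokens + 0.5M output tokens.
#   gCO2e = input_Mtok * 190 + output_Mtok * 1140
awk 'BEGIN { printf "%.0f g CO2e\n", 2000000/1e6*190 + 500000/1e6*1140 }'
```

Output tokens dominate: here 0.5M output tokens emit more than 2M input tokens.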
- `jq` - JSON parsing
- `sqlite3` - local database
- `git` - branch detection in status line (optional)
- `curl` - 5h quota usage via Anthropic's `/api/oauth/usage` endpoint (optional, 60s cache)
- `playwright-core` + Chromium - PNG export for `/carbon-card` (optional)
`jq` and `sqlite3` are pre-installed on macOS. On Linux: `apt install jq sqlite3`.
To use /carbon-card, install Playwright and its Chromium browser:
```sh
npm install -g playwright-core
npx playwright install chromium
```

Measuring is step one. Here are concrete levers to reduce your AI carbon footprint, ranked by impact.
Output tokens cost 5x more energy than input tokens. Opus consumes ~3x more than Sonnet per token.
```json
{
  "env": {
    "CLAUDE_CODE_SUBAGENT_MODEL": "claude-haiku-4-5"
  }
}
```

Use Opus for architecture and planning, Sonnet for daily work, and Haiku for subagents (exploration, file reading, reviews). This alone can cut your emissions by 60%.
RTK is a CLI proxy that filters noise from shell outputs (progress bars, verbose logs, passing tests) before they hit the context window. 60-90% token reduction on CLI commands, zero quality loss.
```sh
brew install rtk-ai/tap/rtk
rtk init -g
```

Claude's extended thinking can use up to 32k hidden tokens per message. Capping it reduces consumption without degrading quality on routine tasks.
```json
{
  "env": {
    "MAX_THINKING_TOKENS": "10000"
  }
}
```

By default, Claude Code compacts context at 95% usage. Compacting earlier keeps the context cleaner and avoids bloated sessions.
```json
{
  "env": {
    "CLAUDE_AUTOCOMPACT_PCT_OVERRIDE": "50"
  }
}
```

Add to your project's `CLAUDE.md`:
```
Be concise. No preamble, no summaries unless asked.
```
Output tokens are the most expensive in both cost and energy.
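To see why this matters, a rough back-of-the-envelope with the Sonnet output factor from the table above (the session sizes here are hypothetical):

```sh
# Trimming a session's output from 200k to 50k tokens
# at 1140 gCO2e per million output tokens:
awk 'BEGIN { printf "%.0f g CO2e saved\n", (200000-50000)/1e6*1140 }'
```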
| Lever | Estimated reduction |
|---|---|
| Right model per task | -60% vs all-Opus |
| RTK | -70% on CLI tokens |
| Thinking cap at 10k | -70% on thinking tokens |
| Haiku subagents | -80% on exploration |
| All combined | -50 to 70% total |
- IEA - Energy and AI (2025) - data center projections
- Jegham et al. - How Hungry is AI? - per-model energy measurements
- UCL/UNESCO - 90% AI energy reduction - frugal AI approaches
- GreenIT.fr - AI impacts 2025-2030 - French data
Every Claude Code session uses real compute, real energy, real emissions. The number is small per query, but it adds up. Making it visible is the first step to owning it.
claude-carbon is free and open source under the MIT license. Contributions welcome.
Built by Gaetan Wittebolle.

