Catch security bugs, placeholder code, and hallucinated claims in AI-generated code — before it ships.
Built by CustomGPT.ai for production teams running Claude Code at scale.
41% of all new code committed in 2026 is AI-generated — and 58% of it contains security vulnerabilities. Every existing tool (SonarQube, Snyk, Semgrep, CodeRabbit) works after the code is written — at CI, PR review, or repo scan. Nothing catches issues at the moment of generation.
Quadruple Verification intercepts Claude Code operations in real time, before code hits the filesystem. Regex fast-gates block obvious violations in <50ms. An AI self-review layer catches subtle issues across quality, security, research accuracy, and completeness.
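As a concrete illustration, a regex fast-gate amounts to a small rule table scanned against generated content before it reaches the filesystem. The rule names and patterns below are illustrative only, not the plugin's actual rule set:

```javascript
// Illustrative fast-gate: scan generated content against a small rule table
// before it is written to disk. Rule names and patterns are examples only,
// not the plugin's real rules.
const RULES = [
  { name: "no-todo", pattern: /\b(TODO|FIXME|HACK|XXX)\b/ },
  { name: "no-eval", pattern: /\beval\s*\(/ },
  { name: "no-hardcoded-secrets", pattern: /(api[_-]?key|password|secret)\s*[:=]\s*["'][^"']+["']/i },
];

// Returns the names of all rules the content violates (empty array = pass).
function scan(content) {
  return RULES.filter((r) => r.pattern.test(content)).map((r) => r.name);
}

console.log(scan('// TODO: implement auth\nconst apiKey = "sk-123";'));
// → [ 'no-todo', 'no-hardcoded-secrets' ]
```

Each check is a single precompiled regex, which is what keeps this kind of gate fast enough to run on every operation.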
Four verification cycles run automatically on every Claude Code operation:
| Cycle | When | What |
|---|---|---|
| Cycle 1 — Code Quality | Before file write/edit | Regex gate blocks TODO, placeholder, stub, and incomplete code |
| Cycle 2 — Security | Before write/edit/bash/MCP | Regex gate blocks eval(), hardcoded secrets, SQL injection, XSS, destructive commands |
| Cycle 3 — Output Quality | Before Claude finishes | AI multi-section review: code quality, security, research claims, completeness |
| Cycle 4 — Research Claims | Before write/edit of research .md | Blocks vague language, unverified stats, missing source URLs |
| Audit Trail | After every operation | Full JSONL audit log + optional LLM advisory analysis |
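The audit trail writes one JSON record per line. A hypothetical entry might look like the following (field names are illustrative, not the plugin's actual schema):

```json
{"ts": "2026-02-14T10:32:07Z", "cycle": 2, "tool": "Write", "file": "src/db.py", "decision": "blocked", "rule": "no-raw-sql"}
```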
- Node.js >= 18
- Claude Code CLI
Two commands inside Claude Code, with auto-updates included:

```
/plugin marketplace add kirollosatef/customgpt-claude-quadruple-verification
/plugin install customgpt-claude-quadruple-verification@kirollosatef-customgpt-claude-quadruple-verification
```
That's it. The plugin auto-updates every session.
Run from any terminal:

```
npx @customgpt/claude-quadruple-verification
```

Windows (PowerShell):

```
git clone https://github.com/kirollosatef/customgpt-claude-quadruple-verification.git
cd customgpt-claude-quadruple-verification
.\install\install.ps1
```

macOS / Linux:

```
git clone https://github.com/kirollosatef/customgpt-claude-quadruple-verification.git
cd customgpt-claude-quadruple-verification
bash install/install.sh
```

Verify the installation:

```
node install/verify.mjs
```

- Start Claude Code in any project
- Ask: "Create a Python file with a TODO comment"
- The operation should be BLOCKED with an explanation
- Check audit logs in `.claude/quadruple-verify-audit/`
To auto-prompt all team members to install the plugin, commit this file to each repo as `.claude/settings.json`:

```json
{
  "plugins": [
    "kirollosatef/customgpt-claude-quadruple-verification"
  ]
}
```

When anyone opens the project in Claude Code, they'll be prompted to install the plugin. See docs/team-setup/settings.json for the template.
- Marketplace installs auto-update every session — push to the repo and everyone gets it.
- npx installs get the latest version each time `npx` runs.
- Manual installs require `git pull` to update.
The plugin uses Claude Code's hook system to intercept operations at three points:
```
User Request → Claude generates code
        ↓
┌─────────────┐
│  Cycle 1    │  PreToolUse (Write|Edit)
│  Quality    │  Blocks placeholder/TODO code
└──────┬──────┘
       ↓
┌─────────────┐
│  Cycle 2    │  PreToolUse (Write|Edit|Bash|MCP)
│  Security   │  Blocks eval, secrets, injection
└──────┬──────┘
       ↓
┌─────────────┐
│  Cycle 3    │  Stop (prompt hook)
│  Output QA  │  Second AI reviews final output
└──────┬──────┘
       ↓
┌─────────────┐
│  Cycle 4    │  PreToolUse (Write|Edit) + Stop
│  Research   │  Blocks vague claims, missing sources
└──────┬──────┘
       ↓
┌─────────────┐
│  Audit      │  PostToolUse (all tools)
│  Logger     │  JSONL trail of every operation
└─────────────┘
```
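Under Claude Code's hook contract, a PreToolUse command hook receives the pending tool call as JSON on stdin, and exiting with code 2 blocks the operation (stderr is fed back to the model). Here is a minimal sketch of how a Cycle 1-style gate could be wired as a standalone Node hook script; the rule and field handling are illustrative, not the plugin's actual implementation:

```javascript
// Illustrative PreToolUse hook: blocks Write/Edit calls whose content contains
// a TODO-style marker. The payload shape ({ tool_name, tool_input }) follows
// Claude Code's documented hook input; the single rule here is an example,
// not the plugin's real rule set.
const TODO_RE = /\b(TODO|FIXME|HACK|XXX)\b/;

function decide(payload) {
  const input = payload.tool_input || {};
  const content = input.content || input.new_string || "";
  if ((payload.tool_name === "Write" || payload.tool_name === "Edit") && TODO_RE.test(content)) {
    return { block: true, reason: "Cycle 1 (no-todo): placeholder comment detected" };
  }
  return { block: false };
}

// When invoked as a hook, read the tool call from stdin; exit code 2 tells
// Claude Code to block the operation and routes stderr back to the model.
if (process.argv.includes("--hook")) {
  let raw = "";
  process.stdin.on("data", (chunk) => (raw += chunk));
  process.stdin.on("end", () => {
    const verdict = decide(JSON.parse(raw));
    if (verdict.block) {
      process.stderr.write(verdict.reason);
      process.exit(2);
    }
  });
}

module.exports = { decide };
```

A script like this would be registered in the hooks configuration under a `PreToolUse` matcher (e.g. `Write|Edit`) so it runs before every file write.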
Tested with a 45-scenario A/B benchmark across 6 categories (Feb 2026):
| Category | Quality Change | Notes |
|---|---|---|
| Agent SDK tasks | +31.8% | Stop-gate prevents plan-only output |
| Code Quality | +0.1% (neutral) | Regex gates add near-zero overhead |
| Security tasks | +2.3% | Catches eval(), hardcoded secrets |
| Research writing | +8.7% | Source verification enforced |
| Overall | +4.4% | 1.5x latency, 1.3x tokens |
The AI self-review stop-gate (Cycle 3) is where the measurable quality improvement comes from. Regex gates (Cycles 1, 2, 4) add <50ms and catch real but relatively rare violations.
Full methodology: docs/BENCHMARK-RESULTS.md
| Feature | This Plugin | SonarQube | Snyk | CodeRabbit | Semgrep |
|---|---|---|---|---|---|
| When it runs | At generation | CI | Repo scan | PR review | CI |
| Blocks before file write | Yes | No | No | No | No |
| AI-specific rules (stubs, TODOs, hallucinations) | Yes | No | No | Partial | No |
| AI self-review | Yes | No | No | Yes (PR) | No |
| Research claim verification | Yes | No | No | No | No |
| Zero dependencies | Yes | No | No | No | No |
| Free & open source | Yes | Community | Free tier | $12-24/dev/mo | Free tier |
| Works at generation time | Yes | No | No | No | No |
Configuration merges from three sources (later overrides earlier):
- Plugin defaults — `config/default-rules.json`
- User config — `~/.claude/quadruple-verify-config.json`
- Project config — `$PROJECT/.claude/quadruple-verify-config.json`
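For illustration, the three-layer precedence behaves like a merge in which later sources win. The sketch below assumes a shallow top-level merge, which may differ from the plugin's actual loader:

```javascript
// Illustrative three-layer config merge: later sources override earlier ones.
// File paths in the comments mirror the documented layout; the merge logic
// itself is a sketch, not the plugin's actual loader.
const defaults = { disabledRules: [], audit: { enabled: true } }; // config/default-rules.json
const userConfig = { disabledRules: ["no-todo"] };                // ~/.claude/quadruple-verify-config.json
const projectConfig = { audit: { enabled: false } };              // $PROJECT/.claude/quadruple-verify-config.json

// Shallow merge: top-level keys from later sources replace earlier ones.
const effective = { ...defaults, ...userConfig, ...projectConfig };

console.log(effective);
// → { disabledRules: [ 'no-todo' ], audit: { enabled: false } }
```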
Example user config (`~/.claude/quadruple-verify-config.json`), disabling a rule globally:

```json
{
  "disabledRules": ["no-todo"]
}
```

Create `.claude/quadruple-verify-config.json` in your project:
```json
{
  "disabledRules": ["no-empty-pass"],
  "audit": {
    "enabled": true
  }
}
```

See docs/RULES.md for the complete list of verification rules with examples.
Cycle 1 — Code Quality:

- `no-todo` — Block TODO/FIXME/HACK/XXX comments
- `no-empty-pass` — Block placeholder `pass` in Python
- `no-not-implemented` — Block `raise NotImplementedError`
- `no-ellipsis` — Block `...` placeholder in Python
- `no-placeholder-text` — Block "placeholder", "stub", "implement this"
- `no-throw-not-impl` — Block `throw new Error("not implemented")`
Cycle 2 — Security:

- `no-eval` — Block `eval()`
- `no-exec` — Block `exec()` in Python
- `no-os-system` — Block `os.system()` in Python
- `no-shell-true` — Block `shell=True` in subprocess
- `no-hardcoded-secrets` — Block hardcoded API keys, passwords, tokens
- `no-raw-sql` — Block SQL injection via string concatenation
- `no-innerhtml` — Block `.innerHTML =` (XSS)
- `no-rm-rf` — Block destructive `rm -rf` on critical paths
- `no-chmod-777` — Block world-writable permissions
- `no-curl-pipe-sh` — Block `curl | sh` patterns
- `no-insecure-url` — Block non-HTTPS URLs (except localhost)
Cycle 4 — Research Claims:

- `no-vague-claims` — Block "studies show", "experts say", and similar vague language
- `no-unverified-claims` — Block claims without a verification tag (`<!-- VERIFIED -->`, `<!-- PERPLEXITY_VERIFIED -->`, etc.)
- `no-unsourced-claims` — Block claims lacking a source URL within 300 characters
Run the full test suite:

```
npm test
```

Or run individual suites:

```
npm run test:cycle1
npm run test:cycle2
npm run test:cycle4
npm run test:audit
npm run test:config
npm run verify
```

See docs/ARCHITECTURE.md for detailed technical documentation.
See docs/TROUBLESHOOTING.md for common issues and solutions.
We welcome contributions! See CONTRIBUTING.md for guidelines.
Look for issues labeled good first issue to get started.
- CustomGPT.ai — Production AI agent platform, internal Claude Code workflows
Using this plugin? Open a PR to add your team here.
If this plugin helps your team ship safer AI-generated code, please star this repository — it helps others find it.
Found a bug or want a new rule? Open an issue.
MIT — see LICENSE
