
Commit 3c365f4

docs: rewrite README, add DOCS.md, update PRD v3.0 and CLAUDE.md for v0.3.1
1 parent: 9333f9d

4 files changed

Lines changed: 792 additions & 183 deletions

File tree

CLAUDE.md

Lines changed: 6 additions & 5 deletions
@@ -20,7 +20,7 @@ cargo build --release
 - **Hybrid Git**: gix for repo discovery, git CLI for diffs (documented choice)
 - **Tree-sitter**: Full file parsing with hunk mapping (not just +/- lines)
 - **Parallelism**: rayon for CPU-bound tree-sitter parsing, tokio JoinSet for concurrent git content fetching
-- **LLM**: Ollama primary (qwen3:4b), OpenAI/Anthropic secondary
+- **LLM**: Ollama primary (qwen3.5:4b), OpenAI/Anthropic secondary
 - **Streaming**: Line-buffered JSON parsing with CancellationToken

 ## Key Design Decisions
@@ -61,7 +61,7 @@ Location: platform-dependent (use `commitbee init` to create, `commitbee doctor`

 ```toml
 provider = "ollama"
-model = "qwen3:4b"
+model = "qwen3.5:4b"
 ollama_host = "http://localhost:11434"
 max_diff_lines = 500
 max_file_lines = 100
@@ -160,7 +160,7 @@ src/
 ### Running Tests

 ```bash
-cargo test # All tests (178 tests)
+cargo test # All tests (182 tests)
 cargo test --test sanitizer # CommitSanitizer tests
 cargo test --test safety # Safety module tests
 cargo test --test context # ContextBuilder tests
@@ -240,9 +240,10 @@ When adding or updating crates:
 ### Known Issues

 - **No streaming during split generation**: When commit splitting generates per-group messages, LLM output is not streamed to the terminal (tokens are consumed silently). Single-commit generation streams normally. Low priority — split generation is fast since each sub-prompt is smaller.
-- **Thinking model output**: Models with thinking enabled (e.g. `qwen3:4b` default) prepend `<think>...</think>` blocks before their JSON response. The sanitizer now strips both `<think>` and `<thought>` blocks (closed and unclosed) during parsing, so this is handled. However, with tight token budgets (`num_predict: 256`), thinking tokens still consume output budget. Consider passing `think: false` in Ollama API options for models that support it, or increasing `num_predict` for thinking models.
+- **Thinking model output**: Models with thinking enabled prepend `<think>...</think>` blocks before their JSON response. The sanitizer strips both `<think>` and `<thought>` blocks (closed and unclosed) during parsing. The `think` config option (default: `false`) controls whether Ollama's thinking separation is used. The default model `qwen3.5:4b` does not use thinking mode and works well with the default `num_predict: 256`.
 - **Think-then-Compress prompting**: Evaluated and removed in v0.3.0. Adding `<thought>` instructions to prompts caused small models (<10B) to spend their token budget on analysis text instead of JSON output. The pre-computed EVIDENCE/CONSTRAINTS/SYMBOLS sections already do the "thinking" for the model. **Future consideration**: revisit for larger models (70B+, cloud APIs) where chain-of-thought genuinely improves output quality — would require bumping `num_predict` to 512+ and careful prompt engineering to keep thinking concise.
+- **Retry improvement plan**: Current retry is single-pass (one correction attempt via `validate_and_retry()`). **Future improvement**: configurable `max_retries` (default 3), prioritized violation ordering — fix critical errors first (e.g., `breaking_change` detection, invalid type), then structural issues, then length shortening last. Per-group retry for split commits. Would require a new config field and loop in `validate_and_retry()`.

 ### Post-Implementation Documentation TODOs

-- **README.md Running Tests**: Kept in sync with test count updates (currently 178).
+- **README.md Running Tests**: Kept in sync with test count updates (currently 182).

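The "Retry improvement plan" bullet sketches a multi-pass `validate_and_retry()` driven by a configurable `max_retries` and severity-ordered fixes. The types and helpers below (`Severity`, `Violation`, `validate`, `request_correction`) are hypothetical placeholders rather than CommitBee APIs; only the loop shape mirrors the plan:

```rust
// Hypothetical severity ordering: critical problems sort before structural
// ones, and length trimming comes last, as the plan describes.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Severity {
    Critical,   // e.g. missed breaking_change, invalid commit type
    Structural, // e.g. malformed scope or body
    Length,     // subject over the length limit
}

struct Violation {
    severity: Severity,
    detail: String,
}

// Placeholder validator: a real one would check Conventional Commits rules.
fn validate(message: &str) -> Vec<Violation> {
    let mut v = Vec::new();
    if message.len() > 72 {
        v.push(Violation { severity: Severity::Length, detail: "subject too long".into() });
    }
    v
}

// Placeholder correction step: a real flow would re-prompt the LLM with the violations.
fn request_correction(message: &str, violations: &[Violation]) -> String {
    eprintln!("retrying, worst violation: {}", violations[0].detail);
    message.chars().take(72).collect()
}

fn validate_and_retry(mut message: String, max_retries: u32) -> String {
    for _ in 0..max_retries {
        let mut violations = validate(&message);
        if violations.is_empty() {
            break;
        }
        // Fix the most severe violations first.
        violations.sort_by_key(|v| v.severity);
        message = request_correction(&message, &violations);
    }
    message
}

fn main() {
    let long = "feat(parser): add an extremely long subject line that will definitely exceed the limit".to_string();
    println!("{}", validate_and_retry(long, 3));
}
```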