Project-specific rules and conventions for AI assistants and contributors.
```sh
cargo build      # Build
cargo test       # Run all tests
cargo clippy     # Lint
cargo fmt --all  # Format (CI enforces this)
```

Pushing code: always use `just push` instead of `git push`.
It runs fmt → clippy → test before pushing, preventing CI failures.
Supports the same arguments as `git push` (e.g. `just push -u origin branch`).
This is the single most important rule for this codebase.
Different LLM providers have different API quirks (field names, message format
requirements, schema restrictions, etc.). We handle these differences through
the `ProviderCompat` configuration layer, not through hardcoded conditionals.
Never do this:
```rust
// WRONG: hardcoded provider detection
if self.base_url.contains("api.openai.com") {
    body["max_completion_tokens"] = json!(max_tokens);
} else {
    body["max_tokens"] = json!(max_tokens);
}

// WRONG: hardcoded model name check
if request.model.starts_with("deepseek") {
    msg["reasoning_content"] = json!("");
}

// WRONG: hardcoded vendor workaround
if is_kimi_model {
    body["temperature"] = json!(1.0);
}
```

Always do this:
```rust
// CORRECT: read from compat config
let field = self.compat.max_tokens_field.as_deref().unwrap_or("max_tokens");
body[field] = json!(request.max_tokens);

// CORRECT: configurable content filtering
if let Some(patterns) = &self.compat.strip_patterns {
    for p in patterns {
        text = text.replace(p, "");
    }
}
```

Why: Hardcoded quirks accumulate fast and turn the codebase into an unmaintainable "workaround warehouse". Provider behaviors change, new providers appear, and model-name checks go stale. Configuration-driven compat keeps the code clean and gives users control.
How it works:
- Each provider type has default compat presets (see `ProviderCompat::openai_defaults()`, etc.)
- Users override any setting via `[providers.xxx.compat]` or `[profiles.xxx.compat]` in config
- Provider code reads `self.compat.*` fields — never inspects URLs or model names
If you need a new compat behavior:
- Add an `Option<T>` field to `ProviderCompat`
- Set its default in the appropriate preset function
- Use it in provider code via `self.compat.field_name`
- Document it in the config reference
All providers implement the `LlmProvider` trait. The engine never sees
provider-specific details. Keep it that way:
- `LlmRequest`/`LlmEvent`/`Message`/`ContentBlock` are provider-neutral
- Format conversion happens inside each provider's `build_messages()`/`build_request_body()`
Any platform-specific behavior (paths, permissions, shell commands, line endings, etc.) must be wrapped in a single centralized function. All call sites use that function — never scatter raw platform detection across multiple crates or modules.
When adding new platform-aware logic:
- Create one function in the appropriate low-level crate
- Replace all direct platform API calls with calls to that function
- Tests and docs must use platform-neutral notation (e.g. `<config_dir>/aionrs`), never hardcoded platform-specific literals (e.g. `~/.config/aionrs`)
If multiple crates need the same functionality, extract it to the appropriate existing crate in the dependency graph — don't copy-paste or reimplement. Choose the extraction target based on where it semantically belongs and where it minimizes dependency changes.
Don't create a new crate just for one shared function.
This is a Cargo workspace under `crates/`. Dependencies flow downward:
- `aion-types` is the bottom layer — zero internal dependencies
- `aion-config`, `aion-protocol` depend only on `aion-types`
- Higher-level crates (`aion-agent`, `aion-cli`) may depend on lower ones
- Never introduce circular dependencies or upward references
When adding a new crate, check `cargo metadata` to verify it fits the
existing dependency graph before adding cross-crate imports.
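A workspace manifest consistent with this layering might look like the fragment below (only the crate names mentioned above are real; the rest of the manifest is illustrative):

```toml
# Illustrative workspace manifest; member list assumed from the crates above.
[workspace]
resolver = "2"
members = [
    "crates/aion-types",     # bottom layer, zero internal deps
    "crates/aion-config",    # depends only on aion-types
    "crates/aion-protocol",  # depends only on aion-types
    "crates/aion-agent",     # higher-level
    "crates/aion-cli",       # higher-level
]
```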
- One file per logical unit within each crate
- Keep files under 800 lines; extract modules when approaching the limit
- Organize by domain responsibility, not by type
| Location | What goes there |
|---|---|
| Inline `#[cfg(test)]` in each `.rs` file | Unit tests for that module's internals |
| `crates/<crate>/tests/` | Integration tests for that crate |
Unit tests target internal logic and code paths. Integration tests target functional requirements and public API — write them from the spec, not from reading the implementation.
Every test must verify a meaningful behavior or edge case. No trivial tests that just assert the happy path without checking boundaries, error conditions, or non-obvious logic.
CI runs on macOS, Linux, and Windows. Local dev can only test the
current platform's #[cfg(...)] code — other platform branches are
verified by CI alone.
Rules:
- Never hardcode platform paths (`/tmp/...`, `C:\...`) in production code. Use `Path::join()`, `dirs::config_dir()`, `tempfile::tempdir()`, etc.
- In tests, hardcoded Unix paths (`Path::new("/foo/...")`) are fine for pure string operations (join, display) or nonexistent-path error handling. Only add `#[cfg(unix)]`/`#[cfg(windows)]` variants when the path is passed to `is_absolute()`, `validate_memory_path()`, or similar platform-sensitive checks.
- Use `std::path::Component::Normal` (not byte length) when checking path depth — prefix/root components differ across platforms.
- Rust 2021 edition, stable toolchain
- `cargo clippy` must pass without warnings
- Comments in English, commit messages in English