
Releases: rand/ananke

v0.2.1: Language Precision & Quality

02 Mar 21:48


What changed

v0.2.1 is a quality and precision improvement across Ananke's 14-language extractor pipeline. No new CLI commands, no API changes, no breaking changes.

Comment-aware pattern matching

Extractors now skip patterns inside comments, strings, and doc comments. Previously, a comment such as // function foo() could produce a false constraint. All 14 languages filter correctly.
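In spirit, the filter works like this: blank out comment and string spans first, then run the extraction patterns over what remains. This is an illustrative Python sketch under invented pattern and helper names, not Ananke's actual Zig code.

```python
import re

# Mask line comments and double-quoted strings with spaces (preserving
# byte offsets), so extraction patterns can't match inside them.
COMMENT_OR_STRING = re.compile(
    r'//[^\n]*'            # line comment
    r'|"(?:\\.|[^"\\])*"'  # double-quoted string with escapes
)

def strip_comments_and_strings(source: str) -> str:
    return COMMENT_OR_STRING.sub(lambda m: ' ' * len(m.group()), source)

FUNC_PATTERN = re.compile(r'\bfunction\s+(\w+)\s*\(')

def extract_functions(source: str) -> list[str]:
    return FUNC_PATTERN.findall(strip_comments_and_strings(source))

src = '''
function real() {}
// function commented_out()
let s = "function in_string()";
'''
print(extract_functions(src))  # ['real']
```

Replacing spans with spaces rather than deleting them keeps source offsets stable, which matters when extracted constraints carry positions.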

Type system coverage: 4 new languages

C, Ruby, PHP, and Swift now have type parsers, primitives, and inhabitation edges in the type system. This means constrained generation respects type relationships for all 14 supported languages, not just the original 10.

Per-language precision improvements

  • Go: grouped multi-line imports parsed correctly
  • Rust: Box/Arc/Rc/HashMap/dyn Trait recognized as wrapper types
  • Java: wildcard generics (? extends/super) handled
  • TypeScript: N-ary unions and utility types (Partial, Required, Pick, Omit)
  • C++: smart pointers (unique_ptr, shared_ptr, weak_ptr)

Tier 2 language promotion

Five Tier 2 languages gained deeper extraction:

  • Swift: extension methods
  • Kotlin: value classes, typealias
  • C#: delegates, IAsyncEnumerable
  • Ruby: visibility tracking, attr_accessor/reader/writer
  • PHP: union return types

Dynamic scope inhabitation

When Homer scope graph data is available, user-defined types now automatically generate inhabitation edges. This means the type system learns from your codebase rather than relying solely on built-in type relationships.

Multi-line extractor state machines

Go grouped imports, Rust where clauses, Java annotation blocks, and Python multi-line signatures are now parsed correctly via small state machines rather than single-line regex.
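The Go grouped-import case can be sketched as a two-state line machine. Illustrative Python, not the shipped Zig extractor; the function name is invented.

```python
# Two states: outside an import group, or inside one. Single-line
# regex can't see past "import (", so the state carries across lines.
def parse_go_imports(lines):
    imports, in_group = [], False
    for line in lines:
        s = line.strip()
        if not in_group:
            if s.startswith('import ('):
                in_group = True            # enter grouped-import state
            elif s.startswith('import '):
                imports.append(s.split()[-1].strip('"'))
        else:
            if s == ')':
                in_group = False           # group closed
            elif s:
                # last token handles both `"fmt"` and aliased `f "fmt"`
                imports.append(s.split()[-1].strip('"'))
    return imports

src = '''package main
import (
    "fmt"
    "net/http"
)
import "os"
'''
print(parse_go_imports(src.splitlines()))  # ['fmt', 'net/http', 'os']
```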

Property-based tests

8 new property-based tests: parser crash safety, primitive round-trips, optional wrapping idempotency, BFS reflexivity, edge monotonicity, builtin edge validity, comment filtering, and pattern determinism.
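As a flavor of what one such property checks, here is a toy, stdlib-only rendition of the optional-wrapping idempotency property: wrapping a type in Optional twice must equal wrapping it once. Helper names are hypothetical; the real tests live in Ananke's Zig suite.

```python
import random
import string

def wrap_optional(ty: str) -> str:
    # Normalizing wrapper: already-optional types are left unchanged.
    return ty if ty.startswith('Optional[') else f'Optional[{ty}]'

def random_type(rng: random.Random) -> str:
    base = ''.join(rng.choices(string.ascii_letters, k=rng.randint(1, 8)))
    return f'Optional[{base}]' if rng.random() < 0.5 else base

# Property: wrap(wrap(t)) == wrap(t) for arbitrary generated types.
rng = random.Random(0)
for _ in range(1000):
    t = random_type(rng)
    assert wrap_optional(wrap_optional(t)) == wrap_optional(t)
print('optional wrapping idempotency holds for 1000 random types')
```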

Numbers

  • 512 Zig tests (+39 from v0.2.0's 473), 144 Rust tests, 0 failures
  • CI green across all 7 jobs (security, lint, coverage, ubuntu, macos, integration, gate)
  • +3,246 lines across 29 files

Full Changelog: v0.2.0...v0.2.1

v0.2.0

02 Mar 18:21


v0.1.0 extracted constraints and compiled them. v0.2.0 makes them compose.

CLaSH constraint algebra

Five constraint domains in two tiers. Hard domains (Syntax, Types, Imports) compose by intersection: if any domain rejects a token, it can't be generated. Soft domains (ControlFlow, Semantics) bias the distribution without blocking. The key invariant: adding soft constraints can never make a satisfiable set unsatisfiable.

Domain fusion combines all five into a single per-token decision. Hard domains fuse via exact mask intersection (~10μs/token). Soft domains fuse via additive logit reweighting within the feasible set. CRANE-style adaptive switching relaxes constraints during reasoning tokens and tightens them for structured output.
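The fusion rule above can be sketched in a few lines over a toy vocabulary (illustrative Python, not Ananke's API): hard domains intersect boolean masks, and soft domains add logit biases only within the surviving set.

```python
def fuse(logits, hard_masks, soft_biases):
    n = len(logits)
    # Hard domains compose by intersection: a token survives only if
    # every hard mask allows it.
    feasible = [all(m[i] for m in hard_masks) for i in range(n)]
    out = []
    for i in range(n):
        if not feasible[i]:
            out.append(float('-inf'))  # hard reject: never generated
        else:
            # Soft domains bias by finite additive reweighting.
            out.append(logits[i] + sum(b[i] for b in soft_biases))
    return out

logits = [1.0, 2.0, 0.5, 3.0]
syntax = [True, True, False, True]   # hard: Syntax domain
types  = [True, False, False, True]  # hard: Types domain
flow   = [0.5, 0.0, 0.0, -0.2]       # soft: ControlFlow bias

fused = fuse(logits, [syntax, types], [flow])
print(fused)  # [1.5, -inf, -inf, 2.8]
```

Because soft biases are always finite, they can shift probability mass but never empty a feasible set, which is exactly the invariant stated above: adding soft constraints cannot make a satisfiable set unsatisfiable.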

Type inhabitation

Given a target type, which expressions in scope can produce it? The inhabitation graph does BFS reachability over 9 edge kinds across 10 languages, then generates token masks from the reachable set.
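Stripped to its core, that reachability query looks like this (illustrative Python; the edge kinds and type names here are invented, and the real graph spans 9 edge kinds):

```python
from collections import deque

def reachable_producers(edges, target):
    """edges: list of (src_type, edge_kind, dst_type) triples.
    Returns the set of types from which `target` is reachable."""
    # Reverse adjacency: which types can produce each type in one step?
    rev = {}
    for src, _kind, dst in edges:
        rev.setdefault(dst, []).append(src)
    # BFS backwards from the target type.
    seen, queue = {target}, deque([target])
    while queue:
        ty = queue.popleft()
        for src in rev.get(ty, []):
            if src not in seen:
                seen.add(src)
                queue.append(src)
    seen.discard(target)
    return seen

edges = [
    ('User', 'field_access', 'str'),       # user.name : str
    ('Session', 'method_return', 'User'),  # session.current() : User
    ('int', 'builtin_coerce', 'float'),
]
print(sorted(reachable_producers(edges, 'str')))  # ['Session', 'User']
```

The reachable set is then what gets lowered into token masks: any expression rooted in a type outside the set cannot produce the target and is masked off.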

Fill-in-the-middle

IDE completions via grammar quotienting: left-quotient by the prefix, right-quotient by the suffix, generate only in the residual. Hole scale (expression through module) maps to constraint intensity.
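The quotienting idea is easiest to see on a finite language rather than a grammar (a toy illustration, not the actual grammar machinery): left-quotient by the prefix keeps only the completions of strings that start with it, right-quotient by the suffix trims from the other end, and generation happens only in what remains.

```python
def left_quotient(language, prefix):
    # {s : prefix + s is in the language}
    return {s[len(prefix):] for s in language if s.startswith(prefix)}

def right_quotient(language, suffix):
    # {s : s + suffix is in the language}
    return {s[:-len(suffix)] if suffix else s
            for s in language if s.endswith(suffix)}

lang = {"let x = 1;", "let x = y + 1;", "return x;"}
residual = right_quotient(left_quotient(lang, "let x = "), ";")
print(sorted(residual))  # ['1', 'y + 1']
```

On a real grammar the same operations are performed symbolically, so the residual is itself a grammar the generator can enforce token by token.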

Homer integration

Cross-file intelligence from Homer scope graphs: name resolution, call graph context (upstream callers + downstream callees), four-quadrant salience scoring, temporal analysis, convention mining. All optional. The system degrades gracefully without it.

14 languages

Kotlin, C#, Ruby, PHP, and Swift join the existing 9. 383 patterns across all extractors, all supporting full CLaSH domain compilation.

sglang backend

OpenAI-compatible endpoint with constraint_spec extension field. Backend auto-detection: sglang if configured, Modal as fallback. New export-spec command for one-shot constraint pipeline.

Eval framework

Multi-sample pass@k with statistical significance testing. 24 task categories, paired constrained vs. unconstrained comparison, batch evaluation.
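For reference, the standard unbiased pass@k estimator that multi-sample evals of this kind typically use: with n samples of which c are correct, pass@k = 1 - C(n-c, k) / C(n, k). Whether Ananke's framework uses exactly this formula is an assumption here.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k draws (without replacement)
    from n samples, c of them correct, is correct."""
    if n - c < k:
        return 1.0  # too few failures to fill all k slots
    return 1.0 - comb(n - c, k) / comb(n, k)

print(round(pass_at_k(10, 3, 1), 4))  # 0.3
print(round(pass_at_k(10, 3, 5), 4))
```

Averaging this estimator over tasks, for paired constrained and unconstrained runs, gives the quantities a significance test can then compare.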

Numbers

  • Tests: 301 to 617 (473 Zig + 144 Rust), zero failures
  • Patterns: 101 across 5 languages to 383 across 14
  • Tree-sitter integration fully working (was pending in v0.1.0)

Full changelog: v0.1.0...v0.2.0

v0.1.0 - Foundation

04 Dec 03:48


Ananke v0.1.0 - Foundation

Release Date: December 2025

The initial release of Ananke - a constraint-driven code generation system that transforms AI from probabilistic guessing into controlled search through valid program spaces.

Highlights

  • Constraint Extraction (Clew): 101+ patterns for TypeScript/Python
  • Constraint Compilation (Braid): JSON Schema, Grammar, Regex, Token Masks
  • Orchestration (Maze): Rust-based async orchestration with Zig FFI
  • Modal Inference: vLLM + llguidance deployment (~50μs/token overhead)
  • CLI: 6 commands for extraction, compilation, generation, validation
  • Security: OWASP Top 10 compliance, path traversal protection, API key zeroing

Performance

  • Constraint extraction: ~10ms
  • Constraint compilation: ~1ms
  • Token-level enforcement: ~50μs/token
  • Cache hit latency: ~5-15μs

Test Coverage

  • 301 tests passing (100% pass rate)
  • Zero memory leaks
  • 23 security edge case tests

See RELEASE_NOTES.md for full details.