Naftiko Framework is the engine for Spec-Driven Integration. Capabilities are declared entirely in YAML — no Java required. The framework parses them and exposes them via MCP, SKILL, or REST servers.
- Language: Java 21, Maven build system
- Specification: `src/main/resources/schemas/naftiko-schema.json` — keep it as a first-class citizen in your context
- Wiki: https://github.com/naftiko/framework/wiki (Specification, Tutorial, Use Cases, FAQ)
| Path | Purpose |
|---|---|
| `src/main/resources/schemas/naftiko-schema.json` | Naftiko JSON Schema (source of truth) |
| `src/main/resources/schemas/examples/` | Capability examples (`cir.yml`, `notion.yml`, `skill-adapter.yml`, ...) |
| `src/main/resources/tutorial/` | Shipyard Track tutorial (`step-1-shipyard-` to `step-10-shipyard-`) |
| `src/test/resources/` | Test fixtures (not examples) |
| `src/main/resources/scripts/pr-check-wind.ps1` | Local pre-PR validation (Windows) |
| `src/main/resources/scripts/pr-check-mac-linux.sh` | Local pre-PR validation (Unix/macOS) |
| `CONTRIBUTING.md` | Full contribution workflow |
All commands must be run from the repository root (`framework/`).
```bash
# Run unit tests (standard local workflow — requires JDK 21)
mvn clean test --no-transfer-progress

# Build Docker image (Maven runs inside Docker — no local Maven needed)
docker build -f src/main/resources/deployment/Dockerfile -t naftiko .

# Build native CLI binary (requires GraalVM 21 — triggered by version tags in CI)
mvn -B clean package -Pnative

# Pre-PR validation (Windows)
.\src\main\resources\scripts\pr-check-wind.ps1

# Pre-PR validation (Unix)
bash ./src/main/resources/scripts/pr-check-mac-linux.sh
```

Before contributing, ensure your local environment has at least JDK 21 and Maven.
Required: JDK 21, Maven 3.9+
```bash
java -version  # must be 21+
mvn -version   # must be 3.9+
```

Trivy and Gitleaks are not required locally — they run automatically in CI. The pr-check scripts use them if installed, but `mvn clean test` is enough to validate your changes before a PR.
If you still want to run the full pre-PR checks locally, install Trivy and Gitleaks.
Java — follows Google Style. Configure VS Code with Language Support for Java by Red Hat and apply settings from naftiko/code-standards — java.
Method visibility — prefer package-private (no modifier) over private for methods that implement non-trivial logic. This allows direct unit testing from the same package without reflection. Reserve private for truly internal helpers that are trivially covered by public API tests (e.g. one-liner formatters, simple getters).
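As a minimal sketch of the visibility rule above (the class and method names are invented for illustration, not taken from the codebase):

```java
// Hypothetical class, illustrative only, not from the Naftiko codebase.
public class SlugConverter {

    // Package-private: non-trivial logic, unit-testable from the same
    // package without reflection.
    String toKebabCase(String input) {
        return input.trim()
                .replaceAll("([a-z])([A-Z])", "$1-$2")
                .replaceAll("[\\s_]+", "-")
                .toLowerCase();
    }

    // private is fine here: a trivial helper covered through the public API.
    private boolean isBlank(String s) {
        return s == null || s.isBlank();
    }

    public String convert(String input) {
        return isBlank(input) ? "" : toKebabCase(input);
    }
}
```

A test in the same package can call `toKebabCase` directly, while `isBlank` stays covered through `convert`.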
Never modify CI/CD workflows (.github/workflows/), security configs, or branch protection rules.
When writing or generating tests, follow these rules:
Do:
- Test behavior through the public API — assert observable outcomes, not implementation details
- When a method is not accessible from a test, make it package-private in the production code (remove `private`) rather than using reflection — this is the correct fix
- Write one focused assertion per test, or group only closely related assertions in a single test
- Name tests in the form `methodShouldDoSomethingWhenCondition`
Don't:
- Use `getDeclaredMethod`/`setAccessible(true)` to access non-public methods
- Write tests whose only purpose is to reach a coverage threshold — every test must document a real behavior or guard against a real regression
- Name tests `shouldCoverXxxBranches` or similar — names must describe behavior, not implementation structure
- Group unrelated scenarios in a single test method — split them into separate `@Test` methods
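The naming and grouping rules above can be sketched as follows. The class under test and its behavior are invented for illustration; in the repository these would be JUnit 5 `@Test` methods, but plain asserts keep the example self-contained and runnable:

```java
// Hypothetical sketch, not from the Naftiko codebase.
class PriceFormatter {
    // Package-private on purpose: directly callable from a same-package test.
    String format(double amount) {
        if (amount < 0) {
            return "-";
        }
        // Locale.ROOT keeps the decimal separator stable across machines.
        return String.format(java.util.Locale.ROOT, "%.2f EUR", amount);
    }
}

class PriceFormatterTest {
    // Name follows methodShouldDoSomethingWhenCondition.
    void formatShouldUseTwoDecimalsWhenAmountIsPositive() {
        assert new PriceFormatter().format(3.5).equals("3.50 EUR");
    }

    // Unrelated scenario in its own method, not grouped with the one above.
    void formatShouldReturnDashWhenAmountIsNegative() {
        assert new PriceFormatter().format(-1).equals("-");
    }
}
```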
When designing or modifying a Capability:
Do:
- Keep the Naftiko Specification and the Naftiko Rules as first-class citizens — the schema enforces structure, the rules enforce cross-object consistency, quality, and security
- Look at `src/main/resources/schemas/examples/` for patterns before writing new capabilities
- When renaming a consumed field for a lookup `match`, also add a `ConsumedOutputParameter` on the consumed operation to map the raw field name to a kebab-case name — otherwise the lookup has nothing to match against
- Use `aggregates` to define reusable domain functions when the same operation is exposed through multiple adapters (REST and MCP) — this follows the DDD Aggregate pattern: one definition, multiple projections
- Declare `semantics` (safe, idempotent, cacheable) on aggregate functions to describe domain behavior — the engine derives MCP `hints` automatically
- Override only adapter-specific fields when using `ref` (e.g., `method` for REST, `hints` for MCP) — let the rest be inherited from the function
Don't:
- Expose an `inputParameter` that is not used in any step
- Declare consumed `outputParameters` that are not used in the exposed part
- Prefix variables with the capability/namespace/resource name — they are already scoped, unless disambiguation is strictly needed
- Set a `type` property for `inputParameter` in a REST `consumes` block
- Use an `integer` type instead of a `number` type for `outputParameters` in an MCP `exposes` block
- Bind two `exposes` adapters (e.g. `skill` and `rest`) to the same port
- Use `items:` or nested `type:` on `McpToolInputParameter` for array-typed parameters — only `name`, `type`, `description`, and `required` are allowed
- Use YAML list syntax (`- type: object`) for `items` in `MappedOutputParameterArray` — `items` is a single `MappedOutputParameter` object, not an array
- Use snake_case identifiers where the schema expects `IdentifierKebab` (e.g. `match`, `name`, `namespace`) — use kebab-case
- Use `operation` instead of `call` in steps — `operation` is not a valid property in `OperationStepCall`, only `call` is
- Use `MappedOutputParameter` (with `mapping`, no `name`) when the tool/operation uses `steps` — use `OrchestratedOutputParameter` (with `name`, no `mapping`) instead
- Use typed objects for lookup step `outputParameters` — they are plain string arrays of field names to extract (e.g. `- "fullName"`)
- Put a `path` property on an `ExposedOperation` — extract multi-step operations with a different path into their own `ExposedResource`
- Duplicate a full function definition inline on both MCP tools and REST operations — use `aggregates` + `ref` instead
- Chain `ref` through multiple levels of aggregates — `ref` resolves to a function in a single aggregate, not transitively
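The `aggregates` + `ref` pattern above can be sketched in YAML. This is a purely hypothetical fragment: the field names and nesting are assumptions made for illustration and are not validated against `naftiko-schema.json`; treat the schema and the files under `src/main/resources/schemas/examples/` as the only authoritative structure.

```yaml
# HYPOTHETICAL sketch, structure not taken from naftiko-schema.json.
# One function defined once in an aggregate...
aggregates:
  - name: customer
    functions:
      - name: get-customer
        description: Fetch a customer by id
        semantics:            # domain behavior; the engine derives MCP hints
          safe: true
          idempotent: true

# ...then projected onto several adapters, overriding only
# adapter-specific fields.
exposes:
  rest:
    operations:
      - ref: get-customer
        method: GET           # REST-specific override
  mcp:
    tools:
      - ref: get-customer     # hints derived from semantics, nothing overridden
```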
See CONTRIBUTING.md for the full workflow. Key rules:
- Open an Issue before starting work
- Branch from `main`: `feat/`, `fix/`, or `chore/` prefix
- Use Conventional Commits: `feat:`, `fix:`, `chore:` — no scopes for now
- AGENTS.md improvements are `feat:`, not `chore:` — they add value to the agent workflow
- Rebase on `main` before PR — linear history, no merge commits
- One logical change per PR — keep it atomic
- CI must be green (build, tests, schema validation, Trivy, Gitleaks)
- Always read the repository templates before creating issues or PRs:
  - Issues: `.github/ISSUE_TEMPLATE/` — use the matching template and fill in all required fields
  - PRs: `.github/PULL_REQUEST_TEMPLATE.md` — follow the structure exactly, do not improvise
- Do not use `git push --force` — use `--force-with-lease`
git push --force— use--force-with-lease - When the user corrects a mistake, note it immediately so the insight is not lost — see Self-Improvement
- When the workflow is complete, review any noted corrections and propose rule updates if warranted
When you identify a bug — whether discovered during development, debugging, or user-reported — follow these steps in order before writing any fix:
Create a GitHub issue using the Bug Report template (`.github/ISSUE_TEMPLATE/bug_report.yml`).
Fill in all required fields: component, description (actual vs expected), steps to reproduce, root cause if known, proposed fix.
If the fix was created or assisted by an AI agent, fill in the Agent Context block.
If you cannot create the issue directly (e.g. no `gh` CLI available, no API token), provide the user with all the elements needed to create it manually: suggested title, label, filled-in template body ready to paste. Do not proceed to step 2 until the user confirms the issue number.
If there is any work in progress on the current branch (modified files, untracked files), save it first so nothing is lost and the user can return to it after the fix:
```bash
git stash push -m "wip: <description>" -- <only the relevant files>
# or, if everything on the branch belongs to the in-progress work:
git stash push -m "wip: <description>"
```

Note the stash ref or branch name so you can restore it later with `git stash pop` or `git checkout <branch>`.
Then create the fix branch from an up-to-date `main`:

```bash
git checkout main
git pull origin main
git checkout -b fix/<short-description>
```

Never start a fix branch from a feature branch or a stale local `main`.
When the fix is merged, remind the user to switch back to their original branch and restore the stash if needed.
For every bug fix, two tests are required:
Unit test — targets the smallest unit of code that contains the bug (method or class level). Place it in the test class corresponding to the fixed class (e.g. `ConverterTest`, `ResolverTest`). If the class has no test file yet, create one. If a test already covers the scenario but is wrong, fix the test first and explain why in a comment.
Integration test — validates the fix end-to-end, typically loading a YAML capability fixture and exercising the full chain (deserialization → engine → output). Place the fixture in `src/test/resources/` and the test class in the package closest to the integration point (e.g. `io.naftiko.engine.exposes.mcp`).
Run the full test suite before committing:

```bash
mvn test
```

Ordering:
- Write the tests first — only modify files under `src/test/` (and `src/test/resources/`)
- Run `mvn test` and confirm the new tests fail (proving the bug exists)
- Only then implement the fix in `src/main/`
- Run `mvn test` again and confirm all tests pass
Do not edit production code (src/main/) and test code (src/test/) in the same phase.
All existing tests must stay green. If a pre-existing test fails, investigate before touching it.
When the user corrects a workflow step, note it immediately so the insight is not lost — see Self-Improvement.
Once the fix is merged (or the PR is open and CI is green), review any corrections the user made during the workflow and evaluate them against the Self-Improvement criteria. Propose rule updates only at this point.
When a user corrects agent-generated code or workflow, note it immediately so the insight is not lost, then resume the current workflow without interruption. Do not propose a rule change mid-workflow — wait until the workflow is complete (see Bug Workflow Step 4, Contribution Workflow end-of-list).
Suggest an AGENTS.md update only when all three conditions are met:
- The corrected code or action was generated by the agent (not pre-existing code being refactored)
- The correction is structural — it targets a convention, pattern, or style choice (e.g. visibility, naming, test design, workflow step) — not a one-off logic bug or domain-specific mistake
- The correction is generalizable — the same mistake could plausibly recur in a different file or context
When all three conditions are met, propose the specific Do/Don't entry and the section it belongs to. Do not apply it — let the user decide.
When the conditions are not met, do not propose anything — avoid noise. Most corrections are one-off and do not need a rule.
For reference, the Test Writing Rules and Method Visibility sections in this file were both added through this process.