This guide walks you through setting up the Kilo Code monorepo for local development on macOS.
You need the following system-level tools installed before proceeding. If you already have any of these, skip the relevant step.
Install the Xcode Command Line Tools:

```bash
xcode-select --install
```

Install Homebrew from https://brew.sh or from the GitHub releases.
If Homebrew isn't on your PATH yet:
```bash
echo 'export PATH=/opt/homebrew/bin:$PATH' >> ~/.zshrc
source ~/.zshrc
```

Install Git and Git LFS:

```bash
brew install git git-lfs
git lfs install --skip-repo
```

The `--skip-repo` flag avoids conflicts with the project's Husky hooks. Git LFS is used for large binary files (videos).
The project requires Node.js 24.14.1 locally (see `.nvmrc`) and accepts any Node.js 24.x runtime via the `engines` field in `package.json`.
```bash
brew install nvm
mkdir -p ~/.nvm
```

Add the following to your `~/.zshrc`:

```bash
# nvm (Node Version Manager)
export NVM_DIR="$HOME/.nvm"
[ -s "/opt/homebrew/opt/nvm/nvm.sh" ] && \. "/opt/homebrew/opt/nvm/nvm.sh"
[ -s "/opt/homebrew/opt/nvm/etc/bash_completion.d/nvm" ] && \. "/opt/homebrew/opt/nvm/etc/bash_completion.d/nvm"
```

Then reload your shell:

```bash
source ~/.zshrc
```

The project uses pnpm as its package manager (version pinned in the `packageManager` field of `package.json`).
```bash
brew install pnpm
```

Install Docker Desktop either from the website or via Homebrew:

```bash
brew install --cask docker
```

**Important:** Open Docker Desktop at least once after installation — it configures the CLI tools needed for `docker compose`.
Used to pull environment variables from the Vercel project:

```bash
pnpm add -g vercel
```

Install the Stripe CLI:

```bash
brew install stripe/stripe-cli/stripe
```

Clone the repository and activate the pinned Node version:

```bash
git clone git@github.com:Kilo-Org/cloud.git
cd cloud
nvm install
nvm use
```

Install dependencies and pull LFS files:

```bash
pnpm install
git lfs pull
```

The project pulls environment variables from Vercel. Run these commands interactively (each will prompt for browser-based authentication):

```bash
vercel login
vercel link --project kilocode-app
vercel env pull
```

This creates `.env.local` with all required environment variables.
The KiloClaw pages (`/claw/*`) render the Pylon support chat widget, which requires two env vars to activate:

- `NEXT_PUBLIC_PYLON_APP_ID` — the Pylon app ID from the Pylon dashboard
- `PYLON_IDENTITY_SECRET` — the identity verification secret used to HMAC-sign user emails

Both are already present in Vercel and pulled by `vercel env pull`. If either is missing, the widget is silently skipped, so local dev continues to work without Pylon configured.
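Identity verification of this kind typically means computing an HMAC-SHA256 digest of the user's email with the secret. As a rough sketch (the exact input format and encoding used by the app are assumptions here), you can reproduce such a digest from a shell with `openssl`:

```bash
# Sketch: HMAC-SHA256 of an email address, hex-encoded.
# The exact signing input is an assumption; the app does the real signing server-side.
email='user@example.com'
printf '%s' "$email" | openssl dgst -sha256 -hmac "$PYLON_IDENTITY_SECRET" | awk '{print $NF}'
```

This can be handy for spot-checking a signature against what the widget receives.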
The project uses PostgreSQL 18 with pgvector, running via Docker. The compose file is at `dev/docker-compose.yml`:

```bash
docker compose -f dev/docker-compose.yml up -d
```

This starts a PostgreSQL container on port 5432 with:

- User: `postgres`
- Password: `postgres`
- Database: `postgres`
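Those defaults combine into the local connection string, which is handy to keep around when poking at the database directly (the compose service name `postgres` in the commented `exec` hint is an assumption about the compose file):

```bash
# The compose defaults above yield this DSN:
DB_URL="postgres://postgres:postgres@localhost:5432/postgres"
echo "$DB_URL"
# For a SQL shell inside the container (assumes the compose service is named "postgres"):
# docker compose -f dev/docker-compose.yml exec postgres psql -U postgres
```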
Apply the database migrations:

```bash
pnpm drizzle migrate
```

Re-run this every time you pull new migrations from the repository.
Start the development environment:

```bash
pnpm dev:start
```

This launches a tmux dashboard with the Next.js app and related services. The web app will be available at http://localhost:3000.

To stop all services:

```bash
pnpm dev:stop
```

Run the test suite to confirm everything is working:

```bash
pnpm test
```

All tests should pass against the local PostgreSQL database.
| Command | Description |
|---|---|
| `pnpm dev:start` | Start all local services in a tmux dashboard |
| `pnpm dev:stop` | Stop the tmux session and all services |
| `pnpm dev:env` | Sync `.dev.vars` files from `.env.local` (see Worker `.dev.vars` setup) |
| `pnpm test` | Run the Jest test suite |
| `pnpm typecheck` | Run the TypeScript type checker |
| `pnpm lint` | Lint all source files |
| `pnpm format` | Format all supported files with oxfmt |
| `pnpm format:changed` | Format only files changed since `main` |
| `pnpm validate` | Run typecheck, lint, and tests |
| `pnpm drizzle migrate` | Apply pending database migrations |
| `pnpm drizzle generate` | Generate a new migration after schema changes |
| `pnpm --filter web stripe` | Start Stripe webhook forwarding to localhost |
| `pnpm test:e2e` | Run Playwright end-to-end tests |
- Direct commits to `main` are blocked by a git hook. Always work on a feature branch.
- The pre-push hook runs `pnpm format:check`, `lint`, and `typecheck --changes-only` in parallel.
To test Stripe integration locally:

- Log in to the Stripe CLI: `stripe login`
- Start the webhook forwarder: `pnpm stripe`
- Copy the webhook signing secret from the CLI output
- Add it to `.env.development.local`: `STRIPE_WEBHOOK_SECRET="whsec_..."`
- Edit the schema in `packages/db/src/schema.ts`
- Generate a migration: `pnpm drizzle generate`
- Apply it: `pnpm drizzle migrate`
If you prefer Nix, the project includes a `flake.nix` with a dev shell that provides all required tools. With direnv installed, the `.envrc` file will automatically activate the Nix environment when you enter the project directory.
In local development, you can sign in without real OAuth by navigating to:

```
http://localhost:3000/users/sign_in?fakeUser=<email>
```

This creates a local-only user with the `@@fake@@` hosted domain. You can append `callbackPath` to go directly to a page after login:

```
http://localhost:3000/users/sign_in?fakeUser=someone@example.com&callbackPath=/profile
```
Some features (e.g., admin panels) are only visible to users with `is_admin = true`. The admin flag is set at user-creation time based on the email address:

- Real OAuth: emails ending in `@kilocode.ai` with the `kilocode.ai` hosted domain are admins.
- Fake login: emails must end in `@admin.example.com` to get admin access.

To sign in as a fake admin:

```
http://localhost:3000/users/sign_in?fakeUser=yourname@admin.example.com
```

A non-`@admin.example.com` email (e.g., `someone@kilocode.ai`) used via fake login will not be an admin, because the fake-login provider sets `hosted_domain` to `@@fake@@`, not `kilocode.ai`.
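The fake-login admin rule boils down to a suffix check on the email. A minimal sketch (illustrative only; `is_fake_admin` is a hypothetical helper, and the real check lives in the app's user-creation code):

```bash
# Sketch of the fake-login admin rule: only @admin.example.com emails become admins.
is_fake_admin() {
  case "$1" in
    *@admin.example.com) echo "admin" ;;
    *) echo "not-admin" ;;
  esac
}

is_fake_admin 'yourname@admin.example.com'  # admin
is_fake_admin 'someone@kilocode.ai'         # not-admin
```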
New organizations start with a 30-day enterprise trial. After expiry, the UI progressively locks down: first a soft lock (read-only with dismiss option), then a hard lock (no access without subscribing). This can be inconvenient in local development.
The easiest approach is to use the pre-configured dev organization. While signed in, run the following in the browser console (the endpoint is POST-only):

```js
fetch('http://localhost:3000/api/dev/create-kilocode-org', { method: 'POST' })
  .then(r => r.json())
  .then(console.log);
```

This creates a "Kilocode Local" org (id: `00000000-0000-0000-0000-000000000000`) with:

- `plan: 'enterprise'`
- `require_seats: false` — bypasses all trial/subscription checks
- `free_trial_end_at: '9999-12-31'` — effectively never expires
If you've already created an organization and want to prevent its trial from expiring, you have two options:

**Option A: Set `require_seats` to `false` in the database**

This is the most reliable bypass — it short-circuits all trial enforcement (server-side middleware, client-side UI, and login redirects):

```sql
UPDATE organizations SET require_seats = false WHERE id = '<your-org-id>';
```

**Option B: Use the admin panel**

- Sign in as a fake admin (`yourname@admin.example.com`)
- Open the admin panel from the account dropdown in the top-right corner
- Find your organization and either:
  - Set `free_trial_end_at` to a far-future date
  - Toggle on `suppress_trial_messaging` (hides all trial UI)
Trial status is checked at three layers:

| Layer | Mechanism | Bypassed by `require_seats = false` |
|---|---|---|
| tRPC mutations | `requireActiveSubscriptionOrTrial()` middleware throws `FORBIDDEN` on hard expiry | Yes |
| Login redirect | `isOrganizationHardLocked()` redirects to `/profile` | Yes |
| Client UI | `OrganizationTrialWrapper` shows banners and lock dialogs | Yes |
A script creates 6 organizations with different trial states for UI testing:

```bash
pnpm --filter web script:run db create-trial-test-orgs yourname@admin.example.com
```

The application consists of the Next.js app plus several Cloudflare Worker services (see `pnpm-workspace.yaml`). In local development, most day-to-day work only requires the Next.js app and PostgreSQL — workers are started individually as needed.
AI inference works locally without any extra services. The Next.js app includes an OpenRouter proxy route (`/api/openrouter/[...path]`) that calls real AI providers using API keys from `.env.local`. There are no mocks or local stubs — all inference hits real APIs (OpenRouter, OpenAI, Anthropic, Mistral, etc.).
Each worker in the workspace can be started individually with `wrangler dev` (or `pnpm dev`) from its directory. Workers communicate with Next.js over HTTP using env vars like `CLOUD_AGENT_API_URL`, `CODE_REVIEW_WORKER_URL`, etc. Dev ports are defined in each worker's `wrangler.jsonc`.
The easiest way to run workers is with pnpm dev:start (see Common Development Commands), which starts groups of related services in a tmux dashboard.
Most workers require a `.dev.vars` file with secrets like `NEXTAUTH_SECRET` and `INTERNAL_API_SECRET`. A script automates this:

```bash
pnpm dev:env
```

The script (`dev/local/env-sync/`) scans every `.dev.vars.example` in the repo, resolves each variable's value, and writes (or patches) the corresponding `.dev.vars` file. Before applying, it shows a diff of what will change and asks for confirmation.
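For unannotated keys, resolution amounts to copying a matching value out of `.env.local`. A minimal sketch of that one step (the real script additionally handles annotations, diffing, and confirmation; `get_env` is a hypothetical helper):

```bash
# Sketch: read one key's value from an env file, the way unannotated keys are resolved.
get_env() { grep "^$1=" "$2" | head -n 1 | cut -d '=' -f 2-; }

# Demo with a throwaway file standing in for .env.local:
printf 'INTERNAL_API_SECRET=abc123\n' > /tmp/demo.env.local
get_env INTERNAL_API_SECRET /tmp/demo.env.local   # abc123
```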
Values are resolved using annotations in `.dev.vars.example` comment lines:

| Annotation | What it does | Example |
|---|---|---|
| (none) | Copies the value from `.env.local` if the key matches, otherwise keeps the default | `INTERNAL_API_SECRET=your-secret-here` |
| `# @url <service>` | Builds `http://localhost:<port>` from the service's dev port in `wrangler.jsonc` | `# @url nextjs` → `http://localhost:3000` |
| `# @from <KEY>` | Copies the value of a different key from `.env.local` | `# @from CODE_REVIEW_WORKER_AUTH_TOKEN` |
| `# @pkcs8` | Copies from `.env.local` and converts PKCS#1 PEM keys to PKCS#8 format | `# @pkcs8` above a private key var |
For example, in a `.dev.vars.example`:

```
# @url nextjs
API_URL=http://localhost:3000

# @from CODE_REVIEW_WORKER_AUTH_TOKEN
BACKEND_AUTH_TOKEN=your-backend-auth-token
```

The `@url` annotation accepts multiple comma-separated services (e.g., `# @url svc-a,svc-b`) and appends path suffixes (e.g., `# @url nextjs/api/events`).

Run `pnpm dev:env` again after pulling changes that add new env vars to any `.dev.vars.example`.
- Service bindings between workers don't function in local `wrangler dev`. This affects chains like session-ingest → o11y, webhook-agent → cloud-agent, and app-builder → db-proxy/git-token-service.
- Webhook → KiloClaw Chat triggers require the KiloClaw worker running on port 8795. The webhook worker calls it via `KILOCLAW_API_URL` (HTTP, not a service binding) to deliver messages to Stream Chat. Stream Chat credentials (`STREAM_CHAT_API_KEY`, `STREAM_CHAT_API_SECRET`) must be in `kiloclaw/.dev.vars`.
- Cloudflare Containers (used by cloud-agent, cloud-agent-next, app-builder) always run on Cloudflare's remote infrastructure, even in dev mode. Purely local execution is not possible.
- Cloudflare-specific features like Analytics Engine, Pipelines, and dispatch namespaces don't work locally.
The core Next.js app handles profiles, organizations, usage tracking, billing, and the OpenRouter inference proxy without any workers. Features that require a specific worker (e.g., Cloud Agent sessions, code reviews, app builder) will fail gracefully or show connection errors if that worker isn't running.
If you use git worktree to run multiple checkouts simultaneously, set the `KILO_PORT_OFFSET` environment variable to avoid port collisions between worktrees:

```bash
# Automatic offset derived from the worktree directory name (0 for the primary worktree):
export KILO_PORT_OFFSET=auto

# Or a fixed numeric offset (added to every service port):
export KILO_PORT_OFFSET=100
```

With `auto`, the primary worktree gets offset 0 (default ports), and secondary worktrees get a deterministic offset based on the directory name. The offset is added to the Next.js port (3000), all worker dev ports, and the URLs generated by `pnpm dev:env`.
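For a numeric offset, the effect is plain addition to each base port. A sketch of that arithmetic (illustrative; `service_url` is a hypothetical helper, and the real resolution happens inside the dev tooling, which also handles `auto`):

```bash
# Sketch: apply a numeric KILO_PORT_OFFSET to a service's base dev port.
# Numeric offsets only; "auto" is resolved by the tooling before this arithmetic.
service_url() {  # usage: service_url <base-port> [path]
  offset="${KILO_PORT_OFFSET:-0}"
  echo "http://localhost:$(( $1 + offset ))${2:-}"
}

KILO_PORT_OFFSET=100
service_url 3000              # http://localhost:3100
service_url 8795 /api/events  # http://localhost:8895/api/events
```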
If you see errors about unsupported Node.js versions, ensure you're using the pinned Node 24 release:

```bash
nvm use
node --version   # Should output v24.14.1
```

If you see database connection errors, make sure the PostgreSQL container is running:

```bash
docker compose -f dev/docker-compose.yml up -d
docker ps | grep postgres
```

The connection string used by the app is `postgres://postgres:postgres@localhost:5432/postgres`.
The dev server won't start without environment variables. Run `vercel env pull` to create `.env.local`. If you don't have Vercel access yet, ask a team member for help.
This means your active Node.js version doesn't match the supported 24.x range in `package.json`. Switch to the pinned local version with `nvm use`.
If image/video files appear as small text files containing `oid sha256:...`, run:

```bash
git lfs pull
```
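Git LFS pointer files are small text files whose first line names the spec version, so a quick check can tell an un-pulled pointer from real content (a sketch; `is_lfs_pointer` is a hypothetical helper):

```bash
# Sketch: detect an un-pulled Git LFS pointer file by its spec-version first line.
is_lfs_pointer() {
  head -n 1 "$1" | grep -q '^version https://git-lfs\.github\.com/spec/v1$' && echo yes || echo no
}

# Demo with a fabricated pointer file:
printf 'version https://git-lfs.github.com/spec/v1\noid sha256:abc\nsize 123\n' > /tmp/demo-pointer
is_lfs_pointer /tmp/demo-pointer   # yes
```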