An almost-secure, containerized environment for running the pi coding agent. Designed for local execution with strict file-system isolation, privilege drop, and persistent storage.
## 1. Configuration

```bash
cp .env.example .env
# Edit .env with your GitHub token and Git identity
```

## 2. Build

Compiles the image from source and strips OS privilege-escalation binaries.

```bash
make build
```

## 3. Run

Starts the agent in interactive TUI mode.

```bash
make run
```

## Passing Arguments
Use the `run-args` target to pass specific flags, commands, or one-off prompts to the agent.

```bash
# Check version
make args="--version" run-args

# Trigger Copilot authentication
make args="/login" run-args

# Execute a direct prompt
make args="'Create a snake game in python'" run-args
```

## Maintenance & Debugging
```bash
# Access the container shell (runs as user 1000)
make shell

# Stop and remove running containers/networks
make clean

# Force rebuild the image without cache
make update
```

To run the agent completely offline using local models, configure the following files in your `.pi-data/agent/` directory:
`.pi-data/agent/models.json`

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://127.0.0.1:1337/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "gemma-4-26B-A4B-it-GGUF"
        }
      ]
    }
  }
}
```

`.pi-data/agent/settings.json`
```json
{
  "defaultProvider": "llama-cpp",
  "defaultModel": "gemma-4-26B-A4B-it-GGUF",
  "autocompleteMaxVisible": 7,
  "defaultThinkingLevel": "off"
}
```

This container implements a defense-in-depth architecture to sandbox the AI agent, ensuring it cannot leak credentials, modify its own access limits, or escalate privileges on your host machine.
The container uses a guardrail wrapper (`gh-guard.sh`) around the GitHub CLI. When `PARANOID_MODE=true` (set in `.env`), the agent is strictly blocked from executing dangerous repository and identity commands:
- Blocked: `gh auth`, `gh repo`, `gh secret`, `gh ssh-key`, `gh gpg-key`.
- This prevents a rogue agent from injecting a persistent backdoor key into your GitHub account.
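The guard logic boils down to checking the first subcommand against a blocklist before forwarding to the real binary. A minimal sketch of such a wrapper, assuming a hypothetical `gh-real` path and environment layout (the actual `gh-guard.sh` may differ):

```shell
#!/bin/sh
# gh-guard.sh (sketch) -- refuse blocked subcommands in paranoid mode,
# otherwise forward everything to the unwrapped CLI.
GH_REAL="${GH_REAL:-/usr/local/bin/gh-real}"   # path to the real gh binary (assumption)
BLOCKED_SUBCOMMANDS="auth repo secret ssh-key gpg-key"

guard() {
  # Returns 1 (block) when PARANOID_MODE=true and $1 is on the blocklist.
  [ "${PARANOID_MODE:-false}" = "true" ] || return 0
  for sub in $BLOCKED_SUBCOMMANDS; do
    if [ "$1" = "$sub" ]; then
      echo "gh-guard: 'gh $1' is blocked (PARANOID_MODE=true)" >&2
      return 1
    fi
  done
  return 0
}

# Dispatch only when invoked with arguments, so the file can also be sourced.
if [ "$#" -gt 0 ]; then
  guard "$1" || exit 77
  exec "$GH_REAL" "$@"
fi
```

Note that only the first argument is inspected; read-only commands such as `gh pr` or `gh issue` pass through untouched.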
Your `GITHUB_TOKEN` is never exposed in environment variables, where the agent could read it via `process.env`.

- The token is mapped as a Docker secret into RAM (`tmpfs`) and locked to host permissions `000`.
- The container runs as a standard user (UID 1000).
- A custom C binary (`gh-vault`) uses SetUID to briefly elevate to root, read the token, pass it to the GitHub CLI, and immediately drop privileges. The agent simply receives `Permission denied` if it attempts to read the file itself.
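As a sketch of how a token can reach a container as a file rather than an environment variable, Docker Compose's `secrets` mechanism looks roughly like this. The service and secret names are assumptions; the project's actual wiring may differ:

```yaml
services:
  pi-agent:
    user: "1000:1000"            # run as a standard, non-root user
    secrets:
      - source: github_token
        target: github_token     # surfaces at /run/secrets/github_token (tmpfs)
        mode: 0000               # unreadable to the agent; only a SetUID helper can open it

secrets:
  github_token:
    environment: GITHUB_TOKEN    # read from .env at compose time, never injected into the container env
```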
To prevent the agent from reading your Copilot `auth.json` or `.env` files, firewalls are implemented at both the OS and application layers:

- OS call firewall (`LD_PRELOAD`): A custom C library (`fs-vault.so`) intercepts `open()` and `fopen()` calls before they reach the kernel. If the agent spawns native child processes (such as `cat`, `grep`, or `python`) to snoop on config directories, the library forces an `EACCES` permission error.
- V8 application firewall: A Node.js monkeypatch (`app-firewall.js`) wraps the internal `fs` module and analyzes the execution stack trace in real time. If a file read or write originates from the AI agent's tool directory, it throws a hard `[SYSTEM BLOCK]`; only the core application (for example, the `/login` prompt) is allowed to touch credentials.
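Both layers ultimately enforce the same path policy: deny access to credential locations, pass everything else through. A minimal sketch of that check in shell (the real patterns live in the C and JS code; these globs are illustrative assumptions):

```shell
# Deny-list check conceptually shared by fs-vault.so and app-firewall.js.
# The glob patterns below are assumptions for illustration.
is_protected_path() {
  case "$1" in
    */.pi-data/agent/auth.json|*/.env)
      return 0 ;;  # protected: the firewall forces EACCES / [SYSTEM BLOCK]
    *)
      return 1 ;;  # unprotected: the call passes through
  esac
}
```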
During the Docker build phase, all native Linux privilege-escalation vectors are deleted from the image:

- Removed: `su`, `mount`, `passwd`, `chsh`, `login`, `newgrp`, `unshare`, etc.
- The SetUID/SetGID execution bits are globally stripped (`chmod a-s`) from all remaining binaries on the filesystem.
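A hardening layer like the one described could be expressed in a Dockerfile roughly as follows. This is a sketch: the exact binary paths vary by base image, and the project's real build steps are not shown here:

```dockerfile
# Remove classic privilege-escalation binaries, then strip SetUID/SetGID
# bits from everything that remains on the filesystem.
RUN rm -f /bin/su /bin/mount /usr/bin/passwd /usr/bin/chsh \
          /bin/login /usr/bin/newgrp /usr/bin/unshare \
 && find / -xdev -type f -perm /6000 -exec chmod a-s {} + || true
```

`find -perm /6000` matches any file with either the SetUID or SetGID bit set, so the `chmod a-s` sweep catches binaries the explicit `rm` list missed.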
- UID/GID mapping: The `Makefile` dynamically passes your host user ID and group ID into the container. Any files the agent writes to the `./workspace` mount will be owned by your host user, preventing root-permission lockouts.
- Anti-compilation: Writable temporary directories (`/tmp`, `/.npm`, `/.config`) are mounted as `tmpfs` with the `noexec` flag. This prevents the agent from downloading and executing statically compiled binaries to bypass the `LD_PRELOAD` firewall.
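In Compose terms, such `noexec` tmpfs mounts might look like the fragment below. Mount options in the short `tmpfs` syntax require a recent Compose version, and the size limit is an assumption:

```yaml
services:
  pi-agent:
    tmpfs:
      - /tmp:rw,noexec,nosuid,size=64m
      - /.npm:rw,noexec,nosuid
      - /.config:rw,noexec,nosuid
```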