# AIStation Monitor

A Bun-based telemetry and model dashboard for local LLM and AI chat systems.
bun-monitor is a lightweight, high-performance telemetry dashboard for the modern AI workstation. Built on the Bun runtime with TypeScript, it provides a unified "command center" view of your local LLM environment—streaming real-time NVIDIA GPU metrics, hardware thermals, and Ollama model status to a sleek Tailwind-powered UI. Whether you're monitoring a heavy inference load or checking system health over a Tailscale connection, this tool helps keep your Linux AI stack cool and responsive.
## Features

- Real-time NVIDIA GPU stats (fan, temp, power, memory, utilization)
- System thermals (CPU, SSD, VRM, pump, fans) via `sensors`
- Disk usage and load average
- Ollama model status (via Docker)
- Modern web UI (Tailwind CSS)
- Fast Bun TypeScript backend
- Docker-ready for easy deployment
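GPU metrics like these are typically collected by polling `nvidia-smi` in CSV query mode. The sketch below shows one way such output might be parsed; the `GpuStats` shape and field order are illustrative assumptions, not this project's actual code:

```typescript
// Hypothetical parser for one line of:
//   nvidia-smi --query-gpu=fan.speed,temperature.gpu,power.draw,memory.used,memory.total,utilization.gpu \
//              --format=csv,noheader,nounits
interface GpuStats {
  fanPercent: number;
  tempC: number;
  powerW: number;
  memUsedMiB: number;
  memTotalMiB: number;
  utilPercent: number;
}

function parseGpuCsv(line: string): GpuStats {
  // Fields arrive comma-separated in the order requested above (an assumption here).
  const [fan, temp, power, memUsed, memTotal, util] = line
    .split(",")
    .map((s) => parseFloat(s.trim()));
  return {
    fanPercent: fan,
    tempC: temp,
    powerW: power,
    memUsedMiB: memUsed,
    memTotalMiB: memTotal,
    utilPercent: util,
  };
}

// Example: a single GPU's CSV line.
const sample = "34, 61, 187.2, 10240, 24576, 93";
console.log(parseGpuCsv(sample).tempC); // 61
```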
## Quick Start

Install dependencies:

```sh
bun install
```

Run the server:

```sh
bun index.ts
```

Open http://localhost:4000 in your browser.
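A Bun backend like this can be as small as a single `Bun.serve` call routing `/api/stats` to a stats collector. This is a hedged sketch of the idea, not the project's actual `index.ts` — `collectStats` and its response shape are invented for illustration:

```typescript
// Sketch only: a tiny Bun-style HTTP server exposing a stats endpoint.
declare const Bun: any; // provided by the Bun runtime

// Stand-in for real nvidia-smi / sensors polling.
function collectStats() {
  return { gpu: { tempC: 61, utilPercent: 93 }, loadAvg: [0.42, 0.35, 0.3] };
}

function handle(req: Request): Response {
  const { pathname } = new URL(req.url);
  if (pathname === "/api/stats") {
    return Response.json(collectStats());
  }
  return new Response("Not found", { status: 404 });
}

// Start the server only when running under Bun.
if (typeof Bun !== "undefined") {
  Bun.serve({ port: 4000, fetch: handle });
  console.log("listening on http://localhost:4000");
}
```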
## Docker

Build and run with Docker Compose:

```sh
docker compose up -d --build
```

View logs:

```sh
docker compose logs -f
```

Run `sensors` inside the container:

```sh
docker exec -it bun-monitor bash
sensors
```

## Development

- Source code is mounted into the container for live development
- Edit TypeScript and HTML/CSS, then refresh the browser to see changes
- Console logs are visible via Docker logs
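The `sensors` output is easiest to consume programmatically in JSON mode (`sensors -j`). Below is a hedged sketch of walking that tree to collect temperature readings; the chip and label names are illustrative, since they vary by motherboard:

```typescript
// Hypothetical `sensors -j` fragment; real chip/label names differ per system.
const sensorsJson = `{
  "k10temp-pci-00c3": { "Tctl": { "temp1_input": 54.5 } }
}`;

// Recursively collect every "*_input" reading, keyed by its chip/label path.
function flattenReadings(
  obj: unknown,
  path: string[] = [],
  out: Record<string, number> = {}
): Record<string, number> {
  if (typeof obj === "number") {
    const key = path.join("/");
    if (key.endsWith("_input")) out[key] = obj;
  } else if (obj && typeof obj === "object") {
    for (const [k, v] of Object.entries(obj)) {
      flattenReadings(v, [...path, k], out);
    }
  }
  return out;
}

console.log(flattenReadings(JSON.parse(sensorsJson)));
// e.g. { "k10temp-pci-00c3/Tctl/temp1_input": 54.5 }
```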
## API Endpoints

- `/api/stats` — Combined telemetry (GPU, system, disk, models)
- `/api/fresh_stats` — Uncached fresh telemetry
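Clients can poll these endpoints with plain `fetch`. The JSON shape below is an assumed example — the exact payload depends on the backend's collectors:

```typescript
// Assumed (illustrative) shape of an /api/stats payload.
interface Stats {
  gpu: { tempC: number; utilPercent: number; memUsedMiB: number; memTotalMiB: number };
  loadAvg: number[];
}

// Render a one-line summary, e.g. for a dashboard header.
function summarize(s: Stats): string {
  const memPct = Math.round((100 * s.gpu.memUsedMiB) / s.gpu.memTotalMiB);
  return `GPU ${s.gpu.tempC}C, ${s.gpu.utilPercent}% util, VRAM ${memPct}%, load ${s.loadAvg[0]}`;
}

// In the browser or Bun:
// const stats: Stats = await (await fetch("http://localhost:4000/api/stats")).json();
// console.log(summarize(stats));

const sample: Stats = {
  gpu: { tempC: 61, utilPercent: 93, memUsedMiB: 10240, memTotalMiB: 24576 },
  loadAvg: [0.42, 0.35, 0.3],
};
console.log(summarize(sample)); // GPU 61C, 93% util, VRAM 42%, load 0.42
```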
## Requirements

- [Bun](https://bun.sh) runtime
- Docker (for containerized deployment)
- NVIDIA drivers and the NVIDIA Container Toolkit (for GPU stats)
- lm-sensors (for system thermals)
## License

MIT License
