Automatic transcription and meeting minutes generation from audio recordings of German municipal meetings.
*Screenshots: Upload · Processing · Assign segments to agenda items · Export meeting minutes*
| Requirement | Minimum | Recommended |
|---|---|---|
| Disk Space | 25 GB | 40 GB |
| RAM | 8 GB | 16 GB |
| Internet | Required for setup | Required for setup |
| Operating System | Windows 10/11, macOS 11+, Linux | - |
If you have an NVIDIA graphics card, the application can transcribe audio much faster. macOS users will use CPU mode (still works, just much slower).
Download the application from GitHub:
- Go to: https://github.com/aihpi/pilotproject-protokollierungsassistenz
- Click the green "Code" button
- Click "Download ZIP"
- Save the file to your computer (e.g., Downloads folder)
- Extract the ZIP file:
  - Windows: Right-click the ZIP file → "Extract All..." → Choose a location (e.g., Desktop or Documents)
  - macOS: Double-click the ZIP file to extract it
You should now have a folder called `protokollierungsassistenz-main` (or similar).
Docker is required to run the application. Download and install Docker Desktop:
| Operating System | Download Link |
|---|---|
| Windows | Download Docker Desktop for Windows |
| macOS | Download Docker Desktop for Mac |
| Linux | Download Docker Desktop for Linux |
After installation, start Docker Desktop and wait until it shows "Docker Desktop is running".
- Open the folder where you downloaded/extracted the application
- Find the file `setup.ps1`
- Right-click on it and select "Run with PowerShell"
- Follow the on-screen instructions
If you see a security warning, click "Run anyway" or "More info" → "Run anyway".
- Open Terminal (macOS: Applications → Utilities → Terminal)
- Navigate to the application folder: `cd /path/to/protokollierungsassistenz`
- Run the setup script: `./setup.sh`
- Follow the on-screen instructions
The setup will download the application images (~6 GB) and AI models (~5 GB). This may take 5-15 minutes depending on your internet speed.
You will see progress messages. When complete, your browser will open automatically.
Once setup is complete, the application is available at:
If you restart your computer, you need to start the application again:
- Start Docker Desktop (if not running)
- Run the setup script:
  - Windows: Right-click `setup.ps1` → "Run with PowerShell"
  - macOS/Linux: Open Terminal in the application folder and run `./setup.sh`
The script will check if the application is already running and open your browser automatically.
To stop the application and free up resources:
- Windows: `.\setup.ps1 stop`
- macOS/Linux: `./setup.sh stop`
| Command | Description |
|---|---|
| `./setup.sh status` | Check if services are running |
| `./setup.sh logs` | View live logs |
| `./setup.sh restart` | Restart the application |
Make sure Docker Desktop is started and shows "Running" status.
Free up at least 25 GB of disk space before running setup.
- Transcription on CPU is slower than GPU (this is normal)
- First transcription may take longer due to model loading
- Ensure Docker Desktop has enough memory allocated (8 GB+)
To see what's happening:

```shell
docker compose logs -f
```

Press Ctrl+C to stop viewing logs.
If something goes wrong and you want to start fresh:

```shell
docker compose down -v
./setup.sh   # or .\setup.ps1 on Windows
```
This application sends anonymous usage statistics to help us improve the tool.
What is collected:

- Audio duration and processing times
- Hardware info (GPU type, memory)
- Models used (Whisper, LLM)
- Number of agenda items
- Text lengths (transcript, protocol)
- System prompt used

What is never collected:

- Transcript or protocol content
- Names or personal data
- Audio files
If you have an NVIDIA GPU and want faster transcription:
- Install NVIDIA Container Toolkit
- Run the setup script - it will detect your GPU automatically
- Choose "Yes" when asked about GPU mode
macOS does not support NVIDIA GPUs.
Found a bug? Have a feature request? We'd love to hear from you!
- Go to: https://github.com/aihpi/pilotproject-protokollierungsassistenz/issues
- Click "New Issue"
- Include the following information:
- Your operating system (Windows/macOS/Linux)
- What you were trying to do
- What happened (error message, screenshot if possible)
- Steps to reproduce the problem
Have an idea for improving the application? Create an issue and describe:
- What feature you'd like to see
- Why it would be helpful for your work
This application provides a web-based workflow for generating meeting minutes (Sitzungsprotokolle) from audio recordings:
- Upload - Upload audio recording and enter agenda items (Tagesordnungspunkte/TOPs)
- Transcribe - Automatic transcription with speaker diarization using WhisperX + PyAnnote
- Assign - Manually assign transcript segments to each TOP
- Summarize - Generate summaries per TOP using an LLM (Qwen3 8B via Ollama)
- Export - Download the final meeting minutes
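To make the data flow through these steps concrete, the objects passed between them might look like the following sketch. This is an illustration only; the names `Segment` and `AgendaItem` are assumptions, and the real backend models in `app/backend` may differ.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One diarized transcript segment (WhisperX + PyAnnote output)."""
    start: float   # seconds
    end: float     # seconds
    speaker: str   # diarization label, e.g. "SPEAKER_00"
    text: str

@dataclass
class AgendaItem:
    """A Tagesordnungspunkt (TOP) with its assigned segments and summary."""
    title: str
    segments: list[Segment] = field(default_factory=list)
    summary: str = ""  # filled in by the LLM summarization step

# Assign step: attach transcript segments to a TOP before summarization.
top = AgendaItem("TOP 1: Haushaltsplan")
top.segments.append(Segment(0.0, 12.5, "SPEAKER_00", "Ich eröffne die Sitzung."))
```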
```
protokollierungsassistenz/
├── app/
│   ├── frontend/          # React + TypeScript web application
│   └── backend/           # FastAPI Python backend
├── scripts/               # Setup and utility scripts
├── .github/workflows/     # CI/CD for building Docker images
└── docker-compose.yml     # Production deployment
```
Docker images are automatically built and published to GitHub Container Registry:
- `ghcr.io/aihpi/pilotproject-protokollierungsassistenz/frontend:latest`
- `ghcr.io/aihpi/pilotproject-protokollierungsassistenz/backend:cpu-latest`
- `ghcr.io/aihpi/pilotproject-protokollierungsassistenz/backend:gpu-latest`
These images include all ML models pre-bundled, so no HuggingFace token is required for end users.
```shell
# macOS
brew install ollama

# Start Ollama server
ollama serve

# Pull the model (in another terminal)
ollama pull qwen3:8b
```

```shell
cd app/backend

# Install dependencies with uv
uv sync

# Set environment variables
export HF_TOKEN=your_huggingface_token

# Run development server
uv run uvicorn main:app --port 8010
```

The backend runs on http://localhost:8010.
```shell
cd app/frontend

# Install dependencies
npm install

# Run development server
npm run dev
```

The frontend runs on http://localhost:5173.
To build images locally (requires a HuggingFace token):

```shell
# CPU image
docker build --build-arg HF_TOKEN=$HF_TOKEN -t backend:cpu ./app/backend

# GPU image
docker build -f Dockerfile.gpu --build-arg HF_TOKEN=$HF_TOKEN -t backend:gpu ./app/backend
```

| Variable | Description | Default |
|---|---|---|
| `HF_TOKEN` | HuggingFace token (only for local builds) | - |
| `WHISPER_MODEL` | Whisper model size | `large-v2` |
| `WHISPER_DEVICE` | Device for inference (`cuda`, `cpu`, `auto`) | `auto` |
| `WHISPER_BATCH_SIZE` | Batch size for transcription | `16` |
| `WHISPER_LANGUAGE` | Language code | `de` |
| `LLM_BASE_URL` | Ollama API endpoint | `http://localhost:11434/v1` |
| `LLM_MODEL` | Model name for summarization | `qwen3:8b` |
| `TELEMETRY_WEBHOOK_URL` | Google Apps Script webhook URL for telemetry | (empty, disabled) |
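As an illustration, the backend configuration could read these variables with the table's defaults, as in the following sketch. The variable names and defaults come from the table above; how the actual settings code in `app/backend` is structured is an assumption.

```python
import os

# Defaults mirror the environment variable table; any value can be
# overridden by setting the variable before starting the backend.
WHISPER_MODEL = os.environ.get("WHISPER_MODEL", "large-v2")
WHISPER_DEVICE = os.environ.get("WHISPER_DEVICE", "auto")  # cuda | cpu | auto
WHISPER_BATCH_SIZE = int(os.environ.get("WHISPER_BATCH_SIZE", "16"))
WHISPER_LANGUAGE = os.environ.get("WHISPER_LANGUAGE", "de")
LLM_BASE_URL = os.environ.get("LLM_BASE_URL", "http://localhost:11434/v1")
LLM_MODEL = os.environ.get("LLM_MODEL", "qwen3:8b")
TELEMETRY_WEBHOOK_URL = os.environ.get("TELEMETRY_WEBHOOK_URL", "")  # empty = disabled
```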
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/api/transcribe` | POST | Upload audio and start transcription |
| `/api/transcribe/{job_id}` | GET | Get transcription job status |
| `/api/audio/{job_id}` | GET | Stream audio file |
| `/api/summarize` | POST | Generate summary for a TOP segment |
| `/api/extract-tops` | POST | Extract TOPs from PDF |
| `/api/telemetry/session-complete` | POST | Report session completion telemetry |
Frontend:
- React 19 with TypeScript
- Vite
- Tailwind CSS
Backend:
- FastAPI
- WhisperX (speech-to-text with word-level timestamps)
- PyAnnote (speaker diarization)
- Ollama with Qwen3 8B (summarization)
The AI Service Centre Berlin Brandenburg is funded by the Federal Ministry of Research, Technology and Space under the funding code 01IS22092.



