Development of the GSKI chatbot, a reliable knowledge assistant that supports the creation of security concepts in line with the BSI standards. It answers questions on IT-Grundschutz correctly and completely, and its source citations ensure transparency and knowledge transfer.
- Concise Answers with Source Citations: Instead of lengthy text responses, the chatbot provides precise jump links into the original IT-Grundschutz documents. Source passages are displayed side-by-side in the browser, ensuring transparency and allowing users to verify answers directly against the original material.
- Guided Topic Exploration via Follow-up Questions: The chatbot leverages the structure of the IT-Grundschutz to suggest relevant follow-up questions. Logical and hierarchical relationships between modules (Bausteine) and methods are reflected in the suggestions, helping users navigate complex topics systematically.
- User-Controlled Role and Topic Focus: Answers are tailored to the user's role (e.g. IT security officer, department head, IT operations) and to topics from previous conversations. This personalization is fully transparent and configurable via the settings panel.
- User Feedback: Thumbs up/down feedback on answers, persisted to PostgreSQL with CSV export for evaluation.
- Chat History & Export: Full conversation persistence with OpenAI-format JSON/JSONL export.
- Docker Compose Deployment: One-command setup with Chainlit, PostgreSQL, Qdrant, and auto-ingestion.
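The service topology described above (Chainlit, PostgreSQL, Qdrant) can be sketched as a minimal `docker-compose.yml`. Service names, images, and ports here are assumptions for illustration, not the project's actual compose file:

```yaml
services:
  chainlit:                      # chat UI and RAG backend (build path assumed)
    build: .
    ports:
      - "8000:8000"
    env_file: .env
    depends_on: [postgres, qdrant]
  postgres:                      # persists chat history and user feedback
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder; use a real secret
  qdrant:                        # vector store for the IT-Grundschutz embeddings
    image: qdrant/qdrant
```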
- Docker and Docker Compose
- A running LLM endpoint compatible with the OpenAI API (e.g. LiteLLM, Ollama)
- An embedding model accessible via the same endpoint
The LLM endpoint is configured via LITELLM_BASE_URL in the .env file. This works with any OpenAI-compatible API, including Ollama (e.g. LITELLM_BASE_URL=http://localhost:11434/v1).
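A minimal `.env` sketch for an Ollama backend; the model names below are placeholders, so substitute whatever models your endpoint actually serves:

```shell
# OpenAI-compatible endpoint (Ollama example)
LITELLM_BASE_URL=http://localhost:11434/v1
LITELLM_API_KEY=ollama            # Ollama does not check the key; any placeholder works
CHAT_MODEL=llama3.1               # placeholder chat model name
EMBED_MODEL=nomic-embed-text      # placeholder embedding model name
```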
Two authentication methods are supported:
- GitHub OAuth (recommended): Configure `OAUTH_GITHUB_CLIENT_ID` and `OAUTH_GITHUB_CLIENT_SECRET` in `.env` for single sign-on via GitHub.
- Password login: Set `CHAINLIT_AUTH_USERNAME` and `CHAINLIT_AUTH_PASSWORD` in `.env` for local admin access. Additional users can self-register via the built-in registration form.
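The corresponding `.env` entries for both methods look like this (all values are placeholders):

```shell
# Option A: GitHub OAuth
OAUTH_GITHUB_CLIENT_ID=your-client-id          # placeholder
OAUTH_GITHUB_CLIENT_SECRET=your-client-secret  # placeholder

# Option B: password login
CHAINLIT_AUTH_USERNAME=admin                   # placeholder
CHAINLIT_AUTH_PASSWORD=change-me               # placeholder; use a strong secret
```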
1. Clone the repository:

   ```bash
   git clone https://github.com/aihpi/pilotprojekt-GrundschutzKI.git
   cd pilotprojekt-GrundschutzKI
   ```

2. Configure and start the Chainlit app:

   ```bash
   cd apps/chainlit
   cp .env.example .env
   # Edit .env: set LITELLM_BASE_URL, LITELLM_API_KEY, CHAT_MODEL, EMBED_MODEL
   docker compose up -d --build
   ```

3. Access the application:

   - Chat UI: http://localhost:8000
For detailed configuration options, ingestion workflows, environment variables, and troubleshooting, see the Chainlit app README.
- Open the chat UI at http://localhost:8000 and log in with the credentials configured in `.env` (`CHAINLIT_AUTH_USERNAME`/`CHAINLIT_AUTH_PASSWORD`).
- Select a role profile in the settings sidebar to receive answers tailored to your perspective (e.g. IT security officer, IT operations, management).
- Ask questions about IT-Grundschutz — the system retrieves relevant passages and generates answers with source citations.
- Source citations: Click on source references (e.g. "Quelle 1: ...") to open the corresponding PDF at the cited page.
- Follow-up questions: Suggested follow-up questions appear as clickable buttons below each answer.
- Feedback: Use the thumbs up/down buttons to rate answer quality. Optionally add a comment.
- Chat commands: Type `/help` in the chat for available slash commands (history, export, keywords, prompt customization).
- Feedback export: Administrators can download all user feedback as CSV at `/export/feedback` (requires admin login).
- Chat export: Use `/export all` in the chat or the sidebar export button for OpenAI-format JSONL export.
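For reference, "OpenAI format" means each JSONL line holds one conversation as a `messages` array of role/content pairs. A hypothetical exported line (shown wrapped for readability; the actual export places each conversation on a single line, and the field contents here are invented):

```json
{"messages": [
  {"role": "user", "content": "Which Bausteine apply to IT operations?"},
  {"role": "assistant", "content": "Relevant modules include OPS.1.1 ... (Quelle 1)"}
]}
```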
For the full user and administration guide, see the Chainlit app README.
- Prototype / Vibe-Coded: This application was developed using AI-assisted "vibe coding" and must be hardened and security-reviewed before any production deployment.
- Local models: Local LLMs can be used via Ollama by pointing `LITELLM_BASE_URL` in the `.env` file to the Ollama endpoint (e.g. `http://localhost:11434/v1`). Adjust `CHAT_MODEL` and `EMBED_MODEL` accordingly.
This project is licensed under the MIT License.
The AI Service Centre Berlin Brandenburg is funded by the Federal Ministry of Research, Technology and Space under the funding code 01IS22092.


