A collection of self-contained, runnable scenarios demonstrating how to use Grafana Alloy for telemetry collection and processing. Each scenario includes a full LGTM stack (Loki, Grafana, Tempo, Mimir) with pre-configured dashboards so you can explore immediately.
## Prerequisites

- Docker and Docker Compose
## Quick start

```shell
# Option 1: Navigate to the scenario directory
cd <scenario-dir> && docker compose up -d

# Option 2: Use centralized image management (from repo root)
./run-example.sh <scenario-directory>
```

The centralized approach manages all Docker image versions in a single `image-versions.env` file, making it easy to update images across all scenarios.
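For orientation, an `image-versions.env` file typically looks like the sketch below. The variable names and versions here are illustrative placeholders, not the repo's actual values — check the file at the repo root for the real ones:

```shell
# Hypothetical excerpt — variable names and versions are examples only.
GRAFANA_VERSION=11.2.0
ALLOY_VERSION=v1.4.2
LOKI_VERSION=3.1.1
TEMPO_VERSION=2.5.0
```

Bumping a version here updates every scenario that references the variable in its `docker-compose.yml`.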
Once a scenario is running:
- Grafana: http://localhost:3000 (no login required)
- Alloy UI: http://localhost:12345 (pipeline debugging)
## Demo applications

Each scenario includes a `docker-compose.coda.yml` file that defines the demo application services separately from the infrastructure stack. This lets you run just the observability backend on its own, or layer in the app when you're ready:
```shell
# Infrastructure only
cd <scenario-dir> && docker compose up -d

# Infrastructure + demo app
cd <scenario-dir> && docker compose -f docker-compose.yml -f docker-compose.coda.yml up -d
```

If you have the `coda` CLI installed, it manages the app overlay automatically:
```shell
coda start <scenario-dir>   # Start app containers
coda stop <scenario-dir>    # Stop app containers
coda status <scenario-dir>  # Show container status
coda list                   # List all available scenarios
```

To stop a scenario:

```shell
cd <scenario-dir> && docker compose down
```

## Logs

| Scenario | Description |
|---|---|
| GELF log ingestion | Ingest structured logs from applications using the GELF (Graylog Extended Log Format) protocol over UDP. |
| Kafka logs | Consume and process logs from Apache Kafka topics. |
| Log API gateway | Use Alloy as a centralized log gateway that accepts logs via a Loki-compatible push API endpoint. |
| Log routing | Route logs from multiple sources to different Loki tenants based on log content and origin. |
| Log secret filtering | Automatically redact sensitive credentials and secrets from logs using pattern matching before storage. |
| Logs from file | Monitor and tail log files using Alloy. |
| Logs over TCP | Receive and process TCP logs in JSON format. |
| Popular logging frameworks | Parse logs from popular logging frameworks across 7 programming languages. |
| Structured log parsing | Parse structured logs into labels and structured metadata. |
| Syslog monitoring | Monitor non-RFC5424 compliant syslog messages using rsyslog and Alloy. |
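To exercise a Loki-compatible push endpoint such as the one in the Log API gateway scenario, `curl` is enough. The payload shape below follows the Loki push API (`/loki/api/v1/push`); the listen port is a placeholder — check the scenario's README and `docker-compose.yml` for the real address:

```shell
# Build a Loki push payload: one stream with a label set and one
# [timestamp-in-nanoseconds, log line] entry. Requires GNU date (%N).
ts="$(date +%s%N)"
payload="$(printf '{"streams":[{"stream":{"app":"demo"},"values":[["%s","hello from curl"]]}]}' "$ts")"
echo "$payload"

# Send it to the gateway (port 9999 is an assumption — use the port
# from the scenario's docker-compose.yml):
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$payload" http://localhost:9999/loki/api/v1/push
```

After pushing, the line should be queryable in Grafana's Explore view with `{app="demo"}`.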
## Traces

| Scenario | Description |
|---|---|
| Distributed tracing | Learn distributed tracing through a sofa delivery workflow from order to doorstep. |
| Game of tracing | An interactive strategy game teaching distributed tracing, sampling, and service graphs. |
| OpenTelemetry basic tracing | Collect and visualize OpenTelemetry traces using Alloy and Tempo. |
| OpenTelemetry service graphs | Generate service graphs using the Alloy servicegraph connector. |
| OpenTelemetry span metrics | Generate RED metrics (Request rate, Error rate, Duration) from OpenTelemetry traces using the span metrics connector. |
| OpenTelemetry tail sampling | Apply tail sampling policies to OpenTelemetry traces with Alloy and Tempo. |
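The tracing scenarios all build on the same basic shape: an OTLP receiver, a batch processor, and an exporter to Tempo. A minimal sketch in Alloy configuration syntax — component labels and the Tempo endpoint are assumptions, so see each scenario's `config.alloy` for the real wiring:

```alloy
// Receive OTLP over gRPC and HTTP, batch, and forward to Tempo.
otelcol.receiver.otlp "default" {
  grpc {}
  http {}
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}

otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "tempo:4317"
    tls {
      insecure = true
    }
  }
}
```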
## Metrics

| Scenario | Description |
|---|---|
| Blackbox probing | Monitor endpoint availability and response times using synthetic HTTP probes. |
| OTel metrics pipeline | Forward OpenTelemetry metrics from applications through Alloy with batching and transformation into Prometheus. |
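The OTel metrics pipeline pattern can be sketched as OTLP in, Prometheus remote_write out. This is a sketch, not the scenario's exact config — the Mimir URL and component labels are assumptions:

```alloy
// Receive OTLP metrics, batch them, convert to Prometheus samples,
// and push to Mimir via remote_write.
otelcol.receiver.otlp "default" {
  http {}
  output {
    metrics = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.prometheus.default.input]
  }
}

otelcol.exporter.prometheus "default" {
  forward_to = [prometheus.remote_write.mimir.receiver]
}

prometheus.remote_write "mimir" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"
  }
}
```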
## Profiling

| Scenario | Description |
|---|---|
| Continuous profiling | Collect and visualize CPU, memory, and goroutine profiles from Go applications using Grafana Pyroscope. |
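For continuous profiling, Alloy scrapes pprof endpoints and ships profiles to Pyroscope. A minimal sketch — the target address, `service_name` label, and Pyroscope URL are placeholders for illustration:

```alloy
// Scrape pprof endpoints from a Go app and forward to Pyroscope.
pyroscope.scrape "go_app" {
  targets    = [{"__address__" = "app:8080", "service_name" = "demo-app"}]
  forward_to = [pyroscope.write.default.receiver]
}

pyroscope.write "default" {
  endpoint {
    url = "http://pyroscope:4040"
  }
}
```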
## Frontend

| Scenario | Description |
|---|---|
| Faro frontend observability | Collect frontend web telemetry (logs, errors, web vitals) from browser applications using the Faro Web SDK. |
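On the collection side, Faro telemetry arrives at Alloy's `faro.receiver` component. A sketch only — the listen address, CORS setting, and downstream component names are assumptions, and the referenced `loki.write` and `otelcol.exporter.otlp` components would need to be defined elsewhere in the config:

```alloy
// Receive Faro Web SDK telemetry from the browser and fan it out.
faro.receiver "frontend" {
  server {
    listen_address       = "0.0.0.0"
    cors_allowed_origins = ["*"]
  }
  output {
    logs   = [loki.write.default.receiver]
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}
```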
## Infrastructure

| Scenario | Description |
|---|---|
| Docker monitoring | Monitor Docker container metrics and logs. |
| Monitor Linux | Monitor a Linux server's system metrics using Alloy. |
| Monitor Windows | Monitor Windows system metrics and Event Logs. |
| Self-monitoring | Configure Alloy to monitor itself, collecting its own metrics and logs. |
| SNMP monitoring | Monitor SNMP devices using the Alloy SNMP exporter. |
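Host-level scenarios like Monitor Linux follow a common exporter-scrape-write pattern. A minimal sketch, with the remote_write URL as an assumption:

```alloy
// Expose host metrics via the embedded node exporter, scrape them,
// and push to Mimir.
prometheus.exporter.unix "host" { }

prometheus.scrape "host" {
  targets    = prometheus.exporter.unix.host.targets
  forward_to = [prometheus.remote_write.mimir.receiver]
}

prometheus.remote_write "mimir" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"
  }
}
```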
## Databases

| Scenario | Description |
|---|---|
| Elasticsearch monitoring | Monitor Elasticsearch cluster health, node status, and performance metrics. |
| Memcached monitoring | Monitor Memcached instance metrics including connections, memory usage, and command performance. |
| MySQL monitoring | Monitor MySQL database server metrics and performance indicators. |
| PostgreSQL monitoring | Monitor PostgreSQL transaction statistics, connections, and server configuration. |
| Redis monitoring | Monitor Redis instance metrics including connections, memory usage, and command throughput. |
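The database scenarios swap in the matching `prometheus.exporter.*` component. For MySQL, for example, a sketch looks like the following — the DSN, hostname, and remote_write URL are placeholders:

```alloy
// Scrape MySQL server metrics via the embedded mysqld exporter.
prometheus.exporter.mysql "db" {
  data_source_name = "user:password@(mysql:3306)/"
}

prometheus.scrape "db" {
  targets    = prometheus.exporter.mysql.db.targets
  forward_to = [prometheus.remote_write.mimir.receiver]
}

prometheus.remote_write "mimir" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"
  }
}
```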
## Kubernetes

| Scenario | Description |
|---|---|
| Kubernetes | A series of scenarios demonstrating Alloy setup using the Kubernetes monitoring Helm chart. See subdirectories for telemetry-specific examples. |
## OTel Engine scenarios

Alloy v1.14+ includes an experimental OTel Engine that runs standard OpenTelemetry Collector YAML configs directly. These scenarios use `alloy otel` instead of Alloy's River-style configuration syntax. See the OTel examples README for details.
| Scenario | Description |
|---|---|
| File log processing | Collect and parse mixed-format log files using the OTel filelog receiver with operator chains. |
| PII redaction | Scrub credit cards, emails, and IPs from traces and logs using OTTL replace_pattern. |
| Multi-tenant routing | Route logs to different Loki tenants based on resource attributes using fan-out and filter. |
| Cost control | Drop health checks, filter debug logs, and apply probabilistic sampling to cut telemetry volume. |
| Resource enrichment | Auto-attach host, OS, and Docker metadata to all signals via resourcedetection. |
| Count connector | Derive request rate and error rate metrics from traces and logs using the count connector. |
| OTTL transform cookbook | A cookbook of OTTL patterns: JSON parsing, severity mapping, attribute promotion, truncation. |
| Host metrics | Collect CPU, memory, disk, and network metrics using the hostmetrics receiver. |
| Multi-pipeline fan-out | Send traces to two backends with different processing per destination. |
| Kafka buffer | Buffer traces through Kafka for durability and backpressure handling. |
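These scenarios use the standard upstream Collector YAML shape: receivers, processors, and exporters declared at the top level and wired together in `service.pipelines`. A minimal sketch (the Tempo endpoint is an assumption):

```yaml
# Standard OTel Collector config as run by the experimental OTel Engine.
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlphttp:
    endpoint: http://tempo:4318

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```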
## Contributing

Contributions of new scenarios and improvements to existing ones are welcome. You can contribute in several ways.
### Suggesting a scenario

If you have an idea for a scenario but don't have time to implement it:

- Open an issue with the `scenario-suggestion` label
- Describe the scenario and what it would demonstrate
- Explain why it would be valuable to the community
- Outline any special requirements or considerations
### Contributing a scenario

If you'd like to contribute a complete scenario:
- Fork this repository and create a branch
- Create a directory in the root of this repository with a descriptive name for your scenario
- Follow the scenario template below
- Submit a pull request with your scenario
### Improving a scenario

To improve an existing scenario:
- Fork this repository and create a branch
- Make your improvements to the scenario
- Submit a pull request with a clear description of your changes
### Scenario template

When creating a scenario, include the following files:

- `docker-compose.yml` - Docker Compose file with the LGTM stack
- `docker-compose.coda.yml` - Docker Compose override with the demo app services (for use with the `coda` CLI or the `-f` flag)
- `config.alloy` - Alloy configuration file for the scenario
- `README.md` - Documentation explaining the scenario
- Any additional files needed for your scenario, such as scripts or data files
### Scenario checklist

Before submitting your scenario, ensure that you have:

- Created a directory in the root of this repository with a descriptive name
- Included a `docker-compose.yml` file with the necessary components, such as the LGTM stack or a subset of it
- Created a complete `config.alloy` file that demonstrates the monitoring approach
- Written a `README.md` with:
  - A clear description of what the scenario demonstrates
  - Prerequisites for running the demo
  - Step-by-step instructions for running the demo
  - Expected output and what to look for
  - Screenshots if applicable
  - An explanation of key configuration elements
- Added the scenario to the table in this README.md
- Ensured the scenario works with the centralized image management system
- Verified all components start correctly with `docker compose up -d`
### Best practices

- Keep the scenario focused on demonstrating one concept
- Use clear, descriptive component and variable names
- Add comments to explain complex parts of your Alloy configuration
- Consider including a "Customizing" section in your README.md
- Provide sample queries for Grafana/Prometheus/Loki/Tempo that work with your scenario
- Use environment variables for versions and configurable parameters
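For the sample-queries bullet, short snippets like the following go a long way. The label and metric names here are assumptions — substitute the ones your scenario actually produces:

```
# LogQL: error lines from the demo app
{job="demo-app"} |= "error"

# PromQL: per-second request rate over the last 5 minutes
rate(http_requests_total[5m])

# TraceQL: traces containing an error span
{ status = error }
```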
## Getting help

If you have questions about creating a scenario or need help with Alloy:

- Join the Grafana Labs Community Forums
- Check the Grafana Alloy documentation
## License

This repository is licensed under the Apache License, Version 2.0. Refer to LICENSE for the full license text.
