Declarative, environment-aware deployment for:

- Red Hat OpenShift Service Mesh 3.3.1
- Red Hat Connectivity Link 1.3.1
- Cluster Observability stack (Monitoring + Tempo + OpenTelemetry)

using OpenShift GitOps (Argo CD) across dev, test, and prod.
```mermaid
flowchart LR
    GIT[Git Repository] --> ARGO[OpenShift GitOps Argo CD]
    ARGO --> DEV[dev apps]
    ARGO --> TEST[test apps]
    ARGO --> PROD[prod apps]
    subgraph DEVCL[Dev Cluster]
        SMDEV[Service Mesh 3.3.1]
        RHCLDEV[Connectivity Link 1.3.1]
        OBSDEV[Cluster Observability]
        SMDEV --> RHCLDEV
        SMDEV --> OBSDEV
        RHCLDEV --> OBSDEV
    end
    subgraph TESTCL[Test Cluster]
        SMTEST[Service Mesh 3.3.1]
        RHCLTEST[Connectivity Link 1.3.1]
        OBSTEST[Cluster Observability]
        SMTEST --> RHCLTEST
        SMTEST --> OBSTEST
        RHCLTEST --> OBSTEST
    end
    subgraph PRODCL[Prod Cluster]
        SMPROD[Service Mesh 3.3.1]
        RHCLPROD[Connectivity Link 1.3.1]
        OBSPROD[Cluster Observability]
        SMPROD --> RHCLPROD
        SMPROD --> OBSPROD
        RHCLPROD --> OBSPROD
    end
```
```mermaid
flowchart TB
    NAMESPACE[Application Namespace] --> LBL1["istio.io/rev=<env-tag>"]
    NAMESPACE --> LBL2["istio-discovery=mesh-<env>"]
    LBL1 --> SIDE[Istio Sidecar Injection]
    LBL2 --> DISC[Env-scoped Istio Discovery]
    SIDE --> TEL["Telemetry -> OTel Collector"]
    DISC --> TEL
    TEL --> TEMPO[TempoStack]
    TEL --> PROM[MonitoringStack]
    TEMPO -->|traces| KIALI[Kiali]
    PROM -->|metrics| KIALI
```
```
gitops/
├── servicemesh/
│   ├── operators/{base,overlays}
│   ├── controlplane/{base,overlays}
│   ├── kiali/{base,overlays}
│   ├── clusters/{dev,test,prod}
│   └── applications/
├── connectivity-link/
│   ├── operators/{base,overlays}
│   ├── instance/{base,overlays}
│   ├── smoke-test/{base,overlays}
│   ├── clusters/{dev,test,prod}
│   └── applications/
├── cluster-observability/
│   ├── operators/{base,overlays}
│   ├── stack/{base,overlays}
│   ├── clusters/{dev,test,prod}
│   └── applications/
└── smoke-test/
    ├── base
    └── overlays/{dev,test,prod}
```
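Each `applications/` directory above holds one Argo CD Application per environment. As a naming sketch (the exact names such as `servicemesh-dev` are assumptions, consistent with the apply and verification commands later in this README), the full set of expected Applications can be enumerated:

```shell
# Sketch: enumerate the Argo CD Application names this layout implies,
# one per component per environment. Names are assumptions consistent
# with the oc commands used elsewhere in this README.
expected_apps() {
  for comp in servicemesh connectivity-link cluster-observability; do
    for env in dev test prod; do
      echo "${comp}-${env}"
    done
  done
}
expected_apps
```

Nine Applications in total (three components times three environments), plus the per-environment smoke-test overlays.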
- OpenShift cluster(s) with admin access.
- OpenShift GitOps installed in the `openshift-gitops` namespace.
- Access to the `redhat-operators` catalog.
TempoStack needs object storage. This repo is designed for ODF NooBaa S3.
- OpenShift Data Foundation (ODF) installed.
- NooBaa available and healthy.
- `openshift-storage.noobaa.io` API available for `ObjectBucketClaim`.
Without ODF/NooBaa S3, Tempo pods will fail (typically compactor/querier/ingester crash loops).
- Service Mesh Operator (`servicemeshoperator3`, `stable-3.3`, pinned to 3.3.1)
- Kiali Operator (`kiali-ossm`)
- Red Hat Connectivity Link Operator (`rhcl-operator`, pinned to 1.3.1)
- Cluster Observability Operator
- Tempo Operator
- Red Hat build of OpenTelemetry Operator
For this repository, apps are expected to deploy in-cluster with:
```
destination.server: https://kubernetes.default.svc
```
If you use cluster aliases (destination.name), ensure matching Argo cluster secrets exist.
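As a minimal sketch of that rule (a hypothetical helper, not part of the repo's scripts), destination values can be classified before syncing, e.g. in a CI lint:

```shell
# Hypothetical lint helper: classify an Application's destination value.
# Only https://kubernetes.default.svc is the in-cluster API server; anything
# else is treated as a cluster alias that needs a matching Argo cluster secret.
check_destination() {
  case "$1" in
    https://kubernetes.default.svc) echo "in-cluster" ;;
    *) echo "alias: ensure a matching Argo cluster secret exists" ;;
  esac
}
check_destination "https://kubernetes.default.svc"
```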
This repo supports:
- all environments (`dev`, `test`, `prod`) from `applications/kustomization.yaml`
- a single environment, by applying only one Application CR

If your cluster should run only dev, apply only the `*-dev.yaml` Applications.
All environments:

```shell
oc apply -k gitops/servicemesh/applications
```

A single environment:

```shell
oc apply -f gitops/servicemesh/applications/project.yaml
oc apply -f gitops/servicemesh/applications/servicemesh-dev.yaml
# or:
oc apply -f gitops/servicemesh/applications/servicemesh-test.yaml
# or:
oc apply -f gitops/servicemesh/applications/servicemesh-prod.yaml
```

This repository deploys Red Hat Connectivity Link (Kuadrant) with mTLS enabled.
Expected Kuadrant spec values:
- `spec.mtls.enable: true`
- `spec.mtls.authorino: true`
- `spec.mtls.limitador: true`
Quick check:

```shell
oc -n kuadrant-system get kuadrant kuadrant -o jsonpath='{.spec.mtls.enable}{"|"}{.spec.mtls.authorino}{"|"}{.spec.mtls.limitador}{"\n"}'
```

All environments:

```shell
oc apply -k gitops/connectivity-link/applications
```

A single environment:

```shell
oc apply -f gitops/connectivity-link/applications/project.yaml
oc apply -f gitops/connectivity-link/applications/connectivity-link-dev.yaml
```

All environments:

```shell
oc apply -k gitops/cluster-observability/applications
```

A single environment:

```shell
oc apply -f gitops/cluster-observability/applications/project.yaml
oc apply -f gitops/cluster-observability/applications/cluster-observability-dev.yaml
```

| Environment | TempoStack size | OTel Collector profile | Monitoring retention |
|---|---|---|---|
| dev | 1x.demo | very small | 12h |
| test | 1x.extra-small | medium | 3d |
| prod | 1x.small | larger | 7d |
All environments use ODF/NooBaa-backed S3 object storage for Tempo.
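The sizing table above can be encoded as a small lookup, for example in scripts that template or validate overlays (a sketch only; the repo's overlays themselves carry these values):

```shell
# Sketch: map environment -> TempoStack size, mirroring the sizing table.
tempo_size() {
  case "$1" in
    dev)  echo "1x.demo" ;;
    test) echo "1x.extra-small" ;;
    prod) echo "1x.small" ;;
    *)    echo "unknown environment: $1" >&2; return 1 ;;
  esac
}
tempo_size prod
```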
Namespaces must have both labels for stable revision-based onboarding:
- `istio.io/rev=<env-tag>`
- `istio-discovery=mesh-<env>`

Examples:

```shell
oc label ns <ns> istio.io/rev=dev istio-discovery=mesh-dev --overwrite
oc label ns <ns> istio.io/rev=test istio-discovery=mesh-test --overwrite
oc label ns <ns> istio.io/rev=prod istio-discovery=mesh-prod --overwrite
```

Kiali is discovery-selector scoped per environment:

- dev: `istio-discovery=mesh-dev`
- test: `istio-discovery=mesh-test`
- prod: `istio-discovery=mesh-prod`
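Since both labels must always move together, a small helper can derive the pair from the environment name (a sketch; applying the labels still uses `oc label` as shown above, and it assumes the environment name doubles as the revision tag, as in the dev/test/prod examples):

```shell
# Sketch: compute both required namespace labels for a given environment.
# Assumes the env name is also the revision tag (dev/test/prod).
mesh_ns_labels() {
  printf 'istio.io/rev=%s istio-discovery=mesh-%s\n' "$1" "$1"
}
# e.g.: oc label ns <ns> $(mesh_ns_labels dev) --overwrite
mesh_ns_labels dev
```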
- Istio telemetry exports spans to platform OTel Collector.
- RHCL (Kuadrant) tracing exports to the same OTel Collector.
- OTel exports to TempoStack.
- Kiali reads traces from Tempo and metrics from MonitoringStack Prometheus.
Cluster Observability enables console plugins for:
- distributed tracing
- dashboards
- logging
- troubleshooting panel
If no Grafana backend is deployed, Kiali Grafana integration is intentionally disabled in this repo to avoid false red health indicators.
A short-lived `kiali-netcheck` pod is expected: it is a network-check pod created by Kiali and usually ends in `Completed`. It is not a persistent failure.
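When scripting health checks, pods in phase `Succeeded` (shown as `Completed`) should therefore not be flagged. A minimal filter sketch (the input format mimics name/phase columns from `oc get pods`; pod names here are illustrative):

```shell
# Sketch: flag only pods that are neither Running nor Succeeded.
pod_health() {
  awk '$2 != "Running" && $2 != "Succeeded" { bad = 1; print "unhealthy:", $1 }
       END { if (!bad) print "all pods healthy" }'
}
# Illustrative input: a Completed netcheck pod plus a Running Kiali pod.
printf 'kiali-netcheck Succeeded\nkiali-7c9f Running\n' | pod_health
# prints: all pods healthy
```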
```shell
oc -n openshift-gitops get app servicemesh-dev connectivity-link-dev cluster-observability-dev
```

```shell
oc get istio -A
oc get istiocni -A
oc get istiorevisiontag -A
oc -n istio-system get kiali
```

```shell
oc -n kuadrant-system get kuadrant
oc -n kuadrant-system get pods
oc -n kuadrant-system get servicemonitor
```

```shell
oc -n cluster-observability get monitoringstack
oc -n cluster-observability get tempostack
oc -n cluster-observability get opentelemetrycollector
oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}{"\n"}'
```

```shell
oc apply -k gitops/smoke-test/overlays/dev
oc -n mesh-smoke get pod -l app=mesh-smoke -o jsonpath='{.items[0].spec.containers[*].name}{"\n"}'
oc delete -k gitops/smoke-test/overlays/dev
```

```shell
./scripts/rhcl-smoke-test.sh dev
./scripts/rhcl-smoke-cleanup.sh dev
```

```mermaid
flowchart LR
    A[Apply Smoke Overlay] --> B[Sample App Ready]
    B --> C[Gateway Programmed]
    C --> D[mTLS Runtime Ready]
    D --> E[API Key Approved]
    E --> F[Auth Behavior]
    F --> G[PlanPolicy Path]
    G --> H[RateLimit Path]
    H --> I[RHCL SMOKE TEST REPORT]
```
The script prints a consolidated report with PASS/FAIL/SKIP per stage:
- manifest apply
- app rollout
- gateway programmed
- mTLS datapath readiness
- API key and auth behavior
- policy hierarchy and rate limiting
Example:
```
----------------------------------------------------------------------
RHCL SMOKE TEST REPORT
----------------------------------------------------------------------
Environment : dev
Namespace   : rhcl-smoke
Overlay     : gitops/connectivity-link/smoke-test/overlays/dev
----------------------------------------------------------------------
SUMMARY
----------------------------------------------------------------------
# | Check                        | Result | Details
----------------------------------------------------------------------
1 | Manifest apply               | PASS   | Applied overlay
2 | Sample app rollout           | PASS   | Deployment ready
3 | Gateway programmed           | PASS   | Address assigned
4 | Kuadrant mTLS runtime        | PASS   | mtlsAuthorino=true mtlsLimitador=true
5 | API key approval             | PASS   | APIKey approved
6 | Policy hierarchy             | PASS   | plan + explicit rate-limit enforced
7 | Auth traffic behavior        | PASS   | no-key=401 with-key=200
8 | Explicit rate-limit behavior | PASS   | 429 responses observed after threshold
----------------------------------------------------------------------
Totals: PASS=8 FAIL=0 SKIP=0
----------------------------------------------------------------------
```
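For CI use, the `Totals:` line can be parsed to turn the report into a pass/fail signal (a hypothetical post-processing sketch, not part of the provided scripts):

```shell
# Hypothetical CI helper: extract the FAIL count from the report's Totals line.
report_fails() {
  totals="${1#*FAIL=}"   # drop everything up to and including "FAIL="
  echo "${totals%% *}"   # keep the count up to the next space
}
report_fails 'Totals: PASS=8 FAIL=0 SKIP=0'   # prints: 0
```

A pipeline could then gate on the saved report, e.g. fail the job whenever the extracted count is non-zero.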
Save report output:
```shell
./scripts/rhcl-smoke-test.sh dev | tee rhcl-smoke-dev-report.txt
```

- Ensure the app destination is valid for your Argo setup.
- Prefer `destination.server: https://kubernetes.default.svc` for in-cluster deployments.
- Check both namespace labels: `istio.io/rev` and `istio-discovery`.
- Check that an `IstioRevisionTag` exists for the tag used.
- Verify ODF/NooBaa object storage is healthy.
- Verify TempoStack pods are running.
- Verify the OTel Collector receives and exports traces.
- Generate traffic after deployment before checking the UI.
- Confirm the Kuadrant CR reports `mtlsAuthorino=true` and `mtlsLimitador=true`.
- Verify gateway-side mTLS prerequisites and cert/issuer objects.