obs-mcp is an MCP server that lets LLMs interact with Prometheus or Thanos Querier instances via the API.
Note
This project was moved from jhadvig/genie-plugin, preserving the commit history.
Run `make help` to see all available commands.
The easiest way to connect obs-mcp to a cluster is via a kubeconfig:

- Log in to your OpenShift cluster
- Run the server with:

  ```shell
  make run
  ```

  Or directly:

  ```shell
  go run ./cmd/obs-mcp/ --listen 127.0.0.1:9100 --auth-mode kubeconfig --insecure
  ```

This will auto-discover the metrics backend in OpenShift. By default, it tries the thanos-querier route first, then falls back to the prometheus-k8s route. Use `--metrics-backend` to control which route is preferred.
Warning
kubeconfig auth mode requires a bearer token.
Run `oc whoami -t` to verify you have one.
If it fails, either:
- Re-login with `oc login --token=<token>` or `oc login -u user -p password`
- Use port-forwarding with `--auth-mode header` instead
Example using Prometheus as the preferred backend:

```shell
go run ./cmd/obs-mcp/ --listen 127.0.0.1:9100 --auth-mode kubeconfig --metrics-backend prometheus --insecure
```

Example using Thanos as the preferred backend:
Note
Thanos versions before v0.40.0 do not expose the `/api/v1/status/tsdb` endpoint, so guardrails that rely on TSDB stats (`max-metric-cardinality`, `max-label-cardinality`) will fail. Use `--guardrails=none` with older Thanos versions. Thanos v0.40.0+ (#8484) added TSDB status support to the Query component, so guardrails should work if your cluster runs that version or later.
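The version gate described above can be sketched as a small shell check. This is purely illustrative and not part of obs-mcp; `guardrails_supported` is a hypothetical helper:

```shell
#!/bin/sh
# Illustrative helper (not part of obs-mcp): decide whether TSDB-stats
# guardrails can be enabled for a given Thanos version.
# v0.40.0+ exposes /api/v1/status/tsdb on the Query component; older
# versions do not.
guardrails_supported() {
  v=${1#v}            # strip leading "v", e.g. v0.39.2 -> 0.39.2
  major=${v%%.*}      # first component
  rest=${v#*.}
  minor=${rest%%.*}   # second component
  [ "$major" -gt 0 ] || [ "$minor" -ge 40 ]
}

if guardrails_supported "v0.39.2"; then
  echo "guardrails ok"
else
  echo "use --guardrails=none"
fi
```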
```shell
make run-no-guardrails
```

Or directly:

```shell
go run ./cmd/obs-mcp/ --listen 127.0.0.1:9100 --auth-mode kubeconfig --metrics-backend thanos --insecure --guardrails=none
```

Important
How the Metrics Backend URL is Determined:

1. `PROMETHEUS_URL` environment variable (if set, always used)
2. `--metrics-backend` flag route discovery (only in `kubeconfig` mode)
3. Default: `http://localhost:9090`
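The precedence above can be sketched in shell. This is an illustration only, not the actual implementation: `AUTH_MODE` and `DISCOVERED_ROUTE` are stand-in variables for the server's internal state, not environment variables it reads:

```shell
#!/bin/sh
# Illustrative sketch of the metrics backend URL resolution order.
# AUTH_MODE and DISCOVERED_ROUTE are stand-ins, not real obs-mcp inputs.
resolve_backend_url() {
  if [ -n "$PROMETHEUS_URL" ]; then
    # 1. Explicit PROMETHEUS_URL always wins
    echo "$PROMETHEUS_URL"
  elif [ "$AUTH_MODE" = "kubeconfig" ] && [ -n "$DISCOVERED_ROUTE" ]; then
    # 2. Route discovery result, only in kubeconfig mode
    echo "$DISCOVERED_ROUTE"
  else
    # 3. Fallback default
    echo "http://localhost:9090"
  fi
}

# With nothing set, the default applies:
unset PROMETHEUS_URL AUTH_MODE DISCOVERED_ROUTE
resolve_backend_url    # prints http://localhost:9090
```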
Example using an explicit `PROMETHEUS_URL`:

```shell
PROMETHEUS_URL=https://thanos-querier.openshift-monitoring.svc.cluster.local:9091/ make run
```

Port-forwards `prometheus-k8s-0:9090` to localhost and starts obs-mcp with header auth. Requires `oc login`:
```shell
make run-openshift-pf-prometheus
```

Use the E2E test infrastructure for a fully working local environment with Prometheus:
```shell
make test-e2e-setup
```

This creates a Kind cluster with:
- Prometheus Operator
- Prometheus (accessible at `prometheus-k8s.monitoring.svc.cluster.local:9090`)
- Alertmanager
Deploy obs-mcp and port-forward the service:

```shell
make test-e2e-deploy
kubectl port-forward -n obs-mcp svc/obs-mcp 9100:9100
```

To connect an MCP client, use `http://localhost:9100/mcp`.
When done:

```shell
make test-e2e-teardown
```

See TESTING.md for more details.
```shell
# sets up Prometheus (and exporters) on your local single-node k8s cluster
helm install prometheus-community/prometheus --name-template <prefix>
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=alertmanager,app.kubernetes.io/instance=local" -o jsonpath="{.items[0].metadata.name}") && kubectl --namespace default port-forward $POD_NAME 9090
go run ./cmd/obs-mcp/ --auth-mode header --insecure --listen :9100
```

You can test the MCP server using curl. The server uses JSON-RPC 2.0 over HTTP.
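To cut down on repetition in the raw curl calls, the JSON-RPC payloads can be built with a small helper. This is a convenience sketch, not part of obs-mcp; `mcp_payload` is a hypothetical function name:

```shell
#!/bin/sh
# Hypothetical convenience helper (not part of obs-mcp): build a
# JSON-RPC 2.0 tools/call payload for the MCP endpoint.
# $1 = request id, $2 = tool name, $3 = JSON object with tool arguments
mcp_payload() {
  printf '{"jsonrpc":"2.0","id":%s,"method":"tools/call","params":{"name":"%s","arguments":%s}}' \
    "$1" "$2" "$3"
}

# Usage (assumes the server above is listening on localhost:9100):
#   mcp_payload 2 list_metrics '{}' \
#     | curl -s -X POST http://localhost:9100/mcp \
#         -H "Content-Type: application/json" -d @- | jq
mcp_payload 2 list_metrics '{}'
```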
Tip
For formatted JSON output, pipe the response to `jq`:

```shell
curl ... | jq
```
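jq can also filter the response down to just the fields you care about. The sample below uses illustrative data piped in place of a live curl response; the exact response shape may differ:

```shell
# Sample tools/list-style response (illustrative data, echoed in place of
# a live curl call); extract just the tool names, one per line.
echo '{"jsonrpc":"2.0","id":1,"result":{"tools":[{"name":"list_metrics"},{"name":"execute_range_query"}]}}' \
  | jq -r '.result.tools[].name'
```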
List available tools:
```shell
curl -X POST http://localhost:9100/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | jq
```

Call the list_metrics tool:
```shell
curl -X POST http://localhost:9100/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"list_metrics","arguments":{}}}' | jq
```

Execute a range query (e.g., get `up` metrics for the last hour):
```shell
curl -X POST http://localhost:9100/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"execute_range_query","arguments":{"query":"up{job=\"prometheus\"}","step":"1m","end":"NOW","duration":"1h"}}}' | jq
```

| Document | Description |
|---|---|
| DEPLOYMENT.md | Authentication modes, in-cluster deployment, configuration |
| TOOLS.md | Available MCP tools |
| TESTING.md | Testing guide |