IBM Storage Insights Prometheus Exporter

Prometheus exporter for IBM Storage Insights

Overview

Prometheus is the de facto standard system for metrics-based monitoring and is well suited to monitoring dynamic systems. It features a dimensional data model and a powerful query language, and it integrates instrumentation, metric collection, service discovery, and alerting in one ecosystem.

This application is intended for storage system administrators who want to onboard metrics gathered by IBM Storage Insights onto Prometheus, so that they can leverage Prometheus alerting and its powerful query language, PromQL.

IBM Storage Insights exposes REST APIs over which performance and capacity metrics of different storage systems and their components can be fetched. This application is a Prometheus exporter that bridges those metrics into Prometheus. It runs as a server process that translates Storage Insights metrics into a format that Prometheus can scrape; the exporter architecture is shown in the diagram in the repository. Fetching and translating the metrics from Storage Insights happens synchronously on every scrape from Prometheus. Each time Prometheus scrapes metrics, the exporter calls the Storage Insights Metric API, passing the request details and the authentication token. The authentication token is valid for 15 minutes, after which the exporter automatically fetches a fresh token before making API calls.

Prerequisites

Software and experience prerequisites for using the Prometheus Exporter:

  • Software requirements

The exact steps have been tested on macOS. However, the overall procedure should be portable to any modern Linux distribution or Windows. You will need:

    • A functioning Prometheus monitoring setup
    • A functioning Go language setup
  • Required experience

    You will need a basic understanding of Prometheus-based monitoring.

    You will also need to understand the basics of software development in Go.

Storage Insights REST API

The exporter calls the following Storage Insights REST APIs:

| Name | URL | Method | GET parameters | Description |
| --- | --- | --- | --- | --- |
| Token API | /restapi/v1/tenants/{tenant_uuid}/token | POST | - | Creates an API token for a tenant user |
| Metric API | /restapi/v1/tenants/{tenant_uuid}/storage-systems/metrics | GET | metric types | Returns capacity and performance metric values for all storage systems of a given tenant |

Authentication

The exporter authenticates by passing an authentication token in the x-api-token header. The authentication token is retrieved by calling the Token API, passing the API key in the x-api-key header. A tenant admin can generate API keys for tenant users by logging in to the Storage Insights GUI.
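The two request shapes can be sketched as follows. The URL paths follow the table above; the helper names and the base URL are illustrative assumptions, not the exporter's actual code.

```go
package main

import (
	"fmt"
	"net/http"
)

// newTokenRequest builds the Token API call: a POST carrying the tenant
// API key in the x-api-key header. Hypothetical helper for illustration.
func newTokenRequest(baseURL, tenantUUID, apiKey string) (*http.Request, error) {
	url := fmt.Sprintf("%s/restapi/v1/tenants/%s/token", baseURL, tenantUUID)
	req, err := http.NewRequest(http.MethodPost, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("x-api-key", apiKey)
	return req, nil
}

// newMetricsRequest builds the Metric API call: a GET carrying the
// previously fetched token in the x-api-token header.
func newMetricsRequest(baseURL, tenantUUID, token string) (*http.Request, error) {
	url := fmt.Sprintf("%s/restapi/v1/tenants/%s/storage-systems/metrics", baseURL, tenantUUID)
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("x-api-token", token)
	return req, nil
}

func main() {
	req, _ := newTokenRequest("https://insights.ibm.com", "tenant-uuid", "key")
	fmt.Println(req.Method, req.URL.String())
}
```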

Configuration

The exporter supports both configuration files and environment variables.

Environment variables override values loaded from a config file. If --config is not supplied, the exporter reads config.json when it exists. This keeps local development file-based while allowing containers to run from environment variables alone.
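Under these precedence rules, resolving a single key looks like the sketch below. The maps stand in for the process environment and the parsed config file, and resolve is a hypothetical helper, not the exporter's actual code.

```go
package main

import "fmt"

// resolve applies the precedence described above: a non-empty
// environment value overrides the value from the config file.
func resolve(env, file map[string]string, key string) string {
	if v, ok := env[key]; ok && v != "" {
		return v
	}
	return file[key]
}

func main() {
	file := map[string]string{"SI_LISTEN_ADDRESS": ":8085", "SI_DEBUG": "false"}
	env := map[string]string{"SI_DEBUG": "true"}
	fmt.Println(resolve(env, file, "SI_DEBUG"))          // env wins: true
	fmt.Println(resolve(env, file, "SI_LISTEN_ADDRESS")) // falls back to file: :8085
}
```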

Supported environment variables

| Variable | Description |
| --- | --- |
| SI_BASE_URL | Storage Insights base URL, for example https://insights.ibm.com |
| SI_TENANT_ID | Storage Insights tenant UUID |
| SI_API_KEY | Tenant API key |
| SI_METRICS | Comma-separated metric list, for example disk_total_data_rate,disk_total_response_time |
| SI_LISTEN_ADDRESS | Exporter listen address, for example :8085 |
| SI_DEBUG | Debug logging flag, true or false |
| SI_METRICS_DURATION | Metrics API duration window, default 1h |
| SI_REQUEST_TIMEOUT_SECONDS | Shared HTTP client timeout in seconds, default 30 |
| SI_TOKEN_REFRESH_MARGIN_SECONDS | Early token refresh margin in seconds, default 60 |
| SI_CONFIG_FILE | Optional config file path |

Example config file

Copy the checked-in example and fill in your tenant details:

cp config.example.json config.json

Do not commit real API keys. Keep config.json local and untracked.
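For reference, a filled-in config.json might look like the sketch below. The field names here are assumptions that mirror the environment variables above; check config.example.json for the authoritative keys.

```json
{
  "base_url": "https://insights.ibm.com",
  "tenant_id": "replace-with-your-tenant-uuid",
  "api_key": "replace-with-your-tenant-api-key",
  "metrics": "disk_total_data_rate,disk_total_response_time",
  "listen_address": ":8085",
  "debug": false
}
```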

Local Run

Clone the repository and run the exporter from the repo root:

git clone https://github.com/ibmstorage/ibm-storageinsights-ecosystem-prometheus.git
cd ibm-storageinsights-ecosystem-prometheus
go run ./main --listen-address :8089 --config config.json

You can also run without a config file by setting environment variables:

export SI_BASE_URL="https://insights.ibm.com"
export SI_TENANT_ID="replace-with-your-tenant-uuid"
export SI_API_KEY="replace-with-your-tenant-api-key"
export SI_METRICS="disk_total_data_rate,disk_total_response_time"
export SI_LISTEN_ADDRESS=":8089"
go run ./main

Prometheus scrape example

Add the exporter as a target in prometheus.yml:

scrape_configs:
  - job_name: "ibm-storage-insights"
    scrape_interval: 1m
    static_configs:
      - targets: ["localhost:8089"]

Container Build

Build the container image from the repo root:

docker build -t ibm-storageinsights-prometheus-exporter:latest .

The image does not bake in any secrets. Provide runtime configuration with environment variables or a mounted config file.

Published Images

GitHub Actions publishes container images to GitHub Container Registry using the repository-aware naming convention:

ghcr.io/<owner>/<repo>

Tag behavior:

  • latest for pushes to main
  • sha-<shortsha> for published builds
  • v1.2.3 plus 1.2 and 1 for Git tags that match v*

Pull a published image with:

docker pull ghcr.io/<owner>/<repo>:latest

Run the published image with:

docker run --rm \
  -p 8085:8085 \
  -e SI_BASE_URL="https://insights.ibm.com" \
  -e SI_TENANT_ID="replace-with-your-tenant-uuid" \
  -e SI_API_KEY="replace-with-your-tenant-api-key" \
  -e SI_METRICS="disk_total_data_rate,disk_total_response_time" \
  ghcr.io/<owner>/<repo>:latest

Container Run

Run the exporter container with environment variables:

docker run --rm \
  -p 8085:8085 \
  -e SI_BASE_URL="https://insights.ibm.com" \
  -e SI_TENANT_ID="replace-with-your-tenant-uuid" \
  -e SI_API_KEY="replace-with-your-tenant-api-key" \
  -e SI_METRICS="disk_total_data_rate,disk_total_response_time" \
  -e SI_LISTEN_ADDRESS=":8085" \
  -e SI_DEBUG="false" \
  ibm-storageinsights-prometheus-exporter:latest

You can also mount a local config file for development:

docker run --rm \
  -p 8085:8085 \
  -v "$(pwd)/config.json:/app/config.json:ro" \
  -e SI_CONFIG_FILE="/app/config.json" \
  ibm-storageinsights-prometheus-exporter:latest

The image exposes /metrics and /healthz. The default runtime healthcheck calls /healthz.

Compose Run

A sample compose.yaml is included. Start it with:

docker compose up --build

Set required values in your shell or a local .env file before starting Compose:

SI_BASE_URL=https://insights.ibm.com
SI_TENANT_ID=replace-with-your-tenant-uuid
SI_API_KEY=replace-with-your-tenant-api-key
SI_METRICS=disk_total_data_rate,disk_total_response_time
SI_EXPORTER_PORT=8085

Extending Prometheus Exporter

The application is also intended to serve as a guide for developers who want to extend the functionality of the exporter. The primary way to get metrics from Storage Insights is via its REST API. There are multiple Metrics APIs that provide metrics for different systems and their components. The current storage-system implementation lives in simetrics/client.go and simetrics/normalize.go. To fetch more metrics, follow these steps:

  • Declare the metric types in the config module
  • Create a new Go module to fetch the additional metrics:
    go mod init <module-name>
    Use the code in module simetrics as an illustration of how to use the REST API to fetch metrics data from Storage Insights.
  • Refer to the Storage Insights REST API docs for the details of the API.
  • Register the new metrics with Prometheus.
  • Push the metrics to Prometheus on every scrape, optionally adding labels and help text.
  • Build the modules
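Whatever the exporter registers ultimately appears on /metrics in the Prometheus text exposition format. The real exporter would use the Prometheus client library for this; as a dependency-free illustration of what a scrape response contains, the sketch below renders one gauge sample by hand (the metric and label names are illustrative).

```go
package main

import "fmt"

// formatSample renders one gauge sample in the Prometheus text
// exposition format: HELP and TYPE metadata followed by a labeled
// sample line. Names here are illustrative, not the exporter's.
func formatSample(name, help, system string, value float64) string {
	return fmt.Sprintf(
		"# HELP %s %s\n# TYPE %s gauge\n%s{storage_system=%q} %g\n",
		name, help, name, name, system, value)
}

func main() {
	fmt.Print(formatSample(
		"si_disk_total_data_rate",
		"Disk total data rate bridged from Storage Insights",
		"fs9200-01",
		123.5))
}
```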

Limitations

  • IBM Storage Insights collects metrics from the storage systems every 5 minutes. It is therefore recommended to use a scrape interval of at least 1 minute. If you hit the API rate limit, increase the scrape interval.
  • The Metric API allows fetching up to 3 metric types for each storage system in one REST API call. If more than 3 metric types are desired, make multiple REST API calls as appropriate.
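To stay within the 3-metric-type limit, a longer metric list has to be split into batches before calling the Metric API. A minimal sketch of that batching (the helper name is hypothetical):

```go
package main

import "fmt"

// chunkMetrics splits a metric list into batches of at most size
// elements, one batch per Metric API call.
func chunkMetrics(metrics []string, size int) [][]string {
	var out [][]string
	for len(metrics) > size {
		out = append(out, metrics[:size])
		metrics = metrics[size:]
	}
	if len(metrics) > 0 {
		out = append(out, metrics)
	}
	return out
}

func main() {
	metrics := []string{
		"disk_total_data_rate",
		"disk_total_response_time",
		"volume_read_io_rate",
		"volume_write_io_rate",
	}
	// Four metric types become two API calls: a batch of 3 and a batch of 1.
	fmt.Println(chunkMetrics(metrics, 3))
}
```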

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
