
PromQL

🚧 Roadmap. PromQL is not yet shipped. The MIME type application/promql is reserved for it. Until the frontend lands, requests with that Content-Type fall through to the SQL parser and fail with 400 sql_parse_error — the response is honest about what hasn't been implemented yet, but the surface isn't usable. Track progress in the README roadmap.

Why PromQL fits

kyma's query path is structured around a single trait, QueryFrontend. A frontend is a parser: it takes a source string and returns a logical plan that the rest of the engine — DataFusion execution, the three-level pruning cascade, Arrow Flight transport — already knows how to run.

Today there are two implementations: KQL (kyma-kql) and SQL (DataFusion's own parser). PromQL becomes a third. Once the parser lands, every PromQL query benefits from the same machinery as the other two:

  • Catalog pruning by time range and per-column min/max.
  • Block-level inverted-index pruning for label predicates.
  • Zero-copy Arrow Flight transport for results.
  • Multi-node read fan-out via the same read-router.
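The first item in that list, catalog pruning by time range and per-column min/max, can be sketched in a few lines. The `BlockMeta` type and field names below are illustrative stand-ins, not kyma's actual catalog types:

```rust
// Hypothetical per-block catalog entry. A real catalog would also carry
// per-column min/max values; this sketch prunes on timestamps only.
struct BlockMeta {
    id: u64,
    min_ts: i64, // earliest row timestamp in the block
    max_ts: i64, // latest row timestamp in the block
}

/// Keep only blocks whose [min_ts, max_ts] range overlaps the query's
/// [start, end) window. Non-overlapping blocks are skipped before any
/// I/O happens, which is what makes years of history cheap to scan.
fn prune_by_time(blocks: &[BlockMeta], start: i64, end: i64) -> Vec<u64> {
    blocks
        .iter()
        .filter(|b| b.max_ts >= start && b.min_ts < end)
        .map(|b| b.id)
        .collect()
}
```

Because the filter runs on catalog metadata alone, its cost scales with the number of blocks, not the number of rows stored in them.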

The trait is in kyma-core/src/query_frontend.rs. Frontend authors implement parse(source, ctx) -> Arc<dyn Any>, and the registry in kyma-plan downcasts the payload to the concrete LogicalPlan. There's no special path for PromQL queries — they're just another frontend.
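To make the shape concrete, here is a minimal sketch of what a PromQL frontend and the registry's downcast could look like. The trait, context, and plan types below are stand-ins with the same signature as described above; the real definitions live in kyma-core and return a DataFusion LogicalPlan:

```rust
use std::any::Any;
use std::sync::Arc;

// Stand-in types; the real ones come from kyma-core and DataFusion.
struct SessionCtx;
#[derive(Debug, PartialEq)]
struct LogicalPlan(String);

trait QueryFrontend {
    /// Parse `source` into an opaque payload the planner downcasts later.
    fn parse(&self, source: &str, ctx: &SessionCtx) -> Result<Arc<dyn Any>, String>;
}

/// Hypothetical PromQL frontend: once the parser lands, this is the
/// only new code the engine needs — everything downstream is shared.
struct PromqlFrontend;

impl QueryFrontend for PromqlFrontend {
    fn parse(&self, source: &str, _ctx: &SessionCtx) -> Result<Arc<dyn Any>, String> {
        // A real implementation parses PromQL here; this stub wraps the
        // source string so the downcast path below is visible.
        Ok(Arc::new(LogicalPlan(format!("promql: {source}"))))
    }
}

/// Registry side: recover the concrete plan from the opaque payload.
fn downcast_plan(payload: &Arc<dyn Any>) -> Option<&LogicalPlan> {
    payload.downcast_ref::<LogicalPlan>()
}
```

The `Arc<dyn Any>` indirection is what keeps the trait frontend-agnostic: the registry, not the frontend, knows the concrete plan type.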

What's reserved today

  • MIME type: application/promql
  • Endpoint: POST /v1/query
  • Status: Not implemented

The MIME type is reserved so existing client code can be written against the eventual surface today. When the frontend ships, the same request shape becomes valid โ€” no new endpoint, no new auth model, no new configuration knob.
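That request shape can be written down today. The sketch below just assembles the raw HTTP request (the host and query string are illustrative); per the roadmap note above, kyma currently answers it with 400 sql_parse_error:

```rust
/// Assemble the reserved request shape: POST /v1/query with
/// Content-Type: application/promql. Until the frontend lands, the
/// server responds 400 sql_parse_error to exactly this request.
fn promql_request(host: &str, query: &str) -> String {
    format!(
        "POST /v1/query HTTP/1.1\r\n\
         Host: {host}\r\n\
         Content-Type: application/promql\r\n\
         Content-Length: {}\r\n\
         \r\n\
         {query}",
        query.len()
    )
}
```

Client code built against this shape needs no changes when the frontend ships; only the response body changes from an error to results.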

Migration story

Existing Prometheus dashboards (Grafana, custom UIs, anything that speaks PromQL over HTTP) will point at kyma's query endpoint with no other changes. The query string is still PromQL; the response is still result rows. What changes underneath is that the query runs against years of history pruned to milliseconds, instead of a Prometheus TSDB sized for a few weeks.

Long-retention metrics, joins between metrics and logs in the same query, and federated queries against a synced Postgres table all work the same way they do for KQL and SQL today — see Multi-source data.

Where to track progress

Progress on the PromQL frontend is tracked in the README roadmap.

What to use today

  • SQL for ad-hoc analytical queries — DataFusion's full surface plus federation.
  • Arrow Flight for zero-copy result transport when the NDJSON HTTP path is the bottleneck.
  • The agent endpoint for natural-language questions that compile to KQL or SQL.