# Instructions
1. Query OpenTelemetry metrics stored in Axiom using MPL (Metrics Processing Language). NOT APL.
2. The query targets a metrics dataset (kind "otel-metrics-v1").
3. Use listMetrics() to discover available metric names in a dataset before querying.
4. Use listMetricTags() and getMetricTagValues() to discover filtering dimensions.
5. ALWAYS restrict the time range to the smallest possible range that meets your needs.
6. NEVER guess metric names or tag values. Always discover them first.
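The discovery-then-query flow might look like this sketch (the exact argument shapes of the discovery tools, and all metric and tag names, are assumptions):
```
listMetrics("my-metrics")                                                 // discover metric names
listMetricTags("my-metrics", "http.server.duration")                      // discover tag keys
getMetricTagValues("my-metrics", "http.server.duration", "service.name") // discover tag values
`my-metrics`:`http.server.duration` | where `service.name` == "frontend" | align to 5m using avg
```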
# MPL Query Syntax
A query has three parts: source, filtering, and transformation. Filters must appear before transformations.
## Source
```
<dataset>:<metric>
```
Backtick-escape identifiers containing special characters: `` `my-dataset`:`http.server.duration` ``
## Filtering (where)
Chain filters with `|`. Use `where` (not `filter`, which is deprecated).
```
| where <tag> <op> <value>
```
Operators: ==, !=, >, <, >=, <=
Values: "string", 42, 42.0, true, /regexp/
Combine with: and, or, not, parentheses
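A sketch of a combined filter (the metric name, tag names, and values are assumptions):
```
`my-metrics`:`http.requests.total`
| where `service.name` == "frontend" and (code >= 500 or code == 429)
| where not (method == "GET")
```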
## Transformations
### Aggregation (align) — aggregate data over time windows
```
| align to <interval> using <function>
```
Functions: avg, sum, min, max, count, last
Intervals: 5m, 1h, 1d, etc.
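For example, a sketch assuming a `cpu.usage` metric exists in the dataset:
```
`my-metrics`:`cpu.usage` | align to 1h using max
```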
### Grouping (group) — group series by tags
```
| group by <tag1>, <tag2> using <function>
```
Functions: avg, sum, min, max, count
Without `by`, all series are combined into one: `| group using sum`
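For example, a sketch that collapses every series into a single total (the metric name is an assumption):
```
`my-metrics`:`http.requests.total` | align to 5m using sum | group using sum
```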
### Mapping (map) — transform values in place
```
| map rate // per-second rate of change
| map increase // increase between datapoints
| map + 5 // arithmetic: +, -, *, /
| map abs // absolute value
| map fill::prev // fill gaps with previous value
| map fill::const(0) // fill gaps with constant
| map filter::lt(0.4) // remove datapoints >= 0.4
| map filter::gt(100) // remove datapoints <= 100
| map is::gte(0.5) // set to 1.0 if >= 0.5, else 0.0
```
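Map steps chain left to right. A sketch combining a rate with gap filling (the metric name is an assumption):
```
`my-metrics`:`http.requests.total` | map rate | map fill::const(0) | align to 5m using avg
```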
### Computation (compute) — combine two metrics
```
(
  `dataset`:`errors_total` | group using sum,
  `dataset`:`requests_total` | group using sum;
)
| compute error_rate using /
Functions: +, -, *, /, min, max, avg
### Bucketing (bucket) — for histograms
```
| bucket by method, path to 5m using histogram(count, 0.5, 0.9, 0.99)
| bucket by method to 5m using interpolate_delta_histogram(0.90, 0.99)
| bucket by method to 5m using interpolate_cumulative_histogram(rate, 0.90, 0.99)
```
### Prometheus compatibility
```
| align to 5m using prom::rate // Prometheus-style rate
```
## Identifiers
Use backticks for names with special characters: `` `my-dataset` ``, `` `service.name` ``, `` `http.request.duration` ``
# Examples
Basic query:
```
`my-metrics`:`http.server.duration` | align to 5m using avg
```
Filtered:
```
`my-metrics`:`http.server.duration` | where `service.name` == "frontend" | align to 5m using avg
```
Grouped:
```
`my-metrics`:`http.server.duration` | align to 5m using avg | group by endpoint using sum
```
Rate:
```
`my-metrics`:`http.requests.total` | align to 5m using prom::rate | group by method, path, code using sum
```
Error rate (compute):
```
(
  `my-metrics`:`http.requests.total` | where code >= 400 | group by method, path using sum,
  `my-metrics`:`http.requests.total` | group by method, path using sum;
)
| compute error_rate using /
| align to 5m using avg
```
SLI (error budget):
```
(
  `my-metrics`:`http.requests.total` | where code >= 500 | align to 1h using prom::rate | group using sum,
  `my-metrics`:`http.requests.total` | align to 1h using prom::rate | group using sum;
)
| compute error_rate using /
| map is::lt(0.2)
| align to 7d using avg
```
Histogram percentiles:
```
`my-metrics`:`http.request.duration.seconds.bucket` | bucket by method, path to 5m using interpolate_delta_histogram(0.90, 0.99)
```
Fill gaps:
```
`my-metrics`:`cpu.usage` | map fill::prev | align to 1m using avg
```