Currently working from this draft PR for the translation. The classic condition backend has been merged; for next steps, see the list.
This adds new "classic conditions" to the backend of Server Side Expressions (SSE) that behave like dashboard conditions. A classic condition only ever returns the empty label set {}, so one alert rule always produces exactly one alert (instance) - it is single-dimensional.
There is no frontend yet, but the model is close to the old one, so hopefully that will not be too hard.
The main differences are that it works on data frames and uses SSE for execution.
It currently pulls the conditions property from the alert table's settings column (which is JSON). This property contains the dashboard conditions and the queries embedded inside each condition (post-Extractor processing).
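As a rough illustration, the extraction step amounts to something like the following (a minimal Python sketch; the real code is Go, and `extract_conditions` is a made-up name):

```python
import json

def extract_conditions(settings_json: str) -> list:
    """Pull the dashboard-style conditions out of the alert table's
    settings column, which stores a JSON document with a "conditions"
    property. Each entry carries its evaluator, operator, reducer, and a
    query holding both the datasource model and [refId, from, to] params."""
    settings = json.loads(settings_json)
    return settings.get("conditions", [])
```
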
From that it builds queries and classic conditions (based on the old condition model) into the model meant for SSE:
- Replaces the "Query" in each condition with just a refId, and creates standalone queries with their own relative time ranges.
- Builds that into alerting NG's alert rule query model.
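The steps above can be sketched roughly like this (Python pseudocode for what the Go translation does; `parse_rel` and `translate_condition` are hypothetical names, and edge cases are ignored):

```python
def parse_rel(param: str) -> int:
    """Convert a classic-condition time param to seconds before now:
    "now" -> 0, "5m" -> 300, "now-5m" -> 300."""
    s = param.removeprefix("now").lstrip("-") or "0"
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    if s[-1] in units:
        return int(s[:-1]) * units[s[-1]]
    return int(s)

def translate_condition(cond: dict, new_ref_id: str):
    """Split one dashboard condition into (SSE query, SSE classic condition).
    The condition's query params are [refId, from, to]; the query is pulled
    out with its own relative time range, and the condition keeps only a
    reference to the new refId."""
    _, frm, to = cond["query"]["params"]
    query = {
        "refId": new_ref_id,
        "relativeTimeRange": {"from": parse_rel(frm), "to": parse_rel(to)},
        "model": {**cond["query"]["model"], "refId": new_ref_id},
    }
    sse_cond = {
        "evaluator": cond["evaluator"],
        "operator": cond["operator"],
        "reducer": {"type": cond["reducer"]["type"]},
        # "Params" capitalized to match the serialized SSE model
        "query": {"Params": [new_ref_id]},
    }
    return query, sse_cond
```

Note that because each condition carries its own time range, two conditions that share refId A in the dashboard model become two distinct SSE queries.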
The following endpoint:
- Gets the alert conditions from the DB by alert ID
- Translates them to SSE queries + a classic condition
- Submits the translated SSE queries + classic condition to ngalert's eval
- Returns the evaluation result
Test-Data Always Alerting:

```shell
curl -H 'Content-Type: application/json' http://admin:admin@localhost:3000/api/alert-definitions/evalOldByID/21 | jq -r '.instances[0]' | base64 -d > ~/tmp/af && arrow-cat ~/tmp/af
```

```
record 1/1...
col[0] "": ["Alerting"]
```
Test-Data Always Ok:

```shell
curl -H 'Content-Type: application/json' http://admin:admin@localhost:3000/api/alert-definitions/evalOldByID/24 | jq -r '.instances[0]' | base64 -d > ~/tmp/af && arrow-cat ~/tmp/af
```

```
version: V4
record 1/1...
col[0] "": ["Normal"]
```
Test-Data Always NoData:

```shell
curl -H 'Content-Type: application/json' http://admin:admin@localhost:3000/api/alert-definitions/evalOldByID/51 | jq -r '.instances[0]' | base64 -d > ~/tmp/af && arrow-cat ~/tmp/af
```

```
version: V4
record 1/1...
col[0] "": ["NoData"]
```
This is what you find in Grafana's DB: the alert table's settings column contains JSON with a conditions property, for example:
```json
{
  "conditions": [
    {
      "evaluator": {
        "params": [0],
        "type": "gt"
      },
      "operator": {
        "type": "and"
      },
      "query": {
        "datasourceId": 2,
        "model": {
          "expr": "avg_over_time(sum by (instance) (up)[1h:5m])",
          "interval": "",
          "legendFormat": "",
          "refId": "A"
        },
        "params": ["A", "5m", "now"]
      },
      "reducer": {
        "params": [],
        "type": "avg"
      },
      "type": "query"
    },
    {
      "evaluator": {
        "params": [0],
        "type": "gt"
      },
      "operator": {
        "type": "and"
      },
      "query": {
        "datasourceId": 2,
        "model": {
          "expr": "avg_over_time(sum by (instance) (up)[1h:5m])",
          "interval": "",
          "legendFormat": "",
          "refId": "A"
        },
        "params": ["A", "10m", "now-5m"]
      },
      "reducer": {
        "params": [],
        "type": "avg"
      },
      "type": "query"
    }
  ]
}
```
This is what NG alerting currently expects. In terms of the API, it is effectively the "grafana_alert" object's condition and data properties, as seen in swagger under ruler/{datasourceId}/api/v1/rules. Note that both classic conditions above reference refId A but with different time ranges, so the translation splits them into two SSE queries (A and B) and points the conditions at those refIds.
```json
{
  "condition": "C",
  "data": [
    {
      "refId": "A",
      "queryType": "",
      "relativeTimeRange": {
        "from": 300,
        "to": 0
      },
      "model": {
        "datasource": "",
        "datasourceUid": "000000002",
        "expr": "avg_over_time(sum by (instance) (up)[1h:5m])",
        "interval": "",
        "intervalMs": 1000,
        "legendFormat": "",
        "maxDataPoints": 100,
        "refId": "A"
      }
    },
    {
      "refId": "B",
      "queryType": "",
      "relativeTimeRange": {
        "from": 600,
        "to": 300
      },
      "model": {
        "datasource": "",
        "datasourceUid": "000000002",
        "expr": "avg_over_time(sum by (instance) (up)[1h:5m])",
        "interval": "",
        "intervalMs": 1000,
        "legendFormat": "",
        "maxDataPoints": 100,
        "refId": "B"
      }
    },
    {
      "refId": "C",
      "queryType": "",
      "relativeTimeRange": {
        "from": 0,
        "to": 0
      },
      "model": {
        "conditions": [
          {
            "evaluator": {
              "params": [0],
              "type": "gt"
            },
            "operator": {
              "type": "and"
            },
            "query": {
              "Params": ["A"]
            },
            "reducer": {
              "type": "avg"
            }
          },
          {
            "evaluator": {
              "params": [0],
              "type": "gt"
            },
            "operator": {
              "type": "and"
            },
            "query": {
              "Params": ["B"]
            },
            "reducer": {
              "type": "avg"
            }
          }
        ],
        "datasource": "__expr__",
        "datasourceUid": "-100",
        "intervalMs": 1000,
        "maxDataPoints": 100,
        "refId": "C",
        "type": "classic_conditions"
      }
    }
  ]
}
```