Introduction
The following article describes how to configure alerts that are triggered when your log volume exceeds a configured threshold. Before implementing these alerts, we recommend familiarizing yourself with alert configuration via the DataSet Alerts documentation page.
Conversion Factor
You may be wondering how the 0.00008046627 conversion factor was calculated:
- 0.00008046627 converts from bytes/sec to GB/day
- 1 day = 24*60*60 = 86400 seconds
- 1 GB = 1024^3 = 1073741824 bytes
- Dividing: 86400 / 1073741824 = 0.00008046627
Since these log volume calculations are performed by the sumPerSecond function, this conversion factor expresses the current bytes/sec rate as GB/day, i.e. the volume that would accumulate if the displayed rate were sustained for 24 hours.
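The arithmetic above can be verified with a short script (a sketch; the constant names are illustrative):

```python
# Derive the bytes/sec -> GB/day conversion factor used in the alert triggers.
SECONDS_PER_DAY = 24 * 60 * 60   # 86400
BYTES_PER_GB = 1024 ** 3         # 1073741824

factor = SECONDS_PER_DAY / BYTES_PER_GB
print(f"{factor:.11f}")          # 0.00008046627
```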
Alert Examples
Creating Alerts To Monitor Log Volume
These alerts must be created by editing the Alerts JSON. Go to the Alerts tab and select Settings -> Alerts JSON.
Set the alert configuration to get notified when log volume exceeds a certain threshold. For example, trigger the alert if more than 120 GB was ingested in a 24-hour period:
{
  alerts: [
    // existing alerts here
    {
      trigger: "sumPerSecond:1d(tag='logVolume' metric='logBytes') * 0.00008046627 > 120",
      alertAddress: "enter webhook and/or email here, separated by commas",
      description: "Exceeding 120 GB/Day"
    },
    // existing alerts here
  ]
}
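To build intuition for the trigger, you can invert it to see what sustained ingest rate corresponds to the 120 GB/day threshold (a sketch; the 120 GB figure is just the example above):

```python
# Invert the trigger: sumPerSecond * FACTOR > 120  =>  sumPerSecond > 120 / FACTOR
FACTOR = 86400 / 1024 ** 3               # bytes/sec -> GB/day
threshold_gb_per_day = 120

threshold_bytes_per_sec = threshold_gb_per_day / FACTOR
print(f"{threshold_bytes_per_sec:,.0f} bytes/sec")        # ~1,491,308 bytes/sec
print(f"{threshold_bytes_per_sec / 1024**2:.2f} MiB/sec")  # ~1.42 MiB/sec
```

In other words, the alert fires once the average ingest rate over the trailing day exceeds roughly 1.42 MiB/sec.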
Set the alert configuration to get notified when log volume is X percent greater than it was the same day the previous week. For example, alert if there is an increase of 10 percent week over week.
{
  alerts: [
    // existing alerts here
    {
      trigger: "sumPerSecond:1d(tag='logVolume' metric='logBytes') >= sumPerSecond:1d:1w(tag='logVolume' metric='logBytes') * 1.1",
      alertAddress: "enter webhook or email here",
      description: "Week Over Week Log Volume Increase"
    },
    // existing alerts here
  ]
}
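The comparison in this trigger can be sketched as follows (an illustration of the math only; the function name and sample rates are hypothetical):

```python
# Week-over-week check: current 1-day rate vs. the same 1-day window one week
# earlier (the :1w offset in the trigger), with a 10% (1.1x) allowance.
def wow_alert_fires(current_rate, last_week_rate, increase=0.10):
    """Mirror of the trigger: current >= last_week * (1 + increase)."""
    return current_rate >= last_week_rate * (1 + increase)

print(wow_alert_fires(1150.0, 1000.0))  # True: a 15% increase exceeds the 10% allowance
print(wow_alert_fires(1050.0, 1000.0))  # False: a 5% increase is within tolerance
```

Adjust the 1.1 multiplier in the trigger to change the tolerated percentage increase (e.g. 1.25 for 25 percent).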
Turning off Metrics to Reduce Log Volume
As a side note, you can disable some of the Agent's default log metrics by setting the following parameters in the agent.json configuration file. These entries go at the base level of the JSON (the same level as the API key).
Linux
implicit_metric_monitor: false,
implicit_agent_process_metrics_monitor: false
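For context, a minimal agent.json sketch showing where these keys sit (the api_key value is a placeholder):

```json
{
  "api_key": "YOUR-WRITE-API-KEY",
  "implicit_metric_monitor": false,
  "implicit_agent_process_metrics_monitor": false
}
```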
Kubernetes
Note: Add the following parameters to your Kubernetes cluster's ConfigMap
SCALYR_IMPLICIT_METRIC_MONITOR: "false"
SCALYR_REPORT_CONTAINER_METRICS: "false"
SCALYR_REPORT_K8S_METRICS: "false"
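A minimal ConfigMap sketch with these entries (the metadata name is an assumption; match the ConfigMap your agent DaemonSet actually references):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Name is an assumption; use the name referenced by your agent deployment.
  name: scalyr-config
data:
  SCALYR_IMPLICIT_METRIC_MONITOR: "false"
  SCALYR_REPORT_CONTAINER_METRICS: "false"
  SCALYR_REPORT_K8S_METRICS: "false"
```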