Once data is streaming through the NGINX > Filebeat > Kafka > DataSet pipeline, you should see near real-time data appear in DataSet. The table below lists the important fields to check.
Assuming you added the accessLog parser, you can click the webserver dashboard and view the metrics derived from the logs.
Configurations:
Filebeat:
filebeat.inputs:
- type: log
  paths:
    - "/var/log/nginx/*"
  fields:
    parser: accessLog
    app: nginx
  fields_under_root: true
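The input section above only tells Filebeat what to read; to complete the Filebeat > Kafka leg of the pipeline, filebeat.yml also needs a Kafka output. A minimal sketch, assuming a broker at localhost:9092 (replace with your own brokers); the topic must match the "topics" setting in the connector config below:

output.kafka:
  # Assumed broker address; point this at your own Kafka cluster.
  hosts: ["localhost:9092"]
  # Must match the "topics" value in the Scalyr sink connector config.
  topic: "logs"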
Kafka:
{
  "name": "scalyr-sink-connector",
  "config": {
    "connector.class": "com.scalyr.integrations.kafka.ScalyrSinkConnector",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "tasks.max": "1",
    "topics": "logs",
    "api_key": "<SCALYR LOG WRITE API TOKEN>",
    "event_enrichment": "key1=kafka",
    "custom_app_event_mapping": "[{\"matcher\": {\"attribute\": \"app.name\", \"value\": \"myapp\"}, \"eventMapping\": {\"message\": \"message\", \"logfile\": \"log.path\", \"serverHost\": \"host.hostname\", \"parser\": \"fields.parser\"}}]"
  }
}
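To register the sink, submit this JSON to the Kafka Connect REST API. A minimal sketch, assuming Connect runs on localhost:8083 (the default port) and the config above is saved as scalyr-sink.json (both the host and the file name are illustrative):

curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d @scalyr-sink.json

You can then confirm the task is running with curl http://localhost:8083/connectors/scalyr-sink-connector/status.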
Index | Field | Description
------|-------|------------
1 | parser | Set in the producer, in this case filebeat.yml (see the Filebeat configuration above). The value you set there should be reflected on the event. If you set it to a prebuilt parser, the access log is parsed automatically.
2 | serverHost | Set in custom_app_event_mapping (see the Kafka configuration above).
3 | message | The required raw message. This is the line NGINX writes to disk.
4 | dataset | Comes from the parser. If you are using the accessLog parser, this field appears and indicates that the event is being parsed. It is also used in the webserver dashboard.
5 | logfile | Set in custom_app_event_mapping (see the Kafka configuration above).
6 | source | The field that serverHost is translated to in DataSet.
7 | key1 | Event enrichment. Custom attributes set in the event_enrichment config above.
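To verify these fields end to end, you can query DataSet directly. A minimal sketch using the Query API, assuming the US endpoint (www.scalyr.com), a Log Read API token, and the accessLog parser configured above:

curl -s https://www.scalyr.com/api/query \
  -H "Content-Type: application/json" \
  -d '{"token": "<SCALYR LOG READ API TOKEN>", "queryType": "log", "filter": "parser == \"accessLog\"", "maxCount": 5}'

Each returned event should carry the parser, serverHost, logfile, and message fields described in the table above.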