The Scalyr Sink sends data in the Kafka message value.
Once you have configured the Kafka Connect distributed properties and started $KAFKA_HOME/bin/connect-distributed.sh, as shown in the previous tutorial, it is time to start the sink connector.
1. Create the connect-scalyr-sink-custom-app.json Kafka Connect Scalyr Sink JSON configuration file.
cd $KAFKA_SCALYR_SINK_CONFIG
vim connect-scalyr-sink-custom-app.json
2. Modify the configuration. Set value.converter and value.converter.schemas.enable according to the data format of the application topic and whether schemas are used. See Custom Sink Mappings for more information on the custom_app_event_mapping key; you can skip to that section if you are mapping fields to the DataSet UI.
{
  "name": "scalyr-sink-connector",
  "config": {
    "connector.class": "com.scalyr.integrations.kafka.ScalyrSinkConnector",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "tasks.max": "1",
    "topics": "logs",
    "api_key": "<SCALYR LOG WRITE API TOKEN>",
    "event_enrichment": "tag=kafka",
    "custom_app_event_mapping": "[{\"matcher\": {\"attribute\": \"app.name\", \"value\": \"myapp\"}, \"eventMapping\": {\"message\": \"message\", \"logfile\": \"log.path\", \"source\": \"host.hostname\", \"parser\": \"fields.parser\", \"version\": \"app.version\", \"appField1\": \"appField1\", \"appField2\": \"nested.appField2\"}}]"
  }
}
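To make the mapping concrete, here is a small Python sketch of how a custom_app_event_mapping entry is applied to a JSON message value. This is an illustrative model, not the connector's actual implementation: the matcher selects messages by comparing a dot-path attribute against a value, and each eventMapping entry copies a (possibly nested) message field into a DataSet event attribute. The record and field names below mirror the example configuration above.

```python
import json

def resolve(record, dotted_path):
    """Walk a dot-separated path (e.g. "fields.parser") through nested dicts."""
    node = record
    for key in dotted_path.split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

def apply_mapping(record, mappings):
    """Return the mapped event for the first matching mapping, else None."""
    for m in mappings:
        matcher = m["matcher"]
        if resolve(record, matcher["attribute"]) == matcher["value"]:
            return {event_attr: resolve(record, path)
                    for event_attr, path in m["eventMapping"].items()}
    return None

# A trimmed-down version of the mapping from the config above.
mappings = json.loads(
    '[{"matcher": {"attribute": "app.name", "value": "myapp"},'
    ' "eventMapping": {"message": "message", "parser": "fields.parser"}}]'
)

# A hypothetical Kafka message value produced by "myapp".
record = {"app": {"name": "myapp"}, "message": "hello",
          "fields": {"parser": "json"}}

event = apply_mapping(record, mappings)
# event == {"message": "hello", "parser": "json"}
```

A message whose app.name does not equal "myapp" matches no mapping and, in this sketch, yields None.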
3. Make a REST call to start the Scalyr Sink Connector.
curl localhost:8083/connectors -X POST -H "Content-Type: application/json" -d @connect-scalyr-sink-custom-app.json
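After posting the configuration, you can confirm the connector is running with the standard Kafka Connect REST API, e.g. curl localhost:8083/connectors/scalyr-sink-connector/status. Below is an optional Python sketch of a helper that interprets that status document; the sample response is illustrative, but its shape (connector.state, tasks[].state) is the standard Kafka Connect status format.

```python
import json

def connector_healthy(status):
    """True when the connector and all of its tasks report state RUNNING."""
    if status.get("connector", {}).get("state") != "RUNNING":
        return False
    return all(t.get("state") == "RUNNING" for t in status.get("tasks", []))

# Example status response from GET /connectors/scalyr-sink-connector/status.
sample = json.loads("""
{
  "name": "scalyr-sink-connector",
  "connector": {"state": "RUNNING", "worker_id": "127.0.0.1:8083"},
  "tasks": [{"id": 0, "state": "RUNNING", "worker_id": "127.0.0.1:8083"}]
}
""")

print(connector_healthy(sample))
```

If any task reports FAILED, fetch its trace field from the same status endpoint to see the underlying exception.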