Robin Moffatt (rmoff)

curl -i -X PUT -H "Content-Type:application/json" \
http://localhost:8083/connectors/sink-elastic-orders-00/config \
-d '{
"connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"topics": "orders",
"connection.url": "http://elasticsearch:9200",
"type.name": "type.name=kafkaconnect",
"key.ignore": "true",
"schema.ignore": "false",
"errors.tolerance":"all",
@rmoff
rmoff / msg.json
Last active November 21, 2022 21:36
Kafka Connect JSON message with schema/payload
{
  "schema": {
    "type": "struct",
    "fields": [{
      "type": "int32",
      "optional": true,
      "field": "c1"
    }, {
      "type": "string",
      "optional": true,
      "field": "c2"
    }],
    "optional": false
  },
  "payload": {
    "c1": 10000,
    "c2": "bar"
  }
}
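For a message like this to be deserialised by Kafka Connect, the JSON converter must have schemas enabled; the matching converter settings (my note, mirroring the settings used elsewhere in these gists) are:

value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true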
This is a set of instructions for use with the blog article Streaming data from Oracle using Oracle GoldenGate and Kafka Connect.

@rmoff / September 15, 2016


First up, download the BigDataLite VM, unpack it, and import it into VirtualBox. You'll need internet access from the VM for the downloads, so make sure you include a NAT network adaptor or bridge onto a network with internet access. Log in to the machine as oracle/welcome1. All the work in this article is done from the command line, so you can either work in Terminal, or run ip a to determine the IP address of the VM and then SSH into it from your host machine.
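
For example (a minimal sketch; the VM's address will vary):

# inside the VM: find its IP address
ip a
# from the host: SSH in as the oracle user (password: welcome1)
ssh oracle@<vm-ip>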

Install Confluent Platform

# Kafka Connect worker configuration: the broker to bootstrap from,
# and Avro converters for keys and values, backed by the Schema Registry
bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
value.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter.schema.registry.url=http://localhost:8081
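
These properties go in the Connect worker config. Assuming a standalone worker and Confluent Platform's bundled scripts (file names here are illustrative, not from the original article), you would start it with something like:

# hypothetical file names; adjust to your Confluent Platform install
./bin/connect-standalone worker.properties my-connector.properties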
@rmoff
rmoff / gist:cfa93f2d8b722f4e6bf96aedec5074ef
Created January 22, 2020 17:34
InfluxDB Sink connector example
kafkacat -b localhost:9092 -P -t testdata-json4 <<EOF
{ "schema": { "type": "struct", "fields": [ { "type": "map", "keys": { "type": "string", "optional": false }, "values": { "type": "string", "optional": false }, "optional": false, "field": "tags" }, { "field": "sn", "optional": false, "type": "string" }, { "field": "value", "optional": false, "type": "float" } ], "optional": false, "version": 1 }, "payload": { "tags": { "tagnum": "5" }, "sn": "FOO", "value": 500.0 } }
EOF
curl -i -X PUT -H "Accept:application/json" \
-H "Content-Type:application/json" http://localhost:8083/connectors/SINK_INFLUX_01/config \
-d '{
"connector.class" : "io.confluent.influxdb.InfluxDBSinkConnector",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "true",
@rmoff
rmoff / 00.README.md
Last active December 1, 2021 23:45
RMarkdown - repeating child block with changing variable value

When you use knit_expand it appears that the inclusion of the Rmd is done on the first pass, and then the complete document is evaluated. This means that an Rmd block referenced in a loop with knit_expand will only evaluate changing variables at their final value.

This can be worked around by passing the literal value of the variable at the time of the knit_expand call, using the {{var}} syntax.

This is documented in the knit_expand docs, but is less clear (to an R noob like me) for embedded documents rather than strings.
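
A minimal sketch of the workaround (my illustration; assumes a child template child.Rmd that references {{i}}):

# expand the child template once per value, substituting {{i}} with its literal value
src <- lapply(1:3, function(i) knitr::knit_expand("child.Rmd", i = i))
# then include the expanded text in the parent document with:
#   `r knitr::knit_child(text = unlist(src))`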

@rmoff
rmoff / smt.md
Created January 24, 2020 11:14
Kafka Connect - IllegalArgumentException: Invalid decimal scale: 127

From Mayank Patel on http://cnfl.io/slack

Problem:

org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:284)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:309)
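
The stack trace above is only the symptom; per the gist's title, the fix involves a Single Message Transform. A common approach for this error (my sketch, not necessarily the gist's exact fix; AMOUNT is a hypothetical field name) is to cast the offending decimal field in the connector config:

"transforms": "castDecimal",
"transforms.castDecimal.type": "org.apache.kafka.connect.transforms.Cast$Value",
"transforms.castDecimal.spec": "AMOUNT:float64"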
@rmoff
rmoff / gist:f32543f78d821b25502f6db49eee9259
Created August 29, 2017 13:08
Kafka Connect JDBC source with JSON converter
{
"name": "jdbc_source_mysql_foobar_01",
"config": {
"_comment": "The JDBC connector class. Don't change this if you want to use the JDBC Source.",
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"_comment": "How to serialise the value of keys - here use the Json converter. We want to retain the schema in the message (which will generate a schema/payload JSON document) and so set schemas.enable=true",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable":"true",