This is a set of instructions for use with the blog article Streaming data from Oracle using Oracle GoldenGate and Kafka Connect.
@rmoff / September 15, 2016
First we prepare the database for Oracle GoldenGate. Connect as SYSDBA:
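A sketch of the standard preparation commands follows; check them against the GoldenGate documentation for your database version, as the exact steps vary:

sqlplus / as sysdba

ALTER DATABASE FORCE LOGGING;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;
ALTER SYSTEM SWITCH LOGFILE;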
curl -i -X PUT -H "Content-Type:application/json" \
    http://localhost:8083/connectors/sink-elastic-orders-00/config \
    -d '{
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "topics": "orders",
        "connection.url": "http://elasticsearch:9200",
        "type.name": "kafkaconnect",
        "key.ignore": "true",
        "schema.ignore": "false",
        "errors.tolerance": "all"
        }'
{
    "schema": {
        "type": "struct",
        "fields": [{
            "type": "int32",
            "optional": true,
            "field": "c1"
        }, {
            "type": "string",
            "optional": true,
First up, download the BigDataLite VM, unpack it, and import it into VirtualBox. You'll need internet access from the VM for the downloads, so make sure you include either a NAT network adapter or one bridged onto a network with internet access. Log in to the machine as oracle/welcome1. All the work in this article is done from the command line, so you can either work in Terminal, or run ip a to determine the IP address of the VM and then SSH into it from your host machine.
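For example (the address shown here is purely illustrative; use whatever ip a reports on your VM):

# On the VM: find its IP address
ip a

# From the host machine (substitute the address reported above)
ssh oracle@192.168.56.101        # password: welcome1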
bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
value.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter.schema.registry.url=http://localhost:8081
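These properties belong in the Kafka Connect worker configuration. As a sketch, assuming a Confluent Platform layout (paths may differ on your install), you'd add them to the distributed worker properties file and start the worker with:

./bin/connect-distributed ./etc/kafka/connect-distributed.properties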
kafkacat -b localhost:9092 -P -t testdata-json4 <<EOF
{ "schema": { "type": "struct", "fields": [ { "type": "map", "keys": { "type": "string", "optional": false }, "values": { "type": "string", "optional": false }, "optional": false, "field": "tags" }, { "field": "sn", "optional": false, "type": "string" }, { "field": "value", "optional": false, "type": "float" } ], "optional": false, "version": 1 }, "payload": { "tags": { "tagnum": "5" }, "sn": "FOO", "value": 500.0 } }
EOF
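To sanity-check what landed on the topic, consume it back with kafkacat (-C consumes, -e exits once the end of the topic is reached):

kafkacat -b localhost:9092 -C -t testdata-json4 -e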
curl -i -X PUT -H "Accept:application/json" \
    -H "Content-Type:application/json" http://localhost:8083/connectors/SINK_INFLUX_01/config \
    -d '{
        "connector.class": "io.confluent.influxdb.InfluxDBSinkConnector",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "true",
When you use knit_expand, it appears that the inclusion of the Rmd is done on the first pass, and then the complete document is evaluated. This means that an Rmd block referenced in a loop with knit_expand will only evaluate changing variables at their last value. This can be worked around by passing the literal value of the variable at the time of the knit_expand call, using the {{var}} syntax. This is documented in the knit_expand docs, but is less clear (to an R noob like me) for embedded documents than for strings.
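A minimal sketch of the workaround, assuming a child template child.Rmd (a hypothetical name) that references {{i}}: knit_expand substitutes the literal value at expand time, so each expansion keeps its own copy of i, whereas a plain reference to i inside the child would see only its final value.

# Inside a chunk of the parent Rmd (results = 'asis'):
# expand child.Rmd once per value of i, substituting {{i}} literally each time
src <- lapply(1:3, function(i) knitr::knit_expand(file = "child.Rmd", i = i))

# knit the expanded text and emit it into the parent document
res <- knitr::knit_child(text = unlist(src), quiet = TRUE)
cat(res, sep = "\n")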
From Mayank Patel on http://cnfl.io/slack
Problem:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:284)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:309)
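The "Tolerance exceeded" message is the generic wrapper Connect raises once an underlying error (here, during record conversion in a source task) has exhausted the configured tolerance. One sketch of connector settings that log the offending records and keep the task running — these are standard Kafka Connect error-handling properties, though whether tolerating errors is appropriate (rather than fixing the underlying conversion problem) depends on your pipeline:

"errors.tolerance": "all",
"errors.log.enable": "true",
"errors.log.include.messages": "true"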
{
    "name": "jdbc_source_mysql_foobar_01",
    "config": {
        "_comment": "The JDBC connector class. Don't change this if you want to use the JDBC Source.",
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "_comment": "How to serialise the value of keys - here use the Json converter. We want to retain the schema in the message (which will generate a schema/payload JSON document) and so set schemas.enable=true",
        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
        "key.converter.schemas.enable": "true",