@recursivecodes
Last active December 20, 2019 18:48
connect-distributed.properties
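# Kafka Connect worker configuration (distributed mode) for the Kafka
# compatibility endpoint of an OCI Streaming stream pool, authenticating
# over SASL_SSL with the PLAIN mechanism and an OCI auth token.
# Replace each <...> placeholder with values from your tenancy.
# Distributed mode also needs group.id and the config/offset/status storage
# topic names; they are assumed to be set as in the stock
# connect-distributed.properties that ships with Kafka.
# The bootstrap and SASL settings directly below apply to the worker's own
# internal clients.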
bootstrap.servers=<bootstrap server from stream pool connection settings>
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<tenancy>/<username>/<stream pool OCID>" password="<auth token>";
producer.sasl.mechanism=PLAIN
producer.security.protocol=SASL_SSL
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<tenancy>/<username>/<stream pool OCID>" password="<auth token>";
consumer.sasl.mechanism=PLAIN
consumer.security.protocol=SASL_SSL
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<tenancy>/<username>/<stream pool OCID>" password="<auth token>";
database.history.producer.sasl.mechanism=PLAIN
database.history.producer.security.protocol=SASL_SSL
database.history.producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<tenancy>/<username>/<stream pool OCID>" password="<auth token>";
database.history.consumer.sasl.mechanism=PLAIN
database.history.consumer.security.protocol=SASL_SSL
database.history.consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<tenancy>/<username>/<stream pool OCID>" password="<auth token>";
retries=1
max.in.flight.requests.per.connection=1
config.storage.replication.factor=1
status.storage.replication.factor=1
offset.storage.replication.factor=1
config.storage.partitions=1
status.storage.partitions=1
offset.storage.partitions=1
offset.flush.interval.ms=10000
offset.flush.timeout.ms=5000
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
task.shutdown.graceful.timeout.ms=10000
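
Before starting the worker with this file, it can be worth confirming that the stream pool accepts the same credentials from a standalone Kafka client. The snippet below is a minimal connectivity check using the confluent-kafka Python package; the script name, the connectivity-check topic, and the placeholder values are illustrative assumptions, not part of the gist.

# connectivity_check.py (hypothetical helper, not part of the gist)
# Reuses the SASL_SSL/PLAIN settings from connect-distributed.properties
# to confirm that the OCI stream pool accepts the credentials.
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "<bootstrap server from stream pool connection settings>",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<tenancy>/<username>/<stream pool OCID>",
    "sasl.password": "<auth token>",
}

def on_delivery(err, msg):
    # Called once the broker acknowledges (or rejects) the message.
    if err is not None:
        raise SystemExit(f"Delivery failed: {err}")
    print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

producer = Producer(conf)
# "connectivity-check" is an assumed name; use an existing stream in the pool.
producer.produce("connectivity-check", b"hello from the Connect host", on_delivery=on_delivery)
producer.flush(10)

With connectivity confirmed, the worker is started the usual way (bin/connect-distributed.sh connect-distributed.properties) and connectors are registered against its REST API, which listens on port 8083 by default.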