Confluent, Inc. (confluentgist)

Block or report user

Report or block confluentgist

Hide content and notifications from this user.

Learn more about blocking users

Contact Support about this user’s behavior.

Learn more about reporting abuse

Report abuse
View GitHub Profile
docker-compose.yml
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.3.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
Streaming_Machine_Learning_with_Kafka_and_TensorFlow.py
import numpy as np
import tensorflow as tf
import tensorflow_io.kafka as kafka_io

# 1. Consume streaming data with Kafka and TensorFlow I/O
def func_x(x):
    # Decode image to (28, 28)
    x = tf.io.decode_raw(x, out_type=tf.uint8)
    x = tf.reshape(x, [28, 28])
    # Convert to float32 for tf.keras (scales uint8 values into [0, 1])
    return tf.image.convert_image_dtype(x, tf.float32)
confluentgist / increased-partitions.sql
Created Jan 11, 2020 — forked from miguno/increased-partitions.sql
ksqlDB example: Create a new stream with the desired number of partitions.
CREATE STREAM products ...;

CREATE STREAM products_repartitioned
  WITH (PARTITIONS=30) AS
  SELECT * FROM products
  EMIT CHANGES;
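
For readers following the Kafka Streams examples below, here is a rough Java analogue of this repartitioning step, sketched as an assumption rather than part of the original gist; it requires Apache Kafka 2.6+ (where KStream#repartition is available), and the stream and topic names are illustrative.

// Hedged sketch: route the stream through an internal repartition topic
// with 30 partitions ("products-repartitioned" names that topic).
KStream<String, String> products = ...;
KStream<String, String> productsRepartitioned =
    products.repartition(
        Repartitioned.<String, String>as("products-repartitioned")
            .withNumberOfPartitions(30));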
confluentgist / aggregation.java
Created Jan 9, 2020 — forked from miguno/aggregation.java
Kafka Streams Example: Continuously aggregating a stream into a table
// Continuously aggregating a KStream into a KTable.
// Location stands in for the record value type; it carries a username field.
// (The preview declared KStream<String, String>, but v.username would not
// compile on a String value.)
KStream<String, Location> locationUpdatesStream = ...;

KTable<String, Long> locationsPerUser = locationUpdatesStream
    .groupBy((k, v) -> v.username)
    .count();
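
The preview stops at the aggregation itself. A minimal follow-up, assuming the counts should land in an output topic (the topic name below is illustrative), turns the table's changelog back into a stream and writes it out:

// Emit every update of the aggregate to an output topic.
locationsPerUser
    .toStream()
    .to("locations-per-user", Produced.with(Serdes.String(), Serdes.Long()));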
confluentgist / aggregation.sql
Created Jan 9, 2020 — forked from miguno/aggregation.sql
ksqlDB example: Continuously aggregating a stream into a table with a push query
-- Continuously aggregating a stream into a table with a ksqlDB push query.
CREATE STREAM locationUpdatesStream ...;

CREATE TABLE locationsPerUser AS
  SELECT username, COUNT(*)
  FROM locationUpdatesStream
  GROUP BY username
  EMIT CHANGES;
confluentgist / topic-as-table.java
Created Jan 9, 2020 — forked from miguno/topic-as-table.java
Kafka Streams Example: read topic as table
// Create KTable from Kafka topic.
KTable<String, String> table = builder.table(
    "input-topic",
    Consumed.with(Serdes.String(), Serdes.String()));
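
A common variant, sketched here as an assumption rather than part of the original gist, also materializes the table into a named state store so it can be read through Kafka Streams interactive queries ("table-store" is an illustrative name):

// Same table, but materialized into a queryable state store.
KTable<String, String> table = builder.table(
    "input-topic",
    Consumed.with(Serdes.String(), Serdes.String()),
    Materialized.as("table-store"));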
confluentgist / topic-as-table.sql
Created Jan 9, 2020 — forked from miguno/topic-as-table.sql
ksqlDB example: read topic as table
-- Create ksqlDB table from Kafka topic.
CREATE TABLE myTable (username VARCHAR, location VARCHAR)
  WITH (KAFKA_TOPIC='input-topic', KEY='username', VALUE_FORMAT='...');
confluentgist / topic-as-stream.java
Created Jan 9, 2020 — forked from miguno/topic-as-stream.java
Kafka Streams Example: read a topic as a stream
// Create KStream from Kafka topic.
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> stream = builder.stream(
    "input-topic",
    Consumed.with(Serdes.String(), Serdes.String()));
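
None of the Java previews show the boilerplate that actually runs a topology. A minimal sketch follows, with the application id and bootstrap address as placeholders:

// Configure and start the Kafka Streams application.
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");       // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();
// Close cleanly on JVM shutdown.
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));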
confluentgist / topic-as-stream.sql
Created Jan 9, 2020 — forked from miguno/topic-as-stream.sql
ksqlDB example: read topic as stream
-- Create ksqlDB stream from Kafka topic.
CREATE STREAM myStream (username VARCHAR, location VARCHAR)
  WITH (KAFKA_TOPIC='input-topic', VALUE_FORMAT='...');
gist:5cc32f336179d2b37dfafcabfc220f7e
"task": {
  "type": ".TaskRole",
  "initialDelayMs": 20000,
  "taskSpecs": {
    "bench0": {
      "class": "org.apache.kafka.trogdor.workload.ProduceBenchSpec",
      "startMs": 0,
      "durationMs": 720000,
      "producerNode": "node0",
      "bootstrapServers": "%{bootstrapServers}",