export DATA_LAKE_BUCKET="<your_data_lake_bucket_name>"
# Ingest the artists CDC data as a Hudi Merge-on-Read (MoR) table,
# running DeltaStreamer in continuous mode with upsert semantics
spark-submit \
--jars /usr/lib/spark/jars/spark-avro.jar,/usr/lib/hudi/hudi-utilities-bundle.jar \
--conf spark.sql.catalogImplementation=hive \
--class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer /usr/lib/hudi/hudi-utilities-bundle.jar \
--table-type MERGE_ON_READ \
--source-ordering-field __source_ts_ms \
--props "s3://${DATA_LAKE_BUCKET}/hudi/deltastreamer_artists_apicurio_mor.properties" \
--source-class org.apache.hudi.utilities.sources.AvroDFSSource \
--target-base-path "s3://${DATA_LAKE_BUCKET}/moma/artists_mor/" \
--target-table moma_mor.artists \
--schemaprovider-class org.apache.hudi.utilities.schema.SchemaRegistryProvider \
--enable-sync \
--continuous \
--op UPSERT
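The properties file passed via --props is not included in this gist. As a rough sketch, a DeltaStreamer properties file for an Avro DFS source with a schema-registry provider and Hive sync typically sets keys like the following. All values below (record key field, S3 paths, registry URL, database/table names) are illustrative assumptions, not taken from the source:

```
# deltastreamer_artists_apicurio_mor.properties (hypothetical contents)

# Record key and precombine field for upserts (field names are assumptions)
hoodie.datasource.write.recordkey.field=id
hoodie.datasource.write.precombine.field=__source_ts_ms

# Root DFS path the AvroDFSSource reads from (path is an assumption)
hoodie.deltastreamer.source.dfs.root=s3://<your_data_lake_bucket_name>/cdc/moma/artists/

# Schema registry endpoint used by SchemaRegistryProvider (URL is an assumption)
hoodie.deltastreamer.schemaprovider.registry.url=http://<registry-host>:8080/apis/ccompat/v6/subjects/artists-value/versions/latest

# Hive sync targets, matching --target-table moma_mor.artists
hoodie.datasource.hive_sync.database=moma_mor
hoodie.datasource.hive_sync.table=artists
```

Note that with --enable-sync on a MERGE_ON_READ table, Hudi registers two Hive tables for querying: a read-optimized view (suffixed _ro) and a real-time view (suffixed _rt).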