The following custom file committer enables concurrent Spark processes to save data to the same destination.
Provide a different pending.dir for each Spark execution/process:
# enable the custom committer
spark.sql.parquet.output.committer.class=io.debezium.server.batch.spark.ParquetOutputCommitterV2
# provide a unique pending.dir per process, e.g.
# process 1:
mapreduce.fileoutputcommitter.pending.dir=_temporary
# process 2:
mapreduce.fileoutputcommitter.pending.dir=_temporary2
# process 3:
mapreduce.fileoutputcommitter.pending.dir=_temporary3
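The per-process setup above can be sketched in plain Python. Note that `committer_conf` is a hypothetical helper (not part of this project): it only builds the config entries, which in a real job you would pass to `SparkSession.builder.config(...)`. The point is that each process derives its own unique pending directory so concurrent writers to the same destination do not collide.

```python
import uuid

# Hypothetical helper: returns the Spark/Hadoop config entries for one
# process, generating a unique pending directory when none is given.
def committer_conf(pending_dir=None):
    if pending_dir is None:
        # e.g. "_temporary-5f3a1b2c" -- unique per process
        pending_dir = "_temporary-" + uuid.uuid4().hex[:8]
    return {
        "spark.sql.parquet.output.committer.class":
            "io.debezium.server.batch.spark.ParquetOutputCommitterV2",
        "mapreduce.fileoutputcommitter.pending.dir": pending_dir,
    }

# Two concurrent processes get distinct pending dirs but the same committer.
conf_a = committer_conf()
conf_b = committer_conf()
print(conf_a["mapreduce.fileoutputcommitter.pending.dir"])
print(conf_b["mapreduce.fileoutputcommitter.pending.dir"])
```

With explicit values (`committer_conf("_temporary2")`) you reproduce the fixed-name scheme shown in the config lines above.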
FWIW, the committer factory in the cloud storage module lets you switch to a new committer back end without having to patch FileOutputCommitter: just clone it and reference it in the relevant configs.
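For reference, the factory approach mentioned above would look roughly like this in Hadoop's configuration; the factory class name below is an assumption for illustration, while the `mapreduce.outputcommitter.factory.scheme.<scheme>` key is the mechanism Hadoop's cloud storage module uses to bind a committer factory to a filesystem scheme.

```properties
# bind a custom committer factory to one filesystem scheme (here: s3a);
# com.example.MyCommitterFactory is a hypothetical class name
mapreduce.outputcommitter.factory.scheme.s3a=com.example.MyCommitterFactory
```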