
@claudinei-daitx
Created December 15, 2017 13:02
Create a Spark session optimized to work with Amazon S3.
import org.apache.spark.sql.SparkSession

object SparkSessionS3 {

  // Create a Spark session with optimizations to work with Amazon S3.
  def getSparkSession: SparkSession = {
    val spark = SparkSession
      .builder
      .appName("my spark application name")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .config("spark.hadoop.fs.s3a.access.key", "my access key")
      .config("spark.hadoop.fs.s3a.secret.key", "my secret key")
      .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
      .config("spark.hadoop.fs.s3a.multiobjectdelete.enable", "false")
      .config("spark.hadoop.fs.s3a.fast.upload", "true")
      .config("spark.sql.parquet.filterPushdown", "true")
      .config("spark.sql.parquet.mergeSchema", "false")
      .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
      .config("spark.speculation", "false")
      .getOrCreate()

    // You can use this Hadoop configuration as an alternative to the spark.hadoop configuration
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.multiobjectdelete.enable", "false")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "my access key")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "my secret key")

    spark
  }
}
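
A rough usage sketch, assuming the helper above is on the classpath (the bucket name and paths are placeholders, not part of the original gist):

import org.apache.spark.sql.DataFrame

object SparkSessionS3Example {
  def main(args: Array[String]): Unit = {
    val spark = SparkSessionS3.getSparkSession

    // Read a Parquet dataset directly from S3 via the s3a:// scheme.
    val events: DataFrame = spark.read.parquet("s3a://my-bucket/input/events/")

    // Predicate pushdown is enabled by spark.sql.parquet.filterPushdown above.
    val recent = events.filter(events("year") === 2017)

    // Committer algorithm version 2 reduces the cost of the final rename step on S3.
    recent.write.mode("overwrite").parquet("s3a://my-bucket/output/events_2017/")

    spark.stop()
  }
}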
@geofflangenderfer

// You can use this Hadoop configuration as an alternative to the spark.hadoop configuration

Could you explain this comment? Do you mean we should use config() or set()?

@LilMonk

LilMonk commented Aug 17, 2023

It depends on when you want to provide the S3 connection configs to Spark.
For example, if you want to use S3 as your Spark warehouse location,

.config("spark.sql.warehouse.dir", "s3a://bucket_name/")

then you need to provide the S3 connection configs up front, while building the SparkSession:

.config("spark.hadoop.fs.s3a.access.key", "my access key")
.config("spark.hadoop.fs.s3a.secret.key", "my secret key")
.config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
.config("spark.hadoop.fs.s3a.multiobjectdelete.enable","false")
.config("spark.hadoop.fs.s3a.fast.upload","true")
.config("spark.sql.warehouse", "s3a://bucket_name/")

And if you want to set up the S3 connection later, you can configure the SparkContext's Hadoop configuration after you initialize the SparkSession, like this:

spark.sparkContext.hadoopConfiguration.set("fs.s3a.multiobjectdelete.enable","false")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key","my access key")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key","my secret key")

Let me know if it helps.
