@rxin
Last active January 26, 2017 00:44
DataFrame simple aggregation performance benchmark
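
Two implementations of the same job, in Python and Scala (Spark 1.x): the average of num for each value of the key a over a Parquet file of ~10M random rows, first through the DataFrame API (groupBy/avg), then as a hand-rolled RDD sum/count aggregation. The data-generation script is included below.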
# Python DataFrame version: average of "num" per key "a" (Spark 1.x API).
from pyspark.sql.functions import avg, col
data = sqlContext.load("/home/rxin/ints.parquet")
data.groupBy("a").agg(col("a"), avg("num")).collect()
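
The DataFrame call above goes through the same Catalyst planner as Spark SQL, so the aggregation can equivalently be written as a SQL query. A minimal sketch, assuming the same Spark 1.x sqlContext; the temp-table name "ints" is illustrative, not part of the original benchmark:

# Sketch: register the DataFrame as a temp table and run the identical
# aggregation through Spark SQL (the table name "ints" is illustrative).
data.registerTempTable("ints")
sqlContext.sql("SELECT a, AVG(num) AS avg_num FROM ints GROUP BY a").collect()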
// Scala DataFrame version: the same aggregation through the DataFrame API.
import org.apache.spark.sql.functions._
val data = sqlContext.load("/home/rxin/ints.parquet")
data.groupBy("a").agg(col("a"), avg("num")).collect()
# Generate ~10M rows (1000 x 10000) with key "a" in [1, 10], value "num"
# in [1, 100], and a random-length padding string, saved as Parquet.
import random
from pyspark.sql import Row
data = sc.parallelize(xrange(1000)).flatMap(lambda x: [
    Row(a=random.randint(1, 10), num=random.randint(1, 100),
        str=("a" * random.randint(1, 30))) for i in xrange(10000)])
dataTable = sqlContext.createDataFrame(data)
dataTable.saveAsParquetFile("/home/rxin/ints.parquet")
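
Before benchmarking, a quick count can confirm the file holds the expected number of rows. A minimal sketch, assuming the same sqlContext:

# Sketch: the generator above emits 1000 * 10000 = 10,000,000 rows.
sqlContext.load("/home/rxin/ints.parquet").count()  # expect 10000000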
# Python RDD version: per-key (sum, count), then divide on the driver.
pdata = sqlContext.load("/home/rxin/ints.parquet").select("a", "num")
sum_count = (
    pdata.map(lambda x: (x.a, [x.num, 1]))
         .reduceByKey(lambda x, y: [x[0] + y[0], x[1] + y[1]])
         .collect())
[(x[0], float(x[1][0]) / x[1][1]) for x in sum_count]
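
The same per-key average can also be folded with aggregateByKey, which accumulates a (sum, count) pair per partition instead of allocating a two-element list per record. A minimal sketch reusing pdata above; this variant is not part of the original benchmark:

# Sketch: per-key average via aggregateByKey with a (sum, count) accumulator.
avgs = (pdata.map(lambda x: (x.a, x.num))
             .aggregateByKey((0, 0),
                             lambda acc, v: (acc[0] + v, acc[1] + 1),
                             lambda a, b: (a[0] + b[0], a[1] + b[1]))
             .mapValues(lambda p: float(p[0]) / p[1])
             .collect())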
// Scala RDD version: the same sum/count aggregation. Dividing as Double
// avoids truncating the average to an integer.
val pdata = sqlContext.load("/home/rxin/ints.parquet").select("a", "num")
val sum_count = pdata
  .map { row => (row.getInt(0), (row.getInt(1), 1)) }
  .reduceByKey { (a, b) => (a._1 + b._1, a._2 + b._2) }
  .collect()
sum_count.foreach { case (a, (sum, count)) => println(s"$a: ${sum.toDouble / count}") }
@0x0FFF

0x0FFF commented Aug 14, 2015

Here you can find some analysis of this benchmark and its code: http://0x0fff.com/spark-dataframes-are-faster-arent-they/

@Sandy4321

This is confusing, since in data science and machine learning we need comparisons of real-life algorithms. Take a random forest from scikit-learn (not from Spark ML, which is a very poor library) or a deep neural network, and run that.
