@rxin /df.py
Last active Jan 26, 2017

DataFrame simple aggregation performance benchmark
# Python (DataFrame API)
from pyspark.sql.functions import avg, col

data = sqlContext.load("/home/rxin/ints.parquet")
data.groupBy("a").agg(col("a"), avg("num")).collect()

// Scala (DataFrame API)
import org.apache.spark.sql.functions._

val data = sqlContext.load("/home/rxin/ints.parquet")
data.groupBy("a").agg(col("a"), avg("num")).collect()

# Python: generate the test data (1000 x 10000 = 10 million rows)
import random
from pyspark.sql import Row

data = sc.parallelize(xrange(1000)).flatMap(
    lambda x: [Row(a=random.randint(1, 10),
                   num=random.randint(1, 100),
                   str=("a" * random.randint(1, 30)))
               for i in xrange(10000)])
dataTable = sqlContext.createDataFrame(data)
dataTable.saveAsParquetFile("/home/rxin/ints.parquet")

# Python (RDD API): per-key average via (sum, count) pairs
pdata = sqlContext.load("/home/rxin/ints.parquet").select("a", "num")
sum_count = (
    pdata.map(lambda x: (x.a, [x.num, 1]))
         .reduceByKey(lambda x, y: [x[0] + y[0], x[1] + y[1]])
         .collect())
[(x[0], float(x[1][0]) / x[1][1]) for x in sum_count]

// Scala (RDD API): per-key average via (sum, count) pairs
val pdata = sqlContext.load("/home/rxin/ints.parquet").select("a", "num")
val sum_count = pdata.map { row => (row.getInt(0), (row.getInt(1), 1)) }
  .reduceByKey { (a, b) => (a._1 + b._1, a._2 + b._2) }
  .collect()
// toDouble avoids integer division when printing the average
sum_count.foreach { case (a, (sum, count)) => println(s"$a: ${sum.toDouble / count}") }
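
The gist itself does not show how the runs were timed; a minimal harness sketch, assuming a Spark 1.3-era Python 2 shell where sc, sqlContext, data, and pdata are already defined as above (bench and its parameters are made up here, not part of the gist):

import time

def bench(label, fn, runs=3):
    # Hypothetical helper: time fn() a few times and report the best run.
    times = []
    for _ in xrange(runs):
        start = time.time()
        fn()
        times.append(time.time() - start)
    print "%s: %.1fs (best of %d)" % (label, min(times), runs)

bench("DataFrame agg",
      lambda: data.groupBy("a").agg(col("a"), avg("num")).collect())
bench("RDD reduceByKey",
      lambda: pdata.map(lambda x: (x.a, [x.num, 1]))
                   .reduceByKey(lambda x, y: [x[0] + y[0], x[1] + y[1]])
                   .collect())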
melrief commented Mar 24, 2015

This is really interesting! I have some questions:

  • Is it always better to use DataFrames instead of the functional API? Is there a better way to implement sum_count on the RDD so that it is faster with Spark 1.3, or should the functional API never be used for this kind of operation?
  • Is the DataFrame implementation faster only because of the Catalyst optimizer? Would it be possible to bring this optimization to the functional API as well?
nealmcb commented Apr 9, 2015

In Python, it looks like you first have to do this to avoid NameError: name 'sqlContext' is not defined:

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
hnahak commented Apr 21, 2015

I tried DataFrames in Python and Scala, but col() and avg() inside the agg() function are not recognized in either application. Do we need to import another package?
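
For reference, col() and avg() live in the functions module and must be imported explicitly. In Python:

# bring col() and avg() into scope before calling agg()
from pyspark.sql.functions import avg, col

The Scala equivalent is import org.apache.spark.sql.functions._.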

samthebest commented Jul 24, 2015

@melrief

// is it always better to use DataFrames instead of the functional API? //

No, it depends on the application. DataFrames are great for interactive analysis and business-as-usual BI, but when writing my own machine-learning algorithms or building complex applications I stick to the functional API.

// the reason why the DataFrame implementation is faster is only because of the Catalyst optimizer? //

No, I imagine it also uses mutable integers under the hood, whereas the Scala version uses tuple copying.

// Would it be possible to bring this optimization also to the functional API? //

Yes and no: yes, in that the user could use something slightly lower level, like combineByKey or mapPartitions, to pre-aggregate mutably at the partition level first (a sketch follows below); no, in that it would be impossible for the Spark API to do this automagically.
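
A minimal sketch of that lower-level idea in Python, assuming the pdata RDD from the gist (seq_op and comb_op are made-up names): aggregateByKey folds each record into a mutable [sum, count] accumulator per key within each partition, so the per-record [num, 1] allocation of the reduceByKey version goes away.

def seq_op(acc, num):
    # Fold one record into the per-partition accumulator, mutating in place.
    acc[0] += num
    acc[1] += 1
    return acc

def comb_op(a, b):
    # Merge the pre-aggregated results of two partitions.
    a[0] += b[0]
    a[1] += b[1]
    return a

sum_count = (pdata
    .map(lambda row: (row.a, row.num))
    .aggregateByKey([0, 0], seq_op, comb_op)  # zero value: fresh [sum, count]
    .collect())
print [(k, float(s) / c) for (k, (s, c)) in sum_count]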

0x0FFF commented Aug 14, 2015

Here you can find some analysis of this benchmark and its code: http://0x0fff.com/spark-dataframes-are-faster-arent-they/

Sandy4321 commented Aug 31, 2016

This is confusing, since in data science and machine learning we need comparisons of real-life algorithms: take a random forest from scikit-learn (not from Spark ML, which is a very poor library) or a deep neural network, and run that.
