Running a scalaz-stream Process inside Spark (example)
import org.apache.spark._
import scalaz.stream._

/**
 * Simple proof of concept - fill an RDD from files that have been
 * processed by a scalaz-stream Process (in parallel).
 */
object SparkScalazStream {

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark scalaz-stream test")
    val spark = new SparkContext(conf)

    // One partition per input file, so each file is processed in parallel.
    val files = spark.parallelize(args.toSeq, args.length)

    val contents = files.flatMap { f =>
      // Assuming f exists on every node. Would really read from HDFS...
      val in = scalaz.stream.io.linesR(f)
      val p = in // actually, some really complicated stream
                 // processing of in that relies on order, etc.
      p.runLog   // TODO MUST AVOID THIS!!!! Materialises the whole stream in memory.
        .run
    }

    val lines = contents.map(_ => 1).reduce(_ + _)
    println("lines = " + lines)

    spark.stop()
  }
}

/*
 * TODO Solve this problem:
 * turn p (a Process[Task, String]) into a TraversableOnce[String]
 * and let Spark drive the state machine, rather than the Task.
 */
Since Iterator is a TraversableOnce, I attempted this:
https://gist.github.com/florianverhein/2ed965bde7324cb73325
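For reference, here is a minimal sketch of that idea: an Iterator that pulls one element per next() call, so that Spark drives the Process rather than a single Task. It assumes scalaz-stream's Process.toTask (a Task that, each time it is run, evaluates the process just far enough to emit the next element, and fails with Cause.Terminated(Cause.End) once the process halts); toIterator and ProcessIterator are made-up names, not part of either library.

```scala
import scalaz.{-\/, \/-}
import scalaz.concurrent.Task
import scalaz.stream.{Cause, Process}

object ProcessIterator {

  // Hypothetical adapter: wraps a Process[Task, A] as an Iterator[A] so a
  // consumer like Spark can pull elements one at a time.
  def toIterator[A](p: Process[Task, A]): Iterator[A] = new Iterator[A] {

    // Each run of this Task advances the process by one emitted element.
    private val step: Task[A] = Process.toTask(p)
    private var cached: Option[A] = None
    private var done = false

    def hasNext: Boolean = {
      if (cached.isEmpty && !done) {
        step.attemptRun match {
          case \/-(a)                           => cached = Some(a)
          case -\/(Cause.Terminated(Cause.End)) => done = true // halted normally
          case -\/(err)                         => throw err   // real failure
        }
      }
      cached.isDefined
    }

    def next(): A =
      if (hasNext) { val a = cached.get; cached = None; a }
      else throw new NoSuchElementException("next on exhausted Process")
  }
}
```

With something like this, the flatMap above could become `files.flatMap { f => ProcessIterator.toIterator(io.linesR(f)) }`, so nothing is materialised eagerly.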
Thanks @pchlupacek. Would you mind elaborating? I thought about implementing an iterator that steps through the Process on each next() and returns the emitted value somehow... but I'm unsure of the details. I think this is what you meant with Process.step? Why would you not recommend this?
I need to parallelise beyond a single host due to data size, so running Processes within Spark seems a natural solution (I have a library of these and would like to lift them into Spark - and later, I would also like to process data in RDDs with scalaz-stream Processes via mapPartitions, as sketched below). I don't know much about njoin or mergeN beyond reading the API just now, but I think these would be limited to a single host.
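For the mapPartitions direction, a rough sketch of what lifting a pure transducer over an RDD could look like. Here lineTransducer is a made-up stand-in for one of the library's Process1 transformations (process1.id just passes elements through). Note that Process.emitAll(iter.toSeq) materialises the whole partition in memory, so this is only a first cut; an incremental driver like the iterator sketch above would avoid that.

```scala
import org.apache.spark.rdd.RDD
import scalaz.stream.{Process, Process1, process1}

object ProcessOverRDD {

  // Hypothetical stand-in for one of the library's order-dependent transformations.
  val lineTransducer: Process1[String, String] = process1.id[String]

  // Run a pure Process1 over every partition of an RDD.
  def pipePartitions(rdd: RDD[String], p1: Process1[String, String]): RDD[String] =
    rdd.mapPartitions { iter =>
      // emitAll materialises the partition before piping it through p1.
      Process.emitAll(iter.toSeq).pipe(p1).toList.iterator
    }
}
```

One caveat: p1 is captured in the closure shipped to the executors, so it (and anything it references) would need to be serialisable.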