Spark training on the KDD Cup 2012 Track 2 dataset
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.mllib.classification.LogisticRegressionWithSGD

// Load the training data in LIBSVM format from HDFS.
// Binary labels; 16777216 (2^24) feature dimensions; 64 input partitions.
val training = MLUtils.loadLibSVMFile(sc, "hdfs://dm01:8020/user/hive/warehouse/kdd12track2.db/training_libsvmfmt_10k", multiclass = false, numFeatures = 16777216, minPartitions = 64)
//val training = MLUtils.loadLibSVMFile(sc, "hdfs://dm01:8020/user/hive/warehouse/kdd12track2.db/training_libsvmfmt_10k", multiclass = false)

// Train a logistic regression model with SGD.
val model = LogisticRegressionWithSGD.train(training, numIterations = 1)
//val model = LogisticRegressionWithSGD.train(training, numIterations = 20)
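
// A minimal usage sketch, not part of the original gist: once trained, the
// model can score the data with Spark 1.0's MLlib API. The error computation
// below is illustrative only; no such evaluation appears in the benchmark.
val predictionAndLabel = training.map(p => (model.predict(p.features), p.label))
val trainErr = predictionAndLabel.filter { case (pred, label) => pred != label }.count.toDouble / training.count
println(s"Training error = $trainErr")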
myui commented Jun 10, 2014
The dataset used for training:
https://dl.dropboxusercontent.com/u/13123103/spark/training_libsvmfmt_10k.t

We evaluated Spark 1.0 on 33 nodes, with each executor using 7 GB of memory.
The Hadoop version used in the evaluation was CDH3u6.

Spark seems too slow (it does not finish within 30 minutes!), whereas LIBLINEAR needs just 2m39s to converge with 11 iterations.
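
One likely contributor to the gap (my assumption, not verified in the original benchmark) is that the RDD returned by loadLibSVMFile is not cached, so every SGD iteration re-reads and re-parses the data from HDFS. A minimal sketch, assuming the same training RDD as above and illustrative tuning values:

// Sketch: cache the parsed RDD in memory before iterative training.
val cachedTraining = training.cache()

// Spark 1.0 also exposes a train() overload with step size and mini-batch
// fraction; the values below are assumptions for illustration, not tuned.
val tunedModel = LogisticRegressionWithSGD.train(
  cachedTraining,
  numIterations = 20,
  stepSize = 1.0,          // default step size in Spark 1.0
  miniBatchFraction = 0.1  // sample 10% of the data per iteration (assumed)
)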
