Last active August 9, 2023 17:32
Spark/mllib SVD example
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.mllib.linalg._
import org.apache.spark.{SparkConf, SparkContext}
// To use the latest sparse SVD implementation, please build your spark-assembly after this
// change:
// Input tsv with 3 fields: rowIndex(Long), columnIndex(Long), weight(Double), indices start with 0
// Assume the number of rows is larger than the number of columns, and the number of columns is
// smaller than Int.MaxValue
// sc is a SparkContext defined in the job
val inputData = sc.textFile("hdfs://...").map { line =>
  val parts = line.split("\t")
  (parts(0).toLong, parts(1).toInt, parts(2).toDouble)

// Number of columns (indices are 0-based, so max column index + 1)
val nCol = + 1

// Construct rows of the RowMatrix: group entries by row index into sparse vectors
// (indices within a SparseVector must be in increasing order)
val dataRows = inputData.groupBy(_._1).map[(Long, Vector)] { row =>
  val (indices, values) = => (e._2, e._3)).unzip
  (row._1, new SparseVector(nCol, indices.toArray, values.toArray))

// Compute the 20 largest singular values and the corresponding singular vectors
val svd = new RowMatrix(, computeU = true)

// Write results to hdfs.
// V is a local matrix stored in column-major order; regroup and transpose to get its rows.
val V = svd.V.toArray.grouped(svd.V.numRows).toList.transpose
sc.makeRDD(V, 1).zipWithIndex()
  .map(line => line._2 + "\t" + line._1.mkString("\t")) // make tsv line starting with column index

// U is distributed; its rows are in the same order as the rows fed to the RowMatrix,
// so zip them back with the original row indices. (The U and s output paths below are
// reconstructed by analogy with the V path above.) => row.toArray).zip(
  .map(line => line._2 + "\t" + line._1.mkString("\t")) // make tsv line starting with row index

// Singular values, largest first
sc.makeRDD(svd.s.toArray, 1)
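For intuition, here is a minimal local NumPy sketch (not Spark, and not part of the gist) of the same computation: assemble a matrix from (rowIndex, columnIndex, weight) triples and take a rank-k truncated SVD. The triples, the matrix shape, and k are made up for illustration.

```python
import numpy as np

# (rowIndex, columnIndex, weight) triples; made-up values for illustration
triples = [(0, 0, 1.0), (0, 2, 2.0), (1, 1, 3.0), (2, 0, 4.0), (3, 2, 5.0)]
n_row = max(t[0] for t in triples) + 1
n_col = max(t[1] for t in triples) + 1    # mirrors nCol in the Scala code

A = np.zeros((n_row, n_col))
for i, j, w in triples:
    A[i, j] = w

k = 2                                     # the gist uses k = 20
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, s_k, V_k = U[:, :k], s[:k], Vt[:k].T  # keep the k largest singular values

print(np.allclose(A, U @ np.diag(s) @ Vt))  # full SVD reconstructs A: True
```

The truncated factors `U_k`, `s_k`, `V_k` correspond to the distributed `svd.U`, `svd.s`, and `svd.V` in the Spark job.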
gocanal commented Oct 8, 2015

Thank you very much for sharing the code. I am looking for a solution that supports matrix inversion, and SVD could be one. A couple of questions:

  1. Looking at the Spark API doc for RowMatrix.computeSVD:
    matrix A (m x n) ... we assume n is smaller than m.
    Does this mean that computeSVD does not work for a square matrix?
  2. What if matrix A is bigger than the physical memory of a node? Will the program distribute sub-matrices to different nodes in the Hadoop cluster?

thank you very much
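On the matrix-inverse use case: once you have the SVD, the Moore-Penrose pseudoinverse follows directly from it. A minimal local NumPy sketch (not Spark; the matrix A and the cutoff are made up), where the cutoff plays the same role as computeSVD's rCond parameter of dropping near-zero singular values:

```python
import numpy as np

A = np.array([[4.0, 0.0], [0.0, 2.0], [1.0, 1.0]])  # tall, full column rank
U, s, Vt = np.linalg.svd(A, full_matrices=False)

rcond = 1e-9                        # analogous to computeSVD's rCond
s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)    # invert only well-conditioned values
A_pinv = Vt.T @ np.diag(s_inv) @ U.T                # pseudoinverse V * S^-1 * U^T

print(np.allclose(A_pinv, np.linalg.pinv(A)))       # prints True
```

For a square invertible A this pseudoinverse coincides with the ordinary inverse; for rank-deficient or rectangular A it is the least-squares inverse.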


Sayan21 commented Nov 2, 2015


Can you please tell me which article's method you implemented? I want to go through the theory a little.
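As far as I can tell, the underlying idea in RowMatrix.computeSVD is the standard tall-skinny approach: form the small n x n Gram matrix A^T A (computed distributedly in Spark, with ARPACK for the eigendecomposition when n is large), then recover the singular values and vectors of A from its eigenpairs. A small NumPy sketch of that idea, with a random matrix standing in for A:

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(100, 5))  # tall matrix, m >> n
G = A.T @ A                       # n x n Gram matrix; small enough to hold locally
eigvals, V = np.linalg.eigh(G)    # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1] # reorder largest-first
s = np.sqrt(eigvals[order])       # singular values of A are sqrt of eigenvalues of A^T A
V = V[:, order]                   # right singular vectors
U = A @ V / s                     # left singular vectors, columns scaled by 1/s

print(np.allclose(A, U @ np.diag(s) @ V.T))  # prints True
```

This is why the doc assumes n is smaller than m: only the n x n Gram matrix ever has to be eigendecomposed.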


