@coatless
Last active August 18, 2017 14:04
#include <RcppArmadillo.h>
#include <omp.h>
using namespace Rcpp;
using namespace arma;
// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::plugins(openmp)]]
// [[Rcpp::export]]
void updateImplicitX_p(arma::mat & X, const arma::mat & Y, const arma::mat & P, const arma::mat & C, double lambda, int cores = 1) {
  int num_users   = C.n_rows;
  int num_prods   = C.n_cols;
  int num_factors = Y.n_cols; // or X.n_cols

  Rprintf(".");

  // Terms shared by every user are computed once, outside the loop.
  arma::mat YTY        = Y.t() * Y;
  arma::mat fact_eye   = eye(num_prods, num_prods);
  arma::mat lambda_eye = lambda * eye(num_factors, num_factors);

  // Each user's factor update is independent of the others, so the loop
  // parallelizes without synchronization.
#pragma omp parallel for num_threads(cores)
  for (int u = 0; u < num_users; u++) {
    arma::mat Cu = diagmat(C.row(u));

    // Solve (Y'Y + Y'Cu Y + lambda*I) x_u = Y'(Cu + I) p_u for user u's factors.
    arma::mat YTCuIY = Y.t() * Cu * Y;
    arma::mat YTCupu = Y.t() * (Cu + fact_eye) * P.row(u).t();
    arma::mat WuT    = YTY + YTCuIY + lambda_eye;
    arma::mat xu     = solve(WuT, YTCupu);

    // Update gradient -- maybe a slow operation in parallel?
    // Each iteration writes a distinct row of X, so there is no data race.
    X.row(u) = xu.t();
  }
}
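
// The R benchmark chunk below also calls updateImplicitX(), which is not defined
// in this gist. A minimal serial sketch of it, assuming it simply mirrors the
// parallel body above with the OpenMP pragma removed:
// [[Rcpp::export]]
void updateImplicitX(arma::mat & X, const arma::mat & Y, const arma::mat & P, const arma::mat & C, double lambda) {
  int num_users   = C.n_rows;
  int num_prods   = C.n_cols;
  int num_factors = Y.n_cols;

  arma::mat YTY        = Y.t() * Y;
  arma::mat fact_eye   = eye(num_prods, num_prods);
  arma::mat lambda_eye = lambda * eye(num_factors, num_factors);

  for (int u = 0; u < num_users; u++) {
    arma::mat Cu     = diagmat(C.row(u));
    arma::mat WuT    = YTY + Y.t() * Cu * Y + lambda_eye;
    arma::mat YTCupu = Y.t() * (Cu + fact_eye) * P.row(u).t();
    X.row(u) = solve(WuT, YTCupu).t();
  }
}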
/*** R
test = matrix(rnorm(10000), ncol = 100, nrow = 100)

# install.packages("microbenchmark")  # install once if needed
library(microbenchmark)

microbenchmark(updateImplicitX(test, test, test, test, 5),
               updateImplicitX_p(test, test, test, test, 5, 1),
               updateImplicitX_p(test, test, test, test, 5, 2),
               updateImplicitX_p(test, test, test, test, 5, 3),
               updateImplicitX_p(test, test, test, test, 5, 4),
               updateImplicitX_p(test, test, test, test, 5, 5))
*/
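
// To compile the functions and run the benchmark chunk above, pass this file to
// Rcpp::sourceCpp(), e.g. Rcpp::sourceCpp("updateImplicitX_p.cpp")
// (the file name here is only a placeholder).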