ryenchen irwenqiang (GitHub gists)
#!/usr/bin/env python
from numpy import asmatrix, asarray, ones, zeros, mean, sum, arange, prod, dot, loadtxt
from numpy.random import random, randint
import pickle

MISSING_VALUE = -1  # a constant I will use to denote missing integer values

def impute_hidden_node(E, I, theta, sample_hidden):
    ...  # body truncated in the gist preview
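The snippet above marks missing integer entries with a sentinel (MISSING_VALUE = -1). As a minimal sketch of working with such a matrix, here is a simple column-mean imputation baseline; this is my own illustration, not the gist's EM-style `impute_hidden_node` routine, and the function name is hypothetical:

```python
import numpy as np

MISSING_VALUE = -1  # sentinel for missing integer entries, as in the gist

def impute_column_means(E):
    """Replace MISSING_VALUE entries with the mean of the observed values
    in the same column. A simple baseline, not the gist's EM-based routine."""
    E = np.asarray(E, dtype=float)
    out = E.copy()
    for j in range(E.shape[1]):
        col = E[:, j]
        observed = col[col != MISSING_VALUE]
        if observed.size:  # leave the column untouched if nothing is observed
            out[col == MISSING_VALUE, j] = observed.mean()
    return out

# The -1 in column 0 is replaced by mean(1, 3) = 2.0
filled = impute_column_means([[1, 5], [-1, 7], [3, 9]])
```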
irwenqiang / als.cpp
Last active January 3, 2016 07:49
als source code reading
/**
 * iver:
 * In ALS, we factorize the matrix into two lower-rank matrices such that A ~= U*V'.
 * It is not a symmetric matrix factorization where A ~= Q'*Q, as you write.
 *
 * The non-zero entries of the matrix A are the edges between user and item nodes.
 * The edge direction is always user -> item.
 *
 * The user latent vectors are stored in the user vertices (vertex.num_out_edges() > 0).
 * The item latent vectors are stored in the item vertices (vertex.num_out_edges() == 0).
 */
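The comment describes the core ALS idea: factorize A ~= U*V' by alternating least-squares solves. A minimal dense NumPy sketch of that scheme (my own toy version; the GraphChi code above works edge-by-edge on a sparse graph instead):

```python
import numpy as np

def als(A, rank=2, n_iters=50, reg=0.01, seed=0):
    """Factorize A ~= U @ V.T by alternating ridge-regression solves.
    Dense toy version for illustration only."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = reg * np.eye(rank)
    for _ in range(n_iters):
        U = A @ V @ np.linalg.inv(V.T @ V + I)    # fix V, solve for U
        V = A.T @ U @ np.linalg.inv(U.T @ U + I)  # fix U, solve for V
    return U, V

A = np.outer([1.0, 2.0, 3.0], [1.0, 0.5])  # rank-1 matrix, easy to recover
U, V = als(A, rank=1)
err = np.abs(A - U @ V.T).max()  # small residual (up to the ridge shrinkage)
```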
irwenqiang / nmf.cpp
Created January 15, 2014 07:42
In NMF, the matrix A is factorized into two lower-rank matrices such that A ~= U*V'. The non-zero entries of the matrix A are the edges between user and item nodes. The edge direction is always user -> item. The user latent vectors are stored in the user vertices ("prev"); the item latent vectors are stored in the item vertices ("prev").
/**
* Copyright (c) 2009 Carnegie Mellon University.
* All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
 */
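The NMF description above can be sketched in plain NumPy. This is my own toy version of the standard Lee-Seung multiplicative updates for A ~= U*V', not the CMU/GraphChi implementation:

```python
import numpy as np

def nmf(A, rank=2, n_iters=500, eps=1e-9, seed=0):
    """Approximate non-negative A ~= U @ V.T via Lee-Seung
    multiplicative updates (Frobenius-norm objective)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    U = rng.random((m, rank)) + eps
    V = rng.random((n, rank)) + eps
    for _ in range(n_iters):
        U *= (A @ V) / (U @ (V.T @ V) + eps)    # update U with V fixed
        V *= (A.T @ U) / (V @ (U.T @ U) + eps)  # update V with U fixed
    return U, V

A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # rank-1, non-negative
U, V = nmf(A, rank=1)
err = np.abs(A - U @ V.T).max()
```

The multiplicative form keeps U and V non-negative throughout, since each update multiplies by a ratio of non-negative terms.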
package topic
import spark.broadcast._
import spark.SparkContext
import spark.SparkContext._
import spark.RDD
import spark.storage.StorageLevel
import scala.util.Random
import scala.math.{ sqrt, log, pow, abs, exp, min, max }
import scala.collection.mutable.HashMap
/**
* A port of [[http://blog.echen.me/2012/02/09/movie-recommendations-and-more-via-mapreduce-and-scalding/]]
* to Spark.
* Uses movie ratings data from MovieLens 100k dataset found at [[http://www.grouplens.org/node/73]]
*/
object MovieSimilarities {
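The MovieSimilarities job ports Edwin Chen's item-item approach: for each pair of movies, compute a similarity over the users who rated both. A hedged plain-Python sketch of that core step (cosine similarity on co-ratings; no Spark, and the function name is my own):

```python
from math import sqrt

def cosine_similarity(ratings_a, ratings_b):
    """ratings_a / ratings_b: dicts of user_id -> rating for two movies.
    Cosine over the users who rated both, as in the item-item approach."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0
    dot = sum(ratings_a[u] * ratings_b[u] for u in common)
    na = sqrt(sum(ratings_a[u] ** 2 for u in common))
    nb = sqrt(sum(ratings_b[u] ** 2 for u in common))
    return dot / (na * nb)

movie1 = {"u1": 5.0, "u2": 3.0, "u3": 4.0}
movie2 = {"u1": 5.0, "u2": 3.0}          # agrees with movie1 on shared users
sim = cosine_similarity(movie1, movie2)  # 1.0: identical on the overlap
```

In the Spark version this pairwise step runs after grouping ratings by user and emitting all movie pairs each user has co-rated.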
irwenqiang / group_lasso.py
Created December 15, 2015 10:06 — forked from fabianp/group_lasso.py
group lasso
import numpy as np
from scipy import linalg, optimize
MAX_ITER = 100
def group_lasso(X, y, alpha, groups, max_iter=MAX_ITER, rtol=1e-6,
verbose=False):
"""
Linear least-squares with l2/l1 regularization solver.
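The l2/l1 (group lasso) penalty sums the l2 norms of coefficient groups, which drives whole groups to zero at once. Its proximal operator is block soft-thresholding; a minimal sketch of that operator (my own illustration, not fabianp's solver):

```python
import numpy as np

def prox_group_lasso(w, groups, step):
    """Block soft-thresholding: for each index array g in `groups`, shrink
    the subvector w[g] toward zero by `step` in l2 norm, zeroing it when
    ||w[g]|| <= step. Proximal operator of step * sum_g ||w[g]||_2."""
    out = np.zeros_like(w, dtype=float)
    for g in groups:
        sub = w[g]
        norm = np.linalg.norm(sub)
        if norm > step:
            out[g] = (1 - step / norm) * sub
    return out

w = np.array([3.0, 4.0, 0.1, 0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
# First group: ||[3, 4]|| = 5 > 1, scaled by (1 - 1/5) -> [2.4, 3.2].
# Second group: ||[0.1, 0.1]|| < 1, so it is zeroed entirely.
shrunk = prox_group_lasso(w, groups, step=1.0)
```

Applying this operator after each gradient step on the least-squares loss gives a proximal-gradient solver for the objective in the gist's docstring.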
#include <algorithm>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>
#include "hdf5.h"
#include "caffe/common.hpp"
#ifndef CAFFE_NET_HPP_
#define CAFFE_NET_HPP_
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>
#include "caffe/blob.hpp"