x=["SyBPtQfAZ","H1S8UE-Rb","S1sRrN-CW","Syt0r4bRZ","HkPCrEZ0Z","rJ5C67-C-","H1T2hmZAb","Hymt27b0Z",
"HJ5AUm-CZ","r1nzLmWAb","HkGJUXb0-","SkERSm-0-","BJlrSmbAZ","HJXyS7bRb","SyhRVm-Rb","SkwAEQbAb",
"B1mvVm-C-","S1TgE7WR-","H1DkN7ZCZ","SJ71VXZAZ","ryk77mbRZ","HJIhGXWCZ","BJInMmWC-","H1I3M7Z0b",
"Bk-ofQZRb","SJx9GQb0-","BJoBfQ-0b","SJyVzQ-C-","HJNGGmZ0Z","H1kMMmb0-","HkGbzX-AW","rJIgf7bAZ",
"SyCyMm-0W","r1ayG7WRZ","H1Nyf7W0Z","HkCvZXbC-","ByED-X-0W","ByuI-mW0W","H1BHbmWCZ","SkqV-XZRZ",
"rk07ZXZRb","HJCXZQbAZ","H1bbbXZC-","rkaqxm-0b","S1XolQbRW","B1TYxm-0-","Bkftl7ZCW","SyBBgXWAZ",
"SkrHeXbCW","S1ANxQW0b","ByOExmWAb","By4Nxm-CW","r1l4eQW0Z","B12QlQWRW","ry831QWAb","B1EGg7ZCb",
"HyMTkQZAb","rJ6iJmWCW","rkZB1XbRZ","HJnQJXbC-","Sy3fJXbA-","HJ8W1Q-0Z","HknbyQbC-","BkrsAzWAb",
"ryH20GbRW","r1HhRfWRZ","B1KFAGWAZ","Byht0GbRZ","B1hYRMbCW","S1q_Cz-Cb","BJ7d0fW0b","HyydRMZC-",
"SyZI0GWCZ","rJSr0GZR-","ryZERzWCZ","rkeZRGbRW","ryazCMbR-","Hyig0zb0Z","H11lAfbCW","HkXWCMbRW",
@jarutis
jarutis / tf_serving.sh
Created October 24, 2016 18:37
Install TensorFlow Serving on CentOS 7 (CPU)
sudo su
# Java
yum -y install java-1.8.0-openjdk-devel
# Build essentials (minimal)
yum -y install gcc gcc-c++ kernel-devel make automake autoconf swig git unzip libtool binutils
# Extra Packages for Enterprise Linux (EPEL) (for pip, zeromq3)
yum -y install epel-release
@waleking
waleking / SparkGibbsLDA.scala
Last active January 31, 2020 11:15
We implement Gibbs sampling for LDA in Spark. This version performs much better than the alpha version and can now handle 3,196,204 words, 100 topics, and 1,000 sampling iterations on a server in 161.7 minutes. To fix the long-running collect() step in the alpha version, we use the cache() method at lines 261 and 262. We also solve a pile o…
package topic
import spark.broadcast._
import spark.SparkContext
import spark.SparkContext._
import spark.RDD
import spark.storage.StorageLevel
import scala.util.Random
import scala.math.{ sqrt, log, pow, abs, exp, min, max }
import scala.collection.mutable.HashMap
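The preview above stops at the imports. As a rough, single-machine illustration of the collapsed Gibbs update the description refers to (the Spark version distributes documents and caches the count matrices between iterations), here is a minimal numpy sketch; the function and variable names (gibbs_lda, n_dk, n_kw) are my own and not taken from the gist.

import numpy as np

def gibbs_lda(docs, n_topics, vocab_size, n_iter=100, alpha=0.1, beta=0.01):
    """Minimal collapsed Gibbs sampler for LDA; illustrative, single machine only."""
    rng = np.random.default_rng(0)
    # docs: list of documents, each a list of word ids in [0, vocab_size)
    z = [rng.integers(n_topics, size=len(doc)) for doc in docs]  # topic of each token
    n_dk = np.zeros((len(docs), n_topics))   # document-topic counts
    n_kw = np.zeros((n_topics, vocab_size))  # topic-word counts
    n_k = np.zeros(n_topics)                 # tokens per topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # remove the token's current assignment from the counts
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                # full conditional p(z = k | everything else), up to a constant
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    return n_dk, n_kw  # unnormalized doc-topic and topic-word count matrices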
@MLnick
MLnick / MovieSimilarities.scala
Created April 1, 2013 17:49
Movie Similarities with Spark
import spark.SparkContext
import SparkContext._
/**
* A port of [[http://blog.echen.me/2012/02/09/movie-recommendations-and-more-via-mapreduce-and-scalding/]]
* to Spark.
* Uses movie ratings data from the MovieLens 100k dataset found at [[http://www.grouplens.org/node/73]]
*/
object MovieSimilarities {
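The snippet cuts off at the object declaration. For the idea of the computation without Spark, here is a plain-Python sketch of the co-rating cosine similarity described in the linked blog post; the function name and the (user, movie, rating) triple format are assumptions of this sketch, not the gist's API.

from collections import defaultdict
from itertools import combinations
from math import sqrt

def movie_similarities(ratings, min_coraters=2):
    """Cosine similarity between movies from (user, movie, rating) triples; illustrative only."""
    by_user = defaultdict(dict)
    for user, movie, rating in ratings:
        by_user[user][movie] = rating
    dot = defaultdict(float)   # sum of r1 * r2 over users who rated both movies
    sq = defaultdict(float)    # squared rating norm of each movie
    count = defaultdict(int)   # number of co-rating users per movie pair
    for user_ratings in by_user.values():
        for movie, r in user_ratings.items():
            sq[movie] += r * r
        for (m1, r1), (m2, r2) in combinations(sorted(user_ratings.items()), 2):
            dot[(m1, m2)] += r1 * r2
            count[(m1, m2)] += 1
    return {
        pair: d / (sqrt(sq[pair[0]]) * sqrt(sq[pair[1]]))
        for pair, d in dot.items()
        if count[pair] >= min_coraters
    }

The Spark port replaces the per-user dictionary with RDD transformations (self-join on user, then reduce by movie pair), but the similarity formula is the same.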
@fabianp
fabianp / group_lasso.py
Created December 2, 2011 14:17
group lasso
import numpy as np
from scipy import linalg, optimize
MAX_ITER = 100
def group_lasso(X, y, alpha, groups, max_iter=MAX_ITER, rtol=1e-6,
                verbose=False):
    """
    Linear least-squares with l2/l1 regularization solver.
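The preview stops inside the docstring. The l2/l1 penalty it mentions is typically handled with block (group-wise) soft-thresholding; the helper below is a minimal sketch of that proximal step, my own illustration rather than code from the gist.

import numpy as np

def block_soft_threshold(w, threshold):
    """Proximal operator of threshold * ||w||_2: shrink a whole group of coefficients."""
    norm = np.linalg.norm(w)
    if norm <= threshold:
        return np.zeros_like(w)          # the entire group is zeroed out at once
    return (1.0 - threshold / norm) * w  # otherwise the group is shrunk uniformly

# Example: a group with norm 5 shrunk by a threshold of 2
print(block_soft_threshold(np.array([3.0, 4.0]), 2.0))  # -> [1.8 2.4]

Groups whose norm falls below the threshold are removed entirely, which is what gives the group lasso its group-level feature selection behaviour.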