Shikhar Bhardwaj (shikharbhardwaj)

################################################################################
With 1 random shuffle before each iteration.
################################################################################
λ hogwild/mc ∴ ./a.out -v
Thread share size : 25049701
[INFO ] Called specialization
[INFO ] Start computing loss for iteration : 1
[INFO ] Parallel SGD: iteration 1, objective 1.82124e+09.
[INFO ] Number of elements to shuffle : 100198806
[INFO ] Start shuffle for iteration : 1
shikharbhardwaj / stdout
Last active August 22, 2017 16:25
Sparse SVM run on RCV1 dataset
λ mlpack_spike/hogwild ∴ g++ svm_main.cpp -O2 -std=c++11 -Wall -larmadillo -lmlpack -fopenmp
λ mlpack_spike/hogwild ∴ ./a.out
RMSE : 0.797936
Initial loss : 70176.4
Final loss : 114.332
RMSE : 0.050055
λ mlpack_spike/hogwild ∴ ./a.out
RMSE : 0.797936
Initial loss : 70176.4
Final loss : 114.369
shikharbhardwaj / sample_mc_test.cpp
Created July 11, 2017 21:53
Sample test for sparse matrix completion
#include <mlpack/core.hpp>
#include <mlpack/core/optimizers/parallel_sgd/parallel_sgd.hpp>
#include <mlpack/core/optimizers/parallel_sgd/decay_policies/constant_step.hpp>
#include <mlpack/core/optimizers/parallel_sgd/sparse_mc_function.hpp>
using namespace std;
using namespace mlpack;
using namespace mlpack::optimization;
int main()
{
shikharbhardwaj / sparse_mc_test.cpp
Last active July 5, 2017 13:25
Another approach for testing Matrix completion
#include <mlpack/core.hpp>
#include <mlpack/core/optimizers/parallel_sgd/parallel_sgd.hpp>
#include <mlpack/core/optimizers/parallel_sgd/decay_policies/constant_step.hpp>
#include <mlpack/core/optimizers/parallel_sgd/sparse_mc_function.hpp>
using namespace std;
using namespace mlpack::optimization;
int main()
{
  size_t numRows, numCols;
shikharbhardwaj / mc_test.cpp
Last active July 3, 2017 13:49
Matrix completion run
#include <mlpack/core.hpp>
#include <mlpack/core/optimizers/parallel_sgd/decay_policies/constant_step.hpp>
#include <mlpack/core/optimizers/parallel_sgd/parallel_sgd.hpp>
#include <mlpack/core/optimizers/parallel_sgd/sparse_mc_function.hpp>
using namespace std;
using namespace mlpack;
using namespace mlpack::optimization;
int main(int argc, char* argv[])
{
shikharbhardwaj / convert.cpp
Created June 25, 2017 20:27
Sparse SVM run with HOGWILD!
#include <armadillo>
#include <iostream>
int main() {
  // Load the data from the file into memory.
  const size_t num_examples = 23149, num_features = 47236;
  const size_t num_locations = 1780950;
  // Let's first get the data in dense form.
  arma::umat locations(2, num_locations);
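The preview cuts off above; for context, a minimal sketch of how Armadillo can assemble a sparse matrix from exactly such a 2 x n locations matrix (the toy coordinates and values below are illustrative assumptions, not the gist's actual continuation):

#include <armadillo>

int main() {
  // Toy stand-in for the RCV1 coordinates: three nonzero entries.
  // Row 0 holds row indices, row 1 holds column indices.
  arma::umat locations = { { 0, 1, 2 },
                           { 0, 2, 1 } };
  arma::vec values = { 1.0, 0.5, -2.0 };

  // Armadillo's batch-insertion constructor builds the sparse matrix
  // in a single pass over the (location, value) pairs.
  arma::sp_mat data(locations, values, 3, 3);
  data.print("data:");
  return 0;
}

Batch insertion avoids the per-element overhead of filling an arma::sp_mat entry by entry, which matters at RCV1 scale (1780950 nonzeros).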
shikharbhardwaj / CAS_impl.cpp
Created June 18, 2017 14:52
Implementation of HOGWILD! in mlpack
#include <atomic>

// Maintain the decision variable as std::atomic.
std::atomic<float> x[10];

// Lock-free additive update: retry the compare-and-swap until no other
// thread has modified the value between the load and the exchange.
void update(std::atomic<float>& f, float val)
{
  float old;
  do {
    old = f.load();
  } while (!f.compare_exchange_weak(old, old + val));
}
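A minimal sketch of how this update could be exercised from concurrent threads, in the spirit of HOGWILD!'s lock-free writes (the OpenMP loop, iteration count, and step value are illustrative assumptions, not part of the gist; compile with -fopenmp):

#include <atomic>
#include <iostream>

std::atomic<float> x[10];

void update(std::atomic<float>& f, float val)
{
  float old;
  do {
    old = f.load();
  } while (!f.compare_exchange_weak(old, old + val));
}

int main()
{
  // All threads hammer the same ten components without locks; a CAS
  // retries only when another thread won the race for that component.
  #pragma omp parallel for
  for (int i = 0; i < 10000; ++i)
    update(x[i % 10], 0.1f);

  // Each component received 1000 additions of 0.1, so prints ~100.
  for (auto& xi : x)
    std::cout << xi.load() << '\n';
}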
shikharbhardwaj / KMeans_test.py
Last active June 12, 2017 17:15
Work on the parallel KMeans implementation in mlpack
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
def showKMeans(data, clusters):
    # Plot the data points and the learned centroids on one figure.
    plt.clf()
    plt.scatter(data[:, 0], data[:, 1])
    plt.scatter(clusters[:, 0], clusters[:, 1])
    plt.show()
<?xml version="1.0" encoding="UTF-8"?>
<Site BuildName="(empty)"
BuildStamp="20170607-1102-Experimental"
Name="(empty)"
Generator="ctest-3.7.2"
CompilerName=""
CompilerVersion=""
OSName="Linux"
Hostname="workstation"
OSRelease="4.10.0-21-generic"
shikharbhardwaj / naive_bayes_classifier_impl.hpp
Last active June 9, 2017 15:26
Implementation of the parallel naive bayes classifier in mlpack
/**
* @file naive_bayes_classifier_impl.hpp
* @author Parikshit Ram (pram@cc.gatech.edu)
* @author Vahab Akbarzadeh (v.akbarzadeh@gmail.com)
* @author Shihao Jing (shihao.jing810@gmail.com)
*
* A Naive Bayes Classifier which parametrically estimates the distribution of
* the features. This classifier makes its predictions based on the assumption
* that the features have been sampled from a set of Gaussians with diagonal
* covariance.