@ibayer
ibayer / little.cpp
Last active May 4, 2017 21:22
Test fastFM2 static lib
#include <iostream>
#include "../fastFM/fastfm.h"

using namespace std;
using namespace fastfm;

int main(int argc, char** argv) {
    cout << endl << "[ii] Little test" << endl;
    return 0;
}
@ibayer
ibayer / numba_dot
Created November 3, 2013 18:18
Trying to call cblas without Python function call overhead
{
"metadata": {
"name": ""
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"""
Benchmarks of enet_coordinate_descent vs. enet_coordinate_descent
using the true solution as warm-start
First, we fix a training set and increase the number of
samples. Then we plot the computation time as function of
the number of samples.
In the second benchmark, we increase the number of dimensions of the
training set. Then we plot the computation time as function of
"""
Benchmarks of enet_coordinate_descent vs. enet_coordinate_descent
using the true solution as warm-start
First, we fix a training set and increase the number of
samples. Then we plot the computation time as function of
the number of samples.
In the second benchmark, we increase the number of dimensions of the
training set. Then we plot the computation time as function of
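The script itself is below the fold; a minimal sketch of the warm-start half of the comparison, using scikit-learn's public ElasticNet (with the modern l1_ratio name for what these gists call rho) instead of the private enet_coordinate_descent routine:

import time
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X = rng.randn(2000, 500)
y = X @ rng.randn(500)  # noiseless, so the fit recovers the true solution

# cold start: coordinate descent begins from all-zero coefficients
cold = ElasticNet(alpha=0.1, l1_ratio=0.5)
t0 = time.perf_counter()
cold.fit(X, y)
print("cold start:", time.perf_counter() - t0)

# warm start: reuse the solution as the starting point and refit
warm = ElasticNet(alpha=0.1, l1_ratio=0.5, warm_start=True)
warm.coef_ = cold.coef_.copy()
t0 = time.perf_counter()
warm.fit(X, y)
print("warm start:", time.perf_counter() - t0)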
@ibayer
ibayer / bench_enet_refactoring.py
Created August 3, 2012 14:07
bench enet refactoring
"""
Benchmarks of refactored against current enet implementation
First, we fix a training set and increase the number of
samples. Then we plot the computation time as function of
the number of samples.
In the second benchmark, we increase the number of dimensions of the
training set. Then we plot the computation time as function of
the number of dimensions.
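The plotting code is truncated above; a hedged sketch of the first benchmark loop the docstring describes, timing one implementation (the refactored variant would be timed the same way and drawn as a second curve):

import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import ElasticNet

def time_fit(n_samples, n_features=200):
    rng = np.random.RandomState(0)
    X = rng.randn(n_samples, n_features)
    y = X @ rng.randn(n_features)
    t0 = time.perf_counter()
    ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
    return time.perf_counter() - t0

sizes = [250, 500, 1000, 2000, 4000]
plt.plot(sizes, [time_fit(n) for n in sizes], label="current")
plt.xlabel("number of samples")
plt.ylabel("time (s)")
plt.legend()
plt.show()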
@ibayer
ibayer / strong_rule_enet.py
Created July 27, 2012 13:50 — forked from agramfort/gist:3181189
strong rules lasso and enet
# -*- coding: utf-8 -*-
"""
Generalized linear models via coordinate descent
Author: Fabian Pedregosa <fabian@fseoane.net>
"""
import numpy as np
from scipy import linalg
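Only the imports and header survive the cut; for orientation, a hedged sketch of the coordinate descent update such a solver performs (textbook elastic-net soft-thresholding, not a verbatim excerpt of this gist):

import numpy as np

def soft_threshold(z, threshold):
    return np.sign(z) * max(abs(z) - threshold, 0.0)

def cd_step(w, j, X, residual, l1_reg, l2_reg, sq_col_norms):
    # one update of w[j]; residual = y - X @ w is kept in sync throughout
    residual += X[:, j] * w[j]    # remove feature j's contribution
    z = X[:, j] @ residual        # correlation with partial residual
    w[j] = soft_threshold(z, l1_reg) / (sq_col_norms[j] + l2_reg)
    residual -= X[:, j] * w[j]    # restore with the new coefficient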
@ibayer
ibayer / strong_rule_enet.py
Created July 25, 2012 20:41
strong rule for enet
import numpy as np
from scipy import linalg

MAX_ITER = 100

# cdef double l1_reg = alpha * rho * n_samples
# cdef double l2_reg = alpha * (1.0 - rho) * n_samples

def enet_coordinate_descent(X, y, alpha, rho, warm_start=None, max_iter=MAX_ITER):
    n_samples = X.shape[0]
    l1_reg = alpha * rho * n_samples
    l2_reg = alpha * (1.0 - rho) * n_samples
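The screening logic itself is below the truncation point; a hedged sketch of the sequential strong rule for the elastic net (Tibshirani et al., 2012) that the title refers to, where l1_reg_prev is the l1 penalty of the previously solved problem on the path:

import numpy as np

def strong_rule_candidates(X, residual, l1_reg, l1_reg_prev):
    # keep feature j only if |X[:, j].T @ residual| >= 2 * l1_reg - l1_reg_prev;
    # all other coefficients are predicted to stay at zero and are skipped
    corr = np.abs(X.T @ residual)
    return np.where(corr >= 2.0 * l1_reg - l1_reg_prev)[0]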
@ibayer
ibayer / convert_R_sparse_data_to_mm.R
Created July 9, 2012 21:13
Testing how sparse datasets that are only available as an RData file can be converted to the mldata.org HDF5 format. This does not work: the HDF5 layout specified on mldata.org is not recognized by their own parser.
library(Matrix)

load('InternetAd.RData')

# write the sparse design matrix in MatrixMarket format
file = file.path(getwd(), 'InternetAd.mtx')
writeMM(InternetAd$x, file=file)

# write the target vector, one value per line
file = file.path(getwd(), 'InternetAd.target')
write(InternetAd$y, file=file, ncolumns=1)
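A hedged Python sketch of the failing second half of this conversion: reading the two files the R script writes and packing them into HDF5 (the dataset names here are generic h5py choices, not the exact mldata.org layout that their parser was rejecting):

import h5py
import numpy as np
from scipy.io import mmread

X = mmread('InternetAd.mtx').tocsc()   # sparse matrix written by writeMM
y = np.loadtxt('InternetAd.target')    # one target value per line

with h5py.File('InternetAd.h5', 'w') as f:
    f['data/indices'] = X.indices      # CSC index arrays
    f['data/indptr'] = X.indptr
    f['data/values'] = X.data
    f['label'] = y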