Szilard Pafka (szilard) - GitHub Gists
szilard / datable_20Gx3GB_join.R
Last active November 29, 2016 01:06
data.table 20GB x 3GB join
library(data.table)
n <- 2e9
m <- 1e9
system.time( d <- data.table(x = sample(m, n, replace=TRUE), y = runif(n)) )
# user system elapsed
# 103.843 8.255 112.242
system.time( dm <- data.table(x = sample(m)) )
# user system elapsed
# 47.298 1.860 49.288
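The preview stops after building the two tables; the join itself is not shown. At toy scale, the same keyed join can be sketched in Python with pandas (sizes and names here are illustrative, not from the gist, which uses n = 2e9 and m = 1e9):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(123)
n, m = 1000, 100  # toy scale; the gist uses n = 2e9, m = 1e9

# big table: keys drawn with replacement from 1..m, plus a value column
d = pd.DataFrame({"x": rng.integers(1, m + 1, size=n), "y": rng.random(n)})
# lookup table: every key 1..m exactly once, in random order
dm = pd.DataFrame({"x": rng.permutation(np.arange(1, m + 1))})

# keyed join, analogous to data.table's d[dm, on = "x"]
res = d.merge(dm, on="x", how="inner")
print(len(res) == n)  # True: every key in d has exactly one match in dm
```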
szilard / API_DL_FC_catdata--tools.R
Last active December 3, 2016 06:32
API deep learning fully connected with categorical data: h2o > R mxnet > py keras >>>>> tensorflow
#### h2o
library(h2o)
h2o.init(max_mem_size = "50g", nthreads = -1)
dx_train <- h2o.importFile("train-1m.csv")
dx_test <- h2o.importFile("test.csv")
Xnames <- names(dx_train)[which(names(dx_train)!="dep_delayed_15min")]
szilard / caret-slowdown-issue.R
Created May 15, 2017 18:37
caret slowdown issue
library(caret)
library(readr)
library(ROCR)
set.seed(123)
d <- read_csv("https://raw.githubusercontent.com/szilard/teach-data-science-UCLA-master-appl-stats/master/wk-06-ML/data/airline100K.csv")
szilard / h2o_scoring.R
Last active March 12, 2018 08:54
ML Scoring (REST API) - h2o.ai
## training a model
library(h2o)
h2o.init(nthreads = -1)
dx_train <- h2o.importFile("https://s3.amazonaws.com/benchm-ml--main/train-0.1m.csv")
md_rf <- h2o.randomForest(x = 1:(ncol(dx_train)-1), y = ncol(dx_train), training_frame = dx_train,
model_id = "h2o_RF",
ntrees = 100, max_depth = 10, nbins = 100)
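Only the training half of "ML Scoring (REST API)" survives in the preview. As an illustration of the scoring side, here is a minimal mock JSON-in/JSON-out endpoint using only Python's standard library; this is not h2o's actual REST API, and the `score()` rule is a placeholder standing in for a trained model:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def score(record):
    # placeholder rule standing in for a trained model's prediction
    return {"dep_delayed_15min": "Y" if record.get("DepTime", 0) > 1800 else "N"}

class ScoringHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        out = json.dumps(score(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):  # silence per-request logging
        pass

# serve on an ephemeral port in a background thread
server = HTTPServer(("127.0.0.1", 0), ScoringHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# client side: POST one record, get a JSON prediction back
req = Request(f"http://127.0.0.1:{server.server_port}/predict",
              data=json.dumps({"DepTime": 1930}).encode(),
              headers={"Content-Type": "application/json"})
resp = json.loads(urlopen(req).read())
print(resp)  # {'dep_delayed_15min': 'Y'}
server.shutdown()
```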
#include <stdio.h>
#include <stdlib.h>
#define N 100
#define B0 100
#define R 1000000
int main() {
int b[N], rec[N];
for (int i=0; i<N; i++) b[i] = B0;

#include <stdio.h>
#include <stdlib.h>
#define N 128
#define B0 100
#define R 1000000
#define M 1000
int cmpfunc (const void * a, const void * b)
{
szilard / dataset_sizes_pmlb.py
Last active August 20, 2017 04:25
Size distribution of datasets in the Penn Machine Learning Benchmarks
## https://github.com/EpistasisLab/penn-ml-benchmarks
## pip install pmlb
import numpy as np
from pmlb import fetch_data
from pmlb import dataset_names
x = np.zeros(len(dataset_names))
for i, dn in enumerate(dataset_names):
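The preview cuts off inside the loop; the likely intent is to record each dataset's row count (something like `x[i] = fetch_data(dn).shape[0]`) and then summarize the distribution. Since fetching every PMLB dataset needs network access, here is a sketch of just the summary step, with made-up sizes standing in for the fetched row counts:

```python
import numpy as np

# hypothetical row counts standing in for fetch_data(dn).shape[0]
sizes = np.array([32, 150, 500, 1000, 4898, 48842, 1000000])

# quartiles of the (heavily right-skewed) size distribution
q = np.percentile(sizes, [25, 50, 75])
print(q)  # 25th/50th/75th percentiles: 325, 1000, 26870
```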
szilard / dataset_size_openML.R
Created August 21, 2017 00:13
Dataset sizes in OpenML
# OpenML Benchmarking Suites and the OpenML100
# https://arxiv.org/abs/1708.03731
# https://www.openml.org/s/14/data
library(OpenML)
ids <- getOMLStudy('OpenML100')$data$data.id
dsall <- listOMLDataSets()
sum(dsall$data.id %in% ids) ## 96???
ds <- dsall[dsall$data.id %in% ids,]
szilard / simul_unbal_methods.R
Last active August 25, 2017 18:39
A little framework for experimenting with the impact of various methods for dealing with unbalanced classes for machine learning
## partial credit :) to @earino for the idea
library(lightgbm)
library(data.table)
library(ROCR)
d0_train <- fread("/var/data/bm-ml/train-10m.csv")
d0_test <- fread("/var/data/bm-ml/test.csv")
d0 <- rbind(d0_train, d0_test)
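The framework's preview ends right after loading the data. One of the standard methods it could compare is undersampling the majority class; a small self-contained sketch of that step (toy labels, not the gist's airline data):

```python
import numpy as np

rng = np.random.default_rng(42)

# toy unbalanced labels: roughly 5% positives
y = (rng.random(10000) < 0.05).astype(int)

pos = np.flatnonzero(y == 1)
neg = np.flatnonzero(y == 0)

# undersample the majority class down to a 1:1 ratio
neg_sampled = rng.choice(neg, size=len(pos), replace=False)
idx = np.concatenate([pos, neg_sampled])
print(y[idx].mean())  # 0.5 by construction
```

A model trained on `idx` then sees balanced classes; predicted probabilities need recalibration afterward if the original base rate matters.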
szilard / lightgbm_example.R
Created August 27, 2017 03:48
Minimal lightgbm example
library(data.table)
library(ROCR)
library(lightgbm)
set.seed(123)
d_train <- fread("/var/data/bm-ml/train-0.1m.csv")
d_test <- fread("/var/data/bm-ml/test.csv")
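The preview cuts off before training and evaluation; the gist loads ROCR, presumably to compute test-set AUC. AUC can also be computed directly from ranks; a minimal Python sketch (assumes no tied scores):

```python
import numpy as np

def auc(y_true, y_score):
    # rank-based AUC: probability that a random positive outranks
    # a random negative (ties in y_score are not handled here)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y = np.array([0, 0, 1, 1])
p = np.array([0.1, 0.4, 0.35, 0.8])
print(auc(y, p))  # 0.75
```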