Ariel Barmat (abarmat)
@abarmat
abarmat / onename.txt
Created January 29, 2017 15:28
Onename verify ID
Verifying that "abarmat.id" is my Blockstack ID. https://onename.com/abarmat

Keybase proof

I hereby claim:

  • I am abarmat on github.
  • I am abarmat (https://keybase.io/abarmat) on keybase.
  • I have a public key whose fingerprint is 7A5E 39AF 8053 DD3F F514 8831 51DF 9451 E697 7209

To claim this, I am signing this object:

import pprint
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics, model_selection  # cross_validation was removed in scikit-learn 0.20
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
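The imports above suggest a bag-of-words text-classification pipeline. A minimal sketch of how these pieces fit together, assuming a toy spam/ham task (the documents, labels, and the MultinomialNB classifier below are invented for illustration and are not necessarily what the original script did):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Toy corpus: 1 = spam, 0 = ham (made-up labels)
docs = ["cheap pills now", "meeting at noon", "win money fast", "lunch tomorrow?"]
labels = [1, 0, 1, 0]

clf = Pipeline([
    ("counts", CountVectorizer()),   # raw token counts
    ("tfidf", TfidfTransformer()),   # reweight by inverse document frequency
    ("nb", MultinomialNB()),         # simple baseline classifier
])
clf.fit(docs, labels)
pred = clf.predict(["free money pills"])
```

A `DictVectorizer`, as imported above, would play the same role for dict-shaped features instead of raw text.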
@abarmat
abarmat / tp2_aa_svm.R
Created June 27, 2016 14:58
DMUBA TP2 AA - SVM
library(e1071)
library(Amelia)
FILENAME <- 'tp2-work.csv'
# Fix the random seed for reproducibility
set.seed(100)
# Read the semicolon-separated data file
data <- read.csv(FILENAME, header=TRUE, sep=";")
@abarmat
abarmat / tp2_aa_bayes.R
Last active May 30, 2017 18:27
DMUBA TP2 AA - Naive Bayes
library(e1071)
library(SparseM)
library(tm)
FILENAME <- 'tp2-work.csv'
# Fix the random seed for reproducibility
set.seed(100)
# Read file
@abarmat
abarmat / j48_random_forest.R
Created June 20, 2016 21:09
DMUBA TP2 AA - Random Forest & J48
setwd("D:/Datamining")
# Clear JAVA_HOME to work around an RWeka/rJava startup error
if (Sys.getenv("JAVA_HOME") != "")
  Sys.setenv(JAVA_HOME = "")
# Required packages
if (!require(RWeka)) {
  install.packages('RWeka')
  require(RWeka)
}
@abarmat
abarmat / tp2_aa_xgboost_tree.R
Last active May 30, 2017 18:27
DMUBA TP2 AA - Gradient Boosting Tree
library(xgboost)
library(stringr)
preprocess <- function(items) {
  # Select the feature columns used for training
  attr_list <- c('anio', 'mes', 'tipoprop', 'lugar', 'sup_tot_m2',
                 'lat_lon', 'sup_cub_m2', 'piso', 'cant_amb', 'geoname_num', 'Clase')
  df <- items[attr_list]
@abarmat
abarmat / add_pub_year.py
Created May 13, 2016 01:42
Datamining UBA Add Publication Years Script
import hashlib
import logging
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from sklearn import preprocessing
from itertools import groupby
from operator import itemgetter
# Move 6000 randomly chosen files (skipping directories and s.sh) into ./cv/1/
for file in $(ls -p | sort -R | grep -v / | grep -v s.sh | tail -n 6000)
do
  mv "$file" ./cv/1/
done
const Web3 = require('web3')
const web3 = new Web3(new Web3.providers.HttpProvider('http://localhost:8545'))