Santiago Castro bryant1410

public String partToString(javax.servlet.http.Part part) throws java.io.IOException {
    // Read the whole Part in one shot: "\\A" (start of input) never matches
    // again, so the Scanner returns the entire stream as a single token.
    try (java.util.Scanner scanner = new java.util.Scanner(part.getInputStream())) {
        return scanner.useDelimiter("\\A").hasNext() ? scanner.next() : "";
    }
}
@bryant1410
bryant1410 / onehot_pandas_scikit.py
Last active August 29, 2015 14:07 — forked from kljensen/onehot_pandas_scikit.py
This function does one-hot encoding directly on a pandas DataFrame rather than on a NumPy feature matrix. One advantage is that you know exactly which new columns were created, so they are easy to identify.
# -*- coding: utf-8 -*-
""" Small script that shows hot to do one hot encoding
of categorical columns in a pandas DataFrame.
See:
http://scikit-learn.org/dev/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder
http://scikit-learn.org/dev/modules/generated/sklearn.feature_extraction.DictVectorizer.html
"""
import pandas
import random
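The script is truncated above; a minimal self-contained sketch of the same idea — one-hot encoding DataFrame columns while keeping track of which columns were created — using `pandas.get_dummies` (the column and data names below are illustrative, not from the original gist):

```python
import pandas as pd

# Toy DataFrame with one categorical column (names are illustrative).
df = pd.DataFrame({"color": ["red", "blue", "red"], "size": [1, 2, 3]})

# get_dummies expands each categorical column into indicator columns named
# "<column>_<value>", so the newly created columns are easy to identify.
encoded = pd.get_dummies(df, columns=["color"])
new_cols = [c for c in encoded.columns if c not in df.columns]
print(new_cols)  # ['color_blue', 'color_red']
```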
@noniq
noniq / gist:4147547
Created November 26, 2012 10:25
Prolog nonogram solver
% Succeeds if `Lines` represents the nonogram specified by `ColumnSpecs` and
% `LineSpecs`. For example:
% nonogram
%         1
%       2 1 2
%     +------
%   1 | . # .   ColumnSpecs = [[2], [1,1], [2]]
% 1 1 | # . #   LineSpecs   = [[1], [1,1], [3]]
%   3 | # # #   Lines       = [[0,1,0], [1,0,1], [1,1,1]]
nonogram(ColumnSpecs, LineSpecs, Lines) :-
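The Prolog source is cut off here, but the spec-to-grid relationship in the comment above can be sanity-checked with a short Python sketch that verifies a filled grid against its row and column specs (the run-lengths of consecutive 1s):

```python
def runs(line):
    """Lengths of the maximal runs of 1s in a row or column."""
    out, count = [], 0
    for cell in line:
        if cell:
            count += 1
        elif count:
            out.append(count)
            count = 0
    if count:
        out.append(count)
    return out

def check_nonogram(column_specs, line_specs, lines):
    """True if `lines` satisfies both the row and the column run-specs."""
    columns = list(zip(*lines))
    return (all(runs(l) == s for l, s in zip(lines, line_specs)) and
            all(runs(c) == s for c, s in zip(columns, column_specs)))

# The 3x3 example from the comment above:
print(check_nonogram([[2], [1, 1], [2]],
                     [[1], [1, 1], [3]],
                     [[0, 1, 0], [1, 0, 1], [1, 1, 1]]))  # True
```

This only checks a candidate solution; the Prolog predicate above does the harder job of searching for one.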
@cedricvidal
cedricvidal / ObservablesCartesianProduct.java
Last active September 5, 2016 14:51
RxJava Observables Cartesian Product
import rx.Observable;
import rx.functions.Func1;
import rx.functions.Func2;
import static java.util.Arrays.asList;
import static java.util.Collections.singleton;
/**
* Computes the cartesian product of Observables.
*
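The RxJava source is truncated above; the underlying idea — every pairing of elements from several sequences — is the ordinary Cartesian product, sketched here in plain Python with `itertools.product` rather than Observables:

```python
from itertools import product

# Two "streams", here plain lists for simplicity.
letters = ["a", "b"]
numbers = [1, 2]

# product yields tuples in nested-loop order: the first sequence varies slowest.
pairs = list(product(letters, numbers))
print(pairs)  # [('a', 1), ('a', 2), ('b', 1), ('b', 2)]
```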
@robince
robince / make-arXiv-package.sh
Last active October 25, 2016 20:25 — forked from holgerdell/make-arXiv-package.sh
Script to prepare arXiv package of a document that depends on texlive2015's version of biblatex (using pdflatex)
#!/bin/bash
#
# This script is useful if:
# - you have a manuscript that you want to upload to the arXiv,
# - you are using biblatex, and
# - you are using texlive2015 while arXiv is still on texlive2011
#
# Put this file in a directory containing the manuscript you want to
# upload to arXiv.org, and adapt the paths below.
@bryant1410
bryant1410 / make-arXiv-package.sh
Last active January 24, 2017 01:31 — forked from robince/make-arXiv-package.sh
Script to prepare arXiv package of a document that depends on a recent texlive version of biblatex (using pdflatex)
#!/usr/bin/env bash
# This script is useful if:
# - you have a manuscript that you want to upload to the arXiv,
# - you are using biblatex, and
# - you are using a recent version of texlive while arXiv is still on texlive2011
#
# Put this file in a directory containing the manuscript you want to
# upload to arXiv.org, and adapt the paths below.
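The packaging step the script performs can be sketched in Python as well: bundle the .tex sources together with the locally precompiled .bbl, so arXiv's older biblatex never has to run (the file names here are illustrative, not taken from the script):

```python
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()

# Stand-ins for a compiled manuscript: the .tex source plus the .bbl that
# pdflatex/biber produced locally (contents are dummies for this sketch).
for name in ("paper.tex", "paper.bbl"):
    with open(os.path.join(workdir, name), "w") as f:
        f.write("% dummy\n")

# arXiv accepts a tarball of sources; shipping the precompiled .bbl means
# its texlive never needs to run biblatex/biber itself.
archive = os.path.join(workdir, "arxiv-upload.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    for name in ("paper.tex", "paper.bbl"):
        tar.add(os.path.join(workdir, name), arcname=name)

with tarfile.open(archive) as tar:
    print(sorted(tar.getnames()))  # ['paper.bbl', 'paper.tex']
```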
diff --git a/app.py b/app.py
index a2b76f9..cd4c055 100755
--- a/app.py
+++ b/app.py
@@ -198,7 +198,8 @@ class SplashScreen(wx.SplashScreen):
         self.control = Controller(self.main)
         self.fc = wx.FutureCall(1, self.ShowMain)
-        wx.FutureCall(1, parse_comand_line)
+        options, args = parse_comand_line()
# Your init script
#
# Atom will evaluate this file each time a new window is opened. It is run
# after packages are loaded/activated and after the previous editor state
# has been restored.
#
# An example hack to log to the console when each text editor is saved.
#
# atom.workspace.observeTextEditors (editor) ->
# editor.onDidSave ->
@Turbo87
Turbo87 / gist:147153a7ece904ebb0e4
Created November 17, 2014 09:34
Diff ODT files

Install odt2txt

Linux: sudo apt-get install odt2txt

OS X: git clone https://github.com/dstosberg/odt2txt.git && cd odt2txt && make

Register a global diff helper
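The registration step is truncated here; what it amounts to is a textconv entry in the global git config. A hedged sketch (the driver name "odt" and the exact layout are assumptions, since the original commands are not shown), using Python's configparser only to illustrate the resulting ~/.gitconfig fragment:

```python
import configparser

# The fragment a "git config --global" style registration would leave behind:
# a diff driver that converts .odt files to text with odt2txt before diffing.
# (Driver name "odt" is an assumption for this sketch.)
gitconfig = '''
[diff "odt"]
textconv = odt2txt
'''

parsed = configparser.ConfigParser()
parsed.read_string(gitconfig)
print(parsed['diff "odt"']['textconv'])  # odt2txt
```

A real setup would also map the file extension to the driver (typically a `*.odt diff=odt` line in a .gitattributes file).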

Recently, GitHub introduced a change in how ATX headers are parsed in Markdown files.

##Wrong

## Correct
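The rule behind the change can be mirrored with a tiny check (a sketch of the CommonMark condition, not GitHub's actual parser):

```python
import re

def is_atx_heading(line):
    # CommonMark: an ATX heading is 1-6 '#' characters followed by a space
    # (or the end of the line); "##Wrong" therefore no longer qualifies.
    return re.match(r"^#{1,6}(\s|$)", line) is not None

print(is_atx_heading("## Correct"))  # True
print(is_atx_heading("##Wrong"))     # False
```

(The full spec also allows up to three leading spaces and trailing closing hashes; this sketch checks only the space-after-hashes rule the change is about.)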

While this change follows the spec, it breaks many existing repositories. I took the README dataset which we created at source{d} and ran a simple