
@mfcabrera
mfcabrera / stream_sample.py
Last active August 29, 2015 14:27 — forked from anonymous/stream_sample.py
Stream sample
"""
Programming task
================
Implement the method iter_sample below to make the unit test pass. iter_sample
is supposed to peek at the first n elements of an iterator and determine the
minimum and maximum values (using their comparison operators) found in that
sample. To make it more interesting, the method must also return an iterator
that yields exactly the same elements the original one would have yielded,
i.e. the first n elements can't go missing.
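One possible implementation, sketched with itertools: islice takes the sample, chain splices it back in front of the remaining elements. The (min, max, iterator) return shape is an assumption — the preview cuts off before the unit test that would pin it down.

```python
import itertools

def iter_sample(iterator, n):
    # Materialize the first n elements so we can inspect them.
    sample = list(itertools.islice(iterator, n))
    # Splice the consumed sample back in front of the remaining elements,
    # so the returned iterator yields exactly what the original would have.
    restored = itertools.chain(sample, iterator)
    return min(sample), max(sample), restored
```

For example, sampling the first 3 elements of `iter(range(10))` gives minimum 0 and maximum 2, and the returned iterator still yields all ten values.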
@mfcabrera
mfcabrera / min-char-rnn.py
Last active August 29, 2015 14:26 — forked from karpathy/min-char-rnn.py
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
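The preview ends here; the full script next builds character-to-index lookup tables, along these lines (stand-in `data` string so the snippet runs on its own, instead of reading input.txt):

```python
data = "hello world"  # stand-in for the contents of input.txt
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
# Map each character to an integer index and back, for one-hot encoding.
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}
```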
In [4]: np.arange(10).astype(object).mean(axis=0)
Out[4]: 4.5
In [5]: np.__version__
Out[5]: '1.8.1'
# Generic Aliases
alias ll='ls -latr' # List all files in long format, sorted by modification time
alias ..='cd ..' # Go up one directory
alias ...='cd ../..' # Go up two directories
alias ....='cd ../../..' # Go up three directories
alias -- -='cd -' # Go back
alias c='clear' # Clear Screen
alias k='clear' # Clear Screen
alias cls='clear' # Clear Screen
alias _="sudo" # Execute with sudo
COPY (
  SELECT r.cluster_id, r.token_array
  FROM hotel4x.review AS r
  WHERE r.lang = 'en'
    AND r.cluster_id IN (
      SELECT h.cluster_id
      FROM hotel4x.hotel AS h
      INNER JOIN (
        SELECT s.cluster_id, COUNT(*) AS source_count
        FROM hotel4x.source AS s
def delimited(filename, delimiter=' ', bufsize=4096):
    '''Yield words from a file, split on a delimiter (whitespace by default).'''
    buf = ''
    with open(filename) as file:
        while True:
            newbuf = file.read(bufsize)
            if not newbuf:
                if buf:
                    yield buf  # flush the final partial word
                return
            buf += newbuf
            words = buf.split(delimiter)
            buf = words.pop()  # last chunk may be a partial word; keep it
            for word in words:
                yield word
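A quick self-contained check of the buffered word-reader pattern (the function is repeated here so the snippet runs on its own): a tiny buffer size forces words to straddle read boundaries, which is exactly the case the internal buffer handles.

```python
import os
import tempfile

def delimited(filename, delimiter=' ', bufsize=4096):
    '''Yield words from a file, split on a delimiter (whitespace by default).'''
    buf = ''
    with open(filename) as f:
        while True:
            newbuf = f.read(bufsize)
            if not newbuf:
                if buf:
                    yield buf  # flush the final partial word
                return
            buf += newbuf
            words = buf.split(delimiter)
            buf = words.pop()  # last chunk may be a partial word; keep it
            for word in words:
                yield word

# Read a small file back with a 4-byte buffer to exercise the
# partial-word handling across buffer boundaries.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write('alpha beta gamma')
    path = tmp.name
words = list(delimited(path, bufsize=4))
os.remove(path)
```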
#!/usr/bin/perl
# Program to filter Wikipedia XML dumps to "clean" text consisting only of lowercase
# letters (a-z, converted from A-Z), and spaces (never consecutive).
# All other characters are converted to spaces. Only text which normally appears
# in the web browser is displayed. Tables are removed. Image captions are
# preserved. Links are converted to normal text. Digits are spelled out.
# Adapted for the German language, based on the script written by Matt Mahoney:
# http://mattmahoney.net/dc/textdata.html
<ref...> ... </ref>
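As a rough Python equivalent of the cleaning rules the header comment describes (not Mahoney's actual script, and without the German-specific umlaut handling or the wiki-markup stripping):

```python
import re

# Digit names, per the "digits are spelled out" rule.
DIGITS = {'0': 'zero', '1': 'one', '2': 'two', '3': 'three', '4': 'four',
          '5': 'five', '6': 'six', '7': 'seven', '8': 'eight', '9': 'nine'}

def clean_text(text):
    # Spell out digits, padded with spaces so they become separate words.
    text = re.sub(r'[0-9]', lambda m: ' ' + DIGITS[m.group(0)] + ' ', text)
    # Lowercase, then turn everything outside a-z into a space.
    text = re.sub(r'[^a-z]', ' ', text.lower())
    # Collapse runs of spaces (never consecutive spaces in the output).
    return ' '.join(text.split())
```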
@mfcabrera
mfcabrera / thingsexporttodos.rb
Last active December 13, 2015 17:38
A script using MacRuby to export all the todos from Things projects into text files (my goal is to "copy and paste" them into 2Do).
# make sure you use MacRuby
framework "ScriptingBridge" # let's use the ScriptingBridge framework
# This is awful - maybe a wrapper would be nice.
# You can get this bundle identifier using mdls:
# mdls -name kMDItemCFBundleIdentifier /Applications/Things.app
things = SBApplication.applicationWithBundleIdentifier("com.culturedcode.things")
@mfcabrera
mfcabrera / tumlecturesgen.rb
Created October 9, 2012 13:21
Generation of Courses out of MyinTUM
require 'rubygems'
require 'open-uri'
require 'pp'
require 'nokogiri'
class Lecture
attr_accessor :module_id, :title, :ects, :cycle, :language, :area, :name
attr_reader :urlimytum
@@urlimytum_root = "https://drehscheibe.in.tum.de/myintum/kurs_verwaltung/"