Michael Bernstein (mrb)
Helping companies market and sell more software
;; Take a list of (possibly repeated) elements, and return all those
;; that appear more than once.
(include "~/repos/simple-miniKanren/mkdefs.scm")

;; Regular Scheme version for comparison
(define morethanonce
  (lambda (ls)
    (cond
      [(null? ls) '()]
      ;; (completion sketch; the gist is truncated at this point)
      [(member (car ls) (cdr ls))
       (let ([rest (morethanonce (cdr ls))])
         (if (member (car ls) rest) rest (cons (car ls) rest)))]
      [else (morethanonce (cdr ls))])))
;; e.g. (morethanonce '(a b a c b)) => (a b)
@pbailis
pbailis / list.md
Last active April 15, 2018 08:54
Quick and dirty (incomplete) list of interesting, mostly recent data warehousing/"big data" papers

A friend asked me for a few pointers to interesting, mostly recent papers on data warehousing and "big data" database systems, with an eye towards real-world deployments. I figured I'd share the list. It's biased and rather incomplete but maybe of interest to someone. While many are obvious choices (I've omitted several, like MapReduce), I think there are a few underappreciated gems.

### Dataflow Engines

Dryad--general-purpose distributed parallel dataflow engine
http://research.microsoft.com/en-us/projects/dryad/eurosys07.pdf

Spark--in memory dataflow
http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf

I've been reading a bit about concatenative languages recently, and figured I'd compose a short reading list for others interested.

I won't explain what a concatenative language is, but I do want to quickly say a couple of the reasons I find them interesting.

Refactoring

I already like to write very small functions, and concatenative languages make it easy to factor a function into smaller ones. Here's an example, in a made up concatenative language:
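The original example did not survive in this copy, so here is a reconstructed sketch in a Forth-like concatenative pseudocode (the word names and definitions are my own invention, not from the original post). Because a program is just a sequence of words, any contiguous run of words can be cut out and given a name:

```
: f        2 * 1 + ;      \ f(x) = 2x + 1
\ factoring is literally cutting out a run of words and naming it:
: double   2 * ;
: succ     1 + ;
: f        double succ ;  \ same function, now factored
```

No argument names or parentheses need to change when you extract a definition, which is what makes refactoring so mechanical in this style.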

@swannodette
swannodette / notes.clj
Last active July 5, 2022 13:32
Generates lists of size M containing natural numbers which add up to N
;; list comprehensions are often used as a poor man's Prolog
;; consider the following, it has only one solution
;; [1 1 1 1 1 1 1 1 1 1] yet we actually consider 10^10 possibilities
(for [a (range 1 11)
      b (range 1 11)
      c (range 1 11)
      d (range 1 11)
      e (range 1 11)
      f (range 1 11)
      g (range 1 11)
      h (range 1 11)
      i (range 1 11)
      j (range 1 11)
      :when (= 10 (+ a b c d e f g h i j))]
  [a b c d e f g h i j])
(use '[clojure.core.logic])
(require '[clojure.core.logic.fd :as fd])

(defn simple []
  (run* [x y]
    (fd/in x y (fd/interval 0 9))
    (fd/eq
      (= (+ x y) 9)
      (= (+ (* 4 x) (* 2 y)) 24))))
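For comparison, the same two-equation system can be brute-forced with a plain comprehension in Python (my sketch, not from the gist). This is the "poor man's Prolog" approach from the earlier snippet: it enumerates every candidate pair, whereas core.logic's finite-domain solver prunes the search space instead.

```python
from itertools import product

# Enumerate all (x, y) in 0..9 and keep those satisfying both
# x + y = 9 and 4x + 2y = 24. A constraint solver would prune
# this search rather than check all 100 candidates.
solutions = [(x, y) for x, y in product(range(10), repeat=2)
             if x + y == 9 and 4 * x + 2 * y == 24]
# solutions == [(3, 6)]
```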
I'd like some clarification about Ruby 2.0:
* Is it intended to be a development release?
* If it is not a development release, why are experimental features included?
* Whether or not it is a development release, what process will be used to remove
features after release? (For example, we still have $' and friends years after
Matz repeatedly said he would not have added them in retrospect.)
* Why are APIs like IO.readlines not being fixed to use keyword arguments?
* Why are API changes not being added to RubySpec?
* Is Ruby 2.0 expected to pass RubySpec before release? (This does not allow for
@funny-falcon
funny-falcon / 00-description.md
Last active January 11, 2017 14:33
Performance patch for ruby-1.9.3-p327, detailed

These are the parts of the falcon patch for ruby-1.9.3-p327

  • 01-backport-speedup-require.diff - backport of Greg Price's patch for speedup require.

    For a long time my patch was famous for its require speedup. ruby-core prefers Greg Price's slightly simpler patches, and I do not wish to compete with ruby-core, so I simply backport the accepted changes.

  • 02-st_opt.diff - speedup of Hash

This patch contains:

@headius
headius / 1.txt
Created September 20, 2012 05:28
system ~/projects/jruby $ jruby -J-Xmx1024m -J-Xmx1024m -Isrc gc_stress.rb
ID Type Timestamp(sec) Before(kB) After(kB) Delta(kB) Heap(kB) GC Time(ms)
2 PS Scavenge 1.5624 48541 31837 16703 108288 7.1670000000
3 PS Scavenge 1.7100 66597 32293 34304 108672 6.8490000000
4 PS Scavenge 1.7796 66797 32429 34368 143104 6.0990000000
5 PS Scavenge 1.8868 101293 32445 68847 143104 6.2590000000
6 PS Scavenge 1.9809 101309 32509 68800 215936 5.9520000000
7 PS Scavenge 2.2277 170258 32634 137623 216256 6.6460000000
8 PS Scavenge 2.4213 170363
@swannodette
swannodette / memo-dyn.txt
Created August 24, 2012 17:55 — forked from dvanhorn/memo-dyn.txt
Memoization vs dynamic programming
Memoization is fundamentally a top-down computation and dynamic
programming is fundamentally bottom-up. In memoization, we observe
that a computational *tree* can actually be represented as a
computational *DAG* (the single most underrated data structure in
computer science); we then use a black-box to turn the tree into a
DAG. But it allows the top-down description of the problem to remain
unchanged.
In dynamic programming, we make the same observation, but construct
the DAG from the bottom-up. That means we have to rewrite the
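The contrast above can be sketched in Python (my example, not part of the original gist), using Fibonacci as the computational DAG:

```python
from functools import lru_cache

# Top-down (memoization): the tree-shaped recursion is collapsed
# into a DAG by a memo table; the recursive description of the
# problem stays unchanged.
@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (dynamic programming): the same DAG, but built from
# the leaves up, which forces us to rewrite the computation as an
# explicit iteration order.
def fib_dp(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_memo(30) == fib_dp(30) == 832040
```

Both compute the same DAG of subproblems; only the direction of construction differs, which is exactly the distinction the passage draws.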

I've been playing more with this Erlang factoring technique. As an exercise, I've been trying to force myself to adopt the method by writing functions that are three lines long or fewer (function clauses, actually, so multiple pattern-matched clauses are OK).

Death to receive expressions

One place I noticed was causing myself