Fogus (fogus)
💭 attempting to learn how to better learn
import re

def markdown_to_bbcode(s):
    links = {}
    codes = []
    def gather_link(m):
        links[m.group(1)] = m.group(2)
        return ""
    def replace_link(m):
        return "[url=%s]%s[/url]" % (links[m.group(2) or m.group(1)], m.group(1))
    def gather_code(m):
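The gist is cut off above, but the gather/replace idiom it is built on - one `re.sub` pass that collects reference-style link definitions into a dict, a second that expands uses of them - can be sketched standalone (the regexes and function names here are illustrative, not the original's):

```python
import re

def expand_reference_links(s):
    links = {}
    # First pass: collect definitions like "[name]: http://example.com"
    # and strip them from the text.
    def gather(m):
        links[m.group(1)] = m.group(2)
        return ""
    s = re.sub(r"(?m)^\[([^\]]+)\]:\s*(\S+)\s*$", gather, s)
    # Second pass: expand "[text][name]" (or bare "[name]") from the table.
    def expand(m):
        return "[url=%s]%s[/url]" % (links[m.group(2) or m.group(1)], m.group(1))
    return re.sub(r"\[([^\]]+)\](?:\[([^\]]*)\])?", expand, s)
```

Because `re.sub` accepts a function as the replacement, the first pass can build the lookup table as a side effect while deleting the definition lines in the same traversal.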
;; Still haven't found a brief + approachable overview of Clojure 1.7's new Transducers
;; in the particular way I would have preferred myself - so here goes:
;;; Transducers recap
;; * (fn reducer-fn [] [accumulation] [accumulation next-input]) -> val [1].
;; * (fn transducer-fn [reducer-fn]) -> new-reducer-fn.
;; * So a transducer-fn is a reducer-fn middleware[2], and composes like it.
;; * All (numerous!) benefits[4] fall out of this simple +
;; not-particularly-impressive-looking[3] definition.
;;
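The recap above maps directly onto plain functions in any language; here is a minimal Python sketch (illustrative names, and dropping the 0- and 1-arity completion cases for brevity) of a reducer-fn and of transducers as reducer middleware that compose:

```python
from functools import reduce

# A reducer-fn: (accumulation, next-input) -> new accumulation.
def conj(acc, x):
    return acc + [x]

# A transducer-fn: takes a reducer-fn, returns a new reducer-fn.
# It knows nothing about the source collection or the accumulator type.
def mapping(f):
    def transducer(rf):
        def new_rf(acc, x):
            return rf(acc, f(x))
        return new_rf
    return transducer

def filtering(pred):
    def transducer(rf):
        def new_rf(acc, x):
            return rf(acc, x) if pred(x) else acc
        return new_rf
    return transducer

# Since a transducer is just reducer-fn -> reducer-fn, transducers
# compose like ordinary middleware.
def compose(*fns):
    def composed(rf):
        for fn in reversed(fns):
            rf = fn(rf)
        return rf
    return composed

xform = compose(mapping(lambda x: x * 10), filtering(lambda x: x > 20))
result = reduce(xform(conj), range(5), [])  # [30, 40]
```

As with Clojure's `comp` applied to transducers, the leftmost transducer is the first transformation each input passes through.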
(comment ; Fun with transducers, v2
;;;; Definitions
;; Looking at the `reduce` docstring, we can define a 'reducing-fn' as[1]:
(fn reducing-fn ([]) ([accumulation next-input])) -> new-accumulation
;; We choose to define a 'transducing-fn' as:
fogus / chunked.clj
Last active August 29, 2015 14:06 - forked from cgrand/chunked.clj
; There are some obvious micro-optimizations; I left them out for clarity.
; The relative ordering of reads and writes with the volatile and the plain array should be thread-safe (if not, point it out).
; @wagjo asked: "Have you found use for such concept? Must be pretty slow compared to unchunked one"
; The idea came out of a discussion on transducers, so it's not used for real, yet.
; Once you optimize it (remove the boxing induced by the volatile, switch to unchecked math) there should not be much
; of an overhead.
; When you have one big composed reducing fn (e.g. several mapping stages), each item is going to be processed by
; each stage, and each stage may evict from the data cache stuff used by the previous stages. So you get cache
; misses for each item.
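The cache argument in the comments above can be illustrated with a toy Python sketch (illustrative only, not cgrand's actual implementation): per-item processing runs every stage on each item in turn, while chunked processing runs one stage across a whole chunk before moving to the next, keeping each stage's working set hot while it runs.

```python
# Per-item: stage1(item), stage2(item), ... for each item in turn, so each
# stage's working set can evict the previous stage's cache lines per item.
def reduce_per_item(stages, items):
    out = []
    for x in items:
        for f in stages:
            x = f(x)
        out.append(x)
    return out

# Chunked: apply one stage to a whole chunk before the next stage runs,
# amortizing cache misses across the chunk. Same result, different order
# of evaluation.
def reduce_chunked(stages, items, chunk_size=32):
    out = []
    for i in range(0, len(items), chunk_size):
        chunk = items[i:i + chunk_size]
        for f in stages:
            chunk = [f(x) for x in chunk]
        out.extend(chunk)
    return out
```

Both functions compute the same output; only the interleaving of stages differs, which is exactly the property the chunked transducer above exploits.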

Found way fewer tech books with bibliographies than I expected - and fewer still among 2009-present books. The select few w/ nice beefy bibliographies:

  • The Joy of Clojure (Fogus/Houser)
  • The Linux Programming Interface (Kerrisk)
  • Systems Performance (Gregg)
  • The Art of Multiprocessor Programming (Herlihy/Shavit)
  • Machine Learning (Flach)

I counted 8/18 tech books on my shelf since 2009 with bibliographies - way fewer than I'd have guessed. And that's even w/ some selection bias towards the type of book that would have a bibliography, I suspect.

#lang at-exp racket
@literal-algol{
    begin
        printsln (`hello world')
    end
}

#lang algol60
begin
    integer procedure SIGMA(x, i, n);
        value n;
        integer x, i, n;
    begin
        integer sum;
        sum := 0;
        for i := 1 step 1 until n do
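SIGMA is the classic ALGOL summation procedure: x is passed by name and re-evaluated for each value of i (Jensen's device), so one procedure can sum an arbitrary expression. The gist is truncated here; in Python, where we would pass a function of the index instead of a by-name expression, the idea looks like this (a sketch, not part of the gist):

```python
def sigma(x, n):
    # ALGOL's SIGMA receives the expression x by name and re-evaluates it
    # for each i (Jensen's device); in Python we pass a function of i.
    return sum(x(i) for i in range(1, n + 1))

result = sigma(lambda i: i * 10, 7)  # 10 + 20 + ... + 70 = 280
```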
if (typeof require != "undefined") {
    var lodash = require("lodash"),
        _ = require("underscore");
}

var MapTransformer = function(f, nextTransformer) {
    this.f = f;
    this.nextTransformer = nextTransformer;
};
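MapTransformer is one link in a transformer chain: its step applies f to each item and hands the result to the next transformer. A Python sketch of the same object protocol (illustrative names; the terminal AppendTransformer is my own addition, not from the gist):

```python
class MapTransformer:
    def __init__(self, f, next_transformer):
        self.f = f
        self.next_transformer = next_transformer

    def step(self, result, item):
        # Apply f, then delegate to the next transformer in the chain.
        return self.next_transformer.step(result, self.f(item))

class AppendTransformer:
    # Terminal transformer: actually builds the output collection.
    def step(self, result, item):
        result.append(item)
        return result

chain = MapTransformer(lambda x: x + 1, AppendTransformer())
out = []
for item in [1, 2, 3]:
    out = chain.step(out, item)
# out is now [2, 3, 4]
```

This is the object-oriented spelling of the same reducer-middleware idea as the Clojure recap above: each transformer wraps the step of the one after it.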
<title>CHIP-8 Emulator</title>
<script>
var chip8 = null;
var context = null;
var videocard = null;

function printoutGraphicsBuffer()
{
    for (var i = 0; i < chip8.gfx.length; i++)
    {
<html>
<head>
    <title>Versus</title>
    <script src="https://code.jquery.com/jquery-1.11.1.min.js"></script>
    <script src="me-rge.js"></script>
    <script src="swal.js"></script>
    <link rel="stylesheet" type="text/css" href="swal.css">
</head>
<body>