I hereby claim:
- I am jldailey on github.
- I am jldailey (https://keybase.io/jldailey) on keybase.
- I have a public key whose fingerprint is B96B FB50 426A 1BF8 360A F1FF F0A6 EFCC 74BB 1E76
To claim this, I am signing this object:
import struct
import zlib

def manual_gzip(data, cache_file=None):
    _header = (b"\037\213\010\000"  # magic, method (deflate), flags
               b"\000\000\000\000"  # mtime
               b"\002\377")         # xfl, os
    co = zlib.compressobj()
    # drop the 2-byte zlib header and the 4-byte adler32 trailer
    body = (co.compress(data) + co.flush())[2:-4]
    # gzip trailer: crc32 and uncompressed length, both little-endian uint32
    footer = struct.pack("<LL", zlib.crc32(data) & 0xffffffff, len(data) & 0xffffffff)
    return b"".join([_header, body, footer])
$("body").append("<div></div>");
<decision-points>
  <decision-point code="home-page" description="Home Page">
    <decisions>
      <decision code="welcome-style" description="Welcome Style">
        <decision-option code="plain" description="Plain (old)"/>
        <decision-option code="fancy" description="Fancy (new)"/>
      </decision>
    </decisions>
  </decision-point>
</decision-points>
# In other words, load your async scripts in any order;
# express their relationships locally with the code.
# This is just some hobby code I wrote last night;
# I welcome comments on how to make it simpler or better.
# Works with any functions:
# > depends ["A","B"], provides "C", -> console.log "win!"
# > depends "A", -> provide "B"
# > provide "A"
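Hypothetically, the depends/provide contract described in those comments could be sketched in plain JavaScript like this (the names mirror the CoffeeScript usage above; the implementation details are my own assumptions, not code from this gist):

```javascript
// Registry of names that have been provided, plus callbacks still waiting.
const provided = new Set();
const pending = [];

// provide(name): mark a dependency as available and run anything
// that is now fully satisfied. A callback may itself call provide(),
// mutating `pending`, so we restart the scan after each run.
function provide(name) {
  provided.add(name);
  let i = 0;
  while (i < pending.length) {
    const { needs, fn } = pending[i];
    if (needs.every((n) => provided.has(n))) {
      pending.splice(i, 1);
      fn();
      i = 0; // pending may have changed; scan again from the top
    } else {
      i++;
    }
  }
}

// depends(needs, fn): run fn once every name in `needs` is provided.
function depends(needs, fn) {
  if (!Array.isArray(needs)) needs = [needs];
  if (needs.every((n) => provided.has(n))) fn();
  else pending.push({ needs, fn });
}

// provides(name, fn): wrap fn so that running it also provides `name`.
function provides(name, fn) {
  return function () {
    fn();
    provide(name);
  };
}
```

With this sketch, the three example lines from the comments run in any order: the final `provide("A")` triggers the `depends("A", ...)` callback, which provides "B", which in turn satisfies `depends(["A","B"], ...)`.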
#!/usr/bin/env sh
##
# This is a script with useful tips taken from:
#   https://github.com/mathiasbynens/dotfiles/blob/master/.osx
#
# To install it:
#   curl -sL https://raw.github.com/gist/2960681/hack.sh | sh
#
The Promise API's true value lies in lifting control out of the compiler's hands and into the hands of the runtime, using a very simple structure. Rather than the syntax of the source code being the only description of the relationships between pieces of code (e.g. a callback pyramid), we now have a simple API for storing and using those relationships.
TL;DR The core API of a Promise object should be:
.wait(cb) # cb gets (err, result)
.finish(result) # cb will get (undefined, result)
.fail(err) # cb will get (err, undefined)
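That three-method core can be sketched in a few lines of plain JavaScript (the factory name `makePromise` and its internals are my assumptions for illustration, not code from this note):

```javascript
// Minimal wait/finish/fail promise: callbacks registered before
// settlement are queued; callbacks registered after run immediately.
function makePromise() {
  let waiting = [];
  let settled = false, error, result;
  return {
    wait(cb) {                 // cb gets (err, result)
      if (settled) cb(error, result);
      else waiting.push(cb);
    },
    finish(r) {                // waiting cbs get (undefined, r)
      if (settled) return;     // first settlement wins
      settled = true;
      result = r;
      waiting.forEach((cb) => cb(undefined, r));
      waiting = [];
    },
    fail(e) {                  // waiting cbs get (e, undefined)
      if (settled) return;
      settled = true;
      error = e;
      waiting.forEach((cb) => cb(e, undefined));
      waiting = [];
    },
  };
}
```

Note how the relationship between producer and consumer now lives in the promise object at runtime, not in the nesting of the source code.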
This is an idea for scaling out certain data when transitioning to a highly clustered architecture.
TL;DR Don't just read, mostly subscribe.
Ideally suited for data that is read often, but written rarely; the higher the r:w ratio, the more you gain from this technique.
This tends to happen to certain data as it grows up into a cluster. Even if a piece of data has only a 2:1 read:write ratio on a single server (a very small margin in this context, meaning it is read twice for every time it is written), scaling to two servers often yields 4:1 rather than 4:2, because the second write is redundant. That is, as long as you can publish knowledge of the change fast enough that the other edges don't make the same change.
# Generates the text of asetniop.vim
singles =
  a: "a"
  s: "s"
  e: "d"
  t: "f"
  n: "j"
  i: "k"
  o: "l"
  p: ";"