.reveal .slides section {
  padding: 10px;
}

.reveal .slides section iframe {
  -webkit-transform: scale(0.5) translate(-50%, -50%);
  -moz-transform: scale(0.5) translate(-50%, -50%);
  transform: scale(0.5) translate(-50%, -50%);
  min-width: 200%;
  min-height: 200%;
}
<!doctype html>
<html lang="en">
<meta charset="utf-8">
<link rel="stylesheet" href="css/reveal.css">
<link rel="stylesheet" href="css/theme/default.css" id="theme">

CartoDB install on DigitalOcean Ubuntu 12.04 64-bit

based on with additions as necessary
11/2013 to 02/2014

Install git

sudo apt-get install git-core

Clone project

// Session
require(['models/session'], function (Session) { window.Session = Session; });
var s = new Session();
// Returns a Session model with the following:
// - attributes contain simple attributes
// - trips (TripList) a collection of Trip models; 'light' versions.
// - user (User) the logged in user; check its attributes for email and name.
// - expenseCategories
// - expensePaymentTypes
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals
from scrapy.exceptions import DropItem
from scrapy.utils.serialize import ScrapyJSONEncoder
from carrot.connection import BrokerConnection
from carrot.messaging import Publisher
from twisted.internet.threads import deferToThread
$f("", {}, {
  clip: {
    onMetaData: function (c) {
      var fd = c.duration;
      // create a cue point for 25, 50 and 75% of player progress
      var cues = [
        { time: fd * 0.25 * c.cuepointMultiplier, name: "25%" },
        { time: fd * 0.50 * c.cuepointMultiplier, name: "50%" },
        { time: fd * 0.75 * c.cuepointMultiplier, name: "75%" }
      ];
    }
  }
});

require('child_process').fork() and beyond


The purpose of this sample is to show the power of EventEmitter2 in the context of a specific example centered around DIRT (data-intensive real-time) and ETL (extract, transform, load) applications in node.js. Given the hard limit on the V8 heap size, any exceptionally large data processing in node will require such patterns, and it is in the interest of the community that we start solidifying them.


Let's suppose that you have an ETL job to run on a large set of logs that has already been partitioned into files small enough not to overload the V8 heap on their own. These kinds of size-limited log or data files are common and should need no explanation.

This ETL runs with initial conditions (very common), and thus there may be many sets of worker processes analyzing the same data for different purposes. As an intelligent developer who knows the blocking nature of in-memory data manipulation, you decided

/**
 * Based on the Lucene prolog parser in the wordnet contrib package within the
 * main Lucene project. It has been modified to remove the Lucene bits and generate
 * a synonyms.txt file suitable for consumption by Solr. The idea was mentioned in
 * a sidebar of the book Solr 1.4 Enterprise Search Server by David Smiley and Eric Pugh.
 * @see <a href="">Lucene Sandbox WordNet page</a>
 * @see <a href="">SVN Repository of the WordNet contrib</a>
 * @see <a href="">Solr 1.4 Enterprise Search Server Book</a>
 */
function inspect(obj, maxLevels, level) {
  var str = '', type;
  // Start input validations
  // Don't touch, we start iterating at level zero
  if (level == null) level = 0;
  // At least you want to show the first level
  if (maxLevels == null) maxLevels = 1;
  if (maxLevels < 1 || level > maxLevels) return str;
  // Walk enumerable properties, recursing into nested objects
  for (var prop in obj) {
    type = typeof obj[prop];
    str += Array(level + 1).join('  ') + prop + ' (' + type + '):';
    str += (type === 'object' && obj[prop] !== null) ? '\n' + inspect(obj[prop], maxLevels, level + 1) : ' ' + String(obj[prop]) + '\n';
  }
  return str;
}