NOTE: I now use the conventions detailed in the SUIT framework.

Used to provide structural templates.

Pattern: t-template-name
trait CanReadXML[A] {
  def reads(seq: scala.xml.NodeSeq): Either[String, A]
}

trait Foo { def foo: Int }
trait Bar { def bar: Int }

object Def {
  def fromXML[A: CanReadXML](seq: scala.xml.NodeSeq): Either[String, A] =
    implicitly[CanReadXML[A]].reads(seq)
}
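To show how the type class is meant to be wired up, here is a hypothetical instance for Foo; the XML shape (a foo element containing an integer) is an assumption for illustration, not part of the original:

// Hypothetical instance: parses <foo>42</foo> into a Foo.
implicit val fooReads: CanReadXML[Foo] = new CanReadXML[Foo] {
  def reads(seq: scala.xml.NodeSeq): Either[String, Foo] =
    try Right(new Foo { val foo = (seq \\ "foo").text.trim.toInt })
    catch { case _: NumberFormatException => Left("expected an integer in <foo>") }
}

val parsed = Def.fromXML[Foo](<foo>42</foo>) // Right(...), with parsed.map(_.foo) == Right(42)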
Locate the section for your GitHub remote in the .git/config
file. It looks like this:

[remote "origin"]
	fetch = +refs/heads/*:refs/remotes/origin/*
	url = git@github.com:joyent/node.git

Now add the line fetch = +refs/pull/*/head:refs/remotes/origin/pr/* to this section. Change the GitHub URL to match your own project's, of course. It ends up looking like this:
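[remote "origin"]
	fetch = +refs/heads/*:refs/remotes/origin/*
	url = git@github.com:joyent/node.git
	fetch = +refs/pull/*/head:refs/remotes/origin/pr/*

Now fetch all the pull requests:

$ git fetch origin

and check one out by number, e.g. git checkout pr/999 (the number here is just an example).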
<script>
  _invoke = [];                            // name your api global array
  _invoke.push(['call1', 'arg1', 'arg2']); // make an api call
  (function() {
    var _s = document.createElement('script');
    _s.src = window.location.protocol + '//your-script-here.js'; // include the protocol for all browsers
    var _fs = document.getElementsByTagName('script')[0];
    _fs.parentNode.insertBefore(_s, _fs);  // insert before the first script tag on the page
  })();
</script>
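On the other end, the script being loaded would typically drain this queue and then replace push so that later calls run immediately. A minimal sketch of that side, with illustrative names (the real API object and its methods are not part of the original snippet):

// Sketch of the loaded script (your-script-here.js); names are illustrative.
var api = {
  call1: function (arg1, arg2) { /* do the real work here */ }
};
function run(cmd) { api[cmd[0]].apply(api, cmd.slice(1)); }
for (var i = 0; i < _invoke.length; i++) run(_invoke[i]); // drain calls queued before load
_invoke.push = function (cmd) { run(cmd); };              // future calls execute immediately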
import spark.streaming.StreamingContext._
import spark.streaming.{Seconds, StreamingContext}
import spark.SparkContext._
import spark.storage.StorageLevel
import spark.streaming.examples.twitter.TwitterInputDStream
import com.twitter.algebird.HyperLogLog._
import com.twitter.algebird._

/**
 * Example of using the HyperLogLog monoid from Twitter's Algebird together with Spark Streaming's
 * TwitterInputDStream
 */
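A rough sketch of how these pieces fit together, estimating the number of distinct Twitter users per batch. The credentials, bit size, batch interval, and output handling below are all illustrative assumptions; hll(id) relies on the implicit Long-to-bytes conversion imported from HyperLogLog._:

// Sketch only: all parameter values and credentials are placeholders.
val ssc = new StreamingContext("local[2]", "TwitterHLL", Seconds(10))
val stream = new TwitterInputDStream(ssc, "username", "password", Nil,
  StorageLevel.MEMORY_ONLY_SER)
ssc.registerInputStream(stream)

val hll = new HyperLogLogMonoid(12) // 12 bits of precision
val approxUsers = stream
  .map(status => status.getUser.getId)
  .map(id => hll(id)) // one small sketch per user id
  .reduce(_ + _)      // the monoid's plus merges sketches
approxUsers.foreach(rdd => {
  if (rdd.count() != 0) {
    println("Approx distinct users this batch: " + rdd.first().estimatedSize.toInt)
  }
})
ssc.start()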
import spark.streaming.{Seconds, StreamingContext}
import spark.storage.StorageLevel
import spark.streaming.examples.twitter.TwitterInputDStream
import com.twitter.algebird._
import spark.streaming.StreamingContext._
import spark.SparkContext._

/**
 * Example of using the CountMinSketch monoid from Twitter's Algebird together with Spark Streaming's
 * TwitterInputDStream
 */
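A matching sketch for the count-min side, tracking approximate heavy-hitter users. The eps, delta, seed, and heavy-hitter percentage passed to CountMinSketchMonoid are illustrative values, as is the stream setup:

// Sketch only: all parameter values and credentials are placeholders.
val ssc = new StreamingContext("local[2]", "TwitterCMS", Seconds(10))
val stream = new TwitterInputDStream(ssc, "username", "password", Nil,
  StorageLevel.MEMORY_ONLY_SER)
ssc.registerInputStream(stream)

val cms = new CountMinSketchMonoid(0.01, 0.001, 1, 0.01) // eps, delta, seed, heavy-hitter pct
val approxTopUsers = stream
  .map(status => status.getUser.getId)
  .map(id => cms.create(id)) // one sketch per user id
  .reduce(_ + _)             // merge sketches via the monoid
approxTopUsers.foreach(rdd => {
  if (rdd.count() != 0) {
    println("Approx heavy hitters: " + rdd.first().heavyHitters.mkString(", "))
  }
})
ssc.start()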
ror, scala, jetty, erlang, thrift, mongrel, comet server, mysql, memcached, varnish, kestrel (mq), starling, gizzard, cassandra, hadoop, vertica, munin, nagios, awstats
This gist started with a collection of resources I was maintaining on stream data processing, a topic that also goes by names like distributed logs, data pipelines, event sourcing, and CQRS.
Over time the set of resources grew quite large, and I received some interest in a more guided, opinionated path for learning about stream data processing. So I added the reading list.
Please send me feedback!
## this should get loaded by your ~/.Rprofile
## either just stick all these definitions right in ~/.Rprofile, or
## have ~/.Rprofile source this file, etc.
source.dirs <- c(
  ## add any "base" locations here. this is like a "classpath" in java
  ## my default is c("~/myRUtils/src","~/companyRUtils/src","~/publicRUtils"),
  ## but you can use whatever you like
  paste(Sys.getenv("HOME"), "myRUtils", "src", sep="/"),
  paste(Sys.getenv("HOME"), "companyRUtils", "src", sep="/"),
  paste(Sys.getenv("HOME"), "publicRUtils", sep="/")
)
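A small helper to go with it, on the assumption that the point is to source every .R file found under those directories at startup (the function name source.all is illustrative, not from the original):

## Illustrative helper: source every .R file under source.dirs.
source.all <- function(dirs = source.dirs) {
  for (d in dirs) {
    for (f in list.files(d, pattern = "\\.[rR]$", full.names = TRUE)) {
      source(f)
    }
  }
}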
You got your hands on some data that was leaked from a social network, and you want to help the affected people.
Luckily, you know a government service that can automatically block a list of credit cards.
The service is a little old school, though, and you have to upload a CSV file in an exact format. The upload fails if the CSV file contains invalid data.
The CSV file should have two columns, Name and Credit Card. It must also be named after the pattern YYYYMMDD.csv.
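A quick sketch of producing such a file; the record data below is illustrative, and the format details (header row, comma separator) are assumptions based on the description above:

// Illustrative sketch: writes a correctly named CSV with the two required columns.
import java.io.PrintWriter
import java.time.LocalDate
import java.time.format.DateTimeFormatter

val fileName = LocalDate.now.format(DateTimeFormatter.ofPattern("yyyyMMdd")) + ".csv"
val records = Seq(("Hans Muster", "4485123412341234")) // leaked data goes here
val out = new PrintWriter(fileName)
try {
  out.println("Name,Credit Card")
  records.foreach { case (name, card) => out.println(s"$name,$card") }
} finally out.close()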