@j14159
j14159 / ConsistentHash.scala
Created October 9, 2012 00:52
Naive and simple first crack at consistent hashing.
/*
 * Very naive approach to simulating a distributed cache using consistent hashing. Based on the following:
 * http://thor.cs.ucsb.edu/~ravenben/papers/coreos/kll%2B97.pdf
 *
 * Initially a translation of:
 * http://weblogs.java.net/blog/tomwhite/archive/2007/11/consistent_hash.html
 *
 * I also looked a bit at this gist from http://github.com/opyate towards the end:
 * https://gist.github.com/1927001
 */
@j14159
j14159 / gist:4699916
Created February 3, 2013 00:32
Floor example
"/hello/:name" get {
request => Future {
/*
* we're in The Future here so you can do
* heavier stuff without blocking up other
* things like the Netty handler.
*/
val name = request.path(":name")
Response.ok(s"Hello, ${name}")
}
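The comment in the handler above is the whole point of the pattern: heavy work runs on a `Future`'s thread pool rather than on the server's I/O threads. A framework-free sketch of the same idea, using only the Scala standard library (the Floor API itself is not used here, and `slowGreeting` is a name I made up):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// a dedicated pool for blocking work, so the server's I/O threads stay free
implicit val blockingEc: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

def slowGreeting(name: String): Future[String] = Future {
  Thread.sleep(50) // stand-in for a slow database call or similar
  s"Hello, ${name}"
}

// blocking here only for demonstration; a real handler would map over the Future
val greeting = Await.result(slowGreeting("world"), 1.second)
```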
package definterp

import org.scalatest._

class AdditionProgram extends FlatSpec with Matchers {

  val addition =
    Lambda("x",
      Lambda("y", LetRec("add-until",
        Lambda("xx",
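The truncated test above builds a program out of AST nodes like `Lambda` and `LetRec`. For context, here is a hedged, minimal sketch of what such a definitional interpreter can look like; the constructor names (`Num`, `Add`, `Var`, `Apply`) and the evaluator are my assumptions, not definterp's actual types, and `LetRec` is omitted for brevity:

```scala
// Hypothetical minimal AST in the spirit of the snippet above.
sealed trait Expr
case class Num(n: Int) extends Expr
case class Var(name: String) extends Expr
case class Add(l: Expr, r: Expr) extends Expr
case class Lambda(arg: String, body: Expr) extends Expr
case class Apply(fn: Expr, arg: Expr) extends Expr

sealed trait Value
case class NumV(n: Int) extends Value
case class Closure(arg: String, body: Expr, env: Map[String, Value]) extends Value

// Environment-passing evaluator: closures capture the environment at definition.
def eval(e: Expr, env: Map[String, Value] = Map.empty): Value = e match {
  case Num(n)       => NumV(n)
  case Var(x)       => env(x)
  case Lambda(x, b) => Closure(x, b, env)
  case Add(l, r) =>
    (eval(l, env), eval(r, env)) match {
      case (NumV(a), NumV(b)) => NumV(a + b)
      case _                  => sys.error("can only add numbers")
    }
  case Apply(f, a) =>
    eval(f, env) match {
      case Closure(x, b, cenv) => eval(b, cenv + (x -> eval(a, env)))
      case _                   => sys.error("can only apply functions")
    }
}
```

With this, a curried addition program like the one in the test evaluates as expected: applying `Lambda("x", Lambda("y", Add(Var("x"), Var("y"))))` to `1` and `2` yields `NumV(3)`.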
@j14159
j14159 / gist:d3cbe172f7b962d74d09
Created July 18, 2014 20:40
Naive/early S3N RDD for Spark
/**
 * Started to rough this naive S3-native filesystem RDD out because I need to use IAM
 * profiles for S3 access and also https://issues.apache.org/jira/browse/HADOOP-3733.
 *
 * Use at your own risk: bear in mind this is maybe 30-45 minutes of work and testing,
 * and expect it to behave as such.
 *
 * Feedback/criticism/discussion welcome via Github/Twitter
 *
 * In addition to Spark 1.0.x, this depends on Amazon's S3 SDK; dependency is as follows:
 */
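For context on what a custom S3-backed RDD involves: a Spark RDD subclass has to describe its partitions and how to compute each one. The sketch below is my own rough illustration against the Spark 1.0-era API, not the gist's code; `S3RDD`, `S3Partition`, and the one-object-per-partition scheme are assumptions. The no-argument `AmazonS3Client` uses the SDK's default credential chain, which is what makes IAM instance-profile credentials work without Hadoop's `fs.s3n` key configuration:

```scala
import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD
import com.amazonaws.services.s3.AmazonS3Client

// Hypothetical: one partition per S3 key.
case class S3Partition(idx: Int, key: String) extends Partition {
  override def index: Int = idx
}

class S3RDD(sc: SparkContext, bucket: String, keys: Seq[String])
    extends RDD[String](sc, Nil) {

  override protected def getPartitions: Array[Partition] =
    keys.zipWithIndex.map { case (k, i) => S3Partition(i, k): Partition }.toArray

  override def compute(split: Partition, context: TaskContext): Iterator[String] = {
    val part = split.asInstanceOf[S3Partition]
    // default credential chain picks up IAM instance-profile credentials
    val client = new AmazonS3Client()
    val obj = client.getObject(bucket, part.key)
    scala.io.Source.fromInputStream(obj.getObjectContent).getLines()
  }
}
```

One line per element and one object per partition is the simplest possible split; a serious implementation would also handle large objects, retries, and closing the underlying stream.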
val databaseUrl = "postgresql://some-hostname:5432/db-name"
Class.forName("my.sql.database.driver.classname")

class BasicJdbcActor(connFac: () => Connection) extends Actor {
  lazy val conn = connFac()

  // close the connection on failure; the lazy val re-acquires it after restart
  override def preRestart(why: Throwable, msg: Option[Any]): Unit =
    try { conn.close() } catch { case _: Throwable => () }
my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    // 2 threads per core
    parallelism-factor = 2.0
    // the max number of threads the dispatcher will create (illustrative value):
    parallelism-max = 8
  }
}
// very naive, be more specific based on your problem:
val restartStrategy = OneForOneStrategy(
  maxNrOfRetries = 10,
  withinTimeRange = 1.minute) {
  case _ => Restart
}

def newPool(sys: ActorSystem): ActorRef = {
  val props = Props(new BasicJdbcActor(connFac))
    .withDispatcher("my-dispatcher")
  val pool = RoundRobinPool(4, supervisorStrategy = restartStrategy)
  sys.actorOf(pool.props(props))
}
case class Person(name: String, email: String)
case class PersonById(id: Int)

class PersonDao(cf: () => Connection) extends Actor {
  lazy val conn = cf()

  override def preRestart(why: Throwable, msg: Option[Any]): Unit =
    try { conn.close() } catch { case _: Throwable => () }

  def receive = {
trait PersonClient {
  // supply a router with a pool of PersonDao:
  val personPool: ActorRef
  // how long we should wait for a response from PersonDao:
  val timeoutInMillis: Long

  implicit val timeout = Timeout(timeoutInMillis.millis)

  // ask-pattern sketch: send the Person to the pool and expect an Int reply
  def addPerson(p: Person): Future[Int] =
    (personPool ? p).mapTo[Int]