Andy Chambers cddr

@cddr
cddr / puma.clj
Last active February 15, 2017 10:48
(defn holdings-manager
  "Returns a collection of kstreams, one for each different way in which
   loan ownership state may be updated"
  [puma-events]
  (let [ownership-store (k/store "loan-ownership")]
    (-> (k/kstream puma-events)
        (k/transform (update-ownership ownership-store))
        (k/branch [trade-result?
                   repayment-result?]))))
$ lein deps :tree
[ch.qos.logback/logback-classic "1.1.1" :scope "test"]
[ch.qos.logback/logback-core "1.1.1" :scope "test"]
[clj-http "2.3.0" :scope "test"]
[commons-codec "1.10" :scope "test" :exclusions [[org.clojure/clojure]]]
[commons-io "2.5" :scope "test" :exclusions [[org.clojure/clojure]]]
[org.apache.httpcomponents/httpclient "4.5.2" :scope "test" :exclusions [[org.clojure/clojure]]]
[commons-logging "1.2" :scope "test"]
[org.apache.httpcomponents/httpcore "4.4.5" :scope "test" :exclusions [[org.clojure/clojure]]]
[org.apache.httpcomponents/httpmime "4.5.2" :scope "test" :exclusions [[org.clojure/clojure]]]
$ lein deps :tree
Possibly confusing dependencies found:
[org.apache.kafka/kafka_2.11 "0.10.0.1"]
overrides
[io.confluent/kafka-schema-registry "3.0.1"] -> [org.apache.kafka/kafka_2.11 "0.10.0.1-cp1"]
Consider using these exclusions:
[io.confluent/kafka-schema-registry "3.0.1" :exclusions [org.apache.kafka/kafka_2.11]]
[org.apache.kafka/kafka-clients "0.10.0.1"]
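The exclusion that `lein deps :tree` suggests goes in the `:dependencies` vector of `project.clj`. A sketch (the coordinates are taken from the output above; the surrounding project map is assumed):

```clojure
;; project.clj fragment (sketch): exclude the Confluent-patched
;; kafka_2.11 artifact so the plain Apache version wins, as the
;; `lein deps :tree` warning above suggests.
:dependencies [[org.apache.kafka/kafka_2.11 "0.10.0.1"]
               [io.confluent/kafka-schema-registry "3.0.1"
                :exclusions [org.apache.kafka/kafka_2.11]]]
```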
@cddr
cddr / brain-flush.el
Created September 25, 2016 17:41
Brain Flush
(define-ibuffer-filter buffer-modified-p
    "Toggle current view to buffers that have been modified"
  (:description "unsaved file buffers")
  (and (buffer-local-value 'buffer-file-name buf)
       (buffer-modified-p buf)))

(defun brain-flush ()
  (interactive)
  (ibuffer t "*Brain Flush*"
           '((buffer-modified-p . t))))
(ns kafka-streams.embedded
"Lifecycle component for an embedded Kafka cluster
The components defined here use the component lifecycle to model
the dependencies between the various services (zookeeper, kafka broker
and kafka client).
The test-harness function returns a system containing all the relevant
components, and starting that system starts the components in the
correct order (i.e. zookeeper, kafka, test-client).
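A sketch of how a harness like this is typically wired with Stuart Sierra's component library. The `test-harness`, zookeeper, kafka, and test-client names come from the docstring above; the stub records and constructors are assumptions standing in for the gist's real lifecycle implementations:

```clojure
(ns kafka-streams.embedded-sketch
  (:require [com.stuartsierra.component :as component]))

;; Stub components standing in for the real services; each would
;; actually start/stop a zookeeper node, kafka broker, or kafka client.
(defrecord Zookeeper []
  component/Lifecycle
  (start [this] this)
  (stop [this] this))

(defrecord Kafka [zookeeper]
  component/Lifecycle
  (start [this] this)
  (stop [this] this))

(defrecord TestClient [kafka]
  component/Lifecycle
  (start [this] this)
  (stop [this] this))

(defn test-harness []
  (component/system-map
   :zookeeper   (->Zookeeper)
   :kafka       (component/using (map->Kafka {}) [:zookeeper])
   :test-client (component/using (map->TestClient {}) [:kafka])))

;; component/start walks the dependency graph, so the services come up
;; in the order the docstring describes: zookeeper, kafka, test-client.
(def system (component/start (test-harness)))
```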
@cddr
cddr / sqlite.clj
Created August 15, 2016 02:34
SQLite implementation of a hitchhiker-tree backend
(ns hitchhiker.sqlite
(:require [clojure.java.jdbc :as jdbc]
[clojure.edn :as edn]
[clojure.string :as str]
[hitchhiker.tree.core :as core]
[hitchhiker.tree.messaging :as msg]
[clojure.core.cache :as cache]
[taoensso.nippy :as nippy])
(:import [java.sql SQLException]))
@cddr
cddr / compat-test.clj
Created July 12, 2016 07:59
Better than avro?
(s/def ::foo-1.0 (s/keys :req [::my-string ::my-int]))
(s/def ::foo-1.1 (s/keys :req [::my-string]))
(defspec compatibility-test
  (prop/for-all [v1-1 (s/gen ::foo-1.1)]
    (s/valid? ::foo-1.0 v1-1)))
@cddr
cddr / kafka-data
Last active September 21, 2016 09:59
#!/bin/bash
#
# Here's the scenario
#
# You can ssh into some shared environment containing kafka but you need to
# go through a bastion server to get there. How can you easily stream kafka
# data into your local system (e.g. to get test data, gather data from shared
# environments to reproduce a bug etc)
#
# This script can be executed on the bastion. It picks a random mesos slave to
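A minimal sketch of the idea the comment describes, hopping through the bastion with ssh and redirecting a topic to a local file. The host names, topic, and broker address are made-up placeholders, and the consumer flags may differ by Kafka version; the gist's actual script is truncated above:

```shell
# Sketch: jump through the bastion to a slave that can reach the broker,
# run the console consumer there, and capture the stream locally.
# bastion.internal, slave-3.internal, and events-topic are assumptions.
ssh -J bastion.internal slave-3.internal \
    kafka-console-consumer.sh \
      --bootstrap-server localhost:9092 \
      --topic events-topic \
      --from-beginning \
  > events-topic.log
```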

Keybase proof

I hereby claim:

  • I am cddr on github.
  • I am cddr (https://keybase.io/cddr) on keybase.
  • I have a public key ASD2dsyvhcBJmvz3D64pTyiLJqdcsCxDqKy-rPKgy4ATugo

To claim this, I am signing this object:

(defn run-test-job
  "Runs the email job for `timeout` milliseconds before returning whatever
   has been accumulated in `sent-emails`.

   TODO: Consider promoting more generic version of this to samza-config"
  [timeout]
  (let [job (MapConfig. (assoc (into {} (find-job 'notifications-handler.core.email))
                               "job.task.send-fn" (full-name 'mock-send-fn)))]
    (let [job-proc (.run (JobRunner. job) true)]