fr33m0nk's public gists
@fr33m0nk
fr33m0nk / logback.xml
Created March 31, 2022 12:04 — forked from jeroenr/logback.xml
Logback configuration splitting to stdout and stderr
<configuration>
  <appender name="STDERR" class="ch.qos.logback.core.ConsoleAppender">
    <target>System.err</target>
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>WARN</level>
    </filter>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
@fr33m0nk
fr33m0nk / map-zipper.clj
Created May 9, 2022 09:45 — forked from dvingo/map-zipper.clj
Clojure zipper for recursively traversing a map
;; inspired by: https://clojuredocs.org/clojure.zip/zipper#example-54d91161e4b081e022073c72
(defn map-zipper
  [m ref-keys]
  {:pre [(set? ref-keys)]}
  (z/zipper
    (fn is-branch? [x]
      (let [ret
            (cond
              (not (coll? x))
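The preview cuts off before the zipper's children and node-construction functions. As a rough illustration of the same idea only (this is my own sketch, not dvingo's version; simple-map-zipper and its branch rules are invented here), a nested map can be zipped by treating maps as branches and their [k v] entries as children:

(ns scratch.map-zipper-sketch
  (:require [clojure.zip :as z]))

(defn simple-map-zipper
  "Zip a nested map: maps are branches, their [k v] entries are children,
   and an entry whose value is a map is itself a branch."
  [m]
  (z/zipper
    (fn branch? [x] (or (map? x) (and (map-entry? x) (map? (val x)))))
    (fn children [x] (seq (if (map? x) x (val x))))
    (fn make-node [x kids]
      (if (map? x)
        (into {} kids)
        (clojure.lang.MapEntry. (key x) (into {} kids))))
    m))

;; Navigate into the nested structure:
(-> (simple-map-zipper {:a {:b 1}})
    z/down   ;; [:a {:b 1}]
    z/down   ;; [:b 1]
    z/node)
;; => [:b 1]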
@fr33m0nk
fr33m0nk / clojure-java-interop.clj
Created September 4, 2022 04:19 — forked from JulienRouse/clojure-java-interop.clj
Notes about clojure-java interop for a Montreal Clojure Meetup lightning talk
;Montreal Clojure Meetup 2019
;;sources
; - https://clojure.org/reference/java_interop
; - https://leanpub.com/clojure-java-interop/read_sample
; - http://blog.jayfields.com/2011/12/clojure-java-interop.html
; - https://stackoverflow.com/questions/2181774/calling-clojure-from-java
; - https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-4.html#jvms-4.3.2-200
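A few of the basic interop forms such notes usually walk through (these particular expressions are my own illustrations, not taken from the gist):

;; instance method call
(.toUpperCase "hello")            ;; => "HELLO"
;; static method and static field access
(Math/abs -3)                     ;; => 3
Integer/MAX_VALUE                 ;; => 2147483647
;; constructor call plus chained mutation with doto
(doto (java.util.ArrayList.)
  (.add 1)
  (.add 2))                       ;; => [1 2] (an ArrayList)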
@fr33m0nk
fr33m0nk / 00_destructuring.md
Created November 16, 2022 15:34 — forked from john2x/00_destructuring.md
Clojure Destructuring Tutorial and Cheat Sheet

Clojure Destructuring Tutorial and Cheat Sheet

(Related blog post)

Simply put, destructuring in Clojure is a way to extract values from a data structure and bind them to symbols, without having to explicitly traverse the data structure. It allows for elegant and concise Clojure code.

Vectors and Sequences
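
For example (these snippets are mine, added for illustration; the full cheat sheet in the gist goes much further):

;; positional (vector) destructuring with a rest seq and the whole value
(let [[a b & more :as all] [1 2 3 4 5]]
  [a b more all])
;; => [1 2 (3 4 5) [1 2 3 4 5]]

;; associative (map) destructuring with defaults
(let [{:keys [name age] :or {age 0}} {:name "Rich"}]
  [name age])
;; => ["Rich" 0]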

(ns scratch)

;; transducer-style map: wraps a reducing function `xf`, applying `f` to each item
(defn map [f]
  (fn [xf]
    (fn
      ([] (xf))
      ([acc] (xf acc))
      ([acc itm]
       (xf acc (f itm))))))

(defn r-some?
  "Super fast version of clojure.core/some that does minimal allocation"
  [pred coll]
  (reduce
    (fn [acc v]
      (if-let [val (pred v)]   ;; was (pred coll); the predicate should see each element
        (reduced val)
        acc))
    false
    coll))
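For instance (my own quick check of the function above, not part of the gist):

(r-some? #{3 5} [1 2 3 4])   ;; => 3
(r-some? even? [1 3 5])      ;; => false (clojure.core/some would return nil)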
@fr33m0nk
fr33m0nk / tx-pipeline-inorder.clj
Created November 18, 2022 17:40 — forked from favila/tx-pipeline-inorder.clj
Transact transactions in core.async channel `from-ch` against datomic connection `conn` with a pipeline depth of `n`.
(defn tx-pipeline-inorder
"Transact transactions in core.async channel `from-ch` against datomic connection `conn` with a pipeline depth of `n`.
Returns a map with keys `:stop!` and `:result`. `:stop!` is a no-arg function you can call to immediately
cease sending transactions (already sent transactions cannot be stopped). `:result` is a promise channel
returning a variant of the result of the tx-pipelining after from-ch is closed and drained, or
:stop! is called and all send transaction futures are deref-ed, or a transaction produces an exception.
The variant returned from the :result channel may be one of:
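The preview stops inside the docstring. As a rough sketch of the in-order pipelining idea only (this is not favila's implementation; it drops the :stop!/:result plumbing and assumes the Datomic peer API is available as datomic.api):

(require '[clojure.core.async :as a]
         '[datomic.api :as d])

(defn tx-pipeline-inorder-sketch
  "Keep at most `n` in-flight transaction futures and deref them in submission order."
  [conn n from-ch]
  (a/thread
    (loop [in-flight clojure.lang.PersistentQueue/EMPTY]
      (if-let [tx-data (a/<!! from-ch)]
        (let [in-flight (conj in-flight (d/transact-async conn tx-data))]
          (if (>= (count in-flight) n)
            ;; window is full: wait for the oldest send before submitting more
            (do @(peek in-flight)
                (recur (pop in-flight)))
            (recur in-flight)))
        ;; from-ch closed: drain the remaining futures in order
        (doseq [fut in-flight] @fut)))))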
@fr33m0nk
fr33m0nk / kafka-workbench.clj
Created November 18, 2022 18:14 — forked from favila/kafka-workbench.clj
code to fiddle with kafka behaviors
(ns kafka-workbench
  (:require [franzy.clients.consumer.protocols :as c]
            [franzy.clients.producer.protocols :as p]
            [franzy.serialization.serializers :as serializers]
            [franzy.serialization.deserializers :as deserializers]
            [franzy.clients.consumer.client :as consumer]
            [franzy.clients.producer.client :as producer]))

(def kafka-brokers [])
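A hedged sketch of how such a workbench is typically wired up. The constructor names (make-producer, make-consumer, string-serializer, and friends) follow franzy's README and are assumptions on my part, not taken from the gist; the broker address is likewise assumed, since the gist leaves kafka-brokers empty:

(def brokers ["127.0.0.1:9092"])   ;; assumed local broker

(def workbench-producer
  (producer/make-producer {:bootstrap.servers brokers}
                          (serializers/string-serializer)
                          (serializers/edn-serializer)))

(def workbench-consumer
  (consumer/make-consumer {:bootstrap.servers brokers
                           :group.id          "workbench"}
                          (deserializers/string-deserializer)
                          (deserializers/edn-deserializer)))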
@fr33m0nk
fr33m0nk / clickhousedump
Created November 23, 2022 05:16 — forked from inkrement/clickhousedump
dump all clickhouse databases and tables
#!/bin/bash
OUTDIR=.
while read -r db ; do
  while read -r table ; do
    if [ "$db" == "system" ]; then
      echo "skip system db"
      continue 2;
@fr33m0nk
fr33m0nk / KafkaSeekByTimestamp.java
Created November 24, 2022 08:45 — forked from werneckpaiva/KafkaSeekByTimestamp.java
Kafka Consumer - seek by timestamp
// Get the list of partitions
List<PartitionInfo> partitionInfos = consumer.partitionsFor(topicName);
// Transform PartitionInfo into TopicPartition
List<TopicPartition> topicPartitionList = partitionInfos
        .stream()
        .map(info -> new TopicPartition(topicName, info.partition()))
        .collect(Collectors.toList());
// Assign the consumer to these partitions
consumer.assign(topicPartitionList);
// Look for offsets based on timestamp (the preview ends here; the continuation below is a
// sketch, with timestampMs assumed to hold the target timestamp in epoch milliseconds)
Map<TopicPartition, Long> timestampsToSearch = topicPartitionList.stream()
        .collect(Collectors.toMap(tp -> tp, tp -> timestampMs));
Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(timestampsToSearch);
// Seek each partition to the earliest offset whose record timestamp is >= timestampMs
offsets.forEach((partition, offsetAndTimestamp) -> {
    if (offsetAndTimestamp != null) {
        consumer.seek(partition, offsetAndTimestamp.offset());
    }
});