@samn
Last active April 4, 2022 01:45
The Lock Less Monster
#_(
Let's say you have a threadpool doing work that requires access to some shared resource.
And this resource needs to be refreshed at times. E.g. an OAuth bearer token that can expire or be revoked.
If this resource were expensive to refresh, or subject to rate limiting (OAuth tokens are often both)
then it's desirable to refresh as little as possible.
It's undesirable, however, to mix complicated synchronization code for updating the resource in
with the business logic consuming it.
Enter the Lock Less Monster, a lock-free method for coordinating updates to a shared reference.
It's especially useful when many readers of the shared resource will see the expiration close together.
)
(ns lock-less-monster
  "Utils for synchronous, atomic, blocking, lock-less updates to a shared reference.
  Useful for coordinating an update to a shared value, where the calculation
  for the new value is expensive and should only be performed once.
  E.g. refreshing an OAuth bearer token shared by multiple threads. The refresh only
  needs to happen once (and is expensive since it involves a network call). The lock-less-monster
  allows that refresh to happen exactly once.")
(defmacro make
  "Construct a new lock-less-monster container, holding an optional initial value.
  The initial value will be evaluated lazily (when `get-value` is first called).
  Pass no args to default to nil."
  [& initial-value-body]
  `(atom (delay ~@initial-value-body)))
(defmacro update!
  "Update a lock-less-monster container with a new value. Only sets the value if the container
  reference hasn't changed (this is threadsafe when only one update should occur). The body
  is delayed and will be evaluated when `get-value` is first called on the container."
  [container & body]
  `(compare-and-set! ~container @~container (delay ~@body)))
(defn get-value
  "Fetch the value from a lock-less-monster container. Blocks until the value is ready."
  [container]
  @@container)
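;; A small REPL sketch (illustrative only) of the guarantee described in
;; `update!`'s docstring: when two callers race and both observed the same
;; delay, only the first compare-and-set! wins, so `get-value` forces exactly
;; one replacement value.
(comment
  (let [c (make 1)
        stale @c]                            ; both callers observed this delay
    [(compare-and-set! c stale (delay 2))    ; => true, first caller wins
     (compare-and-set! c stale (delay 3))    ; => false, stale reference
     (get-value c)]))                        ; => 2, only (delay 2) is forced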
(defprotocol RefreshableContainer
  "A container around a value that also holds the logic for refreshing the value."
  (refresh! [this])
  (value [this]))
(ns dogz
  "An example of using the Lock Less Monster"
  (:require [lock-less-monster :as llm]))
(def counter (atom 0))
(defn fetch-external-value
  []
  (Thread/sleep 350)
  (swap! counter inc))
(defn make-container
  []
  (let [container (llm/make (fetch-external-value))]
    (reify llm/RefreshableContainer
      (refresh! [this]
        (llm/update! container (fetch-external-value)))
      (value [this]
        (llm/get-value container)))))
;; This demonstrates that the refresh only occurs once.
;; It's not meant to be a strong demonstration of correctness.
(let [container (make-container)
      threads (for [_ (range 0 100)] (Thread. (fn [] (llm/refresh! container))))]
  (prn @counter)              ; => 0
  (prn (llm/value container)) ; => 1
  (prn @counter)              ; => 1
  (doseq [thread threads] (.start thread))
  (doseq [thread threads] (.join thread))
  (prn (llm/value container)) ; => 2
  (prn @counter))             ; => 2
#_(
This lets us write code that handles the expiration and refresh of the shared resource
without explicit coordination.
E.g.
(def container some-llm-container)
(try
  (make-request! (llm/value container))
  (catch TokenExpiredException e
    (llm/refresh! container)))
)

timmc commented Aug 23, 2013

(Copied from email to open the discussion.)

Cool. I like how you're using the object identity of the delay in compare-and-set! as a substitute for the caller locking the resource.

I do think that you could get multiple requests, though. Here's a sample timeline:

  • The atom contains delay #1
  • Threads A and B both discover the current value sucks
  • Thread A calls update! and calls compare-and-set! with delay #1
  • compare-and-set! finishes, setting delay #2
  • Thread B finally gets around to calling update! and dereferences the atom to get delay #2, and asks for that to be replaced
  • compare-and-set! succeeds in swapping out delay #2 for delay #3

Now, each thread is going to call get-value, and there's a chance that they will see different delays (causing multiple requests). If execution of delay #3 invalidates delay #2 (as with some OAuth2 refresh token impls), then the thread that sees delay #2 is going to have trouble of one sort or another.

Does that make sense?
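
Concretely, here's a minimal single-threaded re-enactment of that timeline against the gist's functions (using fetch-external-value from the dogz example as the expensive call), showing two refreshes triggered by a single expiry:

(let [c (llm/make (fetch-external-value))]   ; the atom holds delay #1
  (llm/get-value c)                          ; delay #1 is forced: request 1
  ;; Threads A and B both decide the current value sucks.
  (llm/update! c (fetch-external-value))     ; A: CAS swaps delay #1 for delay #2
  (llm/get-value c)                          ; A forces delay #2: request 2
  (llm/update! c (fetch-external-value))     ; B: re-reads the atom, sees delay #2,
                                             ;    so its CAS also succeeds, installing delay #3
  (llm/get-value c))                         ; B forces delay #3: request 3, for the same expiry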


timmc commented Aug 27, 2013

Maybe a lazy seq would be better. I think the desired semantics are "Every thread sees the same stream of values with no gaps, and new values are added to the stream as soon as one thread rejects the last added value."

Of course, this is predicated on having the same generator for all values.
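
For what it's worth, a rough sketch of that idea (my reading of it, with fetch-external-value standing in for the shared generator): an infinite, cached lazy seq of generated values, where "refresh" just means advancing to the next index. Lazy seq cells are realized at most once and realization is synchronized, so every thread asking for index n sees the same value.

(defn value-stream
  "Infinite lazy seq of values from generate; each cell is computed once and cached."
  [generate]
  (repeatedly generate))

(def stream (value-stream fetch-external-value))
(nth stream 0) ; every thread reading index 0 sees the same value
(nth stream 1) ; a thread that rejects value 0 advances to index 1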
