When multiple goroutines share access to a value that will periodically change, readers may wish to wait for a value to be updated before reading the value again. This can be solved using a condition variable:
var val *Thing
var mu = new(sync.Mutex)
var cond = sync.NewCond(mu)
func wait(old *Thing) *Thing {
	mu.Lock()
	defer mu.Unlock()
	for val == old {
		cond.Wait()
	}
	return val
}
func update(v *Thing) {
	mu.Lock()
	defer mu.Unlock()
	val = v
	cond.Broadcast()
}
This works fine. But now suppose you want waiters to be able to time out or give up, e.g., if a context governing the request terminates. Condition variables are tricky to interface with channels: you basically have to start up a separate goroutine to close a sentinel channel when the condition is signalled, and wait on that channel instead. Besides being tedious to set up, that also has a few corner cases that you have to get right to avoid leaking goroutines on the "success" path.
A different approach is to use a channel as a condition.
var val *Thing
var done = make(chan struct{}) // must start non-nil: update closes it
var mu sync.Mutex
func wait(ctx context.Context, old *Thing) (*Thing, bool) {
	mu.Lock()
	v, ch := val, done
	mu.Unlock()
	for v == old {
		select {
		case <-ctx.Done():
			return nil, false // context cancelled or timed out
		case <-ch:
			mu.Lock()
			v, ch = val, done
			mu.Unlock()
		}
	}
	return v, true
}
func update(v *Thing) {
	mu.Lock()
	defer mu.Unlock()
	val = v
	close(done)
	done = make(chan struct{})
}
The way this works is that update closes done when it has updated the value, and then replaces done with a fresh channel. All of this is done under the lock. When a waiter discovers the value is not ready, it captures the active channel and waits for it to be closed. It then knows that the value has changed at least once, so it reacquires the lock and checks the value again. This continues until ctx terminates, or it succeeds in finding a new value.
See also: https://godoc.org/github.com/creachadair/msync#Value