Stuff I like

On Agile

From https://tech.labs.oliverwyman.com/blog/2018/05/16/acephalic-agile/:

But I was suddenly seized by a horrible thought: what if this new-found agility was used, not teleologically to approach the right outcome over the course of a project, but simply to enshrine the right of middle management to change their minds, to provide a methodological license for arbitrary management? At least under a Waterfall regime they had to apologise when they departed from the plan. With Agile they are allowed, in principle, to make as many changes of direction as they like. But what if Agile was used merely as a license to justify keeping the team in the office night after night in a never-ending saga of rapidly accumulating requirements and dizzying changes of direction? And what if the talk of developer ‘agility’ was just a way of softening up developers for a life of methodologically sanctioned pliability?

On DNS TTLs and failover

From https://news.ycombinator.com/item?id=17553043:

AWS engineer here, I was lead for Route 53.

We generally use 60 second TTLs, and as low as 10 seconds is very common. There's a lot of myth out there about upstream DNS resolvers not honoring low TTLs, but we find that it's very reliable. We actually see faster convergence times with DNS failover than using BGP/IP Anycast. That's probably because DNS TTLs decrement concurrently on every resolver with the record, but BGP advertisements have to propagate serially network-by-network.

The way DNS failover works is that the health checks are integrated directly with the Route 53 name servers. In fact every name server is checking the latest healthiness status every single time it gets a query. Those statuses are basically a bitset, being updated /all/ of the time. The system doesn't "care" or "know" how many health statuses change each time; it's not delta-based. That's made it very very reliable over the years. We use it ourselves for everything.

Of course the downside of low TTLs is more queries, and we charge by the query unless you ALIAS to an ELB, S3, or CloudFront (then the cost of the queries is on us).
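A minimal sketch of the mechanism that comment describes, assuming nothing about Route 53's actual internals: a health checker flips bits in a shared bitset, and the answering path consults the relevant bit on every query instead of processing change events. Every name here (`HealthBitset`, `answer_query`, the bit assignments) is hypothetical.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Conceptual sketch only: health statuses kept as a bitset that a checker
/// updates in place, and that the name server reads on every query.
struct HealthBitset {
    bits: AtomicU64, // bit i == 1 means endpoint i is currently healthy
}

impl HealthBitset {
    fn is_healthy(&self, endpoint_bit: u32) -> bool {
        self.bits.load(Ordering::Relaxed) & (1 << endpoint_bit) != 0
    }

    fn set_health(&self, endpoint_bit: u32, healthy: bool) {
        if healthy {
            self.bits.fetch_or(1 << endpoint_bit, Ordering::Relaxed);
        } else {
            self.bits.fetch_and(!(1 << endpoint_bit), Ordering::Relaxed);
        }
    }
}

/// Answer a query by checking the primary's health bit at query time and
/// falling back to the secondary if the primary is marked unhealthy.
fn answer_query<'a>(health: &HealthBitset, primary: &'a str, secondary: &'a str) -> &'a str {
    if health.is_healthy(0) { primary } else { secondary }
}

fn main() {
    let health = HealthBitset { bits: AtomicU64::new(0b1) }; // primary healthy
    assert_eq!(answer_query(&health, "10.0.0.1", "10.0.0.2"), "10.0.0.1");
    health.set_health(0, false); // health checker marks the primary down
    assert_eq!(answer_query(&health, "10.0.0.1", "10.0.0.2"), "10.0.0.2");
}
```

The design point from the comment is that the status is absolute state rather than a stream of deltas, so missing any individual update is harmless: the next read of the bitset sees the current truth.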

Read: I've been writing ring buffers wrong all these years

The article discusses the common methods of implementing a ring buffer on top of an array, and the issues with the two obvious solutions to the problem. But the article's own solution isn't graceful: it relies on unsigned-integer overflow to wrap the indices, and it requires power-of-two sizes. We can do better; the better solution, IMO, is in the comments:

Why not store indices modulo 2*capacity? Same as last solution, but prevents overflow. An additional bit in the index can be viewed as a 'fold number'. An array is full when indices refer to the same cell on different folds, and empty if they are on the same fold. And we don't need more than two folds, as indices can't be more than 'capacity' elements apart.

—"By dizzy57 on 2016-12-14"

A comment on HN restates this answer:

I have always considered these "double ring" buffers. Along the same lines as how you figure out which race car in the race is in the lead by its position and lap count. You run your indexes in the range 0 .. (2 * SIZE), and then:

EMPTY -> (read == write)
FULL -> (read == (write + SIZE) % (2 * SIZE))

Basically you're full if you're at the same relative index and you're on different laps, and you're empty if you're at the same relative index on the same lap. If you do this with a power-of-2 size then the 'lap' is just the bit with value SIZE.

The read/write pointers are indexes, and they're in [0, 2 * SIZE). The lap analogy is useful for thinking about it. The "2" in the above reflects that we don't store a full lap count; in essence, we store only one extra bit of lap information: whether the read/write pointers are on the same lap (empty) or on different laps (full).
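A minimal sketch of the "indices modulo 2 * capacity" approach described above, in Rust; the type and method names are mine, not the article's or the commenters'. The indices live in [0, 2 * capacity), so full and empty are distinguishable without wasting a slot, without relying on integer overflow, and without requiring a power-of-two capacity.

```rust
/// Sketch of a ring buffer whose read/write indices run in [0, 2 * capacity).
struct RingBuffer<T> {
    buf: Vec<Option<T>>,
    read: usize,  // in [0, 2 * capacity)
    write: usize, // in [0, 2 * capacity)
}

impl<T> RingBuffer<T> {
    fn new(capacity: usize) -> Self {
        assert!(capacity > 0);
        RingBuffer {
            buf: (0..capacity).map(|_| None).collect(),
            read: 0,
            write: 0,
        }
    }

    fn capacity(&self) -> usize {
        self.buf.len()
    }

    fn is_empty(&self) -> bool {
        // Same cell, same fold (lap).
        self.read == self.write
    }

    fn is_full(&self) -> bool {
        // Same cell, different fold.
        self.read == (self.write + self.capacity()) % (2 * self.capacity())
    }

    /// Advance an index, wrapping within [0, 2 * capacity).
    fn bump(&self, i: usize) -> usize {
        (i + 1) % (2 * self.capacity())
    }

    fn push(&mut self, value: T) -> Result<(), T> {
        if self.is_full() {
            return Err(value);
        }
        let cap = self.capacity();
        self.buf[self.write % cap] = Some(value);
        self.write = self.bump(self.write);
        Ok(())
    }

    fn pop(&mut self) -> Option<T> {
        if self.is_empty() {
            return None;
        }
        let cap = self.capacity();
        let value = self.buf[self.read % cap].take();
        self.read = self.bump(self.read);
        value
    }
}

fn main() {
    let mut rb = RingBuffer::new(3); // capacity need not be a power of two
    assert!(rb.push(1).is_ok());
    assert!(rb.push(2).is_ok());
    assert!(rb.push(3).is_ok());
    assert!(rb.is_full());
    assert!(rb.push(4).is_err()); // full: rejected without overwriting
    assert_eq!(rb.pop(), Some(1));
    assert!(rb.push(4).is_ok());
    assert_eq!(rb.pop(), Some(2));
    assert_eq!(rb.pop(), Some(3));
    assert_eq!(rb.pop(), Some(4));
    assert!(rb.is_empty());
}
```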
