@rbranson
rbranson / main.go
Last active April 29, 2020 03:29
P99 across all metrics ~= avg of P99 reported by every host
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

// p returns the x-quantile (0.0–1.0) of in using nearest-rank on a sorted copy.
func p(in []int, x float64) int {
	ints := make([]int, len(in))
	copy(ints, in)
	sort.Ints(ints)
	return ints[int(float64(len(ints)-1)*x)]
}
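The claim in the description can be poked at with a quick self-contained sketch (the synthetic latencies and the `p99`/`compare` helpers are illustrative assumptions, not the gist's own code):

```go
package main

import (
	"fmt"
	"sort"
)

// p99 is an illustrative nearest-rank 99th-percentile helper
// (an assumption, not the gist's function).
func p99(in []int) int {
	s := make([]int, len(in))
	copy(s, in)
	sort.Ints(s)
	return s[int(float64(len(s)-1)*0.99)]
}

// compare returns the average of per-host P99s and the true global P99
// for three hosts with slightly shifted synthetic latency distributions.
func compare() (int, int) {
	var hosts [][]int
	for h := 0; h < 3; h++ {
		lat := make([]int, 1000)
		for i := range lat {
			lat[i] = i%100 + h*10 // host h is 10*h ms slower across the board
		}
		hosts = append(hosts, lat)
	}
	sum := 0
	var all []int
	for _, lat := range hosts {
		sum += p99(lat)
		all = append(all, lat...)
	}
	return sum / len(hosts), p99(all)
}

func main() {
	avgOfP99, globalP99 := compare()
	fmt.Println(avgOfP99, globalP99) // → 108 116
}
```

With hosts that differ, the average of per-host P99s lands near the global P99 but understates it; the approximation only holds exactly when every host sees the same distribution.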
@rbranson
rbranson / gist:007f989da12ba912ae7ec5859d4d732a
Last active January 16, 2019 16:40
MySQL re-orders commits
Schema
==========================================
CREATE TABLE locks (
  name VARCHAR(191) PRIMARY KEY,
  clock BIGINT
);

CREATE TABLE ledger (
  seq BIGINT AUTO_INCREMENT PRIMARY KEY,
package stupid

import (
	"fmt"
	"reflect"
	"unsafe"
)

type iptr struct {
@rbranson
rbranson / interface_slice.go
Created April 26, 2018 19:12
I'm not particularly advocating this idea.
// Converts args to an interface slice using the following rules:
//
// - Returns empty slice if no args are passed
//
// - For a single argument which is of a slice type, the slice
// is converted and returned.
//
// - For a single argument which is not a slice type, the value is
// returned within a single-element slice.
//
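The rules above can be sketched as a minimal implementation (hypothetical — the gist's body isn't shown in this preview, and `toInterfaceSlice` is an assumed name):

```go
package main

import (
	"fmt"
	"reflect"
)

// toInterfaceSlice applies the documented rules: empty slice for no args,
// element-wise conversion for a single slice arg, a one-element slice for
// a single non-slice arg, and the args as-is otherwise.
func toInterfaceSlice(args ...interface{}) []interface{} {
	if len(args) == 0 {
		return []interface{}{}
	}
	if len(args) == 1 {
		v := reflect.ValueOf(args[0])
		if v.Kind() == reflect.Slice {
			out := make([]interface{}, v.Len())
			for i := 0; i < v.Len(); i++ {
				out[i] = v.Index(i).Interface()
			}
			return out
		}
		return []interface{}{args[0]}
	}
	return args
}

func main() {
	fmt.Println(toInterfaceSlice())            // []
	fmt.Println(toInterfaceSlice([]int{1, 2})) // [1 2]
	fmt.Println(toInterfaceSlice("x"))         // [x]
}
```

Reflection is needed here because Go will not implicitly convert a `[]int` to `[]interface{}`; each element has to be boxed individually.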
@rbranson
rbranson / intbloat-ca-cert-test.js
Created February 22, 2017 19:27
Demonstrates additional memory / CPU usage when forcing use of the OS-provided root certificates
// run this in bash with 'time' and it'll show CPU usage totals at the end
var totalCycles = 250;
var concurrencyLevel = 25;
var useOSCerts = true;
var requestOptions = {
  hostname: 'www.google.com',
  port: 443,
  path: '/',
client_id  operation  key  value
0          BEGIN
1          BEGIN
0          READ       100
0          COMMIT
0          BEGIN
1          READ       100
0          WRITE      100  0
0          READ       100
1          COMMIT
@rbranson
rbranson / leaving-hound.md
Created April 25, 2016 17:58
Stepping Down

Hey friends,

I wanted to let you know what's up with myself and Hound. I have left the company. There were fundamental conflicts among the founding team that were not reconcilable in any timeframe that would have made sense for an early-stage company.

Though it has certainly been a painful process, I still think the world of my co-founders, Charity and Christine. These are two immensely capable, hard-working, and genuinely caring people. I give my highest recommendation to anyone who would consider working with them in the future.

As for myself, I'll be spending the next few months with friends and family, as well as working on some side projects.

~ Rick

type Q interface {
	Hello() string
}

type A struct{}

// Hello needs the string return type in its signature,
// otherwise *A does not implement Q and the return statement won't compile.
func (x *A) Hello() string {
	return "Hello!"
}
@rbranson
rbranson / gist:038afa9ad7af3693efd0
Last active September 29, 2016 17:44
Disaggregated Proxy & Storage Nodes

The point of this is to use cheap machines with small/slow storage to coordinate client requests while dedicating the machines with the big and fast storage to doing what they do best. I found that request coordination was contributing to about half the CPU usage on our Cassandra nodes, on average. Solid state storage is quite expensive, nearly doubling the cost of typical hardware. It also means that if people have control over hardware placement within the network, they can place proxy nodes closer to the client without impacting their storage footprint or fault tolerance characteristics.

This is accomplished in Cassandra by passing the -Dcassandra.join_ring=false option when the process is started. These nodes will connect to the seeds, cache the gossip data, load the schema, and begin listening for client requests. Messages like "/x.x.x.x is now UP!" will appear on the other nodes.
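A sketch of how such a coordinator-only node might be launched (the env-var pass-through and invocation are assumptions; only the `-Dcassandra.join_ring=false` option comes from the text above):

```shell
# Assumed invocation: pass the JVM option through to the Cassandra process so
# the node gossips and serves client requests without claiming token ranges.
JVM_OPTS="$JVM_OPTS -Dcassandra.join_ring=false" cassandra -f
```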

There are also some more practical benefits to this. Handling client requests caused us to push the NewSize of the heap up

In [1]: import os

In [2]: os.path.join(["foo", "bar"])
Out[2]: ['foo', 'bar']

In [3]: os.path.join("foo", "bar")
Out[3]: 'foo/bar'