The post *Go is not C, so there is not an extreme fast way to merge slices* alleges a performance problem with Go's model of zeroing memory on allocation, in cases where it might not be needed. The methodology is:
- Run a benchmark that merges some slices, by `make`ing one of the appropriate size and copying the individual slices over
- Run a benchmark that zeros a slice of the appropriate size
- Subtract the two numbers and call it the "overhead of zeroing"
I have some trouble with that methodology. For one, it assumes that the zeroing