One of the biggest issues with using a self-hosted GitHub runner is that actions which download large amounts of data will bottleneck on the network. [actions/cache] does not support caching objects locally, and artifacts stored on GitHub's servers must be fetched over the network on every job, consuming a lot of bandwidth. We can, however, set up a caching proxy using [Squid] with SSL bumping to locally cache requests made by jobs.
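As a rough sketch, an SSL-bumping Squid configuration might look like the following. The paths and the CA certificate `squid-ca.pem` are placeholders that vary by distribution, and every client must be configured to trust that CA:

```
# Listen as an explicit proxy; mint per-host certificates signed by our CA
http_port 3128 ssl-bump \
  cert=/etc/squid/squid-ca.pem \
  generate-host-certificates=on \
  dynamic_cert_mem_cache_size=4MB

# Helper that generates the per-host certificates on demand
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

# Peek at the TLS ClientHello, then bump (decrypt) everything
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all

# On-disk cache for the decrypted responses
cache_dir ufs /var/spool/squid 20000 16 256
maximum_object_size 4 GB
```

With this in place, jobs pointed at the proxy (for example via the `http_proxy`/`https_proxy` environment variables) have their HTTPS responses decrypted and cached locally.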
A major challenge is that [actions/cache] uses Azure storage APIs, which make HTTP range requests. While Squid supports range requests, it is not good at caching them. There is an option, `range_offset_limit none`, which, according to the [documentation]:
> A size of 'none' causes Squid to always fetch the object from the beginning so it may cache the result. (2.0 style)
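In `squid.conf`, that looks like the snippet below. The ACL form shown in the comment is a hypothetical refinement that scopes the behavior to Azure blob storage hosts rather than all traffic:

```
# Always fetch ranged objects from the beginning so Squid can cache them
range_offset_limit none

# Hypothetical: limit the rule to Azure blob storage only
# acl azure_blobs dstdomain .blob.core.windows.net
# range_offset_limit none azure_blobs
```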
However, after extensive debugging, I discovered that this feature does not work if the client closes the connection. When `range_offset_limit` is set, Squid will make a full request to the server,