Git does not deal well with lots of huge binary files, for a few reasons: adding and committing them kills memory, transferring them is pretty inefficient, and each clone basically forces you to download all of them.
I thought sparse and narrow checkouts would be the answer, but it looks like it may be too difficult to rewrite other tools to cope with blobs that are missing until fetched, and the server would still spend resources transferring them.
Perhaps a better way would be to hack the Git client slightly to store binary files over a certain size via a protocol better suited to them, like SFTP or HTTP to S3, and then fetch them only when necessary. Instead of the whole blob, just keep a small blob that contains a pointer to the URL and SHA of the original content - or possibly just the SHA, since we could probably depend on the URL being guessable.
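Git's existing clean/smudge filter hooks could approximate this without even patching the client. Here is a minimal sketch in Python, assuming an invented pointer format (`external-media-v1 sha256:<hash>`), a made-up media store at `media.example.com` with SHA-addressable URLs, and a 1 MiB cutoff; a real tool would also handle the upload side:

```python
#!/usr/bin/env python3
"""Sketch of a Git clean/smudge filter pair that swaps large blobs for
small pointer blobs. The pointer format, store URL, and size threshold
are all assumptions for illustration, not part of any real Git feature."""
import hashlib
import sys
import urllib.request

THRESHOLD = 1024 * 1024                      # hypothetical size cutoff
STORE_URL = "https://media.example.com/"     # assumed SHA-guessable URL scheme
POINTER_PREFIX = b"external-media-v1 sha256:"

def clean(data: bytes) -> bytes:
    """Runs on `git add`: big content becomes a tiny pointer blob."""
    if len(data) <= THRESHOLD:
        return data                           # small files pass through
    digest = hashlib.sha256(data).hexdigest()
    # A real tool would upload `data` to the store here (SFTP, S3, etc.);
    # this sketch only emits the pointer.
    return POINTER_PREFIX + digest.encode() + b"\n"

def smudge(data: bytes) -> bytes:
    """Runs on checkout: fetches the original content the pointer names."""
    if not data.startswith(POINTER_PREFIX):
        return data                           # not a pointer, leave as-is
    digest = data[len(POINTER_PREFIX):].strip().decode()
    with urllib.request.urlopen(STORE_URL + digest) as resp:
        return resp.read()

if __name__ == "__main__":
    payload = sys.stdin.buffer.read()
    mode = sys.argv[1] if len(sys.argv) > 1 else "clean"
    out = clean(payload) if mode == "clean" else smudge(payload)
    sys.stdout.buffer.write(out)
```

Wiring it up would just be a `*.bin filter=media` line in `.gitattributes` plus `filter.media.clean` and `filter.media.smudge` entries in the repo config pointing at the two modes of the script.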
This is actually relatively similar to how git submodules do it - not keeping the actual content in the repository, just a pointer (a SHA) that tells Git where to find it.