Recently I found that the top result for 'google takeout multiple tgz' was some clowny gist that used two bash scripts to extract all the tgz files and then merge them together. Don't do that. Use brace expansion, cat the TGZs, and extract:
$ cat takeout-20201023T123551Z-{001..011}.tgz | tar xzivf -
You don't even need brace expansion. Because the part numbers are zero-padded, globbing orders the files numerically:
$ cat takeout-20201023T123551Z-*.tgz | tar xzivf -
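If you want to sanity-check the ordering before kicking off a long extraction, echo the glob first; the shell expands it identically either way:
$ echo takeout-20201023T123551Z-*.tgz
takeout-20201023T123551Z-001.tgz takeout-20201023T123551Z-002.tgz ...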
tar has been around forever; it wasn't designed to need custom scripts to deal with multipart archives. Since it's extracting the combined archive, there's no 'mess of partial directories' to be merged. It just works, as intended.
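For the curious, here's the same command with GNU tar's long options spelled out. The i flag (--ignore-zeros) is the one doing the real work: each part ends with tar's end-of-archive zero blocks, and this tells tar to read straight through them into the next part:
$ cat takeout-20201023T123551Z-*.tgz | tar --extract --gzip --ignore-zeros --verbose --file -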
An additional tip, courtesy of Dmitriy Otstavnov (@bvc3at): if you have pv available, you can track the progress of the extraction:
$ pv takeout-* | tar xzif -
190GiB 2:37:54 [18.9MiB/s] [==============> ] 30% ETA 5:03:49
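pv isn't part of the standard toolset; on Debian/Ubuntu (including the WSL setup mentioned below) it's one package away:
$ sudo apt install pv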
This worked very well for a 6TB Google Photos Takeout data set consisting of twenty-two 50GB TGZ files. I ran it via Ubuntu on Windows (WSL), and the file total matched the export.
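One way to check that total, assuming the export unpacked into the usual Takeout/ top-level directory (adjust the path if yours differs), is to count the extracted files and compare against the count Google reports for the export:
$ find Takeout/ -type f | wc -l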
Proof positive that tar does indeed work!