@olomor
Last active April 9, 2020 22:27
How about compressing to tar.gz in a tenth of the time, using more than one CPU core? Use pigz!!! :)
function tarpigz() {
    # Autotune the pigz block size (in KiB) from the CPU cache size reported by the kernel.
    BLOCKSIZE=$( grep -m1 "^cache size" /proc/cpuinfo | cut -d" " -f3 )
    # Use half of the available cores, leaving the rest free for other work.
    NUMCPU=$(( $(nproc) / 2 ))
    PIGZ="pigz --best --blocksize ${BLOCKSIZE} --independent --processes ${NUMCPU} --synchronous"
    # Same semantics as "tar czvf <archive> <source>", but compression is done by pigz.
    time tar -cvf - "${2}" | ${PIGZ} > "${1}"
}

olomor commented Apr 9, 2020

How about compressing to tar.gz in a tenth of the time, using more than one CPU core?

To use:
. tarpigz.sh
tarpigz "[Dest:TargzFile]" "[object:dir/file]"

:: In the same form as you would use "tar czvf" :)

My Notes
So, to do this I used this bash function to create a tar.gz file with an autotuned pigz command, based on my host's CPU capacity.
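If you want to see what the autotuning will pick on your host, you can run the same probes by hand (a minimal sketch; the /proc/cpuinfo field layout may vary by kernel and CPU):

grep -m1 "^cache size" /proc/cpuinfo | cut -d" " -f3   # pigz block size, in KiB
echo $(( $(nproc) / 2 ))                               # pigz worker processes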

Yeah!!! The solution is pigz, a multithreaded gzip command; in my tests it was 8x faster. It's not magic: it just uses multiple cores to compress the same file, which plain gzip does not. :)
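To reproduce the comparison, time a plain gzip run against the pigz pipeline (a hypothetical example; big_dir/ and the output names are placeholders, and the actual speedup depends on your core count and data):

time tar -czvf test-gzip.tar.gz big_dir/        # single-threaded gzip
time tarpigz "test-pigz.tar.gz" "big_dir/"      # multithreaded pigz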

THE BEST part is that you can compress with pigz and then decompress NORMALLY, with no tricks, on another host that only has plain gzip, using that gzip... :) cooool!
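For example, on a host with only stock tar and gzip, the archive extracts as usual, since pigz writes standard gzip (RFC 1952) output (backup.tar.gz is a placeholder name):

tar -xzvf backup.tar.gz                # tar invokes plain gzip; pigz not needed
gunzip -c backup.tar.gz | tar -xvf -   # or decompress explicitly with gunzip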

References:
https://zlib.net/pigz/
https://linux.die.net/man/1/pigz
