@wardbekker
Created May 10, 2011 09:16
Naive parallel import of a compressed MySQL dump file
# Split the MySQL dump into one file per table (a new output file starts at each DROP TABLE statement)
zcat dump.sql.gz | awk '/DROP TABLE IF EXISTS/{n++}{print >"out" n ".sql" }'
# Parallel import using GNU Parallel: http://www.gnu.org/software/parallel/
# (parallel appends each filename after "<", so each job runs `mysql ... < outN.sql`)
ls -rS *.sql | parallel --joblog joblog.txt mysql -uXXX -pYYY db_name "<"
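A minimal, self-contained sketch of the split step on a tiny synthetic two-table dump (the table names `t1`/`t2` and the `/tmp/split_demo` directory are made up for illustration; `gzip -cd` is used in place of `zcat` for portability):

```shell
# Build a tiny synthetic dump with two tables, then split it.
mkdir -p /tmp/split_demo && cd /tmp/split_demo
cat > dump.sql <<'EOF'
DROP TABLE IF EXISTS `t1`;
CREATE TABLE `t1` (id INT);
INSERT INTO `t1` VALUES (1);
DROP TABLE IF EXISTS `t2`;
CREATE TABLE `t2` (id INT);
INSERT INTO `t2` VALUES (2);
EOF
gzip -c dump.sql > dump.sql.gz
# Same awk program as above: bump the counter (and thus the output
# filename) each time a DROP TABLE line starts a new table's section.
gzip -cd dump.sql.gz | awk '/DROP TABLE IF EXISTS/{n++}{print > "out" n ".sql"}'
ls out*.sql   # out1.sql holds t1's statements, out2.sql holds t2's
```

Each `outN.sql` is then a standalone script for one table, which is what makes the per-file parallel import possible.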
@ole-tange

If you installed GNU Parallel, you also got GNU SQL in the same package, so maybe this is more readable:

ls -rS *.dump | parallel --joblog joblog.txt sql mysql://user:pass@/db_name "<"

@wardbekker
Author

good suggestion, tnx!

@SchizoDuckie

Works brilliantly with 10 parallel jobs:

ls -rS data.*.sql | parallel -j10 --joblog joblog.txt mysql -uuser -ppass dbname "<"
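Since these runs pass `--joblog`, failed imports can be found afterwards. A hedged sketch (the joblog below is synthetic; a real one is written by `parallel --joblog joblog.txt` itself): in the tab-separated log, column 7 is the job's exit value.

```shell
# Build a synthetic --joblog file (a real one is produced by parallel).
# Columns: Seq Host Starttime JobRuntime Send Receive Exitval Signal Command
printf 'Seq\tHost\tStarttime\tJobRuntime\tSend\tReceive\tExitval\tSignal\tCommand\n'  > /tmp/joblog.txt
printf '1\t:\t1.0\t2.0\t0\t0\t0\t0\tmysql dbname < out1.sql\n'                       >> /tmp/joblog.txt
printf '2\t:\t1.0\t2.0\t0\t0\t1\t0\tmysql dbname < out2.sql\n'                       >> /tmp/joblog.txt
# Column 7 is the exit value; list the commands that failed:
awk -F'\t' 'NR > 1 && $7 != 0 {print "FAILED:", $9}' /tmp/joblog.txt
```

Failed jobs can then be retried by re-running the same `parallel` command with `--resume-failed` added (it reuses the same joblog to skip jobs that already succeeded).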

@wardbekker
Author

OS X-compatible version (using gunzip -c instead of zcat):

gunzip -c wiebetaaltwat_stable.sql.gz | awk '/DROP TABLE IF EXISTS/{n++}{filename = "out" n ".sql"; print > filename}'

@vojkny

vojkny commented Oct 17, 2018

MySQL imports are sequential, so this will not have any effect.
