
@jasny
Last active February 15, 2024 16:13
Split MySQL dump SQL file into one file per table or extract a single table
#!/bin/bash
####
# Split MySQL dump SQL file into one file per table
# based on http://blog.tty.nl/2011/12/28/splitting-a-database-dump
####
if [ $# -lt 1 ] ; then
    echo "USAGE $0 DUMP_FILE [TABLE]"
    exit 1
fi

if [ $# -ge 2 ] ; then
    csplit -s -ftable $1 "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%1"
else
    csplit -s -ftable $1 "/-- Table structure for table/" {*}
fi
[ $? -eq 0 ] || exit

mv table00 head
FILE=`ls -1 table* | tail -n 1`

if [ $# -ge 2 ] ; then
    mv $FILE foot
else
    csplit -b '%d' -s -f$FILE $FILE "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
    mv ${FILE}1 foot
fi

for FILE in `ls -1 table*`; do
    NAME=`head -n1 $FILE | cut -d$'\x60' -f2`
    cat head $FILE foot > "$NAME.sql"
done
rm head foot table*
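
The script can be exercised end-to-end on a tiny synthetic dump. The file name dump.sql and the table names users/orders below are made up for the demo, and GNU csplit is assumed (the {*} repetition is a GNU extension):

```shell
#!/bin/bash
# Build a miniature mysqldump-style file with two tables and a footer.
cat > dump.sql <<'EOF'
-- MySQL dump header
SET NAMES utf8;
-- Table structure for table `users`
CREATE TABLE `users` (id INT);
-- Table structure for table `orders`
CREATE TABLE `orders` (id INT);
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
EOF

# Same core splitting logic as the script above (single-database case).
csplit -s -ftable dump.sql "/-- Table structure for table/" {*}
mv table00 head                                  # everything before the first table
for FILE in table*; do
    NAME=`head -n1 $FILE | cut -d$'\x60' -f2`    # table name between backticks
    cat head "$FILE" > "$NAME.sql"
done
rm head table*
ls *.sql    # dump.sql orders.sql users.sql
```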
@rubo77

rubo77 commented Mar 30, 2012

thanks a lot.

works just great.

although it throws some errors:

mv: cannot stat 'table991': No such file or directory
cat: foot: No such file or directory
cat: foot: No such file or directory
cat: foot: No such file or directory


@jasny

jasny commented Mar 31, 2012

@rubo77 Check csplit -b '%d' -s -f$FILE $FILE "/SEARCH_STRING/-1" {*}. SEARCH_STRING is the first of the SQL commands that restore the global variables at the bottom of the dump file.

You can also just skip that part, as it isn't that important.

#!/bin/bash

####
# Split MySQL dump SQL file into one file per table
# based on http://blog.tty.nl/2011/12/28/splitting-a-database-dump
####

if [ $# -ne 1 ] ; then
  echo "USAGE $0 DUMP_FILE"
  exit 1
fi

csplit -s -ftable $1 "/-- Table structure for table/" {*}
mv table00 head

for FILE in `ls -1 table*`; do
      NAME=`head -n1 $FILE | cut -d$'\x60' -f2`
      cat head $FILE > "$NAME.sql"
done

rm head table*

@augustofagioli

Interesting.
Trying to solve the large dump sql file issue, I was going for a different approach:

for TABLE in $(mysql -N -e 'SHOW TABLES' mydb); do
    mysqldump mydb "$TABLE" > "mydb_$TABLE.sql"
done

From your experience, do you think this may do the job?

Thanks for sharing!


ghost commented Feb 7, 2014

I've created MySQLDumpSplitter.java which, unlike bash scripts, works on Windows. It's
available here https://github.com/Verace/MySQLDumpSplitter.

@missha

missha commented Mar 15, 2014

I get this error:

csplit: *}: bad repetition count

@smmortazavi

It seems to have a bug: when extracting a single table (by providing the second argument), the result file is missing the "40103 SET TIME_ZONE=@OLD_TIME_ZONE" footer.


@jotson

jotson commented Oct 1, 2014

Thanks! This worked perfectly.

@castiel

castiel commented Jan 28, 2015

csplit: *}: bad repetition count

csplit -s -ftable $1 "$START" {*}

->

csplit -s -ftable $1 "$START" {9999999}

@tmirks

tmirks commented Apr 8, 2015

The foot part doesn't work if you have over 100 tables. The ls has to be sorted numerically with -v so you get the correct last file (otherwise it sees table99 or table999 as the last file):

FILE=`ls -1v table* | tail -n 1`

@maxigit

maxigit commented May 1, 2015

This script doesn't work if there are more than 100 tables: tail -n 1 doesn't get the last file, as ls doesn't sort the files numerically (table101 sorts before table29). To fix it, add -n4 to the csplit calls so the numbers are formatted to 4 digits, and change line #21 to mv table0000 head.
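
The mis-sort that this comment and the one above describe can be reproduced without a real dump. The table* file names below are simulated with touch, and GNU ls is assumed for the -v version sort:

```shell
#!/bin/bash
# Simulate csplit's default 2-digit names overflowing past table99.
touch table{00..99} table100 table101
# Lexicographic sort puts table99 last, so `tail -n 1` grabs the wrong file:
echo "plain:   `ls -1 table* | tail -n 1`"     # table99
# Version sort (-v), or zero-padding the numbers via csplit -n4, fixes it:
echo "version: `ls -1v table* | tail -n 1`"    # table101
```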

@vekexasia

Hello There, I took a different approach and have written a node module (installable as a cli command) that splits a 16G dump file containing more than 150 tables in less than 2 minutes (on my machine).

If this is of your interest please take a look at mysqldumpsplit

@jonaslm

jonaslm commented Aug 30, 2018

Take note that the script only works if the dump file does NOT contain more than one database.

All generated .sql-files will USE the first database in the dump file. Furthermore, if several databases contain tables with the same name, the files will overwrite each other.
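
A quick way to check whether a dump trips over this caveat is to count its database markers; mysqldump writes a "-- Current Database:" comment for each database in multi-database dumps. The dump content below is synthetic:

```shell
#!/bin/bash
# Count the databases in a dump before splitting it per table.
cat > dump.sql <<'EOF'
-- Current Database: `db1`
CREATE DATABASE `db1`;
-- Current Database: `db2`
CREATE DATABASE `db2`;
EOF
grep -c '^-- Current Database:' dump.sql    # 2 -> more than one database, take care
```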

@philiplb

philiplb commented Jan 6, 2019

Hi there,
the SQLDumpSplitter3 should do the trick here, too. :)
https://philiplb.de/sqldumpsplitter3/

@kwhat

kwhat commented Jun 7, 2020

SQLDumpSplitter3 caused issues for me with the dump and it also cannot split by table. The ls -1v table* | tail -n 1 fix from @tmirks was an awesome patch. 👍

@dotancohen

Hello There, I took a different approach and have written a node module (installable as a cli command) that splits a 16G dump file containing more than 150 tables in less than 2 minutes (on my machine).

If this is of your interest please take a look at mysqldumpsplit

I'll vouch for this. I just used it on a 2 GiB, 470 table dumpfile. 10.3 seconds, no errors, works fine.

Thank you vekexasia!

@adam-jones-net

Any tool that could export all tables from a specific database inside of a multi database dump would be absolutely amazing. If anyone knows of such a tool please share ;)

@exabrial

This script doesn't work if there are more than 100 tables: tail -n 1 doesn't get the last file, as ls doesn't sort the files numerically (table101 sorts before table29). To fix it, add -n4 to the csplit calls so the numbers are formatted to 4 digits, and change line #21 to mv table0000 head.

which csplit command? there are several :(

@m-faraz

m-faraz commented Feb 2, 2024

there is a simple fix for those reporting issues with more than 100 files:

  1. add -n5 to both csplit commands to allow up to 5 zero-padded digits in the name.
  2. change mv table00 head to mv table00000 head.

Now this will handle up to 100000 tables, numbered from 00000 to 99999.
Upgrade as required!

Bonus: add rm $FILE after cat head $FILE foot to remove each piece once it has been merged, saving some space when there are many files. My scenario had over 2500 tables with the base SQL file at 130GB!

Bonus 2 hours later: for the really lazy ones, here's a link to my version of this script including output and timing:
https://gist.github.com/m-faraz/4e32cbae283e3a9454118cd1d281f8cf
