Split MySQL dump SQL file into one file per table or extract a single table
#!/bin/bash
####
# Split MySQL dump SQL file into one file per table
# based on http://blog.tty.nl/2011/12/28/splitting-a-database-dump
####

if [ $# -lt 1 ] ; then
  echo "USAGE: $0 DUMP_FILE [TABLE]"
  exit 1
fi

if [ $# -ge 2 ] ; then
  # Extract a single table: skip ahead to the requested table, write it out,
  # then skip ahead to the restore section at the bottom of the dump.
  csplit -s -ftable "$1" "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%1"
else
  # Split at every table definition.
  csplit -s -ftable "$1" "/-- Table structure for table/" {*}
fi

[ $? -eq 0 ] || exit

mv table00 head

FILE=`ls -1 table* | tail -n 1`
if [ $# -ge 2 ] ; then
  mv $FILE foot
else
  # Split the restore commands off the last chunk to use as a common footer.
  csplit -b '%d' -s -f$FILE $FILE "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
  mv ${FILE}1 foot
fi

for FILE in `ls -1 table*`; do
  # The table name is the first backtick-quoted word on the chunk's first line.
  NAME=`head -n1 $FILE | cut -d$'\x60' -f2`
  cat head $FILE foot > "$NAME.sql"
done

rm head foot table*
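
Usage is simple (a sketch; the file name mysql_splitdump.sh and the table name wp_posts are examples, not part of the gist):

# Split every table in dump.sql into its own <table>.sql file
bash mysql_splitdump.sh dump.sql

# Extract only the `wp_posts` table
bash mysql_splitdump.sh dump.sql wp_posts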
@rubo77 commented Mar 30, 2012

Thanks a lot, works just great!

Although it throws some errors:

mv: cannot stat 'table991': No such file or directory
cat: foot: No such file or directory
cat: foot: No such file or directory
cat: foot: No such file or directory


@jasny (owner) commented Mar 31, 2012

@rubo77 Check csplit -b '%d' -s -f$FILE $FILE "/SEARCH_STRING/-1" {*}. SEARCH_STRING is the first of the SQL commands that restore the global variables at the bottom of the dump file.
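
For the dumps this script targets, that first restore command is the SET TIME_ZONE line already used as a marker in the original script, so concretely it might look like (an untested sketch):

csplit -b '%d' -s -f$FILE $FILE "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/-1" {*}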

You can also just skip that part, as it isn't that important.

#!/bin/bash

####
# Split MySQL dump SQL file into one file per table
# based on http://blog.tty.nl/2011/12/28/splitting-a-database-dump
####

if [ $# -ne 1 ] ; then
  echo "USAGE: $0 DUMP_FILE"
  exit 1
fi

csplit -s -ftable $1 "/-- Table structure for table/" {*}
mv table00 head

for FILE in `ls -1 table*`; do
  NAME=`head -n1 $FILE | cut -d$'\x60' -f2`
  cat head $FILE > "$NAME.sql"
done

rm head table*
@augustofagioli commented Feb 27, 2013

Interesting.
Trying to solve the large SQL dump file issue, I was going for a different approach:

$tables = SHOW TABLES FROM mydb;
foreach ($tables as $table) { mysqldump mydb $table > mydb_$table.sql }
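
In shell terms, that idea might look like this (a sketch; assumes the database is named mydb and credentials come from ~/.my.cnf):

for TABLE in $(mysql -N -e 'SHOW TABLES' mydb); do
  # Dump each table into its own file, one mysqldump run per table.
  mysqldump mydb "$TABLE" > "mydb_$TABLE.sql"
done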

From your experience, do you think this may do the job?

Thanks for sharing!

@Verace commented Feb 7, 2014

I've created MySQLDumpSplitter.java which, unlike bash scripts, works on Windows. It's available at https://github.com/Verace/MySQLDumpSplitter.

@missha commented Mar 15, 2014

I get this error:

csplit: *}: bad repetition count

@smmortazavi commented Jul 15, 2014

There seems to be a bug: when extracting a single table (by providing the second argument), the resulting file misses the "40103 SET TIME_ZONE=@OLD_TIME_ZONE" line.
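
A likely fix (an untested sketch): drop the %1 offset from the last csplit pattern, so the final chunk used as foot begins at the matching line instead of one line past it:

csplit -s -ftable $1 "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%"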

@jotson commented Oct 1, 2014

Thanks! This worked perfectly.

@castiel commented Jan 28, 2015

csplit: *}: bad repetition count

The {*} repeat is a GNU extension; on a csplit that lacks it, replace {*} with a large fixed count:

csplit -s -ftable $1 "$START" {*}

->

csplit -s -ftable $1 "$START" {9999999}

@tmirks commented Apr 8, 2015

The foot part doesn't work if you have over 100 tables. The ls output has to be sorted numerically with -v so you get the correct last file (otherwise it treats table99 or table999 as the last file):

FILE=`ls -1v table* | tail -n 1`
@maxigit commented May 1, 2015

This script doesn't work if there are more than 100 tables: tail -n 1 doesn't get the last file, because ls doesn't sort the chunks numerically (table101 sorts before table29). To fix it, add -n4 to csplit so the chunk numbers are padded to four digits, and change mv table00 head to mv table0000 head, as sketched below.
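
Against the original script, that change would look like (a sketch):

csplit -s -n4 -ftable "$1" "/-- Table structure for table/" {*}
mv table0000 head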

@vekexasia commented Mar 9, 2016

Hello there, I took a different approach and wrote a node module (installable as a CLI command) that splits a 16G dump file containing more than 150 tables in less than 2 minutes (on my machine).

If this is of interest, please take a look at mysqldumpsplit.

@jonaslm commented Aug 30, 2018

Take note that the script only works if the dump file does NOT contain more than one database.

All generated .sql files will USE the first database in the dump file. Furthermore, if several databases contain tables with the same name, the files will overwrite each other.
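
A possible workaround (an untested sketch): split the dump per database first on mysqldump's "-- Current Database:" marker, then run this script on each piece:

csplit -s -fdatabase dump.sql "/-- Current Database: /" {*}

database00 is the shared header; each following databaseNN chunk holds a single database's tables.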

@philiplb commented Jan 6, 2019

Hi there,
SQLDumpSplitter3 should do the trick here, too. :)
https://philiplb.de/sqldumpsplitter3/

@kwhat commented Jun 7, 2020

SQLDumpSplitter3 caused issues for me with the dump and it also cannot split by table. The ls -1v table* | tail -n 1 fix from @tmirks was an awesome patch. 👍

@dotancohen commented Oct 23, 2020

> Hello there, I took a different approach and wrote a node module (installable as a CLI command) that splits a 16G dump file containing more than 150 tables in less than 2 minutes (on my machine).
>
> If this is of interest, please take a look at mysqldumpsplit.

I'll vouch for this. I just used it on a 2 GiB, 470 table dump file. 10.3 seconds, no errors, works fine.

Thank you vekexasia!
