@jjwatt
Created February 7, 2020 17:47
Quick and dirty way to take a huge directory of ROMs/disk rips and split them into alphabetical directories. Not the most beautiful, but it gets the job done: you can still browse the files, and directory loading/browsing on SD cards is *much* faster.
#!/usr/bin/env bash
romdirsplit() {
    for letter in {A..Z}; do
        mkdir -p "$letter"
        find ./ -iname "${letter}*" -type f -print0 \
            | xargs -0 -L1 mv -t "${letter}"
    done
}
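For illustration, here's how I'd exercise it in a scratch directory (the mktemp path and ROM names are made up; I dropped -L1 so xargs batches each letter's files into one mv call, and added xargs -r so letters with no matches don't produce a mv usage error):

```shell
#!/usr/bin/env bash
# Same splitter as above, minus -L1 (one batched mv per letter)
# and with -r so empty letters are skipped.
romdirsplit() {
    for letter in {A..Z}; do
        mkdir -p "$letter"
        find ./ -iname "${letter}*" -type f -print0 \
            | xargs -0 -r mv -t "${letter}"
    done
}

# Scratch directory with a few fake ROM names.
workdir="$(mktemp -d)"
cd "$workdir" || exit 1
touch "Alien Syndrome.zip" "Bubble Bobble.zip" "zygon warrior.zip"
romdirsplit
ls A B Z   # each file now sits under its (case-insensitive) first letter
```

Note that -iname keeps the match case-insensitive, so lowercase filenames land in the uppercase letter directories.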
jjwatt commented Feb 7, 2020

It may be a little faster with tens of thousands of files if you add -P$numprocs to xargs to spawn one mv job per processor. The jobs are I/O-bound and have to wait on each other when writing to the same directory, but on my multicore Linux machine writing to FAT32 cards it's much faster, because all of the mv processes start at once instead of one per letter, and the kernel handles scheduling the I/O anyway.
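A sketch of that variant, with -P"$(nproc)" as my stand-in for $numprocs (the -maxdepth 1 and -r flags are also my additions, not part of the original one-liner):

```shell
#!/usr/bin/env bash
romdirsplit_parallel() {
    for letter in {A..Z}; do
        mkdir -p "$letter"
        # -L1 hands one file to each mv; -P runs one mv per processor.
        # -maxdepth 1 keeps find out of the letter dirs it just filled,
        # and -r skips the mv call entirely when a letter has no files.
        find ./ -maxdepth 1 -iname "${letter}*" -type f -print0 \
            | xargs -0 -r -L1 -P"$(nproc)" mv -t "${letter}"
    done
}
```

The mv jobs still serialize on the card's I/O, so expect diminishing returns past a few processes.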

jjwatt commented Feb 7, 2020

Oh yeah, you might want to add -maxdepth 1 to the find command, for two main reasons:

  1. If there are other directories already there that you don't want find to descend into and sort, and
  2. If you don't, you'll probably see a bunch of warnings/errors like this:
mv: './Z/Zygon Warrior (1989)(ECP)[cr Triad].zip' and 'Z/Zygon Warrior (1989)(ECP)[cr Triad].zip' are the same file

That happens because -L1 makes xargs start moving files while find is still scanning, so find can descend into the just-created letter directory and list a file that has already been moved there; mv then refuses because source and destination are the same file. It hasn't broken anything for me and the files are fine, but -maxdepth 1 avoids it entirely.
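For reference, a sketch with the depth limit in place (since find starts at ./, the flag is -maxdepth 1 rather than 0, which would test only the ./ start point itself and match nothing; the -r flag is also an addition):

```shell
#!/usr/bin/env bash
romdirsplit() {
    for letter in {A..Z}; do
        mkdir -p "$letter"
        # -maxdepth 1 stops find from descending into pre-existing
        # directories or the letter directories it has already filled.
        find ./ -maxdepth 1 -iname "${letter}*" -type f -print0 \
            | xargs -0 -r -L1 mv -t "${letter}"
    done
}
```

With this, a pre-existing subdirectory and its contents are left alone; only top-level files get sorted.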
