@earthbound19
Created January 26, 2022 18:13
Retrieves all the HTML for all active-state archives of the Humanae project (at Tumblr) which the Wayback Machine archived until 2019-07-26, parses the large portrait JPG URLs out of all that, and retrieves all the large JPGs into a folder.
# DESCRIPTION
# Retrieves all the HTML for all active-state archives of the Humanae project (at Tumblr) which the Wayback Machine archived until 2019-07-26, parses the large portrait JPG URLs out of all that, and retrieves all the large JPGs into a folder. The result is 3,326 portrait images. At this writing, whether any of those are duplicates has not been determined. The JPG URLs parsed out of the HTML source files include some identical file names at different URLs, and partial analysis suggests those are in fact the same files hosted at different web locations in Tumblr. Also, at this writing all the image URLs are still live, although the links to them at the Tumblr blog are down. Pulling images out of the Wayback Machine, if that ever becomes necessary, might be more difficult.
# DEPENDENCIES
# Ruby, the wayback_machine_downloader gem, and (if you're on Windows) MSYS2. On other platforms, a Bash environment with the GNU/Linux core utilities this script uses. Ruby may need other supporting packages on platforms other than Windows.
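# For example (assuming Ruby and its gem command are already installed and on the PATH), the gem
# dependency can typically be installed with:
#   gem install wayback_machine_downloader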
# USAGE
# Install the necessary dependencies and run this script from a Bash/MSYS2 environment.
# To bypass the prompts to wipe / recreate target directories, pass any parameter to the script:
# get_Humanae_large_JPGs_from_Wayback_Machine.sh FOO
# To run normally, run without any parameter:
# get_Humanae_large_JPGs_from_Wayback_Machine.sh
# Intermediary HTML is placed in a new ./wayback_machine_html folder. Final JPG collection is placed in a _recovered_Humanae_Tumblr_large_jpgs folder.
# CODE
if [ -d wayback_machine_html ] && [ ! "$1" ]
then
read -p "Wayback_machine_html directory already exists. Wipe it and recreate it (Y/N)?" USERINPUT
if [ "$USERINPUT" == "Y" ] || [ "$USERINPUT" == "y" ]
then
rm -rf wayback_machine_html
fi
fi
# Create the expected subdirectory if it doesn't exist.
if [ ! -d wayback_machine_html ]
then
mkdir wayback_machine_html
fi
# Retrieve all archives from the start date to the date of the last snapshot before the Tumblr blog's pages were removed:
wayback_machine_downloader humanae.tumblr.com -t 20190726 -f 20120607 -d wayback_machine_html
cd wayback_machine_html
allFileNamesArray=( $(find . -type f -iname "*.*") )
echo "Beginning parsing of files from wayback machine archives for jpg image URLs . ."
# example commands that work to filter jpg URLs out of a file:
# tr ' ' '\n' < index.html > tmp.txt
# grep -o -h "http[^\"}{]*.jpg" tmp.txt
# adapting those example commands:
printf "" > ../tmp_url_parsing_25eEK75FJ.txt
print "" > ../all_jpgs.txt
for fileName in "${allFileNamesArray[@]}"
do
echo "parsing file $fileName . . ."
tr ' ' '\n' < "$fileName" > ../tmp_url_parsing_25eEK75FJ.txt
# this regex gets all jpgs:
# grep -o -h "http[^\"}{]*\.jpg" ../tmp_url_parsing_25eEK75FJ.txt >> ../all_jpgs.txt
# -- but we only want the large jpgs, which all end with *_1280.jpg; this gets those:
grep -o -h "http[^\"}{]*_1280\.jpg" ../tmp_url_parsing_25eEK75FJ.txt >> ../all_large_jpgs.txt
done
rm ../tmp_url_parsing_25eEK75FJ.txt
cd ..
echo "DONE extracting .jpg URLs. They are all in all_large_jpgs.txt. Deduplicating that . . ."
lines=($(<all_large_jpgs.txt))
OIFS="$IFS"
IFS=$'\n'
lines=($(sort <<<"${lines[*]}"))
lines=($(uniq <<<"${lines[*]}"))
OIFS="$IFS"
printf '%s\n' "${lines[@]}" > all_large_jpgs.txt
echo "DONE deduplicating all_large_jpgs.txt."
if [ -d _collected_jpgs ] && [ ! "$1" ]
then
read -p "_collected_jpgs directory already exists. Wipe it and recreate it (Y/N)?" USERINPUT
if [ "$USERINPUT" == "Y" ] || [ "$USERINPUT" == "y" ]
then
rm -rf _collected_jpgs
fi
fi
if [ ! -d _collected_jpgs ]
then
mkdir _collected_jpgs
fi
echo "Will now retrieve all images from that list, and skip images with duplicate file names . . ."
allJPGurls=( $(<all_large_jpgs.txt) )
for jpgURL in "${allJPGurls[@]}"
do
filenameNoPath=${jpgURL##*/}
if [ ! -f ./_collected_jpgs/"$filenameNoPath" ]
then
echo "retrieving $jpgURL . . ."
wget "$jpgURL"
mv "$filenameNoPath" ./_collected_jpgs/"$filenameNoPath"
else
echo "Will not re-retrieve nor clobber target file ./_collected_jpgs/$filenameNoPath, which already exists. Skip."
fi
done
echo "DONE. Collected jpgs are in the ./_collected_jpgs directory. RESPECT THE COPYRIGHT OWNER and only do things like post shrunk (fair use) copies of them anywhere, or only use the images for analysis etc."
@earthbound19

Here's a link to a hosted flat list of the resulting image URLs. Each image appears about three times in the list (stripping the URLs down to just their file names leaves roughly 3,000 unique file names, and in test downloads the same-named files at different URLs do appear to be duplicates).

https://earthbound.io/data/dist/Humanae_Tumblr_recovered_jpeg_URLs_to_2019_07_26.txt
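
One way to check how many byte-identical images are actually in a downloaded copy of that set (a sketch, assuming the files are in a flat folder and md5sum is available) is to count unique checksums:

md5sum ./_collected_jpgs/*.jpg | awk '{print $1}' | sort -u | wc -l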

@earthbound19

Notes from extracting the background colors from all of these images versus many other images I've found from the project elsewhere on the web: statistically, this Tumblr snapshot has many more light skin colors, and the images I get from elsewhere have many more dark skin colors.
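
For anyone wanting to reproduce that kind of comparison, one simple approach (a sketch, assuming ImageMagick's convert is installed and that a corner pixel is representative of the background) is to sample the top-left pixel of each image:

for f in ./_collected_jpgs/*.jpg; do
  printf '%s ' "$f"
  convert "$f" -format '%[pixel:p{0,0}]' info:-
  echo
done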
