@santrancisco
Last active June 16, 2020 07:04
Simple Hugo deploy script for an S3 bucket
#!/bin/bash
## This used to be how I deployed:
##   aws --profile=san-study s3 cp ./public/ s3://jeremyandjames/ --recursive
## Now I use this simple script to incrementally update a Hugo site in an S3 bucket.
## It is a whole lot faster and uses less bandwidth ;)
## Note that change detection is based on file size, not a hash, because fetching sizes is quicker :p
## Uncomment the aws cp and aws rm commands below to start using it ;)
set -e

function finish {
  echo "[+] Cleaning up .sitedeploy.tmp and .newfiles.tmp"
  # -f so cleanup does not fail if the script exits before the files exist
  rm -f .sitedeploy.tmp .newfiles.tmp
}
trap finish EXIT
# Generate the ./public folder
hugo

BUCKET=YOURBUCKETNAME
echo "[+] Getting the current list of files and their sizes"
aws s3 ls --recursive s3://$BUCKET/ > .sitedeploy.tmp
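# For reference, `aws s3 ls --recursive` prints four whitespace-separated
# columns per object (the sample values below are illustrative, not from a
# real bucket):
#   2020-06-16 07:04:00     1234 posts/index.html
#   (date)     (time)     (size) (key)
# which is why the loops below pull the size out of column $3 and the key
# out of column $4, e.g.:
#   sample="2020-06-16 07:04:00     1234 posts/index.html"
#   echo "$sample" | awk '{ print $3 }'   # -> 1234
#   echo "$sample" | awk '{ print $4 }'   # -> posts/index.html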
find ./public -type f | sed 's/\.\/public\///g' > .newfiles.tmp
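# The sed above strips the ./public/ prefix so local paths line up with the
# S3 object keys, e.g. (illustrative path):
#   echo "./public/posts/index.html" | sed 's/\.\/public\///g'
#   # -> posts/index.html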
echo "[+] Comparing the new file list against the current sizes"
while IFS= read -r line ; do
  # Empty when the file is new to the bucket, so it gets uploaded too
  currentsize=$(grep " $line$" .sitedeploy.tmp | awk '{ print $3 }')
  # Paths are quoted so filenames with spaces do not break the commands
  newsize=$(ls -l "./public/$line" | awk '{ print $5 }')
  if [ "$newsize" != "$currentsize" ]; then
    echo "Updating $line"
    # aws s3 cp "./public/$line" "s3://$BUCKET/$line"
  fi
done < .newfiles.tmp
echo "[+] Deleting files that no longer exist locally"
while IFS= read -r line ; do
  currentfile=$(echo "$line" | awk '{ print $4 }')
  # If the old file does not exist in our new list - delete it
  if ! grep -Fxq "$currentfile" .newfiles.tmp ; then
    echo "Deleting $currentfile"
    # aws s3 rm "s3://$BUCKET/$currentfile"
  fi
done < .sitedeploy.tmp
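# Usage sketch (assumes the aws CLI is installed and configured with
# credentials that can write to $BUCKET, and that hugo is on PATH;
# "deploy.sh" is a placeholder name for this script):
#   chmod +x deploy.sh
#   ./deploy.sh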