Dirty, but it works. You do not need to let it run to completion; a few hundred requests should be enough. The purpose is to get all the main pages cached.
#!/usr/bin/bash
# Spider the site one level deep, skipping static assets, so that every
# main page gets requested once and ends up in the cache.
wget --directory-prefix=/tmp --spider --recursive --level=1 -nd \
    --reject '*.js,*.css,*.ico,*.txt,*.gif,*.jpg,*.jpeg,*.png,*.mp3,*.pdf,*.tgz,*.flv,*.avi,*.mpeg,*.iso' \
    --ignore-tags=img,link,script https://www.keengardener.co.uk/ 2>&1 | grep '^Saving to:'
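
To confirm the warm-up is actually working, you can request a page twice and inspect the response headers of the second hit. This is a minimal sketch, assuming the cache layer in front of the site sets an X-Cache or Age header; the exact header name depends on your setup (Varnish, nginx, a CDN, etc.):

#!/usr/bin/bash
# First request primes the cache; second request should be a cache hit.
# 'X-Cache' is an assumption -- substitute whatever header your cache
# layer actually sets.
curl -s -o /dev/null https://www.keengardener.co.uk/
curl -s -D - -o /dev/null https://www.keengardener.co.uk/ \
    | grep -i -E '^(x-cache|age|cache-control):'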