@vhscom
Forked from suzannealdrich/wget.txt
Created December 11, 2023 15:12
wget spider cache warmer
wget --spider -o wget.log -e robots=off -r -l 5 -p -S --header="X-Bypass-Cache: 1" --limit-rate=124k www.example.com
# Options explained
# --spider: Spider mode; request pages without saving them
# -o wget.log: Write the log to wget.log
# -e robots=off: Ignore robots.txt
# -r: Recursive download
# -l 5: Depth to search, i.e. 1 means 'crawl the homepage'; 2 means 'crawl the homepage and all pages it links to', and so on
# -p: get all images, etc. needed to display HTML page
# -S: print server response
# --limit-rate=124k: Make sure we're crawling and not DoS'ing the site
# www.example.com: URL to start crawling
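For repeated warm-ups against different sites, the one-liner can be wrapped in a small helper that assembles the command from a host and a rate cap. This is a sketch, not part of the original gist; the function name `build_warm_cmd` and its defaults are assumptions. Printing the command first (a dry run) lets you check it before hammering a live origin.

```shell
#!/bin/sh
# build_warm_cmd: assemble the cache-warming command for a given host.
# Hypothetical helper; the name and defaults are not from the original gist.
build_warm_cmd() {
  host="$1"
  rate="${2:-124k}"   # default bandwidth cap, matching the one-liner above
  printf 'wget --spider -o wget.log -e robots=off -r -l 5 -p -S --header="X-Bypass-Cache: 1" --limit-rate=%s %s\n' "$rate" "$host"
}

# Dry run: print the command instead of executing it.
build_warm_cmd www.example.com
```

To actually run the crawl, pipe the output to `sh` or replace `printf` with the command itself.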