@rswilley
Created March 9, 2019 12:11
Download an entire site with wget. I've used this as a first step to convert an existing site to a static site.
$ wget \
--recursive \
--no-clobber \
--page-requisites \
--html-extension \
--convert-links \
--restrict-file-names=windows \
--domains website.org \
--no-parent \
www.website.org/tutorials/html/
This command downloads everything under www.website.org/tutorials/html/.
The options are:
--recursive: download the entire Web site.
--domains website.org: don't follow links outside website.org.
--no-parent: don't follow links outside the directory tutorials/html/.
--page-requisites: get all the elements that compose the page (images, CSS and so on).
--html-extension: save files with the .html extension.
--convert-links: convert links so that they work locally, off-line.
--restrict-file-names=windows: modify filenames so that they will work in Windows as well.
--no-clobber: don't overwrite any existing files (used in case the download is interrupted and resumed).
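If the server throttles or blocks aggressive crawlers, a slower variant of the same command can help. This is a sketch, not part of the original recipe: --wait and --random-wait pause between requests, --limit-rate caps bandwidth, and --user-agent replaces wget's default identifier (the value shown is just an example), which some servers reject.
$ wget \
--recursive \
--no-clobber \
--page-requisites \
--html-extension \
--convert-links \
--restrict-file-names=windows \
--domains website.org \
--no-parent \
--wait=1 \
--random-wait \
--limit-rate=200k \
--user-agent="Mozilla/5.0" \
www.website.org/tutorials/html/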
https://www.linuxjournal.com/content/downloading-entire-web-site-wget
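To check the result before converting it to a static site, serve the downloaded tree locally. A minimal sketch, assuming wget saved the files under www.website.org/ (its default of one directory per host) and Python 3 is installed:
$ cd www.website.org
$ python3 -m http.server 8000
Then open http://localhost:8000/tutorials/html/ in a browser; --convert-links should make the internal links resolve against the local copy.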