Archiving a website with wget

The command I use to archive a single website:

wget -mpck --html-extension --user-agent="" -e robots=off --wait 1 -P . www.foo.com

Explanation of the parameters used

  • -m (Mirror) Turns on mirror-friendly settings: infinite recursion depth, timestamping, and so on.
  • -c (Continue) Resumes a partially downloaded transfer.
  • -p (Page requisites) Downloads any page dependencies, like images and style sheets.
  • -k (Convert) After retrieval completes, rewrites links so the archive works locally: absolute links to other downloaded files become relative links, and relative links to files that weren't downloaded become absolute, external links. In a nutshell: it makes your website archive browsable offline.
  • --html-extension Appends .html to downloaded filenames that don't already end in it, to make sure the archive plays nicely on whatever system you view it on.
  • --user-agent="" Some sites use robots.txt to block certain agents, like web crawlers (e.g. Googlebot) and Wget. This tells Wget to send a blank user-agent string, preventing identification. You could instead send a web browser's user-agent string and masquerade as a browser, but it probably doesn't matter.
  • -e robots=off Sometimes you'll run into a site whose robots.txt blocks everything. In these cases, this setting tells Wget to ignore it. Like the blank user-agent, I usually leave this on for convenience.
  • --wait 1 Tells Wget to wait one second between retrievals, making the download a bit less taxing on the server.
  • -P . Sets the download directory. I left it at the default "." (meaning "here"), but this is where you could pass a directory path to tell wget where to save the archived site. Handy if you're doing this on a regular basis, say as a cron job (see the sketch after this list).
  • www.foo.com The full URL of the site to download. You'll likely want to change this.
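
For example, a weekly cron job along these lines would keep the archive fresh (a sketch: the schedule and the /srv/archives path are placeholders to adapt):

    # Hypothetical crontab entry: re-archive www.foo.com every Sunday at 03:00,
    # saving into /srv/archives (both values are placeholders)
    0 3 * * 0 wget -mpck --html-extension --user-agent="" -e robots=off --wait 1 -P /srv/archives www.foo.com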

@terminal-root

replying to @BrandonKMLee

It does not conflict, to my knowledge; it just includes the recursion flag.

  -m,  --mirror                    shortcut for -N -r -l inf --no-remove-listing
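
In other words, going by that help text, the archive command at the top should be equivalent to spelling the shortcut out (a sketch; I haven't diffed the two runs):

    wget -N -r -l inf --no-remove-listing -p -c -k --html-extension --user-agent="" -e robots=off --wait 1 -P . www.foo.com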

@mariano-daniel

mariano-daniel commented Jul 15, 2023

I'm having trouble wgetting only this URL, https://forum.spacehey.com/topic?id=3959, plus all the links that branch out of it, no more than one level away from the link.

When I run wget -e robots=off --recursive -np -k --html-extension "https://forum.spacehey.com/topic?id=3959"

it starts downloading all of the forums and all the users. I just want topic?id=3959 to be downloaded, plus the user profiles or links that branch out from it, but no deeper than one level. Not sure if I'm explaining myself correctly.
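
Something like this is what I imagine should do it, if -l/--level limits the recursion depth the way I think it does (untested on my end):

    wget -e robots=off -r -l 1 -np -k --html-extension "https://forum.spacehey.com/topic?id=3959"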
