@trungnt13
Last active April 14, 2023 12:26
[CheatSheet] Curl and Wget


Comparison

wget:

Pros:

  • Simple to use for downloading files
  • Can download files recursively from a directory
  • Supports resuming interrupted downloads
  • Can download files in the background
  • Supports HTTP, HTTPS, and FTP protocols

Cons:

  • Limited to file downloading tasks
  • No support for uploading files
  • Less flexible compared to curl

curl:

Pros:

  • More versatile than wget
  • Supports a wide range of protocols (HTTP, HTTPS, FTP, SFTP, SCP, LDAP, etc.)
  • Can upload and download files
  • Supports data manipulation (POST, PUT, DELETE, etc.)
  • Can be used for API testing and interaction
  • Supports cookies, authentication, and custom headers

Cons:

  • More complex command-line options
  • No built-in support for downloading files recursively
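The overlap between the two tools is easiest to see with a minimal download in each (example.com is a placeholder for any HTTP server):

```shell
# wget keeps the remote filename by default:
wget https://example.com/file.txt

# curl writes to stdout by default; -O keeps the remote filename:
curl -O https://example.com/file.txt
```

Everything beyond this basic case is where the tools diverge, as the sections below show.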

Curl

  1. GET: curl https://api.example.com/data
  2. POST: curl -X POST -H "Content-Type: application/json" -d '{"key": "value"}' https://api.example.com/data
  3. Save output to file: curl -o output.txt https://api.example.com/data
  4. Basic authentication: curl -u username:password https://api.example.com/secure-data
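A few flags worth combining with the basics above (all URLs are placeholders):

```shell
# Follow redirects (-L), fail on HTTP errors (-f), quiet but still show errors (-sS):
curl -fsSL https://api.example.com/data

# Resume a partial download from where it stopped:
curl -C - -O https://example.com/big.iso

# Send a custom header ($TOKEN is a placeholder value):
curl -H "Authorization: Bearer $TOKEN" https://api.example.com/secure-data
```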

Download and run a script:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -
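Piping straight into an interpreter runs code you have never seen. A more cautious pattern (same Homebrew URL as above) is to download, inspect, then execute:

```shell
# Save the installer to a file instead of piping it:
curl -fsSL -o install.sh https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh

# Read it before running it:
less install.sh

# Execute only once you trust the contents:
/bin/bash install.sh
```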

Download all PDF files with curl

curl https://example.com/documents | grep -oP 'https://example.com/documents/\K[^"]+\.pdf' | xargs -I {} curl -O https://example.com/documents/{}

Keep in mind that this approach has some limitations and may not work for all websites, especially those that use JavaScript to load content or have complex URL structures. In such cases, you might need to use more advanced tools like wget or web scraping libraries in programming languages like Python or JavaScript.
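The one-liner above can also be written as an explicit loop, which is easier to adapt. The URL and link pattern are the same assumptions as before, and grep -oP requires GNU grep with PCRE support:

```shell
base=https://example.com/documents

# Fetch the index page, extract the PDF filenames, download each one:
curl -fsSL "$base/" \
  | grep -oP "${base}/\K[^\"]+\.pdf" \
  | while read -r name; do
      curl -fsSL -O "$base/$name"
    done
```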

Wget

  1. GET: wget https://api.example.com/data
  2. Save output to file: wget -O newfile.txt https://example.com/file.txt
  3. Save output to dir: wget -P /path/to/directory https://example.com/file.txt
  4. Basic authentication: wget --user=username --password=password https://example.com/secure-file.txt
  5. Download multiple files listed in a text file: wget -i urls.txt
  6. Download all files of a given type: use -r for recursive download and -A to filter by extension: wget -r -A .pdf https://example.com/documents
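Two of the pros listed earlier (resuming and background downloads) map to single flags. The -np flag is an addition worth knowing about: it stops recursion from climbing above the starting directory. URLs are placeholders:

```shell
# Resume an interrupted download:
wget -c https://example.com/big.iso

# Download in the background; progress is written to wget-log:
wget -b https://example.com/big.iso

# Recursive PDF grab that will not ascend to the parent directory:
wget -r -np -A '*.pdf' https://example.com/documents/
```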