@Fishezzz
Created June 11, 2022 10:26
Python script to bulk-download files from URLs listed in a text file.
#!/usr/bin/python
import sys, os
import requests

# two arguments are required, so argv must have at least 3 entries
if len(sys.argv) < 3:
    print('Need 2 arguments: <file-with-urls> <download-folder>')
    exit(1)

url_file = sys.argv[1]
folder = sys.argv[2]

# check that the file which contains the urls exists
if not os.path.exists(url_file):
    print('"{}" does not exist'.format(url_file))
    exit(2)

# create the folder where downloaded files will be saved, if needed
if not os.path.exists(folder):
    os.makedirs(folder)
    if not os.path.exists(folder):
        print('Could not create folder "{}"'.format(folder))
        exit(3)

# open the file which contains the urls and read all lines
with open(url_file) as fp:
    lines = fp.readlines()

total = len(lines)
count = 1

# process each line
for line in lines:
    url = line.strip()
    # use the last path segment of the url as the file name
    filename = url.split('/')[-1]

    # print progress
    print('[{}/{}] downloading "{}"'.format(count, total, filename))
    count += 1

    # download content from url
    r = requests.get(url, stream=True)
    if r.status_code == 200:
        with open(os.path.join(folder, filename), 'wb') as fd:
            # save content to file in chunks
            for chunk in r.iter_content(chunk_size=8192):
                fd.write(chunk)
    else:
        print('Could not download "{}". Got status code {}'.format(url, r.status_code))
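Usage sketch (the script name and input file name below are examples, not part of the gist): save the script as, say, bulk_download.py, put one URL per line in a file such as urls.txt, and run `python bulk_download.py urls.txt downloads`. Each file is saved into the downloads folder under the last path segment of its URL.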