@e96031413
Created February 19, 2020 07:25
A multithreading Python download script that can be reused.
# Assume that you have collected file links with a web crawler or any other method.
# We will use wget and concurrent.futures to download the files in parallel.
import concurrent.futures

import wget  # third-party package: pip install wget

# Read one URL per line, dropping blank lines and trailing newlines.
with open('download_link.txt', 'r') as f:
    url_list = [line.strip() for line in f if line.strip()]

def download(url):
    # Note: every URL is saved under the same name, so later downloads
    # overwrite earlier ones; derive the name per URL if that matters.
    name = "000.jfif"  # choose any file name you want
    try:
        wget.download(url, name)  # fixed: the original passed the undefined variable `names`
        print('file has been downloaded.')
    except Exception as e:
        print(f'error occurred: {e}')

with concurrent.futures.ThreadPoolExecutor() as executor:
    for url in url_list:
        executor.submit(download, url)
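One caveat in the script above: every URL is saved under the same hard-coded name, so parallel downloads overwrite each other. A minimal sketch of a fix, deriving the local name from the URL path (the fallback name `download.bin` is an assumption, not from the original gist):

```python
import os
from urllib.parse import urlparse

def filename_from_url(url, default="download.bin"):
    """Use the last path component of the URL as the local file name."""
    name = os.path.basename(urlparse(url).path)
    return name or default  # fall back when the URL path has no file name
```

Inside `download`, you could then call `wget.download(url, filename_from_url(url))` so each thread writes to its own file.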