@chankeypathak
Created July 26, 2017 06:42
Download big files from the internet in chunks to avoid an out-of-memory error
import requests  # just a choice of comfort for me


def download(url_address, filename):
    # Stream the response so the whole body is never held in memory at once.
    response = requests.get(url_address, stream=True)
    response.raise_for_status()
    with open(filename, "wb") as f:
        total_length = response.headers.get('content-length')
        if total_length is None:
            # No Content-Length header: fall back to writing the body in one go.
            f.write(response.content)
        else:
            total_length = int(total_length)
            # Write the file in roughly 100 chunks; chunk_size must be an int,
            # so use integer division and guard against very small files.
            for data in response.iter_content(chunk_size=max(total_length // 100, 1)):
                f.write(data)
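
A minimal usage sketch; the URL and output filename below are hypothetical:

# Hypothetical example: stream a large archive to disk in roughly 100 chunks.
download("https://example.com/large-file.zip", "large-file.zip")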
@chankeypathak (Author) commented with a simpler variant that uses a fixed chunk size:
import requests

response = requests.get(url, stream=True)   # `url` is the file to download
with open(target_path, "wb") as handle:     # `target_path` is the output file
    for chunk in response.iter_content(chunk_size=512):
        if chunk:  # filter out keep-alive new chunks
            handle.write(chunk)
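
A fixed chunk size avoids depending on the Content-Length header at all; a larger value (for example 8192 bytes) reduces the number of write calls for big files while still keeping memory use bounded.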
