@ngbeslhang
Forked from andrewwatts/request_time.py
Last active July 8, 2022 23:12
urllib2 vs urllib3 vs requests; updated for Python 3
#!/usr/bin/env python3
import time
# REMEMBER TO CHANGE THE VARIABLES BELOW BEFORE YOU RUN THE SCRIPT
_URL = 'http://localhost/tmp/derp.html'
_NUMBER = 1000
def test_urllib2():
    # urllib2 was split into urllib.request and urllib.error in Python 3; per the
    # page-top note from https://python.readthedocs.io/en/v2.7.2/library/urllib2.html
    import urllib.request
    import urllib.error
    try:
        response = urllib.request.urlopen(_URL)
    except urllib.error.HTTPError as e:
        # An HTTPError can be used as a (non-2xx) response object
        response = e
    response.code
    return response.read()
def test_urllib3():
    import urllib3
    http = urllib3.PoolManager()
    response = http.request('GET', _URL)
    response.status
    return response.data
def test_requests():
    import requests
    response = requests.get(_URL)
    response.status_code
    return response.text
if __name__ == '__main__':
    from timeit import Timer
    t_urllib2 = Timer("test_urllib2()", "from __main__ import test_urllib2")
    print('{0} urllib_request: {1}'.format(_NUMBER, t_urllib2.timeit(number=_NUMBER)))
    t_urllib3 = Timer("test_urllib3()", "from __main__ import test_urllib3")
    print('{0} urllib3: {1}'.format(_NUMBER, t_urllib3.timeit(number=_NUMBER)))
    t_requests = Timer("test_requests()", "from __main__ import test_requests")
    print('{0} requests: {1}'.format(_NUMBER, t_requests.timeit(number=_NUMBER)))
@ngbeslhang (Author)

Found this script while trying to figure out which HTTP library I should use, and I figured I'd test it out for fun. The results for https://en.wiktionary.org/wiki/abdi#Malay were obtained under the following conditions (the script settings I assume were used are sketched right after this list):

  • Telekom Malaysia as the ISP
  • 100 Mbps download speed, as measured on fast.com
  • A Windows 10 PC running on an AMD Ryzen 7 5800X, auto-overclocked with AMD Ryzen Master (avg. 4.5-4.6 GHz) and connected via Ethernet
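For context, a minimal sketch of the constants I assume were changed at the top of the script for these runs (the exact values are my assumption; they are not shown in the gist):

# Presumed settings for the benchmark runs below (assumption; not part of the gist)
_URL = 'https://en.wiktionary.org/wiki/abdi'  # the #Malay fragment is never sent over HTTP
_NUMBER = 1000  # varied per run: 1, 100, 500, 1000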

Results

1 execution:

1 urllib.request: 0.16860680002719164
1 urllib3: 0.127388599968981
1 requests: 0.08735689998138696

100 executions:

100 urllib.request: 9.37736159999622
100 urllib3: 7.848610699991696
100 requests: 6.6415311000309885

500 executions:

500 urllib.request: 46.91786539996974
500 urllib3: 45.20416580000892
500 requests: 35.19799030001741

1000 executions:

1000 urllib_request: 95.63006049999967
1000 urllib3: 87.47744510002667
1000 requests: 67.8350026999833

I don't realistically need anything beyond 1000 executions, as in my use case it will almost always be a single request just to check whether an entry exists.

Regardless, it's clear that requests wins out by a wider margin the more requests/executions you make. Coupled with its more user-friendly API, requests is what I'll be going with.
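For reference, here is a minimal sketch of that single-request check, assuming an entry's existence can be inferred from the HTTP status code of its page (the entry_exists helper is hypothetical, not part of the benchmarked script):

#!/usr/bin/env python3
# Minimal sketch (assumption): check whether a Wiktionary entry exists by
# requesting its page and inspecting the status code.
import requests

def entry_exists(word):
    # MediaWiki serves HTTP 404 for main-namespace pages that do not exist.
    response = requests.get('https://en.wiktionary.org/wiki/' + word, timeout=10)
    return response.status_code == 200

if __name__ == '__main__':
    print(entry_exists('abdi'))  # the entry benchmarked above; expected: True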
