HTTP/1.1 and HTTP/2.0 test

The plan here is to investigate the network efficiency of HTTP/2.0 versus HTTP/1.1 by gently hitting Twitter's API. The script below makes 5 HTTP requests to Twitter and collects the responses.

In my tests I'm seeing an insane difference, whereby HTTP/2.0 transfers 250 kB of data and HTTP/1.1 transfers 2.2 MB. That gap seems too large to be real, so I'd like to work out why it's happening.

To run this, you'll need Python 3.3. Install the requirements listed in requirements.txt, then go to Twitter's API page and set up an application. Grab the four credential values the script expects, fill them in, and run the script, using Wireshark to check how much traffic was sent. Let me know what you find!
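If you want to confirm which adapter is actually handling the traffic before reaching for Wireshark, one option is to switch on debug logging at the top of the script. This is a minimal sketch, assuming that requests/urllib3 and hyper all emit their diagnostics through the standard library logging module (the exact messages will vary by version):

import logging
import sys

# Route all library debug output to stderr so you can see which connections
# are opened and whether the HTTP/2.0 adapter is being used for them.
logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)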

from hyper.contrib import HTTP20Adapter
from twython import Twython
import logging
import sys
import time
#########################################################################
# You'll need to create a Twitter app and get these values.
API_KEY = ''
API_SECRET = ''
ACCESS_TOKEN = ''
ACCESS_SECRET = ''
#########################################################################
t = Twython(API_KEY, API_SECRET, ACCESS_TOKEN, ACCESS_SECRET)
a = HTTP20Adapter()
#########################################################################
# Comment out these lines to get HTTP/1.1.
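# Twython's `client` attribute is a requests Session under the hood, so
# mounting hyper's HTTP20Adapter against these URL prefixes routes any
# matching request over HTTP/2.0 instead of requests' default HTTP/1.1
# transport.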
t.client.mount('https://www.twitter.com', a)
t.client.mount('https://twitter.com', a)
t.client.mount('https://api.twitter.com', a)
##########################################################################
start = time.perf_counter()
# Let's get all my mentions. First, get the initial set and save off the
# smallest ID found.
mentions = []
chunk = t.get_mentions_timeline(count=200, include_rts=1)
max_id = chunk[-1]['id'] - 1  # max_id is inclusive, so step past the last tweet
mentions += chunk
# Now, keep looping until we reach the end or we've reached 800 tweets.
while len(chunk) > 180 and len(mentions) < 800:
    chunk = t.get_mentions_timeline(count=200, include_rts=1, max_id=max_id)
    if not chunk:
        break  # no older mentions left
    max_id = chunk[-1]['id'] - 1
    mentions += chunk
end = time.perf_counter()
# Print a summary of what we fetched and how long it took.
print("Messages: %d" % (len(mentions),))
print("Execution time: %f\n\n" % (end-start,))
requirements.txt:
-e git+git@github.com:Lukasa/hyper.git@effc09e5e0da99cb8898348e8b8f5df6cddad29d#egg=hyper-origin/master
oauthlib==0.6.1
requests==2.2.1
requests-oauthlib==0.4.0
twython==3.1.2