@tonybaloney
Last active June 5, 2023 22:13
import time
import _xxsubinterpreters as subinterpreters
import _xxinterpchannels as interpchannels
from threading import Thread
import textwrap as tw
from queue import Queue

timeout = 1  # connection timeout, in seconds


def run(host: str, port: int, results: Queue):
    # Create a channel for the subinterpreter to report back on
    channel_id = interpchannels.create()
    interpid = subinterpreters.create()
    # Run the socket probe inside a fresh subinterpreter; `shared`
    # injects channel_id, host, port, and timeout into its globals
    subinterpreters.run_string(
        interpid,
        tw.dedent(
            """
            import socket
            import _xxinterpchannels as interpchannels

            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(timeout)
            result = sock.connect_ex((host, port))
            interpchannels.send(channel_id, result)
            sock.close()
            """),
        shared=dict(
            channel_id=channel_id,
            host=host,
            port=port,
            timeout=timeout,
        ))
    print("completed")
    # connect_ex() returns 0 when the connection succeeded
    output = interpchannels.recv(channel_id)
    interpchannels.release(channel_id)
    if output == 0:
        results.put(port)


if __name__ == '__main__':
    start = time.time()
    host = "localhost"  # pick a friend
    threads = []
    results = Queue()
    # One thread (and one subinterpreter) per port to scan
    for port in range(80, 100):
        t = Thread(target=run, args=(host, port, results))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    while not results.empty():
        print("Port {0} is open".format(results.get()))
    print("Completed scan in {0} seconds".format(time.time() - start))
@ericsnowcurrently

I definitely wasn't expecting as much of a performance improvement. Here's one of the slides from my upcoming PyCon talk, where I mostly benchmark using the same code Dave Beazley did in his PyCon 2015 talk, using a resource-constrained VM on my laptop:

Comparison: requests/second (fib(1)) → time for one long request (fib(30)):

|                           | 1 client                   | 2 clients                  | 3 clients                  | vs. fib(40)                |
|---------------------------|----------------------------|----------------------------|----------------------------|----------------------------|
| Plain Threaded            | 13045 reqs/sec → 0.924 sec | 11863 reqs/sec → 1.792 sec | 8143 reqs/sec → 2.488 sec  | 121 reqs/sec → 1.779 sec   |
| Threaded + Subprocesses   | 683 reqs/sec → 0.939 sec   | 441 reqs/sec → 1.022 sec   | 314 reqs/sec → 1.055 sec   | 607 reqs/sec → 1.009 sec   |
| Async                     | 6526 reqs/sec → 0.907 sec  | 3707 reqs/sec → 1.779 sec  | 2550 reqs/sec → 2.614 sec  | 0 reqs/sec → ∞ sec         |
| Interpreters (shared GIL) | 13198 reqs/sec → 0.976 sec | ??? reqs/sec               | ??? reqs/sec               | ??? reqs/sec               |
| Interpreters (own GIL)    | 13238 reqs/sec → 0.868 sec | 12595 reqs/sec → 0.896 sec | 11767 reqs/sec → 0.915 sec | 11522 reqs/sec → 0.872 sec |
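
For context, the benchmark workload is the CPU-bound, doubly recursive Fibonacci from that talk; a minimal sketch (the exact base case in the talk's version may differ):

def fib(n):
    # Deliberately slow recursion: fib(30) takes on the order of a
    # second on the benchmark VM, which is what the table's "long
    # request" latencies measure.
    if n < 2:
        return 1
    return fib(n - 1) + fib(n - 2)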

@ericsnowcurrently

FWIW, I'm pretty sure the crashes are far less frequent now, though not yet zero.
