Falcon ASGI Interface Proposal
import aiofiles

import falcon.asgi
import falcon.util
from falcon.asgi import SSEvent  # NOTE: assumed import path for the proposed SSE event type


class ChunkyBacon:
    def __init__(self, baconator):
        self._baconator = baconator

    async def on_get(self, req, resp, bacon_id=None):
        # Use JSON serializer to send it back in one chunk
        resp.media = await self._baconator.get_bacon(bacon_id)

        resp.set_header('X-Powered-By', 'Bacon')

        resp.body = 'some text'

        # Or use the new 3.0 alias for body (TBD)
        resp.text = 'some text'

        # Or set it to a byte string
        async with aiofiles.open('filename', mode='rb') as f:
            some_data = await f.read()

        resp.data = some_data

        # Adapt a sync function by running it in the default executor
        result = await falcon.util.sync_to_async(some_sync_function, some_arg_for_function)
        # NOTE: Since only one server supports the push extension so far, and
        # it is not really helpful for web APIs, we will probably delay this
        # feature to a post-3.0 release.
        #
        # A push promise consists of a location (the path and query parts of
        # the target URI only), as well as a set of request headers. The
        # request headers should mimic the headers that you would expect
        # to receive from the user agent if that UA were to request
        # the resource itself. When the UA gets to the point where it would
        # normally GET the pushed resource, it will check to see if
        # a push promise was sent that matches the location and set of
        # headers it is about to send. If there is a match, it may decide
        # to use the pushed resource rather than performing its own GET
        # request.
        #
        # If the UA does not cancel the push, the ASGI server will enqueue
        # a regular request for the promised push, and the app will
        # subsequently see it as a normal request, as if it had been sent
        # directly from the UA.
        #
        # By default, Falcon will copy headers from SOME_HEADER_NAME_SET_TBD
        # that are in the present req to the push promise. However, you can
        # override any of these by setting them explicitly in the call below.
        #
        # Push promises will only be sent if the ASGI server supports the
        # http.response.push extension (currently only hypercorn, but
        # support is also planned for daphne and uvicorn).
        #
        # See also:
        #
        #   * https://asgi.readthedocs.io/en/latest/extensions.html#http-2-server-push
        #   * https://httpwg.org/specs/rfc7540.html#PushResources
        #   * https://en.wikipedia.org/wiki/HTTP/2_Server_Push
        #
        virtual_req_headers = {}
        resp.add_push_promise(
            '/path/with/optional/query-string?value=10',
            headers=virtual_req_headers,
        )
        # Or stream the response if it is very large and/or from disk by
        # setting resp.stream to an async generator that yields byte strings,
        # or to an object that supports an awaitable, file-like read() method.
        #
        # If the object assigned to Response.stream also provides an
        # awaitable close() method, it will be called once the stream is
        # exhausted.
        #
        # resp.stream MUST either provide an async read() method or support
        # async iteration. If you don't or can't return an awaitable,
        # then set resp.data or resp.body instead.
        resp.stream = await aiofiles.open('bacon.json', 'rb')

        async def producer():
            while True:
                data_chunk = await read_data()
                if not data_chunk:
                    break

                yield data_chunk

        resp.stream = producer()
        # Or, rather than setting a response body as above, an app can instead
        # emit a series of server-sent events (SSE).
        #
        # The browser will automatically reconnect if the connection is
        # lost, so we don't have to do anything special there. But the
        # web server should be configured with a relatively long keep-alive
        # TTL to minimize the overhead of connection renegotiations.
        #
        # If the browser does disconnect, Falcon will detect the lost
        # client connection and stop iterating over the iterator/generator.
        #
        # Note that an async iterator or generator may be used (here we
        # illustrate only using an async generator).
        async def emitter():
            while True:
                some_event = await get_next_event()

                if not some_event:
                    # Will send an event consisting of a single
                    # "ping" comment to keep the connection alive.
                    yield SSEvent()

                    # Alternatively, one can simply yield None and
                    # a "ping" will also be sent as above.
                    yield

                    continue

                yield SSEvent(json=some_event, retry=5000)

                # Or...
                yield SSEvent(data=b'somethingsomething', id=some_id)

                # Alternatively, you can yield anything that implements
                # a serialize() method that returns a byte string
                # conforming to the SSE event stream format (see the
                # illustrative sketch at the bottom of this file).
                yield some_event

        resp.sse = emitter()
    async def on_put(self, req, resp, bacon_id=None):
        # Media handling takes care of asynchronously reading
        # the data and then parsing it. It turns out that Python
        # supports awaitable properties (albeit getters only).
        #
        # Note that existing media handlers will continue to work
        # as-is, but may optionally override async versions of their
        # methods as needed, i.e. serialize_async() and
        # deserialize_async() (see the handler sketch at the bottom
        # of this file).
        new_bacon = await req.get_media()
        await self._baconator.put(bacon_id, new_bacon)
        # Or read the request body in chunks using async-for and an
        # async generator exposed via __aiter__(), like this:
        manifest = await self._baconator.manifest(bacon_id)

        async for data_chunk in req.stream:
            await manifest.put_chunk(data_chunk)

        await manifest.finalize()

        # Or read the data all at once regardless of location. This provides
        # parity with the way most Falcon WSGI apps read the request
        # body; the stream can still be thought of as a file-like object.
        # However, it does not implement the full io.IOBase interface, so it
        # has no sync interface and does not support readline(), etc.
        new_bacon = await req.stream.read()  # readall() works as well
        await self._baconator.update(bacon_id, new_bacon)

        # Or read the data in chunks. The underlying stream will read and
        # buffer as needed. When EOF is reached, read() simply returns b''
        # for any further calls. Regardless of how the stream is read,
        # the implementation works in a similar manner to the WSGI
        # req.bounded_stream, meaning that it safely limits the stream
        # to the number of bytes specified by the Content-Length header.
        manifest = await self._baconator.manifest(bacon_id)

        while True:
            data_chunk = await req.stream.read(4096)
            if not data_chunk:
                break

            await manifest.put_chunk(data_chunk)

        await manifest.finalize()
        async def background_job_1():
            # Do something that may take a few seconds, such as initiating
            # a workflow process that was requested by the API call.
            pass

        # This will schedule the given coroutine function on the event loop
        # after returning the response, so that it doesn't delay the current
        # in-flight request. The coroutine must not block for long, since
        # doing so would block the event loop (and thus other in-flight
        # requests). For long-running operations, awaitable async libraries
        # or an Executor should be used to mitigate this problem.
        resp.schedule(background_job_1)

        def background_job_2():
            pass

        # In this case, Falcon will schedule the function to run on the
        # event loop's default Executor, after the response is sent.
        resp.schedule_sync(background_job_2)
baconator = Baconator()
api = falcon.asgi.App()
api.add_route('/bacon', ChunkyBacon(baconator))
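

# --- Illustrative sketch (not part of the proposal itself) ---
#
# As noted in on_get() above, anything yielded by the SSE emitter only needs
# a serialize() method that returns a byte string in SSE wire format. The
# class below is a minimal, hypothetical example of such an object; the name
# and fields are assumptions, not part of the proposed Falcon API.
class RawBaconEvent:
    def __init__(self, payload):
        self._payload = payload

    def serialize(self):
        # Minimal SSE framing: a single "data" field terminated by a blank
        # line, encoded as UTF-8 bytes.
        return 'data: {}\n\n'.format(self._payload).encode('utf-8')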
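

# --- Illustrative sketch (not part of the proposal itself) ---
#
# As noted in on_put() above, a custom media handler could opt into async
# media handling by overriding deserialize_async() (and serialize_async()).
# This sketch assumes falcon.media.BaseHandler as the base class and uses
# msgpack purely for illustration; treat the exact method signatures as
# assumptions until the 3.0 interface is finalized.
import msgpack  # hypothetical dependency, used only for this sketch

import falcon.media


class MsgPackHandlerSketch(falcon.media.BaseHandler):
    async def deserialize_async(self, stream, content_type, content_length):
        # Read the whole request body asynchronously, then unpack it.
        data = await stream.read()
        return msgpack.unpackb(data, raw=False)

    async def serialize_async(self, media, content_type):
        # Pack the response media into a byte string.
        return msgpack.packb(media, use_bin_type=True)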
@kgriffs commented Jan 3, 2020

I updated the inline comments for resp.stream = await aiofiles.open('bacon.json', 'rb') to match the PR implementation.

@kgriffs commented Jan 8, 2020

Added a note in the comments explaining that objects assigned to Response.stream may expose an awaitable read() method as an alternative to supporting async iteration.
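
For example (an illustrative sketch, not from the proposal; the class and method body are made up), a minimal object exposing an awaitable read() might look like this:

    class BufferedReader:
        """Minimal object exposing an awaitable read() for Response.stream."""

        def __init__(self, chunks):
            self._chunks = list(chunks)

        async def read(self, size=-1):
            # The size hint is ignored here for brevity; each call simply
            # returns the next buffered chunk, or b'' once exhausted.
            return self._chunks.pop(0) if self._chunks else b''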

@kgriffs commented Feb 4, 2020

Updated scheduling to use schedule_sync() for the synchronous function.

@kgriffs commented Feb 5, 2020

Added example to demonstrate sync_to_async()

@kgriffs commented Feb 12, 2020

I just posted a WebSocket proposal here: https://gist.github.com/kgriffs/023dcdc39c07c0ec0c749d0ddf29c4da

Suggestions welcome!
