@erik-stephens
Last active December 9, 2016 02:59
IPython Notebook for "Python Logging for Web Applications"
{
"metadata": {
"name": "",
"signature": "sha256:7d07642324cdbdebf553013fd4b07e8a9047561c77299e4531e77fea82122420"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Python Logging for Web Applications\n",
"\n",
"The python logging module can be used for much more than replacing your print() statements. Plus, using it will make your operations team happy since environment specific settings can easily be defined without having to touch the application code. Trying to understand the details can make for a humbling experience. This [diagram](https://docs.python.org/3/howto/logging.html#logging-flow) does a good job of documenting the flow control.\n",
"\n",
"In this notebook, we'll focus on the use-case of emitting a log record for each request to a web application. We want flexibility in being able to attach arbitrary data to the per-request log record. An alternative approach could be to emit multiple log records for a given web request. That has disadvantages such as duplicating data in order to tie the log records together. Even if duplicate data isn't a concern, re-constructing what a single request looks like becomes more challenging.\n",
"\n",
"## Requirements\n",
"\n",
"The Python requirements (tested with Python 3.4.1) are as follows:\n",
"\n",
"    Flask==0.10.1\n",
"    Jinja2==2.7.3\n",
"    MarkupSafe==0.23\n",
"    Pygments==1.6\n",
"    Werkzeug==0.9.6\n",
"    certifi==14.05.14\n",
"    elasticsearch==1.1.1\n",
"    gnureadline==6.3.3\n",
"    ipython==2.1.0\n",
"    itsdangerous==0.24\n",
"    pyzmq==14.3.1\n",
"    redis==2.10.1\n",
"    tornado==4.0\n",
"    urllib3==1.9\n",
"\n",
"This notebook assumes a basic understanding of the logging module and its main components: loggers, handlers, formatters, and filters. We will emit messages directly (and synchronously) to an [ElasticSearch](http://www.elasticsearch.org) server to showcase the kinds of views you can provide using Python logging. In a production environment, you would want to asynchronously emit messages to a queue such as [Redis](http://redis.io) to act as a buffer. You would also want to use something like Logstash to consume the queue in order to do normalization, filtering, and routing.\n",
"\n",
"During the meeting, I will use my local installations of ElasticSearch & Kibana. Feel free to point the ElasticSearchHandler below to my instance. Running local, standalone copies of those is pretty simple. I highly recommend installing those locally so you can experiment after the meeting. Here is [a copy of the Kibana dashboard](https://gist.github.com/erik-stephens/6c4ab1222b088c15413f)."
]
},
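As a taste before the details, here is a minimal, self-contained sketch (not from the notebook; the `myapp` channel name and the in-memory stream are illustrative choices) of what replacing a print() call with a logger looks like:

```python
import io
import logging

# Route records to an in-memory stream so the example is self-checking;
# a real app would configure handlers externally (shown later with dictConfig).
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(levelname)s %(name)s %(message)s'))

log = logging.getLogger('myapp')  # arbitrary channel name for this sketch
log.setLevel(logging.INFO)
log.addHandler(handler)
log.propagate = False  # keep the demo's output out of the root handlers

log.info('handled request in %dms', 42)  # lazy %-formatting, unlike print()
print(stream.getvalue().strip())
```

Note the message arguments are only interpolated if the record passes the level check, which is one small advantage over building strings for print().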
{
"cell_type": "code",
"collapsed": false,
"input": [
"import sys\n",
"import os\n",
"import imp\n",
"import json\n",
"import threading\n",
"import socket\n",
"import random\n",
"import time\n",
"import elasticsearch\n",
"from datetime import datetime\n",
"import logging.config, logging.handlers"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 1
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Custom Handlers\n",
"\n",
"Writing custom handlers is pretty simple. Here are some examples. We'll be using the ElasticSearchHandler for this notebook. One thing to be aware of: applications that use this handler will fail if there is no server to connect to initially. In addition to handling failure, you should think about latency requirements. For example, if the handler synchronously calls out to an external service, then that could impact performance as well as skew timing instrumentation."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import redis  # used by RedisHandler below; not in the imports cell above\n",
"\n",
"\n",
"class ScriptHandler(logging.Handler):\n",
"    \"\"\"\n",
"    A logging handler which behaves the way scripts should:\n",
"    error msgs to stderr and normal msgs to stdout.\n",
"    \"\"\"\n",
"    @classmethod\n",
"    def factory(cls, **args):\n",
"        return cls()\n",
"\n",
"    def emit(self, record):\n",
"        try:\n",
"            f = sys.stderr if record.levelno >= logging.WARN else sys.stdout\n",
"            f.write(self.format(record) + '\\n')\n",
"            f.flush()\n",
"        except (KeyboardInterrupt, SystemExit):\n",
"            raise\n",
"        except:\n",
"            self.handleError(record)\n",
"\n",
"\n",
"class WebHandler(logging.Handler):\n",
"    \"\"\"\n",
"    A handler class which allows pages to act the way scripts do:\n",
"    errors to one file and msgs to another.\n",
"    \"\"\"\n",
"    @classmethod\n",
"    def factory(cls, **args):\n",
"        return cls(args['info'], args['err'])\n",
"\n",
"    def __init__(self, info_path, err_path):\n",
"        self.info_stream = open(info_path, 'a')\n",
"        self.err_stream = open(err_path, 'a')\n",
"        logging.Handler.__init__(self)\n",
"\n",
"    def emit(self, record):\n",
"        try:\n",
"            f = self.err_stream if record.levelno >= logging.WARN else self.info_stream\n",
"            f.write(self.format(record) + '\\n')\n",
"            f.flush()\n",
"        except (KeyboardInterrupt, SystemExit):\n",
"            raise\n",
"        except:\n",
"            self.handleError(record)\n",
"\n",
"    def flush(self):\n",
"        self.info_stream.flush()\n",
"        self.err_stream.flush()\n",
"\n",
"    def close(self):\n",
"        self.info_stream.close()\n",
"        self.err_stream.close()\n",
"        logging.Handler.close(self)\n",
"\n",
"\n",
"class RedisHandler(logging.Handler):\n",
"    \"\"\"\n",
"    Emit logs to a Redis server using LPUSH.\n",
"    \"\"\"\n",
"    @classmethod\n",
"    def factory(cls, **args):\n",
"        key = args.pop('key', None)\n",
"        if not key:\n",
"            raise ValueError('RedisHandler requires `key`')\n",
"        return cls(key, **args)\n",
"\n",
"    def __init__(self, key, host='localhost', port=6379):\n",
"        self.key = key\n",
"        self.client = redis.StrictRedis(host=host, port=port)\n",
"        logging.Handler.__init__(self)\n",
"\n",
"    def emit(self, record):\n",
"        \"\"\"\n",
"        Synchronously LPUSH message to Redis.\n",
"\n",
"        NOTE: If given a model object as an argument, then add it as a keyword arg as well.\n",
"        Assuming the msg makes it to something like ElasticSearch, then this enables more\n",
"        powerful filtering. Examples: \"user:X\", \"order:Y\"\n",
"        \"\"\"\n",
"        d = dict(record.__dict__)\n",
"        for arg in d['args']:\n",
"            if hasattr(arg, '_kind'):\n",
"                s = str(arg)\n",
"                if hasattr(arg, 'id') and s != arg.id:\n",
"                    s = \"{}:{}\".format(arg.id, s)\n",
"                d[arg._kind] = s\n",
"        if 'type' not in d:\n",
"            d['type'] = d.get('name')\n",
"        exc = d.pop('exc_info', None)\n",
"        if exc and exc[0]:  # Could be (None, None, None)\n",
"            exc_type, exc_val, tb = exc\n",
"            d['exc'] = dict(\n",
"                type = exc_type.__name__,\n",
"                val = exc_val,\n",
"                tb = [],\n",
"            )\n",
"            while tb:\n",
"                f = tb.tb_frame\n",
"                d['exc']['tb'].append(\"{}:{}:{}\".format(f.f_code.co_filename, tb.tb_lineno, f.f_code.co_name))\n",
"                tb = tb.tb_next\n",
"            del tb  # Break the frame/traceback reference cycle to avoid a memory leak\n",
"        d['message'] = d.pop('msg', '') % d.pop('args', ())\n",
"        self.client.lpush(self.key, json.dumps(d, default=str))\n",
"\n",
"\n",
"class ElasticSearchHandler(logging.Handler):\n",
"    \"\"\"\n",
"    Emit logs to an ElasticSearch index.\n",
"    \"\"\"\n",
"    @classmethod\n",
"    def factory(cls, **args):\n",
"        index = args.pop('index', None)\n",
"        if not index:\n",
"            raise ValueError('ElasticSearchHandler requires `index`')\n",
"        return cls(index, **args)\n",
"\n",
"    def __init__(self, index, host='localhost', port=9200):\n",
"        self.index = index\n",
"        self.client = elasticsearch.Elasticsearch([dict(host=host, port=port)])\n",
"        logging.Handler.__init__(self)\n",
"\n",
"    def emit(self, record):\n",
"        \"\"\"\n",
"        Index record to ElasticSearch.\n",
"\n",
"        NOTE: If given a model object as an argument, then add it as a keyword arg as well.\n",
"        Assuming the msg makes it to something like ElasticSearch, then this enables more\n",
"        powerful filtering. Examples: \"user:X\", \"order:Y\"\n",
"        \"\"\"\n",
"        d = dict(record.__dict__)\n",
"        for arg in d['args']:\n",
"            if hasattr(arg, '_kind'):\n",
"                s = str(arg)\n",
"                if hasattr(arg, 'id') and s != arg.id:\n",
"                    s = \"{}:{}\".format(arg.id, s)\n",
"                d[arg._kind] = s\n",
"        if 'type' not in d:\n",
"            d['type'] = d.get('name')\n",
"        exc = d.pop('exc_info', None)\n",
"        if exc and exc[0]:  # Could be (None, None, None)\n",
"            exc_type, exc_val, tb = exc\n",
"            d['exc'] = dict(\n",
"                type = exc_type.__name__,\n",
"                val = exc_val,\n",
"                tb = [],\n",
"            )\n",
"            while tb:\n",
"                f = tb.tb_frame\n",
"                d['exc']['tb'].append(\"{}:{}:{}\".format(f.f_code.co_filename, tb.tb_lineno, f.f_code.co_name))\n",
"                tb = tb.tb_next\n",
"            del tb  # Break the frame/traceback reference cycle to avoid a memory leak\n",
"        d['message'] = d.pop('msg', '') % d.pop('args', ())\n",
"        d['@timestamp'] = datetime.utcfromtimestamp(d.pop('created'))\n",
"        self.client.index(index=self.index, doc_type=d['type'], body=d)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 2
},
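To see how little a custom handler needs, here is a standalone toy handler (not from the notebook) that collects formatted records in a list; the same `emit()` hook is all the handlers above implement:

```python
import logging

class ListHandler(logging.Handler):
    """Toy custom handler: collect formatted records in a Python list."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        try:
            # format() applies the attached Formatter, just like file/stream handlers
            self.records.append(self.format(record))
        except Exception:
            self.handleError(record)

log = logging.getLogger('handler-demo')  # arbitrary channel name for this demo
log.setLevel(logging.INFO)
handler = ListHandler()
handler.setFormatter(logging.Formatter('%(levelname)s %(message)s'))
log.addHandler(handler)

log.info('hello %s', 'world')
print(handler.records)
```

A handler like this is also handy in unit tests for asserting on what was logged.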
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filters\n",
"\n",
"Filters provide a way to hook into the processing of a log record in a handler-agnostic way. Filters can be used for more than just deciding whether a message should be logged. Context filters are very important, especially for applications where different data can be attached based on the context.\n",
"\n",
"One example: a web application might not require that all requests authenticate the user. For such requests, the user contextual data will be undefined. Another example: suppose you want to log performance metrics for each stage (eg input validation, authentication, template rendering, serialization). Context filters provide an elegant way for each stage to publish its data to the log record. See [the docs](https://docs.python.org/3/howto/logging-cookbook.html#filters-contextual) for more details."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"class ContextFilter(logging.Filter):\n",
"    \"\"\"\n",
"    A logging filter for storing & emitting thread-local attributes.\n",
"    \"\"\"\n",
"\n",
"    def __init__(self, name, **defaults):\n",
"        super().__init__(name)\n",
"        self.tlocal = threading.local()\n",
"        self.defaults = defaults\n",
"\n",
"    @property\n",
"    def _data(self):\n",
"        \"\"\"\n",
"        Return thread-local data if defined. Otherwise, initialize\n",
"        thread-local data with a copy of the defaults.\n",
"        \"\"\"\n",
"        if not hasattr(self.tlocal, 'data'):\n",
"            # Copy the defaults (including nested dicts like `times`) so\n",
"            # per-thread updates don't leak into the shared defaults.\n",
"            self.tlocal.data = {k: dict(v) if isinstance(v, dict) else v\n",
"                                for k, v in self.defaults.items()}\n",
"        return self.tlocal.data\n",
"\n",
"    def filter(self, record):\n",
"        \"\"\"\n",
"        Attach collected data to the record.\n",
"        \"\"\"\n",
"        record.__dict__.update(self._data)\n",
"        return True\n",
"\n",
"    # Emulate a dictionary\n",
"\n",
"    def update(self, *args, **kwargs):\n",
"        self._data.update(*args, **kwargs)\n",
"\n",
"    def __contains__(self, name):\n",
"        return name in self._data\n",
"\n",
"    def __getitem__(self, name):\n",
"        return self._data[name]\n",
"\n",
"    def __setitem__(self, name, val):\n",
"        self._data[name] = val\n",
"\n",
"\n",
"web_filter = ContextFilter(\n",
"    'web',\n",
"    hostname = socket.gethostname(),\n",
"    user = None,\n",
"    method = None,\n",
"    path = None,\n",
"    status = None,\n",
"    ip = None,\n",
"    size = None,\n",
"    times = dict(),\n",
")"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 3
},
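As a quick sanity check, here is a trimmed, standalone re-sketch of such a context filter (names differ from the cell above so it runs on its own), showing how `filter()` copies the stored context onto a record:

```python
import logging
import threading

class DemoContextFilter(logging.Filter):
    """Trimmed re-sketch of a thread-local context filter, for a standalone demo."""
    def __init__(self, name, **defaults):
        super().__init__(name)
        self.tlocal = threading.local()
        self.defaults = defaults

    @property
    def _data(self):
        if not hasattr(self.tlocal, 'data'):
            self.tlocal.data = dict(self.defaults)  # per-thread copy of defaults
        return self.tlocal.data

    def filter(self, record):
        record.__dict__.update(self._data)  # attach context as record attributes
        return True

    def __setitem__(self, name, val):
        self._data[name] = val

f = DemoContextFilter('demo', user=None)
f['user'] = 'alice'
record = logging.LogRecord('demo', logging.INFO, __file__, 1, 'msg', (), None)
f.filter(record)
print(record.user)
```

Because the attributes land directly on the record, formatters can reference them as `%(user)s` just like the built-in attributes.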
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuration\n",
"\n",
"As a general rule, leave the configuration of your logging to an external file. That way, you can easily apply environment specific settings. For development, writing to files might suffice. In production, a more centralized or performant configuration may be called for.\n",
"\n",
"There are multiple ways to define your logging configuration. [logging.config.fileConfig](https://docs.python.org/3/library/logging.config.html#configuration-file-format) can read a ConfigParser file format. You could use whichever serialization format you prefer (eg JSON, YAML) and use [logging.config.dictConfig](https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema). For this notebook, we will use dictionaries, since that makes for a more independent notebook as well as quicker experimentation. Here is the [list of default attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes) that can be used in formatters. If a custom field is used and it is not defined for the log record, then the message will quietly be dropped.\n",
"\n",
"### Important Notes\n",
"\n",
"- Almost always, you'll want `disable_existing_loggers: False`. This defaults to True for backwards compatibility.\n",
"\n",
"- When writing libraries, resist the temptation to define a base logging configuration. Leave all configuration up to the application. The logging module provides a mechanism for applications to decrease or increase a library's logging verbosity.\n",
"\n",
"- We use [user-defined objects](https://docs.python.org/3/library/logging.config.html#user-defined-objects) for our custom handlers to make this notebook more self-contained. Typically, your custom loggers will be addressable via your package/module and would obviate the need for a Handler factory."
]
},
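For example, an external JSON config file can be loaded with dictConfig in a couple of lines. This sketch writes a tiny config to a temp file as a stand-in for a real, operations-maintained `logging.json` (the file name and contents here are illustrative):

```python
import json
import logging.config
import os
import tempfile

# Stand-in for an environment-specific config file maintained by operations.
cfg = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'level': 'WARNING'},
    },
    'root': {'handlers': ['console'], 'level': 'WARNING'},
}
path = os.path.join(tempfile.mkdtemp(), 'logging.json')
with open(path, 'w') as fh:
    json.dump(cfg, fh)

# The application only needs these two lines; the file carries the settings.
with open(path) as fh:
    logging.config.dictConfig(json.load(fh))

print(logging.getLogger().level)  # root level now comes from the file
```

Swapping dev and production behavior then becomes a matter of pointing the application at a different file.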
{
"cell_type": "code",
"collapsed": false,
"input": [
"LOG_CFG = {\n",
"    'version': 1,\n",
"    'disable_existing_loggers': False,\n",
"    'handlers': {\n",
"        'elasticsearch': {\n",
"            '()': ElasticSearchHandler.factory,\n",
"            'host': 'localhost',\n",
"            'index': 'desertpy',\n",
"            'level': 'INFO',\n",
"        },\n",
"        'script': {\n",
"            '()': ScriptHandler.factory,\n",
"            'formatter': 'script',\n",
"            'level': 'INFO',\n",
"        },\n",
"        'stdout': {\n",
"            'class': 'logging.StreamHandler',\n",
"            'formatter': 'stdout',\n",
"            'stream': 'ext://sys.stdout',\n",
"            'level': 'INFO',\n",
"        },\n",
"        'syslog-bin': {\n",
"            'class': 'logging.handlers.SysLogHandler',\n",
"            'formatter': 'syslog-bin',\n",
"        },\n",
"        'syslog-web': {\n",
"            'class': 'logging.handlers.SysLogHandler',\n",
"            'formatter': 'syslog-web',\n",
"        },\n",
"    },\n",
"    'root': {\n",
"        'handlers': ['stdout'],\n",
"        'level': 'INFO',\n",
"    },\n",
"    'loggers': {\n",
"        'orvant.web': {\n",
"            'handlers': ['elasticsearch'],\n",
"            'level': 'INFO',\n",
"        },\n",
"    },\n",
"    'formatters': {\n",
"        'script': {\n",
"            'format': '%(levelname)s SCRIPT[%(name)s] %(message)s'\n",
"        },\n",
"        'stdout': {\n",
"            'format': '%(levelname)s STDOUT[%(name)s] %(message)s'\n",
"        },\n",
"        'syslog-bin': {\n",
"            'format': '%(type)s: %(message)s'\n",
"        },\n",
"        'syslog-web': {\n",
"            'format': '%(type)s %(ip)s %(user)s %(method)s %(path)s %(status)s %(size)s %(message)s'\n",
"        },\n",
"    },\n",
"}"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 4
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Show Me the Logging!\n",
"\n",
"All our setup is done. Let's get some loggers, log some messages, and see how things like log levels, handlers, and filters are inherited. You can use whatever logger hierarchy makes sense. We'll use *PROJECT.CONTEXT.APP* for this notebook. Typically, `logging.getLogger(__name__)` is what you want, since your package hierarchy is a good way to model your logger hierarchy. Plus, it makes it more intuitive for users of a library to know where in the logger hierarchy to configure.\n",
"\n",
"Logger instances are singletons - multiple calls to logging.getLogger('foo') will return the same Logger. Changing your configs and re-evaluating logging.config.dictConfig() may not produce the intended results - you may need to restart the notebook kernel.\n",
"\n",
"### Logger Level vs Handler Level\n",
"\n",
"The logger's effective level takes precedence over any of its handlers' levels. Assuming the logger's level allows the message through, then each handler's level is checked. This can be tricky to keep straight, but it allows for very fine-grained control. A web application provides a good example. Assume one handler is responsible for generating a common web access log, such as the one we attached to orvant.web. When troubleshooting a specific endpoint, you may want to attach a verbose handler that streams to the console or a log file so as to not pollute your normal log."
]
},
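A minimal, standalone illustration of that precedence (logger and channel names here are arbitrary): the handler is set to DEBUG, but the logger's WARNING level gates first.

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setLevel(logging.DEBUG)          # handler is permissive...

log = logging.getLogger('level-demo')
log.addHandler(handler)
log.propagate = False
log.setLevel(logging.WARNING)            # ...but the logger's level is checked first

log.debug('never emitted')               # dropped by the logger, never reaches the handler
log.warning('emitted')
print(stream.getvalue())
```

The reverse also holds: a DEBUG logger with a WARNING handler would create the record but the handler would discard it.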
{
"cell_type": "code",
"collapsed": false,
"input": [
"def show_effective_level(log):\n",
"    levels = {0: 'NOTSET', 10: 'DEBUG', 20: 'INFO', 30: 'WARN', 40: 'ERROR', 50: 'FATAL'}\n",
"    print(\"LEVEL[{}]: {}\".format(log.name, levels.get(log.getEffectiveLevel())))\n",
"\n",
"logging.config.dictConfig(LOG_CFG)\n",
"root = logging.getLogger()\n",
"orvant = logging.getLogger('orvant')\n",
"web = logging.getLogger('orvant.web')\n",
"web.addFilter(web_filter)\n",
"test1 = logging.getLogger('orvant.web.test1')"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 5
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Log levels are inherited: test1 inherits from web and orvant inherits from root."
]
},
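The mechanics behind this can be shown in isolation (fresh logger names, so it doesn't disturb the notebook's loggers): a logger with no level of its own is NOTSET and walks up the hierarchy for its effective level.

```python
import logging

parent = logging.getLogger('acme')       # arbitrary demo hierarchy
child = logging.getLogger('acme.api')
parent.setLevel(logging.ERROR)

# The child has no level of its own (NOTSET), so it inherits from 'acme'.
print(child.level == logging.NOTSET)
print(child.getEffectiveLevel() == logging.ERROR)
```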
{
"cell_type": "code",
"collapsed": false,
"input": [
"show_effective_level(root)\n",
"show_effective_level(orvant)\n",
"show_effective_level(web)\n",
"show_effective_level(test1)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"LEVEL[root]: INFO\n",
"LEVEL[orvant]: INFO\n",
"LEVEL[orvant.web]: INFO\n",
"LEVEL[orvant.web.test1]: INFO\n"
]
}
],
"prompt_number": 6
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Handlers are not directly inherited, but indirectly inherited as the log record traverses the logger hierarchy."
]
},
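A standalone sketch of that traversal (demo logger names, in-memory stream): the child has no handlers, yet its records are handled by the parent's handler via propagation.

```python
import io
import logging

stream = io.StringIO()
top = logging.getLogger('svc')           # arbitrary demo hierarchy
top.setLevel(logging.INFO)
top.addHandler(logging.StreamHandler(stream))

leaf = logging.getLogger('svc.db')
print(leaf.handlers)                     # the leaf owns no handlers at all
leaf.info('query done')                  # yet the record reaches svc's handler
print(stream.getvalue())
```

Setting `propagate = False` on a logger is how you opt out of this traversal.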
{
"cell_type": "code",
"collapsed": false,
"input": [
"for i in root.handlers:\n",
"    print(\"HANDLER[root]:\", i.name)\n",
"for i in orvant.handlers:\n",
"    print(\"HANDLER[orvant]:\", i.name)\n",
"for i in web.handlers:\n",
"    print(\"HANDLER[orvant.web]:\", i.name)\n",
"for i in test1.handlers:\n",
"    print(\"HANDLER[orvant.web.test1]:\", i.name)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"HANDLER[root]: stdout\n",
"HANDLER[orvant.web]: elasticsearch\n"
]
}
],
"prompt_number": 7
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Filters are not inherited at all, not even as log records traverse the logger hierarchy."
]
},
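A standalone demonstration (demo logger names; uses the fact that since Python 3.2 a plain callable can serve as a filter): a filter on the parent rejects records logged directly on it, but is never consulted for records arriving from a child.

```python
import io
import logging

stream = io.StringIO()
parent = logging.getLogger('appx')       # arbitrary demo hierarchy
parent.setLevel(logging.INFO)
parent.addHandler(logging.StreamHandler(stream))
parent.addFilter(lambda record: False)   # reject records logged *on* 'appx'

parent.info('direct')                    # filtered out by the logger's filter
logging.getLogger('appx.child').info('via child')  # parent's filter not consulted
print(stream.getvalue())
```

Handler-level filters, by contrast, do see every record the handler processes, which is the place to put a filter you want applied during propagation.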
{
"cell_type": "code",
"collapsed": false,
"input": [
"for i in root.filters:\n",
"    print(\"FILTER[root]:\", i.name)\n",
"for i in orvant.filters:\n",
"    print(\"FILTER[orvant]:\", i.name)\n",
"for i in web.filters:\n",
"    print(\"FILTER[orvant.web]:\", i.name)\n",
"for i in test1.filters:\n",
"    print(\"FILTER[orvant.web.test1]:\", i.name)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"FILTER[orvant.web]: web\n"
]
}
],
"prompt_number": 8
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Shutting Up Libraries\n",
"\n",
"Here is a good example demonstrating how to deal with 3rd-party logging modules. We could account for this in our config by specifying log levels for specific log channels or decreasing the verbosity of the root log channel. For this notebook, let's just quiet the more verbose log channels."
]
},
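An alternative to raising a channel's level is to stop propagation entirely, sketched here with a hypothetical channel name (`chattylib` stands in for any noisy third-party library); note its records then go nowhere unless you attach a handler of your own.

```python
import io
import logging

stream = io.StringIO()
root_handler = logging.StreamHandler(stream)
logging.getLogger().addHandler(root_handler)

chatty = logging.getLogger('chattylib')  # hypothetical noisy third-party channel
chatty.propagate = False                 # detach it from the root handlers entirely

chatty.warning('noise')
logging.getLogger().removeHandler(root_handler)
print(repr(stream.getvalue()))           # nothing reached the root handler
```

Raising the level (as in the next cell) is usually the gentler option, since warnings and errors still get through.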
{
"cell_type": "code",
"collapsed": true,
"input": [
"logging.getLogger('elasticsearch').setLevel(logging.WARN)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 9
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example Web App\n",
"\n",
"Here is a very basic web app using Flask. In a non-trivial web application, this code will most likely be organized across multiple packages or modules. Since logging.getLogger() returns the singleton Logger instance, you don't have to worry about passing the Logger instance to every function. Similarly, any filters attached to the Logger can be looked up by name. Coupled with a thread-local ContextFilter, you can attach arbitrary data to the log record for a web request."
]
},
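The singleton behavior is easy to verify, and is what lets any module fetch the same configured logger by name instead of passing it around:

```python
import logging

a = logging.getLogger('orvant.web')
b = logging.getLogger('orvant.web')
print(a is b)   # the very same Logger object, not just an equal one
```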
{
"cell_type": "code",
"collapsed": false,
"input": [
"from flask import Flask, request\n",
"import functools  # for functools.wraps, so Flask sees distinct endpoint names\n",
"\n",
"app = Flask(__name__)\n",
"\n",
"def get_filter(log_name, filter_name):\n",
"    \"\"\"\n",
"    Return filter by name, first by looking at the logger's\n",
"    filters and then each handler's filters.\n",
"    \"\"\"\n",
"    log = logging.getLogger(log_name)\n",
"    for filter in log.filters:\n",
"        if filter.name == filter_name:\n",
"            return filter\n",
"    for handler in log.handlers:\n",
"        for filter in handler.filters:\n",
"            if filter.name == filter_name:\n",
"                return filter\n",
"\n",
"class timed(object):\n",
"    \"\"\"\n",
"    Time wrapped function and store in given log context filter.\n",
"    \"\"\"\n",
"    def __init__(self, metric_name, log_name, filter_name):\n",
"        self.metric_name = metric_name\n",
"        self.log_name = log_name\n",
"        self.filter_name = filter_name\n",
"\n",
"    def __call__(self, func):\n",
"        @functools.wraps(func)\n",
"        def wrapper(*args, **kwds):\n",
"            start = time.time()\n",
"            try:\n",
"                return func(*args, **kwds)\n",
"            finally:\n",
"                filter = get_filter(self.log_name, self.filter_name)\n",
"                filter['times'][self.metric_name] = round(1000 * (time.time() - start))\n",
"        return wrapper\n",
"\n",
"@timed('auth', 'orvant.web', 'web')\n",
"def authenticate():\n",
"    user = random.choice(('larry', 'moe', 'curly', 'jabroney', 'jones'))\n",
"    get_filter('orvant.web', 'web').update(user = user)\n",
"    time.sleep(0.5 * random.random())\n",
"    return user\n",
"\n",
"@timed('validate', 'orvant.web', 'web')\n",
"def validate():\n",
"    time.sleep(0.5 * random.random())\n",
"\n",
"@timed('render', 'orvant.web', 'web')\n",
"def render():\n",
"    time.sleep(0.5 * random.random())\n",
"\n",
"# NOTE: @app.route must be the outermost decorator so that Flask registers\n",
"# the timed wrapper rather than the bare view function.\n",
"@app.route('/t1_<path:path>')\n",
"@timed('total', 'orvant.web.test1', 'web')\n",
"def test1_endpoint(path):\n",
"    authenticate()\n",
"    validate()\n",
"    render()\n",
"    return reply('Great Success!', 'orvant.web.test1')\n",
"\n",
"@app.route('/t2_<path:path>')\n",
"@timed('total', 'orvant.web.test2', 'web')\n",
"def test2_endpoint(path):\n",
"    authenticate()\n",
"    validate()\n",
"    render()\n",
"    return reply('Great Success!', 'orvant.web.test2')\n",
"\n",
"def reply(body, channel):\n",
"    \"Common per-response processing\"\n",
"    filter = get_filter('orvant.web', 'web')\n",
"    filter.update(\n",
"        method = request.method,\n",
"        path = request.path,\n",
"        status = random.choice(10*[200] + 1*[302] + 1*[304] + 1*[400] + 1*[404] + 1*[500] + 1*[503]),\n",
"        ip = request.remote_addr or \"10.1.1.{}\".format(random.choice(range(255))),\n",
"        size = round((1 << 20) * random.random()),\n",
"    )\n",
"    logging.getLogger(channel).info(\"{} {} {}\".format(filter['user'], request.method, request.path))\n",
"    return body\n",
"\n",
"def get(path):\n",
"    \"Simulate a GET to our web app\"\n",
"    return app.test_client().get(path)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 10
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Simulate Requests\n",
"\n",
"Let's simulate a few minutes' worth of web requests so we can see the kinds of reporting we can do. If we want each endpoint to use its own logger, then we have to make sure each of those loggers has our web filter attached - the one attached to orvant.web is not inherited at all. I find this unintuitive and a bit clunky, but it can be worked around."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"logging.getLogger('orvant.web.test1').addFilter(web_filter)\n",
"logging.getLogger('orvant.web.test2').addFilter(web_filter)\n",
"for i in range(500):\n",
"    get(\"/t{}_{}\".format(\n",
"        random.choice((1, 2)),\n",
"        random.choice(('foo', 'bar', 'baz', 'big', 'bam', 'boozle'))))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"INFO STDOUT[orvant.web.test1] moe GET /t1_foo\n"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"INFO STDOUT[orvant.web.test2] jones GET /t2_baz\n"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"INFO STDOUT[orvant.web.test1] curly GET /t1_foo\n"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"INFO STDOUT[orvant.web.test1] jabroney GET /t1_bar\n"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"INFO STDOUT[orvant.web.test1] larry GET /t1_foo\n"
]
}
],
"prompt_number": 13
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 11
}
],
"metadata": {}
}
]
}