@neuroticnerd
Created February 20, 2015 00:02
debunking the myth that using logging instead of print is too difficult!

python logging demystified!

disclaimer

This has been written with the latest version of Python 2 in mind (currently 2.7.9), and while the general premise holds true for Python 3, some of the details may have small differences from how it would be done in Python 3.

Of course, for further insight into the workings of the Python logging module, I would encourage you to take a look at the official documentation for an authoritative reference.

simplest setup

Even just for simple scripts, it really is a good habit to start using the built-in Python logging module instead of print, even for writing simple information to the console. What follows are a few good logging practices for Python.

There seems to be a myth that changing from print to the logging module is too complicated, or too much work. There was a time when I thought the exact same thing, but it turns out I was mistaken! The simple truth is that the most basic logging configuration is only an extra one or two lines added to your code, but provides a large amount of value.

To simply output messages to the console as you would with print, there is no complex setup; in fact, you can use Python's default configuration just by importing the module:

import logging

# print 'some message'
# note: the root logger defaults to the WARNING level, so info()
# and debug() calls are silently dropped without further config
logging.warning('some message')

The default configuration is probably not ideal if writing plain messages to the console is all you're looking for: output goes to stderr as WARNING:root:some message, and anything below the WARNING level is discarded. Adding one more line of code provides an easy way to do basic tweaking of the setup for subsequent logging:

import logging
logging.basicConfig(format='%(message)s', level=logging.DEBUG)
logging.info('some message')

If you want to log to a file instead of the console, it's only a matter of adding a couple of additional parameters:

import logging
logging.basicConfig(
    filename='myapp.log',
    filemode='w',
    format='%(message)s',
    level=logging.DEBUG
    )
logging.info('some message')

It is important to note that in a multithreaded project, or one where you will be logging from multiple modules, it's good practice to configure logging from the main application entry point to avoid potential state inconsistencies:

import logging

if __name__ == '__main__':
    logging.basicConfig(
        filename='myapp.log',
        filemode='w',
        format='%(message)s',
        level=logging.DEBUG
        )
    logging.info('some message')

further config

There may be times when you won't know certain things until the time of execution (passing a filename as a command line argument, for example). In this case, it might make sense to write a more malleable setup function for your logging, which you could then even move to a utils file for your project:

# __main__.py
import logging

def customize_logging(fname=None, fmt=None, dfmt=None, level=None):
    """
    logging.basicConfig parameters:
    :param filename: use a FileHandler with file filename
    :param filemode: mode to open file in if filename is specified
    :param format: use this format string for the handler
    :param datefmt: use this to format date/time
    :param level: set the root logger to this level
    """
    bc_args = {}
    if fname:
        bc_args['filename'] = fname
        bc_args['filemode'] = 'w'
    if fmt:
        bc_args['format'] = fmt
    if dfmt:
        bc_args['datefmt'] = dfmt
    if level:
        bc_args['level'] = level
    logging.basicConfig(**bc_args)

if __name__ == '__main__':
    customize_logging(
        fname='myapp.log',
        fmt='\n[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s',
        dfmt='%Y-%m-%d %H:%M:%S',
        level=logging.DEBUG
        )
    logging.info('some message')

One of the largest benefits of using logging over print (IMHO) is that it lets you write output statements into your code without hesitation, knowing you can always modify the logging config later (adding a FileHandler or changing the formatting, for instance) in a single location, without ever having to hunt down every place there is an output statement.
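
A rough sketch of what that looks like (the filename and format below are arbitrary): swapping the StreamHandler for a FileHandler in that single setup location redirects every existing log call to a file, without touching any of the call sites.

import logging

log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)

# swap the StreamHandler for a FileHandler; every existing
# log call now writes to the file instead of the console
handler = logging.FileHandler('myapp.log', mode='w')
handler.setFormatter(logging.Formatter(
    fmt='[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
    ))
log.addHandler(handler)

log.info('this now goes to myapp.log')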

named loggers

coming soon!
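
The core idea, in brief: logging.getLogger(name) always returns the same logger object for a given name, and dotted names form a parent/child hierarchy whose records propagate up to handlers on ancestor loggers. A minimal illustration (the names 'myapp' and 'myapp.db' are made up):

import logging

logging.basicConfig(format='%(name)s: %(message)s', level=logging.DEBUG)

# loggers are singletons keyed by name; these are the same object
assert logging.getLogger('myapp') is logging.getLogger('myapp')

# dotted names form a hierarchy: 'myapp.db' is a child of 'myapp',
# and its records propagate up to the handler on the root logger
parent = logging.getLogger('myapp')
child = logging.getLogger('myapp.db')

parent.info('from the parent logger')
child.info('from the child logger')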

custom config

Building upon the basic config, it actually takes a surprisingly small amount of work to further customize logging by adding more handlers or altering the output formatting, and that code can be wrapped up in a function if desired:

import logging

def customize_logging():
    # gets a named logger using the module path of this file
    log = logging.getLogger(__name__)

    # don't propagate to base loggers
    log.propagate = False

    # make sure we log everything all the way down to debug statements
    log.setLevel(logging.DEBUG)

    # create custom format for log messages
    formatter = logging.Formatter(
        fmt='\n[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
        )

    # a StreamHandler object writes to the terminal by default, but
    # one of the benefits of doing custom logging setup is using a
    # file handler for logging to files!
    handler = logging.StreamHandler()

    # add formatting to the console handler
    handler.setFormatter(formatter)

    # don't forget to add the handler to the logger
    log.addHandler(handler)
    return log
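
Using it is then just a matter of calling it from wherever execution starts; a purely illustrative example:

if __name__ == '__main__':
    log = customize_logging()
    log.debug('debug details with the custom format')
    log.info('some message')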

config via json

coming soon!
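
In the meantime, the short version is that logging.config.dictConfig() accepts a plain dict, which can just as easily be loaded from a JSON file or string; a minimal sketch (the config below is only an example):

import json
import logging
import logging.config

LOGGING_JSON = """
{
    "version": 1,
    "disable_existing_loggers": false,
    "formatters": {
        "simple": {
            "format": "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S"
        }
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "simple",
            "level": "DEBUG"
        }
    },
    "root": {
        "level": "DEBUG",
        "handlers": ["console"]
    }
}
"""

logging.config.dictConfig(json.loads(LOGGING_JSON))
logging.getLogger(__name__).info('configured from json')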

exception logging

A very important use case for logging is outputting exception info, particularly when program execution will continue afterwards. The following examples show a few ways of catching and logging exceptions.

import logging


def exceptions_with_logging(logger):
    try:
        raise ValueError('ERROR_1')
    except Exception:
        # you can use the normal logging methods and set the
        # exception info param to True and let it do the work for you
        logger.error('caught exception', exc_info=True)

    try:
        raise ValueError('ERROR_2')
    except Exception:
        # you can also use the logging exception method as a shortcut
        # to print almost the same information
        logger.exception('caught exception')

    try:
        raise ValueError('ERROR_3')
    except Exception as exc:
        # alternatively you can use a combination of your own formatting
        # with the exception info parameter
        logger.error('[%s] %s' % (exc.__class__.__name__, exc), exc_info=True)


def setup_logger():
    """
    it's not strictly necessary to configure logging, but it's
    usually a good idea, and necessary most of the time for anything
    aside from simple scripts... but technically you can just
    ``import logging`` and start using it without messing with config
    """
    # gets a named logger using the module path of this file
    log = logging.getLogger(__name__)

    # don't propagate to base loggers
    log.propagate = False

    # make sure we log everything all the way down to debug statements
    log.setLevel(logging.DEBUG)

    # create custom format for log messages
    formatter = logging.Formatter(
        fmt='\n[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
        )

    # a StreamHandler object writes to the terminal by default, but
    # one of the benefits of doing custom logging setup is using a
    # file handler for logging to files!
    handler = logging.StreamHandler()

    # add formatting to the console handler
    handler.setFormatter(formatter)

    # don't forget to add the handler to the logger
    log.addHandler(handler)
    return log


if __name__ == '__main__':
    """
    the name == main thing is common in scripts and modules which have
    code to be executed only when actually 'running' it. This prevents
    running code if this module is simply imported and not 'run'

    >>>
    ERROR:root:caught exception
    Traceback (most recent call last):
      File "<stdin>", line 3, in exceptions_with_logging
    ValueError: ERROR_1

    ERROR:root:caught exception
    Traceback (most recent call last):
      File "<stdin>", line 9, in exceptions_with_logging
    ValueError: ERROR_2

    ERROR:root:[ValueError] ERROR_3
    Traceback (most recent call last):
      File "<stdin>", line 15, in exceptions_with_logging
    ValueError: ERROR_3

    [2015-02-19 12:18:42] INFO [__main__:42] 
    --- CUSTOM LOGGING STARTS HERE ---

    [2015-02-19 12:18:42] ERROR [__main__:7] caught exception
    Traceback (most recent call last):
      File "<stdin>", line 3, in exceptions_with_logging
    ValueError: ERROR_1

    [2015-02-19 12:18:42] ERROR [__main__:13] caught exception
    Traceback (most recent call last):
      File "<stdin>", line 9, in exceptions_with_logging
    ValueError: ERROR_2

    [2015-02-19 12:18:42] ERROR [__main__:19] [ValueError] ERROR_3
    Traceback (most recent call last):
      File "<stdin>", line 15, in exceptions_with_logging
    ValueError: ERROR_3
    >>>
    """
    # run the exceptions using the base logging
    exceptions_with_logging(logging)

    # now let's run the same tests as before, but with the configured
    # logger instead of the default one
    log = setup_logger()
    log.info('\n--- CUSTOM LOGGING STARTS HERE ---\n')
    exceptions_with_logging(log)

If you require more advanced manipulation of exception information, you can use the sys and traceback modules to extract and format the information before logging it; see this for examples.
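
A rough sketch of that approach: sys.exc_info() returns the type, value, and traceback of the exception currently being handled, and traceback.format_exception() turns them into printable lines.

import logging
import sys
import traceback

log = logging.getLogger(__name__)

try:
    raise ValueError('ERROR_4')
except ValueError:
    # sys.exc_info() is only meaningful inside an except block
    exc_type, exc_value, exc_tb = sys.exc_info()
    formatted = ''.join(traceback.format_exception(exc_type, exc_value, exc_tb))
    log.error('something went wrong:\n%s', formatted)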

Luckily, it is also relatively trivial to handle uncaught exceptions, which can be quite useful in certain situations; not all libraries document every exception they might raise, for example.

import logging
import sys
import traceback

def global_catch(exc_type, exc_value, exc_traceback):
    # use a named logger rather than the root logger, and avoid
    # shadowing the traceback module with the parameter name
    log = logging.getLogger(__name__)
    log.critical(''.join(traceback.format_tb(exc_traceback)))
    log.critical('{0}: {1}'.format(exc_type.__name__, exc_value))

# the original value when program execution began is stored in
# sys.__excepthook__ in case it needs to be restored
# https://docs.python.org/2/library/sys.html
sys.excepthook = global_catch

Whether or not you are concerned with logging uncaught exceptions, you can use this hook to automatically launch pdb in response to an unhandled exception.

# coming soon!
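
A bare-bones sketch of that idea (the name debug_hook is purely illustrative):

import pdb
import sys
import traceback

def debug_hook(exc_type, exc_value, exc_traceback):
    # print the traceback as usual, then drop into the debugger
    # at the frame where the exception was raised
    traceback.print_exception(exc_type, exc_value, exc_traceback)
    pdb.post_mortem(exc_traceback)

sys.excepthook = debug_hook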