Logging/metrics idea

Basic motivation

  • We want to observe our systems. Sometimes we want a log with messages in it; other times a metric is enough. BUT the times we want either of these can be arbitrary... we don't want to have to make a release of an app just because we're in the middle of an incident and want more detailed logs of something.
  • Want to untangle next-metrics from express
  • BUT ideally want any new thing to be reasonably backwards compatible

Sketch proposal

ft-observable

  • collects both logs & metrics in a unified interface
  • decorates with tags
  • .log, .count, .error, .time etc. (see the interface sketch after this list)
  • basically has the n-logger interface
  • Can do request tracing too (I have a prototype of how it could work in biz-ops-api)
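
A rough sketch of what the unified interface might look like. None of this is settled; the module name, the child/tag decoration and the method signatures are all illustrative guesses based on the n-logger shape:

```js
// Hypothetical usage of ft-observable's unified interface.
// The API mirrors n-logger, with metrics methods alongside the log methods.
const observable = require('ft-observable');

// decorate everything that follows with tags
const obs = observable.child({ system: 'biz-ops-api', env: process.env.NODE_ENV });

obs.log('info', 'record updated', { recordId: 'abc123' }); // message-style log
obs.count('record.update');                                // increment a counter
obs.error(new Error('upstream timeout'));                  // error log + error metric

const end = obs.time('db.query');                          // start a timer...
// ... do the work ...
end();                                                      // ... and record the duration
```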

ft-observable-graphite

ft-observable-splunk

ft-observable-prometheus

...

  • transport layers for each sink we may want to send stuff to, i.e. they decouple sending the log somewhere from the actual collection of it (see the wiring sketch after this list)
  • could be bundled as part of ft-observable, but keeping them as separate plugins helps us avoid the spaghettiness that next-metrics suffers from
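
One possible shape for the decoupling, purely as a sketch: transports register with ft-observable, which fans each collected event out to every sink. The use() method, option names and event shape below are assumptions, not an agreed API:

```js
// Hypothetical wiring of transport plugins into ft-observable.
const observable = require('ft-observable');
const graphite = require('ft-observable-graphite');
const splunk = require('ft-observable-splunk');

observable.use(graphite({ apiKey: process.env.FT_GRAPHITE_KEY }));
observable.use(splunk({ url: process.env.SPLUNK_URL }));

// A transport would just be a factory returning event handlers, e.g.
// module.exports = opts => ({
//   onMetric: ({ name, value, tags }) => { /* forward to graphite */ },
//   onLog: ({ level, message, fields, tags }) => { /* no-op for graphite */ }
// });
```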

ft-express-collector

ft-serverless-collector

ft-new-cool-framework-collector

...

  • uses ft-observable and some of the plugins listed above to log the standard metrics we know and love from n-express/next-metrics (see the middleware sketch after this list)
  • variants for serverless and other frameworks could be developed too
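
As a sketch, the express variant could be a thin piece of middleware that records the familiar request metrics through ft-observable. The metric names and middleware shape here are guesses at what next-metrics records today, not a spec:

```js
// Hypothetical ft-express-collector: middleware recording standard request metrics.
const observable = require('ft-observable');

module.exports = () => (req, res, next) => {
  const end = observable.time('express.request.duration'); // per-request timer
  res.on('finish', () => {
    end();
    observable.count(`express.request.status.${res.statusCode}`); // status code counter
  });
  next();
};
```

An app would then opt in with something like `app.use(require('ft-express-collector')());`.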

During normal operation most logs could be collected as metrics in graphite/prometheus, with only a sample sent to splunk (we could also have a convention whereby adding an alwaysLog: true property means a certain type of log is always sent to splunk, or maybe all errors are logged but only a sample of info)

During an incident, an ENV var could toggle sending more/all logs to splunk
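
Putting those two paragraphs together, the splunk transport's routing decision could look roughly like this. The alwaysLog property, the SEND_ALL_LOGS_TO_SPLUNK env var and the sample rate are placeholder names, not a decided convention:

```js
// Hypothetical check for whether a single log event gets forwarded to splunk.
const SAMPLE_RATE = 0.01; // e.g. 1% of routine logs during normal operation

const shouldSendToSplunk = event =>
  process.env.SEND_ALL_LOGS_TO_SPLUNK === 'true' || // incident switch: send everything
  (event.fields || {}).alwaysLog === true ||        // per-log-type override
  event.level === 'error' ||                        // errors always go through
  Math.random() < SAMPLE_RATE;                      // otherwise send a sample
```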

Could be a nice project to open source too. Could call it twitcher.
