On the seasonality example, we currently have only one model in production that is built using predictive modeling and that explicitly takes seasonality into account. Slide #10 mentions it - left column, item #4.
We use the following techniques to minimize the deterioration of models in production.
We use a lambda architecture at Indix, and our batch pipelines usually run every week. Let's assume we built a classifier model on Day 1. These pipelines use a prediction cache so that we can reuse the scoring output for already seen data. I did not mention this in my talk, but for deployment in batch mode the prediction cache - we call it IB (information base) internally - is an artifact in addition to the model container. Now let's assume the batch pipeline runs on Day 1, Day 8, Day 15, etc., and the model and prediction cache being used are from Day 1. For every run we monitor the number of predictions that did not come from the cache. If that number goes beyond a threshold, which is currently either a static number or a percentage of the total number of records scored, the team gets a notification. This notification can then be set to trigger the model building and training workflow.
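To make the cache-miss check above concrete, here is a minimal sketch in plain Python. The function name, arguments, and numbers are all illustrative, not our actual implementation:

```python
def cache_miss_alert(total_scored, cache_hits, static_limit=None, percent_limit=None):
    """Return True when the number of predictions NOT served from the
    prediction cache exceeds either a static count or a fraction of
    the total records scored."""
    misses = total_scored - cache_hits
    if static_limit is not None and misses > static_limit:
        return True
    if percent_limit is not None and misses > total_scored * percent_limit:
        return True
    return False

# Example: 100k records scored on Day 8; 88k answers came from the Day 1 cache.
# 12% of predictions missed the cache, above a 10% limit, so the team is notified.
print(cache_miss_alert(100_000, 88_000, percent_limit=0.10))  # True
```

In the real pipeline the alert would notify the team and could be wired to kick off the model building and training workflow.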
We have built a framework on top of Spark that allows us to define arbitrary rules on any dataset and generate statistics over it. We use this to encode our assumptions about the data that was used to build the model. This DQ pipeline usually runs on a schedule, works on the latest published dataset that serves as the input to the model, and fails if those assumptions break. We automate it using MDA pipelines (our internal data platform - we spoke about this as well at the conference).
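The idea of encoding assumptions as rules can be sketched as follows. This is plain Python rather than our Spark framework, and every name and threshold here is invented for illustration:

```python
def fraction_non_null(field):
    """Build a rule that computes the fraction of rows with a non-null field."""
    def rule(rows):
        return sum(1 for r in rows if r.get(field) is not None) / len(rows)
    return rule

# Each rule pairs a statistic with the minimum value assumed at model-build time.
rules = [
    ("price_non_null", fraction_non_null("price"), 0.95),
]

def run_dq(rows, rules):
    """Evaluate every rule; fail the pipeline if any assumption is broken."""
    failures = [name for name, rule, minimum in rules if rule(rows) < minimum]
    if failures:
        raise RuntimeError(f"DQ assumptions broken: {failures}")

rows = [{"price": 10.0}, {"price": None}, {"price": 3.5}, {"price": 7.0}]
# Only 75% of rows have a price, below the 95% assumption, so this run fails.
```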
We have a periodic data quality initiative where we take a sizable sample of our data and manually score it using our internal data turk tool. The frequency varies by model, but the most common is quarterly. We then compare the results to the previous baseline and analyze them to understand which models might have issues. Based on this, action items are generated for the teams.
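The baseline comparison step can be sketched like this; the model names, accuracy numbers, and drop threshold are all made up for the example:

```python
# Accuracy on the previous manually scored sample (the baseline) vs. this quarter's.
baseline = {"brand_classifier": 0.93, "category_classifier": 0.89}
current  = {"brand_classifier": 0.92, "category_classifier": 0.81}

def models_needing_review(baseline, current, max_drop=0.03):
    """Flag models whose accuracy on the hand-scored sample dropped by
    more than max_drop versus the previous baseline."""
    return [m for m in current if baseline[m] - current[m] > max_drop]

print(models_needing_review(baseline, current))  # ['category_classifier']
```

Models on the flagged list would then get action items assigned to the owning team.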