{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SageMaker/DeepAR demo on electricity dataset\n",
"\n",
"This notebook complements the [DeepAR introduction notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/deepar_synthetic/deepar_synthetic.ipynb). \n",
"\n",
"Here, we will consider a real use case and show how to use DeepAR on SageMaker for predicting energy consumption of 370 customers over time, based on a [dataset](https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014) that was used in the academic papers [[1](https://media.nips.cc/nipsbooks/nipspapers/paper_files/nips29/reviews/526.html)] and [[2](https://arxiv.org/abs/1704.04110)]. \n",
"\n",
"In particular, we will see how to:\n",
"* Prepare the dataset\n",
"* Use the SageMaker Python SDK to train a DeepAR model and deploy it\n",
"* Make requests to the deployed model to obtain forecasts interactively\n",
"* Illustrate advanced features of DeepAR: missing values, additional time features, non-regular frequencies and category information\n",
"\n",
"Running this notebook takes around 40 min on a ml.c4.2xlarge for the training, and inference is done on a ml.m4.xlarge (the usage time will depend on how long you leave your served model running).\n",
"\n",
"For more information see the DeepAR [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html) or [paper](https://arxiv.org/abs/1704.04110), "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"\n",
"import sys\n",
"from urllib.request import urlretrieve\n",
"import zipfile\n",
"from dateutil.parser import parse\n",
"import json\n",
"from random import shuffle\n",
"import random\n",
"import datetime\n",
"import os\n",
"\n",
"import boto3\n",
"import s3fs\n",
"import sagemaker\n",
"import numpy as np\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from __future__ import print_function\n",
"from ipywidgets import interact, interactive, fixed, interact_manual\n",
"import ipywidgets as widgets\n",
"from ipywidgets import IntSlider, FloatSlider, Checkbox"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# set random seeds for reproducibility\n",
"np.random.seed(42)\n",
"random.seed(42)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"sagemaker_session = sagemaker.Session()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before starting, we can override the default values for the following:\n",
"- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.\n",
"- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"s3_bucket = 'sagemaker-itoc' # replace with an existing bucket if needed\n",
"s3_prefix = 'itoc' # prefix used for all data stored within the bucket\n",
"\n",
"role = sagemaker.get_execution_role() # IAM role to use by SageMaker"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"region = sagemaker_session.boto_region_name\n",
" \n",
"s3_data_path = \"s3://{}/{}/data\".format(s3_bucket, s3_prefix)\n",
"s3_output_path = \"s3://{}/{}/output\".format(s3_bucket, s3_prefix)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we configure the container image to be used for the region that we are running in."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"image_name = sagemaker.amazon.amazon_estimator.get_image_uri(region, \"forecasting-deepar\", \"latest\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_csv(\"s3n://{}/{}\".format(s3_bucket, 'juyo-2017.csv'))"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"df['DATETIME'] = pd.to_datetime(df['DATE']+' '+df[\"TIME\"])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"df = df.drop(columns=['DATE', 'TIME']).set_index('DATETIME')"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>実績(万kW)</th>\n",
" </tr>\n",
" <tr>\n",
" <th>DATETIME</th>\n",
" <th></th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>2017-04-01 00:00:00</th>\n",
" <td>654</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 01:00:00</th>\n",
" <td>660</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 02:00:00</th>\n",
" <td>685</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 03:00:00</th>\n",
" <td>706</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 04:00:00</th>\n",
" <td>696</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 05:00:00</th>\n",
" <td>667</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 06:00:00</th>\n",
" <td>665</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 07:00:00</th>\n",
" <td>662</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 08:00:00</th>\n",
" <td>689</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 09:00:00</th>\n",
" <td>727</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 10:00:00</th>\n",
" <td>680</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 11:00:00</th>\n",
" <td>671</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 12:00:00</th>\n",
" <td>627</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 13:00:00</th>\n",
" <td>626</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 14:00:00</th>\n",
" <td>601</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 15:00:00</th>\n",
" <td>605</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 16:00:00</th>\n",
" <td>612</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 17:00:00</th>\n",
" <td>627</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 18:00:00</th>\n",
" <td>675</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 19:00:00</th>\n",
" <td>689</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 20:00:00</th>\n",
" <td>671</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 21:00:00</th>\n",
" <td>646</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 22:00:00</th>\n",
" <td>619</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-01 23:00:00</th>\n",
" <td>597</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-02 00:00:00</th>\n",
" <td>594</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-02 01:00:00</th>\n",
" <td>602</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-02 02:00:00</th>\n",
" <td>641</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-02 03:00:00</th>\n",
" <td>668</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-02 04:00:00</th>\n",
" <td>657</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2017-04-02 05:00:00</th>\n",
" <td>622</td>\n",
" </tr>\n",
" <tr>\n",
" <th>...</th>\n",
" <td>...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-30 18:00:00</th>\n",
" <td>669</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-30 19:00:00</th>\n",
" <td>673</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-30 20:00:00</th>\n",
" <td>653</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-30 21:00:00</th>\n",
" <td>629</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-30 22:00:00</th>\n",
" <td>611</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-30 23:00:00</th>\n",
" <td>584</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 00:00:00</th>\n",
" <td>570</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 01:00:00</th>\n",
" <td>585</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 02:00:00</th>\n",
" <td>629</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 03:00:00</th>\n",
" <td>659</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 04:00:00</th>\n",
" <td>659</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 05:00:00</th>\n",
" <td>629</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 06:00:00</th>\n",
" <td>621</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 07:00:00</th>\n",
" <td>629</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 08:00:00</th>\n",
" <td>638</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 09:00:00</th>\n",
" <td>642</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 10:00:00</th>\n",
" <td>624</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 11:00:00</th>\n",
" <td>617</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 12:00:00</th>\n",
" <td>594</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 13:00:00</th>\n",
" <td>612</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 14:00:00</th>\n",
" <td>601</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 15:00:00</th>\n",
" <td>605</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 16:00:00</th>\n",
" <td>612</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 17:00:00</th>\n",
" <td>606</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 18:00:00</th>\n",
" <td>625</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 19:00:00</th>\n",
" <td>635</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 20:00:00</th>\n",
" <td>618</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 21:00:00</th>\n",
" <td>593</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 22:00:00</th>\n",
" <td>560</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-03-31 23:00:00</th>\n",
" <td>538</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"<p>8760 rows × 1 columns</p>\n",
"</div>"
],
"text/plain": [
" 実績(万kW)\n",
"DATETIME \n",
"2017-04-01 00:00:00 654\n",
"2017-04-01 01:00:00 660\n",
"2017-04-01 02:00:00 685\n",
"2017-04-01 03:00:00 706\n",
"2017-04-01 04:00:00 696\n",
"2017-04-01 05:00:00 667\n",
"2017-04-01 06:00:00 665\n",
"2017-04-01 07:00:00 662\n",
"2017-04-01 08:00:00 689\n",
"2017-04-01 09:00:00 727\n",
"2017-04-01 10:00:00 680\n",
"2017-04-01 11:00:00 671\n",
"2017-04-01 12:00:00 627\n",
"2017-04-01 13:00:00 626\n",
"2017-04-01 14:00:00 601\n",
"2017-04-01 15:00:00 605\n",
"2017-04-01 16:00:00 612\n",
"2017-04-01 17:00:00 627\n",
"2017-04-01 18:00:00 675\n",
"2017-04-01 19:00:00 689\n",
"2017-04-01 20:00:00 671\n",
"2017-04-01 21:00:00 646\n",
"2017-04-01 22:00:00 619\n",
"2017-04-01 23:00:00 597\n",
"2017-04-02 00:00:00 594\n",
"2017-04-02 01:00:00 602\n",
"2017-04-02 02:00:00 641\n",
"2017-04-02 03:00:00 668\n",
"2017-04-02 04:00:00 657\n",
"2017-04-02 05:00:00 622\n",
"... ...\n",
"2018-03-30 18:00:00 669\n",
"2018-03-30 19:00:00 673\n",
"2018-03-30 20:00:00 653\n",
"2018-03-30 21:00:00 629\n",
"2018-03-30 22:00:00 611\n",
"2018-03-30 23:00:00 584\n",
"2018-03-31 00:00:00 570\n",
"2018-03-31 01:00:00 585\n",
"2018-03-31 02:00:00 629\n",
"2018-03-31 03:00:00 659\n",
"2018-03-31 04:00:00 659\n",
"2018-03-31 05:00:00 629\n",
"2018-03-31 06:00:00 621\n",
"2018-03-31 07:00:00 629\n",
"2018-03-31 08:00:00 638\n",
"2018-03-31 09:00:00 642\n",
"2018-03-31 10:00:00 624\n",
"2018-03-31 11:00:00 617\n",
"2018-03-31 12:00:00 594\n",
"2018-03-31 13:00:00 612\n",
"2018-03-31 14:00:00 601\n",
"2018-03-31 15:00:00 605\n",
"2018-03-31 16:00:00 612\n",
"2018-03-31 17:00:00 606\n",
"2018-03-31 18:00:00 625\n",
"2018-03-31 19:00:00 635\n",
"2018-03-31 20:00:00 618\n",
"2018-03-31 21:00:00 593\n",
"2018-03-31 22:00:00 560\n",
"2018-03-31 23:00:00 538\n",
"\n",
"[8760 rows x 1 columns]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"num_timeseries = df.shape[1]\n",
"data_kw = df.resample('2H').sum() / 2\n",
"timeseries = []\n",
"for i in range(num_timeseries):\n",
" timeseries.append(np.trim_zeros(data_kw.iloc[:,i], trim='f'))"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[DATETIME\n",
" 2017-04-01 00:00:00 657.0\n",
" 2017-04-01 02:00:00 695.5\n",
" 2017-04-01 04:00:00 681.5\n",
" 2017-04-01 06:00:00 663.5\n",
" 2017-04-01 08:00:00 708.0\n",
" 2017-04-01 10:00:00 675.5\n",
" 2017-04-01 12:00:00 626.5\n",
" 2017-04-01 14:00:00 603.0\n",
" 2017-04-01 16:00:00 619.5\n",
" 2017-04-01 18:00:00 682.0\n",
" 2017-04-01 20:00:00 658.5\n",
" 2017-04-01 22:00:00 608.0\n",
" 2017-04-02 00:00:00 598.0\n",
" 2017-04-02 02:00:00 654.5\n",
" 2017-04-02 04:00:00 639.5\n",
" 2017-04-02 06:00:00 606.5\n",
" 2017-04-02 08:00:00 588.5\n",
" 2017-04-02 10:00:00 583.0\n",
" 2017-04-02 12:00:00 570.5\n",
" 2017-04-02 14:00:00 546.0\n",
" 2017-04-02 16:00:00 612.5\n",
" 2017-04-02 18:00:00 679.5\n",
" 2017-04-02 20:00:00 662.0\n",
" 2017-04-02 22:00:00 603.0\n",
" 2017-04-03 00:00:00 603.5\n",
" 2017-04-03 02:00:00 661.5\n",
" 2017-04-03 04:00:00 671.0\n",
" 2017-04-03 06:00:00 711.5\n",
" 2017-04-03 08:00:00 736.5\n",
" 2017-04-03 10:00:00 721.5\n",
" ... \n",
" 2018-03-29 12:00:00 683.0\n",
" 2018-03-29 14:00:00 694.0\n",
" 2018-03-29 16:00:00 673.5\n",
" 2018-03-29 18:00:00 677.0\n",
" 2018-03-29 20:00:00 635.5\n",
" 2018-03-29 22:00:00 576.5\n",
" 2018-03-30 00:00:00 563.5\n",
" 2018-03-30 02:00:00 626.0\n",
" 2018-03-30 04:00:00 631.5\n",
" 2018-03-30 06:00:00 630.5\n",
" 2018-03-30 08:00:00 677.0\n",
" 2018-03-30 10:00:00 680.0\n",
" 2018-03-30 12:00:00 659.0\n",
" 2018-03-30 14:00:00 667.0\n",
" 2018-03-30 16:00:00 663.5\n",
" 2018-03-30 18:00:00 671.0\n",
" 2018-03-30 20:00:00 641.0\n",
" 2018-03-30 22:00:00 597.5\n",
" 2018-03-31 00:00:00 577.5\n",
" 2018-03-31 02:00:00 644.0\n",
" 2018-03-31 04:00:00 644.0\n",
" 2018-03-31 06:00:00 625.0\n",
" 2018-03-31 08:00:00 640.0\n",
" 2018-03-31 10:00:00 620.5\n",
" 2018-03-31 12:00:00 603.0\n",
" 2018-03-31 14:00:00 603.0\n",
" 2018-03-31 16:00:00 609.0\n",
" 2018-03-31 18:00:00 630.0\n",
" 2018-03-31 20:00:00 605.5\n",
" 2018-03-31 22:00:00 549.0\n",
" Freq: 2H, Name: 実績(万kW), Length: 4380, dtype: float64]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"timeseries"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.axes._subplots.AxesSubplot at 0x7f6836c08a90>"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXoAAAEtCAYAAAAGK6vfAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAIABJREFUeJzsvXmYJFWZLv6eiFxq632j6Qa6m01ZW2hZBEVF5QIOy7gM6uB+ca7IdZnrXJw7i9dR0Rn9eceNEcddlEFcWUUQEUGWZmvWht7ofe+u7lozM+L8/oj4Ir5z4sSpzMrMqqiqeJ+nn66MyIw8kRHxnfe83yaklMiRI0eOHJMXzngPIEeOHDlytBe5oc+RI0eOSY7c0OfIkSPHJEdu6HPkyJFjkiM39Dly5MgxyZEb+hw5cuSY5MgNfY4cOXJMcuSGPkeOHDkmOXJDnyNHjhyTHLmhz5EjR45JjsJ4DwAA5s6dK5csWTLew8iRI0eOCYVHH310t5Ry3kjvy4ShX7JkCVauXDnew8iRI0eOCQUhxEv1vC+XbnLkyJFjkiM39Dly5MgxyZEb+hw5cuSY5MgNfY4cOXJMcuSGPkeOHDkmOXJDnyNHjhyTHLmhn+RYvf0gPD9vF5kjx1RGbugnMZ7bdgDn/b8/4hv3rBnvoeTIkWMckRv6SYyX9gwAAJ7a0jvOI8mRI8d4Ijf0kxgVzwcAlAv5Zc6RYyojtwCTGMNVDwBQyg19jhxTGrkFmMTIGX2OHDmA3NBPalRqZOjdcR5Jjhw5xhO5oZ/EGA4NfS7d5MgxtZFbgEkMYvQlN7/MOXJMZeQWYBJjoOKN9xBy5MiRAeSGfhJjoFIDAFR9f5xHkiNHjvFEbugnMfqHA0Zf8/ISCDlyTGWMaOiFEMcKIZ5g/w4IIT4mhPi0EGIL234B+8ynhBBrhBCrhRDntfcUcqShfzhg9DUvZ/Q5ckxljNgzVkq5GsByABBCuAC2APglgPcB+IqU8kv8/UKI4wBcBuB4AIcCuEsIcYyUMheMxxiDYcJUNS9qliPHlEaj0s25ANZKKW0NaS8GcIOUclhKuR7AGgCnjXaAOUYPXwYGPmf0OXJMbTRq6C8D8FP2+iNCiFVCiO8KIWaF2xYB2MTesznclmOMQeWJc40+R46pjboNvRCiBOAiAD8LN10L4EgEss42AF+mtxo+nrA0QogrhBArhRArd+3a1dCgc9QHYvS5dJMjx9RGI4z+fACPSSl3AICUcoeU0pNS+gC+jVie2QzgMPa5xQC26geTUl4npVwhpVwxb9680Y0+hxUUVZlLNzlyTG00YujfASbbCCEWsn2XAng6/Ps3AC4TQpSFEEsBHA3g4WYHmqNxRIw+l25y5JjSGDHqBgCEEF0A3gjgQ2zzvwohliOQZTbQPinlM0KIGwE8C6AG4Mo84mZ84JEzNk+YypFjSqMuQy+lHAAwR9t2ueX9nwPwueaGlqNZkDSfO2Nz5JjayDNjJzF8n6SbnNHnyDGVkRv6NmHlhr34xWObx3UMURx9HnWTI8eURl3STY7G8db/+DMA4KKTD0VhnMoEx3H0OaPPkWMqY0oz+u29Q1hy9a2494X2xfGv3nGwbccGACklvvTb1XhpT79hX/B/HnWTI8fUxpQ29Ks27wcA/OjPG1p+7AXTywCAJzbtb/mxOTbtHcTX71mDD/xgZWJfHnWTI0cOYIob+mLYYo9a7nE8vaUXS66+FQ+v3zuqY8/oLAIAtu0fGv0A64AI85AHwkqVHHGtm+wzeill5DzOkSNHazGlDX3BCaykKSrlz2v3AAB++8z2UR2bujvt7hse5ejqQyTPGIwkGc51u/ux82B7J5xmcf1DG7Hs72/Dnjb/XjlyTEVMaUM/XA0MvEnDLrjpk0A96AsZdrsNfSUc366Dw7hTm5S47f+gQdrJEm56NIhQ2mDwNeTIkaM5TGlDP1QL67UbjDlFyozG0P957R7sH6gCAHb3VZoY4cioMNnpih89quzzmKV/btuBto6jWXQUg997qJr7E3LkaDWmtqEPjUrFoNHHsk5junHfcA3v+PaD0et2M3p9IvqPe9dGf0sZj92XcQ/ZLKKj6AKIV0I5cuRoHaa4oU9n9MNV6rfaGMO85/md0d+lgoPdfcOKwW019LF/4fbno7899r2eL3HVTx5v2ziaRUchMPT7B9q7AsqRYyoiN/Qws/YByz4bOGue3lHEUNWPdPR2wHZsXwLvOO1wfObi4wEAK1/a17ZxNAuSbvaFkleOHDlahylt6Cms0sToh8KomeFaY4U3uQN0X8hO2xnGrstOrhP3fZFSwnWAd552OKZ3FLDiiFn6xzMDEcaJ7ssZfY4cLceUNvTE6E1x9NRY+8BQY5oxOUAXzujA5WccEWxrq3SjHvvlC6cpY3GEQMF1sHReT6Zr3tC12N+fM/ocOVqNKW3oB0PWXq35kFLioXV7Ij2dDP3BBg09JSndfNXZOHx2FwDAa2PCEl+NLJrZqSRH+RJwQqZccIQShZMlPL5xH25/OggN3T+YM/ocOVqNKW3oo/BK38dPHt6Iv7ruQdwRGhxKeGo0UoWMqStEJKO0swQBN/Qzu4qKMfdDRg8Ekk5WSyFc+s0Hor/z8MocOVqPqW3oWcLU+l1Bos7GvQPhvpjtNwIytA4z9O1k0lx2mtVVUg29lCDJXmf0Z3/x93jz1+5r27hGi0Z9IjlyjCfuWb1zQtyzU9zQBxeIG0D6i2SdSoOyC8nxjhPH4rdXo48N/YyuoqLDe1JGk03A6ON9m/cN4ukt2UiiKhfi2zBn9DkmCp7a3Iv3fe8RfO7W58Z7KCNiihv62KiQESSbTBp9pcHZmoy66zDppp0afcjov/veFegsukrcvy/jaJYsa/TTwwJwgNkxniNHFnFwOAgceKHNpchbgSlt6PmSSzeCkaO2QSM91tINje+0pXNQ0Fi77wfhlQDgOk5mq1jO4Ia+2tplsOdL9Oax+TnagFJUJiWbzxXHFDf0nNEHf0uoUTeN1rqhipGc0bdTuqGEqaIrUHCFQaOfAIy+I250NtRiQ/8vtzyLkz9zZzRx58jRKriW6rdZw6Qx9D9buQlb9g829Bl+gYjt6tJNzW+sTjq91RECBSf4edtpYClhquQ6KDgOk6CkEl7puuaoG1Odn7FGdzk29K2Wbq5/6CUA2TjPHJML9FxPhHtrUhj6wYqHT960Cu9ixcTqAZcy9GSiwUp88RopYUDs3RGIZJO2avSej6IrIEKpyNN8DSMx+oND4y9rVD0fi2Z24p2nH95yRk/L6qyGluaYuKB7a1IweiHEsUKIJ9i/A0KIjwkhZgshfieEeDH8f1b4fiGE+KoQYo0QYpUQ4pR2nwQ9xDsONFYpUmH0jAkDwCCLn2/kQgax6wgN79gw+mI4oxRYrHzsFEb4vzBmxvYOjr+hr9R8LJvXjZmdxbY5Y7OcFZwj23hxx0Ejaye7MCk0einlainlcinlcgCnAhgA8EsAVwO4W0p5NIC7w9cAcD6Ao8N/VwC4th0D5
yDG3KgWXlGkm+Dvmi8hpcRg1Yu046Gqjyfr7P3KQxrHKrySDL3riOi3oAxdHnVjkqAaLfHQDlTCc+gouqj5suGKofUgi6yr6vn4wu3P587iMcLaXX0NE5t9/RW88St/xN//8qnEPiJVk1G6ORfAWinlSwAuBvCDcPsPAFwS/n0xgB/KAA8CmCmEWNiS0aagSo7UBg1qzZNRDPdQFE4ZVJv0ZRz29+U7V+Pib9yPpzb3AgB6B6r4vzc/Y0yU8H0ZGdc46qad1SslSgXO6GXYfzXYr2bGZpPRV2sSJdeJm4+06MHZdTBe4WUx4uiWVVvxH/euxcmfuRPfu3/9eA9n0uPcL9+Lv/janxr6DPVH0Lu3AZNMutFwGYCfhn8vkFJuA4Dw//nh9kUANrHPbA63tQ0xi23sc1XPR1cpqIPeX4kN/VCoz1PY30Nhg/Dd/YHh+MpdL+B792/Azx/dkjimLyVczdC3W6OnMC/qiuXLmNFz6cYkIWUhGqXi+SgWHJTDmvStCrG88vrHor+zqNH3sdXUNbc9b3lnjlaBMt/rha24IT3X7SxD3irUbeiFECUAFwH42UhvNWxLWBghxBVCiJVCiJW7du2qdxhG6HJFvQgMfSDPUE2b4ZqPgWrwNxl6KmxWdtWfa7DqYd2uPvzjr56OZBHPh5KNGmwL9n3//vVYcvWtLe2iVKn5UX9bXlsndgqTdOMYGf1wzcOqzfuxvXf8modXan5bGD1frWRRR+X+iIlgLKYiBixEKNbos3/tGmH05wN4TEq5I3y9gySZ8H9qrbQZwGHsc4sBbNUPJqW8Tkq5Qkq5Yt68eY2PnCGWbhr8nCfRGTL6geGY0RPLJUPfHxpmOnyRNQ7/mx8/ih89+BLW7uoDkKwvA8Qa/U8e3ggA2LinMVZhg+cbfAK+hDRIN2mM/qKv34/X/Ns9LRtTo6h4PkoFEbUTbFXkzTGHqCWbs4Y8C3jsMNrrP2AhZZPKGcvwDsSyDQD8BsB7wr/fA+DXbPu7w+ibMwD0ksTTDjy9pRerNtfnKNWhSjfE6L1ouTa9IzD09JoMEGni1ZofXWTO3nVGT0x6ZlcJQGtL8Xq+QSryZbS64ZOOSb7gvonxAslP5C8ZblG9G+7UzSLryg392GG0158zej2YoRat4rNv6AsjvwUQQnQBeCOAD7HNXwBwoxDiAwA2AnhbuP02ABcAWIMgQud9LRutAW9u0LnCUfV8dBY1Ru/5kfGb3qn+PFQbh6JcKp4fGU9ywHpSLQ0MxPXoZ4YrhFZGWZiifGqeZBq9ndG3SiZpBhQiWiZGz5zcQ1UPVc/HtI5i2scTWLurD5+5+Vml/2wWwytbnTOQIx1cGusfrilJejYMsGs0UPXQwz6XRfKQhrrOVko5AGCOtm0Pgigc/b0SwJUtGV0bIaVE1ZNJRl/1o2QpXoMFSDL6Ss2PjDixRyklHF2jD43urJDRt7IvqlJzPpyAOHPn4ZVk7DgzyYIztur5KBXMjP7Sbz6A57YdwIYvXFj38R5Ysxv3vqD6fbL4UOZhlWMHXm58d99w3YaeypcDwQShGvrskYc0TIrM2NGAjB45Y8n2VTwfP3pwA4CkoR9m5QbovXQcYgxcStFLIMzoCo7Xyr6onNEXmXxEtj5m9A6kDIw8Z7cHxjkz1veDCbfoOtFKia88ntvWeCnlzlLyIc5ieGUWQlunAqSU+P4DG6LX9QZDPLJhL75y1wvRa73bXNalQY4pa+jpwpAzljBc9fHbZwJ/8/QURk+ySKXmR3/T7G6KuiHDSk7cvf2t1ej1FQSXbiKN3o3Hwg1pmrFZs7OvZWO0gRzppYITrUxoRbKnL46DN+VI/Mstz+LD1z+a2M7zFmj1lcXwSn2SzaWc9mDNzj587fdrotf1rmI3aaGY/doEwQnTYMav3RQ29MFF6tYNPZuZ50/rUPZFrQfDz+4bqGB3XyXcFnzOlxIOi10HYsNDh97XQkMfxO0Hf5Mx95gxT/gLfKkYPZN8cMPDG/GG/+9e/HntnpaNMw1qUbZgjDRJbdoXF6nrr3h4dqvK7r/zp/W47antuOf5nTj2H26P6vbwJXUHGfoMMnrdGbuOyQQ5WgedwdtCJjlmd5eU17qh5wEMWZ+kp7ChJ0avLvPp4r19xWIsmF5W9sWtB4P/b3sqzpYjDdDz1dLAwTaE/wd/DLdwmadG+RB7lcaiZsE+H5zcmhj9Xc8FK5r9LZSY0kBGuVRwEglmfGn85TtX44Kv3ofHN+5LHOPztz2H4ZqPzeHEwFcstGLLojNWz/tYvSMbHb8mG/SIsnoNvb6I1CcMTpiGKtlbMXJMeUPfpTF6Mi6LZnZFmjEQMGLK2DSxwwpj9Hq4o87ob121DWd94fcNl1U2wWdSETfmUcJUYnWhMXqDoSeD2VUu4Fv3rsXTW3qbHmca6CEsurGhJwPIjfNvw6btJv/G/vAcYpmGGfowkieLGqoervfctux3KpqISDL6+jR6/Z7pr+gafXz9hjLeN3bKGnq6SLqhJ4NdcEVkOIBA4qGldtWg91ZZdq6TMLyk38ef27J/ELeuSuSRNQy9Lyyga/TJsYyk0dMENFz1cM3tzzcVwjoS6GEquiLh0+Dj3Bpm7nYbHK208oiK27HfmZKwsijdcDs/p7uk1ObJ0Rg8X2LJ1bfiG/esSezTnaj1Mnp9Fdg3rH6Oh2xmIXrNhkln6Kuej6t/vgqb99mzTysGZ2zBERHDdITAtLB65SfPOxYdRTfS4UxGo2qIuiGDH5VH0NaCrXB4cqmoyDR6P6HRxxEttREMPT0Y+gOiH7sVIGNecJOtF00s3FQJVC8uVTNKN9lj9Hwi6+koJPIc6PVjG/dh58HxK1ExEUDX/t9+uzqxT3d6c6O8cc8Allx9K3737A79Y9Exf/nhVwEwOGM5o881+rHFI+v34oZHNuGTP1tl3D8cOVST0s1FJx8aGXrXCUIv13zufFz5uqM0Q59em9rzYWH06oP8YgsMvS/NGj19lak8Ah+HLTtzv2ESePk/3YEPs2JhzYLX5KEJ0kv5vQB77HIU4qo4Y0m6ySKjj8dUdB1lMnpgzW4c+fe34ektvfjLbz6AS75+/3gMccLAVufqgHYfc0b/4Log4OD2p5PJ+3TPzO4uwRGmqBvG6HND3z6YQu6KVJ7AYIyf2dqLY//hDtz5zPZoNu4sBqx9+WEz4TBGT0aTKkJ2FJ3YGWswQPQ5yWrd6AxVN1zrdzcfZaHG7ZuibpJjqTdlmyQRwcrUVWo+7nhme8tqxktu6LXfy+RArVomJtrHP0eF0rKYps6NU4H1EgCA6x8K6iI9HFZO3TqORecmAmzXV688SUULgbgcCSUzctQiWdFBd6mQWOFWFUafvRUjx4Q29KaLS1KFqRrgM2F43m+f2RHtLxcd3PyRs/GjD5yGgiOiiBhXqJ8tF9zI4WJm9LE8kxZHn2xX2DwLMMbR+35So2dx9DQOIfSjqdgfhl6SU5pLNhtaVJiN99hNGHqLRGaa5OPfOb4+JN1k0RnL71+9RAXJep+55dkxH9dEBP/tfrZyk7JPb5fJnzvK
Up/VlSyxQYSu4AqUi27CpvB7Kpdu2gjzcjxOZtLBqyOSESm5Dk5cPAPTOopwFUavWsFywYn2mb6Xa/Sm2HUgGWXRCpbJo3xMtW5oLE4ki/jR986fVtYPp9y8FOFCGbdc5jE1XRkN+MpDr/ZJBvuuT5yDb797BYB4AjdJThWDRh85YzPI6KUEXn30XNzzv16rlKgAgJf25jH1jYA/Sz/880vKvgSjZ4aeVq10n3BEjN5xwhVXfM9t3jeAXz8RB1Pk0k0bYWLttIQyPdidzNCTQSswg87/Jukmfh0/iCZ2yBOmYl1clQ30MVE3qGZgqpbpWTT6GguvXDKnO3E8zkzIUUvyFb+ZW1V5MZqQHBGtTHSpq+gKHLsgKDlME7RpNRTlMngGQz9GjP6hdXvwwNrddb3XkxJzuktYOrc7tegcQSceOVRwJ/0zW3uVkEp+7csFR2X0/cE9fsMjmxKZsHSvUaAAf35vWaVq+jmjbyNMBpdYt4nRU9GswaoXTRJFFkLpKIZe/WzBjXuucknhNccEtfRNtW7ocFExMYNRb5bV+zIed9GNnbF0XJJneOgl7dMNve9LxZjH0k3wWcXQt0iT5IldhRTpxnUEioW4B4A+FvqdaaXFH0g6pmkVNljxRozOahR/dd2DeOe3H6rrvUooruaMLWo3YLkwoR/VtoM/R74EHntpH9sXv29GZ1GJh6eInDU7+3DR19UwYgqjDuowqRPxwhlq1nzO6NsIk6GnmdWmyQ5q0g2BM3pHE7Bd1qGJP5BUQqFaI2MeG1cR6s4U123SnJuVFIKJRR1zzfOjlULE6FnoJX3nEXO7lGNVfV/J8CPphlYmnAm1SrrhNXl0Rk/jLDhxwTOTob9k+aLw/b7yeSBod+am1OJ/3/cfxtlfvKcl5zEa+L6a58DHrRv6Um7ordAJ0yMb9kZ/c4LVUy4ozyH/nF5VNmL0Tsjo2eeISF54UtAOO3fGthFkXDmGLTo6Le+Gqj76hoOLyuPouVxT0Lyx9CCu2rxfiT13wpsgkm6YlBIcU0SMoh2MnjtjI2MuTbVuWOhluG/Z3B7lWFVPKhl+VAeH2PRQG6QbPk6d0dMEWXAF6wGgSjf/cvHx+Pgbjwn2GaJuJNIboz+4LjAGzcpnBFt+wemfvwtLrr4Vf3fTk9E2vWmMaSVCyBm9HXwedwTwKGP0/LnrKLrKpG8LyyTJx3VE2Ioz/hyt4P/5zceh6IpMOvs5JvTdY9LoiWlWDIyTHsShqhdlIXKHpJ3RCwxWg7Z7j2yIbyIpJQqOiJZ5POqGjhkxekNqfrMZm9wZ67A4dB7Nws+NM/rpnQXcctXZOH3pbABqG0UAOBjqnEUnqdHf9+KuliROReN0BKteSb6QmFGVNEZPk84Rc7oZ2yfJJ74vpAycybbfuVWO2j2WYnU7DgT3240rN0fbeAE8ndHrIzI5CwHgqc29iaiSqQhuhA+Z3qGQMf67dpXchMyThqovUXQFhBAoaNINSZdUoymL4bscE9rQm6WbdEZf0wx9R9FRGgk4GhPnKDjC6ACUMpB/IumGRd3QcUwJU10tythM7RmrlSlWGoczSeSERTNw0fJDAQS/p0lrpJUCP/8fP7gRH/rxo00bGT7OqHql9nsVQo0UiB2uNM6Ooqv08OWfAwJjWnAdqzO2VWxsxwFzrHvad/tKNzJHmYz0SZQz+prn408v7saBoSr+4ut/wt/dZE4OnErgzHxGV0n9LROM3izdAOrqrhp2PgOCe5PbFFrRlgtuwPYzmJDHMekMPTF6kwGlh2cwNPTzppWjDkxAegQOEBY1M6wSpAwcupGRkUnphr5Xv+GCcTbP6BMdrXzJipqlM3qSgaNGKjXfGD1Aw9Yngd89uyM1A7lecOlGzzugVRJppELE17wSPWiOkiT3b799Hr94fEs8dgTOZFOSG8EkAY4G21OSmkwrTyB0pKdo9Pp9IRDfU1+/Zw3++jsP4T/vWw+gNYl3Ex3RKrWjgKVzu5R6VEHIM/DeVy1JOFV16ebxTXH/6Zovo+em4DrK56Ly2gUnZPu5dNM2mFj7cB2MfrDiYVffMOb1qHHk3ECbpBtTpImEVDS6wMEW7+fx0XzWb1X5XFNzcF8mpRtuROmmJN0+ao3omQ29LpdwbNjTnJGhcQoROK8dET98Hou6ESLQ6Umjj4uhOUzWkfjGPWsT3xEwLgujb9FDujEMz9NXg6aaQYAWGuuqDmPO6N/w8vnKZPHCjqDK5b2rdwIAjpqv+lqmIsgI/+tbT0bJ1VdHwGlLZ+PTFx2fWDnpjP4HD2yI/q56MaPXHfrDNS8iIHoORBYxwQ198MNf9fqjcOayoKWtrVwoGZDhmh8xeo6CxsQ5AkZvMPQSoQGK4+h16cYzMPqobV6TSz5THD13uNJpRDH9noycw3S+PKKFWDuXCmjSvOnRWF8m6L9ho4iigwxOyTjqJthXcuOVExn8UiFuWGJa4Ulpdsby8NtWSDe/e3YHngjZIJcDdxwYwumfvzt6vXhWZ/Q3l/lsjH56Z1EZLxmqJzcH5aP1CqxTET67p3Wpjq+y9d9ZN/SczNU8GcmWeomK4ZofPSMTQaOvr0NuRkEP6DnHzMOpR8zCn9ftscZ38wu1f6CKmZ1qfQtdcuEoOMK4BPdDjb5S8/GLxzZj/0AVS+dyCchJGC4gjk1vlk3y5X/E6H2pMF6+LxiDVLbxjFpqjN7FyjJXPB9b9w/ivheTiUD6qmg04wegyE9RvoLvR2w+OJd45URafSmsY89lHQ4ppZIDQeB17ZuVbnoHqvjvP1wZvdYzKDmmd8Sp9r5WFylNOy4XXIVk6EZlMOOhfWOBWrRKFSi6yd8yeka0lZMedMP3VX0/IkgF11Eyais1P1oJ82c8q5gUhp43lraF/fHsuZovo7BBgs3Q65myMQJDsn53Pz5xYxA6x526jhMvw7mx0UMJR4uA0YdjZFErxMxJIiJmEsg6KlPm2j59jmKK5/aUUPV8bE9xNOp9dRsev+40Fiqj59ehyBh9dO0LsaxjkuvSGD13nKZp6PUiUQNF+S71PuKGRGeaiqTA7tWSFr7XjppJEx08w1o3vL6N0UuVfPHP1TwZETL9c8M1D+WwMupEYPQTWrqphEysyPqN2lKRfW2WL2jGW5FuRJLRmxAYEtVR44qRGT2VFWjWW+9xZyxLiuoPmyRQow7O6DeEzrtE6QQp0TtQUYqdnb50Dio1HztSHI3Nyh56TR7+0HiejOrsAKFEFl7zirZi4bKODv6Q1jwfP314o1J/vNnIJ1OxK1NsfrmgaccS0WqFJ+RJqVYYLbE6S4DO9p3Mp9+PBbgcyfNagn3p+QrcJnSXC5p85kfPqX7MSs1HuRhH5GSd0U9oQy9EkNJMnm/Azuj15ZyNtZs0esL5JxyCj557NABKyFEfdv5RxXAZGH2zRsbnNzGLo6d2aRGjJx275uNLd74AIGh2EeyLa/LsH6wq8sLhc7pQ9fyIAVMTBiBIA2/W0EuLodcZfSmMbtreO4Rv3bsOQGzoC65I1AsHAme5w1YJt6zahk/94il87fdxJ6JmpRv
d0EoZX2tuoHvKBUWq89lqrMgiN3SbUSo4yv2lxIzP6Mh8+v1YgH4TR4TSTcqEamP03aWCcj9XPcn8WDqj96MgAJ79nlXUZeiFEDOFEDcJIZ4XQjwnhDhTCPFpIcQWIcQT4b8L2Ps/JYRYI4RYLYQ4r12DP+/4Q/DkP78JR83viYwVD4G8/DsP4f/88qnotS+12TphzPnf6Yz+opMPxRmh83dOdwmu4yi+AUdbGRgNPatL0wxMZZE9KaPlPDnqaN9LYZTM+89aivnTOpR9NU8GvouuIn7+P87E99/3ykgS2XZgCCXXwfLDZkbfXXQdY02hRkDPh2NgtjU/jnoAKJbZx/u//0jU7pAetqLrYK8hYUlKtU7R1t5kn95mpRt+7aeHk2eUs8Hux66yqzjf1Th6oZw3ALzpuAX43vteiZLrKn0EuCa/YHpHLt0gvo/c0Bnraaw9kjcNsg6hR2f0StRaSTdtAAAgAElEQVSN+rlhxuhdLcY+i6iX0f87gDuklC8DcDKA58LtX5FSLg//3QYAQojjAFwG4HgA/w3AN4UQbQ8LIEbPa07c9+LuqIEDgERYlY3RO/o+VhLBdQTOWDYbn7v0BPzzRccnHLWuFnUzXPPwxTueVwwRSRLNaHtSSkiDM9bzJPorHkrMd0ET4dNhTX5KklI+FzL6mV0lnHrEbLz22PlRZMGWfYOJvIOAYTc/UQG8+JqaMGXS6NfuijtzkUOs5DoYMkw6rzpqjvKQmlh/s6sSTi5mdQcO/qicMmPb3aVCpN/LMATWFHVD/596xCy87tj5iYJuPElt3rRyLt0gvo9cR6DoCDWO3qLRcyLe01FQ7uear0Xd+Jp0U4j9XxNeoxdCTAfwGgDfAQApZUVKud/ykYsB3CClHJZSrgewBsBprRisDSZGr8PXnLE6o7cmTHHdPUyLftfpR6CnXAhj7OPv5Uvpgitwz+pduPYPa5UU+YKWzTka0M0VMXrBGX1Nq+MT7Hs2NPTHLOhJ7POkxP6BCmYyBys5ow4O1dBdVufrous0XfMmUXyNGWW+dAaCxLSKJ5XvpM8VXZEweI/94xtx6SsWKw93nyGmvVk/CScXM8NORRQVxPd1lwtRRA7ZBd4GUs+gpn0lLdCAx+V3l9y6m11PZngs6sZ1HEU+42GsQXEys79D1+grWmasp4RXeky6yX7UTT2MfhmAXQC+J4R4XAjxn0IIqm/7ESHEKiHEd4UQs8JtiwDwFi+bw21tRaMaPTlROWwJU7Za9QUtxn5nWEdHPybHwhlBPHUzTMDTjKQThhl6fsDou7XG50BQY75UcNBVKiT2eb4fSTcEutEHq17iNykV0h2g9SIKr2QPYpQw5ctI4gLC6JOU61twk07J2SG7DiJ5Agdp33D8Hr1+zmjBycXs8Leje43v62aVE3nVTsDM6Om60qpquObhmtuei+o0AUHNpFyjj52xbliXBmAlMeqMuilr93PNZ1E3WnY1d8YWJ4lGXwBwCoBrpZSvANAP4GoA1wI4EsByANsAfDl8v8myJayZEOIKIcRKIcTKXbt2jWbsCqjwlq1cqG5U9QqV9YZXmssjxN+7mxt6zTj+5SsW4RvvPCWSTpphk7q+TWPz/ECj54yepKiKZ/JNcI1eZ/RxiWL6vf7lkhPwxbeciJIrmtboeYcpGguPUlIYvSWypmgw9ATXEdh5YBhLP3Ubfv5YnPQ1vTOY7JrW6NlvQL1HKwZG31N24+J3dN7Mv0I1inRDT9fgpT0D+NYf1ynf3VHKDT2gMvoiiz6j/6OABTc96iaRtObFcfR6CKWeMDUZat1sBrBZSkndFG4CcIqUcoeU0pNS+gC+jVie2QzgMPb5xQC2QoOU8jop5Qop5Yp58+aN/gxCuBGjry+8ErBH1iRkHTd9EtDfu8vC6EsFBxeetDCamJpZ8sWMPt7mhM7f/kothbWbfBPB68GqhwNDtUh+AGLpZrjmRQ/L5Wccgb965eFWw1sv/EijZ4yehUIq18R1Uo1yyRWpk3zBFdhmCA+dEU5ozUs3SY3eVDaiuxQzeqmtZEyN3SPpJjQoukRz5LxudBULqNT8zGvE7UbE6MM4eiC+rr4vlZpPabVuguYvZnlXL6MxXPNRmkwavZRyO4BNQohjw03nAnhWCLGQve1SAE+Hf/8GwGVCiLIQYimAowE83MIxGxH1NTU87PTQ6UbVptEnnLE2/V6Tcg4yh1/aqiFuBNK8Rm9i9AMVT0mNt05i4esXdwROzmXzutm+WB82TVrNsmF6zrifgZxeNU26KTpJHT7a5zqpzNZ1BGRyURkZ+ualG87oVekmodETa9dLPxiat+s1jLh/4ZPnHYu7//a16CwlS0hPRXAZM5JueOlwLapLMnnwva9agg1fuDDhcPW0CYLbj0C/T/pXsop6M2OvAnC9EKIEYB2A9wH4qhBiOQJZZgOADwGAlPIZIcSNAJ4FUANwpZSy7XdhwU13xh4cqmF2dylRqc7G6G0JU8nPqd9HMfbBPrMfwNbirl74GvMDggmqFsbR8/IEI9XxAYCntwa1U16+cHr8ufBmrhgMfUvCKw3llMnuepp0UxiBtQ+lOCVdYZaYyNC3UrqZqUs3ikYfF7LTWz3aGD1NtrzGOskGUZ2img80V41iQoNLNzqj9/xkFVdfAq5IVhDVyxuTHdDZvrLPyT6jr8vQSymfALBC23y55f2fA/C5JsbVMOihMDH6g0NVzO4ujcjo7Ro9Z8S6MzZ+/YsPvwqnHD4r/pzmsYgZvdo4HAD29lfQVXJTm0zo0J2xwVgEtuwfxNNbDkRtzkY6Nxr/s1sPoOgKLJ3bnXhvGqNvlg0nO2HFjq2qQbpJk+aKrpNa0C5w8Ca3T+toEaPn0k2XKt3w+5GkNN4vgEfdAOokUGCOQCDubwqA1VmJVwJTGSZnbC1KQONx9PE+13GVEiIFQ42c1LBMLX8l67//hM6M5Shaom4oHC2p0adH3dh0+Ea0/cR3aJosv0Eu/eb9+OYfkmV20+BrRpLGQun9Lz9kWrSd+tcG362OiV7u7a9gZlcpkaQEBAxV/1zJopnXi0ir5k7JcJvHoh4Akm7SNPr0mH5dPiO0qssX3XO3f/TVse5vSJjiUT76BKcwemneZ2L0dH/ZWuJNBUSrWzd2xvIIJ9PvDKglRHQd3pdmZ3nweT3bduJH3UwIkBEyGR4y9Pqsq0su3JDpxtxRmGX9k0DC8LtCeR+/sbb1DmFjA/XdTYye//3BVy9Tv9sRifcEY4zZZNpENVzzEn4L3llrtDBLN6FG7+lNXNIja/Rm2iezDN60gnQUHtcKRi8E8LJDpsX6cBR1E4z3S287OWaaHusXoF0TvQMY33dAMfRqaYusM8p2g/s14vs5jnAy9WUAgjwOU3YyfY4e9aKb/BxfJUyGqJsJgbSiY0Cs2+s6WnsYvXZM3RkbaX7qjeP5EpWaj30DVXzh9udx0df/lHo+BI/d3Prx9THz1zbJKi1/QE9eAoLKkc0yep298jC2qlYCoeiaewIA6uT775ctxw3//YzoNb8EX37bybjydUcCiFlxs+cwFI
baURXNYOxhHH3Vx5HzuvHWUxdHcl3V96MJjq4XZ6FkNCJJwaLR00TRbF+DiY64o1rS/+VrEgwQ/1489FKvkcOdsWQrlJUA1/YzPtFOGkMfNJc27yPHmL68tWr0iQ5T6XH0ttDLNKduQbtxiPntH6ziP+5di1VhUwkbojh6Pm7bWMLXOjPXHZ6mzwDJSaDkuqkJTPWCno+oBEIYHrp+dz+e2tyLZcxfkCbBBGOJr8/8aR1aVnC8b9GszsgYEytuWrqpxiVrS9w5ijCDMtxHkWEBo9dXMvH9cMFX71O20TXghl7X6L1cugEQPFd6ZVilHhSrMRWVoWDGnJNBaZB84k5ybJ9WWyeLmDSGHoASisdRSQmvtLF2mzFsKP4+RQJytRuHwuP2DyQLc6XBFEfPQ/JE6iSTLkvZoo1MjH64RdUr+UrH8yXuXb0TNV/iinOOZN+ffrsqfoURpLXIKLgBOWhFeCUxbD0rs+pJlNzYIACBAdITpkxltnnlRAA4wMIreS10oLkw3ckARbrRnbG+2RdC5oAz+kSNHG0FHneL06J1Mv77Ty5Dn0LpI0bfQNSNfV961I1tEgCAQ2cGpQ/07D2qQLjPUIExDaY4er3ujToWlSHG47dIVm76uVENeFPt9XqhO5Qp4eulvQPoKrk4dEZH9N6ihdHzJjKJa6CtciJ9XAhrEla94N2GdOmGl3HgXcWSVTuD/3kOhk4KDnLppqhey6xLB+1GdB85SCQjeilRN75GlFxHQEpeVC85EVeZg5d/Lmf0Y4iRDH0jjL6RfUrCkkX6+PRfHIe3nrJY2R4n1gSG/kBKI2kTfIszVh8jP4fRhpWaDL2UzWb3Bv8rVRylxMY9Azh8dpeyKklbsQWfYxOSLrtpPozXHBNkYp911NyWOJQrnq9U0QRi6YaHiPL4bt3I0DXhSVG6zGeOuonloKmMGpNu9NUy1+FN+Qpxq0p1glCNuabR+xNLo5/QrQR16JEXhIqXwug1o+zUafAa0ujZ6zcdf0j0HUX20AOjy2w0OmOpx6XB0KdH3YxugisW4qiVtN9+JEjmRAv+DxxiG/b046j5Pcp7i4ZzIpChNY1Tn6xOWzob6z5/ARxHhAy/uYe06sVNKHTppuZLdGjsu+r50ft0Rt/HGb1ln67RT/nwSnYfccd2zPTV37LmywRR4my/BEedIDQ5SOrafsYn2kll6E0sFogZve6wMlWhjPbpWjU35hbWbmtPqPsAgkqToUZvyOqUUiZ0dg5d5w3GYh4jP4dEZNAoGX0rQvv0VoKUmLLzwDBefbRaA4kz+r994zF464rF0Wsu69QT/sqX5M1q9FVPRhOdLt0EIauqdMPjsR3NkCiMXkuY4hFHukafdUbZbvCQVJ6MmCg1YWD0abkteo0cQHXw0m2lJ1plEZNKukljlcMp0o3Oejkr1J2xej2ZtOMkDWX6MQtOXPrUxOhHKo+gh+jx7zNr9GTk1O1CCCWOXRmjO/J5N8NmuF5O3+/5UikRaxrLcYdOj0o9A+q1T0g3tnNoQUGqYEVDy/+QtdM9x6qFxl3FfKUIV/B/sO+ggdGbVme6dJN1jbjdoGfbEWqEDG8aDqgGW49a04253gEs+B6WbWsICc4qJpWhLxdSpJsUZ6xu1BZM60Aa6nW4JvVvpO4rsHAuUyLQSJ58vSYKEMeM2zX65O9USJkgrIXeWFz4aKHXfCmGztHAgaaOs2iZNIsWpzE/p+Rkay5I9dOHN+LD1z9a1znwBhURoyfpxou7FBUjA8SibogV1qHRc7Ta0FdqPvb0DY/8xowiCHcMSAvPUTG1qgQo6ka7BmwiBvQ69vS5uLObUDT6POpmzEA1YnQyW2/1St0IcNQrz9gYvUn6sGn0IzkJdbbCx2bW6M1RN3ybbfyJ8ErNyGzvHbKWiTYh2WFKROGHicxlCzNXpJs65TP6XpOR/NQvnsJtT22v6xyqXjLqhvd/LUT6fdIZm9ThY4drHPudvF6xRq86CUeLj/zkMZz62buaOsZ4wpOS3fvxZKuHIMfyjJ/ILE8wej95fZSwTPbM+DJJJLOESWboKRFGPa16GT0QRMacsWx2Yjs3MvrHrMlUtgnCjZnAYMVQXnkElkCf6SzyBiPhmAzGQb/Z1X1mQ68w+kQSWfD6vhd24w+rd+KMa+7Gldc/Dikl/vO+ddhdB0PUpZugSqAfGnr990qfNDmj1wmwIp8ZVizN6quqRh8cn0d6xdJNyOhZaJ8uz/A2gboB4og1ekTf0wzuDOsjNTpRZwVBFmvwN/eF3PlMMFkbDbYW2qsXGjSFZXJtX1+NZVmnn1SGnrIh6SEgpCVMmZbE7z1rKW644szEdm5Y0hKR9Pfpr01sueZLPLO1FwOVxptWE/vrKfMGIyqr0b/PNA6+bTTRRn/381V47/ceAQDc9dwOPLWlF5+99Tl88mdPWscPJDtM8TIHuoxUtEyaaiG25IRq+hsIzm+wUsOXfrtaCV9sBFWmwwshFAdvzWPOWCW8Mv7+YFxJjb5guCYnHzYTD37q3IjRx1JEa6SDZ7YewJKrb8Xq7QdbcryxghohE//On7gxuAcTrJ07arV91rBMKZO6v6ESbdYwuQx9yGyJ2RN4CQRuq9KidEyw1dJxLJPASFErtz+1DRd+9U+4ceUm6BgpNppi7qd1xIaexmIarslw6PsaCR1Ny1SlTkg8HDANJN3wWGYq7ZuInrEy+vRxKglliWgqB/e9uBtfv2cNPvWLVYnx1fPwVjw/CjWlc+DSTVGLnqmx6pV6PXqu0euhuADQWXRwCEsii+PCRxxmXbj+wY0AgN88uaU1Bxwj6CWFAXVFrBMZU9SNke3T8xTtS1+NZVmnn1SGnjT6mZ0lZXsUXulLhe3baqfo0FcJHLZJgBsd/W0FV2DfQMAiX9jRByFUg2XK2NzWO4gNu4MKl30GQ29z4NmSqdKlG7uPwQSaoOqJrfeltmpw40JpusxiM+Y2Z+xIeQIUA3/700lNvp7GKjyOPjqHWszokwbIVI+eNHoDo7f4H1xmgJoBfdeW/QMAgOkdRdvbMweToedESQ9jDWrdBPviQABi9DIR0cYnVNsEkVVMTkPfpd6kw0y6sSXW2HDY7M7UfbbjcCNjk3yAgLnN7o4nKROjP/Oa3+O1X/oDgNgodDPpRtciTd/XiEZvXZGkGHJiUrZMVoK+yipa/R0Wx7At6mYEaY3uD1PO0UiG/uePbsamvYPKJESlIb79x3XY01+JxhYv8f2EsYg1+lg+Ekher7TM62b1YTru1v1Bb93pnRPM0PMIGc0hDiTDWH0/2cmLR+R4mjxDz1VQOiE4pr4ao++r1Hw8sGZ3i8+wOUwqQ9+ZYui5M5Y7am1MXAdvtK2jXkY/0r6CK3DNX54YvR5Zo6+hs+gaG4WYViv6DW3e1xgbNo4rXGnYMlkJnpYUpji9G5CRTL9BPZ8rOGqbQZ+xbWBk5+Tfhn4ItZyygzU7+/C5255TxsPrpXiaZBUxeibd0MrGHr7bGjZJE+zGvQOJ85kI8NjKqcgkMoIpKSqh0VtqEdE+30+28NTLI3zxjufxz
v98CE/VUYF2rDCxruYIIGdsmnTTDKO3oRFjzqE/TK4j8PqXLcB1l58KYGRDf3Cohp4OdQKyyTM2Rl9ImQTqMTI6dhwIWGE90piUqm5uM9hqCGX9so6a7JZ+fgCw7O9vw4evfyx6nVb/XodePfOh9Xvj74iSqWInod4UncbFpRvyNdUz2TZt6LVItVqrRP8xgtoAPPi/n2Wb6yUQPN9PRt1w/V4Ly1QZfUpYZni8F3YEjux6os7GCpPK0BNbn5HG6KXO6Bs7/UtfsQhzukuJ7TaD1gjbJ0NAq4eRMmMPDlUxrZxi6G3VK21sX9tlS5hKO+89YQXOuqQbX5Vu7EZtdAlT6jmo329irlyrr9fQcwJR0o5JbJnXS9GjjeiaDNd8dBZd/NcVZ2DxrC4AI7SBDK/z9Q9txNfufrGusZqg36fNloUYa/i+jCdUyjJmMhidHjfKNBHov62t1aMv01djlCHeqsm3lZhUtW4IvLl2V8mNwys9GTWBAJIP/Uj4yl8tN25Pa1U30j7dENINYlp6mtA3PDpGbyuP0AijT5sodx8MmEw90o0v9eqfFoNtZfTsc5bIp5EYvY56nLHB95tXJUA87mJkSAwheuH/Fc/HnO4yTl82JzFO7nDUj/3oS/vw6Ev7cNW5R9c13uT41TGPRDKyhhoLhaQmRKacBC51XfKN+4P30yTA4u+T8gw9k2w1psk65JuK/SbZmSwnFaOnpVOJPXSdRVdh9CU3/aEfLWysvZF9ek2Ukeqk9w3VlIgbIKk3cnSV1EJY6ucoLlvdzvXzRLLRCIzeNskRfCmVTObRRtaUi+ls35bsNpKfxnYNOGOrp/EJD69My4yVMnkNlGPUKZ81Cr2nwERj9LzJNxA8Q0qoasTMY/ksuS825mmVLX2ZTJjS/ST0zPQPZyf5bFIZ+qiCHXtSOktuanhlqx4S3QAq+xpyxgbjLrnJm1FH72AVz207gMPC5X10DIvDlaJzGtHoTe9Je02IDX3qoSL4UqYybnu3K/XguoTFwa+PLc/BhGFL+eh+luSmO2M54hIIsWwQFTUzFC5rJDRWf+9o5YKhmg9HAL++8iwAE8/Q+77UfkOBg6ycRBRhY6gEqks33FErDE5cW6E0IL6m9eSRjBXqMvRCiJlCiJuEEM8LIZ4TQpwphJgthPidEOLF8P9Z4XuFEOKrQog1QohVQohT2nsKMejm5AZiWkcxqiPjac7YRqJubBgto09UZ9S9/5aH7ffP70B/xcPbX3mYsj3KmDR8bVc5ZPTWiJzUr7SGLXJQcax6mmHwCoGAJs9o47SVOdAlLA7bym0kh7GN0fcbKk0CSY1ez22oej5uejRIkItKVlj8CMoxdAe+NnEdGGV271DVw/vPWoqTFs8AAFQmoHSjV5g9aIhgot9x/2Dcyc3RnjuPRd1wOSjYlyyUptcbomvJfQTjjXoZ/b8DuENK+TIAJwN4DsDVAO6WUh4N4O7wNQCcD+Do8N8VAK5t6YgtMFVznNFZiB5IT3fGNpAwZYPJcEb7rIw+RcvVapqb0BsmWi2coVbcJP+EkdFbQkTTnH228aaF4O3pCx6ielr0eb7KsouKzGLT6DVGb0nwaeQa6LBp9NzQc39KmnTjOkEPgpon8dtngtoydI62Eg7BMZMROEDy3tvbQM9hjuGaj3LRgRAiygOYSPA1/0XRdVRDX1P1c96yU19V8fBXvT4Ur3qp934gTZ7+PziRGL0QYjqA1wD4DgBIKStSyv0ALgbwg/BtPwBwSfj3xQB+KAM8CGCmEGJhy0duwAfOXopjF0zDX5x8aLRtekcxWkLVPInpnUWcd/wCfPK8Y63Zro2A9M25PeXEPls8cppzU69pbsJg1VwPpiOqgZL8DGn0VGKAgx4AmwylHzPNgNIKqh5jIaUedZO+4uqwyG5dxfRrae01O5J0Y7kGfUyDrabo9YDKwouOg50Hh6LX0XKfM3rDkNLKV+i/USM9hwkUZUK/b9EV1nsviwiqV6qrQW7oqxqj39vPI3LU1ZLnyyhEkvcypn2e5l/Ro2yoBMiug8NYcvWt+PUT419Ooh5GvwzALgDfE0I8LoT4TyFEN4AFUsptABD+Pz98/yIAvHDL5nBb23HEnG789uOvwbxpZXSHRm1GZxHDNT9ygBVdgW9dvgJXvu6oln3vCYfOwKuPnosbP3RGYt+srnSmmZbYE5e6tRj6UB/WfQDl0OCZbCzlGZiSgPRWeCbY6sObUK90ozIxLmGoY+kxlHpIey8HnbcJzUTdpDF6fWLRu19t2DMQvT5yfncwDkusPx/nSM7YvaMw9NQLgVaDxYLK6B9evxc//POGho87lvB8mSjXzUs+64x+/wCXbugzsWT6vrBInx7cYOoOpmcnU7e4Z7YcAAB87fdrWnKOzaAeQ18AcAqAa6WUrwDQj1imMcH05CSeeCHEFUKIlUKIlbt27aprsI1gZlcQ704PeX/FC0KwWqTLc8yf3oEffeB0LJvXk9g3x8DyCSNFZ9h0UmLNCUYfPqymSYIY25CB0VNyjo3RjyQb6KhHukmEV1oYPa/S2ch1tElWI8l3tsxY7myrKoZeD+FUz2ljaOhvuersKGeCQgLp78Q4idHrJRC06zWaCpx0P9A9EDR/ie+9t3/rz/inXz/T8HHHErzSJBBcV36fx4w+OEcucelhknu4rEO/OzF6GTvS0zR6YvTr9wQ1qWYbcm/GGvUY+s0ANkspHwpf34TA8O8gSSb8fyd7P/cQLgawVT+olPI6KeUKKeWKefPm6bubxgdfvRQAcOjMoEZN/3AtTM5pvaG3YW5P+kVOltNVo25sy2e6iROMvpAesUOTgMl40Uqg3iqdI70XqE+68X01vNJWAqE8yqzm7nJ7GD3v88vjzm1yXdF1sD3MHF48S62fZG8aY2b01HuYMJqoG7pORXb/TTSNXidx+u9EsqpJo9cTn7buH0zdZ9ToNUZPJcfp3jElWY41RjT0UsrtADYJIY4NN50L4FkAvwHwnnDbewD8Ovz7NwDeHUbfnAGglySescR7X7UE6z5/QWToByq1hI43FrBVARwpjt4q3VTVrD4CGXPTg0qG0qTRE9u3SSD1hlcS6km6SUo36YyeO21NSV9psNcpsj8CW3uH8JGfPKYw5Q27+7Fq836laBaPQ9dXCbzhDd83QyscZk92I9+LwVHL3j+a4mZ6EEPRNTdMr3p+ZhuTmJyxhH99y0l4+4qAe8YafWzoSboiKZIbeltTcT30kiqIDlTU32hWBgx9vZmxVwG4XghRArAOwPsQTBI3CiE+AGAjgLeF770NwAUA1gAYCN875hAiYDo9IZvrG/bgedJqyNqB0cTRc+//waGqMaIkXbqJY4F1UFLRkJHRp7NJQrI7k91I1sXoE9JNfca8ketolW5GOM53/rQelZqPI+f14ONvPAYAouqh//qWkwAA5xwzDx97wzHRZ/TwSn4tOGtPq2ZqMvR6pUSO4PcL29+ZSnCOgKShNzP6m5/cik/c+CRuuepsnLBoRsPf0054GmHgf593wiFKXXkh1Do40bMUTsJb9g+x46jH
8/xkZqzLnld+PMLYWhwz6gqvlFI+EcosJ0kpL5FS7pNS7pFSniulPDr8f2/4XimlvFJKeaSU8kQp5cr2noId9JD3D48Po7chvQRCsP2WVdtw4qfvNFbBGwpvVN3WEjM3lU8gtm/S6IsWI6OPLx5/8r1nLpsD1xE4ZkFPXeUDPKlLN6OTZ4Bkr2CC1Rk7gkY/N2Rjm/cNJvZR9MU1f3miwtp06cYzsH1jdrJln/55ZRtn9KOIf9erOBZdBxVDv+JfPREosE9u3t/wd7QbenkIJdLJ4AvjEpeN0ZtaEMYljOm74n1AQHAWzYxluXrrJbUTk7LWDQdlg/YP14KkinEw9K89dl6C5QHpUghtfm5b4LV/ZMNenLhYZVDEGnSWHUs3yQf1kOlBzP3yw2Ym9pFxMvkw5k0rY9fB4boSpn78wdMBAP/zhsfxfDh+G6SWMGUrgTASHvk/b8CAIe28VEjnMyNN/D0dBaAX2LxvILHPlLcBjCDdpGjtfJtpJVMupK+4+PePRqNPMPqCmdGTfDVStNV4QHfG2tpOuo5Qno8VS2ZH2wHVoZ2odeMnG7vr1StrnsTpS2fjjy/uwu6+Sm7oxwKRoa/UEmnSY4Xvv+804/a0iA8hRKiTBjcOjwcmkA6oGwWSYEwP6mGzu3Dnx1+DpXO7E/uoTK1p6X/a0tm4ddW2qPxwNH6L0SlqD1MafD89tLBRQz+3pwwkA5+sqDdhysToaZ8+OdqkG5pQzZPwKzIAACAASURBVPWGyEeT3GdLhOPslX/Xjx98CecdfwjmTUuP/AKSHZOCa5e8fyjrtlWJhq1EktGnO2aDeywo+bDumgtT3wew8sbhb+P7ElvCeyEZdePjpT39qPkSi2d3YeU/vBH/7f/90VpGY6yQvam5xaCiX/sHqkqFuyxAv7E4GeM3rSmVmpab+gqFpJs0ZnfMgmnGqJAoG9fwgH/8Dcdg4YwOvPoYNTpqpFr7aRr92l19kfOykaJm7cBIEz+xsX2GjFNifrbia4DZGWu6BmQwTKuqDktUFH8/TdQbdvfjH371ND7yk8cS79dRr0a/PmxhWU/56bFGwtBHv2XyGUkt+cxCXKP3atLNnv4Krvrp48o2+v+GhzfhnH/7Q3jsYFu54GSC0WfvirUYc7pLmFYuYO2uPgCNOfHaDZ2dcTZddJwok/WAwdAPVj2jISRnbLXBEqlU1dOk8R41vwd//tS5iu4IJAuEcaQt/x/fuA/nfvlefO/+DQBMtW7sjN4kOzWDkePo09sMRoY+pVRDyXUwf1oZF54UJ4bH0TMWRm+5rrby00B8/YjZ7zo4cvMLk3Rjy+HwRyEPtRvJ6pVmYx5sI8ds8jhp9f6FCCaBxzfui/ZFDvLwu3izGdpWLrh1l7puJya9dCOEwFELerB6e5DSnCVnrO4k4izcdQWqodOUpBsewjdU9YyrkyhhqkGnnI3R14N3n3kEXn10zPiDWOzkGDaFy95HN+7D+7E02Rw8JXKCcMMVZ0SrmVZgxDLFZOiTOX+RoU9rZnLY7E7c/bev1fZZNHqXDFByX6eF0asavXr96onC0fujlkYogZDFEMtk9cr0TG9bXaegOX38mr+l4Dh4kgVG6Gyfg/wY5aKjZFCPFyY9oweAo+b14PltYe2KDBl6/QbhD2XBiZd8ZOj5RDBY8YyMhJx2jTaOqLcGfhqOnt+DNx63IHpNsdi/enwL7np2R7SdTplYYU3rMGWLoweCiYyynluBejV6K6PXxkkavekzsQFKZ5pp5w2oETzR55gx4w2qAVUOTIOva/Suo+Rw6MPJAkPVocuyRUsEkx70wJEIOOD9GLRL5lgmDDpOLt2MIQ6b3RVVkssSo09q9Gp0Br0kjZ472iqeb2T0hVEyc5t0Uw+SzT4C6eZj//UEPvjDOMI2cmqFJ7e3f1gJTezgDURa6E+5/aOvxs0fOTuxXb8GS+bE9f1ndBatE1/E6FOkGxObLlgYPU06JjISZTUbQmP5sWgiINZdD6On+4qH93Jjrk9KWTBcOnwtoo77G3RQGKtpsjX1cSakyTpGRs+kG9Pv5fkSn/7NM9iyP+nkbwemhKHnhcXGugSCDfqN5hmcdkBcU4Xvr9R847lQPZh3n3lEQ2NpVroxhYqaHMJxXe/g9fbeIaXUMs9ibVUHMAB4+cLpiRBVIPmQTmfZqj0pzUzIdzKSM9ZkYm1RNzZGb0t248aJ+paScalHT4+aZLPvr/kSVc/H75/fkYgiaqWhf3LTflxz+3NNH6emSTf0m5jDWNOd3vp14ZOHfih6bZbhQumm4BilrpUb9uL7D2zA3930pOl0Wo5Jr9EDwAy21M82ozfvo82c0fvSzPxKBQfrr7mg4bEUm5RuTL1mTTaGp4sP1zzs7qvgkOmxk5fHvI+UzNQK6H6SjoKLkxbPgC8lDgzG2io/lY6ii4GKl+qMJcNoYtO2Mgd6AS2OTkuyGz8W3SPDDUg3esKU6wj4vsSX7lyNb927DkBQAvw7f1qvHLsVuPSb98OXwP9607HWGkEjwdedsZZJ0+b01stWuymBAvw4tmtZLjrGVRhhrHrzTjlGPxYhe/UiYegVRp+8NDpDTjuXoPxDY+dJ0oAYZcJ2vXVwaFieBHYeCCJCDplhjvMei1BYfYLqKLn4zUfOxi1XvVp15BlCX4cqHoRI71FrCnwqWsMr0w1HnNWcZIeKdJPQ6OuXbrjhqvkSL4QBDECcjwK01hlLt3SjDvZfPb4FX7zj+ei1pztjo7Lbjf3OOrngrF9fAegJUxxFFnllmhij53OMApimhKGf2Rkz+kw5Y/V4a02j16EXOWulDPWm4xfgA2cvxd9f8LKGPkcMPBmrbL61iMH4voyqOB4yo9P43rGYlLlPAAA62Wt1VRVfG9o6XDP7SSLpxmBk60mYMu07ZkGQCXbEnK7EPm7Mkhp94u0J6M7YYDUmFQNVLjhReG07nLGmlYoNH/uvJ3DtH9ZGr02tBPn/HLbfWc/6VaNuzIbeVoSuXHSNEyMNdTS1iUaDqWHoGaPPknTToaXm81WciU0mGX3rxlJ0Hfzjm4+z1s83oSMlNT/tdyZnr+dLbOsNDL3eDpEwFoa+U+tMxV+nTVb0kFY830gcipF0k/ysVTu2RIq8/mUL8MsPvwrvPO3wxD4+2UQafZWiheoIr0xh9Lqhv//q12NuT6ktzthmQ2b16pWR09tSG8g22ZpeJ/X74H8hROr9T1E3+nWg52CsMhKmnKHPkjNWbz3IpRtuZCKNXtPzspDlG6fmpz8gQGxMaFXiSYkdvcToM2ToWQE0hdGzn57LW2ZGH0o3BiMbNW+3sMC0837F4bOMkpxdo6/DGWvQ6D1fKsydxl0uuFbNebRoVg7SCxbGv3N66KPxGmjsif98ttDLtEq0JdeBlMlqsnS+9UzErcCUMPQ8eiJLdToShl7JjOUPb2gctZslCzJUWnlj/XeO+8jG0s223iF0lVxMS4luGRPpRqtsWU7pS5v2OJpD69IZPTlqTRp92TIJ2MB/64rnY92uPlQakG4SGr0IDD03viUWRTJah70NjUo3BDK
UeitBKgViWvVGE6opPDmFoATHUvcJg1QUvdZCOPVnN07EGxtMCUMvLA6V8cTcaWriT9pN5Wlp7ab3jBeIEafVEyEMRA3aY0a//cAgDpnRkeo4HguZzcboeUQOZ148ack0RHrITWzNZsyj1VGD9yj/nW5+cite/+V7sSYs+eFLibue3YF7Vu9M+3h030Was0uGPsnoSwWnLUW6Rivd0Bj16pW2DHE7o1e31SyG3ibr6D4C/dmlyXKsqklMifBKjiwYR4LeS/K84w+J/uaMj26SBKPPwKSVVmxLf02NHqq+yujT9HlgbM6vq2TT6OPv92VguIUQVpYH2MMrS5Zyw1E9mwZXnSZ5YuPeIBHH92WUsLbhCxcm3sfHWdAZfdUk3bQn03O0xxysePjjC7sSpTTotzTlcth8IUlGH49Ln4D5W5O9JdQJ3dMmnOi3zaWb9iBLzlguEzzxT2/ER889OnrNmYWXaujbPMA6EDFUobMd9daieh8eY/S7+4YT8hXHeDB6HkaoJ2zRz8+butikG9MjbOpLoI+lUUav5wIAwM4woonHaZMPaMeBISy5+lbc83zA8on18vhyT6ZJN+YiXXc9uwNLrr4VOw8OJfbd+MgmLLn61qiXqgmjZfSDVQ9X/OhRZfwAY/QGQ29l9No1P5QV8ks4Yy0afVFzBusRc8Toc+mmTcgCCzZhZlfJmPABpDP6LKxO6IHSddsEoyfpJjqXwMDYDN9Y+CC4Rn/d5afiLacsil7rv6/JV2K6n/SOQxzEjE01a2yliG0w/U5UM51fF0q3p45lP3rwJQAxo+ft9qRUdXNFujE4TulYz2xNNpv52j0vArBX0hytRs8nCBOjN/VdtkXd0LV77bHz8PT/PQ8LWeivLuukPa/82BGjT9Po8zj61oKuQ5acsQBw80fOxu//9pzEds4s9IgVQhYmLVqV6A9qQqM3OGP1GuKE9zRYvqEZcEb/puMPUQqmJRPagv+5kbZJN6aHODL0lkmg0WQ308Rw0FAxcV1YTz7K1ZHxpMuPQ//z3qc0tqIrUPF8PLhuj9Jg29GOyUGbbMl4zTB6gmLoqS9Doxo9+w30Ehj682aLutHDaBMafQNRUa3AlDH0tlZ544kTF8/AsnnJtkhcp6X6M9lk9MHvqrO8tKxf7oz1pdnQf/qi40dVxmE0sKXdJx1z9TF6W8IUGUzT811MidAYCfXWBBoMpRMaM30L6dC03bRCKEeG3kHNk7jsugfxpq/cG+2nz5qygelcbUbNVMOnHiiM3uCMrZo0ekvUjWsJcU0mTKXvi0pOCzOjj8MrE1/TFkwZQ1+KZtiJcco8vDJm9Nkz9CcsCgqFzelWtfak7BGMvcpkqFoKox9NCYd2QL9XTNfBtvy3hVeajPloC8vVK/Xo9w8ZGT1hynS8kutGY6Tx7e6LGT1dL5MkRdDPS+2vUP8583yTwYrZX1K2OGOj8zSs7uNwYUPpBJt0k3DG6hq9mdGPpsfvaDBlom6KBQcYNneVySK4M7PmS0gpMxl1c8Wrl+GUw2fhtKWzle1pDbKJ0VdqfhD7nIFzSINpsvJ9aU2iAeqLujEZRHLgNRqnXm+Ujj5R0fhI3aBzMV2TKFrIFUqEzP6BSuBfskg3BD2yhr9uRLpJ+xy/FHF4ZXoRONN5li0lKkoF1XGvNrVPi7oxTzjD9By0ISfBhAli9poHDx2bCNAjKXyZ7B2bAUIPxxEJIw8ko25qmpEZDg19lqKgdJiSZ3QDbRp+0aLRlwvpsg59ruHuYHX+htWohLFqVEm6sTH6ciFmupx9k0ZORs/UgpDOVWf0g6yVUyOx+dy4c42eG01b7+R6ykGbDH1ZK1kiLPv0vgNpzthWdkuzYcoY+sg5kjFnbBpMER97QufX/Gll43uyhLSmKnGtGz/VGZsVmBh9PX4SW+MRmzN2tNLNyVof3bk9gUNZj2gigx7XwaHtwf+21njcGcuNMp0Gzesmw0Vnqpfk5UZ6qIE4es7oqfsaoDZliXsnp0c3Gct8Wxm9XpsqPrb+W+tx9HogBZ3DWDVxqcvQCyE2CCGeEkI8IYRYGW77tBBiS7jtCSHEBez9nxJCrBFCrBZCnNeuwTcCmnGzLBVw6EvBmiexJ9RE508PDH2WzyVhJKPsXj/8P90ZO9a48MSFxkifRGMYTya0Vpsz1mjoXWoJmBzHaKWbS1+xCHd94hy8fOF0AHF2r16ZU6+DQxU54/DK4H2mBKwSc8ZyxylJcqTR25i5Hn+vGPpRMvr9g7GfgBvNqPWiwdCXLUlrtoQ2+tz8aWXMn1bGDNakRieQWWP0jWj0r5NS7ta2fUVK+SW+QQhxHIDLABwP4FAAdwkhjpFSjmtHYVtX+CzCxCb39lcwraMQJ9ZkwEimIa1mCLG6mpfujB1rfONdpxi36+dQ8/1EuJ4tYWqhofwyGRJT5yfa16h0I4TAUfN7ovo2f336Efjl41tw/gkL8ZW7Xoje5/mqdEPzEH1fzEKT38E1es7M9fIJJqcqfY9NujElNqWBTzS9A7GcyY2m1dCHE6A5uij92SJD/xcnH4p/fPNxyr5EeGWC0ZsNfSPn3QzaYfUuBnCDlHJYSrkewBoAp7XhexpCM91rxgM6Q/B8iT39FcztKVvjgLOCRPXKSLoJbvCq50PKCXYOBo0+rdTt19/5Cvzsb85M7LNVtiRDO9p2jrQSWDq3G3d87DVYMletXU8GfYhJN/3DtWgyoFMxMvqUYmz0e9BnzdKNupIgcEZvinfvHaxibVizh4NPJn0sX0Bl9OnPe1lzqnLo8ozpc7bSxwRXI5ZpjH6som7qtX4SwJ1CiEeFEFew7R8RQqwSQnxXCDEr3LYIwCb2ns3hNgVCiCuEECuFECt37do1qsE3gmZb5Y01TGxyb/8wZneXMpsTwKGvnHRnbFQxMdPnkFxV1Zu09uaTDlXS5wm2WHmaBEZt6EPjQX13E0ZZY/S+lPjGPWui/SLS6JPHjsOTzdFUdDqmePjRMvpLv3E/zv3yvYntfDLhhl5h9BZjXo5WTsnfuZ6ENluP2Oh1lGWM8LvMcfReGFHXbtRr6M+SUp4C4HwAVwohXgPgWgBHAlgOYBuAL4fvNd35iTORUl4npVwhpVwxb968xkfeIEpu+sXNIkwx3Hv6KpjdXZqYjJ4xeY4slFpOgx626NXpjLUeM3y/6dkmjf1NrLhdIyBD31kyOxSrvuoArHg+dvclyxKYGH3UODyF0Vcjzdkg3WjjI3BGbypVQJm8Ojhz72POWPJd8fFe9srDEp8vW4x5NAmYDH3kqDXE2Kdmxpq/a4BNcmPB6uvS6KWUW8P/dwohfgngNCnlH2m/EOLbAG4JX24GwH/dxQC2tma4owfNxmPVjLdZJNikF0g3yw+bGSWqZJrR60ZS04Oj92XY0Ott5TxfNt38hYyvSbo5bHYXVn/2v1nr/9gQGfpi8Fgn/CRa96mhqq8w4miMlnMqGSY/IF4p25yL+iQ/FIVm2rVqqhqqfw6ISz286/TD8dFzj1E+98Jnz0+pEhpmzRpsgc2HQtfO9OskC56p22u+j7uf24
Gzj56LcsFVDH3Nl7AsQFqCEe8oIUS3EGIa/Q3gTQCeFkIsZG+7FMDT4d+/AXCZEKIshFgK4GgAD7d22I3ji289CZe98jCcviwZ851FlDWNseL52NdfwZyeUrwszK6NNOjb5Hyy18TJEowJU3ocfYM2mQ6ZlkFaLrijzgoejqQb0pLTom4CIzNU9ZTwRILtmujHpMtZsTH68FR12ZSkm2kdRaNGT9ANMjf0VCzvklcsSujrpYJjdrhasmZJhzddn7QIGr5vVtjNjq4hbf/tM9vxgR+sxHf/tAEAlEqeWWH0CwD8Mhx4AcBPpJR3CCF+JIRYjmBltgHAhwBASvmMEOJGAM8CqAG4crwjbgBg0cxOfOEtJ433MOrG9I6i8npffwU1X2J2dxkvuQMAsm0kk1E3wf9ZLOOQBlPkEO/GNJo8ALquJ4alI1oJGltk6FMin4ZZaF/Dhj4lPyIy9MaaNep7CCTd9JQLVkZf8XzFiA8bnLE2TV4HGXOTL6RkkW5o0jBOAuEE+KnzX463M7mIfsuH1u0NPhvOjKONOBotRjT0Usp1AE42bL/c8pnPAfhcc0Ob2uB9bgFgZ1jidU43Y/QZNpKpjL6O8MSsQNfoa2GSFxA0Re+veA3LZ/Ond+BXV56FYxdMa9k4dVAcfVq9IWLEgaFXs60Bu5yWFnVDKf2mOPrYGWtOmJrWUTBq9IThqqdUkuSTCTF6W7SMDpsOH7F2wwojLtyW7kjXj0kSJvkbKOR2gPsnxsBvOLFiDqcQpndqhj5sJBE4Y9Or72UFaVE3CWdsps/BwOhDA5DWFL0eLD9sptKysNWIo27Mky0x+qqXdC4DdgKRVsOoYsn0THXGVjwIEUxMNvlCl3yI0XcUnejvRjLeidHb6uBYJwGLfq9PWGkroIGKh2kdhdTjtRq5oc8oZnaqbQZ3hIw+CK/MftSNrl37mmxAyLIzVo+35hp9Wmet8YTuANSjQ+JaN7G/xKZFm5AWsplWShtQcyc4Biseuoouio5jTRLTJ4ihSPIpRuy+kfuoXExn9HQ9Tc5yxzoJmJPd9GsQkAUflZofyXiZkG5yjA906WZHyOjn9MThldlmw2ZGr0dlZFl+IsZF4CUQbPVSxgt3fvw1eHJTb/Q6VaMPr0HNk8b68TYCkVglSInBioc1O4PEJpPRIkaecMZWPXSWXBRcoRjIbb2DSktHnRwM1Ty4jkBH0cHuPjUztx50RIw+nZkbWXv4FeZkN/PndCJQ82Uk24wlo88NfUYxQ5NuHlizB44A5vWUI1aV5WRfU1YpMLEYve4QD4qaBeMvj7K/aztx1PxpOGp+rP0nG6fojD6ZAAaM5IxNRt184sYn4u8w1JyP4vYNztiOogvXEUpM/ZnX/F7pJZxk9D46Co6yumhIurG0GbQaetu+FI3elIsRRxsVjJ9pB3JDn1F0aRru9gNDWDijAwXXmSCM3mzoh6oeygUnevizLD/pfhJVo0+vcpgVJK9BUkc3aer8nN5/1lJceFKcwGVi9Lc/vZ19h8RQ1cPaXX04/tAZqISlLoAko991MMj0LoQRTBw8kSvB6KseyuEEYRrzSIjKQVt0ePMkkF6srhhJN3Yi4/kyiqEnIuFZHNGtQoY54dSGLZaajEyG7bw14oNLIlk2lNM7VR5U8/1I557THfhQxqr64GiQJp/xyJhhQ9w7X6WctnQWTj0izj3Rj7k1bDjOv+Pd330YF371T6jUfCVztaoZ7DU7+3DU/B64I2j0vH7+tt5BXP/QRhQcoRjRRlZWkTG3SDdmSQupn7v8zCOw/LCZ+KvTDtM+kzT0F371PgBjy+hzQ59h/NcVZ+DfL1sevaaYZwrR4q3csoY0Rj9Y9RT9NUvSh45pHUlGT0ZnwfQOAMABQ3hiVlCPfGaKe1eZsmoiiloY472rgzpVP/8fZ+INL1+Amifx8PogZrzmq5m3nNEfHKpiW+8QjprfozB6UyQMl24+c/OzAIJwY0W6aSBzrWCpKbRkbjcA4M0nL0zsIzlp4YyOxL4FYdjs/GnqPv05eHprb8Toe8jQj0G2fi7dZBinL5uDp7fEzrXjDg1qoRw2O6hKqLOpLMFUvVJKiaGqj+5SfNtlyZmpY7rmjK16fmQkI0M/mEw4ygoSGr2hw5QpQZdfu0QNF+31A2t3Y25PCaccPgvlgqMYc8+XSkIWZ67rw7jyI+f14NmtByKpZMCwQhqu+bj5ya3Y268SG2VCakCjn9UVrMb++owjEvsWTO9ILUPx+pfNx7XvOgVvOG5B3d+lPwf3r4krvdskpFYjN/QZB39w/uOvTwUALJ4VMPptvUPjMqZ6oEtPnhc75XqYAc2yM5ZPSEDgBCwXgnM4ZAIw+qR0E3eY4n6S8084BB99w9Hsc/E10SdivQTCgaEajls4HUIIFFyBA4Px7+H5UmH0nK3vD+vIzw1LepCxGxhOGvpKzcdVP30cAPD2FYuj7dxf0MjKsKPoYv01F6TKo2lljIUQOP/EJNO3Qb8G/eH5ffUdr8D0jgK+d/+GXKPPEVfM+9A5yzA71IUXheVveb2MrMOTMtKDp5UnBqPXxzZc82LpZgYx+iwb+nTphmeaLpnbjZcdMj167VgYvSm6heQc1xGKzFLzJX784EuJ7wfijNauUkHR6E33NPeDcPLLjWijvp7R1hNqFPq4KLrowhMXsoq6OaOf8jhh0Qz87G/OxCmHz4q2dRRdfOr8l+GMZXPGcWSNYbjmY3uYC8A1+iwzekJ3yUV/xcNQ1UdnkaSbQK/tr2TXGZss4SDh+xIVz0d3uRD1IE4Yc0s0i17RE4grWhYcoWj+D67bg1tWbQNAZQ4kbntqG+5dvQuvDBvKd5cKikY/YPg9KUYfiFcCgDrpZPU+SmtMQ/+API4+R4hXLklW3PzQOUeOw0hGj5uf3IqbnwyqVU8UZywA3PvJ16KnXMDpn78bQ1UPw7VgWb9gWtIhlzWYyiyTXMOvgR6my18nDH3BwOhZnXZez4ZH9MzsKqLmSXz4+scAAMcvmh6Ow4XrMunGYOh5XfrewQoOn92Fm/7mTPzvn6+Kx5xRQ28aFklOhZTY+3YgN/Q5xhw95VgDzeoDSjhiThCFQZp2VAo4PIdXHD5z3MY2EhKNRzw/kp74NbDJM/oxTNEtxajNoFmmAIKSHjzKhbT77nIBRUfEzliDdMO37R+o4vhDp2P+9I7IX5BVNg8EEhH3QQDqxAjkjD7HBAfd4PqN3lMuKu+ZCOgouiGjDwxSyXXwwNWvT2QwZwkmjT5yiFv8JHyVpR/DFI1SYho9Ry/zX/SUC9jCosT6h2twRDCBuo4Dz5N4ccdBI6PvZw7afQNVzAyjZiZCzScgGB+//8taS8Kc0eeY0JjTU8KOA0G8c82PH9buCcToCeWCg417B3D9QxtRch0IIYw9YbME/bet+bFD3OYn4YZTl3U6SumNw/XjcEd1wRVKBM7BoRq6y4UoWufgcA1v/Mofo0ADghBq7fa+4WqUaESMOOtkoeAI8IaNMaMnjT6PuskxgTGnO3BY6kk5PRPMGQsEjP6+F4MY6InSYJ5DC
DXhi18DmzyjR9l0FpOhh7HmrJoTYvRfetvJKLqOkpHbO1iNwlf59xPr//a7V+CGK85A0XGU+2eo6kcF5SimP+tkQR+f3mQ8z4zNMaHxb287CWcfNRcvZ6F7gJqRmOV6PRxlg4GbSCgXghDGa/+wFoDmENcMEW9jqU/EZkNvZtZk6C9ZfihcR2CIhV4eGKxGfg49CQsAzjpqDs5YNgeuIxJlGqgESIFF+2QZ+j2uM/qxCK/MDX2OtuH4Q2fgxx88PSrQduoRs3De8Qtw2tI4LLSRqoPjiXIDHYyyiJLr4MBQFb94fAuAIGSUoBtKfq66kdJZO6DG0XPsH6iio+ig4DqJwmW9g9VoVaGXWaDxBt8nEitCKjNc0JyaEwWx1DV2mbET6xfKMSFBBuDERTPwrctXqEXNJgij7yhO7EelVHCxeV/sDO22OGOF4owd+bxTNfqhauR4N2XUxk3M9e+P75mi6yQKx5F0Q9+X5XLdgClEVV2R5Bp9jkkBshtUO8aWYp9VmGrCTCToKxJe2MwmfdRTQybSnA0aPYVxmhy1XQaNHgiMO002riMwlCbdOOr/WYV+fiVNcsoZfY5JAYqaoPrutqJZWcW+gexWCq0HXHd/x2mH45xj5kWvbX6SelZcxRStvHewGtU10o3dQMVjBjs9hNOk30fO2AkSXpk4P03qynvG5pgUoFT7edOCKBzOwCaKM3bfQHZr2thARoXqC83qKuKavzxRDa+0sPZ6jKjuXCQcHKqhq2huVD5QqaVKPvyWMK0oOrXG7Fk39KYVC5DeZ7YdyA19jraDugVRxUelRskEccbu65+YjJ66GM0KC+KRgeeG18roGzD0ptVZOaUTly8Zs7WI7KbaOuWiKhVl3M4nGqrTBOe6GWP0QogNQoinhBBPCCFWhttmCyF+J4R4Mfx/VrhdCCG+KoRYI4RYJYQ4pZ0nkCP7II31kLDio1JHfIIw+rHQUdsB6pJFNdjpt+d6uk3jrsfQxxEytmSq9Ixaq4/AIt2QLyDcLwAAELdJREFU1p31K5PqjM2oRv86KeVyKeWK8PXVAO6WUh4N4O7wNQCcD+Do8N8VAK5t1WBzTGxQsw4lfC/rdCzE5y89EccfOh3nHb8AHzpn2XgPp25Qlyxi8NTvlGvftqiVenwoNq28nKLDA3Hdd2szcsPgKLySPj8WjLgZJMJXE3H07Y+6aaYEwsUAXhv+/QMAfwDwv8PtP5RSSgAPCiFmCiEWSim3NTPQHBMfxMSEpZZKVvHO0w/HO08/fLyH0TCWze3Gk5v2R5INRfJxA2qLQ6+L0RfMkTVAUqZQPzcyo9e1fSCOuukIwzP1XrRZQ7pGnz1GLwHcKYR4VAhxRbhtARnv8P/54fZFADaxz24Ot+WYoviXS07AW05ZrGw7an4PgOw70iY6PnvJCfjsJSfg3JcF7e+I/bp1Mvp6nOVpJRCAmHXbJgHT5+KxpUs3HeFEURkDZ2YzSLRjDEs9CxHUpM9S9cqzpJRbhRDzAfxOCPG85b2mOyNxJuGEcQUAHH74xGNKOerH5Wccgcu1/pz/dcUZ+NOa3YkG3Dlai+5yAX99xhF4anPQe9gj6UYpRZw0tD94/2n48YMvGRn1m45bgDuf3RG9tjHzeF+6Rm+qiEkwOWPJ0HcSo8947aFkHL2r7MtM9Uop5dbw/51CiF8COA3ADpJkhBALAewM374ZwGHs44sBbDUc8zoA1wHAihUrsj0l52g55vSUcfHyfKE3ViCnrAwNvVK4zGCgzzlmnhJrz/HNd52Cqifx8n+6A0B6eCVgnwTIV2MrL2Fm9KF0U5gYhl5fsfDmLbf9z7OjssvtxIjSjRCiWwgxjf4G8CYATwP4DYD3hG97D4Bfh3//BsC7w+ibMwD05vp8jhzjC9LoSYqpN7zShILrRGw6ONbIxtwYkVOHoTeF35KBnyiMnn4X+r/Mfouj5k/D3J5y+8dQx3sWAPhl6EArAPiJlPIOIcQjAG4UQnwAwEYAbwvffxuACwCsATAA4H0tH3WOHDkawpzuEt5/1lK89dTAV6I4xJvMZbBp9FZZJ3x/yWboDZ+jSC1i9tWMa/Su5nQdj0qoIxp6KeU6ACcbtu8BcK5huwRwZUtGlyNHjv+/vfsPlqus7zj+/pBASPgRktAECIEEW00xBiIQpoBMrEVTBCHWSOLYJgotUyczYkuro05k6IzNCMax7YyMJfzQUgwSqdRqHdriCOqkQJqQYFAsUAyGBCc2IYYScvn2j/OcsNns7v21u2f3nM9r5s7d8zxnz37vvXu+99nnPOd52kISKy8/s2HdaO9ObnaHKzRffQpeH3rZMtG36L8/uk+mjq7/vUw5pvNdNfV8Z6xZxY12iGujOYxy+aibRhd184uS+T6NNHpertHc+L2o/mL3lC501dRzojeruNEOcc3XzW3dom8x6qauRV97lFZj/Pu1RT/ZLXoz67ZRJ/oJjeech8MXwq7V7GJsbY97o9krc33Toq/7VHLisU70ZtZlo030+cyYjUbPjKtbZKPWUC7GtoqtX1r01158xiErernrxsy6brSJPh/BM+GowxPvUUMYYz+ci7G1h+mX5R3nnnoCT9y48OD2MQ1+T502mrluzKyP5bfft2u+ofENEti4utWgDqlr0nVTG02zRTugfybEy930vrn8x5M7Dxna2i1O9GYVNW7sEezbP9C2xV/ypQFr5SNrGnXdjBvCFAj5845QmsO+1xeIbWHxuTNYfO6MwXfsACd6s4o6+sgx7Ns/0LaJ5RpdHB3KPDitWrj5Xbfjxo7h5VcHDs6UmfvwhbM4b+akEcdcFU70ZhWVz/440tkTP/3u3z64PCQ0G0fffAqERhdTP3D+aSw57/VW75iau2BffnXg4GIjuWY3gdmhnOjNKmreaZP4xebthy11N1TXvG3wBVjyfvt8rH2tEyYcXvbZRW85ZDsfXpmPp2914daac6I3q6ibF5/FsgtmHlzisROmHZcde3KDGRpb3RGbyxP8gbRiylCeY4fzv0ezihp/1Bjmz5rc0dfIp0eePMKbhPKLscemsfrnzXJ//Ei4RW9mHZNfaB3p2PF8rpvpJ4znlg+ewxunHde22KrEid7MOm6kY8dr57qZM31iu8KpHHfdmFnPyi++7u/xxUV6nVv0ZtZ2t3zwHE6fMqFpff3dsJtWvrPhatOXzjmJ7255gXfNOandIVaKE72Ztc0Nl5/J49t2s7BBYr5/xYXsfeUAu/e9yptPObQbZmKDoZaQTQD2D9ec35FYq8SJ3szaZvmFs5rWzT31hC5GYrXcR29mVnJO9GZmJedEb2ZWck70ZmYl50RvZlZyTvRmZiXnRG9mVnJO9GZmJaeIka0u09YgpJeAnwATgd0NdmlWDnAa8FyTulbPc1376noljirXjfR4Pn/aV1dEHCdFxOBTekZE4V/Ao+n7l5vUNyxPdS+2qGv1PNe1qa5X4qhy3SiO5/On+L/BiOvy3DnYV6913fzzMMsB/ncEx3Nde+t6JY4q1430eD5/2lfXK3Ecple6bh6NiHO7/VyzqvP509+G+vfr
lRb9lwt6rlnV+fzpb0P6+/VEi97MzDqnV1r0HSdpoaSfSPqZpE+ksjWSNkl6XNK9ko4tOs5mJN0maaekLTVlkyU9IOmp9L1nV05uEv9aSRvT17OSNhYZYyuSZkh6UNJWSU9I+mhd/fWSQtKJRcVYBU3O43dI2pDeRw9L+s2i42ymyXlwg6Tna86FS9v+ulVo0UsaA/wUuATYBjwCLAW2RcSetM9qYGdErCos0BYkXQzsBb4SEXNS2eeAXRGxKr3pJ0XEx4uMs5lG8dfVfx7YHRE3dj24IZB0MnByRGyQdBzwGHBlRPxY0gzgVmA2cE5E/LLIWMuqxXn8T8AVEbFV0keA+RGxvLBAW2hyHt8A7I2Imzv1ulVp0c8HfhYRT0fEfuBrZG+MPMkLGA/07H+9iPg+sKuu+ArgzvT4TuDKrgY1DE3iBw7+/t8P3N3VoIYhIrZHxIb0+CVgKzA9VX8B+Et6+P1TEg3PY7Lf+/Fpn4nALwqKb1CtzoNOqkqinw78vGZ7WypD0u3AC2Stsb/tfmijMi0itkOWiICpBcczUm8DdkTEU0UHMhSSZgLzgPWS3gM8HxGbCg2qGpqdx9cA35a0DfhDoCc/lQ9iRepCvq0TXbBVSfQNlh3OWl8R8SHgFLIW2lXdDMoOWkoPt+Zrpes464DrgAPAp4CVhQZVHc3O448Bl0bEqcDtwOquRjV6XwLeAJwNbAc+3+4XqEqi3wbMqNk+lZqPdxExAKwF/qDLcY3WjtR3nPch7yw4nmGTNBZ4L9nvv6dJOpIsyd8VEd8gOzlnAZskPUv2vtog6fCVsa0dGp3HO4GzImJ9KlsLXNDtwEYjInZExEBEvAb8PVkXVVtVJdE/AvyWpFmSjgKWAPfnV+dTH/HlwJMFxjgS9wPL0uNlwDcLjGWkfg94MiK2FR1IK+k9sgbYGhGrASJic0RMjYiZETGTLBG9NSJeKDDUMmt4HgMTJb0x7XMJ2afzvpE31pJFwJZm+47U2HYfsBdFxAFJK4DvAmOA28jeDA9JOp7sI+Em4E+Li7I1SXcDC4ATU1/kZ8j6Iu+RdDXZxFSLi4uwtUbxR8QaspO1H7ptLiTr/91cMwz0kxHx7QJjqpRG53FEbJL0x8A6Sa8BvwI+XGScrTQ5jxdIOpusG+pZ4Nq2v24VhleamVVZVbpuzMwqy4nezKzknOjNzErOid7MrOSc6M3MSs6J3sys5JzozcxKzonezKzknOjNzErOid7MrOSc6M3MSs6J3sys5JzozcxKzonezKzknOjNzEqubxK9pEWSQtLsomMx6wfpfPlqzfZYSS9K+laRcVn39U2iJ1tA+mGyFYmGTNKYzoRj1vN+DcyRND5tXwI8X2A8VpC+SPSSjiVbyu1qUqKXtEDS9yXdJ+nHkm6RdESq2yvpRknrgd8pLnKzwn0HeHd6vJSaZRslzZf0Q0n/lb6/KZU/lJa2y/f7gaS5XY3a2qovEj1wJfCvEfFTYJekt6by+cCfA28B3gC8N5UfA2yJiPMj4uGuR2vWO74GLJF0NDAXWF9T9yRwcUTMA1YCn03ltwLLAdKi2+Mi4vGuRWxt1y+JfinZG5b0fWl6/J8R8XREDJC1VC5K5QPAuu6GaNZ7UoKeSXbO1C9kPhH4uqQtwBeAN6fyrwOXSTqSbKHtO7oSrHXM2KIDGIykKcDvkvU1Btnq70H2pq1f2Tzf/r+U/M0M7gduBhYAU2rK/wp4MCIWSZoJfA8gIvZJegC4Ang/cG4XY7UO6IcW/fuAr0TE6RExMyJmAM+Qtd7nS5qV+uavIrtYa2aHug24MSI215VP5PWLs8vr6m4F/gZ4JCJ2dTY867R+SPRLgfvqytYBHwB+BKwCtpAl//r9zCovIrZFxBcbVH0O+GtJPyD7pFz7nMeAPcDtXQjROkwR9b0f/UHSAuD6iLis6FjMykbSKWRdObMj4rWCw7FR6ocWvZl1kaQ/Ihud8ykn+XLo2xa9mZkNjVv0ZmYl13OJXtIMSQ9K2irpCUkfTeWTJT0g6an0fVIqny3pR5JekXR9zXHeJGljzdceSdcV9XOZmRWl57puJJ0MnBwRGyQdBzxGdmfscmBXRKyS9AlgUkR8XNJU4PS0z68i4uYGxxxDNozs/Ij4n279LGZmvaDnWvQRsT0iNqTHLwFbgelkN2/cmXa7kyyxExE7I+IR4NUWh30H8N9O8mZWRT2X6Gulu/XmkY0AmBYR2yH7ZwBMHcahllAzmZOZWZX0bKJPM1auA66LiD2jOM5RwHvI5u8wM6ucnkz0aTKldcBdEfGNVLwj9d/n/fg7h3i43wc2RMSO9kdqZtb7ei7RSxKwBtgaEatrqu4HlqXHy4BvDvGQh8zBbWZWNb046uYi4CFgM5DflfdJsn76e4DTgOeAxRGxS9JJwKPA8Wn/vcCZEbFH0gTg58AZEbG7uz+JmVlv6LlEb2Zm7dVzXTdmZtZeTvRmZiXnRG9mVnJO9GZmJedEb2ZWck701tckDaTZSZ+QtEnSn6U1hGv3+aKk5/NySR+qmdV0v6TN6fEqScslvVg38+lZNY93SXomPf43STMlbUnHXSApJF1d89rzUtn1afuOmudvlPTDbv6+rJrGFh2A2Si9HBFnA6SZTP+RbNHrz6SyI4BFZPdTXAx8LyJuJ62FKulZ4O0R8cu0vRxYGxEr6l4nf407gG9FxL1pe2bdfpvJFqpfk7aXAJvq9vmL/Plm3eAWvZVGROwE/gRYke6wBng72eLxXyK7S7rTngOOljQtxbAQ+E4XXtesKSd6K5WIeJrsfZ3PbppPgXEfcFmaR2kwV9V13YwfZhj3AouBC4ANwCt19TfVHPuuYR7bbNjcdWNlJDg4c+mlwMci4iVJ64F3Av8yyPMbdd0Mxz3AWmA22T+ZC+rq3XVjXeUWvZWKpDOAAbLZTReS9ddvTn3xF9GF7puIeIFsIZxLgH/v9OuZDcYteisNSb8B3AL8XUSEpKXANRFxd6o/BnhG0oSI2NfhcFYCUyNi4PXLBWbFcKK3fjde0kbgSOAA8FVgdZq59F3AtfmOEfFrSQ8Dl5N1rTRzVZpFNfeRiBjWMMhB9r9J0qdrtudHxP7hHN9sODx7pZlZybmP3sys5JzozcxKzonezKzknOjNzErOid7MrOSc6M3MSs6J3sys5JzozcxK7v8B9qMhNRxk5scAAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"timeseries[0].loc[\"2017-04-01\":\"2017-05-14\"].plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import electricity dataset and upload it to S3 to make it available for Sagemaker"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a first step, we need to download the original data set of from the UCI data set repository."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, we load and parse the dataset and convert it to a collection of Pandas time series, which makes common time series operations such as indexing by time periods or resampling much easier. The data is originally recorded in 15min interval, which we could use directly. Here we want to forecast longer periods (one week) and resample the data to a granularity of 2 hours."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Train and Test splits\n",
"\n",
"Often times one is interested in evaluating the model or tuning its hyperparameters by looking at error metrics on a hold-out test set. Here we split the available data into train and test sets for evaluating the trained model. For standard machine learning tasks such as classification and regression, one typically obtains this split by randomly separating examples into train and test sets. However, in forecasting it is important to do this train/test split based on time rather than by time series.\n",
"\n",
"In this example, we will reserve the last section of each of the time series for evalutation purpose and use only the first part as training data. "
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"# we use 1 hour frequency for the time series\n",
"freq = '2H'\n",
"\n",
"# we predict for 7 days\n",
"prediction_length = 7 * 12\n",
"\n",
"# we also use 7 days as context length, this is the number of state updates accomplished before making predictions\n",
"context_length = 7 * 12"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We specify here the portion of the data that is used for training: the model sees data from 2014-01-01 to 2014-09-01 for training."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"start_dataset = pd.Timestamp(\"2017-04-01 00:00:00\", freq=freq)\n",
"end_training = pd.Timestamp(\"2017-12-01 00:00:00\", freq=freq)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The DeepAR JSON input format represents each time series as a JSON object. In the simplest case each time series just consists of a start time stamp (``start``) and a list of values (``target``). For more complex cases, DeepAR also supports the fields ``dynamic_feat`` for time-series features and ``cat`` for categorical features, which we will use later."
]
},
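{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, a single record in this format looks like the following sketch; the `cat` and `dynamic_feat` values here are hypothetical and only show the shape of the optional fields."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a minimal sketch of one DeepAR training record (values are made up)\n",
"example_record = {\n",
"    \"start\": \"2017-04-01 00:00:00\",    # timestamp of the first target value\n",
"    \"target\": [657.0, 695.5, 681.5],   # the time series values\n",
"    \"cat\": [0],                        # optional: integer categorical features\n",
"    \"dynamic_feat\": [[1.0, 0.0, 1.0]]  # optional: one custom feature series, same length as target\n",
"}\n",
"print(json.dumps(example_record))"
]
},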
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1\n"
]
}
],
"source": [
"training_data = [\n",
" {\n",
" \"start\": str(start_dataset),\n",
" \"target\": ts[start_dataset:end_training - 1].tolist() # We use -1, because pandas indexing includes the upper bound \n",
" }\n",
" for ts in timeseries\n",
"]\n",
"print(len(training_data))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As test data, we will consider time series extending beyond the training range: these will be used for computing test scores, by using the trained model to forecast their trailing 7 days, and comparing predictions with actual values.\n",
"To evaluate our model performance on more than one week, we generate test data that extends to 1, 2, 3, 4 weeks beyond the training range. This way we perform *rolling evaluation* of our model."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4\n"
]
}
],
"source": [
"num_test_windows = 4\n",
"\n",
"test_data = [\n",
" {\n",
" \"start\": str(start_dataset),\n",
" \"target\": ts[start_dataset:end_training + k * prediction_length].tolist()\n",
" }\n",
" for k in range(1, num_test_windows + 1) \n",
" for ts in timeseries\n",
"]\n",
"print(len(test_data))"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'start': '2017-04-01 00:00:00',\n",
" 'target': [657.0,\n",
" 695.5,\n",
" 681.5,\n",
" 663.5,\n",
" 708.0,\n",
" 675.5,\n",
" 626.5,\n",
" 603.0,\n",
" 619.5,\n",
" 682.0,\n",
" 658.5,\n",
" 608.0,\n",
" 598.0,\n",
" 654.5,\n",
" 639.5,\n",
" 606.5,\n",
" 588.5,\n",
" 583.0,\n",
" 570.5,\n",
" 546.0,\n",
" 612.5,\n",
" 679.5,\n",
" 662.0,\n",
" 603.0,\n",
" 603.5,\n",
" 661.5,\n",
" 671.0,\n",
" 711.5,\n",
" 736.5,\n",
" 721.5,\n",
" 679.5,\n",
" 686.0,\n",
" 683.5,\n",
" 718.5,\n",
" 688.0,\n",
" 627.0,\n",
" 623.0,\n",
" 691.0,\n",
" 689.0,\n",
" 711.0,\n",
" 733.5,\n",
" 719.0,\n",
" 683.0,\n",
" 686.5,\n",
" 688.5,\n",
" 703.0,\n",
" 677.0,\n",
" 618.0,\n",
" 612.0,\n",
" 669.0,\n",
" 668.0,\n",
" 657.5,\n",
" 702.0,\n",
" 682.5,\n",
" 654.5,\n",
" 687.5,\n",
" 676.0,\n",
" 699.0,\n",
" 660.5,\n",
" 601.5,\n",
" 603.0,\n",
" 660.0,\n",
" 653.0,\n",
" 644.5,\n",
" 726.0,\n",
" 728.0,\n",
" 722.0,\n",
" 739.0,\n",
" 735.0,\n",
" 725.0,\n",
" 666.0,\n",
" 602.5,\n",
" 592.0,\n",
" 652.5,\n",
" 633.0,\n",
" 633.5,\n",
" 699.5,\n",
" 689.0,\n",
" 650.0,\n",
" 685.5,\n",
" 685.0,\n",
" 679.5,\n",
" 624.5,\n",
" 575.0,\n",
" 565.0,\n",
" 619.5,\n",
" 600.0,\n",
" 568.5,\n",
" 618.0,\n",
" 641.0,\n",
" 629.0,\n",
" 616.5,\n",
" 615.0,\n",
" 637.5,\n",
" 596.0,\n",
" 545.5,\n",
" 532.5,\n",
" 593.0,\n",
" 581.0,\n",
" 526.5,\n",
" 534.5,\n",
" 529.5,\n",
" 537.0,\n",
" 548.0,\n",
" 558.5,\n",
" 616.5,\n",
" 584.5,\n",
" 533.0,\n",
" 526.0,\n",
" 605.5,\n",
" 607.5,\n",
" 623.5,\n",
" 699.5,\n",
" 736.0,\n",
" 715.0,\n",
" 730.0,\n",
" 723.0,\n",
" 727.0,\n",
" 672.5,\n",
" 607.0,\n",
" 581.0,\n",
" 641.5,\n",
" 641.5,\n",
" 661.5,\n",
" 754.0,\n",
" 772.5,\n",
" 726.5,\n",
" 731.0,\n",
" 724.5,\n",
" 726.0,\n",
" 673.0,\n",
" 610.0,\n",
" 591.5,\n",
" 651.5,\n",
" 656.5,\n",
" 656.5,\n",
" 716.5,\n",
" 702.5,\n",
" 677.5,\n",
" 670.5,\n",
" 670.0,\n",
" 698.5,\n",
" 662.5,\n",
" 601.5,\n",
" 577.0,\n",
" 642.0,\n",
" 647.0,\n",
" 659.5,\n",
" 683.0,\n",
" 689.0,\n",
" 656.0,\n",
" 660.0,\n",
" 665.0,\n",
" 685.0,\n",
" 648.0,\n",
" 592.5,\n",
" 589.0,\n",
" 653.0,\n",
" 661.5,\n",
" 658.0,\n",
" 681.0,\n",
" 678.5,\n",
" 649.5,\n",
" 667.5,\n",
" 658.5,\n",
" 666.0,\n",
" 630.5,\n",
" 578.5,\n",
" 560.5,\n",
" 621.5,\n",
" 611.5,\n",
" 586.5,\n",
" 622.5,\n",
" 614.0,\n",
" 593.5,\n",
" 593.5,\n",
" 600.5,\n",
" 621.5,\n",
" 583.0,\n",
" 525.5,\n",
" 516.5,\n",
" 583.0,\n",
" 564.0,\n",
" 501.0,\n",
" 509.5,\n",
" 523.5,\n",
" 520.5,\n",
" 515.0,\n",
" 540.0,\n",
" 587.0,\n",
" 558.0,\n",
" 504.5,\n",
" 500.0,\n",
" 567.0,\n",
" 575.0,\n",
" 583.5,\n",
" 684.0,\n",
" 719.0,\n",
" 709.0,\n",
" 732.0,\n",
" 720.0,\n",
" 693.5,\n",
" 628.5,\n",
" 583.0,\n",
" 564.0,\n",
" 629.0,\n",
" 635.5,\n",
" 637.0,\n",
" 689.5,\n",
" 693.5,\n",
" 678.5,\n",
" 683.5,\n",
" 678.0,\n",
" 687.5,\n",
" 649.0,\n",
" 603.0,\n",
" 587.0,\n",
" 647.5,\n",
" 655.5,\n",
" 630.0,\n",
" 667.0,\n",
" 690.0,\n",
" 672.0,\n",
" 681.0,\n",
" 679.0,\n",
" 688.5,\n",
" 650.0,\n",
" 609.0,\n",
" 593.0,\n",
" 655.5,\n",
" 665.5,\n",
" 639.5,\n",
" 694.5,\n",
" 693.0,\n",
" 653.5,\n",
" 701.5,\n",
" 699.5,\n",
" 710.5,\n",
" 663.5,\n",
" 602.5,\n",
" 583.5,\n",
" 648.5,\n",
" 644.0,\n",
" 639.5,\n",
" 656.5,\n",
" 657.0,\n",
" 632.0,\n",
" 652.5,\n",
" 661.0,\n",
" 672.0,\n",
" 632.0,\n",
" 577.5,\n",
" 555.0,\n",
" 620.0,\n",
" 617.0,\n",
" 566.5,\n",
" 591.0,\n",
" 613.0,\n",
" 586.0,\n",
" 579.5,\n",
" 585.5,\n",
" 625.0,\n",
" 594.0,\n",
" 539.5,\n",
" 525.0,\n",
" 595.5,\n",
" 589.0,\n",
" 530.5,\n",
" 504.0,\n",
" 512.0,\n",
" 518.0,\n",
" 504.0,\n",
" 534.0,\n",
" 595.0,\n",
" 579.0,\n",
" 534.0,\n",
" 522.0,\n",
" 598.0,\n",
" 615.5,\n",
" 595.5,\n",
" 634.0,\n",
" 661.5,\n",
" 642.5,\n",
" 659.0,\n",
" 665.5,\n",
" 674.0,\n",
" 633.0,\n",
" 585.0,\n",
" 552.5,\n",
" 613.5,\n",
" 615.5,\n",
" 587.5,\n",
" 636.5,\n",
" 662.5,\n",
" 640.5,\n",
" 670.0,\n",
" 663.0,\n",
" 670.0,\n",
" 621.5,\n",
" 576.5,\n",
" 543.5,\n",
" 597.0,\n",
" 604.5,\n",
" 601.5,\n",
" 690.0,\n",
" 703.5,\n",
" 678.0,\n",
" 702.5,\n",
" 690.0,\n",
" 685.0,\n",
" 631.5,\n",
" 573.0,\n",
" 548.5,\n",
" 613.5,\n",
" 636.0,\n",
" 613.0,\n",
" 676.5,\n",
" 681.0,\n",
" 656.0,\n",
" 678.0,\n",
" 666.0,\n",
" 681.5,\n",
" 647.5,\n",
" 593.0,\n",
" 576.0,\n",
" 631.0,\n",
" 649.5,\n",
" 622.5,\n",
" 652.0,\n",
" 666.0,\n",
" 647.5,\n",
" 661.5,\n",
" 663.0,\n",
" 667.5,\n",
" 645.0,\n",
" 583.5,\n",
" 556.0,\n",
" 611.0,\n",
" 599.5,\n",
" 538.5,\n",
" 559.5,\n",
" 570.0,\n",
" 560.5,\n",
" 553.5,\n",
" 559.5,\n",
" 589.0,\n",
" 566.0,\n",
" 513.5,\n",
" 501.5,\n",
" 569.0,\n",
" 563.5,\n",
" 509.0,\n",
" 495.0,\n",
" 511.5,\n",
" 515.0,\n",
" 510.5,\n",
" 537.5,\n",
" 566.5,\n",
" 542.0,\n",
" 495.5,\n",
" 477.0,\n",
" 541.5,\n",
" 550.5,\n",
" 529.5,\n",
" 598.5,\n",
" 624.0,\n",
" 608.0,\n",
" 613.5,\n",
" 610.0,\n",
" 620.5,\n",
" 582.0,\n",
" 517.0,\n",
" 499.5,\n",
" 562.0,\n",
" 579.5,\n",
" 535.5,\n",
" 565.5,\n",
" 600.0,\n",
" 581.5,\n",
" 587.0,\n",
" 576.5,\n",
" 594.5,\n",
" 564.5,\n",
" 520.5,\n",
" 499.0,\n",
" 562.5,\n",
" 559.0,\n",
" 479.5,\n",
" 492.5,\n",
" 529.5,\n",
" 532.5,\n",
" 536.0,\n",
" 533.0,\n",
" 566.5,\n",
" 541.0,\n",
" 487.5,\n",
" 467.0,\n",
" 529.0,\n",
" 533.0,\n",
" 469.0,\n",
" 477.5,\n",
" 508.5,\n",
" 525.5,\n",
" 518.5,\n",
" 535.0,\n",
" 566.5,\n",
" 531.0,\n",
" 478.5,\n",
" 467.5,\n",
" 520.5,\n",
" 524.5,\n",
" 467.5,\n",
" 479.0,\n",
" 511.0,\n",
" 529.5,\n",
" 535.0,\n",
" 556.0,\n",
" 580.0,\n",
" 554.5,\n",
" 500.5,\n",
" 469.5,\n",
" 519.0,\n",
" 516.5,\n",
" 479.5,\n",
" 516.0,\n",
" 536.0,\n",
" 532.0,\n",
" 518.5,\n",
" 525.5,\n",
" 550.0,\n",
" 519.0,\n",
" 469.0,\n",
" 468.5,\n",
" 524.0,\n",
" 529.0,\n",
" 474.0,\n",
" 494.5,\n",
" 519.5,\n",
" 532.0,\n",
" 525.0,\n",
" 549.5,\n",
" 577.0,\n",
" 553.5,\n",
" 505.0,\n",
" 481.0,\n",
" 542.5,\n",
" 567.5,\n",
" 548.0,\n",
" 622.0,\n",
" 684.0,\n",
" 666.5,\n",
" 677.5,\n",
" 673.5,\n",
" 670.0,\n",
" 624.0,\n",
" 574.0,\n",
" 548.5,\n",
" 598.5,\n",
" 621.5,\n",
" 597.5,\n",
" 676.5,\n",
" 691.0,\n",
" 666.5,\n",
" 703.0,\n",
" 693.5,\n",
" 673.0,\n",
" 616.5,\n",
" 558.0,\n",
" 530.0,\n",
" 595.5,\n",
" 610.0,\n",
" 592.0,\n",
" 673.5,\n",
" 682.5,\n",
" 647.5,\n",
" 674.0,\n",
" 665.5,\n",
" 664.5,\n",
" 611.5,\n",
" 562.5,\n",
" 531.5,\n",
" 592.0,\n",
" 601.0,\n",
" 567.0,\n",
" 647.5,\n",
" 692.0,\n",
" 684.5,\n",
" 704.5,\n",
" 696.5,\n",
" 679.0,\n",
" 626.5,\n",
" 561.5,\n",
" 531.0,\n",
" 584.5,\n",
" 588.5,\n",
" 555.0,\n",
" 646.0,\n",
" 676.0,\n",
" 672.5,\n",
" 715.5,\n",
" 695.5,\n",
" 671.5,\n",
" 617.0,\n",
" 560.0,\n",
" 533.5,\n",
" 587.5,\n",
" 588.5,\n",
" 545.5,\n",
" 594.5,\n",
" 615.0,\n",
" 599.0,\n",
" 592.0,\n",
" 594.0,\n",
" 602.0,\n",
" 573.0,\n",
" 514.0,\n",
" 487.0,\n",
" 540.5,\n",
" 545.5,\n",
" 484.0,\n",
" 494.5,\n",
" 530.0,\n",
" 532.5,\n",
" 529.5,\n",
" 550.0,\n",
" 581.5,\n",
" 554.5,\n",
" 499.0,\n",
" 476.5,\n",
" 538.5,\n",
" 566.0,\n",
" 549.5,\n",
" 629.5,\n",
" 664.0,\n",
" 637.5,\n",
" 667.5,\n",
" 651.0,\n",
" 651.5,\n",
" 610.5,\n",
" 559.0,\n",
" 533.5,\n",
" 590.0,\n",
" 612.5,\n",
" 591.5,\n",
" 660.5,\n",
" 672.5,\n",
" 655.0,\n",
" 672.0,\n",
" 673.5,\n",
" 673.5,\n",
" 630.0,\n",
" 582.0,\n",
" 553.0,\n",
" 610.5,\n",
" 628.5,\n",
" 604.5,\n",
" 658.0,\n",
" 706.5,\n",
" 692.5,\n",
" 699.5,\n",
" 691.5,\n",
" 682.5,\n",
" 637.0,\n",
" 586.0,\n",
" 544.5,\n",
" 605.0,\n",
" 620.0,\n",
" 597.5,\n",
" 663.5,\n",
" 703.5,\n",
" 697.0,\n",
" 710.0,\n",
" 702.5,\n",
" 689.0,\n",
" 641.5,\n",
" 573.0,\n",
" 535.5,\n",
" 588.0,\n",
" 605.0,\n",
" 569.0,\n",
" 622.5,\n",
" 675.0,\n",
" 686.0,\n",
" 697.5,\n",
" 690.0,\n",
" 663.5,\n",
" 623.5,\n",
" 579.0,\n",
" 550.0,\n",
" 598.0,\n",
" 600.5,\n",
" 548.0,\n",
" 601.5,\n",
" 638.0,\n",
" 638.0,\n",
" 638.0,\n",
" 643.0,\n",
" 639.5,\n",
" 597.5,\n",
" 536.0,\n",
" 506.0,\n",
" 549.5,\n",
" 563.5,\n",
" 506.5,\n",
" 536.5,\n",
" 577.5,\n",
" 580.0,\n",
" 574.0,\n",
" 580.0,\n",
" 610.0,\n",
" 593.0,\n",
" 527.5,\n",
" 501.0,\n",
" 556.5,\n",
" 588.0,\n",
" 576.0,\n",
" 658.0,\n",
" 711.0,\n",
" 707.0,\n",
" 735.0,\n",
" 719.5,\n",
" 697.5,\n",
" 649.5,\n",
" 592.5,\n",
" 548.0,\n",
" 589.0,\n",
" 601.5,\n",
" 579.0,\n",
" 671.0,\n",
" 726.0,\n",
" 727.5,\n",
" 750.5,\n",
" 723.5,\n",
" 708.5,\n",
" 649.0,\n",
" 585.0,\n",
" 548.0,\n",
" 590.0,\n",
" 616.5,\n",
" 612.5,\n",
" 718.5,\n",
" 748.0,\n",
" 718.0,\n",
" 726.5,\n",
" 714.0,\n",
" 688.0,\n",
" 625.5,\n",
" 572.0,\n",
" 536.0,\n",
" 581.0,\n",
" 601.0,\n",
" 584.0,\n",
" 665.5,\n",
" 725.5,\n",
" 709.0,\n",
" 721.5,\n",
" 705.5,\n",
" 684.5,\n",
" 633.5,\n",
" 577.5,\n",
" 535.0,\n",
" 580.5,\n",
" 605.0,\n",
" 574.0,\n",
" 664.0,\n",
" 715.0,\n",
" 702.5,\n",
" 695.0,\n",
" 675.5,\n",
" 662.0,\n",
" 610.5,\n",
" 556.0,\n",
" 516.0,\n",
" 569.0,\n",
" 587.5,\n",
" 536.0,\n",
" 579.0,\n",
" 586.5,\n",
" 591.0,\n",
" 580.0,\n",
" 585.5,\n",
" 612.5,\n",
" 589.0,\n",
" 528.0,\n",
" 498.0,\n",
" 546.0,\n",
" 564.0,\n",
" 486.0,\n",
" 490.0,\n",
" 522.5,\n",
" 537.5,\n",
" 539.0,\n",
" 560.5,\n",
" 590.5,\n",
" 561.0,\n",
" 515.0,\n",
" 488.5,\n",
" 544.5,\n",
" 571.0,\n",
" 558.0,\n",
" 643.0,\n",
" 706.5,\n",
" 708.0,\n",
" 736.0,\n",
" 730.0,\n",
" 693.0,\n",
" 635.5,\n",
" 572.5,\n",
" 529.0,\n",
" 580.0,\n",
" 602.0,\n",
" 584.5,\n",
" 687.5,\n",
" 745.5,\n",
" 760.5,\n",
" 783.5,\n",
" 748.0,\n",
" 718.0,\n",
" 659.5,\n",
" 587.0,\n",
" 540.0,\n",
" 580.0,\n",
" 602.0,\n",
" 588.5,\n",
" 668.0,\n",
" 718.5,\n",
" 681.5,\n",
" 722.0,\n",
" 710.0,\n",
" 682.5,\n",
" 621.0,\n",
" 561.0,\n",
" 522.0,\n",
" 561.5,\n",
" 586.0,\n",
" 579.5,\n",
" 680.0,\n",
" 735.5,\n",
" 737.0,\n",
" 755.0,\n",
" 728.0,\n",
" 706.5,\n",
" 642.5,\n",
" 568.5,\n",
" 529.5,\n",
" 567.5,\n",
" 594.0,\n",
" 576.0,\n",
" 657.5,\n",
" 705.0,\n",
" 691.5,\n",
" 709.0,\n",
" 693.0,\n",
" 660.5,\n",
" 616.5,\n",
" 556.0,\n",
" 517.5,\n",
" 557.0,\n",
" 575.0,\n",
" 531.0,\n",
" 573.5,\n",
" 605.5,\n",
" 613.5,\n",
" 615.5,\n",
" 597.0,\n",
" 597.0,\n",
" 570.5,\n",
" 507.5,\n",
" 476.5,\n",
" 527.5,\n",
" 547.0,\n",
" 482.5,\n",
" 502.5,\n",
" 537.5,\n",
" 539.0,\n",
" 540.5,\n",
" 561.5,\n",
" 587.5,\n",
" 562.5,\n",
" 496.0,\n",
" 470.0,\n",
" 531.0,\n",
" 558.5,\n",
" 542.0,\n",
" 628.0,\n",
" 686.5,\n",
" 684.0,\n",
" 691.0,\n",
" 692.0,\n",
" 669.0,\n",
" 625.5,\n",
" 555.0,\n",
" 520.5,\n",
" 569.5,\n",
" 593.0,\n",
" 571.5,\n",
" 661.0,\n",
" 703.0,\n",
" 675.5,\n",
" 703.5,\n",
" 695.5,\n",
" 679.0,\n",
" 632.0,\n",
" 575.0,\n",
" 535.5,\n",
" 580.5,\n",
" 610.5,\n",
" 611.5,\n",
" 714.5,\n",
" 740.5,\n",
" 712.5,\n",
" 729.5,\n",
" 706.5,\n",
" 674.5,\n",
" 617.5,\n",
" 568.0,\n",
" 536.0,\n",
" 576.5,\n",
" 601.0,\n",
" 582.5,\n",
" 669.0,\n",
" 708.0,\n",
" 706.0,\n",
" 701.5,\n",
" 696.5,\n",
" 674.0,\n",
" 631.5,\n",
" 579.0,\n",
" 537.0,\n",
" 573.0,\n",
" 594.5,\n",
" 566.0,\n",
" 642.5,\n",
" 700.5,\n",
" 689.5,\n",
" 719.5,\n",
" 693.0,\n",
" 653.5,\n",
" 615.0,\n",
" 559.0,\n",
" 521.5,\n",
" 575.5,\n",
" 580.0,\n",
" 541.0,\n",
" 600.0,\n",
" 635.5,\n",
" 647.5,\n",
" 656.5,\n",
" 642.5,\n",
" 645.5,\n",
" 610.5,\n",
" 540.0,\n",
" 496.5,\n",
" 536.5,\n",
" 546.0,\n",
" 506.0,\n",
" 530.0,\n",
" 536.5,\n",
" 545.5,\n",
" 553.0,\n",
" 568.0,\n",
" 589.5,\n",
" 564.5,\n",
" 499.5,\n",
" 469.0,\n",
" 526.5,\n",
" 559.5,\n",
" 545.5,\n",
" 649.5,\n",
" 695.0,\n",
" 684.5,\n",
" 694.0,\n",
" 669.0,\n",
" 658.0,\n",
" 616.0,\n",
" 559.5,\n",
" 526.5,\n",
" 571.5,\n",
" 604.0,\n",
" 574.5,\n",
" 644.5,\n",
" 685.0,\n",
" 684.5,\n",
" 702.5,\n",
" 682.5,\n",
" 659.0,\n",
" 621.0,\n",
" 568.0,\n",
" 525.0,\n",
" 573.5,\n",
" 592.5,\n",
" 571.0,\n",
" 645.0,\n",
" 698.5,\n",
" 691.0,\n",
" 728.5,\n",
" 706.0,\n",
" 671.5,\n",
" 629.0,\n",
" 567.5,\n",
" 524.0,\n",
" 566.5,\n",
" 586.5,\n",
" 572.5,\n",
" 665.0,\n",
" 723.0,\n",
" 726.0,\n",
" 753.0,\n",
" 729.5,\n",
" 688.5,\n",
" 641.5,\n",
" 580.0,\n",
" 533.5,\n",
" 580.0,\n",
" 599.0,\n",
" 594.5,\n",
" 687.5,\n",
" 754.5,\n",
" 739.5,\n",
" 759.0,\n",
" 727.5,\n",
" 692.0,\n",
" 641.5,\n",
" 589.0,\n",
" 545.0,\n",
" 574.0,\n",
" 595.5,\n",
" 545.5,\n",
" 584.0,\n",
" 643.5,\n",
" 652.5,\n",
" 656.0,\n",
" 626.5,\n",
" 620.5,\n",
" 597.0,\n",
" 537.0,\n",
" 504.5,\n",
" 545.0,\n",
" 556.5,\n",
" 505.5,\n",
" 533.0,\n",
" 570.5,\n",
" 583.5,\n",
" 590.0,\n",
" 606.5,\n",
" 626.0,\n",
" 593.0,\n",
" 533.0,\n",
" 488.0,\n",
" 527.5,\n",
" 560.5,\n",
" 553.5,\n",
" 682.5,\n",
" 767.5,\n",
" 767.5,\n",
" 797.5,\n",
" 781.5,\n",
" 725.5,\n",
" 666.5,\n",
" 598.5,\n",
" 544.0,\n",
" 577.5,\n",
" 606.5,\n",
" 590.5,\n",
" 704.5,\n",
" 760.5,\n",
" 753.5,\n",
" 785.0,\n",
" 763.5,\n",
" 706.5,\n",
" 646.5,\n",
" 586.5,\n",
" 540.5,\n",
" 568.0,\n",
" 601.5,\n",
" 601.0,\n",
" 709.5,\n",
" 745.5,\n",
" 712.5,\n",
" 747.5,\n",
" 722.5,\n",
" 691.5,\n",
" 634.5,\n",
" 571.0,\n",
" 528.5,\n",
" 573.0,\n",
" 607.5,\n",
" 588.5,\n",
" 686.5,\n",
" 743.5,\n",
" 731.0,\n",
" 746.5,\n",
" 734.0,\n",
" 709.0,\n",
" 660.5,\n",
" 595.5,\n",
" 544.5,\n",
" 585.0,\n",
" 609.5,\n",
" 604.5,\n",
" ...]},\n",
" {'start': '2017-04-01 00:00:00',\n",
" 'target': [657.0,\n",
" 695.5,\n",
" 681.5,\n",
" 663.5,\n",
" 708.0,\n",
" 675.5,\n",
" 626.5,\n",
" 603.0,\n",
" 619.5,\n",
" 682.0,\n",
" 658.5,\n",
" 608.0,\n",
" 598.0,\n",
" 654.5,\n",
" 639.5,\n",
" 606.5,\n",
" 588.5,\n",
" 583.0,\n",
" 570.5,\n",
" 546.0,\n",
" 612.5,\n",
" 679.5,\n",
" 662.0,\n",
" 603.0,\n",
" 603.5,\n",
" 661.5,\n",
" 671.0,\n",
" 711.5,\n",
" 736.5,\n",
" 721.5,\n",
" 679.5,\n",
" 686.0,\n",
" 683.5,\n",
" 718.5,\n",
" 688.0,\n",
" 627.0,\n",
" 623.0,\n",
" 691.0,\n",
" 689.0,\n",
" 711.0,\n",
" 733.5,\n",
" 719.0,\n",
" 683.0,\n",
" 686.5,\n",
" 688.5,\n",
" 703.0,\n",
" 677.0,\n",
" 618.0,\n",
" 612.0,\n",
" 669.0,\n",
" 668.0,\n",
" 657.5,\n",
" 702.0,\n",
" 682.5,\n",
" 654.5,\n",
" 687.5,\n",
" 676.0,\n",
" 699.0,\n",
" 660.5,\n",
" 601.5,\n",
" 603.0,\n",
" 660.0,\n",
" 653.0,\n",
" 644.5,\n",
" 726.0,\n",
" 728.0,\n",
" 722.0,\n",
" 739.0,\n",
" 735.0,\n",
" 725.0,\n",
" 666.0,\n",
" 602.5,\n",
" 592.0,\n",
" 652.5,\n",
" 633.0,\n",
" 633.5,\n",
" 699.5,\n",
" 689.0,\n",
" 650.0,\n",
" 685.5,\n",
" 685.0,\n",
" 679.5,\n",
" 624.5,\n",
" 575.0,\n",
" 565.0,\n",
" 619.5,\n",
" 600.0,\n",
" 568.5,\n",
" 618.0,\n",
" 641.0,\n",
" 629.0,\n",
" 616.5,\n",
" 615.0,\n",
" 637.5,\n",
" 596.0,\n",
" 545.5,\n",
" 532.5,\n",
" 593.0,\n",
" 581.0,\n",
" 526.5,\n",
" 534.5,\n",
" 529.5,\n",
" 537.0,\n",
" 548.0,\n",
" 558.5,\n",
" 616.5,\n",
" 584.5,\n",
" 533.0,\n",
" 526.0,\n",
" 605.5,\n",
" 607.5,\n",
" 623.5,\n",
" 699.5,\n",
" 736.0,\n",
" 715.0,\n",
" 730.0,\n",
" 723.0,\n",
" 727.0,\n",
" 672.5,\n",
" 607.0,\n",
" 581.0,\n",
" 641.5,\n",
" 641.5,\n",
" 661.5,\n",
" 754.0,\n",
" 772.5,\n",
" 726.5,\n",
" 731.0,\n",
" 724.5,\n",
" 726.0,\n",
" 673.0,\n",
" 610.0,\n",
" 591.5,\n",
" 651.5,\n",
" 656.5,\n",
" 656.5,\n",
" 716.5,\n",
" 702.5,\n",
" 677.5,\n",
" 670.5,\n",
" 670.0,\n",
" 698.5,\n",
" 662.5,\n",
" 601.5,\n",
" 577.0,\n",
" 642.0,\n",
" 647.0,\n",
" 659.5,\n",
" 683.0,\n",
" 689.0,\n",
" 656.0,\n",
" 660.0,\n",
" 665.0,\n",
" 685.0,\n",
" 648.0,\n",
" 592.5,\n",
" 589.0,\n",
" 653.0,\n",
" 661.5,\n",
" 658.0,\n",
" 681.0,\n",
" 678.5,\n",
" 649.5,\n",
" 667.5,\n",
" 658.5,\n",
" 666.0,\n",
" 630.5,\n",
" 578.5,\n",
" 560.5,\n",
" 621.5,\n",
" 611.5,\n",
" 586.5,\n",
" 622.5,\n",
" 614.0,\n",
" 593.5,\n",
" 593.5,\n",
" 600.5,\n",
" 621.5,\n",
" 583.0,\n",
" 525.5,\n",
" 516.5,\n",
" 583.0,\n",
" 564.0,\n",
" 501.0,\n",
" 509.5,\n",
" 523.5,\n",
" 520.5,\n",
" 515.0,\n",
" 540.0,\n",
" 587.0,\n",
" 558.0,\n",
" 504.5,\n",
" 500.0,\n",
" 567.0,\n",
" 575.0,\n",
" 583.5,\n",
" 684.0,\n",
" 719.0,\n",
" 709.0,\n",
" 732.0,\n",
" 720.0,\n",
" 693.5,\n",
" 628.5,\n",
" 583.0,\n",
" 564.0,\n",
" 629.0,\n",
" 635.5,\n",
" 637.0,\n",
" 689.5,\n",
" 693.5,\n",
" 678.5,\n",
" 683.5,\n",
" 678.0,\n",
" 687.5,\n",
" 649.0,\n",
" 603.0,\n",
" 587.0,\n",
" 647.5,\n",
" 655.5,\n",
" 630.0,\n",
" 667.0,\n",
" 690.0,\n",
" 672.0,\n",
" 681.0,\n",
" 679.0,\n",
" 688.5,\n",
" 650.0,\n",
" 609.0,\n",
" 593.0,\n",
" 655.5,\n",
" 665.5,\n",
" 639.5,\n",
" 694.5,\n",
" 693.0,\n",
" 653.5,\n",
" 701.5,\n",
" 699.5,\n",
" 710.5,\n",
" 663.5,\n",
" 602.5,\n",
" 583.5,\n",
" 648.5,\n",
" 644.0,\n",
" 639.5,\n",
" 656.5,\n",
" 657.0,\n",
" 632.0,\n",
" 652.5,\n",
" 661.0,\n",
" 672.0,\n",
" 632.0,\n",
" 577.5,\n",
" 555.0,\n",
" 620.0,\n",
" 617.0,\n",
" 566.5,\n",
" 591.0,\n",
" 613.0,\n",
" 586.0,\n",
" 579.5,\n",
" 585.5,\n",
" 625.0,\n",
" 594.0,\n",
" 539.5,\n",
" 525.0,\n",
" 595.5,\n",
" 589.0,\n",
" 530.5,\n",
" 504.0,\n",
" 512.0,\n",
" 518.0,\n",
" 504.0,\n",
" 534.0,\n",
" 595.0,\n",
" 579.0,\n",
" 534.0,\n",
" 522.0,\n",
" 598.0,\n",
" 615.5,\n",
" 595.5,\n",
" 634.0,\n",
" 661.5,\n",
" 642.5,\n",
" 659.0,\n",
" 665.5,\n",
" 674.0,\n",
" 633.0,\n",
" 585.0,\n",
" 552.5,\n",
" 613.5,\n",
" 615.5,\n",
" 587.5,\n",
" 636.5,\n",
" 662.5,\n",
" 640.5,\n",
" 670.0,\n",
" 663.0,\n",
" 670.0,\n",
" 621.5,\n",
" 576.5,\n",
" 543.5,\n",
" 597.0,\n",
" 604.5,\n",
" 601.5,\n",
" 690.0,\n",
" 703.5,\n",
" 678.0,\n",
" 702.5,\n",
" 690.0,\n",
" 685.0,\n",
" 631.5,\n",
" 573.0,\n",
" 548.5,\n",
" 613.5,\n",
" 636.0,\n",
" 613.0,\n",
" 676.5,\n",
" 681.0,\n",
" 656.0,\n",
" 678.0,\n",
" 666.0,\n",
" 681.5,\n",
" 647.5,\n",
" 593.0,\n",
" 576.0,\n",
" 631.0,\n",
" 649.5,\n",
" 622.5,\n",
" 652.0,\n",
" 666.0,\n",
" 647.5,\n",
" 661.5,\n",
" 663.0,\n",
" 667.5,\n",
" 645.0,\n",
" 583.5,\n",
" 556.0,\n",
" 611.0,\n",
" 599.5,\n",
" 538.5,\n",
" 559.5,\n",
" 570.0,\n",
" 560.5,\n",
" 553.5,\n",
" 559.5,\n",
" 589.0,\n",
" 566.0,\n",
" 513.5,\n",
" 501.5,\n",
" 569.0,\n",
" 563.5,\n",
" 509.0,\n",
" 495.0,\n",
" 511.5,\n",
" 515.0,\n",
" 510.5,\n",
" 537.5,\n",
" 566.5,\n",
" 542.0,\n",
" 495.5,\n",
" 477.0,\n",
" 541.5,\n",
" 550.5,\n",
" 529.5,\n",
" 598.5,\n",
" 624.0,\n",
" 608.0,\n",
" 613.5,\n",
" 610.0,\n",
" 620.5,\n",
" 582.0,\n",
" 517.0,\n",
" 499.5,\n",
" 562.0,\n",
" 579.5,\n",
" 535.5,\n",
" 565.5,\n",
" 600.0,\n",
" 581.5,\n",
" 587.0,\n",
" 576.5,\n",
" 594.5,\n",
" 564.5,\n",
" 520.5,\n",
" 499.0,\n",
" 562.5,\n",
" 559.0,\n",
" 479.5,\n",
" 492.5,\n",
" 529.5,\n",
" 532.5,\n",
" 536.0,\n",
" 533.0,\n",
" 566.5,\n",
" 541.0,\n",
" 487.5,\n",
" 467.0,\n",
" 529.0,\n",
" 533.0,\n",
" 469.0,\n",
" 477.5,\n",
" 508.5,\n",
" 525.5,\n",
" 518.5,\n",
" 535.0,\n",
" 566.5,\n",
" 531.0,\n",
" 478.5,\n",
" 467.5,\n",
" 520.5,\n",
" 524.5,\n",
" 467.5,\n",
" 479.0,\n",
" 511.0,\n",
" 529.5,\n",
" 535.0,\n",
" 556.0,\n",
" 580.0,\n",
" 554.5,\n",
" 500.5,\n",
" 469.5,\n",
" 519.0,\n",
" 516.5,\n",
" 479.5,\n",
" 516.0,\n",
" 536.0,\n",
" 532.0,\n",
" 518.5,\n",
" 525.5,\n",
" 550.0,\n",
" 519.0,\n",
" 469.0,\n",
" 468.5,\n",
" 524.0,\n",
" 529.0,\n",
" 474.0,\n",
" 494.5,\n",
" 519.5,\n",
" 532.0,\n",
" 525.0,\n",
" 549.5,\n",
" 577.0,\n",
" 553.5,\n",
" 505.0,\n",
" 481.0,\n",
" 542.5,\n",
" 567.5,\n",
" 548.0,\n",
" 622.0,\n",
" 684.0,\n",
" 666.5,\n",
" 677.5,\n",
" 673.5,\n",
" 670.0,\n",
" 624.0,\n",
" 574.0,\n",
" 548.5,\n",
" 598.5,\n",
" 621.5,\n",
" 597.5,\n",
" 676.5,\n",
" 691.0,\n",
" 666.5,\n",
" 703.0,\n",
" 693.5,\n",
" 673.0,\n",
" 616.5,\n",
" 558.0,\n",
" 530.0,\n",
" 595.5,\n",
" 610.0,\n",
" 592.0,\n",
" 673.5,\n",
" 682.5,\n",
" 647.5,\n",
" 674.0,\n",
" 665.5,\n",
" 664.5,\n",
" 611.5,\n",
" 562.5,\n",
" 531.5,\n",
" 592.0,\n",
" 601.0,\n",
" 567.0,\n",
" 647.5,\n",
" 692.0,\n",
" 684.5,\n",
" 704.5,\n",
" 696.5,\n",
" 679.0,\n",
" 626.5,\n",
" 561.5,\n",
" 531.0,\n",
" 584.5,\n",
" 588.5,\n",
" 555.0,\n",
" 646.0,\n",
" 676.0,\n",
" 672.5,\n",
" 715.5,\n",
" 695.5,\n",
" 671.5,\n",
" 617.0,\n",
" 560.0,\n",
" 533.5,\n",
" 587.5,\n",
" 588.5,\n",
" 545.5,\n",
" 594.5,\n",
" 615.0,\n",
" 599.0,\n",
" 592.0,\n",
" 594.0,\n",
" 602.0,\n",
" 573.0,\n",
" 514.0,\n",
" 487.0,\n",
" 540.5,\n",
" 545.5,\n",
" 484.0,\n",
" 494.5,\n",
" 530.0,\n",
" 532.5,\n",
" 529.5,\n",
" 550.0,\n",
" 581.5,\n",
" 554.5,\n",
" 499.0,\n",
" 476.5,\n",
" 538.5,\n",
" 566.0,\n",
" 549.5,\n",
" 629.5,\n",
" 664.0,\n",
" 637.5,\n",
" 667.5,\n",
" 651.0,\n",
" 651.5,\n",
" 610.5,\n",
" 559.0,\n",
" 533.5,\n",
" 590.0,\n",
" 612.5,\n",
" 591.5,\n",
" 660.5,\n",
" 672.5,\n",
" 655.0,\n",
" 672.0,\n",
" 673.5,\n",
" 673.5,\n",
" 630.0,\n",
" 582.0,\n",
" 553.0,\n",
" 610.5,\n",
" 628.5,\n",
" 604.5,\n",
" 658.0,\n",
" 706.5,\n",
" 692.5,\n",
" 699.5,\n",
" 691.5,\n",
" 682.5,\n",
" 637.0,\n",
" 586.0,\n",
" 544.5,\n",
" 605.0,\n",
" 620.0,\n",
" 597.5,\n",
" 663.5,\n",
" 703.5,\n",
" 697.0,\n",
" 710.0,\n",
" 702.5,\n",
" 689.0,\n",
" 641.5,\n",
" 573.0,\n",
" 535.5,\n",
" 588.0,\n",
" 605.0,\n",
" 569.0,\n",
" 622.5,\n",
" 675.0,\n",
" 686.0,\n",
" 697.5,\n",
" 690.0,\n",
" 663.5,\n",
" 623.5,\n",
" 579.0,\n",
" 550.0,\n",
" 598.0,\n",
" 600.5,\n",
" 548.0,\n",
" 601.5,\n",
" 638.0,\n",
" 638.0,\n",
" 638.0,\n",
" 643.0,\n",
" 639.5,\n",
" 597.5,\n",
" 536.0,\n",
" 506.0,\n",
" 549.5,\n",
" 563.5,\n",
" 506.5,\n",
" 536.5,\n",
" 577.5,\n",
" 580.0,\n",
" 574.0,\n",
" 580.0,\n",
" 610.0,\n",
" 593.0,\n",
" 527.5,\n",
" 501.0,\n",
" 556.5,\n",
" 588.0,\n",
" 576.0,\n",
" 658.0,\n",
" 711.0,\n",
" 707.0,\n",
" 735.0,\n",
" 719.5,\n",
" 697.5,\n",
" 649.5,\n",
" 592.5,\n",
" 548.0,\n",
" 589.0,\n",
" 601.5,\n",
" 579.0,\n",
" 671.0,\n",
" 726.0,\n",
" 727.5,\n",
" 750.5,\n",
" 723.5,\n",
" 708.5,\n",
" 649.0,\n",
" 585.0,\n",
" 548.0,\n",
" 590.0,\n",
" 616.5,\n",
" 612.5,\n",
" 718.5,\n",
" 748.0,\n",
" 718.0,\n",
" 726.5,\n",
" 714.0,\n",
" 688.0,\n",
" 625.5,\n",
" 572.0,\n",
" 536.0,\n",
" 581.0,\n",
" 601.0,\n",
" 584.0,\n",
" 665.5,\n",
" 725.5,\n",
" 709.0,\n",
" 721.5,\n",
" 705.5,\n",
" 684.5,\n",
" 633.5,\n",
" 577.5,\n",
" 535.0,\n",
" 580.5,\n",
" 605.0,\n",
" 574.0,\n",
" 664.0,\n",
" 715.0,\n",
" 702.5,\n",
" 695.0,\n",
" 675.5,\n",
" 662.0,\n",
" 610.5,\n",
" 556.0,\n",
" 516.0,\n",
" 569.0,\n",
" 587.5,\n",
" 536.0,\n",
" 579.0,\n",
" 586.5,\n",
" 591.0,\n",
" 580.0,\n",
" 585.5,\n",
" 612.5,\n",
" 589.0,\n",
" 528.0,\n",
" 498.0,\n",
" 546.0,\n",
" 564.0,\n",
" 486.0,\n",
" 490.0,\n",
" 522.5,\n",
" 537.5,\n",
" 539.0,\n",
" 560.5,\n",
" 590.5,\n",
" 561.0,\n",
" 515.0,\n",
" 488.5,\n",
" 544.5,\n",
" 571.0,\n",
" 558.0,\n",
" 643.0,\n",
" 706.5,\n",
" 708.0,\n",
" 736.0,\n",
" 730.0,\n",
" 693.0,\n",
" 635.5,\n",
" 572.5,\n",
" 529.0,\n",
" 580.0,\n",
" 602.0,\n",
" 584.5,\n",
" 687.5,\n",
" 745.5,\n",
" 760.5,\n",
" 783.5,\n",
" 748.0,\n",
" 718.0,\n",
" 659.5,\n",
" 587.0,\n",
" 540.0,\n",
" 580.0,\n",
" 602.0,\n",
" 588.5,\n",
" 668.0,\n",
" 718.5,\n",
" 681.5,\n",
" 722.0,\n",
" 710.0,\n",
" 682.5,\n",
" 621.0,\n",
" 561.0,\n",
" 522.0,\n",
" 561.5,\n",
" 586.0,\n",
" 579.5,\n",
" 680.0,\n",
" 735.5,\n",
" 737.0,\n",
" 755.0,\n",
" 728.0,\n",
" 706.5,\n",
" 642.5,\n",
" 568.5,\n",
" 529.5,\n",
" 567.5,\n",
" 594.0,\n",
" 576.0,\n",
" 657.5,\n",
" 705.0,\n",
" 691.5,\n",
" 709.0,\n",
" 693.0,\n",
" 660.5,\n",
" 616.5,\n",
" 556.0,\n",
" 517.5,\n",
" 557.0,\n",
" 575.0,\n",
" 531.0,\n",
" 573.5,\n",
" 605.5,\n",
" 613.5,\n",
" 615.5,\n",
" 597.0,\n",
" 597.0,\n",
" 570.5,\n",
" 507.5,\n",
" 476.5,\n",
" 527.5,\n",
" 547.0,\n",
" 482.5,\n",
" 502.5,\n",
" 537.5,\n",
" 539.0,\n",
" 540.5,\n",
" 561.5,\n",
" 587.5,\n",
" 562.5,\n",
" 496.0,\n",
" 470.0,\n",
" 531.0,\n",
" 558.5,\n",
" 542.0,\n",
" 628.0,\n",
" 686.5,\n",
" 684.0,\n",
" 691.0,\n",
" 692.0,\n",
" 669.0,\n",
" 625.5,\n",
" 555.0,\n",
" 520.5,\n",
" 569.5,\n",
" 593.0,\n",
" 571.5,\n",
" 661.0,\n",
" 703.0,\n",
" 675.5,\n",
" 703.5,\n",
" 695.5,\n",
" 679.0,\n",
" 632.0,\n",
" 575.0,\n",
" 535.5,\n",
" 580.5,\n",
" 610.5,\n",
" 611.5,\n",
" 714.5,\n",
" 740.5,\n",
" 712.5,\n",
" 729.5,\n",
" 706.5,\n",
" 674.5,\n",
" 617.5,\n",
" 568.0,\n",
" 536.0,\n",
" 576.5,\n",
" 601.0,\n",
" 582.5,\n",
" 669.0,\n",
" 708.0,\n",
" 706.0,\n",
" 701.5,\n",
" 696.5,\n",
" 674.0,\n",
" 631.5,\n",
" 579.0,\n",
" 537.0,\n",
" 573.0,\n",
" 594.5,\n",
" 566.0,\n",
" 642.5,\n",
" 700.5,\n",
" 689.5,\n",
" 719.5,\n",
" 693.0,\n",
" 653.5,\n",
" 615.0,\n",
" 559.0,\n",
" 521.5,\n",
" 575.5,\n",
" 580.0,\n",
" 541.0,\n",
" 600.0,\n",
" 635.5,\n",
" 647.5,\n",
" 656.5,\n",
" 642.5,\n",
" 645.5,\n",
" 610.5,\n",
" 540.0,\n",
" 496.5,\n",
" 536.5,\n",
" 546.0,\n",
" 506.0,\n",
" 530.0,\n",
" 536.5,\n",
" 545.5,\n",
" 553.0,\n",
" 568.0,\n",
" 589.5,\n",
" 564.5,\n",
" 499.5,\n",
" 469.0,\n",
" 526.5,\n",
" 559.5,\n",
" 545.5,\n",
" 649.5,\n",
" 695.0,\n",
" 684.5,\n",
" 694.0,\n",
" 669.0,\n",
" 658.0,\n",
" 616.0,\n",
" 559.5,\n",
" 526.5,\n",
" 571.5,\n",
" 604.0,\n",
" 574.5,\n",
" 644.5,\n",
" 685.0,\n",
" 684.5,\n",
" 702.5,\n",
" 682.5,\n",
" 659.0,\n",
" 621.0,\n",
" 568.0,\n",
" 525.0,\n",
" 573.5,\n",
" 592.5,\n",
" 571.0,\n",
" 645.0,\n",
" 698.5,\n",
" 691.0,\n",
" 728.5,\n",
" 706.0,\n",
" 671.5,\n",
" 629.0,\n",
" 567.5,\n",
" 524.0,\n",
" 566.5,\n",
" 586.5,\n",
" 572.5,\n",
" 665.0,\n",
" 723.0,\n",
" 726.0,\n",
" 753.0,\n",
" 729.5,\n",
" 688.5,\n",
" 641.5,\n",
" 580.0,\n",
" 533.5,\n",
" 580.0,\n",
" 599.0,\n",
" 594.5,\n",
" 687.5,\n",
" 754.5,\n",
" 739.5,\n",
" 759.0,\n",
" 727.5,\n",
" 692.0,\n",
" 641.5,\n",
" 589.0,\n",
" 545.0,\n",
" 574.0,\n",
" 595.5,\n",
" 545.5,\n",
" 584.0,\n",
" 643.5,\n",
" 652.5,\n",
" 656.0,\n",
" 626.5,\n",
" 620.5,\n",
" 597.0,\n",
" 537.0,\n",
" 504.5,\n",
" 545.0,\n",
" 556.5,\n",
" 505.5,\n",
" 533.0,\n",
" 570.5,\n",
" 583.5,\n",
" 590.0,\n",
" 606.5,\n",
" 626.0,\n",
" 593.0,\n",
" 533.0,\n",
" 488.0,\n",
" 527.5,\n",
" 560.5,\n",
" 553.5,\n",
" 682.5,\n",
" 767.5,\n",
" 767.5,\n",
" 797.5,\n",
" 781.5,\n",
" 725.5,\n",
" 666.5,\n",
" 598.5,\n",
" 544.0,\n",
" 577.5,\n",
" 606.5,\n",
" 590.5,\n",
" 704.5,\n",
" 760.5,\n",
" 753.5,\n",
" 785.0,\n",
" 763.5,\n",
" 706.5,\n",
" 646.5,\n",
" 586.5,\n",
" 540.5,\n",
" 568.0,\n",
" 601.5,\n",
" 601.0,\n",
" 709.5,\n",
" 745.5,\n",
" 712.5,\n",
" 747.5,\n",
" 722.5,\n",
" 691.5,\n",
" 634.5,\n",
" 571.0,\n",
" 528.5,\n",
" 573.0,\n",
" 607.5,\n",
" 588.5,\n",
" 686.5,\n",
" 743.5,\n",
" 731.0,\n",
" 746.5,\n",
" 734.0,\n",
" 709.0,\n",
" 660.5,\n",
" 595.5,\n",
" 544.5,\n",
" 585.0,\n",
" 609.5,\n",
" 604.5,\n",
" ...]},\n",
" {'start': '2017-04-01 00:00:00',\n",
" 'target': [657.0,\n",
" 695.5,\n",
" 681.5,\n",
" 663.5,\n",
" 708.0,\n",
" 675.5,\n",
" 626.5,\n",
" 603.0,\n",
" 619.5,\n",
" 682.0,\n",
" 658.5,\n",
" 608.0,\n",
" 598.0,\n",
" 654.5,\n",
" 639.5,\n",
" 606.5,\n",
" 588.5,\n",
" 583.0,\n",
" 570.5,\n",
" 546.0,\n",
" 612.5,\n",
" 679.5,\n",
" 662.0,\n",
" 603.0,\n",
" 603.5,\n",
" 661.5,\n",
" 671.0,\n",
" 711.5,\n",
" 736.5,\n",
" 721.5,\n",
" 679.5,\n",
" 686.0,\n",
" 683.5,\n",
" 718.5,\n",
" 688.0,\n",
" 627.0,\n",
" 623.0,\n",
" 691.0,\n",
" 689.0,\n",
" 711.0,\n",
" 733.5,\n",
" 719.0,\n",
" 683.0,\n",
" 686.5,\n",
" 688.5,\n",
" 703.0,\n",
" 677.0,\n",
" 618.0,\n",
" 612.0,\n",
" 669.0,\n",
" 668.0,\n",
" 657.5,\n",
" 702.0,\n",
" 682.5,\n",
" 654.5,\n",
" 687.5,\n",
" 676.0,\n",
" 699.0,\n",
" 660.5,\n",
" 601.5,\n",
" 603.0,\n",
" 660.0,\n",
" 653.0,\n",
" 644.5,\n",
" 726.0,\n",
" 728.0,\n",
" 722.0,\n",
" 739.0,\n",
" 735.0,\n",
" 725.0,\n",
" 666.0,\n",
" 602.5,\n",
" 592.0,\n",
" 652.5,\n",
" 633.0,\n",
" 633.5,\n",
" 699.5,\n",
" 689.0,\n",
" 650.0,\n",
" 685.5,\n",
" 685.0,\n",
" 679.5,\n",
" 624.5,\n",
" 575.0,\n",
" 565.0,\n",
" 619.5,\n",
" 600.0,\n",
" 568.5,\n",
" 618.0,\n",
" 641.0,\n",
" 629.0,\n",
" 616.5,\n",
" 615.0,\n",
" 637.5,\n",
" 596.0,\n",
" 545.5,\n",
" 532.5,\n",
" 593.0,\n",
" 581.0,\n",
" 526.5,\n",
" 534.5,\n",
" 529.5,\n",
" 537.0,\n",
" 548.0,\n",
" 558.5,\n",
" 616.5,\n",
" 584.5,\n",
" 533.0,\n",
" 526.0,\n",
" 605.5,\n",
" 607.5,\n",
" 623.5,\n",
" 699.5,\n",
" 736.0,\n",
" 715.0,\n",
" 730.0,\n",
" 723.0,\n",
" 727.0,\n",
" 672.5,\n",
" 607.0,\n",
" 581.0,\n",
" 641.5,\n",
" 641.5,\n",
" 661.5,\n",
" 754.0,\n",
" 772.5,\n",
" 726.5,\n",
" 731.0,\n",
" 724.5,\n",
" 726.0,\n",
" 673.0,\n",
" 610.0,\n",
" 591.5,\n",
" 651.5,\n",
" 656.5,\n",
" 656.5,\n",
" 716.5,\n",
" 702.5,\n",
" 677.5,\n",
" 670.5,\n",
" 670.0,\n",
" 698.5,\n",
" 662.5,\n",
" 601.5,\n",
" 577.0,\n",
" 642.0,\n",
" 647.0,\n",
" 659.5,\n",
" 683.0,\n",
" 689.0,\n",
" 656.0,\n",
" 660.0,\n",
" 665.0,\n",
" 685.0,\n",
" 648.0,\n",
" 592.5,\n",
" 589.0,\n",
" 653.0,\n",
" 661.5,\n",
" 658.0,\n",
" 681.0,\n",
" 678.5,\n",
" 649.5,\n",
" 667.5,\n",
" 658.5,\n",
" 666.0,\n",
" 630.5,\n",
" 578.5,\n",
" 560.5,\n",
" 621.5,\n",
" 611.5,\n",
" 586.5,\n",
" 622.5,\n",
" 614.0,\n",
" 593.5,\n",
" 593.5,\n",
" 600.5,\n",
" 621.5,\n",
" 583.0,\n",
" 525.5,\n",
" 516.5,\n",
" 583.0,\n",
" 564.0,\n",
" 501.0,\n",
" 509.5,\n",
" 523.5,\n",
" 520.5,\n",
" 515.0,\n",
" 540.0,\n",
" 587.0,\n",
" 558.0,\n",
" 504.5,\n",
" 500.0,\n",
" 567.0,\n",
" 575.0,\n",
" 583.5,\n",
" 684.0,\n",
" 719.0,\n",
" 709.0,\n",
" 732.0,\n",
" 720.0,\n",
" 693.5,\n",
" 628.5,\n",
" 583.0,\n",
" 564.0,\n",
" 629.0,\n",
" 635.5,\n",
" 637.0,\n",
" 689.5,\n",
" 693.5,\n",
" 678.5,\n",
" 683.5,\n",
" 678.0,\n",
" 687.5,\n",
" 649.0,\n",
" 603.0,\n",
" 587.0,\n",
" 647.5,\n",
" 655.5,\n",
" 630.0,\n",
" 667.0,\n",
" 690.0,\n",
" 672.0,\n",
" 681.0,\n",
" 679.0,\n",
" 688.5,\n",
" 650.0,\n",
" 609.0,\n",
" 593.0,\n",
" 655.5,\n",
" 665.5,\n",
" 639.5,\n",
" 694.5,\n",
" 693.0,\n",
" 653.5,\n",
" 701.5,\n",
" 699.5,\n",
" 710.5,\n",
" 663.5,\n",
" 602.5,\n",
" 583.5,\n",
" 648.5,\n",
" 644.0,\n",
" 639.5,\n",
" 656.5,\n",
" 657.0,\n",
" 632.0,\n",
" 652.5,\n",
" 661.0,\n",
" 672.0,\n",
" 632.0,\n",
" 577.5,\n",
" 555.0,\n",
" 620.0,\n",
" 617.0,\n",
" 566.5,\n",
" 591.0,\n",
" 613.0,\n",
" 586.0,\n",
" 579.5,\n",
" 585.5,\n",
" 625.0,\n",
" 594.0,\n",
" 539.5,\n",
" 525.0,\n",
" 595.5,\n",
" 589.0,\n",
" 530.5,\n",
" 504.0,\n",
" 512.0,\n",
" 518.0,\n",
" 504.0,\n",
" 534.0,\n",
" 595.0,\n",
" 579.0,\n",
" 534.0,\n",
" 522.0,\n",
" 598.0,\n",
" 615.5,\n",
" 595.5,\n",
" 634.0,\n",
" 661.5,\n",
" 642.5,\n",
" 659.0,\n",
" 665.5,\n",
" 674.0,\n",
" 633.0,\n",
" 585.0,\n",
" 552.5,\n",
" 613.5,\n",
" 615.5,\n",
" 587.5,\n",
" 636.5,\n",
" 662.5,\n",
" 640.5,\n",
" 670.0,\n",
" 663.0,\n",
" 670.0,\n",
" 621.5,\n",
" 576.5,\n",
" 543.5,\n",
" 597.0,\n",
" 604.5,\n",
" 601.5,\n",
" 690.0,\n",
" 703.5,\n",
" 678.0,\n",
" 702.5,\n",
" 690.0,\n",
" 685.0,\n",
" 631.5,\n",
" 573.0,\n",
" 548.5,\n",
" 613.5,\n",
" 636.0,\n",
" 613.0,\n",
" 676.5,\n",
" 681.0,\n",
" 656.0,\n",
" 678.0,\n",
" 666.0,\n",
" 681.5,\n",
" 647.5,\n",
" 593.0,\n",
" 576.0,\n",
" 631.0,\n",
" 649.5,\n",
" 622.5,\n",
" 652.0,\n",
" 666.0,\n",
" 647.5,\n",
" 661.5,\n",
" 663.0,\n",
" 667.5,\n",
" 645.0,\n",
" 583.5,\n",
" 556.0,\n",
" 611.0,\n",
" 599.5,\n",
" 538.5,\n",
" 559.5,\n",
" 570.0,\n",
" 560.5,\n",
" 553.5,\n",
" 559.5,\n",
" 589.0,\n",
" 566.0,\n",
" 513.5,\n",
" 501.5,\n",
" 569.0,\n",
" 563.5,\n",
" 509.0,\n",
" 495.0,\n",
" 511.5,\n",
" 515.0,\n",
" 510.5,\n",
" 537.5,\n",
" 566.5,\n",
" 542.0,\n",
" 495.5,\n",
" 477.0,\n",
" 541.5,\n",
" 550.5,\n",
" 529.5,\n",
" 598.5,\n",
" 624.0,\n",
" 608.0,\n",
" 613.5,\n",
" 610.0,\n",
" 620.5,\n",
" 582.0,\n",
" 517.0,\n",
" 499.5,\n",
" 562.0,\n",
" 579.5,\n",
" 535.5,\n",
" 565.5,\n",
" 600.0,\n",
" 581.5,\n",
" 587.0,\n",
" 576.5,\n",
" 594.5,\n",
" 564.5,\n",
" 520.5,\n",
" 499.0,\n",
" 562.5,\n",
" 559.0,\n",
" 479.5,\n",
" 492.5,\n",
" 529.5,\n",
" 532.5,\n",
" 536.0,\n",
" 533.0,\n",
" 566.5,\n",
" 541.0,\n",
" 487.5,\n",
" 467.0,\n",
" 529.0,\n",
" 533.0,\n",
" 469.0,\n",
" 477.5,\n",
" 508.5,\n",
" 525.5,\n",
" 518.5,\n",
" 535.0,\n",
" 566.5,\n",
" 531.0,\n",
" 478.5,\n",
" 467.5,\n",
" 520.5,\n",
" 524.5,\n",
" 467.5,\n",
" 479.0,\n",
" 511.0,\n",
" 529.5,\n",
" 535.0,\n",
" 556.0,\n",
" 580.0,\n",
" 554.5,\n",
" 500.5,\n",
" 469.5,\n",
" 519.0,\n",
" 516.5,\n",
" 479.5,\n",
" 516.0,\n",
" 536.0,\n",
" 532.0,\n",
" 518.5,\n",
" 525.5,\n",
" 550.0,\n",
" 519.0,\n",
" 469.0,\n",
" 468.5,\n",
" 524.0,\n",
" 529.0,\n",
" 474.0,\n",
" 494.5,\n",
" 519.5,\n",
" 532.0,\n",
" 525.0,\n",
" 549.5,\n",
" 577.0,\n",
" 553.5,\n",
" 505.0,\n",
" 481.0,\n",
" 542.5,\n",
" 567.5,\n",
" 548.0,\n",
" 622.0,\n",
" 684.0,\n",
" 666.5,\n",
" 677.5,\n",
" 673.5,\n",
" 670.0,\n",
" 624.0,\n",
" 574.0,\n",
" 548.5,\n",
" 598.5,\n",
" 621.5,\n",
" 597.5,\n",
" 676.5,\n",
" 691.0,\n",
" 666.5,\n",
" 703.0,\n",
" 693.5,\n",
" 673.0,\n",
" 616.5,\n",
" 558.0,\n",
" 530.0,\n",
" 595.5,\n",
" 610.0,\n",
" 592.0,\n",
" 673.5,\n",
" 682.5,\n",
" 647.5,\n",
" 674.0,\n",
" 665.5,\n",
" 664.5,\n",
" 611.5,\n",
" 562.5,\n",
" 531.5,\n",
" 592.0,\n",
" 601.0,\n",
" 567.0,\n",
" 647.5,\n",
" 692.0,\n",
" 684.5,\n",
" 704.5,\n",
" 696.5,\n",
" 679.0,\n",
" 626.5,\n",
" 561.5,\n",
" 531.0,\n",
" 584.5,\n",
" 588.5,\n",
" 555.0,\n",
" 646.0,\n",
" 676.0,\n",
" 672.5,\n",
" 715.5,\n",
" 695.5,\n",
" 671.5,\n",
" 617.0,\n",
" 560.0,\n",
" 533.5,\n",
" 587.5,\n",
" 588.5,\n",
" 545.5,\n",
" 594.5,\n",
" 615.0,\n",
" 599.0,\n",
" 592.0,\n",
" 594.0,\n",
" 602.0,\n",
" 573.0,\n",
" 514.0,\n",
" 487.0,\n",
" 540.5,\n",
" 545.5,\n",
" 484.0,\n",
" 494.5,\n",
" 530.0,\n",
" 532.5,\n",
" 529.5,\n",
" 550.0,\n",
" 581.5,\n",
" 554.5,\n",
" 499.0,\n",
" 476.5,\n",
" 538.5,\n",
" 566.0,\n",
" 549.5,\n",
" 629.5,\n",
" 664.0,\n",
" 637.5,\n",
" 667.5,\n",
" 651.0,\n",
" 651.5,\n",
" 610.5,\n",
" 559.0,\n",
" 533.5,\n",
" 590.0,\n",
" 612.5,\n",
" 591.5,\n",
" 660.5,\n",
" 672.5,\n",
" 655.0,\n",
" 672.0,\n",
" 673.5,\n",
" 673.5,\n",
" 630.0,\n",
" 582.0,\n",
" 553.0,\n",
" 610.5,\n",
" 628.5,\n",
" 604.5,\n",
" 658.0,\n",
" 706.5,\n",
" 692.5,\n",
" 699.5,\n",
" 691.5,\n",
" 682.5,\n",
" 637.0,\n",
" 586.0,\n",
" 544.5,\n",
" 605.0,\n",
" 620.0,\n",
" 597.5,\n",
" 663.5,\n",
" 703.5,\n",
" 697.0,\n",
" 710.0,\n",
" 702.5,\n",
" 689.0,\n",
" 641.5,\n",
" 573.0,\n",
" 535.5,\n",
" 588.0,\n",
" 605.0,\n",
" 569.0,\n",
" 622.5,\n",
" 675.0,\n",
" 686.0,\n",
" 697.5,\n",
" 690.0,\n",
" 663.5,\n",
" 623.5,\n",
" 579.0,\n",
" 550.0,\n",
" 598.0,\n",
" 600.5,\n",
" 548.0,\n",
" 601.5,\n",
" 638.0,\n",
" 638.0,\n",
" 638.0,\n",
" 643.0,\n",
" 639.5,\n",
" 597.5,\n",
" 536.0,\n",
" 506.0,\n",
" 549.5,\n",
" 563.5,\n",
" 506.5,\n",
" 536.5,\n",
" 577.5,\n",
" 580.0,\n",
" 574.0,\n",
" 580.0,\n",
" 610.0,\n",
" 593.0,\n",
" 527.5,\n",
" 501.0,\n",
" 556.5,\n",
" 588.0,\n",
" 576.0,\n",
" 658.0,\n",
" 711.0,\n",
" 707.0,\n",
" 735.0,\n",
" 719.5,\n",
" 697.5,\n",
" 649.5,\n",
" 592.5,\n",
" 548.0,\n",
" 589.0,\n",
" 601.5,\n",
" 579.0,\n",
" 671.0,\n",
" 726.0,\n",
" 727.5,\n",
" 750.5,\n",
" 723.5,\n",
" 708.5,\n",
" 649.0,\n",
" 585.0,\n",
" 548.0,\n",
" 590.0,\n",
" 616.5,\n",
" 612.5,\n",
" 718.5,\n",
" 748.0,\n",
" 718.0,\n",
" 726.5,\n",
" 714.0,\n",
" 688.0,\n",
" 625.5,\n",
" 572.0,\n",
" 536.0,\n",
" 581.0,\n",
" 601.0,\n",
" 584.0,\n",
" 665.5,\n",
" 725.5,\n",
" 709.0,\n",
" 721.5,\n",
" 705.5,\n",
" 684.5,\n",
" 633.5,\n",
" 577.5,\n",
" 535.0,\n",
" 580.5,\n",
" 605.0,\n",
" 574.0,\n",
" 664.0,\n",
" 715.0,\n",
" 702.5,\n",
" 695.0,\n",
" 675.5,\n",
" 662.0,\n",
" 610.5,\n",
" 556.0,\n",
" 516.0,\n",
" 569.0,\n",
" 587.5,\n",
" 536.0,\n",
" 579.0,\n",
" 586.5,\n",
" 591.0,\n",
" 580.0,\n",
" 585.5,\n",
" 612.5,\n",
" 589.0,\n",
" 528.0,\n",
" 498.0,\n",
" 546.0,\n",
" 564.0,\n",
" 486.0,\n",
" 490.0,\n",
" 522.5,\n",
" 537.5,\n",
" 539.0,\n",
" 560.5,\n",
" 590.5,\n",
" 561.0,\n",
" 515.0,\n",
" 488.5,\n",
" 544.5,\n",
" 571.0,\n",
" 558.0,\n",
" 643.0,\n",
" 706.5,\n",
" 708.0,\n",
" 736.0,\n",
" 730.0,\n",
" 693.0,\n",
" 635.5,\n",
" 572.5,\n",
" 529.0,\n",
" 580.0,\n",
" 602.0,\n",
" 584.5,\n",
" 687.5,\n",
" 745.5,\n",
" 760.5,\n",
" 783.5,\n",
" 748.0,\n",
" 718.0,\n",
" 659.5,\n",
" 587.0,\n",
" 540.0,\n",
" 580.0,\n",
" 602.0,\n",
" 588.5,\n",
" 668.0,\n",
" 718.5,\n",
" 681.5,\n",
" 722.0,\n",
" 710.0,\n",
" 682.5,\n",
" 621.0,\n",
" 561.0,\n",
" 522.0,\n",
" 561.5,\n",
" 586.0,\n",
" 579.5,\n",
" 680.0,\n",
" 735.5,\n",
" 737.0,\n",
" 755.0,\n",
" 728.0,\n",
" 706.5,\n",
" 642.5,\n",
" 568.5,\n",
" 529.5,\n",
" 567.5,\n",
" 594.0,\n",
" 576.0,\n",
" 657.5,\n",
" 705.0,\n",
" 691.5,\n",
" 709.0,\n",
" 693.0,\n",
" 660.5,\n",
" 616.5,\n",
" 556.0,\n",
" 517.5,\n",
" 557.0,\n",
" 575.0,\n",
" 531.0,\n",
" 573.5,\n",
" 605.5,\n",
" 613.5,\n",
" 615.5,\n",
" 597.0,\n",
" 597.0,\n",
" 570.5,\n",
" 507.5,\n",
" 476.5,\n",
" 527.5,\n",
" 547.0,\n",
" 482.5,\n",
" 502.5,\n",
" 537.5,\n",
" 539.0,\n",
" 540.5,\n",
" 561.5,\n",
" 587.5,\n",
" 562.5,\n",
" 496.0,\n",
" 470.0,\n",
" 531.0,\n",
" 558.5,\n",
" 542.0,\n",
" 628.0,\n",
" 686.5,\n",
" 684.0,\n",
" 691.0,\n",
" 692.0,\n",
" 669.0,\n",
" 625.5,\n",
" 555.0,\n",
" 520.5,\n",
" 569.5,\n",
" 593.0,\n",
" 571.5,\n",
" 661.0,\n",
" 703.0,\n",
" 675.5,\n",
" 703.5,\n",
" 695.5,\n",
" 679.0,\n",
" 632.0,\n",
" 575.0,\n",
" 535.5,\n",
" 580.5,\n",
" 610.5,\n",
" 611.5,\n",
" 714.5,\n",
" 740.5,\n",
" 712.5,\n",
" 729.5,\n",
" 706.5,\n",
" 674.5,\n",
" 617.5,\n",
" 568.0,\n",
" 536.0,\n",
" 576.5,\n",
" 601.0,\n",
" 582.5,\n",
" 669.0,\n",
" 708.0,\n",
" 706.0,\n",
" 701.5,\n",
" 696.5,\n",
" 674.0,\n",
" 631.5,\n",
" 579.0,\n",
" 537.0,\n",
" 573.0,\n",
" 594.5,\n",
" 566.0,\n",
" 642.5,\n",
" 700.5,\n",
" 689.5,\n",
" 719.5,\n",
" 693.0,\n",
" 653.5,\n",
" 615.0,\n",
" 559.0,\n",
" 521.5,\n",
" 575.5,\n",
" 580.0,\n",
" 541.0,\n",
" 600.0,\n",
" 635.5,\n",
" 647.5,\n",
" 656.5,\n",
" 642.5,\n",
" 645.5,\n",
" 610.5,\n",
" 540.0,\n",
" 496.5,\n",
" 536.5,\n",
" 546.0,\n",
" 506.0,\n",
" 530.0,\n",
" 536.5,\n",
" 545.5,\n",
" 553.0,\n",
" 568.0,\n",
" 589.5,\n",
" 564.5,\n",
" 499.5,\n",
" 469.0,\n",
" 526.5,\n",
" 559.5,\n",
" 545.5,\n",
" 649.5,\n",
" 695.0,\n",
" 684.5,\n",
" 694.0,\n",
" 669.0,\n",
" 658.0,\n",
" 616.0,\n",
" 559.5,\n",
" 526.5,\n",
" 571.5,\n",
" 604.0,\n",
" 574.5,\n",
" 644.5,\n",
" 685.0,\n",
" 684.5,\n",
" 702.5,\n",
" 682.5,\n",
" 659.0,\n",
" 621.0,\n",
" 568.0,\n",
" 525.0,\n",
" 573.5,\n",
" 592.5,\n",
" 571.0,\n",
" 645.0,\n",
" 698.5,\n",
" 691.0,\n",
" 728.5,\n",
" 706.0,\n",
" 671.5,\n",
" 629.0,\n",
" 567.5,\n",
" 524.0,\n",
" 566.5,\n",
" 586.5,\n",
" 572.5,\n",
" 665.0,\n",
" 723.0,\n",
" 726.0,\n",
" 753.0,\n",
" 729.5,\n",
" 688.5,\n",
" 641.5,\n",
" 580.0,\n",
" 533.5,\n",
" 580.0,\n",
" 599.0,\n",
" 594.5,\n",
" 687.5,\n",
" 754.5,\n",
" 739.5,\n",
" 759.0,\n",
" 727.5,\n",
" 692.0,\n",
" 641.5,\n",
" 589.0,\n",
" 545.0,\n",
" 574.0,\n",
" 595.5,\n",
" 545.5,\n",
" 584.0,\n",
" 643.5,\n",
" 652.5,\n",
" 656.0,\n",
" 626.5,\n",
" 620.5,\n",
" 597.0,\n",
" 537.0,\n",
" 504.5,\n",
" 545.0,\n",
" 556.5,\n",
" 505.5,\n",
" 533.0,\n",
" 570.5,\n",
" 583.5,\n",
" 590.0,\n",
" 606.5,\n",
" 626.0,\n",
" 593.0,\n",
" 533.0,\n",
" 488.0,\n",
" 527.5,\n",
" 560.5,\n",
" 553.5,\n",
" 682.5,\n",
" 767.5,\n",
" 767.5,\n",
" 797.5,\n",
" 781.5,\n",
" 725.5,\n",
" 666.5,\n",
" 598.5,\n",
" 544.0,\n",
" 577.5,\n",
" 606.5,\n",
" 590.5,\n",
" 704.5,\n",
" 760.5,\n",
" 753.5,\n",
" 785.0,\n",
" 763.5,\n",
" 706.5,\n",
" 646.5,\n",
" 586.5,\n",
" 540.5,\n",
" 568.0,\n",
" 601.5,\n",
" 601.0,\n",
" 709.5,\n",
" 745.5,\n",
" 712.5,\n",
" 747.5,\n",
" 722.5,\n",
" 691.5,\n",
" 634.5,\n",
" 571.0,\n",
" 528.5,\n",
" 573.0,\n",
" 607.5,\n",
" 588.5,\n",
" 686.5,\n",
" 743.5,\n",
" 731.0,\n",
" 746.5,\n",
" 734.0,\n",
" 709.0,\n",
" 660.5,\n",
" 595.5,\n",
" 544.5,\n",
" 585.0,\n",
" 609.5,\n",
" 604.5,\n",
" ...]},\n",
" {'start': '2017-04-01 00:00:00',\n",
" 'target': [657.0,\n",
" 695.5,\n",
" 681.5,\n",
" 663.5,\n",
" 708.0,\n",
" 675.5,\n",
" 626.5,\n",
" 603.0,\n",
" 619.5,\n",
" 682.0,\n",
" 658.5,\n",
" 608.0,\n",
" 598.0,\n",
" 654.5,\n",
" 639.5,\n",
" 606.5,\n",
" 588.5,\n",
" 583.0,\n",
" 570.5,\n",
" 546.0,\n",
" 612.5,\n",
" 679.5,\n",
" 662.0,\n",
" 603.0,\n",
" 603.5,\n",
" 661.5,\n",
" 671.0,\n",
" 711.5,\n",
" 736.5,\n",
" 721.5,\n",
" 679.5,\n",
" 686.0,\n",
" 683.5,\n",
" 718.5,\n",
" 688.0,\n",
" 627.0,\n",
" 623.0,\n",
" 691.0,\n",
" 689.0,\n",
" 711.0,\n",
" 733.5,\n",
" 719.0,\n",
" 683.0,\n",
" 686.5,\n",
" 688.5,\n",
" 703.0,\n",
" 677.0,\n",
" 618.0,\n",
" 612.0,\n",
" 669.0,\n",
" 668.0,\n",
" 657.5,\n",
" 702.0,\n",
" 682.5,\n",
" 654.5,\n",
" 687.5,\n",
" 676.0,\n",
" 699.0,\n",
" 660.5,\n",
" 601.5,\n",
" 603.0,\n",
" 660.0,\n",
" 653.0,\n",
" 644.5,\n",
" 726.0,\n",
" 728.0,\n",
" 722.0,\n",
" 739.0,\n",
" 735.0,\n",
" 725.0,\n",
" 666.0,\n",
" 602.5,\n",
" 592.0,\n",
" 652.5,\n",
" 633.0,\n",
" 633.5,\n",
" 699.5,\n",
" 689.0,\n",
" 650.0,\n",
" 685.5,\n",
" 685.0,\n",
" 679.5,\n",
" 624.5,\n",
" 575.0,\n",
" 565.0,\n",
" 619.5,\n",
" 600.0,\n",
" 568.5,\n",
" 618.0,\n",
" 641.0,\n",
" 629.0,\n",
" 616.5,\n",
" 615.0,\n",
" 637.5,\n",
" 596.0,\n",
" 545.5,\n",
" 532.5,\n",
" 593.0,\n",
" 581.0,\n",
" 526.5,\n",
" 534.5,\n",
" 529.5,\n",
" 537.0,\n",
" 548.0,\n",
" 558.5,\n",
" 616.5,\n",
" 584.5,\n",
" 533.0,\n",
" 526.0,\n",
" 605.5,\n",
" 607.5,\n",
" 623.5,\n",
" 699.5,\n",
" 736.0,\n",
" 715.0,\n",
" 730.0,\n",
" 723.0,\n",
" 727.0,\n",
" 672.5,\n",
" 607.0,\n",
" 581.0,\n",
" 641.5,\n",
" 641.5,\n",
" 661.5,\n",
" 754.0,\n",
" 772.5,\n",
" 726.5,\n",
" 731.0,\n",
" 724.5,\n",
" 726.0,\n",
" 673.0,\n",
" 610.0,\n",
" 591.5,\n",
" 651.5,\n",
" 656.5,\n",
" 656.5,\n",
" 716.5,\n",
" 702.5,\n",
" 677.5,\n",
" 670.5,\n",
" 670.0,\n",
" 698.5,\n",
" 662.5,\n",
" 601.5,\n",
" 577.0,\n",
" 642.0,\n",
" 647.0,\n",
" 659.5,\n",
" 683.0,\n",
" 689.0,\n",
" 656.0,\n",
" 660.0,\n",
" 665.0,\n",
" 685.0,\n",
" 648.0,\n",
" 592.5,\n",
" 589.0,\n",
" 653.0,\n",
" 661.5,\n",
" 658.0,\n",
" 681.0,\n",
" 678.5,\n",
" 649.5,\n",
" 667.5,\n",
" 658.5,\n",
" 666.0,\n",
" 630.5,\n",
" 578.5,\n",
" 560.5,\n",
" 621.5,\n",
" 611.5,\n",
" 586.5,\n",
" 622.5,\n",
" 614.0,\n",
" 593.5,\n",
" 593.5,\n",
" 600.5,\n",
" 621.5,\n",
" 583.0,\n",
" 525.5,\n",
" 516.5,\n",
" 583.0,\n",
" 564.0,\n",
" 501.0,\n",
" 509.5,\n",
" 523.5,\n",
" 520.5,\n",
" 515.0,\n",
" 540.0,\n",
" 587.0,\n",
" 558.0,\n",
" 504.5,\n",
" 500.0,\n",
" 567.0,\n",
" 575.0,\n",
" 583.5,\n",
" 684.0,\n",
" 719.0,\n",
" 709.0,\n",
" 732.0,\n",
" 720.0,\n",
" 693.5,\n",
" 628.5,\n",
" 583.0,\n",
" 564.0,\n",
" 629.0,\n",
" 635.5,\n",
" 637.0,\n",
" 689.5,\n",
" 693.5,\n",
" 678.5,\n",
" 683.5,\n",
" 678.0,\n",
" 687.5,\n",
" 649.0,\n",
" 603.0,\n",
" 587.0,\n",
" 647.5,\n",
" 655.5,\n",
" 630.0,\n",
" 667.0,\n",
" 690.0,\n",
" 672.0,\n",
" 681.0,\n",
" 679.0,\n",
" 688.5,\n",
" 650.0,\n",
" 609.0,\n",
" 593.0,\n",
" 655.5,\n",
" 665.5,\n",
" 639.5,\n",
" 694.5,\n",
" 693.0,\n",
" 653.5,\n",
" 701.5,\n",
" 699.5,\n",
" 710.5,\n",
" 663.5,\n",
" 602.5,\n",
" 583.5,\n",
" 648.5,\n",
" 644.0,\n",
" 639.5,\n",
" 656.5,\n",
" 657.0,\n",
" 632.0,\n",
" 652.5,\n",
" 661.0,\n",
" 672.0,\n",
" 632.0,\n",
" 577.5,\n",
" 555.0,\n",
" 620.0,\n",
" 617.0,\n",
" 566.5,\n",
" 591.0,\n",
" 613.0,\n",
" 586.0,\n",
" 579.5,\n",
" 585.5,\n",
" 625.0,\n",
" 594.0,\n",
" 539.5,\n",
" 525.0,\n",
" 595.5,\n",
" 589.0,\n",
" 530.5,\n",
" 504.0,\n",
" 512.0,\n",
" 518.0,\n",
" 504.0,\n",
" 534.0,\n",
" 595.0,\n",
" 579.0,\n",
" 534.0,\n",
" 522.0,\n",
" 598.0,\n",
" 615.5,\n",
" 595.5,\n",
" 634.0,\n",
" 661.5,\n",
" 642.5,\n",
" 659.0,\n",
" 665.5,\n",
" 674.0,\n",
" 633.0,\n",
" 585.0,\n",
" 552.5,\n",
" 613.5,\n",
" 615.5,\n",
" 587.5,\n",
" 636.5,\n",
" 662.5,\n",
" 640.5,\n",
" 670.0,\n",
" 663.0,\n",
" 670.0,\n",
" 621.5,\n",
" 576.5,\n",
" 543.5,\n",
" 597.0,\n",
" 604.5,\n",
" 601.5,\n",
" 690.0,\n",
" 703.5,\n",
" 678.0,\n",
" 702.5,\n",
" 690.0,\n",
" 685.0,\n",
" 631.5,\n",
" 573.0,\n",
" 548.5,\n",
" 613.5,\n",
" 636.0,\n",
" 613.0,\n",
" 676.5,\n",
" 681.0,\n",
" 656.0,\n",
" 678.0,\n",
" 666.0,\n",
" 681.5,\n",
" 647.5,\n",
" 593.0,\n",
" 576.0,\n",
" 631.0,\n",
" 649.5,\n",
" 622.5,\n",
" 652.0,\n",
" 666.0,\n",
" 647.5,\n",
" 661.5,\n",
" 663.0,\n",
" 667.5,\n",
" 645.0,\n",
" 583.5,\n",
" 556.0,\n",
" 611.0,\n",
" 599.5,\n",
" 538.5,\n",
" 559.5,\n",
" 570.0,\n",
" 560.5,\n",
" 553.5,\n",
" 559.5,\n",
" 589.0,\n",
" 566.0,\n",
" 513.5,\n",
" 501.5,\n",
" 569.0,\n",
" 563.5,\n",
" 509.0,\n",
" 495.0,\n",
" 511.5,\n",
" 515.0,\n",
" 510.5,\n",
" 537.5,\n",
" 566.5,\n",
" 542.0,\n",
" 495.5,\n",
" 477.0,\n",
" 541.5,\n",
" 550.5,\n",
" 529.5,\n",
" 598.5,\n",
" 624.0,\n",
" 608.0,\n",
" 613.5,\n",
" 610.0,\n",
" 620.5,\n",
" 582.0,\n",
" 517.0,\n",
" 499.5,\n",
" 562.0,\n",
" 579.5,\n",
" 535.5,\n",
" 565.5,\n",
" 600.0,\n",
" 581.5,\n",
" 587.0,\n",
" 576.5,\n",
" 594.5,\n",
" 564.5,\n",
" 520.5,\n",
" 499.0,\n",
" 562.5,\n",
" 559.0,\n",
" 479.5,\n",
" 492.5,\n",
" 529.5,\n",
" 532.5,\n",
" 536.0,\n",
" 533.0,\n",
" 566.5,\n",
" 541.0,\n",
" 487.5,\n",
" 467.0,\n",
" 529.0,\n",
" 533.0,\n",
" 469.0,\n",
" 477.5,\n",
" 508.5,\n",
" 525.5,\n",
" 518.5,\n",
" 535.0,\n",
" 566.5,\n",
" 531.0,\n",
" 478.5,\n",
" 467.5,\n",
" 520.5,\n",
" 524.5,\n",
" 467.5,\n",
" 479.0,\n",
" 511.0,\n",
" 529.5,\n",
" 535.0,\n",
" 556.0,\n",
" 580.0,\n",
" 554.5,\n",
" 500.5,\n",
" 469.5,\n",
" 519.0,\n",
" 516.5,\n",
" 479.5,\n",
" 516.0,\n",
" 536.0,\n",
" 532.0,\n",
" 518.5,\n",
" 525.5,\n",
" 550.0,\n",
" 519.0,\n",
" 469.0,\n",
" 468.5,\n",
" 524.0,\n",
" 529.0,\n",
" 474.0,\n",
" 494.5,\n",
" 519.5,\n",
" 532.0,\n",
" 525.0,\n",
" 549.5,\n",
" 577.0,\n",
" 553.5,\n",
" 505.0,\n",
" 481.0,\n",
" 542.5,\n",
" 567.5,\n",
" 548.0,\n",
" 622.0,\n",
" 684.0,\n",
" 666.5,\n",
" 677.5,\n",
" 673.5,\n",
" 670.0,\n",
" 624.0,\n",
" 574.0,\n",
" 548.5,\n",
" 598.5,\n",
" 621.5,\n",
" 597.5,\n",
" 676.5,\n",
" 691.0,\n",
" 666.5,\n",
" 703.0,\n",
" 693.5,\n",
" 673.0,\n",
" 616.5,\n",
" 558.0,\n",
" 530.0,\n",
" 595.5,\n",
" 610.0,\n",
" 592.0,\n",
" 673.5,\n",
" 682.5,\n",
" 647.5,\n",
" 674.0,\n",
" 665.5,\n",
" 664.5,\n",
" 611.5,\n",
" 562.5,\n",
" 531.5,\n",
" 592.0,\n",
" 601.0,\n",
" 567.0,\n",
" 647.5,\n",
" 692.0,\n",
" 684.5,\n",
" 704.5,\n",
" 696.5,\n",
" 679.0,\n",
" 626.5,\n",
" 561.5,\n",
" 531.0,\n",
" 584.5,\n",
" 588.5,\n",
" 555.0,\n",
" 646.0,\n",
" 676.0,\n",
" 672.5,\n",
" 715.5,\n",
" 695.5,\n",
" 671.5,\n",
" 617.0,\n",
" 560.0,\n",
" 533.5,\n",
" 587.5,\n",
" 588.5,\n",
" 545.5,\n",
" 594.5,\n",
" 615.0,\n",
" 599.0,\n",
" 592.0,\n",
" 594.0,\n",
" 602.0,\n",
" 573.0,\n",
" 514.0,\n",
" 487.0,\n",
" 540.5,\n",
" 545.5,\n",
" 484.0,\n",
" 494.5,\n",
" 530.0,\n",
" 532.5,\n",
" 529.5,\n",
" 550.0,\n",
" 581.5,\n",
" 554.5,\n",
" 499.0,\n",
" 476.5,\n",
" 538.5,\n",
" 566.0,\n",
" 549.5,\n",
" 629.5,\n",
" 664.0,\n",
" 637.5,\n",
" 667.5,\n",
" 651.0,\n",
" 651.5,\n",
" 610.5,\n",
" 559.0,\n",
" 533.5,\n",
" 590.0,\n",
" 612.5,\n",
" 591.5,\n",
" 660.5,\n",
" 672.5,\n",
" 655.0,\n",
" 672.0,\n",
" 673.5,\n",
" 673.5,\n",
" 630.0,\n",
" 582.0,\n",
" 553.0,\n",
" 610.5,\n",
" 628.5,\n",
" 604.5,\n",
" 658.0,\n",
" 706.5,\n",
" 692.5,\n",
" 699.5,\n",
" 691.5,\n",
" 682.5,\n",
" 637.0,\n",
" 586.0,\n",
" 544.5,\n",
" 605.0,\n",
" 620.0,\n",
" 597.5,\n",
" 663.5,\n",
" 703.5,\n",
" 697.0,\n",
" 710.0,\n",
" 702.5,\n",
" 689.0,\n",
" 641.5,\n",
" 573.0,\n",
" 535.5,\n",
" 588.0,\n",
" 605.0,\n",
" 569.0,\n",
" 622.5,\n",
" 675.0,\n",
" 686.0,\n",
" 697.5,\n",
" 690.0,\n",
" 663.5,\n",
" 623.5,\n",
" 579.0,\n",
" 550.0,\n",
" 598.0,\n",
" 600.5,\n",
" 548.0,\n",
" 601.5,\n",
" 638.0,\n",
" 638.0,\n",
" 638.0,\n",
" 643.0,\n",
" 639.5,\n",
" 597.5,\n",
" 536.0,\n",
" 506.0,\n",
" 549.5,\n",
" 563.5,\n",
" 506.5,\n",
" 536.5,\n",
" 577.5,\n",
" 580.0,\n",
" 574.0,\n",
" 580.0,\n",
" 610.0,\n",
" 593.0,\n",
" 527.5,\n",
" 501.0,\n",
" 556.5,\n",
" 588.0,\n",
" 576.0,\n",
" 658.0,\n",
" 711.0,\n",
" 707.0,\n",
" 735.0,\n",
" 719.5,\n",
" 697.5,\n",
" 649.5,\n",
" 592.5,\n",
" 548.0,\n",
" 589.0,\n",
" 601.5,\n",
" 579.0,\n",
" 671.0,\n",
" 726.0,\n",
" 727.5,\n",
" 750.5,\n",
" 723.5,\n",
" 708.5,\n",
" 649.0,\n",
" 585.0,\n",
" 548.0,\n",
" 590.0,\n",
" 616.5,\n",
" 612.5,\n",
" 718.5,\n",
" 748.0,\n",
" 718.0,\n",
" 726.5,\n",
" 714.0,\n",
" 688.0,\n",
" 625.5,\n",
" 572.0,\n",
" 536.0,\n",
" 581.0,\n",
" 601.0,\n",
" 584.0,\n",
" 665.5,\n",
" 725.5,\n",
" 709.0,\n",
" 721.5,\n",
" 705.5,\n",
" 684.5,\n",
" 633.5,\n",
" 577.5,\n",
" 535.0,\n",
" 580.5,\n",
" 605.0,\n",
" 574.0,\n",
" 664.0,\n",
" 715.0,\n",
" 702.5,\n",
" 695.0,\n",
" 675.5,\n",
" 662.0,\n",
" 610.5,\n",
" 556.0,\n",
" 516.0,\n",
" 569.0,\n",
" 587.5,\n",
" 536.0,\n",
" 579.0,\n",
" 586.5,\n",
" 591.0,\n",
" 580.0,\n",
" 585.5,\n",
" 612.5,\n",
" 589.0,\n",
" 528.0,\n",
" 498.0,\n",
" 546.0,\n",
" 564.0,\n",
" 486.0,\n",
" 490.0,\n",
" 522.5,\n",
" 537.5,\n",
" 539.0,\n",
" 560.5,\n",
" 590.5,\n",
" 561.0,\n",
" 515.0,\n",
" 488.5,\n",
" 544.5,\n",
" 571.0,\n",
" 558.0,\n",
" 643.0,\n",
" 706.5,\n",
" 708.0,\n",
" 736.0,\n",
" 730.0,\n",
" 693.0,\n",
" 635.5,\n",
" 572.5,\n",
" 529.0,\n",
" 580.0,\n",
" 602.0,\n",
" 584.5,\n",
" 687.5,\n",
" 745.5,\n",
" 760.5,\n",
" 783.5,\n",
" 748.0,\n",
" 718.0,\n",
" 659.5,\n",
" 587.0,\n",
" 540.0,\n",
" 580.0,\n",
" 602.0,\n",
" 588.5,\n",
" 668.0,\n",
" 718.5,\n",
" 681.5,\n",
" 722.0,\n",
" 710.0,\n",
" 682.5,\n",
" 621.0,\n",
" 561.0,\n",
" 522.0,\n",
" 561.5,\n",
" 586.0,\n",
" 579.5,\n",
" 680.0,\n",
" 735.5,\n",
" 737.0,\n",
" 755.0,\n",
" 728.0,\n",
" 706.5,\n",
" 642.5,\n",
" 568.5,\n",
" 529.5,\n",
" 567.5,\n",
" 594.0,\n",
" 576.0,\n",
" 657.5,\n",
" 705.0,\n",
" 691.5,\n",
" 709.0,\n",
" 693.0,\n",
" 660.5,\n",
" 616.5,\n",
" 556.0,\n",
" 517.5,\n",
" 557.0,\n",
" 575.0,\n",
" 531.0,\n",
" 573.5,\n",
" 605.5,\n",
" 613.5,\n",
" 615.5,\n",
" 597.0,\n",
" 597.0,\n",
" 570.5,\n",
" 507.5,\n",
" 476.5,\n",
" 527.5,\n",
" 547.0,\n",
" 482.5,\n",
" 502.5,\n",
" 537.5,\n",
" 539.0,\n",
" 540.5,\n",
" 561.5,\n",
" 587.5,\n",
" 562.5,\n",
" 496.0,\n",
" 470.0,\n",
" 531.0,\n",
" 558.5,\n",
" 542.0,\n",
" 628.0,\n",
" 686.5,\n",
" 684.0,\n",
" 691.0,\n",
" 692.0,\n",
" 669.0,\n",
" 625.5,\n",
" 555.0,\n",
" 520.5,\n",
" 569.5,\n",
" 593.0,\n",
" 571.5,\n",
" 661.0,\n",
" 703.0,\n",
" 675.5,\n",
" 703.5,\n",
" 695.5,\n",
" 679.0,\n",
" 632.0,\n",
" 575.0,\n",
" 535.5,\n",
" 580.5,\n",
" 610.5,\n",
" 611.5,\n",
" 714.5,\n",
" 740.5,\n",
" 712.5,\n",
" 729.5,\n",
" 706.5,\n",
" 674.5,\n",
" 617.5,\n",
" 568.0,\n",
" 536.0,\n",
" 576.5,\n",
" 601.0,\n",
" 582.5,\n",
" 669.0,\n",
" 708.0,\n",
" 706.0,\n",
" 701.5,\n",
" 696.5,\n",
" 674.0,\n",
" 631.5,\n",
" 579.0,\n",
" 537.0,\n",
" 573.0,\n",
" 594.5,\n",
" 566.0,\n",
" 642.5,\n",
" 700.5,\n",
" 689.5,\n",
" 719.5,\n",
" 693.0,\n",
" 653.5,\n",
" 615.0,\n",
" 559.0,\n",
" 521.5,\n",
" 575.5,\n",
" 580.0,\n",
" 541.0,\n",
" 600.0,\n",
" 635.5,\n",
" 647.5,\n",
" 656.5,\n",
" 642.5,\n",
" 645.5,\n",
" 610.5,\n",
" 540.0,\n",
" 496.5,\n",
" 536.5,\n",
" 546.0,\n",
" 506.0,\n",
" 530.0,\n",
" 536.5,\n",
" 545.5,\n",
" 553.0,\n",
" 568.0,\n",
" 589.5,\n",
" 564.5,\n",
" 499.5,\n",
" 469.0,\n",
" 526.5,\n",
" 559.5,\n",
" 545.5,\n",
" 649.5,\n",
" 695.0,\n",
" 684.5,\n",
" 694.0,\n",
" 669.0,\n",
" 658.0,\n",
" 616.0,\n",
" 559.5,\n",
" 526.5,\n",
" 571.5,\n",
" 604.0,\n",
" 574.5,\n",
" 644.5,\n",
" 685.0,\n",
" 684.5,\n",
" 702.5,\n",
" 682.5,\n",
" 659.0,\n",
" 621.0,\n",
" 568.0,\n",
" 525.0,\n",
" 573.5,\n",
" 592.5,\n",
" 571.0,\n",
" 645.0,\n",
" 698.5,\n",
" 691.0,\n",
" 728.5,\n",
" 706.0,\n",
" 671.5,\n",
" 629.0,\n",
" 567.5,\n",
" 524.0,\n",
" 566.5,\n",
" 586.5,\n",
" 572.5,\n",
" 665.0,\n",
" 723.0,\n",
" 726.0,\n",
" 753.0,\n",
" 729.5,\n",
" 688.5,\n",
" 641.5,\n",
" 580.0,\n",
" 533.5,\n",
" 580.0,\n",
" 599.0,\n",
" 594.5,\n",
" 687.5,\n",
" 754.5,\n",
" 739.5,\n",
" 759.0,\n",
" 727.5,\n",
" 692.0,\n",
" 641.5,\n",
" 589.0,\n",
" 545.0,\n",
" 574.0,\n",
" 595.5,\n",
" 545.5,\n",
" 584.0,\n",
" 643.5,\n",
" 652.5,\n",
" 656.0,\n",
" 626.5,\n",
" 620.5,\n",
" 597.0,\n",
" 537.0,\n",
" 504.5,\n",
" 545.0,\n",
" 556.5,\n",
" 505.5,\n",
" 533.0,\n",
" 570.5,\n",
" 583.5,\n",
" 590.0,\n",
" 606.5,\n",
" 626.0,\n",
" 593.0,\n",
" 533.0,\n",
" 488.0,\n",
" 527.5,\n",
" 560.5,\n",
" 553.5,\n",
" 682.5,\n",
" 767.5,\n",
" 767.5,\n",
" 797.5,\n",
" 781.5,\n",
" 725.5,\n",
" 666.5,\n",
" 598.5,\n",
" 544.0,\n",
" 577.5,\n",
" 606.5,\n",
" 590.5,\n",
" 704.5,\n",
" 760.5,\n",
" 753.5,\n",
" 785.0,\n",
" 763.5,\n",
" 706.5,\n",
" 646.5,\n",
" 586.5,\n",
" 540.5,\n",
" 568.0,\n",
" 601.5,\n",
" 601.0,\n",
" 709.5,\n",
" 745.5,\n",
" 712.5,\n",
" 747.5,\n",
" 722.5,\n",
" 691.5,\n",
" 634.5,\n",
" 571.0,\n",
" 528.5,\n",
" 573.0,\n",
" 607.5,\n",
" 588.5,\n",
" 686.5,\n",
" 743.5,\n",
" 731.0,\n",
" 746.5,\n",
" 734.0,\n",
" 709.0,\n",
" 660.5,\n",
" 595.5,\n",
" 544.5,\n",
" 585.0,\n",
" 609.5,\n",
" 604.5,\n",
" ...]}]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"test_data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's now write the dictionary to the `jsonlines` file format that DeepAR understands (it also supports gzipped jsonlines and parquet)."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"def write_dicts_to_file(path, data):\n",
" with open(path, 'wb') as fp:\n",
" for d in data:\n",
" fp.write(json.dumps(d).encode(\"utf-8\"))\n",
" fp.write(\"\\n\".encode('utf-8'))"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 5.8 ms, sys: 0 ns, total: 5.8 ms\n",
"Wall time: 5.32 ms\n"
]
}
],
"source": [
"%%time\n",
"write_dicts_to_file(\"train.json\", training_data)\n",
"write_dicts_to_file(\"test.json\", test_data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have the data files locally, let us copy them to S3 where DeepAR can access them. Depending on your connection, this may take a couple of minutes."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"s3 = boto3.resource('s3')\n",
"def copy_to_s3(local_file, s3_path, override=False):\n",
" assert s3_path.startswith('s3://')\n",
" split = s3_path.split('/')\n",
" bucket = split[2]\n",
" path = '/'.join(split[3:])\n",
" buk = s3.Bucket(bucket)\n",
" \n",
" if len(list(buk.objects.filter(Prefix=path))) > 0:\n",
" if not override:\n",
" print('File s3://{}/{} already exists.\\nSet override to upload anyway.\\n'.format(s3_bucket, s3_path))\n",
" return\n",
" else:\n",
" print('Overwriting existing file')\n",
" with open(local_file, 'rb') as data:\n",
" print('Uploading file to {}'.format(s3_path))\n",
" buk.put_object(Key=path, Body=data)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"File s3://sagemaker-itoc/s3://sagemaker-itoc/itoc/data/train/train.json already exists.\n",
"Set override to upload anyway.\n",
"\n",
"File s3://sagemaker-itoc/s3://sagemaker-itoc/itoc/data/test/test.json already exists.\n",
"Set override to upload anyway.\n",
"\n",
"CPU times: user 25.2 ms, sys: 0 ns, total: 25.2 ms\n",
"Wall time: 171 ms\n"
]
}
],
"source": [
"%%time\n",
"copy_to_s3(\"train.json\", s3_data_path + \"/train/train.json\")\n",
"copy_to_s3(\"test.json\", s3_data_path + \"/test/test.json\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's have a look to what we just wrote to S3."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\"start\": \"2017-04-01 00:00:00\", \"target\": [654, 660, 685, 706, 696, 667, 665, 662, 689, 727, 680, 6...\n"
]
}
],
"source": [
"s3filesystem = s3fs.S3FileSystem()\n",
"with s3filesystem.open(s3_data_path + \"/train/train.json\", 'rb') as fp:\n",
" print(fp.readline().decode(\"utf-8\")[:100] + \"...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are all set with our dataset processing, we can now call DeepAR to train a model and generate predictions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Train a model\n",
"\n",
"Here we define the estimator that will launch the training job."
]
},
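{
"cell_type": "markdown",
"metadata": {},
"source": [
"The estimator below references `image_name`, which must point to the DeepAR container image for the current region. If it was not set earlier in the notebook, it can be resolved with the SageMaker Python SDK v1 helper shown here:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sagemaker.amazon.amazon_estimator import get_image_uri\n",
"\n",
"# resolve the DeepAR container image URI for the current region (SDK v1)\n",
"image_name = get_image_uri(boto3.Session().region_name, 'forecasting-deepar')"
]
},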
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"estimator = sagemaker.estimator.Estimator(\n",
" sagemaker_session=sagemaker_session,\n",
" image_name=image_name,\n",
" role=role,\n",
" train_instance_count=1,\n",
" train_instance_type='ml.c4.2xlarge',\n",
" base_job_name='deepar-electricity-demo',\n",
" output_path=s3_output_path\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we need to set the hyperparameters for the training job. For example frequency of the time series used, number of data points the model will look at in the past, number of predicted data points. The other hyperparameters concern the model to train (number of layers, number of cells per layer, likelihood function) and the training options (number of epochs, batch size, learning rate...). We use default parameters for every optional parameter in this case (you can always use [Sagemaker Automated Model Tuning](https://aws.amazon.com/blogs/aws/sagemaker-automatic-model-tuning/) to tune them)."
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"hyperparameters = {\n",
" \"time_freq\": freq,\n",
" \"epochs\": \"100\",\n",
" \"early_stopping_patience\": \"40\",\n",
" \"mini_batch_size\": \"64\",\n",
" \"learning_rate\": \"5E-4\",\n",
" \"context_length\": str(context_length),\n",
" \"prediction_length\": str(prediction_length)\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"estimator.set_hyperparameters(**hyperparameters)"
]
},
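{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, here is a minimal sketch of what tuning a few of these hyperparameters with SageMaker Automatic Model Tuning could look like. The ranges below are purely illustrative rather than recommendations, `test:RMSE` is one of DeepAR's tunable objective metrics, and the commented-out `tuner.fit` call assumes the same train/test channels used in this notebook. The cell is left unexecuted."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only (not executed here): tune a few DeepAR hyperparameters with\n",
"# SageMaker Automatic Model Tuning. The ranges are illustrative.\n",
"from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter\n",
"\n",
"hyperparameter_ranges = {\n",
"    'learning_rate': ContinuousParameter(1e-4, 1e-2),\n",
"    'context_length': IntegerParameter(42, 168),\n",
"    'num_cells': IntegerParameter(30, 100)\n",
"}\n",
"\n",
"tuner = HyperparameterTuner(\n",
"    estimator=estimator,\n",
"    objective_metric_name='test:RMSE',  # minimize RMSE on the test channel\n",
"    objective_type='Minimize',\n",
"    hyperparameter_ranges=hyperparameter_ranges,\n",
"    max_jobs=10,\n",
"    max_parallel_jobs=2\n",
")\n",
"\n",
"# tuner.fit({'train': s3_data_path + '/train/', 'test': s3_data_path + '/test/'})"
]
},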
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are ready to launch the training job. SageMaker will start an EC2 instance, download the data from S3, start training the model and save the trained model.\n",
"\n",
"If you provide the `test` data channel as we do in this example, DeepAR will also calculate accuracy metrics for the trained model on this test. This is done by predicting the last `prediction_length` points of each time-series in the test set and comparing this to the actual value of the time-series. \n",
"\n",
"**Note:** the next cell may take a few minutes to complete, depending on data size, model complexity, training options."
]
},
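{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make that evaluation concrete, here is a tiny conceptual sketch with made-up numbers (an illustration, not DeepAR's actual implementation): compare the observed tail of a series with a forecast over the `prediction_length` window and compute an error metric such as RMSE."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Conceptual illustration of the test-channel evaluation (made-up arrays).\n",
"# DeepAR predicts the last `prediction_length` points of each test series\n",
"# and compares them with the observed values, e.g. via RMSE.\n",
"actual = np.array([654.0, 660.0, 685.0, 706.0])    # observed tail (stand-in)\n",
"forecast = np.array([650.0, 670.0, 680.0, 700.0])  # model forecast (stand-in)\n",
"\n",
"rmse = np.sqrt(np.mean((forecast - actual) ** 2))\n",
"print('RMSE over the forecast window: {:.2f}'.format(rmse))"
]
},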
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:sagemaker:Creating training-job with name: deepar-electricity-demo-2019-02-18-01-56-30-309\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"2019-02-18 01:56:30 Starting - Starting the training job...\n",
"2019-02-18 01:56:32 Starting - Launching requested ML instances......\n",
"2019-02-18 01:57:34 Starting - Preparing the instances for training...\n",
"2019-02-18 01:58:29 Downloading - Downloading input data\n",
"2019-02-18 01:58:29 Training - Downloading the training image.....\n",
"\u001b[31mArguments: train\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Reading default configuration from /opt/amazon/lib/python2.7/site-packages/algorithm/default-input.json: {u'num_dynamic_feat': u'auto', u'dropout_rate': u'0.10', u'mini_batch_size': u'128', u'test_quantiles': u'[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]', u'_tuning_objective_metric': u'', u'_num_gpus': u'auto', u'num_eval_samples': u'100', u'learning_rate': u'0.001', u'num_cells': u'40', u'num_layers': u'2', u'embedding_dimension': u'10', u'_kvstore': u'auto', u'_num_kv_servers': u'auto', u'cardinality': u'auto', u'likelihood': u'student-t', u'early_stopping_patience': u''}\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Reading provided configuration from /opt/ml/input/config/hyperparameters.json: {u'learning_rate': u'5E-4', u'prediction_length': u'84', u'epochs': u'100', u'time_freq': u'2H', u'context_length': u'84', u'mini_batch_size': u'64', u'early_stopping_patience': u'40'}\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Final configuration: {u'dropout_rate': u'0.10', u'test_quantiles': u'[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]', u'_tuning_objective_metric': u'', u'num_eval_samples': u'100', u'learning_rate': u'5E-4', u'num_layers': u'2', u'epochs': u'100', u'embedding_dimension': u'10', u'num_cells': u'40', u'_num_kv_servers': u'auto', u'mini_batch_size': u'64', u'likelihood': u'student-t', u'num_dynamic_feat': u'auto', u'cardinality': u'auto', u'_num_gpus': u'auto', u'prediction_length': u'84', u'time_freq': u'2H', u'context_length': u'84', u'_kvstore': u'auto', u'early_stopping_patience': u'40'}\u001b[0m\n",
"\u001b[31mProcess 1 is a worker.\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Detected entry point for worker worker\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Using early stopping with patience 40\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] [cardinality=auto] `cat` field was NOT found in the file `/opt/ml/input/data/train/train.json` and will NOT be used for training.\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] [num_dynamic_feat=auto] `dynamic_feat` field was NOT found in the file `/opt/ml/input/data/train/train.json` and will NOT be used for training.\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Training set statistics:\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Integer time series\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] number of time series: 1\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] number of observations: 5856\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] mean target length: 5856\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] min/mean/max target: 458.0/669.295081967/1075.0\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] mean abs(target): 669.295081967\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] contains missing values: no\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Small number of time series. Doing 10 number of passes over dataset per epoch.\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Test set statistics:\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Integer time series\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] number of time series: 4\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] number of observations: 25108\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] mean target length: 6277\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] min/mean/max target: 458.0/679.036880675/1075.0\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] mean abs(target): 679.036880675\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] contains missing values: no\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] nvidia-smi took: 0.0252089500427 secs to identify 0 gpus\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Number of GPUs being used: 0\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:04 INFO 140098330203968] Create Store: local\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"get_graph.time\": {\"count\": 1, \"max\": 1664.5970344543457, \"sum\": 1664.5970344543457, \"min\": 1664.5970344543457}}, \"EndTime\": 1550455146.572732, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455144.907268}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:06 INFO 140098330203968] Number of GPUs being used: 0\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"initialize.time\": {\"count\": 1, \"max\": 2376.9381046295166, \"sum\": 2376.9381046295166, \"min\": 2376.9381046295166}}, \"EndTime\": 1550455147.284316, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455146.57281}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:08 INFO 140098330203968] Epoch[0] Batch[0] avg_epoch_loss=8.682605\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:09 INFO 140098330203968] Epoch[0] Batch[5] avg_epoch_loss=8.025395\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:09 INFO 140098330203968] Epoch[0] Batch [5]#011Speed: 301.79 samples/sec#011loss=8.025395\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:10 INFO 140098330203968] processed a total of 627 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"epochs\": {\"count\": 1, \"max\": 100, \"sum\": 100.0, \"min\": 100}, \"update.time\": {\"count\": 1, \"max\": 2720.9041118621826, \"sum\": 2720.9041118621826, \"min\": 2720.9041118621826}}, \"EndTime\": 1550455150.005397, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455147.284402}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:10 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=230.426095958 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:10 INFO 140098330203968] #progress_metric: host=algo-1, completed 1 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:10 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:10 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_997a3e3d-429b-401a-87d4-b16f456c3017-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 93.85299682617188, \"sum\": 93.85299682617188, \"min\": 93.85299682617188}}, \"EndTime\": 1550455150.099776, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455150.005496}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:10 INFO 140098330203968] Epoch[1] Batch[0] avg_epoch_loss=7.112750\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:11 INFO 140098330203968] Epoch[1] Batch[5] avg_epoch_loss=6.983446\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:11 INFO 140098330203968] Epoch[1] Batch [5]#011Speed: 302.04 samples/sec#011loss=6.983446\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:12 INFO 140098330203968] processed a total of 629 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2439.481019973755, \"sum\": 2439.481019973755, \"min\": 2439.481019973755}}, \"EndTime\": 1550455152.539407, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455150.099854}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:12 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=257.828296583 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:12 INFO 140098330203968] #progress_metric: host=algo-1, completed 2 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:12 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:12 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_743f5eab-74f9-4302-a9c1-3db0bfaf7558-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 59.4329833984375, \"sum\": 59.4329833984375, \"min\": 59.4329833984375}}, \"EndTime\": 1550455152.599293, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455152.539492}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:13 INFO 140098330203968] Epoch[2] Batch[0] avg_epoch_loss=6.770631\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:14 INFO 140098330203968] Epoch[2] Batch[5] avg_epoch_loss=6.624952\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:14 INFO 140098330203968] Epoch[2] Batch [5]#011Speed: 260.06 samples/sec#011loss=6.624952\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:15 INFO 140098330203968] processed a total of 639 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2585.055112838745, \"sum\": 2585.055112838745, \"min\": 2585.055112838745}}, \"EndTime\": 1550455155.184483, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455152.599364}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:15 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=247.179281786 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:15 INFO 140098330203968] #progress_metric: host=algo-1, completed 3 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:15 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:15 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_4339cad9-4485-4e6e-a070-1f35deb55716-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 56.94413185119629, \"sum\": 56.94413185119629, \"min\": 56.94413185119629}}, \"EndTime\": 1550455155.241918, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455155.184555}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:15 INFO 140098330203968] Epoch[3] Batch[0] avg_epoch_loss=6.374486\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:16 INFO 140098330203968] Epoch[3] Batch[5] avg_epoch_loss=6.250451\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:16 INFO 140098330203968] Epoch[3] Batch [5]#011Speed: 298.70 samples/sec#011loss=6.250451\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:17 INFO 140098330203968] Epoch[3] Batch[10] avg_epoch_loss=6.199170\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:17 INFO 140098330203968] Epoch[3] Batch [10]#011Speed: 299.34 samples/sec#011loss=6.137634\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:17 INFO 140098330203968] processed a total of 664 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2603.701114654541, \"sum\": 2603.701114654541, \"min\": 2603.701114654541}}, \"EndTime\": 1550455157.845763, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455155.241993}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:17 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=255.010639049 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:17 INFO 140098330203968] #progress_metric: host=algo-1, completed 4 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:17 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:17 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_abe63da7-5662-4c8c-9c56-7feacf5037e8-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.84487724304199, \"sum\": 57.84487724304199, \"min\": 57.84487724304199}}, \"EndTime\": 1550455157.904081, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455157.845832}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:18 INFO 140098330203968] Epoch[4] Batch[0] avg_epoch_loss=6.132699\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:19 INFO 140098330203968] Epoch[4] Batch[5] avg_epoch_loss=6.097386\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:19 INFO 140098330203968] Epoch[4] Batch [5]#011Speed: 297.32 samples/sec#011loss=6.097386\u001b[0m\n",
"\n",
"2019-02-18 01:59:02 Training - Training image download completed. Training in progress.\u001b[31m[02/18/2019 01:59:20 INFO 140098330203968] Epoch[4] Batch[10] avg_epoch_loss=6.043487\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:20 INFO 140098330203968] Epoch[4] Batch [10]#011Speed: 291.62 samples/sec#011loss=5.978808\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:20 INFO 140098330203968] processed a total of 651 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2634.181022644043, \"sum\": 2634.181022644043, \"min\": 2634.181022644043}}, \"EndTime\": 1550455160.538411, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455157.904159}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:20 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=247.121834729 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:20 INFO 140098330203968] #progress_metric: host=algo-1, completed 5 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:20 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:20 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_abd7c881-a02e-4c47-8031-12a496ace8bf-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 58.25400352478027, \"sum\": 58.25400352478027, \"min\": 58.25400352478027}}, \"EndTime\": 1550455160.597162, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455160.538514}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:21 INFO 140098330203968] Epoch[5] Batch[0] avg_epoch_loss=6.034633\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:22 INFO 140098330203968] Epoch[5] Batch[5] avg_epoch_loss=5.967347\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:22 INFO 140098330203968] Epoch[5] Batch [5]#011Speed: 305.41 samples/sec#011loss=5.967347\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:23 INFO 140098330203968] Epoch[5] Batch[10] avg_epoch_loss=5.957599\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:23 INFO 140098330203968] Epoch[5] Batch [10]#011Speed: 287.52 samples/sec#011loss=5.945901\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:23 INFO 140098330203968] processed a total of 657 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2604.9110889434814, \"sum\": 2604.9110889434814, \"min\": 2604.9110889434814}}, \"EndTime\": 1550455163.202218, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455160.597239}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:23 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=252.204198751 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:23 INFO 140098330203968] #progress_metric: host=algo-1, completed 6 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:23 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:23 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_5220218d-5c37-4590-a390-6c58e3393aa8-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 87.26906776428223, \"sum\": 87.26906776428223, \"min\": 87.26906776428223}}, \"EndTime\": 1550455163.28997, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455163.202299}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:23 INFO 140098330203968] Epoch[6] Batch[0] avg_epoch_loss=5.881286\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:24 INFO 140098330203968] Epoch[6] Batch[5] avg_epoch_loss=5.803719\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:24 INFO 140098330203968] Epoch[6] Batch [5]#011Speed: 296.33 samples/sec#011loss=5.803719\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:25 INFO 140098330203968] processed a total of 632 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2394.9689865112305, \"sum\": 2394.9689865112305, \"min\": 2394.9689865112305}}, \"EndTime\": 1550455165.685092, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455163.290048}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:25 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=263.87229569 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:25 INFO 140098330203968] #progress_metric: host=algo-1, completed 7 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:25 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:25 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_26a2e89d-2fce-4749-b8e7-5734c192348b-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 56.77390098571777, \"sum\": 56.77390098571777, \"min\": 56.77390098571777}}, \"EndTime\": 1550455165.742388, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455165.685177}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:26 INFO 140098330203968] Epoch[7] Batch[0] avg_epoch_loss=5.771475\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:27 INFO 140098330203968] Epoch[7] Batch[5] avg_epoch_loss=5.645546\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:27 INFO 140098330203968] Epoch[7] Batch [5]#011Speed: 302.52 samples/sec#011loss=5.645546\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:28 INFO 140098330203968] processed a total of 626 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2357.9719066619873, \"sum\": 2357.9719066619873, \"min\": 2357.9719066619873}}, \"EndTime\": 1550455168.100496, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455165.74245}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:28 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=265.468183261 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:28 INFO 140098330203968] #progress_metric: host=algo-1, completed 8 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:28 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:28 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_9092ed10-7ca1-40c4-86fe-09d2a4561af8-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 89.46800231933594, \"sum\": 89.46800231933594, \"min\": 89.46800231933594}}, \"EndTime\": 1550455168.19043, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455168.100582}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:28 INFO 140098330203968] Epoch[8] Batch[0] avg_epoch_loss=5.592095\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:29 INFO 140098330203968] Epoch[8] Batch[5] avg_epoch_loss=5.493723\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:29 INFO 140098330203968] Epoch[8] Batch [5]#011Speed: 303.24 samples/sec#011loss=5.493723\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:30 INFO 140098330203968] Epoch[8] Batch[10] avg_epoch_loss=5.407243\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:30 INFO 140098330203968] Epoch[8] Batch [10]#011Speed: 290.08 samples/sec#011loss=5.303467\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:30 INFO 140098330203968] processed a total of 648 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2639.2910480499268, \"sum\": 2639.2910480499268, \"min\": 2639.2910480499268}}, \"EndTime\": 1550455170.829885, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455168.190525}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:30 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=245.509034321 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:30 INFO 140098330203968] #progress_metric: host=algo-1, completed 9 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:30 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:30 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_b5a20a52-a408-4c23-a76f-10f3cbbcc0a4-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 93.3380126953125, \"sum\": 93.3380126953125, \"min\": 93.3380126953125}}, \"EndTime\": 1550455170.923684, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455170.829967}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:31 INFO 140098330203968] Epoch[9] Batch[0] avg_epoch_loss=5.374328\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:32 INFO 140098330203968] Epoch[9] Batch[5] avg_epoch_loss=5.313929\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:32 INFO 140098330203968] Epoch[9] Batch [5]#011Speed: 296.45 samples/sec#011loss=5.313929\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:33 INFO 140098330203968] Epoch[9] Batch[10] avg_epoch_loss=5.286981\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:33 INFO 140098330203968] Epoch[9] Batch [10]#011Speed: 290.74 samples/sec#011loss=5.254643\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:33 INFO 140098330203968] processed a total of 672 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2641.950845718384, \"sum\": 2641.950845718384, \"min\": 2641.950845718384}}, \"EndTime\": 1550455173.565782, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455170.923765}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:33 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=254.346018845 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:33 INFO 140098330203968] #progress_metric: host=algo-1, completed 10 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:33 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:33 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_4ea13941-4440-4f1d-8a69-329c43d544f1-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 59.4019889831543, \"sum\": 59.4019889831543, \"min\": 59.4019889831543}}, \"EndTime\": 1550455173.625639, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455173.565861}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:34 INFO 140098330203968] Epoch[10] Batch[0] avg_epoch_loss=5.381837\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:35 INFO 140098330203968] Epoch[10] Batch[5] avg_epoch_loss=5.270474\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:35 INFO 140098330203968] Epoch[10] Batch [5]#011Speed: 291.92 samples/sec#011loss=5.270474\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:36 INFO 140098330203968] Epoch[10] Batch[10] avg_epoch_loss=5.285605\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:36 INFO 140098330203968] Epoch[10] Batch [10]#011Speed: 298.66 samples/sec#011loss=5.303763\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:36 INFO 140098330203968] processed a total of 655 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2643.254041671753, \"sum\": 2643.254041671753, \"min\": 2643.254041671753}}, \"EndTime\": 1550455176.269027, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455173.625705}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:36 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=247.783570754 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:36 INFO 140098330203968] #progress_metric: host=algo-1, completed 11 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:36 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:36 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_0597736e-9b7f-49ba-bf7a-df3e0250338f-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 67.96002388000488, \"sum\": 67.96002388000488, \"min\": 67.96002388000488}}, \"EndTime\": 1550455176.337509, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455176.269167}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:36 INFO 140098330203968] Epoch[11] Batch[0] avg_epoch_loss=5.213901\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:37 INFO 140098330203968] Epoch[11] Batch[5] avg_epoch_loss=5.184569\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:37 INFO 140098330203968] Epoch[11] Batch [5]#011Speed: 294.31 samples/sec#011loss=5.184569\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:38 INFO 140098330203968] Epoch[11] Batch[10] avg_epoch_loss=5.144078\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:38 INFO 140098330203968] Epoch[11] Batch [10]#011Speed: 298.89 samples/sec#011loss=5.095489\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:38 INFO 140098330203968] processed a total of 662 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2607.578992843628, \"sum\": 2607.578992843628, \"min\": 2607.578992843628}}, \"EndTime\": 1550455178.945235, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455176.337585}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:38 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=253.864731003 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:38 INFO 140098330203968] #progress_metric: host=algo-1, completed 12 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:38 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:39 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_bfe14223-b9b9-4638-a0ec-deec4a77056a-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.72280693054199, \"sum\": 57.72280693054199, \"min\": 57.72280693054199}}, \"EndTime\": 1550455179.003419, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455178.945299}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:39 INFO 140098330203968] Epoch[12] Batch[0] avg_epoch_loss=5.097429\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:40 INFO 140098330203968] Epoch[12] Batch[5] avg_epoch_loss=5.152807\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:40 INFO 140098330203968] Epoch[12] Batch [5]#011Speed: 302.92 samples/sec#011loss=5.152807\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:41 INFO 140098330203968] processed a total of 624 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2411.543130874634, \"sum\": 2411.543130874634, \"min\": 2411.543130874634}}, \"EndTime\": 1550455181.415108, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455179.003493}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:41 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=258.7416663 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:41 INFO 140098330203968] #progress_metric: host=algo-1, completed 13 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:41 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:41 INFO 140098330203968] Epoch[13] Batch[0] avg_epoch_loss=5.179683\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:42 INFO 140098330203968] Epoch[13] Batch[5] avg_epoch_loss=5.084547\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:42 INFO 140098330203968] Epoch[13] Batch [5]#011Speed: 301.81 samples/sec#011loss=5.084547\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:43 INFO 140098330203968] processed a total of 640 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2414.9019718170166, \"sum\": 2414.9019718170166, \"min\": 2414.9019718170166}}, \"EndTime\": 1550455183.830449, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455181.415194}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:43 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=265.00444347 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:43 INFO 140098330203968] #progress_metric: host=algo-1, completed 14 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:43 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:43 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_ed0d36d2-ccd1-4f4e-8886-381aac3c78b9-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.42502212524414, \"sum\": 57.42502212524414, \"min\": 57.42502212524414}}, \"EndTime\": 1550455183.888409, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455183.830561}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:44 INFO 140098330203968] Epoch[14] Batch[0] avg_epoch_loss=5.029421\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:45 INFO 140098330203968] Epoch[14] Batch[5] avg_epoch_loss=5.062504\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:45 INFO 140098330203968] Epoch[14] Batch [5]#011Speed: 300.45 samples/sec#011loss=5.062504\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:46 INFO 140098330203968] Epoch[14] Batch[10] avg_epoch_loss=5.028987\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:46 INFO 140098330203968] Epoch[14] Batch [10]#011Speed: 301.28 samples/sec#011loss=4.988767\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:46 INFO 140098330203968] processed a total of 665 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2585.568904876709, \"sum\": 2585.568904876709, \"min\": 2585.568904876709}}, \"EndTime\": 1550455186.474127, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455183.888481}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:46 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=257.185421563 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:46 INFO 140098330203968] #progress_metric: host=algo-1, completed 15 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:46 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:46 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_d045d62d-35cd-47d0-b4be-a87546350ad2-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.72805213928223, \"sum\": 57.72805213928223, \"min\": 57.72805213928223}}, \"EndTime\": 1550455186.532383, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455186.474194}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:46 INFO 140098330203968] Epoch[15] Batch[0] avg_epoch_loss=5.048838\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:48 INFO 140098330203968] Epoch[15] Batch[5] avg_epoch_loss=5.050207\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:48 INFO 140098330203968] Epoch[15] Batch [5]#011Speed: 303.34 samples/sec#011loss=5.050207\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:48 INFO 140098330203968] processed a total of 632 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2344.5889949798584, \"sum\": 2344.5889949798584, \"min\": 2344.5889949798584}}, \"EndTime\": 1550455188.877095, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455186.532454}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:48 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=269.545135792 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:48 INFO 140098330203968] #progress_metric: host=algo-1, completed 16 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:48 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:49 INFO 140098330203968] Epoch[16] Batch[0] avg_epoch_loss=5.025700\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:50 INFO 140098330203968] Epoch[16] Batch[5] avg_epoch_loss=4.982955\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:50 INFO 140098330203968] Epoch[16] Batch [5]#011Speed: 294.18 samples/sec#011loss=4.982955\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:51 INFO 140098330203968] processed a total of 600 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2366.234064102173, \"sum\": 2366.234064102173, \"min\": 2366.234064102173}}, \"EndTime\": 1550455191.243768, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455188.877162}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:51 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=253.553987095 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:51 INFO 140098330203968] #progress_metric: host=algo-1, completed 17 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:51 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:51 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_443c3eb2-5427-487f-a28e-bdfa1f891e81-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.55496025085449, \"sum\": 57.55496025085449, \"min\": 57.55496025085449}}, \"EndTime\": 1550455191.30182, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455191.243852}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:51 INFO 140098330203968] Epoch[17] Batch[0] avg_epoch_loss=5.024175\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:52 INFO 140098330203968] Epoch[17] Batch[5] avg_epoch_loss=4.996346\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:52 INFO 140098330203968] Epoch[17] Batch [5]#011Speed: 305.82 samples/sec#011loss=4.996346\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:53 INFO 140098330203968] Epoch[17] Batch[10] avg_epoch_loss=4.988373\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:53 INFO 140098330203968] Epoch[17] Batch [10]#011Speed: 290.09 samples/sec#011loss=4.978806\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:53 INFO 140098330203968] processed a total of 659 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2617.0029640197754, \"sum\": 2617.0029640197754, \"min\": 2617.0029640197754}}, \"EndTime\": 1550455193.918959, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455191.30189}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:53 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=251.802657804 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:53 INFO 140098330203968] #progress_metric: host=algo-1, completed 18 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:53 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:54 INFO 140098330203968] Epoch[18] Batch[0] avg_epoch_loss=5.109987\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:55 INFO 140098330203968] Epoch[18] Batch[5] avg_epoch_loss=5.029211\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:55 INFO 140098330203968] Epoch[18] Batch [5]#011Speed: 300.33 samples/sec#011loss=5.029211\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:56 INFO 140098330203968] processed a total of 629 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2359.844923019409, \"sum\": 2359.844923019409, \"min\": 2359.844923019409}}, \"EndTime\": 1550455196.279232, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455193.91904}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:56 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=266.530282288 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:56 INFO 140098330203968] #progress_metric: host=algo-1, completed 19 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:56 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:56 INFO 140098330203968] Epoch[19] Batch[0] avg_epoch_loss=4.977211\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:57 INFO 140098330203968] Epoch[19] Batch[5] avg_epoch_loss=4.962649\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:57 INFO 140098330203968] Epoch[19] Batch [5]#011Speed: 309.12 samples/sec#011loss=4.962649\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:58 INFO 140098330203968] Epoch[19] Batch[10] avg_epoch_loss=4.910905\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:58 INFO 140098330203968] Epoch[19] Batch [10]#011Speed: 290.70 samples/sec#011loss=4.848812\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:58 INFO 140098330203968] processed a total of 646 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2583.8451385498047, \"sum\": 2583.8451385498047, \"min\": 2583.8451385498047}}, \"EndTime\": 1550455198.863466, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455196.27931}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:58 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=250.004072189 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:58 INFO 140098330203968] #progress_metric: host=algo-1, completed 20 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:58 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:58 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_d020ae29-e55e-47c5-bf2f-e4bc6ccc6fb7-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 58.90083312988281, \"sum\": 58.90083312988281, \"min\": 58.90083312988281}}, \"EndTime\": 1550455198.922849, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455198.863535}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 01:59:59 INFO 140098330203968] Epoch[20] Batch[0] avg_epoch_loss=4.889206\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:00 INFO 140098330203968] Epoch[20] Batch[5] avg_epoch_loss=4.874217\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:00 INFO 140098330203968] Epoch[20] Batch [5]#011Speed: 295.82 samples/sec#011loss=4.874217\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:01 INFO 140098330203968] Epoch[20] Batch[10] avg_epoch_loss=4.883365\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:01 INFO 140098330203968] Epoch[20] Batch [10]#011Speed: 295.15 samples/sec#011loss=4.894342\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:01 INFO 140098330203968] processed a total of 663 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2634.6278190612793, \"sum\": 2634.6278190612793, \"min\": 2634.6278190612793}}, \"EndTime\": 1550455201.557629, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455198.922933}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:01 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=251.636970414 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:01 INFO 140098330203968] #progress_metric: host=algo-1, completed 21 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:01 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:01 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_9f28304c-c910-4293-b099-d7ca9f6bb9ca-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 94.33102607727051, \"sum\": 94.33102607727051, \"min\": 94.33102607727051}}, \"EndTime\": 1550455201.652418, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455201.55771}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:02 INFO 140098330203968] Epoch[21] Batch[0] avg_epoch_loss=4.910189\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:03 INFO 140098330203968] Epoch[21] Batch[5] avg_epoch_loss=4.878717\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:03 INFO 140098330203968] Epoch[21] Batch [5]#011Speed: 300.75 samples/sec#011loss=4.878717\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:04 INFO 140098330203968] processed a total of 640 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2372.6699352264404, \"sum\": 2372.6699352264404, \"min\": 2372.6699352264404}}, \"EndTime\": 1550455204.025235, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455201.652499}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:04 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=269.725011279 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:04 INFO 140098330203968] #progress_metric: host=algo-1, completed 22 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:04 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:04 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_f8530b5e-682a-467a-9c35-b0f1050b5471-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.717084884643555, \"sum\": 57.717084884643555, \"min\": 57.717084884643555}}, \"EndTime\": 1550455204.083439, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455204.02531}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:04 INFO 140098330203968] Epoch[22] Batch[0] avg_epoch_loss=4.859643\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:05 INFO 140098330203968] Epoch[22] Batch[5] avg_epoch_loss=4.836203\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:05 INFO 140098330203968] Epoch[22] Batch [5]#011Speed: 305.20 samples/sec#011loss=4.836203\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:06 INFO 140098330203968] Epoch[22] Batch[10] avg_epoch_loss=4.815714\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:06 INFO 140098330203968] Epoch[22] Batch [10]#011Speed: 297.47 samples/sec#011loss=4.791127\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:06 INFO 140098330203968] processed a total of 666 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2604.811906814575, \"sum\": 2604.811906814575, \"min\": 2604.811906814575}}, \"EndTime\": 1550455206.688384, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455204.083511}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:06 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=255.66876154 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:06 INFO 140098330203968] #progress_metric: host=algo-1, completed 23 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:06 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:06 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_8cfe3cac-a7b6-45b3-9381-47972d480a36-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 67.6119327545166, \"sum\": 67.6119327545166, \"min\": 67.6119327545166}}, \"EndTime\": 1550455206.75646, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455206.688464}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:07 INFO 140098330203968] Epoch[23] Batch[0] avg_epoch_loss=4.814577\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:08 INFO 140098330203968] Epoch[23] Batch[5] avg_epoch_loss=4.810688\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:08 INFO 140098330203968] Epoch[23] Batch [5]#011Speed: 303.55 samples/sec#011loss=4.810688\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:09 INFO 140098330203968] processed a total of 628 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2376.7359256744385, \"sum\": 2376.7359256744385, \"min\": 2376.7359256744385}}, \"EndTime\": 1550455209.133339, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455206.756532}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:09 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=264.2132409 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:09 INFO 140098330203968] #progress_metric: host=algo-1, completed 24 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:09 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:09 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_07dc51d6-43b5-4cd4-84c4-60980a9d6a37-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.97004699707031, \"sum\": 57.97004699707031, \"min\": 57.97004699707031}}, \"EndTime\": 1550455209.191825, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455209.133427}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:09 INFO 140098330203968] Epoch[24] Batch[0] avg_epoch_loss=4.786370\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:10 INFO 140098330203968] Epoch[24] Batch[5] avg_epoch_loss=4.788903\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:10 INFO 140098330203968] Epoch[24] Batch [5]#011Speed: 298.04 samples/sec#011loss=4.788903\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:11 INFO 140098330203968] processed a total of 637 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2363.145112991333, \"sum\": 2363.145112991333, \"min\": 2363.145112991333}}, \"EndTime\": 1550455211.555103, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455209.191885}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:11 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=269.541338837 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:11 INFO 140098330203968] #progress_metric: host=algo-1, completed 25 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:11 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:11 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_2f14ccb6-ebe6-462b-9901-a71f6a9c1c2c-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 56.93483352661133, \"sum\": 56.93483352661133, \"min\": 56.93483352661133}}, \"EndTime\": 1550455211.612629, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455211.555187}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:12 INFO 140098330203968] Epoch[25] Batch[0] avg_epoch_loss=4.683328\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:13 INFO 140098330203968] Epoch[25] Batch[5] avg_epoch_loss=4.742460\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:13 INFO 140098330203968] Epoch[25] Batch [5]#011Speed: 301.23 samples/sec#011loss=4.742460\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:13 INFO 140098330203968] processed a total of 623 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2373.0311393737793, \"sum\": 2373.0311393737793, \"min\": 2373.0311393737793}}, \"EndTime\": 1550455213.985807, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455211.612704}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:13 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=262.519185425 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:13 INFO 140098330203968] #progress_metric: host=algo-1, completed 26 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:13 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:14 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_f172d4ed-0d82-415f-a913-11f5868c8ed0-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.48796463012695, \"sum\": 57.48796463012695, \"min\": 57.48796463012695}}, \"EndTime\": 1550455214.043783, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455213.985893}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:14 INFO 140098330203968] Epoch[26] Batch[0] avg_epoch_loss=4.745231\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:15 INFO 140098330203968] Epoch[26] Batch[5] avg_epoch_loss=4.728993\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:15 INFO 140098330203968] Epoch[26] Batch [5]#011Speed: 298.90 samples/sec#011loss=4.728993\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:16 INFO 140098330203968] processed a total of 622 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2352.102041244507, \"sum\": 2352.102041244507, \"min\": 2352.102041244507}}, \"EndTime\": 1550455216.396026, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455214.043853}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:16 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=264.428227319 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:16 INFO 140098330203968] #progress_metric: host=algo-1, completed 27 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:16 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:16 INFO 140098330203968] Epoch[27] Batch[0] avg_epoch_loss=4.757177\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:17 INFO 140098330203968] Epoch[27] Batch[5] avg_epoch_loss=4.721903\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:17 INFO 140098330203968] Epoch[27] Batch [5]#011Speed: 306.14 samples/sec#011loss=4.721903\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:18 INFO 140098330203968] Epoch[27] Batch[10] avg_epoch_loss=4.699296\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:18 INFO 140098330203968] Epoch[27] Batch [10]#011Speed: 296.32 samples/sec#011loss=4.672166\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:18 INFO 140098330203968] processed a total of 669 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2590.1288986206055, \"sum\": 2590.1288986206055, \"min\": 2590.1288986206055}}, \"EndTime\": 1550455218.986618, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455216.396125}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:18 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=258.276440598 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:18 INFO 140098330203968] #progress_metric: host=algo-1, completed 28 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:18 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:19 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_58b861d0-85b7-4aab-aba5-5164af527419-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 64.64815139770508, \"sum\": 64.64815139770508, \"min\": 64.64815139770508}}, \"EndTime\": 1550455219.051722, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455218.986696}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:19 INFO 140098330203968] Epoch[28] Batch[0] avg_epoch_loss=4.686228\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:20 INFO 140098330203968] Epoch[28] Batch[5] avg_epoch_loss=4.675995\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:20 INFO 140098330203968] Epoch[28] Batch [5]#011Speed: 303.92 samples/sec#011loss=4.675995\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:21 INFO 140098330203968] processed a total of 622 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2367.2101497650146, \"sum\": 2367.2101497650146, \"min\": 2367.2101497650146}}, \"EndTime\": 1550455221.419065, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455219.051786}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:21 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=262.744918115 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:21 INFO 140098330203968] #progress_metric: host=algo-1, completed 29 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:21 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:21 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_2c3d119d-a333-4186-b450-31f3e11247cd-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.153940200805664, \"sum\": 57.153940200805664, \"min\": 57.153940200805664}}, \"EndTime\": 1550455221.476682, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455221.41913}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:21 INFO 140098330203968] Epoch[29] Batch[0] avg_epoch_loss=4.625590\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:22 INFO 140098330203968] Epoch[29] Batch[5] avg_epoch_loss=4.670987\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:22 INFO 140098330203968] Epoch[29] Batch [5]#011Speed: 306.47 samples/sec#011loss=4.670987\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:23 INFO 140098330203968] processed a total of 635 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2372.576951980591, \"sum\": 2372.576951980591, \"min\": 2372.576951980591}}, \"EndTime\": 1550455223.849398, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455221.476758}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:23 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=267.629724472 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:23 INFO 140098330203968] #progress_metric: host=algo-1, completed 30 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:23 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:24 INFO 140098330203968] Epoch[30] Batch[0] avg_epoch_loss=4.669417\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:25 INFO 140098330203968] Epoch[30] Batch[5] avg_epoch_loss=4.637085\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:25 INFO 140098330203968] Epoch[30] Batch [5]#011Speed: 299.24 samples/sec#011loss=4.637085\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:26 INFO 140098330203968] Epoch[30] Batch[10] avg_epoch_loss=4.656720\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:26 INFO 140098330203968] Epoch[30] Batch [10]#011Speed: 299.86 samples/sec#011loss=4.680282\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:26 INFO 140098330203968] processed a total of 677 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2594.104051589966, \"sum\": 2594.104051589966, \"min\": 2594.104051589966}}, \"EndTime\": 1550455226.443931, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455223.849465}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:26 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=260.964647946 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:26 INFO 140098330203968] #progress_metric: host=algo-1, completed 31 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:26 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:26 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_bc95e070-3a9e-4416-8b59-53eee8ce12db-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 69.45204734802246, \"sum\": 69.45204734802246, \"min\": 69.45204734802246}}, \"EndTime\": 1550455226.513837, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455226.444009}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:26 INFO 140098330203968] Epoch[31] Batch[0] avg_epoch_loss=4.633619\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:28 INFO 140098330203968] Epoch[31] Batch[5] avg_epoch_loss=4.631858\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:28 INFO 140098330203968] Epoch[31] Batch [5]#011Speed: 298.95 samples/sec#011loss=4.631858\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:28 INFO 140098330203968] processed a total of 633 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2390.904188156128, \"sum\": 2390.904188156128, \"min\": 2390.904188156128}}, \"EndTime\": 1550455228.904902, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455226.513933}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:28 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=264.742359475 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:28 INFO 140098330203968] #progress_metric: host=algo-1, completed 32 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:28 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:28 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_556ab061-2280-406a-afc9-2690a4386c42-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 58.18891525268555, \"sum\": 58.18891525268555, \"min\": 58.18891525268555}}, \"EndTime\": 1550455228.963547, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455228.904966}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:29 INFO 140098330203968] Epoch[32] Batch[0] avg_epoch_loss=4.589612\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:30 INFO 140098330203968] Epoch[32] Batch[5] avg_epoch_loss=4.664897\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:30 INFO 140098330203968] Epoch[32] Batch [5]#011Speed: 296.92 samples/sec#011loss=4.664897\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:31 INFO 140098330203968] Epoch[32] Batch[10] avg_epoch_loss=4.640773\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:31 INFO 140098330203968] Epoch[32] Batch [10]#011Speed: 291.81 samples/sec#011loss=4.611825\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:31 INFO 140098330203968] processed a total of 646 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2648.0910778045654, \"sum\": 2648.0910778045654, \"min\": 2648.0910778045654}}, \"EndTime\": 1550455231.611774, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455228.963617}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:31 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=243.938463688 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:31 INFO 140098330203968] #progress_metric: host=algo-1, completed 33 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:31 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:32 INFO 140098330203968] Epoch[33] Batch[0] avg_epoch_loss=4.709333\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:33 INFO 140098330203968] Epoch[33] Batch[5] avg_epoch_loss=4.613711\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:33 INFO 140098330203968] Epoch[33] Batch [5]#011Speed: 301.14 samples/sec#011loss=4.613711\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:33 INFO 140098330203968] processed a total of 640 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2366.572856903076, \"sum\": 2366.572856903076, \"min\": 2366.572856903076}}, \"EndTime\": 1550455233.978773, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455231.611852}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:33 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=270.420232337 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:33 INFO 140098330203968] #progress_metric: host=algo-1, completed 34 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:33 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:34 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_2f09491e-142d-43a7-8852-56263011b2cf-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.03997611999512, \"sum\": 57.03997611999512, \"min\": 57.03997611999512}}, \"EndTime\": 1550455234.036318, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455233.978848}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:34 INFO 140098330203968] Epoch[34] Batch[0] avg_epoch_loss=4.586714\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:35 INFO 140098330203968] Epoch[34] Batch[5] avg_epoch_loss=4.572983\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:35 INFO 140098330203968] Epoch[34] Batch [5]#011Speed: 303.63 samples/sec#011loss=4.572983\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:36 INFO 140098330203968] processed a total of 617 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2374.2527961730957, \"sum\": 2374.2527961730957, \"min\": 2374.2527961730957}}, \"EndTime\": 1550455236.410701, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455234.036388}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:36 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=259.857402267 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:36 INFO 140098330203968] #progress_metric: host=algo-1, completed 35 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:36 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:36 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_de765539-1165-4ed5-b8d7-b03b6bbed3b7-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 71.04992866516113, \"sum\": 71.04992866516113, \"min\": 71.04992866516113}}, \"EndTime\": 1550455236.482214, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455236.410787}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:36 INFO 140098330203968] Epoch[35] Batch[0] avg_epoch_loss=4.539979\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:38 INFO 140098330203968] Epoch[35] Batch[5] avg_epoch_loss=4.557394\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:38 INFO 140098330203968] Epoch[35] Batch [5]#011Speed: 290.14 samples/sec#011loss=4.557394\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:38 INFO 140098330203968] processed a total of 633 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2385.3769302368164, \"sum\": 2385.3769302368164, \"min\": 2385.3769302368164}}, \"EndTime\": 1550455238.867736, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455236.482294}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:38 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=265.355726539 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:38 INFO 140098330203968] #progress_metric: host=algo-1, completed 36 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:38 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:38 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_57c662d8-9f25-4c7e-9143-8b2e1b7091b1-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 71.43092155456543, \"sum\": 71.43092155456543, \"min\": 71.43092155456543}}, \"EndTime\": 1550455238.939619, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455238.8678}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:39 INFO 140098330203968] Epoch[36] Batch[0] avg_epoch_loss=4.559431\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:40 INFO 140098330203968] Epoch[36] Batch[5] avg_epoch_loss=4.547896\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:40 INFO 140098330203968] Epoch[36] Batch [5]#011Speed: 301.02 samples/sec#011loss=4.547896\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:41 INFO 140098330203968] Epoch[36] Batch[10] avg_epoch_loss=4.545612\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:41 INFO 140098330203968] Epoch[36] Batch [10]#011Speed: 301.97 samples/sec#011loss=4.542870\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:41 INFO 140098330203968] processed a total of 683 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2582.1361541748047, \"sum\": 2582.1361541748047, \"min\": 2582.1361541748047}}, \"EndTime\": 1550455241.521903, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455238.939698}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:41 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=264.497294976 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:41 INFO 140098330203968] #progress_metric: host=algo-1, completed 37 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:41 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:41 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_0ae64d9d-592b-4cba-af1a-acd1667a033e-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 90.69108963012695, \"sum\": 90.69108963012695, \"min\": 90.69108963012695}}, \"EndTime\": 1550455241.613058, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455241.521984}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:42 INFO 140098330203968] Epoch[37] Batch[0] avg_epoch_loss=4.626551\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:43 INFO 140098330203968] Epoch[37] Batch[5] avg_epoch_loss=4.511424\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:43 INFO 140098330203968] Epoch[37] Batch [5]#011Speed: 304.71 samples/sec#011loss=4.511424\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:44 INFO 140098330203968] Epoch[37] Batch[10] avg_epoch_loss=4.510255\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:44 INFO 140098330203968] Epoch[37] Batch [10]#011Speed: 293.04 samples/sec#011loss=4.508852\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:44 INFO 140098330203968] processed a total of 649 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2634.4358921051025, \"sum\": 2634.4358921051025, \"min\": 2634.4358921051025}}, \"EndTime\": 1550455244.247632, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455241.613135}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:44 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=246.342539969 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:44 INFO 140098330203968] #progress_metric: host=algo-1, completed 38 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:44 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:44 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_562c74a2-213a-42d6-bf58-d3d2088e58b5-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.945966720581055, \"sum\": 57.945966720581055, \"min\": 57.945966720581055}}, \"EndTime\": 1550455244.306048, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455244.247704}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:44 INFO 140098330203968] Epoch[38] Batch[0] avg_epoch_loss=4.589242\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:45 INFO 140098330203968] Epoch[38] Batch[5] avg_epoch_loss=4.513799\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:45 INFO 140098330203968] Epoch[38] Batch [5]#011Speed: 301.54 samples/sec#011loss=4.513799\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:46 INFO 140098330203968] processed a total of 633 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2382.314920425415, \"sum\": 2382.314920425415, \"min\": 2382.314920425415}}, \"EndTime\": 1550455246.688492, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455244.306123}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:46 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=265.697121859 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:46 INFO 140098330203968] #progress_metric: host=algo-1, completed 39 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:46 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:47 INFO 140098330203968] Epoch[39] Batch[0] avg_epoch_loss=4.508337\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:48 INFO 140098330203968] Epoch[39] Batch[5] avg_epoch_loss=4.520968\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:48 INFO 140098330203968] Epoch[39] Batch [5]#011Speed: 306.94 samples/sec#011loss=4.520968\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:49 INFO 140098330203968] Epoch[39] Batch[10] avg_epoch_loss=4.517497\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:49 INFO 140098330203968] Epoch[39] Batch [10]#011Speed: 293.49 samples/sec#011loss=4.513331\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:49 INFO 140098330203968] processed a total of 667 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2586.5659713745117, \"sum\": 2586.5659713745117, \"min\": 2586.5659713745117}}, \"EndTime\": 1550455249.275432, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455246.688556}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:49 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=257.85909796 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:49 INFO 140098330203968] #progress_metric: host=algo-1, completed 40 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:49 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:49 INFO 140098330203968] Epoch[40] Batch[0] avg_epoch_loss=4.753620\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:50 INFO 140098330203968] Epoch[40] Batch[5] avg_epoch_loss=4.576472\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:50 INFO 140098330203968] Epoch[40] Batch [5]#011Speed: 294.43 samples/sec#011loss=4.576472\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:51 INFO 140098330203968] Epoch[40] Batch[10] avg_epoch_loss=4.543101\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:51 INFO 140098330203968] Epoch[40] Batch [10]#011Speed: 301.24 samples/sec#011loss=4.503055\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:51 INFO 140098330203968] processed a total of 668 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2615.2260303497314, \"sum\": 2615.2260303497314, \"min\": 2615.2260303497314}}, \"EndTime\": 1550455251.89106, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455249.275511}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:51 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=255.415144418 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:51 INFO 140098330203968] #progress_metric: host=algo-1, completed 41 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:51 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:52 INFO 140098330203968] Epoch[41] Batch[0] avg_epoch_loss=4.467214\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:53 INFO 140098330203968] Epoch[41] Batch[5] avg_epoch_loss=4.445528\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:53 INFO 140098330203968] Epoch[41] Batch [5]#011Speed: 301.84 samples/sec#011loss=4.445528\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:54 INFO 140098330203968] Epoch[41] Batch[10] avg_epoch_loss=4.450010\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:54 INFO 140098330203968] Epoch[41] Batch [10]#011Speed: 297.36 samples/sec#011loss=4.455390\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:54 INFO 140098330203968] processed a total of 654 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2581.7971229553223, \"sum\": 2581.7971229553223, \"min\": 2581.7971229553223}}, \"EndTime\": 1550455254.473289, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455251.891142}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:54 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=253.300276471 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:54 INFO 140098330203968] #progress_metric: host=algo-1, completed 42 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:54 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:54 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_3ed04318-28e6-417a-80a9-af2e824dc769-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 94.06280517578125, \"sum\": 94.06280517578125, \"min\": 94.06280517578125}}, \"EndTime\": 1550455254.56782, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455254.473367}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:55 INFO 140098330203968] Epoch[42] Batch[0] avg_epoch_loss=4.382257\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:56 INFO 140098330203968] Epoch[42] Batch[5] avg_epoch_loss=4.463115\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:56 INFO 140098330203968] Epoch[42] Batch [5]#011Speed: 303.75 samples/sec#011loss=4.463115\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:56 INFO 140098330203968] processed a total of 636 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2410.861015319824, \"sum\": 2410.861015319824, \"min\": 2410.861015319824}}, \"EndTime\": 1550455256.978824, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455254.567897}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:56 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=263.792469628 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:56 INFO 140098330203968] #progress_metric: host=algo-1, completed 43 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:56 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:57 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_43af17a6-9888-4203-ab61-9824a0118336-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 62.330007553100586, \"sum\": 62.330007553100586, \"min\": 62.330007553100586}}, \"EndTime\": 1550455257.041612, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455256.978907}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:57 INFO 140098330203968] Epoch[43] Batch[0] avg_epoch_loss=4.456538\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:58 INFO 140098330203968] Epoch[43] Batch[5] avg_epoch_loss=4.429974\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:58 INFO 140098330203968] Epoch[43] Batch [5]#011Speed: 302.00 samples/sec#011loss=4.429974\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:59 INFO 140098330203968] processed a total of 624 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2407.4580669403076, \"sum\": 2407.4580669403076, \"min\": 2407.4580669403076}}, \"EndTime\": 1550455259.449217, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455257.041689}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:59 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=259.180454261 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:59 INFO 140098330203968] #progress_metric: host=algo-1, completed 44 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:59 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:59 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_66fd77e5-9b87-40f5-8363-129f48b19a89-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.88111686706543, \"sum\": 57.88111686706543, \"min\": 57.88111686706543}}, \"EndTime\": 1550455259.507571, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455259.449305}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:00:59 INFO 140098330203968] Epoch[44] Batch[0] avg_epoch_loss=4.387825\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:01 INFO 140098330203968] Epoch[44] Batch[5] avg_epoch_loss=4.400286\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:01 INFO 140098330203968] Epoch[44] Batch [5]#011Speed: 299.07 samples/sec#011loss=4.400286\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:01 INFO 140098330203968] processed a total of 606 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2407.323122024536, \"sum\": 2407.323122024536, \"min\": 2407.323122024536}}, \"EndTime\": 1550455261.915042, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455259.507649}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:01 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=251.718826977 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:01 INFO 140098330203968] #progress_metric: host=algo-1, completed 45 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:01 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:01 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_203705ee-9149-441c-bfba-2770f2739f95-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 65.94014167785645, \"sum\": 65.94014167785645, \"min\": 65.94014167785645}}, \"EndTime\": 1550455261.981451, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455261.915126}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:02 INFO 140098330203968] Epoch[45] Batch[0] avg_epoch_loss=4.484845\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:03 INFO 140098330203968] Epoch[45] Batch[5] avg_epoch_loss=4.452065\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:03 INFO 140098330203968] Epoch[45] Batch [5]#011Speed: 297.09 samples/sec#011loss=4.452065\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:04 INFO 140098330203968] processed a total of 635 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2396.372079849243, \"sum\": 2396.372079849243, \"min\": 2396.372079849243}}, \"EndTime\": 1550455264.377976, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455261.981533}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:04 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=264.970078008 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:04 INFO 140098330203968] #progress_metric: host=algo-1, completed 46 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:04 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:04 INFO 140098330203968] Epoch[46] Batch[0] avg_epoch_loss=4.447557\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:05 INFO 140098330203968] Epoch[46] Batch[5] avg_epoch_loss=4.401564\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:05 INFO 140098330203968] Epoch[46] Batch [5]#011Speed: 301.85 samples/sec#011loss=4.401564\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:07 INFO 140098330203968] Epoch[46] Batch[10] avg_epoch_loss=4.415216\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:07 INFO 140098330203968] Epoch[46] Batch [10]#011Speed: 294.79 samples/sec#011loss=4.431600\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:07 INFO 140098330203968] processed a total of 674 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2644.2480087280273, \"sum\": 2644.2480087280273, \"min\": 2644.2480087280273}}, \"EndTime\": 1550455267.022632, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455264.378059}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:07 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=254.881324009 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:07 INFO 140098330203968] #progress_metric: host=algo-1, completed 47 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:07 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:07 INFO 140098330203968] Epoch[47] Batch[0] avg_epoch_loss=4.381103\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:08 INFO 140098330203968] Epoch[47] Batch[5] avg_epoch_loss=4.403470\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:08 INFO 140098330203968] Epoch[47] Batch [5]#011Speed: 303.35 samples/sec#011loss=4.403470\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:09 INFO 140098330203968] Epoch[47] Batch[10] avg_epoch_loss=4.388720\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:09 INFO 140098330203968] Epoch[47] Batch [10]#011Speed: 293.83 samples/sec#011loss=4.371019\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:09 INFO 140098330203968] processed a total of 684 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2651.2300968170166, \"sum\": 2651.2300968170166, \"min\": 2651.2300968170166}}, \"EndTime\": 1550455269.674274, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455267.022713}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:09 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=257.982082369 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:09 INFO 140098330203968] #progress_metric: host=algo-1, completed 48 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:09 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:09 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_005344b8-a2d8-490b-b88b-e1cee3aea4b9-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 57.588815689086914, \"sum\": 57.588815689086914, \"min\": 57.588815689086914}}, \"EndTime\": 1550455269.732368, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455269.674346}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:10 INFO 140098330203968] Epoch[48] Batch[0] avg_epoch_loss=4.360822\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:11 INFO 140098330203968] Epoch[48] Batch[5] avg_epoch_loss=4.353790\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:11 INFO 140098330203968] Epoch[48] Batch [5]#011Speed: 307.21 samples/sec#011loss=4.353790\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:12 INFO 140098330203968] processed a total of 619 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2377.3319721221924, \"sum\": 2377.3319721221924, \"min\": 2377.3319721221924}}, \"EndTime\": 1550455272.109845, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455269.732444}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:12 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=260.362211834 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:12 INFO 140098330203968] #progress_metric: host=algo-1, completed 49 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:12 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:12 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_6cf8133c-8303-4ac5-8b67-bb16f46cb246-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 60.2869987487793, \"sum\": 60.2869987487793, \"min\": 60.2869987487793}}, \"EndTime\": 1550455272.170639, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455272.109929}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:12 INFO 140098330203968] Epoch[49] Batch[0] avg_epoch_loss=4.448277\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:13 INFO 140098330203968] Epoch[49] Batch[5] avg_epoch_loss=4.439774\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:13 INFO 140098330203968] Epoch[49] Batch [5]#011Speed: 299.19 samples/sec#011loss=4.439774\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:14 INFO 140098330203968] processed a total of 579 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2381.0529708862305, \"sum\": 2381.0529708862305, \"min\": 2381.0529708862305}}, \"EndTime\": 1550455274.551836, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455272.170711}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:14 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=243.15677366 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:14 INFO 140098330203968] #progress_metric: host=algo-1, completed 50 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:14 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:15 INFO 140098330203968] Epoch[50] Batch[0] avg_epoch_loss=4.535917\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:16 INFO 140098330203968] Epoch[50] Batch[5] avg_epoch_loss=4.413770\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:16 INFO 140098330203968] Epoch[50] Batch [5]#011Speed: 299.82 samples/sec#011loss=4.413770\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:17 INFO 140098330203968] Epoch[50] Batch[10] avg_epoch_loss=4.373587\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:17 INFO 140098330203968] Epoch[50] Batch [10]#011Speed: 297.72 samples/sec#011loss=4.325368\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:17 INFO 140098330203968] processed a total of 654 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2622.375965118408, \"sum\": 2622.375965118408, \"min\": 2622.375965118408}}, \"EndTime\": 1550455277.174627, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455274.55192}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:17 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=249.380934488 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:17 INFO 140098330203968] #progress_metric: host=algo-1, completed 51 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:17 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:17 INFO 140098330203968] Epoch[51] Batch[0] avg_epoch_loss=4.344268\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:18 INFO 140098330203968] Epoch[51] Batch[5] avg_epoch_loss=4.354896\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:18 INFO 140098330203968] Epoch[51] Batch [5]#011Speed: 303.39 samples/sec#011loss=4.354896\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:19 INFO 140098330203968] Epoch[51] Batch[10] avg_epoch_loss=4.336876\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:19 INFO 140098330203968] Epoch[51] Batch [10]#011Speed: 288.72 samples/sec#011loss=4.315251\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:19 INFO 140098330203968] processed a total of 644 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2672.7049350738525, \"sum\": 2672.7049350738525, \"min\": 2672.7049350738525}}, \"EndTime\": 1550455279.847728, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455277.174705}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:19 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=240.943562103 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:19 INFO 140098330203968] #progress_metric: host=algo-1, completed 52 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:19 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:19 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_0a8606e3-e9eb-4924-9707-36d466460108-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 68.76301765441895, \"sum\": 68.76301765441895, \"min\": 68.76301765441895}}, \"EndTime\": 1550455279.916988, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455279.84781}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:20 INFO 140098330203968] Epoch[52] Batch[0] avg_epoch_loss=4.610729\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:21 INFO 140098330203968] Epoch[52] Batch[5] avg_epoch_loss=4.489800\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:21 INFO 140098330203968] Epoch[52] Batch [5]#011Speed: 286.52 samples/sec#011loss=4.489800\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:22 INFO 140098330203968] Epoch[52] Batch[10] avg_epoch_loss=4.470037\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:22 INFO 140098330203968] Epoch[52] Batch [10]#011Speed: 294.88 samples/sec#011loss=4.446321\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:22 INFO 140098330203968] processed a total of 649 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2651.655912399292, \"sum\": 2651.655912399292, \"min\": 2651.655912399292}}, \"EndTime\": 1550455282.568781, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455279.917058}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:22 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=244.742092442 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:22 INFO 140098330203968] #progress_metric: host=algo-1, completed 53 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:22 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:22 INFO 140098330203968] Epoch[53] Batch[0] avg_epoch_loss=4.445441\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:24 INFO 140098330203968] Epoch[53] Batch[5] avg_epoch_loss=4.395133\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:24 INFO 140098330203968] Epoch[53] Batch [5]#011Speed: 302.46 samples/sec#011loss=4.395133\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:24 INFO 140098330203968] processed a total of 606 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2355.5970191955566, \"sum\": 2355.5970191955566, \"min\": 2355.5970191955566}}, \"EndTime\": 1550455284.924804, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455282.568851}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:24 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=257.248270386 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:24 INFO 140098330203968] #progress_metric: host=algo-1, completed 54 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:24 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:25 INFO 140098330203968] Epoch[54] Batch[0] avg_epoch_loss=4.377720\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:26 INFO 140098330203968] Epoch[54] Batch[5] avg_epoch_loss=4.337751\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:26 INFO 140098330203968] Epoch[54] Batch [5]#011Speed: 301.80 samples/sec#011loss=4.337751\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:27 INFO 140098330203968] Epoch[54] Batch[10] avg_epoch_loss=4.343241\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:27 INFO 140098330203968] Epoch[54] Batch [10]#011Speed: 302.46 samples/sec#011loss=4.349830\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:27 INFO 140098330203968] processed a total of 697 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2580.3802013397217, \"sum\": 2580.3802013397217, \"min\": 2580.3802013397217}}, \"EndTime\": 1550455287.505641, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455284.924871}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:27 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=270.102580278 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:27 INFO 140098330203968] #progress_metric: host=algo-1, completed 55 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:27 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:27 INFO 140098330203968] Epoch[55] Batch[0] avg_epoch_loss=4.297465\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:29 INFO 140098330203968] Epoch[55] Batch[5] avg_epoch_loss=4.311660\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:29 INFO 140098330203968] Epoch[55] Batch [5]#011Speed: 293.40 samples/sec#011loss=4.311660\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:30 INFO 140098330203968] Epoch[55] Batch[10] avg_epoch_loss=4.303508\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:30 INFO 140098330203968] Epoch[55] Batch [10]#011Speed: 290.09 samples/sec#011loss=4.293725\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:30 INFO 140098330203968] processed a total of 675 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2661.2539291381836, \"sum\": 2661.2539291381836, \"min\": 2661.2539291381836}}, \"EndTime\": 1550455290.167292, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455287.505721}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:30 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=253.628584659 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:30 INFO 140098330203968] #progress_metric: host=algo-1, completed 56 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:30 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:30 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_8f13694d-61c4-45ba-8bd8-57cf568b5bcf-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 78.77993583679199, \"sum\": 78.77993583679199, \"min\": 78.77993583679199}}, \"EndTime\": 1550455290.24652, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455290.167371}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:30 INFO 140098330203968] Epoch[56] Batch[0] avg_epoch_loss=4.228218\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:31 INFO 140098330203968] Epoch[56] Batch[5] avg_epoch_loss=4.292144\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:31 INFO 140098330203968] Epoch[56] Batch [5]#011Speed: 305.42 samples/sec#011loss=4.292144\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:32 INFO 140098330203968] processed a total of 638 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2381.150007247925, \"sum\": 2381.150007247925, \"min\": 2381.150007247925}}, \"EndTime\": 1550455292.62782, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455290.246599}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:32 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=267.923354523 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:32 INFO 140098330203968] #progress_metric: host=algo-1, completed 57 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:32 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:33 INFO 140098330203968] Epoch[57] Batch[0] avg_epoch_loss=4.261302\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:34 INFO 140098330203968] Epoch[57] Batch[5] avg_epoch_loss=4.311742\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:34 INFO 140098330203968] Epoch[57] Batch [5]#011Speed: 302.37 samples/sec#011loss=4.311742\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:35 INFO 140098330203968] Epoch[57] Batch[10] avg_epoch_loss=4.292720\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:35 INFO 140098330203968] Epoch[57] Batch [10]#011Speed: 285.57 samples/sec#011loss=4.269893\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:35 INFO 140098330203968] processed a total of 647 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2642.6169872283936, \"sum\": 2642.6169872283936, \"min\": 2642.6169872283936}}, \"EndTime\": 1550455295.270831, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455292.627905}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:35 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=244.821948457 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:35 INFO 140098330203968] #progress_metric: host=algo-1, completed 58 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:35 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:35 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_8fe75a3c-1636-4873-91de-34d966f31577-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 63.78889083862305, \"sum\": 63.78889083862305, \"min\": 63.78889083862305}}, \"EndTime\": 1550455295.335073, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455295.270911}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:35 INFO 140098330203968] Epoch[58] Batch[0] avg_epoch_loss=4.332305\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:36 INFO 140098330203968] Epoch[58] Batch[5] avg_epoch_loss=4.352651\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:36 INFO 140098330203968] Epoch[58] Batch [5]#011Speed: 300.73 samples/sec#011loss=4.352651\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:37 INFO 140098330203968] processed a total of 611 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2363.6131286621094, \"sum\": 2363.6131286621094, \"min\": 2363.6131286621094}}, \"EndTime\": 1550455297.698818, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455295.335144}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:37 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=258.488669837 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:37 INFO 140098330203968] #progress_metric: host=algo-1, completed 59 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:37 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:38 INFO 140098330203968] Epoch[59] Batch[0] avg_epoch_loss=4.348832\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:39 INFO 140098330203968] Epoch[59] Batch[5] avg_epoch_loss=4.314933\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:39 INFO 140098330203968] Epoch[59] Batch [5]#011Speed: 287.99 samples/sec#011loss=4.314933\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:40 INFO 140098330203968] processed a total of 593 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2444.2079067230225, \"sum\": 2444.2079067230225, \"min\": 2444.2079067230225}}, \"EndTime\": 1550455300.143444, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455297.698905}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:40 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=242.601985608 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:40 INFO 140098330203968] #progress_metric: host=algo-1, completed 60 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:40 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:40 INFO 140098330203968] Epoch[60] Batch[0] avg_epoch_loss=4.256725\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:41 INFO 140098330203968] Epoch[60] Batch[5] avg_epoch_loss=4.320001\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:41 INFO 140098330203968] Epoch[60] Batch [5]#011Speed: 298.10 samples/sec#011loss=4.320001\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:42 INFO 140098330203968] Epoch[60] Batch[10] avg_epoch_loss=4.293082\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:42 INFO 140098330203968] Epoch[60] Batch [10]#011Speed: 285.32 samples/sec#011loss=4.260778\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:42 INFO 140098330203968] processed a total of 679 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2693.6399936676025, \"sum\": 2693.6399936676025, \"min\": 2693.6399936676025}}, \"EndTime\": 1550455302.837493, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455300.143529}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:42 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=252.064215591 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:42 INFO 140098330203968] #progress_metric: host=algo-1, completed 61 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:42 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:43 INFO 140098330203968] Epoch[61] Batch[0] avg_epoch_loss=4.287281\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:44 INFO 140098330203968] Epoch[61] Batch[5] avg_epoch_loss=4.283317\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:44 INFO 140098330203968] Epoch[61] Batch [5]#011Speed: 297.33 samples/sec#011loss=4.283317\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:45 INFO 140098330203968] Epoch[61] Batch[10] avg_epoch_loss=4.267810\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:45 INFO 140098330203968] Epoch[61] Batch [10]#011Speed: 283.29 samples/sec#011loss=4.249201\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:45 INFO 140098330203968] processed a total of 709 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2965.082883834839, \"sum\": 2965.082883834839, \"min\": 2965.082883834839}}, \"EndTime\": 1550455305.802984, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455302.837571}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:45 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=239.106574744 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:45 INFO 140098330203968] #progress_metric: host=algo-1, completed 62 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:45 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:45 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_4663aa5d-c706-4f20-9df9-7bc2c2dbba39-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 77.99410820007324, \"sum\": 77.99410820007324, \"min\": 77.99410820007324}}, \"EndTime\": 1550455305.881466, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455305.803065}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:46 INFO 140098330203968] Epoch[62] Batch[0] avg_epoch_loss=4.343434\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:47 INFO 140098330203968] Epoch[62] Batch[5] avg_epoch_loss=4.310054\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:47 INFO 140098330203968] Epoch[62] Batch [5]#011Speed: 293.82 samples/sec#011loss=4.310054\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:48 INFO 140098330203968] processed a total of 624 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2412.3849868774414, \"sum\": 2412.3849868774414, \"min\": 2412.3849868774414}}, \"EndTime\": 1550455308.293983, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455305.881539}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:48 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=258.651863062 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:48 INFO 140098330203968] #progress_metric: host=algo-1, completed 63 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:48 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:48 INFO 140098330203968] Epoch[63] Batch[0] avg_epoch_loss=4.343734\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:49 INFO 140098330203968] Epoch[63] Batch[5] avg_epoch_loss=4.356107\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:49 INFO 140098330203968] Epoch[63] Batch [5]#011Speed: 294.52 samples/sec#011loss=4.356107\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:50 INFO 140098330203968] Epoch[63] Batch[10] avg_epoch_loss=4.345090\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:50 INFO 140098330203968] Epoch[63] Batch [10]#011Speed: 301.63 samples/sec#011loss=4.331871\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:50 INFO 140098330203968] processed a total of 674 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2631.8631172180176, \"sum\": 2631.8631172180176, \"min\": 2631.8631172180176}}, \"EndTime\": 1550455310.926261, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455308.294067}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:50 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=256.082834658 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:50 INFO 140098330203968] #progress_metric: host=algo-1, completed 64 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:50 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:51 INFO 140098330203968] Epoch[64] Batch[0] avg_epoch_loss=4.271764\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:52 INFO 140098330203968] Epoch[64] Batch[5] avg_epoch_loss=4.324020\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:52 INFO 140098330203968] Epoch[64] Batch [5]#011Speed: 307.30 samples/sec#011loss=4.324020\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:53 INFO 140098330203968] Epoch[64] Batch[10] avg_epoch_loss=4.321886\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:53 INFO 140098330203968] Epoch[64] Batch [10]#011Speed: 281.80 samples/sec#011loss=4.319324\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:53 INFO 140098330203968] processed a total of 653 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2639.644145965576, \"sum\": 2639.644145965576, \"min\": 2639.644145965576}}, \"EndTime\": 1550455313.566372, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455310.926324}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:53 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=247.370681041 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:53 INFO 140098330203968] #progress_metric: host=algo-1, completed 65 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:53 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:53 INFO 140098330203968] Epoch[65] Batch[0] avg_epoch_loss=4.260790\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:55 INFO 140098330203968] Epoch[65] Batch[5] avg_epoch_loss=4.346834\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:55 INFO 140098330203968] Epoch[65] Batch [5]#011Speed: 290.11 samples/sec#011loss=4.346834\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:55 INFO 140098330203968] processed a total of 601 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2400.3701210021973, \"sum\": 2400.3701210021973, \"min\": 2400.3701210021973}}, \"EndTime\": 1550455315.967204, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455313.566454}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:55 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=250.36482449 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:55 INFO 140098330203968] #progress_metric: host=algo-1, completed 66 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:55 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:56 INFO 140098330203968] Epoch[66] Batch[0] avg_epoch_loss=4.287777\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:57 INFO 140098330203968] Epoch[66] Batch[5] avg_epoch_loss=4.299644\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:57 INFO 140098330203968] Epoch[66] Batch [5]#011Speed: 301.81 samples/sec#011loss=4.299644\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:58 INFO 140098330203968] Epoch[66] Batch[10] avg_epoch_loss=4.313006\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:58 INFO 140098330203968] Epoch[66] Batch [10]#011Speed: 301.56 samples/sec#011loss=4.329040\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:58 INFO 140098330203968] processed a total of 666 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2578.476905822754, \"sum\": 2578.476905822754, \"min\": 2578.476905822754}}, \"EndTime\": 1550455318.546096, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455315.967289}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:58 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=258.2802907 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:58 INFO 140098330203968] #progress_metric: host=algo-1, completed 67 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:58 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:01:58 INFO 140098330203968] Epoch[67] Batch[0] avg_epoch_loss=4.501410\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:00 INFO 140098330203968] Epoch[67] Batch[5] avg_epoch_loss=4.372367\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:00 INFO 140098330203968] Epoch[67] Batch [5]#011Speed: 298.12 samples/sec#011loss=4.372367\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:00 INFO 140098330203968] processed a total of 638 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2370.305061340332, \"sum\": 2370.305061340332, \"min\": 2370.305061340332}}, \"EndTime\": 1550455320.916825, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455318.546173}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:00 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=269.147422649 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:00 INFO 140098330203968] #progress_metric: host=algo-1, completed 68 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:00 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:01 INFO 140098330203968] Epoch[68] Batch[0] avg_epoch_loss=4.251869\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:02 INFO 140098330203968] Epoch[68] Batch[5] avg_epoch_loss=4.279238\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:02 INFO 140098330203968] Epoch[68] Batch [5]#011Speed: 296.09 samples/sec#011loss=4.279238\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:03 INFO 140098330203968] Epoch[68] Batch[10] avg_epoch_loss=4.283893\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:03 INFO 140098330203968] Epoch[68] Batch [10]#011Speed: 301.13 samples/sec#011loss=4.289479\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:03 INFO 140098330203968] processed a total of 665 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2594.4511890411377, \"sum\": 2594.4511890411377, \"min\": 2594.4511890411377}}, \"EndTime\": 1550455323.511732, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455320.916907}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:03 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=256.302824897 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:03 INFO 140098330203968] #progress_metric: host=algo-1, completed 69 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:03 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:03 INFO 140098330203968] Epoch[69] Batch[0] avg_epoch_loss=4.262158\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:05 INFO 140098330203968] Epoch[69] Batch[5] avg_epoch_loss=4.287671\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:05 INFO 140098330203968] Epoch[69] Batch [5]#011Speed: 299.60 samples/sec#011loss=4.287671\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:06 INFO 140098330203968] Epoch[69] Batch[10] avg_epoch_loss=4.280826\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:06 INFO 140098330203968] Epoch[69] Batch [10]#011Speed: 296.23 samples/sec#011loss=4.272612\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:06 INFO 140098330203968] processed a total of 644 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2595.889091491699, \"sum\": 2595.889091491699, \"min\": 2595.889091491699}}, \"EndTime\": 1550455326.108087, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455323.511825}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:06 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=248.073007236 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:06 INFO 140098330203968] #progress_metric: host=algo-1, completed 70 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:06 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:06 INFO 140098330203968] Epoch[70] Batch[0] avg_epoch_loss=4.246464\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:07 INFO 140098330203968] Epoch[70] Batch[5] avg_epoch_loss=4.242714\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:07 INFO 140098330203968] Epoch[70] Batch [5]#011Speed: 304.45 samples/sec#011loss=4.242714\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:08 INFO 140098330203968] Epoch[70] Batch[10] avg_epoch_loss=4.247202\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:08 INFO 140098330203968] Epoch[70] Batch [10]#011Speed: 293.21 samples/sec#011loss=4.252588\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:08 INFO 140098330203968] processed a total of 645 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2594.863176345825, \"sum\": 2594.863176345825, \"min\": 2594.863176345825}}, \"EndTime\": 1550455328.703391, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455326.108163}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:08 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=248.556623286 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:08 INFO 140098330203968] #progress_metric: host=algo-1, completed 71 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:08 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:08 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_59c2790b-8b9b-4c5e-a4cf-af694108a9ee-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 81.35199546813965, \"sum\": 81.35199546813965, \"min\": 81.35199546813965}}, \"EndTime\": 1550455328.785199, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455328.703468}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:09 INFO 140098330203968] Epoch[71] Batch[0] avg_epoch_loss=4.205283\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:10 INFO 140098330203968] Epoch[71] Batch[5] avg_epoch_loss=4.200588\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:10 INFO 140098330203968] Epoch[71] Batch [5]#011Speed: 289.41 samples/sec#011loss=4.200588\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:11 INFO 140098330203968] Epoch[71] Batch[10] avg_epoch_loss=4.195242\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:11 INFO 140098330203968] Epoch[71] Batch [10]#011Speed: 293.04 samples/sec#011loss=4.188827\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:11 INFO 140098330203968] processed a total of 655 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2663.823127746582, \"sum\": 2663.823127746582, \"min\": 2663.823127746582}}, \"EndTime\": 1550455331.449167, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455328.785274}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:11 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=245.876236935 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:11 INFO 140098330203968] #progress_metric: host=algo-1, completed 72 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:11 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:11 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_e915bb06-b36a-47a2-9225-8d52d2fb899e-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 69.93889808654785, \"sum\": 69.93889808654785, \"min\": 69.93889808654785}}, \"EndTime\": 1550455331.51957, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455331.449246}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:11 INFO 140098330203968] Epoch[72] Batch[0] avg_epoch_loss=4.254036\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:13 INFO 140098330203968] Epoch[72] Batch[5] avg_epoch_loss=4.237330\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:13 INFO 140098330203968] Epoch[72] Batch [5]#011Speed: 299.45 samples/sec#011loss=4.237330\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:14 INFO 140098330203968] Epoch[72] Batch[10] avg_epoch_loss=4.252202\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:14 INFO 140098330203968] Epoch[72] Batch [10]#011Speed: 287.72 samples/sec#011loss=4.270050\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:14 INFO 140098330203968] processed a total of 644 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2651.6849994659424, \"sum\": 2651.6849994659424, \"min\": 2651.6849994659424}}, \"EndTime\": 1550455334.1714, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455331.519645}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:14 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=242.854005471 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:14 INFO 140098330203968] #progress_metric: host=algo-1, completed 73 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:14 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:14 INFO 140098330203968] Epoch[73] Batch[0] avg_epoch_loss=4.212753\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:15 INFO 140098330203968] Epoch[73] Batch[5] avg_epoch_loss=4.210079\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:15 INFO 140098330203968] Epoch[73] Batch [5]#011Speed: 272.09 samples/sec#011loss=4.210079\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:16 INFO 140098330203968] processed a total of 609 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2518.2950496673584, \"sum\": 2518.2950496673584, \"min\": 2518.2950496673584}}, \"EndTime\": 1550455336.690097, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455334.171473}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:16 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=241.81885704 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:16 INFO 140098330203968] #progress_metric: host=algo-1, completed 74 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:16 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:17 INFO 140098330203968] Epoch[74] Batch[0] avg_epoch_loss=4.239408\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:18 INFO 140098330203968] Epoch[74] Batch[5] avg_epoch_loss=4.253116\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:18 INFO 140098330203968] Epoch[74] Batch [5]#011Speed: 303.98 samples/sec#011loss=4.253116\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:19 INFO 140098330203968] processed a total of 610 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2353.1789779663086, \"sum\": 2353.1789779663086, \"min\": 2353.1789779663086}}, \"EndTime\": 1550455339.043745, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455336.690173}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:19 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=259.210801379 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:19 INFO 140098330203968] #progress_metric: host=algo-1, completed 75 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:19 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:19 INFO 140098330203968] Epoch[75] Batch[0] avg_epoch_loss=4.416149\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:20 INFO 140098330203968] Epoch[75] Batch[5] avg_epoch_loss=4.277645\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:20 INFO 140098330203968] Epoch[75] Batch [5]#011Speed: 296.92 samples/sec#011loss=4.277645\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:21 INFO 140098330203968] processed a total of 603 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2372.8580474853516, \"sum\": 2372.8580474853516, \"min\": 2372.8580474853516}}, \"EndTime\": 1550455341.417054, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455339.043819}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:21 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=254.110111806 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:21 INFO 140098330203968] #progress_metric: host=algo-1, completed 76 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:21 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:21 INFO 140098330203968] Epoch[76] Batch[0] avg_epoch_loss=4.170366\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:22 INFO 140098330203968] Epoch[76] Batch[5] avg_epoch_loss=4.196334\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:22 INFO 140098330203968] Epoch[76] Batch [5]#011Speed: 300.14 samples/sec#011loss=4.196334\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:23 INFO 140098330203968] processed a total of 633 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2351.526975631714, \"sum\": 2351.526975631714, \"min\": 2351.526975631714}}, \"EndTime\": 1550455343.769081, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455341.417142}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:23 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=269.174536988 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:23 INFO 140098330203968] #progress_metric: host=algo-1, completed 77 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:23 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:23 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_05a22aad-35f5-4800-9f65-39d2aaae6327-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 58.66098403930664, \"sum\": 58.66098403930664, \"min\": 58.66098403930664}}, \"EndTime\": 1550455343.828219, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455343.769146}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:24 INFO 140098330203968] Epoch[77] Batch[0] avg_epoch_loss=4.208636\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:25 INFO 140098330203968] Epoch[77] Batch[5] avg_epoch_loss=4.167605\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:25 INFO 140098330203968] Epoch[77] Batch [5]#011Speed: 297.09 samples/sec#011loss=4.167605\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:26 INFO 140098330203968] processed a total of 634 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2378.0570030212402, \"sum\": 2378.0570030212402, \"min\": 2378.0570030212402}}, \"EndTime\": 1550455346.2064, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455343.828274}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:26 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=266.581996316 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:26 INFO 140098330203968] #progress_metric: host=algo-1, completed 78 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:26 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:26 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_ecfc9fe6-7542-4d17-adbd-90a310f9fdeb-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 60.90593338012695, \"sum\": 60.90593338012695, \"min\": 60.90593338012695}}, \"EndTime\": 1550455346.267906, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455346.206552}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:26 INFO 140098330203968] Epoch[78] Batch[0] avg_epoch_loss=4.143866\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:27 INFO 140098330203968] Epoch[78] Batch[5] avg_epoch_loss=4.169144\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:27 INFO 140098330203968] Epoch[78] Batch [5]#011Speed: 301.12 samples/sec#011loss=4.169144\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:28 INFO 140098330203968] processed a total of 613 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2410.949945449829, \"sum\": 2410.949945449829, \"min\": 2410.949945449829}}, \"EndTime\": 1550455348.67898, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455346.267965}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:28 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=254.245765098 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:28 INFO 140098330203968] #progress_metric: host=algo-1, completed 79 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:28 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:28 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_6b660212-3fd2-4af9-8330-a84fc681ce8b-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 60.66393852233887, \"sum\": 60.66393852233887, \"min\": 60.66393852233887}}, \"EndTime\": 1550455348.740049, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455348.679045}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:29 INFO 140098330203968] Epoch[79] Batch[0] avg_epoch_loss=4.203887\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:30 INFO 140098330203968] Epoch[79] Batch[5] avg_epoch_loss=4.208917\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:30 INFO 140098330203968] Epoch[79] Batch [5]#011Speed: 293.13 samples/sec#011loss=4.208917\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:31 INFO 140098330203968] processed a total of 637 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2402.9409885406494, \"sum\": 2402.9409885406494, \"min\": 2402.9409885406494}}, \"EndTime\": 1550455351.143132, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455348.740121}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:31 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=265.078143521 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:31 INFO 140098330203968] #progress_metric: host=algo-1, completed 80 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:31 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:31 INFO 140098330203968] Epoch[80] Batch[0] avg_epoch_loss=4.184781\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:32 INFO 140098330203968] Epoch[80] Batch[5] avg_epoch_loss=4.184607\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:32 INFO 140098330203968] Epoch[80] Batch [5]#011Speed: 301.36 samples/sec#011loss=4.184607\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:33 INFO 140098330203968] processed a total of 625 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2419.142961502075, \"sum\": 2419.142961502075, \"min\": 2419.142961502075}}, \"EndTime\": 1550455353.562692, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455351.143214}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:33 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=258.343254046 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:33 INFO 140098330203968] #progress_metric: host=algo-1, completed 81 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:33 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:34 INFO 140098330203968] Epoch[81] Batch[0] avg_epoch_loss=4.200870\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:35 INFO 140098330203968] Epoch[81] Batch[5] avg_epoch_loss=4.187211\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:35 INFO 140098330203968] Epoch[81] Batch [5]#011Speed: 286.51 samples/sec#011loss=4.187211\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:35 INFO 140098330203968] processed a total of 636 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2423.23899269104, \"sum\": 2423.23899269104, \"min\": 2423.23899269104}}, \"EndTime\": 1550455355.986379, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455353.562775}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:35 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=262.445724895 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:35 INFO 140098330203968] #progress_metric: host=algo-1, completed 82 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:35 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:36 INFO 140098330203968] Epoch[82] Batch[0] avg_epoch_loss=4.157598\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:37 INFO 140098330203968] Epoch[82] Batch[5] avg_epoch_loss=4.166191\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:37 INFO 140098330203968] Epoch[82] Batch [5]#011Speed: 298.77 samples/sec#011loss=4.166191\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:38 INFO 140098330203968] Epoch[82] Batch[10] avg_epoch_loss=4.181511\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:38 INFO 140098330203968] Epoch[82] Batch [10]#011Speed: 299.61 samples/sec#011loss=4.199895\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:38 INFO 140098330203968] processed a total of 647 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2586.9131088256836, \"sum\": 2586.9131088256836, \"min\": 2586.9131088256836}}, \"EndTime\": 1550455358.573746, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455355.986455}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:38 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=250.09519528 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:38 INFO 140098330203968] #progress_metric: host=algo-1, completed 83 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:38 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:39 INFO 140098330203968] Epoch[83] Batch[0] avg_epoch_loss=4.268992\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:40 INFO 140098330203968] Epoch[83] Batch[5] avg_epoch_loss=4.203380\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:40 INFO 140098330203968] Epoch[83] Batch [5]#011Speed: 298.21 samples/sec#011loss=4.203380\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:40 INFO 140098330203968] processed a total of 606 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2373.865842819214, \"sum\": 2373.865842819214, \"min\": 2373.865842819214}}, \"EndTime\": 1550455360.948009, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455358.573809}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:40 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=255.267418003 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:40 INFO 140098330203968] #progress_metric: host=algo-1, completed 84 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:40 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:41 INFO 140098330203968] Epoch[84] Batch[0] avg_epoch_loss=4.232632\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:42 INFO 140098330203968] Epoch[84] Batch[5] avg_epoch_loss=4.179939\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:42 INFO 140098330203968] Epoch[84] Batch [5]#011Speed: 306.68 samples/sec#011loss=4.179939\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:43 INFO 140098330203968] processed a total of 632 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2355.9958934783936, \"sum\": 2355.9958934783936, \"min\": 2355.9958934783936}}, \"EndTime\": 1550455363.304471, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455360.948082}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:43 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=268.237245653 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:43 INFO 140098330203968] #progress_metric: host=algo-1, completed 85 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:43 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:43 INFO 140098330203968] Epoch[85] Batch[0] avg_epoch_loss=4.153802\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:44 INFO 140098330203968] Epoch[85] Batch[5] avg_epoch_loss=4.170751\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:44 INFO 140098330203968] Epoch[85] Batch [5]#011Speed: 305.32 samples/sec#011loss=4.170751\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:45 INFO 140098330203968] processed a total of 593 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2388.5438442230225, \"sum\": 2388.5438442230225, \"min\": 2388.5438442230225}}, \"EndTime\": 1550455365.69343, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455363.304556}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:45 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=248.25592809 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:45 INFO 140098330203968] #progress_metric: host=algo-1, completed 86 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:45 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:46 INFO 140098330203968] Epoch[86] Batch[0] avg_epoch_loss=4.511135\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:47 INFO 140098330203968] Epoch[86] Batch[5] avg_epoch_loss=4.317241\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:47 INFO 140098330203968] Epoch[86] Batch [5]#011Speed: 307.30 samples/sec#011loss=4.317241\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:48 INFO 140098330203968] Epoch[86] Batch[10] avg_epoch_loss=4.278773\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:48 INFO 140098330203968] Epoch[86] Batch [10]#011Speed: 296.46 samples/sec#011loss=4.232612\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:48 INFO 140098330203968] processed a total of 675 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2582.350015640259, \"sum\": 2582.350015640259, \"min\": 2582.350015640259}}, \"EndTime\": 1550455368.276202, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455365.693507}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:48 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=261.379199622 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:48 INFO 140098330203968] #progress_metric: host=algo-1, completed 87 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:48 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:48 INFO 140098330203968] Epoch[87] Batch[0] avg_epoch_loss=4.252762\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:49 INFO 140098330203968] Epoch[87] Batch[5] avg_epoch_loss=4.206502\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:49 INFO 140098330203968] Epoch[87] Batch [5]#011Speed: 300.86 samples/sec#011loss=4.206502\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:50 INFO 140098330203968] Epoch[87] Batch[10] avg_epoch_loss=4.228342\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:50 INFO 140098330203968] Epoch[87] Batch [10]#011Speed: 294.85 samples/sec#011loss=4.254551\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:50 INFO 140098330203968] processed a total of 660 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2592.0591354370117, \"sum\": 2592.0591354370117, \"min\": 2592.0591354370117}}, \"EndTime\": 1550455370.868674, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455368.276268}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:50 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=254.612134249 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:50 INFO 140098330203968] #progress_metric: host=algo-1, completed 88 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:50 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:51 INFO 140098330203968] Epoch[88] Batch[0] avg_epoch_loss=4.249505\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:52 INFO 140098330203968] Epoch[88] Batch[5] avg_epoch_loss=4.216936\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:52 INFO 140098330203968] Epoch[88] Batch [5]#011Speed: 303.79 samples/sec#011loss=4.216936\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:53 INFO 140098330203968] processed a total of 627 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2341.947078704834, \"sum\": 2341.947078704834, \"min\": 2341.947078704834}}, \"EndTime\": 1550455373.211053, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455370.86875}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:53 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=267.712592718 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:53 INFO 140098330203968] #progress_metric: host=algo-1, completed 89 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:53 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:53 INFO 140098330203968] Epoch[89] Batch[0] avg_epoch_loss=4.144236\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:54 INFO 140098330203968] Epoch[89] Batch[5] avg_epoch_loss=4.205333\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:54 INFO 140098330203968] Epoch[89] Batch [5]#011Speed: 300.01 samples/sec#011loss=4.205333\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:55 INFO 140098330203968] processed a total of 624 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2395.339012145996, \"sum\": 2395.339012145996, \"min\": 2395.339012145996}}, \"EndTime\": 1550455375.606829, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455373.211127}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:55 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=260.492751477 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:55 INFO 140098330203968] #progress_metric: host=algo-1, completed 90 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:55 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:56 INFO 140098330203968] Epoch[90] Batch[0] avg_epoch_loss=4.255981\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:57 INFO 140098330203968] Epoch[90] Batch[5] avg_epoch_loss=4.197814\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:57 INFO 140098330203968] Epoch[90] Batch [5]#011Speed: 306.52 samples/sec#011loss=4.197814\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:58 INFO 140098330203968] Epoch[90] Batch[10] avg_epoch_loss=4.183835\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:58 INFO 140098330203968] Epoch[90] Batch [10]#011Speed: 299.11 samples/sec#011loss=4.167060\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:58 INFO 140098330203968] processed a total of 669 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2618.9680099487305, \"sum\": 2618.9680099487305, \"min\": 2618.9680099487305}}, \"EndTime\": 1550455378.226271, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455375.606912}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:58 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=255.432640992 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:58 INFO 140098330203968] #progress_metric: host=algo-1, completed 91 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:58 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:58 INFO 140098330203968] Epoch[91] Batch[0] avg_epoch_loss=4.165494\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:59 INFO 140098330203968] Epoch[91] Batch[5] avg_epoch_loss=4.152416\u001b[0m\n",
"\u001b[31m[02/18/2019 02:02:59 INFO 140098330203968] Epoch[91] Batch [5]#011Speed: 307.03 samples/sec#011loss=4.152416\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:00 INFO 140098330203968] processed a total of 624 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2396.7981338500977, \"sum\": 2396.7981338500977, \"min\": 2396.7981338500977}}, \"EndTime\": 1550455380.623498, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455378.226348}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:00 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=260.333891982 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:00 INFO 140098330203968] #progress_metric: host=algo-1, completed 92 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:00 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:01 INFO 140098330203968] Epoch[92] Batch[0] avg_epoch_loss=4.215059\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:02 INFO 140098330203968] Epoch[92] Batch[5] avg_epoch_loss=4.165628\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:02 INFO 140098330203968] Epoch[92] Batch [5]#011Speed: 299.60 samples/sec#011loss=4.165628\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:03 INFO 140098330203968] processed a total of 595 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2395.6940174102783, \"sum\": 2395.6940174102783, \"min\": 2395.6940174102783}}, \"EndTime\": 1550455383.019616, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455380.623581}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:03 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=248.349119462 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:03 INFO 140098330203968] #progress_metric: host=algo-1, completed 93 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:03 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:03 INFO 140098330203968] Epoch[93] Batch[0] avg_epoch_loss=4.207124\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:04 INFO 140098330203968] Epoch[93] Batch[5] avg_epoch_loss=4.141407\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:04 INFO 140098330203968] Epoch[93] Batch [5]#011Speed: 294.86 samples/sec#011loss=4.141407\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:05 INFO 140098330203968] Epoch[93] Batch[10] avg_epoch_loss=4.146842\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:05 INFO 140098330203968] Epoch[93] Batch [10]#011Speed: 295.61 samples/sec#011loss=4.153363\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:05 INFO 140098330203968] processed a total of 646 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2623.7709522247314, \"sum\": 2623.7709522247314, \"min\": 2623.7709522247314}}, \"EndTime\": 1550455385.643805, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455383.019702}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:05 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=246.198010223 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:05 INFO 140098330203968] #progress_metric: host=algo-1, completed 94 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:05 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:05 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_96d4aaab-936e-4dc4-9986-5c2df41f3cee-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 65.59300422668457, \"sum\": 65.59300422668457, \"min\": 65.59300422668457}}, \"EndTime\": 1550455385.709861, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455385.6439}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:06 INFO 140098330203968] Epoch[94] Batch[0] avg_epoch_loss=4.436582\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:07 INFO 140098330203968] Epoch[94] Batch[5] avg_epoch_loss=4.258602\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:07 INFO 140098330203968] Epoch[94] Batch [5]#011Speed: 291.62 samples/sec#011loss=4.258602\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:08 INFO 140098330203968] processed a total of 605 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2409.153938293457, \"sum\": 2409.153938293457, \"min\": 2409.153938293457}}, \"EndTime\": 1550455388.119163, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455385.70994}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:08 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=251.112582734 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:08 INFO 140098330203968] #progress_metric: host=algo-1, completed 95 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:08 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:08 INFO 140098330203968] Epoch[95] Batch[0] avg_epoch_loss=4.202283\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:09 INFO 140098330203968] Epoch[95] Batch[5] avg_epoch_loss=4.203795\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:09 INFO 140098330203968] Epoch[95] Batch [5]#011Speed: 306.39 samples/sec#011loss=4.203795\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:10 INFO 140098330203968] Epoch[95] Batch[10] avg_epoch_loss=4.183821\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:10 INFO 140098330203968] Epoch[95] Batch [10]#011Speed: 289.31 samples/sec#011loss=4.159852\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:10 INFO 140098330203968] processed a total of 658 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2652.5230407714844, \"sum\": 2652.5230407714844, \"min\": 2652.5230407714844}}, \"EndTime\": 1550455390.772097, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455388.119245}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:10 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=248.054579967 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:10 INFO 140098330203968] #progress_metric: host=algo-1, completed 96 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:10 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:11 INFO 140098330203968] Epoch[96] Batch[0] avg_epoch_loss=4.179824\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:12 INFO 140098330203968] Epoch[96] Batch[5] avg_epoch_loss=4.166861\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:12 INFO 140098330203968] Epoch[96] Batch [5]#011Speed: 304.63 samples/sec#011loss=4.166861\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:13 INFO 140098330203968] processed a total of 616 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2418.199062347412, \"sum\": 2418.199062347412, \"min\": 2418.199062347412}}, \"EndTime\": 1550455393.190692, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455390.772176}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:13 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=254.722066792 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:13 INFO 140098330203968] #progress_metric: host=algo-1, completed 97 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:13 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:13 INFO 140098330203968] Epoch[97] Batch[0] avg_epoch_loss=4.149429\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:14 INFO 140098330203968] Epoch[97] Batch[5] avg_epoch_loss=4.126558\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:14 INFO 140098330203968] Epoch[97] Batch [5]#011Speed: 294.99 samples/sec#011loss=4.126558\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:15 INFO 140098330203968] Epoch[97] Batch[10] avg_epoch_loss=4.115863\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:15 INFO 140098330203968] Epoch[97] Batch [10]#011Speed: 297.52 samples/sec#011loss=4.103030\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:15 INFO 140098330203968] processed a total of 681 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2639.0998363494873, \"sum\": 2639.0998363494873, \"min\": 2639.0998363494873}}, \"EndTime\": 1550455395.830203, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455393.190775}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:15 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=258.030875128 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:15 INFO 140098330203968] #progress_metric: host=algo-1, completed 98 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:15 INFO 140098330203968] best epoch loss so far\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:15 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/state_a8207ccf-a6e9-4e49-bc8b-87b03f844e3c-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"state.serialize.time\": {\"count\": 1, \"max\": 64.82291221618652, \"sum\": 64.82291221618652, \"min\": 64.82291221618652}}, \"EndTime\": 1550455395.895516, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455395.830282}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:16 INFO 140098330203968] Epoch[98] Batch[0] avg_epoch_loss=4.113667\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:17 INFO 140098330203968] Epoch[98] Batch[5] avg_epoch_loss=4.130583\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:17 INFO 140098330203968] Epoch[98] Batch [5]#011Speed: 299.99 samples/sec#011loss=4.130583\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:18 INFO 140098330203968] processed a total of 638 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2363.935947418213, \"sum\": 2363.935947418213, \"min\": 2363.935947418213}}, \"EndTime\": 1550455398.259594, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455395.895592}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:18 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=269.874933362 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:18 INFO 140098330203968] #progress_metric: host=algo-1, completed 99 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:18 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:18 INFO 140098330203968] Epoch[99] Batch[0] avg_epoch_loss=4.169965\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:19 INFO 140098330203968] Epoch[99] Batch[5] avg_epoch_loss=4.124378\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:19 INFO 140098330203968] Epoch[99] Batch [5]#011Speed: 296.15 samples/sec#011loss=4.124378\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 INFO 140098330203968] Epoch[99] Batch[10] avg_epoch_loss=4.161443\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 INFO 140098330203968] Epoch[99] Batch [10]#011Speed: 298.55 samples/sec#011loss=4.205920\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 INFO 140098330203968] processed a total of 648 examples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 2605.5049896240234, \"sum\": 2605.5049896240234, \"min\": 2605.5049896240234}}, \"EndTime\": 1550455400.86551, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455398.259676}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 INFO 140098330203968] #throughput_metric: host=algo-1, train throughput=248.692831285 records/second\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 INFO 140098330203968] #progress_metric: host=algo-1, completed 100 % of epochs\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 INFO 140098330203968] loss did not improve\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 INFO 140098330203968] Final loss: 4.11586336656 (occurred at epoch 97)\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 INFO 140098330203968] #quality_metric: host=algo-1, train final_loss <loss>=4.11586336656\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 INFO 140098330203968] Worker algo-1 finished training.\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 WARNING 140098330203968] wait_for_all_workers will not sync workers since the kv store is not running distributed\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:20 INFO 140098330203968] All workers finished. Serializing model for prediction.\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"get_graph.time\": {\"count\": 1, \"max\": 2447.5011825561523, \"sum\": 2447.5011825561523, \"min\": 2447.5011825561523}}, \"EndTime\": 1550455403.313824, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455400.865589}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:23 INFO 140098330203968] Number of GPUs being used: 0\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"finalize.time\": {\"count\": 1, \"max\": 2695.5811977386475, \"sum\": 2695.5811977386475, \"min\": 2695.5811977386475}}, \"EndTime\": 1550455403.561869, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455403.313911}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:23 INFO 140098330203968] Serializing to /opt/ml/model/model_algo-1\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:23 INFO 140098330203968] Saved checkpoint to \"/opt/ml/model/model_algo-1-0000.params\"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"model.serialize.time\": {\"count\": 1, \"max\": 35.932064056396484, \"sum\": 35.932064056396484, \"min\": 35.932064056396484}}, \"EndTime\": 1550455403.597922, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455403.561944}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:23 INFO 140098330203968] Successfully serialized the model for prediction.\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:23 INFO 140098330203968] Evaluating model accuracy on testset using 100 samples\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"model.bind.time\": {\"count\": 1, \"max\": 0.04100799560546875, \"sum\": 0.04100799560546875, \"min\": 0.04100799560546875}}, \"EndTime\": 1550455403.598759, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455403.597972}\n",
"\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"model.score.time\": {\"count\": 1, \"max\": 3267.841100692749, \"sum\": 3267.841100692749, \"min\": 3267.841100692749}}, \"EndTime\": 1550455406.866561, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455403.598814}\n",
"\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, RMSE): 60.5305300519\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, mean_wQuantileLoss): 0.0483206\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, wQuantileLoss[0.1]): 0.019846\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, wQuantileLoss[0.2]): 0.0321616\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, wQuantileLoss[0.3]): 0.0418986\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, wQuantileLoss[0.4]): 0.0494645\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, wQuantileLoss[0.5]): 0.0553834\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, wQuantileLoss[0.6]): 0.0593422\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, wQuantileLoss[0.7]): 0.0611647\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, wQuantileLoss[0.8]): 0.0603628\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #test_score (algo-1, wQuantileLoss[0.9]): 0.0552618\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #quality_metric: host=algo-1, test RMSE <loss>=60.5305300519\u001b[0m\n",
"\u001b[31m[02/18/2019 02:03:26 INFO 140098330203968] #quality_metric: host=algo-1, test mean_wQuantileLoss <loss>=0.0483206287026\u001b[0m\n",
"\u001b[31m#metrics {\"Metrics\": {\"totaltime\": {\"count\": 1, \"max\": 262211.47108078003, \"sum\": 262211.47108078003, \"min\": 262211.47108078003}, \"setuptime\": {\"count\": 1, \"max\": 9.274005889892578, \"sum\": 9.274005889892578, \"min\": 9.274005889892578}}, \"EndTime\": 1550455406.937366, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/DeepAR\"}, \"StartTime\": 1550455406.866628}\n",
"\u001b[0m\n",
"\n",
"2019-02-18 02:04:17 Uploading - Uploading generated training model\n",
"2019-02-18 02:04:17 Completed - Training job completed\n",
"Billable seconds: 356\n",
"CPU times: user 1.03 s, sys: 88.2 ms, total: 1.12 s\n",
"Wall time: 8min 15s\n"
]
}
],
"source": [
"%%time\n",
"data_channels = {\n",
" \"train\": \"{}/train/\".format(s3_data_path),\n",
" \"test\": \"{}/test/\".format(s3_data_path)\n",
"}\n",
"\n",
"estimator.fit(inputs=data_channels, wait=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since you pass a test set in this example, accuracy metrics for the forecast are computed and logged (see bottom of the log).\n",
"You can find the definition of these metrics from [our documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html). You can use these to optimize the parameters and tune your model or use SageMaker's [Automated Model Tuning service](https://aws.amazon.com/blogs/aws/sagemaker-automatic-model-tuning/) to tune the model for you."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create endpoint and predictor"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have a trained model, we can use it to perform predictions by deploying it to an endpoint.\n",
"\n",
"**Note: Remember to delete the endpoint after running this experiment. A cell at the very bottom of this notebook will do that: make sure you run it at the end.**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To query the endpoint and perform predictions, we can define the following utility class: this allows making requests using `pandas.Series` objects rather than raw JSON strings."
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"class DeepARPredictor(sagemaker.predictor.RealTimePredictor):\n",
" \n",
" def __init__(self, *args, **kwargs):\n",
" super().__init__(*args, content_type=sagemaker.content_types.CONTENT_TYPE_JSON, **kwargs)\n",
" \n",
" def predict(self, ts, cat=None, dynamic_feat=None, \n",
" num_samples=100, return_samples=False, quantiles=[\"0.1\", \"0.5\", \"0.9\"]):\n",
" \"\"\"Requests the prediction of for the time series listed in `ts`, each with the (optional)\n",
" corresponding category listed in `cat`.\n",
" \n",
" ts -- `pandas.Series` object, the time series to predict\n",
" cat -- integer, the group associated to the time series (default: None)\n",
" num_samples -- integer, number of samples to compute at prediction time (default: 100)\n",
" return_samples -- boolean indicating whether to include samples in the response (default: False)\n",
" quantiles -- list of strings specifying the quantiles to compute (default: [\"0.1\", \"0.5\", \"0.9\"])\n",
" \n",
" Return value: list of `pandas.DataFrame` objects, each containing the predictions\n",
" \"\"\"\n",
" prediction_time = ts.index[-1] + 1\n",
" quantiles = [str(q) for q in quantiles]\n",
" req = self.__encode_request(ts, cat, dynamic_feat, num_samples, return_samples, quantiles)\n",
" res = super(DeepARPredictor, self).predict(req)\n",
" return self.__decode_response(res, ts.index.freq, prediction_time, return_samples)\n",
" \n",
" def __encode_request(self, ts, cat, dynamic_feat, num_samples, return_samples, quantiles):\n",
" instance = series_to_dict(ts, cat if cat is not None else None, dynamic_feat if dynamic_feat else None)\n",
"\n",
" configuration = {\n",
" \"num_samples\": num_samples,\n",
" \"output_types\": [\"quantiles\", \"samples\"] if return_samples else [\"quantiles\"],\n",
" \"quantiles\": quantiles\n",
" }\n",
" \n",
" http_request_data = {\n",
" \"instances\": [instance],\n",
" \"configuration\": configuration\n",
" }\n",
" \n",
" return json.dumps(http_request_data).encode('utf-8')\n",
" \n",
" def __decode_response(self, response, freq, prediction_time, return_samples):\n",
" # we only sent one time series so we only receive one in return\n",
" # however, if possible one will pass multiple time series as predictions will then be faster\n",
" predictions = json.loads(response.decode('utf-8'))['predictions'][0]\n",
" prediction_length = len(next(iter(predictions['quantiles'].values())))\n",
" prediction_index = pd.DatetimeIndex(start=prediction_time, freq=freq, periods=prediction_length) \n",
" if return_samples:\n",
" dict_of_samples = {'sample_' + str(i): s for i, s in enumerate(predictions['samples'])}\n",
" else:\n",
" dict_of_samples = {}\n",
" return pd.DataFrame(data={**predictions['quantiles'], **dict_of_samples}, index=prediction_index)\n",
"\n",
" def set_frequency(self, freq):\n",
" self.freq = freq\n",
" \n",
"def encode_target(ts):\n",
" return [x if np.isfinite(x) else \"NaN\" for x in ts] \n",
"\n",
"def series_to_dict(ts, cat=None, dynamic_feat=None):\n",
" \"\"\"Given a pandas.Series object, returns a dictionary encoding the time series.\n",
"\n",
" ts -- a pands.Series object with the target time series\n",
" cat -- an integer indicating the time series category\n",
"\n",
" Return value: a dictionary\n",
" \"\"\"\n",
" obj = {\"start\": str(ts.index[0]), \"target\": encode_target(ts)}\n",
" if cat is not None:\n",
" obj[\"cat\"] = cat\n",
" if dynamic_feat is not None:\n",
" obj[\"dynamic_feat\"] = dynamic_feat \n",
" return obj"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can deploy the model and create and endpoint that can be queried using our custom DeepARPredictor class."
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:sagemaker:Creating model with name: forecasting-deepar-2019-02-18-02-31-28-839\n",
"INFO:sagemaker:Creating endpoint with name deepar-electricity-demo-2019-02-18-02-17-50-611\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"--------------------------------------------------------------------------!"
]
}
],
"source": [
"predictor = estimator.deploy(\n",
" initial_instance_count=1,\n",
" instance_type='ml.m4.xlarge',\n",
" predictor_cls=DeepARPredictor)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Make predictions and plot results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can use the `predictor` object to generate predictions."
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>0.1</th>\n",
" <th>0.5</th>\n",
" <th>0.9</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>2018-04-01 00:00:00</th>\n",
" <td>532.975891</td>\n",
" <td>553.226196</td>\n",
" <td>569.472961</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-04-01 02:00:00</th>\n",
" <td>574.191345</td>\n",
" <td>590.034729</td>\n",
" <td>608.815247</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-04-01 04:00:00</th>\n",
" <td>595.375610</td>\n",
" <td>611.162781</td>\n",
" <td>631.016907</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-04-01 06:00:00</th>\n",
" <td>591.094055</td>\n",
" <td>608.075806</td>\n",
" <td>629.426270</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2018-04-01 08:00:00</th>\n",
" <td>588.998962</td>\n",
" <td>609.441711</td>\n",
" <td>630.701538</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" 0.1 0.5 0.9\n",
"2018-04-01 00:00:00 532.975891 553.226196 569.472961\n",
"2018-04-01 02:00:00 574.191345 590.034729 608.815247\n",
"2018-04-01 04:00:00 595.375610 611.162781 631.016907\n",
"2018-04-01 06:00:00 591.094055 608.075806 629.426270\n",
"2018-04-01 08:00:00 588.998962 609.441711 630.701538"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"predictor.predict(ts=timeseries[0], quantiles=[0.10, 0.5, 0.90]).head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below we define a plotting function that queries the model and displays the forecast."
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"def plot(\n",
" predictor, \n",
" target_ts, \n",
" cat=None, \n",
" dynamic_feat=None, \n",
" forecast_date=end_training, \n",
" show_samples=False, \n",
" plot_history=7 * 12,\n",
" confidence=80\n",
"):\n",
" print(\"calling served model to generate predictions starting from {}\".format(str(forecast_date)))\n",
" assert(confidence > 50 and confidence < 100)\n",
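 # e.g. confidence=80 gives quantile bounds 0.1 and 0.9\n",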
" low_quantile = 0.5 - confidence * 0.005\n",
" up_quantile = confidence * 0.005 + 0.5\n",
" \n",
" # we first construct the argument to call our model\n",
" args = {\n",
" \"ts\": target_ts[:forecast_date],\n",
" \"return_samples\": show_samples,\n",
" \"quantiles\": [low_quantile, 0.5, up_quantile],\n",
" \"num_samples\": 100\n",
" }\n",
"\n",
"\n",
" if dynamic_feat is not None:\n",
" args[\"dynamic_feat\"] = dynamic_feat\n",
" fig = plt.figure(figsize=(20, 6))\n",
" ax = plt.subplot(2, 1, 1)\n",
" else:\n",
" fig = plt.figure(figsize=(20, 3))\n",
" ax = plt.subplot(1,1,1)\n",
" \n",
" if cat is not None:\n",
" args[\"cat\"] = cat\n",
" ax.text(0.9, 0.9, 'cat = {}'.format(cat), transform=ax.transAxes)\n",
"\n",
" # call the end point to get the prediction\n",
" prediction = predictor.predict(**args)\n",
"\n",
" # plot the samples\n",
" if show_samples: \n",
" for key in prediction.keys():\n",
" if \"sample\" in key:\n",
" prediction[key].plot(color='lightskyblue', alpha=0.2, label='_nolegend_')\n",
" \n",
" \n",
" # plot the target\n",
" target_section = target_ts[forecast_date-plot_history:forecast_date+prediction_length]\n",
" target_section.plot(color=\"black\", label='target')\n",
" \n",
" # plot the confidence interval and the median predicted\n",
" ax.fill_between(\n",
" prediction[str(low_quantile)].index, \n",
" prediction[str(low_quantile)].values, \n",
" prediction[str(up_quantile)].values, \n",
" color=\"b\", alpha=0.3, label='{}% confidence interval'.format(confidence)\n",
" )\n",
" prediction[\"0.5\"].plot(color=\"b\", label='P50')\n",
" ax.legend(loc=2) \n",
" \n",
" # fix the scale as the samples may change it\n",
" ax.set_ylim(target_section.min() * 0.5, target_section.max() * 1.5)\n",
" \n",
" if dynamic_feat is not None:\n",
" for i, f in enumerate(dynamic_feat, start=1):\n",
" ax = plt.subplot(len(dynamic_feat) * 2, 1, len(dynamic_feat) + i, sharex=ax)\n",
" feat_ts = pd.Series(\n",
" index=pd.DatetimeIndex(start=target_ts.index[0], freq=target_ts.index.freq, periods=len(f)),\n",
" data=f\n",
" )\n",
" feat_ts[forecast_date-plot_history:forecast_date+prediction_length].plot(ax=ax, color='g')"
]
},
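{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before wiring this into interactive widgets, we can also call `plot` directly for a single static forecast plot; for example (a minimal call relying on the `predictor`, `timeseries` and `end_training` objects defined earlier):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# example call: a single static plot at the end of the training range\n",
"plot(\n",
" predictor,\n",
" target_ts=timeseries[0],\n",
" forecast_date=end_training,\n",
" show_samples=True,\n",
" confidence=80\n",
")"
]
},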
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can interact with the function previously defined, to look at the forecast of any customer at any point in (future) time. \n",
"\n",
"For each request, the predictions are obtained by calling our served model on the fly.\n",
"\n",
"Here we forecast the consumption of an office after week-end (note the lower week-end consumption). \n",
"You can select any time series and any forecast date, just click on `Run Interact` to generate the predictions from our served endpoint and see the plot."
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"style = {'description_width': 'initial'}"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "15c58dbcce3e4102b324fe02ac0202b2",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"interactive(children=(IntSlider(value=51, description='forecast_day', style=SliderStyle(description_width='ini…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"@interact_manual(\n",
" forecast_day=IntSlider(min=0, max=100, value=51, style=style),\n",
" confidence=IntSlider(min=60, max=95, value=80, step=5, style=style),\n",
" history_weeks_plot=IntSlider(min=1, max=20, value=1, style=style),\n",
" show_samples=Checkbox(value=False),\n",
" continuous_update=False\n",
")\n",
"def plot_interact(forecast_day, confidence, history_weeks_plot, show_samples):\n",
" plot(\n",
" predictor,\n",
" target_ts=timeseries[0],\n",
" forecast_date=end_training + datetime.timedelta(days=forecast_day),\n",
" show_samples=show_samples,\n",
" plot_history=history_weeks_plot * 12 * 7,\n",
" confidence=confidence\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"# Additional features\n",
"\n",
"We have seen how to prepare a dataset and run DeepAR for a simple example.\n",
"\n",
"In addition DeepAR supports the following features:\n",
"\n",
"* missing values: DeepAR can handle missing values in the time series during training as well as for inference.\n",
"* Additional time features: DeepAR provides a set default time series features such as hour of day etc. However, you can provide additional feature time series via the `dynamic_feat` field. \n",
"* generalize frequencies: any integer multiple of the previously supported base frequencies (minutes `min`, hours `H`, days `D`, weeks `W`, month `M`) are now allowed; e.g., `15min`. We already demonstrated this above by using `2H` frequency.\n",
"* categories: If your time series belong to different groups (e.g. types of product, regions, etc), this information can be encoded as one or more categorical features using the `cat` field.\n",
"\n",
"We will now demonstrate the missing values and time features support. For this part we will reuse the electricity dataset but will do some artificial changes to demonstrate the new features: \n",
"* We will randomly mask parts of the time series to demonstrate the missing values support.\n",
"* We will include a \"special-day\" that occurs at different days for different time series during this day we introduce a strong up-lift\n",
"* We train the model on this dataset giving \"special-day\" as a custom time series feature"
]
},
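{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, a single record in the training JSON could then look as follows (a sketch with made-up values; the actual dataset construction is shown in the next section):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a hypothetical training record combining the features above (all values made up)\n",
"example_record = {\n",
" \"start\": \"2018-01-01 00:00:00\", # first timestamp of the series\n",
" \"target\": [5.2, \"NaN\", 6.1, 7.0], # missing values are encoded as the string \"NaN\"\n",
" \"cat\": [0, 2], # one or more integer category ids\n",
" \"dynamic_feat\": [[0.0, 0.0, 1.0, 0.0]] # each custom feature must cover the full target length\n",
"}\n",
"print(json.dumps(example_record))"
]
},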
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare dataset"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As discussed above we will create a \"special-day\" feature and create an up-lift for the time series during this day. This simulates real world application where you may have things like promotions of a product for a certain time or a special event that influences your time series. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def create_special_day_feature(ts, fraction=0.05):\n",
 # first select a random fraction of the day indices (plus the forecast day)\n",
 num_days = (ts.index[-1] - ts.index[0]).days\n",
 rand_indices = list(np.random.randint(0, num_days, int(num_days * fraction))) + [num_days]\n",
" \n",
" feature_value = np.zeros_like(ts)\n",
" for i in rand_indices:\n",
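 # at 2H frequency, each day covers 12 consecutive time points\n",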
" feature_value[i * 12: (i + 1) * 12] = 1.0\n",
" feature = pd.Series(index=ts.index, data=feature_value)\n",
" return feature\n",
"\n",
"def drop_at_random(ts, drop_probability=0.1):\n",
" assert(0 <= drop_probability < 1)\n",
" random_mask = np.random.random(len(ts)) < drop_probability\n",
" return ts.mask(random_mask)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"special_day_features = [create_special_day_feature(ts) for ts in timeseries]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now create the up-lifted time series and randomly remove time points.\n",
"\n",
"The figures below show some example time series and the `special_day` feature value in green. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"timeseries_uplift = [ts * (1.0 + feat) for ts, feat in zip(timeseries, special_day_features)]\n",
"time_series_processed = [drop_at_random(ts) for ts in timeseries_uplift]"
]
},
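{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check (a small sketch, not part of the original pipeline), the average fraction of masked points should be close to the default `drop_probability` of 0.1:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sanity check: average fraction of missing (masked) points across all series\n",
"missing_fraction = np.mean([ts.isnull().mean() for ts in time_series_processed])\n",
"print('average fraction of missing points: {:.3f}'.format(missing_fraction))"
]
},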
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig, axs = plt.subplots(5, 2, figsize=(20, 20), sharex=True)\n",
"axx = axs.ravel()\n",
"for i in range(0, 10):\n",
" ax = axx[i]\n",
" ts = time_series_processed[i][:400]\n",
" ts.plot(ax=ax)\n",
" ax.set_ylim(-0.1 * ts.max(), ts.max())\n",
" ax2 = ax.twinx()\n",
" special_day_features[i][:400].plot(ax=ax2, color='g')\n",
" ax2.set_ylim(-0.2, 7)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"training_data_new_features = [\n",
" {\n",
" \"start\": str(start_dataset),\n",
" \"target\": encode_target(ts[start_dataset:end_training]),\n",
" \"dynamic_feat\": [special_day_features[i][start_dataset:end_training].tolist()]\n",
" }\n",
" for i, ts in enumerate(time_series_processed)\n",
"]\n",
"print(len(training_data_new_features))\n",
"\n",
"# as in our previous example, we do a rolling evaluation over the next 7 days\n",
"num_test_windows = 7\n",
"\n",
"test_data_new_features = [\n",
" {\n",
" \"start\": str(start_dataset),\n",
" \"target\": encode_target(ts[start_dataset:end_training + 2*k*prediction_length]),\n",
" \"dynamic_feat\": [special_day_features[i][start_dataset:end_training + 2*k*prediction_length].tolist()]\n",
" }\n",
" for k in range(1, num_test_windows + 1) \n",
" for i, ts in enumerate(timeseries_uplift)\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def check_dataset_consistency(train_dataset, test_dataset=None):\n",
" d = train_dataset[0]\n",
" has_dynamic_feat = 'dynamic_feat' in d\n",
" if has_dynamic_feat:\n",
" num_dynamic_feat = len(d['dynamic_feat'])\n",
" has_cat = 'cat' in d\n",
" if has_cat:\n",
" num_cat = len(d['cat'])\n",
" \n",
" def check_ds(ds):\n",
" for i, d in enumerate(ds):\n",
" if has_dynamic_feat:\n",
" assert 'dynamic_feat' in d\n",
" assert num_dynamic_feat == len(d['dynamic_feat'])\n",
" for f in d['dynamic_feat']:\n",
" assert len(d['target']) == len(f)\n",
" if has_cat:\n",
" assert 'cat' in d\n",
" assert len(d['cat']) == num_cat\n",
" check_ds(train_dataset)\n",
" if test_dataset is not None:\n",
" check_ds(test_dataset)\n",
" \n",
"check_dataset_consistency(training_data_new_features, test_data_new_features)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"write_dicts_to_file(\"train_new_features.json\", training_data_new_features)\n",
"write_dicts_to_file(\"test_new_features.json\", test_data_new_features)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"s3_data_path_new_features = \"s3://{}/{}-new-features/data\".format(s3_bucket, s3_prefix)\n",
"s3_output_path_new_features = \"s3://{}/{}-new-features/output\".format(s3_bucket, s3_prefix)\n",
"\n",
"print('Uploading to S3 this may take a few minutes depending on your connection.')\n",
"copy_to_s3(\"train_new_features.json\", s3_data_path_new_features + \"/train/train_new_features.json\", override=True)\n",
"copy_to_s3(\"test_new_features.json\", s3_data_path_new_features + \"/test/test_new_features.json\", override=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"estimator_new_features = sagemaker.estimator.Estimator(\n",
" sagemaker_session=sagemaker_session,\n",
" image_name=image_name,\n",
" role=role,\n",
" train_instance_count=1,\n",
" train_instance_type='ml.c4.2xlarge',\n",
" base_job_name='deepar-electricity-demo-new-features',\n",
" output_path=s3_output_path_new_features\n",
")\n",
"\n",
"hyperparameters = {\n",
" \"time_freq\": freq,\n",
" \"context_length\": str(context_length),\n",
" \"prediction_length\": str(prediction_length),\n",
" \"epochs\": \"400\",\n",
" \"learning_rate\": \"5E-4\",\n",
" \"mini_batch_size\": \"64\",\n",
" \"early_stopping_patience\": \"40\",\n",
" \"num_dynamic_feat\": \"auto\", # this will use the `dynamic_feat` field if it's present in the data\n",
"}\n",
"estimator_new_features.set_hyperparameters(**hyperparameters)\n",
"\n",
"estimator_new_features.fit(\n",
" inputs={\n",
" \"train\": \"{}/train/\".format(s3_data_path_new_features),\n",
" \"test\": \"{}/test/\".format(s3_data_path_new_features)\n",
" }, \n",
" wait=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As before, we spawn an endpoint to visualize our forecasts on examples we send on the fly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"predictor_new_features = estimator_new_features.deploy(\n",
" initial_instance_count=1,\n",
" instance_type='ml.m4.xlarge',\n",
" predictor_cls=DeepARPredictor)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"customer_id = 120\n",
"predictor_new_features.predict(\n",
" ts=time_series_processed[customer_id][:-prediction_length], \n",
" dynamic_feat=[special_day_features[customer_id].tolist()], \n",
" quantiles=[0.1, 0.5, 0.9]\n",
").head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As before, we can query the endpoint to see predictions for arbitrary time series and time points."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@interact_manual(\n",
" customer_id=IntSlider(min=0, max=369, value=13, style=style), \n",
" forecast_day=IntSlider(min=0, max=100, value=21, style=style),\n",
" confidence=IntSlider(min=60, max=95, value=80, step=5, style=style),\n",
" missing_ratio=FloatSlider(min=0.0, max=0.95, value=0.2, step=0.05, style=style),\n",
" show_samples=Checkbox(value=False),\n",
" continuous_update=False\n",
")\n",
"def plot_interact(customer_id, forecast_day, confidence, missing_ratio, show_samples): \n",
" forecast_date = end_training + datetime.timedelta(days=forecast_day)\n",
" target = time_series_processed[customer_id][start_dataset:forecast_date + prediction_length]\n",
" target = drop_at_random(target, missing_ratio)\n",
" dynamic_feat = [special_day_features[customer_id][start_dataset:forecast_date + prediction_length].tolist()]\n",
" plot(\n",
" predictor_new_features,\n",
" target_ts=target, \n",
" dynamic_feat=dynamic_feat,\n",
" forecast_date=forecast_date,\n",
" show_samples=show_samples, \n",
" plot_history=7*12,\n",
" confidence=confidence\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete endpoints"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"predictor.delete_endpoint()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"predictor_new_features.delete_endpoint()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.2"
},
"notice": "Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
},
"nbformat": 4,
"nbformat_minor": 2
}