tf_dev_summit_2019 (Gist by @Mattamorphic, created May 10, 2021)
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "2kXAFPLsEKZm"
},
"source": [
"##### Copyright 2019 The TensorFlow Authors.\n"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"colab": {},
"colab_type": "code",
"id": "4eFFg1vlEJGY"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Zr1JuYsaoTDE"
},
"source": [
"# What's new with TensorBoard?\n",
"\n",
"\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
" \u003ctd\u003e\n",
" \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/community/en/tensorboard/tf_dev_summit_2019.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
" \u003c/td\u003e\n",
" \u003ctd\u003e\n",
" \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/tensorboard/examples/blob/master/community/en/tensorboard/tf_dev_summit_2019.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n",
" \u003c/td\u003e\n",
"\u003c/table\u003e"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "diTH3H5L6pcB"
},
"source": [
"This is a notebook from the TensorBoard talk at the TensorFlow Dev Summit 2019, which can be found [here](https://www.youtube.com/watch?v=xM8sO33x_OU). More detailed documentation about these features can be found at [tensorflow.org/tensorboard](https://www.tensorflow.org/tensorboard)\n",
"\n",
"This notebook was slightly modified after the demo to remove account-specific data."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "h8yQwDlIz4My"
},
"outputs": [],
"source": [
"# Load the TensorBoard notebook extension\n",
"%load_ext tensorboard.notebook "
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "XaacEJm60MUt"
},
"outputs": [],
"source": [
"# Clear any logs from previous runs\n",
"!rm -rf ./logs/ "
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "ObvQlJAH6Aln"
},
"outputs": [],
"source": [
"from __future__ import absolute_import, division, print_function, unicode_literals\n",
"%tensorflow_version 2.x\n",
"import tensorflow as tf\n",
"import datetime"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "OEPfW_CIoaaI"
},
"source": [
"### A dataset you've never seen before. FashionMNIST"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "UmLnA3h282gS"
},
"outputs": [],
"source": [
"fashion_mnist = tf.keras.datasets.fashion_mnist\n",
" \n",
"(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()\n",
"x_train, x_test = x_train / 255.0, x_test / 255.0"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "s9qqxNNOowFB"
},
"source": [
"### Keras Sequential model trained with `.fit()`"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "za6m7-M0-hOG"
},
"outputs": [],
"source": [
"def train_test_model(run_dir, hparams):\n",
"\n",
" model = tf.keras.models.Sequential([\n",
" tf.keras.layers.Flatten(),\n",
" tf.keras.layers.Dense(hparams['num_units'], activation=tf.nn.relu),\n",
" tf.keras.layers.Dropout(hparams['dropout_rate']),\n",
" tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n",
" ])\n",
"\n",
" model.compile(optimizer= hparams['optimizer'],\n",
" loss='sparse_categorical_crossentropy',\n",
" metrics=['accuracy'])\n",
"\n",
" model.fit(x_train, y_train, \n",
" validation_data=(x_test, y_test), \n",
" epochs=3, \n",
" callbacks=[tf.keras.callbacks.TensorBoard(run_dir + \"/keras\")])\n",
" \n",
" scores = model.evaluate(x_test, y_test)\n",
" return scores"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "AHiPRhtj_JNa"
},
"outputs": [],
"source": [
"train_test_model(\"logs/sample\", {'num_units' : 16, 'dropout_rate' : 0.1, 'optimizer' : 'adam'})"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "YYwlqWbXo5tc"
},
"source": [
"### Alright let's download this data, install TensorBoard, and open it locally. Or..."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "eGEC-gRWDJux"
},
"outputs": [],
"source": [
"%tensorboard --logdir logs/sample"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "oIgtqsiZ8Ktv"
},
"source": [
"Note that train and validation now appear as separate runs so you can compare them on the same charts. \n",
"\n",
"In the Graphs dashboard, click on \"keras\" in the \"Tag\" dropdown to view the Keras conceptual graph.\n",
"\n",
"Other useful APIs for working with TensorBoard in notebooks:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "4Qq2uUd0DQcA"
},
"outputs": [],
"source": [
"# from tensorboard import notebook\n",
"# notebook.display(height=1000)\n",
"# notebook.list()"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "6IPoyD1MZyvM"
},
"source": [
"# Hyperparameter Tuning"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "UpfFgNco8v8E"
},
"source": [
"Today, you can try out different hyperparameters by encoding them in the run names and then comparing them in TensorBoard. This is not ideal, so let's see if we can do something better. Note that the experience below will change over time."
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "R3xZXCasv3a7"
},
"source": [
"\u003cimg src=\"https://user-images.githubusercontent.com/2601900/53466264-2535ac00-3a06-11e9-8010-ce2f448a0758.png\" alt=\"Sad Hparams\" width=\"70%\"/\u003e"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "aKe7HKRNztF_"
},
"source": [
"### Let's try to do something better\n",
"\n",
"#### Preview APIs that will change:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "8Nb4HvuCb2Nj"
},
"outputs": [],
"source": [
"from tensorboard.plugins.hparams import api_pb2\n",
"from tensorboard.plugins.hparams import summary as hparams_summary\n",
"from google.protobuf import struct_pb2"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "qVuWG8NBzysZ"
},
"source": [
"### We'll try a few modifications to our very simple model"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "D8MuCY0uZ2xO"
},
"outputs": [],
"source": [
"num_units_list = [16, 32] # Number of units in the dense layer\n",
"dropout_rate_list = [0.1, 0.2] # Dropout rate\n",
"optimizer_list = ['adam']"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "QQsCNqTmz_KK"
},
"source": [
"### Log a summary of which hyperparameters and metrics we care about"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "fBTWqmXNaERV"
},
"outputs": [],
"source": [
"def create_experiment_summary(num_units_list, dropout_rate_list, optimizer_list):\n",
" num_units_list_val = struct_pb2.ListValue()\n",
" num_units_list_val.extend(num_units_list)\n",
" dropout_rate_list_val = struct_pb2.ListValue()\n",
" dropout_rate_list_val.extend(dropout_rate_list)\n",
" optimizer_list_val = struct_pb2.ListValue()\n",
" optimizer_list_val.extend(optimizer_list)\n",
" return hparams_summary.experiment_pb(\n",
" # List our hyperparameters\n",
" hparam_infos=[\n",
" api_pb2.HParamInfo(name='num_units',\n",
" display_name='Number of units',\n",
" type=api_pb2.DATA_TYPE_FLOAT64,\n",
" domain_discrete=num_units_list_val),\n",
" api_pb2.HParamInfo(name='dropout_rate',\n",
" display_name='Dropout rate',\n",
" type=api_pb2.DATA_TYPE_FLOAT64,\n",
" domain_discrete=dropout_rate_list_val),\n",
" api_pb2.HParamInfo(name='optimizer',\n",
" display_name='Optimizer',\n",
" type=api_pb2.DATA_TYPE_STRING,\n",
" domain_discrete=optimizer_list_val)\n",
" ],\n",
" # List our metrics\n",
" metric_infos=[\n",
" api_pb2.MetricInfo(\n",
" name=api_pb2.MetricName(\n",
" tag='accuracy'),\n",
" display_name='Accuracy'),\n",
" ]\n",
" )\n",
"\n",
"exp_summary = create_experiment_summary(num_units_list, dropout_rate_list, optimizer_list)\n",
"root_logdir_writer = tf.summary.create_file_writer(\"logs/hparam_tuning\")\n",
"with root_logdir_writer.as_default():\n",
" tf.summary.import_event(tf.compat.v1.Event(summary=exp_summary).SerializeToString())"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "pzSyvfyZ0IQ6"
},
"source": [
"### Wrap our existing training code to log when each run has finished, along with its metrics"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "UNqOHUnVasDz"
},
"outputs": [],
"source": [
"def run(run_dir, hparams):\n",
" writer = tf.summary.create_file_writer(run_dir)\n",
" summary_start = hparams_summary.session_start_pb(hparams=hparams)\n",
"\n",
" with writer.as_default():\n",
" tf.summary.import_event(tf.compat.v1.Event(summary=summary_start).SerializeToString())\n",
" loss, accuracy = train_test_model(run_dir, hparams)\n",
" \n",
" tf.summary.scalar('accuracy', accuracy, step=0, description=\"The accuracy\")\n",
" summary_end = hparams_summary.session_end_pb(api_pb2.STATUS_SUCCESS)\n",
" tf.summary.import_event(tf.compat.v1.Event(summary=summary_end).SerializeToString())"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "y0OTtS3j0eT_"
},
"source": [
"### Try all combinations of hyperparameters as separate runs"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "qLUZSP4wcTwZ"
},
"outputs": [],
"source": [
"%tensorboard --logdir logs/hparam_tuning"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "p38E9rOubmdw"
},
"outputs": [],
"source": [
"session_num = 0\n",
"\n",
"for num_units in num_units_list:\n",
" for dropout_rate in dropout_rate_list:\n",
" for optimizer in optimizer_list:\n",
" hparams = {'num_units': num_units, 'dropout_rate': dropout_rate, 'optimizer': optimizer}\n",
" print('--- Running training session %d' % (session_num + 1))\n",
" print(hparams)\n",
" run_name = \"run-%d\" % session_num\n",
" run(\"logs/hparam_tuning/\" + run_name, hparams)\n",
" session_num += 1"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "jjs5_qTK_MTi"
},
"source": [
"Refresh TensorBoard and look at the HParams dashboard for various visualizations"
]
},
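{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, instead of scrolling back up, you can re-display the most recently launched TensorBoard instance right here with the `tensorboard.notebook` helper shown earlier. A small optional cell, kept commented out:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {},
"outputs": [],
"source": [
"# Optional: re-display the most recently launched TensorBoard instance here.\n",
"# from tensorboard import notebook\n",
"# notebook.display(height=800)"
]
},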
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "rcOrkDHU05XO"
},
"source": [
"### Testing more combinations will take a while, let's cheat and look at the end result"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "RWNbag4H9ZOp"
},
"outputs": [],
"source": [
"# The Dev Summit demo showed using logs directly from Google Drive, such as \n",
"# the following:\n",
"\n",
"# from google.colab import drive\n",
"# drive.mount('/content/gdrive')\n",
"\n",
"# %tensorboard --logdir /content/gdrive/My\\ Drive/DevSummit/hparams_demo\n",
"\n",
"# For this notebook, download the logs directly (but you can place them in \n",
"# Google Drive to replicate the above experience)"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "RnEqRUM39Rqw"
},
"outputs": [],
"source": [
"%%bash\n",
"wget -q 'https://storage.googleapis.com/download.tensorflow.org/tensorboard/hparams_demo_logs.zip'\n",
"unzip -q hparams_demo_logs.zip -d logs/hparam_demo"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "H0dmFaIm9S6v"
},
"outputs": [],
"source": [
"%tensorboard --logdir logs/hparam_demo"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "0z7DLT4G1UFT"
},
"source": [
"# Summary\n",
"\n",
"We've looked at:\n",
"- TensorBoard in Colab\n",
"- Easier comparison of train and validation runs\n",
"- Keras conceptual graph visualization\n",
"- Hyperparameter tuning with the HParams dashboard\n",
"\n",
"Read the TensorBoard documentation at: [tensorflow.org/tensorboard](https://tensorflow.org/tensorboard)"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "tf_dev_summit_2019.ipynb",
"private_outputs": true,
"provenance": [],
"toc_visible": true,
"version": "0.3.2"
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}