{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "mxnet_on_colab.ipynb",
"version": "0.3.2",
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"metadata": {
"id": "sxTZKk9GfQAQ",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# Google Colaboratory에서 Keras의 백엔드로서 MXNet을 설정하는 방법\n",
"\n",
"> 이 문서는 [Shikoan](https://blog.shikoan.com/)님이 작성한 [\"Google ColaboratoryでMXNetバックエンドの高速Kerasを体験しよう\"](https://qiita.com/koshian2/items/b688bb8ac05e1c956e9f)를 번역한 것 입니다. 번역을 허락해 주신 Shikoan님에게 감사드립니다."
]
},
{
"metadata": {
"id": "s0g887LoDd06",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"## Keras + MXNet은 빠른다\n",
"---\n",
"Keras의 MXNet백엔드는, Amazon에 의해 개발이 진행되고 있습니다.\n",
"\n",
"[이전에 작성한 글](https://qiita.com/koshian2/items/9877ed4fb3716eac0c37)에서 Keras는 느리다고 했더니 MXNet를 백엔드로 사용하면 빠르다고 지적해 주신 분들이 있었기 때문에 어떻게든 Google Colab에서 사용하는 방법을 궁리해 보았습니다. Colab에서 이를 설정하는 정보가 전혀 없어기 때문에 이 글을 작성합니다.\n",
"\n",
"| Instance Type | GPUs | Batch Size | Keras-MXNet (img/sec) | Keras-TensorFlow (img/sec) |\n",
"|---------------|------|------------|-----------------------|----------------------------|\n",
"| C5.18X Large | 0 | 32 | 87 | 59 |\n",
"| P3.8X Large | 1 | 32 | 831 | 509 |\n",
"| P3.8X Large | 4 | 128 | 1783 | 699 |\n",
"| P3.16X Large | 8 | 256 | 1680 | 435 |\n",
"\n",
"출전 : https://github.com/awslabs/keras-apache-mxnet/tree/master/benchmark\n",
"\n",
"이게 진짠가 싶을 정도 인데요, 특히 GPU수가 늘어날수록 차이가 많이 나네요. 꼭 한번 여러분도 체험 해 보시기 바랍니다."
]
},
{
"metadata": {
"id": "LA4NIO6JMpe0",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"## 인스톨\n",
"---\n",
"2018년 8월 현재, Keras는 공식적으로는 MXNet을 백엔드로 대응하지 않고 있기 때문에, 포크된 Keras(keras-mxnet)을 사용합니다.\n",
"\n",
"Google Colab의 GPU엑셀레이터를 활성화 한다음 GPU대응의 MXNet을 인스톨 합니다.\n"
]
},
{
"metadata": {
"id": "7b2ZcEx-f0By",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 258
},
"outputId": "6733f639-d0b2-456b-dd5a-2e2a56f37124"
},
"cell_type": "code",
"source": [
"!pip install mxnet-cu80\n"
],
"execution_count": 1,
"outputs": [
{
"output_type": "stream",
"text": [
"Collecting mxnet-cu80\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/f6/6c/566a1d4b8b1005b7d9ccfaecd7632f6dca596246f6657827b2d4e97c72c7/mxnet_cu80-1.2.1-py2.py3-none-manylinux1_x86_64.whl (299.1MB)\n",
"\u001b[K 100% |████████████████████████████████| 299.1MB 123kB/s \n",
"\u001b[?25hRequirement already satisfied: requests<2.19.0,>=2.18.4 in /usr/local/lib/python3.6/dist-packages (from mxnet-cu80) (2.18.4)\n",
"Collecting graphviz<0.9.0,>=0.8.1 (from mxnet-cu80)\n",
" Downloading https://files.pythonhosted.org/packages/53/39/4ab213673844e0c004bed8a0781a0721a3f6bb23eb8854ee75c236428892/graphviz-0.8.4-py2.py3-none-any.whl\n",
"Requirement already satisfied: numpy<1.15.0,>=1.8.2 in /usr/local/lib/python3.6/dist-packages (from mxnet-cu80) (1.14.5)\n",
"Requirement already satisfied: idna<2.7,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<2.19.0,>=2.18.4->mxnet-cu80) (2.6)\n",
"Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<2.19.0,>=2.18.4->mxnet-cu80) (3.0.4)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<2.19.0,>=2.18.4->mxnet-cu80) (2018.8.24)\n",
"Requirement already satisfied: urllib3<1.23,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<2.19.0,>=2.18.4->mxnet-cu80) (1.22)\n",
"Installing collected packages: graphviz, mxnet-cu80\n",
"Successfully installed graphviz-0.8.4 mxnet-cu80-1.2.1\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "WNZ7O2L3NnWl",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"CUDA 9.0대응의 MXNet을 인스톨 하면, import keras실행시에 libcudart.so.9.0: cannot open shared object file: No such file or directory라는 에러메세지가 표시됩니다. CUDA 8.0의 MXNet을 인스톨 하는 경우엔 문제 없었습니다. 이 문제에 대한 해결책이 있다면 알려주세요.\n",
"\n",
"그 다음에는 keras-mxnet도 마찬가지로 인스톨합니다."
]
},
{
"metadata": {
"id": "KpjgOo4C8ggy",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 309
},
"outputId": "a966343c-c59b-4f02-c96f-8bcfc365f936"
},
"cell_type": "code",
"source": [
"!pip install keras-mxnet"
],
"execution_count": 2,
"outputs": [
{
"output_type": "stream",
"text": [
"Collecting keras-mxnet\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/fc/96/9276f9f6fc8f55f8e4fdf0a481d6cf132bb0c91aa0454e1c82dd739dd720/keras_mxnet-2.2.2-py2.py3-none-any.whl (353kB)\n",
"\u001b[K 100% |████████████████████████████████| 358kB 5.7MB/s \n",
"\u001b[?25hRequirement already satisfied: h5py>=2.7.1 in /usr/local/lib/python3.6/dist-packages (from keras-mxnet) (2.8.0)\n",
"Collecting keras-preprocessing==1.0.1 (from keras-mxnet)\n",
" Downloading https://files.pythonhosted.org/packages/f8/33/275506afe1d96b221f66f95adba94d1b73f6b6087cfb6132a5655b6fe338/Keras_Preprocessing-1.0.1-py2.py3-none-any.whl\n",
"Requirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from keras-mxnet) (1.11.0)\n",
"Requirement already satisfied: numpy>=1.9.1 in /usr/local/lib/python3.6/dist-packages (from keras-mxnet) (1.14.5)\n",
"Collecting keras-applications==1.0.4 (from keras-mxnet)\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/54/90/8f327deaa37a71caddb59b7b4aaa9d4b3e90c0e76f8c2d1572005278ddc5/Keras_Applications-1.0.4-py2.py3-none-any.whl (43kB)\n",
"\u001b[K 100% |████████████████████████████████| 51kB 5.6MB/s \n",
"\u001b[?25hRequirement already satisfied: scipy>=0.14 in /usr/local/lib/python3.6/dist-packages (from keras-mxnet) (0.19.1)\n",
"Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from keras-mxnet) (3.13)\n",
"Requirement already satisfied: keras>=2.1.6 in /usr/local/lib/python3.6/dist-packages (from keras-preprocessing==1.0.1->keras-mxnet) (2.1.6)\n",
"Installing collected packages: keras-preprocessing, keras-applications, keras-mxnet\n",
"Successfully installed keras-applications-1.0.4 keras-mxnet-2.2.2 keras-preprocessing-1.0.1\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "28hRrC5EO2Ir",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"## Channels_first로 전환\n",
"---\n",
"CNN에서 사용하는 경우, 이 부분만 주의할 필요가 있습니다. **MXNet은 Channel_first** 이므로, 예를 들면 CIFAR라면 (50000, 32, 32, 3)이 아니라 (50000, 3, 32, 32)가 됩니다. TensorFlow 처럼 channel_last로 작성하면 실패하기 때문이라고 생각합니다.\n",
"\n",
"## Keras.json의 편집\n",
"\n",
"디폴트를 channels_last로부터 channels_first로 전환합니다. Jupyter Notebook에서도 가능합니다. 다음의 코드를 실행해 주세요.\n"
]
},
{
"metadata": {
"id": "UJZorQ768ttQ",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"import os\n",
"keras_json='{\\n \"floatx\": \"float32\",\\n \"epsilon\": 1e-07,\\n \"backend\": \"mxnet\",\\n \"image_data_format\": \"channels_first\"\\n}'\n",
"keras_json_dir=os.environ['HOME']+\"/.keras\"\n",
"if not os.path.exists(keras_json_dir): os.mkdir(keras_json_dir)\n",
"with open(keras_json_dir+\"/keras.json\", \"w\") as fp:\n",
" fp.write(keras_json)\n",
" \n"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "hOtU_NudQDJW",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"제데로 반영이 되었는지는 다음의 코드를 실행하는 것으로 확인 가능합니다."
]
},
{
"metadata": {
"id": "2g9W4XJc9GWs",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 119
},
"outputId": "95b05bee-1b40-43c4-c687-5784224f94b6"
},
"cell_type": "code",
"source": [
"!cat ~/.keras/keras.json"
],
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"text": [
"{\r\n",
" \"floatx\": \"float32\",\r\n",
" \"epsilon\": 1e-07,\r\n",
" \"backend\": \"mxnet\",\r\n",
" \"image_data_format\": \"channels_first\"\r\n",
"}"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "nMgorEiVRJfL",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"## MXNet이 백엔드로 지정되었는지를 확인\n",
"\n",
"import keras를 실행하여 Using MXNet Backend라고 표시되면 OK입니다."
]
},
{
"metadata": {
"id": "DhrU3iOPRW1S",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "d0067ac6-a6b1-42b3-c4d2-8b152df0a442"
},
"cell_type": "code",
"source": [
"import keras"
],
"execution_count": 5,
"outputs": [
{
"output_type": "stream",
"text": [
"Using MXNet backend\n"
],
"name": "stderr"
}
]
},
{
"metadata": {
"id": "n-V7QrjbRcIU",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"## 모델 작성시의 주의점\n",
"channels_first이기 때문에 Conv2d나 Pooling과 같은 레이어를 추가할 때엔, data_format과 같이 명시적으로 선언해 줄 필요가 있습니다. 또한, BatchNormalization이라던지 Merge (Add, Concatenate등)를 실시 할 때엔, axis=1 이라고 체널 축을 지정할 필요가 있습니다.\n",
"\n",
"에러 메세지에도 나오지만, channels_last로 하게 되면 너무 무거워지기 때문에 주의가 필요합니다.\n",
"\n",
"## MNIST샘플\n",
"---\n",
"\n",
"간단한 샘플코드로서 AlexNet을 만들어 봤습니다. 데이터는 MNIST입니다.\n",
"\n"
]
},
{
"metadata": {
"id": "ds-tGFlAToAN",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 819
},
"outputId": "db8716c7-3998-442b-dffb-74a3b25088a8"
},
"cell_type": "code",
"source": [
"from keras.layers import Conv2D, Activation, MaxPooling2D, BatchNormalization, Input, Flatten, Dense\n",
"from keras.models import Model\n",
"from keras.optimizers import Adam\n",
"from keras.datasets import mnist\n",
"from keras.utils import to_categorical\n",
"import numpy as np\n",
"\n",
"# Data\n",
"(X_train, y_train), (X_test, y_test) = mnist.load_data()\n",
"X_train = np.expand_dims(X_train, axis=1)\n",
"X_test = np.expand_dims(X_test, axis=1)\n",
"X_train = X_train / 255.0\n",
"X_test = X_test / 255.0\n",
"y_train = to_categorical(y_train)\n",
"y_test = to_categorical(y_test)\n",
"\n",
"def create_conv_layers(input, nb_filters, kernel_size):\n",
" x = Conv2D(nb_filters, kernel_size=kernel_size, padding=\"same\", data_format=\"channels_first\")(input)\n",
" x = BatchNormalization(axis=1)(x)\n",
" x = Activation(\"relu\")(x)\n",
" return x\n",
"\n",
"# Model\n",
"input = Input(shape=(1, 28, 28))\n",
"x = create_conv_layers(input, 256, 5)\n",
"x = create_conv_layers(x, 256, 5)\n",
"x = MaxPooling2D((2,2), data_format=\"channels_first\")(x)\n",
"x = create_conv_layers(x, 384, 3)\n",
"x = create_conv_layers(x, 384, 3)\n",
"x = create_conv_layers(x, 384, 3)\n",
"x = Flatten()(x)\n",
"x = Dense(10, activation=\"softmax\")(x)\n",
"\n",
"model = Model(input, x)\n",
"model.compile(Adam(lr=0.0001), loss=\"categorical_crossentropy\", metrics=[\"acc\"])\n",
"model.fit(X_train, y_train, batch_size=128, epochs=20, validation_data=(X_test, y_test))"
],
"execution_count": 6,
"outputs": [
{
"output_type": "stream",
"text": [
"Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz\n",
"11493376/11490434 [==============================] - 1s 0us/step\n",
"Train on 60000 samples, validate on 10000 samples\n",
"Epoch 1/20\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"/usr/local/lib/python3.6/dist-packages/mxnet/module/bucketing_module.py:408: UserWarning: Optimizer created manually outside Module but rescale_grad is not normalized to 1.0/batch_size/num_workers (1.0 vs. 0.0078125). Is this intended?\n",
" force_init=force_init)\n"
],
"name": "stderr"
},
{
"output_type": "stream",
"text": [
"60000/60000 [==============================] - 138s 2ms/step - loss: 0.1319 - acc: 0.9628 - val_loss: 0.1098 - val_acc: 0.9716\n",
"Epoch 2/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0522 - acc: 0.9850 - val_loss: 0.0747 - val_acc: 0.9805\n",
"Epoch 3/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0400 - acc: 0.9881 - val_loss: 0.0798 - val_acc: 0.9803\n",
"Epoch 4/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0350 - acc: 0.9897 - val_loss: 0.0415 - val_acc: 0.9898\n",
"Epoch 5/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0319 - acc: 0.9911 - val_loss: 0.0848 - val_acc: 0.9785\n",
"Epoch 6/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0299 - acc: 0.9920 - val_loss: 0.0463 - val_acc: 0.9891\n",
"Epoch 7/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0297 - acc: 0.9924 - val_loss: 0.0826 - val_acc: 0.9815\n",
"Epoch 8/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0213 - acc: 0.9938 - val_loss: 0.0363 - val_acc: 0.9920\n",
"Epoch 9/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0240 - acc: 0.9938 - val_loss: 0.0380 - val_acc: 0.9923\n",
"Epoch 10/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0213 - acc: 0.9942 - val_loss: 0.0831 - val_acc: 0.9841\n",
"Epoch 11/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0247 - acc: 0.9940 - val_loss: 0.0403 - val_acc: 0.9923\n",
"Epoch 12/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0169 - acc: 0.9957 - val_loss: 0.0631 - val_acc: 0.9889\n",
"Epoch 13/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0182 - acc: 0.9955 - val_loss: 0.0635 - val_acc: 0.9885\n",
"Epoch 14/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0193 - acc: 0.9951 - val_loss: 0.0624 - val_acc: 0.9889\n",
"Epoch 15/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0180 - acc: 0.9957 - val_loss: 0.0371 - val_acc: 0.9920\n",
"Epoch 16/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0152 - acc: 0.9961 - val_loss: 0.0626 - val_acc: 0.9900\n",
"Epoch 17/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0182 - acc: 0.9960 - val_loss: 0.0634 - val_acc: 0.9901\n",
"Epoch 18/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0155 - acc: 0.9963 - val_loss: 0.0577 - val_acc: 0.9900\n",
"Epoch 19/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0154 - acc: 0.9964 - val_loss: 0.0803 - val_acc: 0.9871\n",
"Epoch 20/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0112 - acc: 0.9970 - val_loss: 0.0977 - val_acc: 0.9840\n"
],
"name": "stdout"
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<keras.callbacks.History at 0x7f79edf6ce10>"
]
},
"metadata": {
"tags": []
},
"execution_count": 6
}
]
},
{
"metadata": {
"id": "ltuI7xLpTu9Y",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"\n",
"결과는 다음과 비슷하게 나올껍니다.\n",
"\n",
"```\n",
"Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz\n",
"11493376/11490434 [==============================] - 1s 0us/step\n",
"Train on 60000 samples, validate on 10000 samples\n",
"Epoch 1/20\n",
"/usr/local/lib/python3.6/dist-packages/mxnet/module/bucketing_module.py:408: UserWarning: Optimizer created manually outside Module but rescale_grad is not normalized to 1.0/batch_size/num_workers (1.0 vs. 0.0078125). Is this intended?\n",
" force_init=force_init)\n",
"60000/60000 [==============================] - 138s 2ms/step - loss: 0.1319 - acc: 0.9628 - val_loss: 0.1098 - val_acc: 0.9716\n",
"Epoch 2/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0522 - acc: 0.9850 - val_loss: 0.0747 - val_acc: 0.9805\n",
"Epoch 3/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0400 - acc: 0.9881 - val_loss: 0.0798 - val_acc: 0.9803\n",
"Epoch 4/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0350 - acc: 0.9897 - val_loss: 0.0415 - val_acc: 0.9898\n",
"Epoch 5/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0319 - acc: 0.9911 - val_loss: 0.0848 - val_acc: 0.9785\n",
"Epoch 6/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0299 - acc: 0.9920 - val_loss: 0.0463 - val_acc: 0.9891\n",
"Epoch 7/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0297 - acc: 0.9924 - val_loss: 0.0826 - val_acc: 0.9815\n",
"Epoch 8/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0213 - acc: 0.9938 - val_loss: 0.0363 - val_acc: 0.9920\n",
"Epoch 9/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0240 - acc: 0.9938 - val_loss: 0.0380 - val_acc: 0.9923\n",
"Epoch 10/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0213 - acc: 0.9942 - val_loss: 0.0831 - val_acc: 0.9841\n",
"Epoch 11/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0247 - acc: 0.9940 - val_loss: 0.0403 - val_acc: 0.9923\n",
"Epoch 12/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0169 - acc: 0.9957 - val_loss: 0.0631 - val_acc: 0.9889\n",
"Epoch 13/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0182 - acc: 0.9955 - val_loss: 0.0635 - val_acc: 0.9885\n",
"Epoch 14/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0193 - acc: 0.9951 - val_loss: 0.0624 - val_acc: 0.9889\n",
"Epoch 15/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0180 - acc: 0.9957 - val_loss: 0.0371 - val_acc: 0.9920\n",
"Epoch 16/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0152 - acc: 0.9961 - val_loss: 0.0626 - val_acc: 0.9900\n",
"Epoch 17/20\n",
"60000/60000 [==============================] - 126s 2ms/step - loss: 0.0182 - acc: 0.9960 - val_loss: 0.0634 - val_acc: 0.9901\n",
"Epoch 18/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0155 - acc: 0.9963 - val_loss: 0.0577 - val_acc: 0.9900\n",
"Epoch 19/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0154 - acc: 0.9964 - val_loss: 0.0803 - val_acc: 0.9871\n",
"Epoch 20/20\n",
"60000/60000 [==============================] - 125s 2ms/step - loss: 0.0112 - acc: 0.9970 - val_loss: 0.0977 - val_acc: 0.9840\n",
"<keras.callbacks.History at 0x7f79edf6ce10>\n",
"```\n",
"\n",
"덧붙여서 이것을 친숙한 TensorFlow 백엔드에서 실행시키면 이렇게 나옵니다.\n",
"\n",
"```\n",
"Epoch 1/20\n",
"60000/60000 [==============================] - 153s 3ms/step - loss: 0.1154 - acc: 0.9648 - val_loss: 0.0774 - val_acc: 0.9795\n",
"```\n",
"\n"
]
}
]
}