@seongjoojin
Created March 16, 2018 00:50
study180317
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "study180317.ipynb",
"version": "0.3.2",
"views": {},
"default_view": {},
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"metadata": {
"id": "7NdWpOn8CQba",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 1. Hello Deep Learning: MNIST\n",
"\n",
"\n",
"---\n",
"\n",
"\n",
"![alt text](https://i.imgur.com/Un7aQtW.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"What is MNIST? A dataset of handwritten digits 0-9\n",
"\n",
"Mini-batch: training on the data in suitably sized chunks\n",
"\n",
"e.g., randomly pick 100 of the 60,000 training images and train on just those 100 (example from Deep Learning from Scratch)\n",
"\n",
"Epoch: one full pass over the entire training data"
]
},
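{
"metadata": {
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"A quick worked example of the terms above, using the numbers from the next cell: TensorFlow's MNIST loader keeps 55,000 of the 60,000 images in mnist.train (the other 5,000 go to a validation split), so with batch_size = 100 one epoch is 55,000 / 100 = 550 mini-batch updates, and a 15-epoch loop performs 15 x 550 = 8,250 weight updates in total."
]
},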
{
"metadata": {
"id": "TwNKPhU_BlmP",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 17
}
],
"base_uri": "https://localhost:8080/",
"height": 374
},
"outputId": "da307a04-020a-4035-f0a3-cbc39085a15a",
"executionInfo": {
"status": "ok",
"timestamp": 1521031409981,
"user_tz": -540,
"elapsed": 21505,
"user": {
"displayName": "sj jin",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
"userId": "107618035827318999649"
}
}
},
"cell_type": "code",
"source": [
"# !pip install --upgrade tensorflow\n",
"\n",
"# !pip install numpy\n",
"# !pip install matplotlib\n",
"# !pip install pillow\n",
"\n",
"import tensorflow as tf\n",
"\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"# Load the data using the MNIST module built into TensorFlow.\n",
"# If the MNIST data is not in the specified folder, it is downloaded automatically.\n",
"# The one_hot option converts the labels to the one-hot format seen in the animal-classification example.\n",
"mnist = input_data.read_data_sets(\"./mnist/data/\", one_hot=True)\n",
"\n",
"#########\n",
"# Build the neural network model\n",
"######\n",
"# The input has shape [batch size, number of features].\n",
"# Each handwritten image is 28x28 pixels, flattened into 784 feature values.\n",
"X = tf.placeholder(tf.float32, [None, 784])\n",
"# The output covers 10 classes, the digits 0-9.\n",
"Y = tf.placeholder(tf.float32, [None, 10])\n",
"\n",
"# The network layers are arranged as follows:\n",
"# 784 (input features)\n",
"# -> 256 (hidden-layer neurons) -> 256 (hidden-layer neurons)\n",
"# -> 10 (output classes, digits 0-9)\n",
"W1 = tf.Variable(tf.random_normal([784, 256], stddev=0.01))\n",
"# Multiply the input by the weights and apply ReLU to form the layer.\n",
"L1 = tf.nn.relu(tf.matmul(X, W1))\n",
"\n",
"W2 = tf.Variable(tf.random_normal([256, 256], stddev=0.01))\n",
"# Multiply the output of L1 by the weights and apply ReLU to form the layer.\n",
"L2 = tf.nn.relu(tf.matmul(L1, W2))\n",
"\n",
"W3 = tf.Variable(tf.random_normal([256, 10], stddev=0.01))\n",
"# Multiplying by W3 gives the final model output over the 10 classes.\n",
"model = tf.matmul(L2, W3)\n",
"\n",
"cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=model, labels=Y))\n",
"optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)\n",
"\n",
"#########\n",
"# Train the model\n",
"######\n",
"init = tf.global_variables_initializer()\n",
"sess = tf.Session()\n",
"sess.run(init)\n",
"\n",
"batch_size = 100\n",
"total_batch = int(mnist.train.num_examples / batch_size)\n",
"\n",
"for epoch in range(15):\n",
" total_cost = 0\n",
"\n",
" for i in range(total_batch):\n",
"# Use the next_batch function of TensorFlow's MNIST module\n",
"# to fetch as much training data as requested.\n",
" batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n",
"\n",
" _, cost_val = sess.run([optimizer, cost], feed_dict={X: batch_xs, Y: batch_ys})\n",
" total_cost += cost_val\n",
"\n",
" print('Epoch:', '%04d' % (epoch + 1),\n",
" 'Avg. cost =', '{:.3f}'.format(total_cost / total_batch))\n",
"\n",
"print('최적화 완료!')\n",
"\n",
"#########\n",
"# Check the results\n",
"######\n",
"# Compare the model's predictions with the actual labels Y.\n",
"# tf.argmax takes the index of the largest value as the predicted label.\n",
"# e.g., [0.1 0 0 0.7 0 0.2 0 0 0 0] -> 3\n",
"is_correct = tf.equal(tf.argmax(model, 1), tf.argmax(Y, 1))\n",
"accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))\n",
"print('정확도:', sess.run(accuracy,\n",
" feed_dict={X: mnist.test.images,\n",
" Y: mnist.test.labels}))"
],
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"text": [
"Extracting ./mnist/data/train-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/train-labels-idx1-ubyte.gz\n",
"Extracting ./mnist/data/t10k-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/t10k-labels-idx1-ubyte.gz\n",
"Epoch: 0001 Avg. cost = 0.406\n",
"Epoch: 0002 Avg. cost = 0.149\n",
"Epoch: 0003 Avg. cost = 0.099\n",
"Epoch: 0004 Avg. cost = 0.071\n",
"Epoch: 0005 Avg. cost = 0.055\n",
"Epoch: 0006 Avg. cost = 0.041\n",
"Epoch: 0007 Avg. cost = 0.035\n",
"Epoch: 0008 Avg. cost = 0.025\n",
"Epoch: 0009 Avg. cost = 0.020\n",
"Epoch: 0010 Avg. cost = 0.019\n",
"Epoch: 0011 Avg. cost = 0.015\n",
"Epoch: 0012 Avg. cost = 0.014\n",
"Epoch: 0013 Avg. cost = 0.011\n",
"Epoch: 0014 Avg. cost = 0.013\n",
"Epoch: 0015 Avg. cost = 0.010\n",
"최적화 완료!\n",
"정확도: 0.9783\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "sKcwFGEJFHzI",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"![alt text](https://i.imgur.com/tGpF9Mz.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"Dropout: using only a randomly chosen part of the whole network during training"
]
},
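{
"metadata": {
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"A note on the keep_prob placeholder used in the next cell: tf.nn.dropout keeps each element with probability keep_prob and scales the kept elements up by 1/keep_prob (so-called inverted dropout), so the expected activations match at test time and we can simply feed keep_prob: 1 when evaluating."
]
},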
{
"metadata": {
"id": "2_6zB320FvZn",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{},
{}
],
"base_uri": "https://localhost:8080/",
"height": 898
},
"outputId": "269c15b1-6388-4075-83ae-47134ba35806",
"executionInfo": {
"status": "ok",
"timestamp": 1520755603043,
"user_tz": -540,
"elapsed": 98005,
"user": {
"displayName": "sj jin",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
"userId": "107618035827318999649"
}
}
},
"cell_type": "code",
"source": [
"import tensorflow as tf\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"mnist = input_data.read_data_sets(\"./mnist/data/\", one_hot=True)\n",
"\n",
"#########\n",
"# Build the neural network model\n",
"######\n",
"X = tf.placeholder(tf.float32, [None, 784])\n",
"Y = tf.placeholder(tf.float32, [None, 10])\n",
"keep_prob = tf.placeholder(tf.float32)\n",
"\n",
"W1 = tf.Variable(tf.random_normal([784, 256], stddev=0.01))\n",
"L1 = tf.nn.relu(tf.matmul(X, W1))\n",
"# Apply dropout using TensorFlow's built-in function.\n",
"# Just pass it the layer and the keep probability. Like magic!\n",
"L1 = tf.nn.dropout(L1, keep_prob)\n",
"\n",
"W2 = tf.Variable(tf.random_normal([256, 256], stddev=0.01))\n",
"L2 = tf.nn.relu(tf.matmul(L1, W2))\n",
"L2 = tf.nn.dropout(L2, keep_prob)\n",
"\n",
"W3 = tf.Variable(tf.random_normal([256, 10], stddev=0.01))\n",
"model = tf.matmul(L2, W3)\n",
"\n",
"cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=model, labels=Y))\n",
"optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)\n",
"\n",
"#########\n",
"# Train the model\n",
"######\n",
"init = tf.global_variables_initializer()\n",
"sess = tf.Session()\n",
"sess.run(init)\n",
"\n",
"batch_size = 100\n",
"total_batch = int(mnist.train.num_examples / batch_size)\n",
"\n",
"for epoch in range(30):\n",
" total_cost = 0\n",
"\n",
" for i in range(total_batch):\n",
" batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n",
"\n",
" _, cost_val = sess.run([optimizer, cost],\n",
" feed_dict={X: batch_xs,\n",
" Y: batch_ys,\n",
" keep_prob: 0.8})\n",
" total_cost += cost_val\n",
"\n",
" print('Epoch:', '%04d' % (epoch + 1),\n",
" 'Avg. cost =', '{:.3f}'.format(total_cost / total_batch))\n",
"\n",
"print('최적화 완료!')\n",
"\n",
"#########\n",
"# Check the results\n",
"######\n",
"is_correct = tf.equal(tf.argmax(model, 1), tf.argmax(Y, 1))\n",
"accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))\n",
"print('정확도:', sess.run(accuracy,\n",
" feed_dict={X: mnist.test.images,\n",
" Y: mnist.test.labels,\n",
" keep_prob: 1}))\n",
"\n",
"#########\n",
"# Check the results (matplotlib)\n",
"######\n",
"labels = sess.run(model,\n",
" feed_dict={X: mnist.test.images,\n",
" Y: mnist.test.labels,\n",
" keep_prob: 1})\n",
"\n",
"fig = plt.figure()\n",
"for i in range(10):\n",
" subplot = fig.add_subplot(2, 5, i + 1)\n",
" subplot.set_xticks([])\n",
" subplot.set_yticks([])\n",
" subplot.set_title('%d' % np.argmax(labels[i]))\n",
" subplot.imshow(mnist.test.images[i].reshape((28, 28)),\n",
" cmap=plt.cm.gray_r)\n",
"\n",
"plt.show()"
],
"execution_count": 0,
"outputs": [
{
"output_type": "stream",
"text": [
"Extracting ./mnist/data/train-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/train-labels-idx1-ubyte.gz\n",
"Extracting ./mnist/data/t10k-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/t10k-labels-idx1-ubyte.gz\n",
"Epoch: 0001 Avg. cost = 0.418\n",
"Epoch: 0002 Avg. cost = 0.164\n",
"Epoch: 0003 Avg. cost = 0.114\n",
"Epoch: 0004 Avg. cost = 0.090\n",
"Epoch: 0005 Avg. cost = 0.072\n",
"Epoch: 0006 Avg. cost = 0.061\n",
"Epoch: 0007 Avg. cost = 0.053\n",
"Epoch: 0008 Avg. cost = 0.046\n",
"Epoch: 0009 Avg. cost = 0.040\n",
"Epoch: 0010 Avg. cost = 0.038\n",
"Epoch: 0011 Avg. cost = 0.034\n",
"Epoch: 0012 Avg. cost = 0.033\n",
"Epoch: 0013 Avg. cost = 0.030\n",
"Epoch: 0014 Avg. cost = 0.029\n",
"Epoch: 0015 Avg. cost = 0.026\n",
"Epoch: 0016 Avg. cost = 0.026\n",
"Epoch: 0017 Avg. cost = 0.023\n",
"Epoch: 0018 Avg. cost = 0.020\n",
"Epoch: 0019 Avg. cost = 0.022\n",
"Epoch: 0020 Avg. cost = 0.022\n",
"Epoch: 0021 Avg. cost = 0.022\n",
"Epoch: 0022 Avg. cost = 0.018\n",
"Epoch: 0023 Avg. cost = 0.018\n",
"Epoch: 0024 Avg. cost = 0.021\n",
"Epoch: 0025 Avg. cost = 0.020\n",
"Epoch: 0026 Avg. cost = 0.016\n",
"Epoch: 0027 Avg. cost = 0.016\n",
"Epoch: 0028 Avg. cost = 0.016\n",
"Epoch: 0029 Avg. cost = 0.018\n",
"Epoch: 0030 Avg. cost = 0.015\n",
"최적화 완료!\n",
"정확도: 0.9833\n"
],
"name": "stdout"
},
{
"output_type": "display_data",
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAcwAAAENCAYAAACCb1WXAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJzt3X2czXX+//HnuIhmCKWm2WLcMjUV\nkdIiUtuFhEYkJUm27cJsqRQrX9F1UqTULRk3ZZdCkmKRNVEao9gulJKUi51EiSUZV5nfH/3eb++z\nc8a8hznnc86Zx/1267Yv73PmfF575sx5nff7vC+SioqKigQAAA6pUtAJAAAQDyiYAAB4oGACAOCB\nggkAgAcKJgAAHiiYAAB4iKuCOW/ePLVv3z7kv8zMTO3cuTPo1BJWbm6uOnfurCuuuEI9evTQ6tWr\ng04p4e3bt0/Dhw9XZmamNm3aFHQ6CS8/P19dunTR5Zdfrj59+vCcR8miRYuUmZmpgoKCoFPxViXo\nBMrCFEljzpw5mjt3rmrUqBFgVolr8+bNGjRokF577TVlZGRo8uTJGjp0qKZMmRJ0agktOztbZ511\nVtBpVAi7du1S//79NX78eDVq1Eh///vfNWzYML300ktBp5bQCgsLNXLkSNWuXTvoVMokrnqYrj17\n9ujZZ5/VgAEDgk4lYVWpUkUjR45URkaGJOncc8/VmjVrAs4q8WVnZ6tfv35Bp1EhLF26VPXq1VOj\nRo0kSVdffbXy8vIYtYqwMWPGKCsrSykpKUGnUiZxWzCnT5+uc845R/Xr1w86lYR13HHHqW3btvbf\n77//vpo2bRpgRhVDs2bNgk6hwli3bp3q1atn/52SkqLatWtrw4YNAWaV2L7++mstWbJEN910U9Cp\nlFlcDckaBw4c0IQJEzR27NigU6kw8vPzNXHiRE2cODHoVIByU1hYqGrVqoW0VatWTbt27Qooo8RW\nVFSkYcOGaciQIapatWrQ6ZRZXPYwP/nkEyUnJ+vUU08NOpUKYcGCBRo0aJDGjh1rh2eBRJCcnKw9\ne/aEtO3evTvuhgrjxdSpU5WRkaHmzZsHncphicuCuWjRIl144YVBp1EhLFmyRI899pgmTJjARBQk\nnFNOOSVk+PWXX37R9u3blZ6eHmBWiSs3N1e5ublq3bq1WrdurR9++EHdunXT0qVLg07NS1wWzFWr\nVqlhw4ZBp5HwCgsLdf/992vMmDE830hILVq00MaNG7V8+XJJ0iuvvKI//elPSk5ODjizxJSTk6P8\n/Hzl5eUpLy9PaWlpmj59ulq2bBl0al7i8jvMTZs2qW7dukGnkfByc3O1detW3XfffSHtkyZN4vmP\nkC1btuiGG26w/+7Vq5cqV66siRMnKjU1NcDMElP16tU1atQoPfzwwyosLFT9+vU1fPjwoNNCjEri\nPEwAAEoXl0OyAABEGwUTAAAPFEwAADxQMAEA8EDBBADAAwUTAAAPFEwAADxQMAEA8EDBBADAAwUT\nAAAPFEwAADxQMAEA8EDBBADAQ1we74Xy8fTTT9u4sLDQxitWrLDx9OnTi/1c3759bdyqVSsb9+rV\nq7xTBICYQQ8TAAAPFEwAADxwgHQFdO2110qSXn/99SN+rIyMDBsvWLBAklS/fv0jflyEt3r1ahtn\nZmZKkp577jnbduedd0Y9p3jz66+/2njAgAGSpLFjx9q25s2b29j9G0lPT49Cdohl9DABAPBAwQQA\nwAOzZCsIMwwrlT4Ue/rpp9u4ffv2kqTvvvvOtr399ts2XrNmjY0nTZokSRo8ePCRJYsSffLJJzau\nVOn3z7snnXRSUOnEpY0bN9o4JydHklS5cmXbtnz5chvPmjXLxnfccUcUsotvH3/8sY27du1q43Xr\n1pXbNebPn2/jM844w8b16tUrt2uUhB4mAAAe
6GEmMPeT8ptvvlns9saNG9vY7TXWrVvXxjVq1JAk\n7d2717a1aNHCxp999pmNf/755yPMGKX59NNPbWx+N+4neYT3008/2bh3794BZpLY3nnnHRvv2bMn\nItdw36smTJhg4ylTpkTkei56mAAAeKBgAgDgIaJDsu62aubLdUn6wx/+YOPq1atLknr27GnbTjzx\nRBu76/xQNj/88ION3eW2ZijWHT5JS0s75GO52+h99dVXYe/TqVOnw8oTh/b555/beMyYMTa+8cYb\ng0gnbrjrU2fOnGnjZcuWeT/G4sWLbWz+hpo2bWrb2rZteyQpJoz9+/dLkubMmRPxa7nrZEeNGmVj\ns742JSUlYtemhwkAgAcKJgAAHiI6JGu2nZJKX4fjbk11zDHH2PjMM88s97yk0DU7AwcOlBTa1U8E\nV155pY3d9ZI1a9aUJB177LHejzV16lQbuzNmEXlff/21jd1t3dy1tSju7rvvtrG7zrIsZsyYUSx2\nt36cNm2ajc8999zDukYiWLhwoSRpyZIltu1vf/tbRK61detWG69cudLGu3btksSQLAAAgaNgAgDg\nIaJDsuPHj7exu8DdHWb98ssvJYVu+bVo0SIbL1261MZmKGTDhg2lXrtq1aqSQhfhu7NG3cc1w7OJ\nNiTrOtyTFp566ilJoadkuNxNDNwY5WfEiBE2btCggY0T+fV6JDp06CApdGb4b7/95v3z7nuGO7y3\nfv16SdLatWtt23nnnWfjAwcOlD3ZOObO3r7uuuskha5qiNQWme7GBdFGDxMAAA8R7WFecsklYWOX\n2dzbtW3bNhu7PU/zidpnHVW1atUkHTwzUArdVNz94rhhw4alPl5FMnv2bBsPHTpUUug2V6mpqTYe\nPny4jZOTk6OQXcXgTpJzX+/u6zmSkxvizXvvvWfjVatWSZKSkpJsW2mTfm6//XYbt2vXzsa1atWy\n8bvvvitJeuyxx8I+xosvvmjjvn37+qQd19znwUy4MQcwSAe3biwP7vu1+7t2f8fRQA8TAAAPFEwA\nADzE5GklderUsfHFF19c7PaShnfDeeONN2zsDvU2adLExuYLa/zOPeUk3IkD7vq/Cy+8MCo5VTTu\nsJPr+OOPj3Imscsdtnb/hrds2XLIn3PXUXbr1k2SNGzYMNtW0lcLZuLcSy+9FPZaZj23JO3evVtS\n6BmaZiJiPHO3O3W3wTOTfdxJUOXp0UcftbE7DHvRRRfZuHbt2hG5toseJgAAHiiYAAB4iMkh2fLw\n448/SpKys7Ntm7suy8z+lMq2RVyiuuqqq2zsnmJiuIfuusMjiIwVK1aEbXeH/Sq6ffv22bi0YVj3\nVBF3m0d3zWVpzJCsu76wf//+Nna3LTS/p6ysLNuWCLPxX3/9dRu7/38jNSvYDLu/+uqrtq1KlYNl\na8iQITaOxpA3PUwAADxQMAEA8JCwQ7IvvPCCpINDs1LoLCp3AXhF5W4V6J4y4M6MNbMy3aGP8lyQ\njFD5+fmSpJdfftm2NWvWzMaXXXZZ1HOKV+6MTff5LMswbDjuMOvkyZNt/NFHHx3R48aq7du329jd\nUtTlfvVVnsaNGydJ+umnn2ybu7VquFUUkUQPEwAADwnVw/zggw9s7G7ZZrz11ls2bty4cVRyimVd\nu3a1cUmTJnr27CkpMSYsxIPc3FxJoWuG3e0jq1evHvWc4kG4zdU//PDDiFzLnTzobrgebrN3d32n\nu21cPHFHnAoKCmzco0ePiF/722+/LdYW5Hs3PUwAADxQMAEA8JBQQ7LuVk179+6VJF166aW2rVWr\nVlHPKRaZ8+Tck2Bc7nZTDz/8cDRSwv/nnhtrXHPNNQFkEvvGjh1r49JOIylPs2bNsrH7NxTudJSH\nHnooanlFSs2aNW189tln29g9D9OcJlIea9rdiZruuk+jdevWR3yNw0UPEwAADxRMAAA8xP2QbGFh\noY3nzZtn
Y3OAtDskkginBRyun3/+2caPP/64pIPD1v/LHXZhzWXkbdq0ycaLFy+WFHrYeZcuXaKe\nUzxwDzqPFHf935dffinp4N/PoZi1nonwnnP00Ufb2JxKIoWeXNKxY0dJoVsFluaLL76wsTsbdv36\n9TYOd0B0pUrB9fPoYQIA4IGCCQCAh7gfkn3qqads7M5Yu+KKKyRJ559/ftRzikUjR460cbgtvNzT\nSpgZG12vvPKKjTdv3izp4OsXwXrsscdsbLbbLEmDBg1sPHHiREmhh1UnggcffNDG7kYNZnjcPci7\nNO5h6O7Qa2knz/Tp08f7GuWNHiYAAB7isofpftn/yCOP2LhWrVo2fuCBB6KaU6wbNWrUIW93Pz0z\n0Se63EkORp06dQLIBJLUoUMHG69atcr759xNwS+44IJyzSlWnHHGGTaeNm2ajc3oXrit7ErSrVu3\nsO3u2bvhthN0JyFFGz1MAAA8UDABAPAQV0OyZi1hv379bNv+/ftt7A6lsA1e2bjrNMuydswdBjc/\nt2/fPtvmnqXnMqdxPPPMM6Vew2wz9uSTT9q25ORk7xxjnbvVmtGpU6cAMokv4U4Hcc2dOzfsz91y\nyy023rhx4yEfN9w6wJJEY11orDJntrpntx6uU0455ZC3u1vynXXWWUd8vbKghwkAgAcKJgAAHmJ+\nSNYdajEH6a5du9a2uVs1uTNmUTZNmjQ5rJ/r3r27jdPS0iQdXEsoSVOmTDmyxBypqak2HjJkSLk9\nbhDMFnhS6PMFf3379rXxwIEDi91utmuTSj7NJFy7+55T2ikot99+e6l5omzcIXE3NqI9DOuihwkA\ngAcKJgAAHmJ+SNZdCLt8+fJit7sL8hs2bBiVnOKRO4N45syZ5fa47uLl0rizb8OdOJCVlWXj5s2b\nF7u9TZs2Zcwudr355ps2dmd6m1mGF154YdRzijddu3a18YgRI2xc2tZqZWFOHZEOLtrPycmxbeZr\nCJQfd2ZyWWYpRwM9TAAAPMRkD9PdKqxdu3bFbn/66adtzHo1PzNmzLCx+TRe0nmYLnMGoM/knZtv\nvlmSlJ6eHvb2q6++2sbuFlsVxa5du2xc0hrBa665RlLpk00Q+jqbOnWqjc0IyujRo4/4Gv/3f/9n\n4zvuuOOIHw+l2717d7G2ILfDc9HDBADAAwUTAAAPSUXhFroEbPDgwTZ+4oknit2+bNkyG4ebHALE\nInfLwLZt29rYXV/66quvSkqsrf+CMm/ePBuPGzfOxu5WhFdeeaUk6bbbbrNt7luiewJJop1tGatO\nPPFEG5u/maFDh9q2u+66K+o5GfQwAQDwQMEEAMBDzAzJuluFuVta/fLLL8Xuy5AsACQmM0wuSffc\nc48k6eKLLw4qnRD0MAEA8EDBBADAQ8xsXPDBBx/YONwwrHTwZJIaNWpEJScAQHSFO1A9VtDDBADA\nQ8z0MEty9tln2zg3N1eSdOyxxwaVDgCggqKHCQCABwomAAAeYmYdJgAAsYweJgAAHiiYAAB4oGAC\nAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHiiY\nAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcK\nJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCB\nggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4\noGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAA\nHiiYAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHi
iYAAB4oGACAOCBggkA\ngAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHuKuYO7bt0/Dhw9XZmamNm3aFHQ6FcaiRYuUmZmp\ngoKCoFNJeDNnzlTHjh110UUXacCAAdq7d2/QKSWsgoICNWrUSO3bt7f/DRw4MOi0Elo8v77jrmBm\nZ2crOTk56DQqlMLCQo0cOVK1a9cOOpWEt3r1aj3xxBMaP368Fi5cqAMHDignJyfotBJaamqq5s2b\nZ/8bMWJE0CklrHh/fcdlwezXr1/QaVQoY8aMUVZWllJSUoJOJeEtXbpULVu2VFpampKSktS7d2/N\nnz8/6LSAchHvr++4K5jNmjULOoUK5euvv9aSJUt00003BZ1KhZCUlKQDBw7YfycnJ2vDhg0BZpT4\ndu7cqezsbLVv314333yzvv3226BTSljx/vqOu4KJ6CkqKtKwYcM0ZMgQVa1aNeh0KoRWrVopLy9P\nq1ev1v79+zV58mTt2bMn6LQSVkpKijp16qTBgwdrzpw5at26tbKzs7V///6gU0tI8f76pmCiRFOn\nTlVGRoaaN28edCoVRkZGhh544AH1799f3bt3V0ZGhmrWrBl0WgmrTp06Gjp0qE4++WRVqlRJffr0\n0ZYtW7Ru3bqgU0tI8f76rhJ0Aohdubm5+uKLL7Rw4UJJ0tatW9WtWzeNHj1aLVu2DDi7xNWlSxd1\n6dJFkrRs2TKddtppAWeUuLZv364dO3aoXr16tu3AgQOqUoW3xkiJ59c3PUyUKCcnR/n5+crLy1Ne\nXp7S0tI0ffp0imUErV+/Xp07d9aOHTu0b98+jR07Vl27dg06rYT1+eefq3fv3tq6daskadq0aUpL\nSwspoCg/8f76jquPUVu2bNENN9xg/92rVy9VrlxZEydOVGpqaoCZAeUjPT1dl1xyiTp37qykpCR1\n7NjRfhpH+WvTpo2uv/569ejRQ0lJSUpNTdWYMWNUuXLloFNLSPH++k4qKioqCjoJAABiHUOyAAB4\noGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAA\nHiiYAAB4oGACAOCBggkAgAcKJgAAHqoEnQAAxINt27ZJkjZs2FDqfdPT0yVJzzzzjG1r3LixjU87\n7TQbN23atLxSRITRwwQAwAMFEwAADwk1JDtr1iwbZ2VlSZLGjBlj2/r27WvjypUrRy+xGPPjjz9K\nkrp3727bzj//fBvfeuutNm7QoEFEcti+fbsk6f3337dt7du3t3HVqlUjcl2gNLNnz7ax+56yaNEi\nSdI333xT6mNkZmZKktatW2fb9uzZE/a+Bw4cOIwsEQR6mAAAeEgqKioqCjqJI/Hzzz/b2P3y/Pvv\nvy923127dtn46KOPjmxiMcZMWJAOTjgwvTxJ6tKli42nTp0akRzc651zzjmSpC1btti25cuX2/jU\nU0+NSA6xYseOHTYeNGiQjVeuXClJWrBggW2jt12+vv32Wxu/8MILkqRx48bZtsLCQhtH4+2RHmb8\noIcJAIAHCiYAAB7iftKPO2kk3DBsjx49bFy9evWo5BQr3OFOd4KPGcb+61//atvcyVGR8uijj9p4\n7dq1kkKHwhJ9GHbSpEk2HjJkiI3Dretzh2yPO+64yCZWwRQUFNh49OjR5fa4p59+uo3dNZcItWbN\nGkmh709vvvmmjc3kKkmqVOn3Pt3tt99u29wJitF+z6CHCQCABwomAAAe4nKWrLueye2ef/zxx8Xu\nO2fOHBtfccUVkU0sxsyfP9/G7hpHY/PmzTY+/vjjI5LDF198YeOzzjrLxmZW7sSJE21bzZo1I5JD\n0MwQYLNmzWybOxyVlJRU7Geuu+46Gz///PM2PvbYYyORYtxzn08zzNqmTRvb5r7+8/PzbdyhQwdJ\nUo0aNWzbzp07bXz55Zfb2AyztmjRwr
a5v1N35n1KSsph/L9ILJ9//rmNzWxkSZoxY4Yk6aeffjqs\nx3VnjZv1rtLB3/ezzz5r24466qjDukZJ6GECAOCBggkAgIe4nCW7YsUKG4cbhpWkKlV+/79W0YZh\nzbZ3kvTGG2+Evc+ECRMkRWcY9rLLLgt7n65du0pK3GFY19NPPy0pdJON0kyZMsXGc+fOtbE7u/bO\nO++UVP7DTvHi119/tbH7Ovvss88kSTNnzgz7c61atbLxJ598Iil0C0h31vLJJ59sYzNjE8WZ92R3\n6NXdAMXdtMRwn9sLLrjAxu7v4qmnnpIknXvuubbtww8/tLH7N2W+fnM3sHFn15YHXgEAAHiIyx6m\n+dL4UErq2SS6e++918buuj+zFZ0kXXPNNRHN4YMPPrDxpk2bbNynTx8b33DDDRHNIWjr16+38csv\nv1zsdvdTcGpqqo3/9a9/Fbuv++nc9FYlqWfPnpKkE0888ciSjSN79+618fXXX29j06uUpMGDB0uS\nLr300lIfL9zhAvXr1z+CDCuO2267zcZmHWVJE3nc34WZ/Pf444/btpLWyJsJWi+++KJtc99HPv30\nUxubv4Ps7GzbdvXVV9u4PEbU6GECAOCBggkAgIe4HJJ97733wra7kx/c7n5F4q7pc+OTTjrJxuU5\nScQ92cE85+4X/24OZrJRReAOFZlt7tq2bWvb3Nfw7t27bfzqq69Kkp544gnbZrYSk0KHuDt37iwp\ndFJQoq7TNGsj3b9r96xKd7htwIABkqTk5OQoZZfY3NfniBEjbJyTk2Njs5z/hBNOsG3u+cPmdyKV\nbY2qmdSzf/9+2/bQQw/Z2F0n6549Gin0MAEA8EDBBADAQ1wNyS5ZskRS6NZWLncI5uyzz45KTvFi\n9uzZNm7Xrp0kqXbt2rbNHT4pjXuagBsvXbq02H0jPSM3VrnbN5ph6XvuuSfsfd0Zgn/+858lSdOn\nT7dt7oHH7k6W5vVeEdZhmjWVw4cPt23p6ek2Xrx4sY1r1aoVvcQqAPdv3KyLlEJfi+YrH3cFwx//\n+Efva/z22282/s9//mPjG2+8UZLUsWNH27Zt27ZDPlavXr1s7L7HlQd6mAAAeKBgAgDgIa6GZJct\nW3bI28syrJio7rrrLhu/++67Nt64caONzQxNd0jlrbfe8r6G+3PhTtpo2LChjSvqbOXXXnutWNs/\n//lPG1911VWH/Pnly5eXeo2WLVtKCj1pI1GZr2Nc7kkh7jZrKF/uDNXKlSuHvY85QcTdts79WmHV\nqlXFfsY93eWrr74KG9etW1dS6OzwkpgNQNztI92TTcoDPUwAADzE1XmYZju1yZMn2zb3S133/DU+\ncYZ+Oe6uC5w3b56k0DVV7vZsvXv3PuTjul+qN2nS5JC3u+ddViTTpk2zsTnb0n2u3M3V3det2WLs\n9ddft23uBvXu79SsuXQnvJx55plHnHssMuv73HMvq1WrZuNBgwbZOCsrS1JoDxSHz11r7W5H6G7j\nuGvXLkmho08lMQdjuD3XsnA3wTeHOEjSc889J0lKS0s7rMf1unbEHhkAgARCwQQAwEPMD8m6J1+Y\nrcXclN21WNHYGgnSd999Z2N3go9Z+zp//nzbFqkzN2Pd1q1bbWyeI/fUkdImTrmn7bhbDXbq1MnG\nq1evliTdeuuttm3s2LFHknbMMs9RuOfqf5mJKe5ZiC1atLCxu84vIyNDktSoUaOwj7Vy5Uobm3M0\n+brnd//9739tbNbH5uXl2bbjjjvOxu4JMGaNsnvCjDtZqDTu5E53UmF5r7kMhx4mAAAeKJgAAHiI\n+XWYZrd6KfwMrIp6UHSQHn74YRu7Q2Rm1m1FHYZ1uaeGmBmv3bp1s20lDc/269dPkvTkk0/aNnfr\nPHdWoDnR5J133rFt7jZ67nB5vLvvvvskSSNHjiz1vmabNXco240Pl5mpe9FFF9k2d7ZzReMOgbpb\nFv
oy295JJQ/JHnPMMZKkUaNG2babbrrJxiWtC40UepgAAHigYAIA4CHmZ8mazQqkgxsWuEMB7ozM\n8847L3qJVTDuQvru3bvb2AyZSNLChQslSeecc070EosjCxYssLE5KFoKfT2b4e6StrsLt4jc3dYw\nUTeNMMOsH3/8sW3r2bOnjfft22fjgoKCkJ8pb+7XEO5hxu6WbCiZ+erGfb7c35/LvOe7GyYEiR4m\nAAAeYrKHaT4hSqHrd0yqjRs3tm3utmKIHHNOoyS9/PLLNu7Ro4eN3V4TIs9MOHE/fbtrBN3tEN1J\nSIkuNzdXUmiv5cEHH7TxRx99VG7X6ty5s43NtoYobvz48Tbu37+/JOmXX34Je1/3/d0cQuBugxgk\nepgAAHigYAIA4CEm12G6Z9+FGzF2h0EQHXPnzrVxSkqKjc36OESfmXz19ttv2zZ3XeDzzz9v46FD\nh0YvsYBdcsklxdrc4Wl3SNacl9inTx/bdsstt9j4mWeesTFfOZSN+zzfe++9Ng43FOueyPPiiy/a\nOFaGYg16mAAAeKBgAgDgISaHZN3t8Fx169aVJN19993RTKdCM6dfbNq0yba5h02z5jI45iDdgQMH\n2raZM2fa2J0Zag6xPu2006KTXIxp166djQcPHmxjM5N23Lhxtu2bb76x8aJFiw75uCeddFI5ZZh4\nZs2aZeMdO3YUu939asf9WqFNmzaRTewI0MMEAMADBRMAAA8xOSTrnr7gqlevniSpVq1a0UynQjND\nsu52YB06dAh7XzP7bdu2bbbN3XgCkWEO7pakRx55xMbuDOb7779fkjRp0iTbdvTRR0chu9hwxhln\n2Pjaa6+18dSpU4vd12zx+L+qVPn97bJjx462zT1VBqEzYM0WeCVxtz11T4CJZfQwAQDwEDM9THcb\nqzVr1oS9jzkX0KydQjDMJ20ptMdi1qy5W1sl0gbg8cA9Y/Cll16y8YwZMySFTmhp0qRJ9BILmNub\nHj16tI1Nj+jf//63bdu8ebONGzRoYGPz3LqTqfC7nTt3Sgrtye/duzfsfZs2bSop9PcQL+hhAgDg\ngYIJAICHmBmSNWvKpNBzLVeuXGnjU089Nao5IbycnBwbu6cQ/OUvf5EkPfDAA1HPCb87/vjjbeye\nv5meni5JGj58uG2rqFu9ueuIZ8+eLUn6xz/+Ydvy8/Nt7A6/nnDCCZFPLk69++67kqTvv/++1PuO\nGjVK0sGv2OIJPUwAADxQMAEA8BCTB0hv3LjRxkOGDLGx2YbtjjvuiHpOFdXixYslScOGDbNtbdu2\ntXHfvn1tXKdOHUnSUUcdFaXs4MtsDeeeBOSeJnHmmWdGPSckDjPzdcWKFWFvd7dvjOe1q/QwAQDw\nQMEEAMBDTA7JAihf5rQIM3T2o1P0AAAAkUlEQVQmSc8++6yNs7Kyop4TEofZtrSgoMC2ubOK3QO8\n09LSopdYOaOHCQCAh5hZhwkgco455hhJ0tq1awPOBImof//+If8rha7HjudepYseJgAAHiiYAAB4\nYNIPAAAe6GECAOCBggkAgAcKJgAAHiiYAAB4oGACAOCBggkAgAcKJgAAHiiYAAB4oGACAODh/wG6\n+3syRs6cSAAAAABJRU5ErkJggg==\n",
"text/plain": [
"<matplotlib.figure.Figure at 0x7f1d7c6f89b0>"
]
},
"metadata": {
"tags": []
}
}
]
},
{
"metadata": {
"id": "rrPSE9e7lMU6",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 2. Iris data\n",
"\n",
"\n",
"---\n",
"\n",
"\n",
"https://www.kaggle.com/uciml/iris/data\n",
"\n",
"Kaggle dataset reference\n",
"\n",
"See the original data in Iris.csv\n",
"\n",
"https://thebook.io/006723/ch04/01/\n",
"Reference for the individual Iris feature values"
]
},
{
"metadata": {
"id": "uTGTyJLClNvX",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{}
],
"base_uri": "https://localhost:8080/",
"height": 251
},
"outputId": "b5604e9c-7bc3-42ec-a2fb-11ddc52eda55",
"executionInfo": {
"status": "ok",
"timestamp": 1520847764215,
"user_tz": -540,
"elapsed": 3417,
"user": {
"displayName": "sj jin",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
"userId": "107618035827318999649"
}
}
},
"cell_type": "code",
"source": [
"import tensorflow as tf\n",
"import numpy as np\n",
"tf.set_random_seed(777)\n",
"\n",
"# Load the preprocessed Iris training data\n",
"xy = np.loadtxt('https://raw.githubusercontent.com/Crpediem/deeprunning-study/master/180317/Iris_train.csv',delimiter=',',\n",
" dtype=np.float32)\n",
"x_data = xy[:,0:4]\n",
"y_data = xy[:,4:7]\n",
"\n",
"# Load the preprocessed Iris test data\n",
"test_xy = np.loadtxt('https://raw.githubusercontent.com/Crpediem/deeprunning-study/master/180317/Iris_test.csv',delimiter=',',\n",
" dtype=np.float32)\n",
"\n",
"x_test = test_xy[:,0:4]\n",
"y_test = test_xy[:,4:7]\n",
"\n",
"#########\n",
"# Build the neural network model\n",
"######\n",
"X = tf.placeholder(tf.float32, [None, 4])\n",
"Y = tf.placeholder(tf.float32, [None, 3])\n",
"keep_prob = tf.placeholder(tf.float32)\n",
"\n",
"W1 = tf.Variable(tf.random_normal([4, 10], stddev=0.01))\n",
"L1 = tf.nn.relu(tf.matmul(X, W1))\n",
"L1 = tf.nn.dropout(L1, keep_prob)\n",
"\n",
"W2 = tf.Variable(tf.random_normal([10, 10], stddev=0.01))\n",
"L2 = tf.nn.relu(tf.matmul(L1, W2))\n",
"L2 = tf.nn.dropout(L2, keep_prob)\n",
"\n",
"W3 = tf.Variable(tf.random_normal([10, 3], stddev=0.01))\n",
"model = tf.matmul(L2, W3)\n",
"\n",
"cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=model, labels=Y))\n",
"optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)\n",
"\n",
"#########\n",
"# Train the model\n",
"######\n",
"init = tf.global_variables_initializer()\n",
"sess = tf.Session()\n",
"sess.run(init)\n",
"\n",
"for step in range(1000):\n",
" sess.run(optimizer, feed_dict={X: x_data, Y: y_data, keep_prob: 0.8})\n",
"\n",
" if (step + 1) % 100 == 0:\n",
" print(step + 1, sess.run(cost, feed_dict={X: x_data, Y: y_data, keep_prob: 0.8}))\n",
"\n",
"#########\n",
"# Check the results\n",
"# 0: Iris-setosa, 1: Iris-versicolor, 2: Iris-virginica\n",
"######\n",
"prediction = tf.argmax(model, 1)\n",
"target = tf.argmax(Y, 1)\n",
"print('예측값:', sess.run(prediction, feed_dict={X: x_test, keep_prob: 1}))\n",
"print('실제값:', sess.run(target, feed_dict={Y: y_test, keep_prob: 1}))\n",
"\n",
"is_correct = tf.equal(prediction, target)\n",
"accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))\n",
"print('정확도: %.2f' % sess.run(accuracy * 100, feed_dict={X: x_test, Y: y_test, keep_prob: 1}))"
],
"execution_count": 0,
"outputs": [
{
"output_type": "stream",
"text": [
"100 0.23863082\n",
"200 0.23964703\n",
"300 0.18831006\n",
"400 0.23864982\n",
"500 0.19743586\n",
"600 0.19411115\n",
"700 0.18276615\n",
"800 0.19053736\n",
"900 0.20052955\n",
"1000 0.2167687\n",
"예측값: [0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2]\n",
"실제값: [0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2]\n",
"정확도: 100.00\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "JbV9NeOJl8_N",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"## Preparing the CSV data\n",
"\n",
"#### Step 1. Clean the data\n",
"\n",
"From the original Kaggle CSV, remove the id and label columns that will not be used for training or testing,\n",
"and convert the class values Iris-setosa, Iris-versicolor, and Iris-virginica into one-hot format.\n",
"e.g., Iris-setosa = 100, Iris-versicolor = 010, Iris-virginica = 001\n",
"\n",
"#### Step 2. Split into train and test data\n",
"\n",
"After saving the converted values, split them into test and train sets and save each to a separate CSV.\n",
"\n",
"#### Step 3. Python list split\n",
"\n",
"From each generated CSV, separate the feature values from the class values and store them in x_data and y_data.\n",
"Do the same for the test data."
]
},
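{
"metadata": {
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"The three steps above can be sketched with pandas as follows. This is a sketch only, not the script actually used to produce Iris_train.csv and Iris_test.csv; the local file name Iris.csv, the column names, and the 80/20 split ratio are assumptions."
]
},
{
"metadata": {
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"import pandas as pd\n",
"\n",
"# Step 1: drop the Id column and one-hot encode the Species column.\n",
"# (get_dummies orders the classes alphabetically: setosa, versicolor, virginica.)\n",
"df = pd.read_csv('Iris.csv').drop(columns=['Id'])\n",
"df = pd.get_dummies(df, columns=['Species'])\n",
"\n",
"# Step 2: shuffle, split into train/test, and save each to its own CSV.\n",
"df = df.sample(frac=1, random_state=0)\n",
"n_train = int(len(df) * 0.8)\n",
"df[:n_train].to_csv('Iris_train.csv', header=False, index=False)\n",
"df[n_train:].to_csv('Iris_test.csv', header=False, index=False)\n",
"\n",
"# Step 3: slice the features (first 4 columns) and the one-hot labels (last 3).\n",
"xy = df.values.astype('float32')\n",
"x_data, y_data = xy[:, 0:4], xy[:, 4:7]"
],
"execution_count": 0,
"outputs": []
},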
{
"metadata": {
"id": "oZ-3RG9eM6mV",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 3. Wine data\n",
"\n",
"\n",
"---\n",
"\n",
"\n",
"https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data\n",
"\n",
"Using the data provided at the URL above, we apply the same CSV preparation steps introduced earlier and train on the result.\n"
]
},
{
"metadata": {
"id": "AykS3QhdqsZJ",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 7
}
],
"base_uri": "https://localhost:8080/",
"height": 251
},
"outputId": "9ed3444a-2aa8-4be3-b9b3-ef83e4b99d6f",
"executionInfo": {
"status": "ok",
"timestamp": 1520990309495,
"user_tz": -540,
"elapsed": 3186,
"user": {
"displayName": "sj jin",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
"userId": "107618035827318999649"
}
}
},
"cell_type": "code",
"source": [
"import tensorflow as tf\n",
"import numpy as np\n",
"\n",
"# Load the preprocessed training data\n",
"xy = np.loadtxt('https://raw.githubusercontent.com/Crpediem/deeprunning-study/master/180317/wine_train.csv',delimiter=',',\n",
" dtype=np.float32)\n",
"x_data = xy[:,0:13]\n",
"y_data = xy[:,13:16]\n",
"\n",
"# Load the preprocessed test data\n",
"test_xy = np.loadtxt('https://raw.githubusercontent.com/Crpediem/deeprunning-study/master/180317/wine_test.csv',delimiter=',',\n",
" dtype=np.float32)\n",
"\n",
"x_test = test_xy[:,0:13]\n",
"y_test = test_xy[:,13:16]\n",
"\n",
"# print(x_data)\n",
"# print(y_data)\n",
"\n",
"#########\n",
"# Build the neural network model\n",
"######\n",
"X = tf.placeholder(tf.float32, [None, 13])\n",
"Y = tf.placeholder(tf.float32, [None, 3])\n",
"keep_prob = tf.placeholder(tf.float32)\n",
"\n",
"W1 = tf.Variable(tf.random_normal([13, 100], stddev=0.01))\n",
"L1 = tf.nn.relu(tf.matmul(X, W1))\n",
"L1 = tf.nn.dropout(L1, keep_prob)\n",
"\n",
"W2 = tf.Variable(tf.random_normal([100, 100], stddev=0.01))\n",
"L2 = tf.nn.relu(tf.matmul(L1, W2))\n",
"L2 = tf.nn.dropout(L2, keep_prob)\n",
"\n",
"W3 = tf.Variable(tf.random_normal([100, 3], stddev=0.01))\n",
"model = tf.matmul(L2, W3)\n",
"\n",
"cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=model, labels=Y))\n",
"optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)\n",
"\n",
"#########\n",
"# Train the model\n",
"######\n",
"init = tf.global_variables_initializer()\n",
"sess = tf.Session()\n",
"sess.run(init)\n",
"\n",
"for step in range(1000):\n",
" sess.run(optimizer, feed_dict={X: x_data, Y: y_data, keep_prob: 0.8})\n",
"\n",
" if (step + 1) % 100 == 0:\n",
" print(step + 1, sess.run(cost, feed_dict={X: x_data, Y: y_data, keep_prob: 0.8}))\n",
"\n",
"#########\n",
"# Check the results\n",
"######\n",
"prediction = tf.argmax(model, 1)\n",
"target = tf.argmax(Y, 1)\n",
"print('예측값:', sess.run(prediction, feed_dict={X: x_test, keep_prob: 1}))\n",
"print('실제값:', sess.run(target, feed_dict={Y: y_test, keep_prob: 1}))\n",
"\n",
"is_correct = tf.equal(prediction, target)\n",
"accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))\n",
"print('정확도: %.2f' % sess.run(accuracy * 100, feed_dict={X: x_test, Y: y_test, keep_prob: 1}))"
],
"execution_count": 16,
"outputs": [
{
"output_type": "stream",
"text": [
"100 0.5879292\n",
"200 0.1710964\n",
"300 0.11573343\n",
"400 0.08524342\n",
"500 0.07245682\n",
"600 0.066718794\n",
"700 0.082000956\n",
"800 0.09255613\n",
"900 0.0823765\n",
"1000 0.080762655\n",
"예측값: [0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]\n",
"실제값: [0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]\n",
"정확도: 100.00\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "kEY8SbUhP3sW",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 4. CNN (Convolutional Neural Network)\n",
"\n",
"\n",
"---\n",
"\n",
"\n",
"A CNN is basically made up of convolution layers and pooling layers.\n",
"\n",
"When shrinking the data, a convolution layer applies weights and a bias, while a pooling layer simply picks one of the values (here, the maximum) and passes it on.\n",
"\n",
"Stride: how many cells the window moves at each step\n",
"\n",
"Kernel, filter: the sliding window of weights applied by the convolution"
]
},
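{
"metadata": {
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"A quick check on the shapes in the next cell: with 'VALID' padding, the output size of a convolution or pooling window is floor((W - K + 2P) / S) + 1 for input width W, kernel size K, padding P, and stride S. With padding='SAME', TensorFlow instead pads so the output size is ceil(W / S): the 3x3 stride-1 convolution keeps a 28x28 input at 28x28, and each 2x2 stride-2 max-pool halves it, to 14x14 and then 7x7."
]
},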
{
"metadata": {
"id": "KeGsLr6PVISu",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 17
}
],
"base_uri": "https://localhost:8080/",
"height": 561
},
"outputId": "5bd9aa83-b092-4cef-865d-658726085636",
"executionInfo": {
"status": "ok",
"timestamp": 1521035157602,
"user_tz": -540,
"elapsed": 67003,
"user": {
"displayName": "sj jin",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
"userId": "107618035827318999649"
}
}
},
"cell_type": "code",
"source": [
"import tensorflow as tf\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"mnist = input_data.read_data_sets(\"./mnist/data\", one_hot=True)\n",
"\n",
"#########\n",
"# Build the neural network model\n",
"######\n",
"# The earlier models flattened the input into a single 28x28=784 dimension,\n",
"# but for the CNN model we keep a 2D plane plus a channel dimension.\n",
"X = tf.placeholder(tf.float32, [None, 28, 28, 1])\n",
"#print(\"X:\",X)\n",
"Y = tf.placeholder(tf.float32, [None, 10])\n",
"#print(\"Y:\",Y)\n",
"keep_prob = tf.placeholder(tf.float32)\n",
"\n",
"# The variables and layers are shaped as follows.\n",
"# W1 [3 3 1 32] -> [3 3]: kernel size, 1: number of input channels of X, 32: number of filters\n",
"# L1 Conv shape=(?, 28, 28, 32)\n",
"# Pool ->(?, 14, 14, 32)\n",
"W1 = tf.Variable(tf.random_normal([3, 3, 1, 32], stddev=0.01))\n",
"print('W1 : ',W1)\n",
"# tf.nn.conv2d makes it easy to build a convolution layer that slides one cell at a time.\n",
"# padding='SAME' pads the borders so the output keeps the same spatial size.\n",
"L1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')\n",
"print('L1 : ',L1)\n",
"L1 = tf.nn.relu(L1)\n",
"print('L1 : ',L1)\n",
"# Pooling is just as easy to set up with tf.nn.max_pool.\n",
"L1 = tf.nn.max_pool(L1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],padding='SAME')\n",
"print('L1 : ',L1)\n",
"# L1 = tf.nn.dropout(L1, keep_prob)\n",
"\n",
"# L2 Conv shape=(?, 14, 14, 64)\n",
"# Pool ->(?, 7, 7, 64)\n",
"# In W2's [3, 3, 32, 64], the 32 matches the number of filters output by L1 (the last dimension of W1).\n",
"W2 = tf.Variable(tf.random_normal([3, 3, 32, 64], stddev=0.01))\n",
"print('W2 : ',W2)\n",
"L2 = tf.nn.conv2d(L1, W2, strides=[1, 1, 1, 1], padding='SAME')\n",
"print('L2 : ',L2)\n",
"L2 = tf.nn.relu(L2)\n",
"L2 = tf.nn.max_pool(L2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],padding='SAME')\n",
"print('L2 : ',L2)\n",
"L2 = tf.nn.dropout(L2, keep_prob)\n",
"\n",
"# FC layer: input 7x7x64 -> output 256\n",
"# To fully connect, flatten the preceding pool output of shape (?, 7, 7, 64) into one dimension.\n",
"# Reshape ->(?, 256)\n",
"W3 = tf.Variable(tf.random_normal([7 * 7 * 64, 256], stddev=0.01))\n",
"print('W3 : ',W3)\n",
"L3 = tf.reshape(L2,[-1,7 * 7 * 64])\n",
"print('L3 : ',L3)\n",
"L3 = tf.matmul(L3, W3)\n",
"L3 = tf.nn.relu(L3)\n",
"print('L3 : ',L3)\n",
"L3 = tf.nn.dropout(L3, keep_prob)\n",
"\n",
"# Final layer: takes the 256 outputs of L3 and produces 10 outputs, one per label 0-9.\n",
"W4 = tf.Variable(tf.random_normal([256, 10], stddev=0.01))\n",
"print('W4 : ',W4)\n",
"model = tf.matmul(L3, W4)\n",
"\n",
"cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=model, labels=Y))\n",
"#optimizer = tf.train.AdamOptimizer(0.01).minimize(cost)\n",
"optimizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)\n",
"# Swap in RMSPropOptimizer and compare the results.\n",
"# optimizer = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost)\n",
"\n",
"#########\n",
"# Train the model\n",
"######\n",
"init = tf.global_variables_initializer()\n",
"sess = tf.Session()\n",
"sess.run(init)\n",
"\n",
"batch_size = 100\n",
"total_batch = int(mnist.train.num_examples / batch_size)\n",
"\n",
"for epoch in range(15):\n",
" total_cost = 0\n",
" \n",
" for i in range(total_batch):\n",
"# Reshape the image data into the [28, 28, 1] form the CNN model expects.\n",
" batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n",
" batch_xs = batch_xs.reshape(-1, 28, 28, 1)\n",
" \n",
" _, cost_val = sess.run([optimizer, cost],feed_dict={X : batch_xs, Y : batch_ys, keep_prob : 0.7})\n",
" total_cost += cost_val\n",
" \n",
" print('Epoch:', '%04d' % (epoch + 1), 'Avg. cost =','{:.3f}'.format(total_cost / total_batch))\n",
" \n",
"print('최적화 완료!')\n",
"\n",
"#########\n",
"# Check the results\n",
"######\n",
"is_correct = tf.equal(tf.argmax(model, 1), tf.argmax(Y, 1))\n",
"accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))\n",
"\n",
"print('정확도:',sess.run(accuracy,feed_dict={X:mnist.test.images.reshape(-1, 28, 28, 1),Y:mnist.test.labels,keep_prob:1}))"
],
"execution_count": 7,
"outputs": [
{
"output_type": "stream",
"text": [
"Extracting ./mnist/data/train-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/train-labels-idx1-ubyte.gz\n",
"Extracting ./mnist/data/t10k-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/t10k-labels-idx1-ubyte.gz\n",
"W1 : <tf.Variable 'Variable_19:0' shape=(3, 3, 1, 32) dtype=float32_ref>\n",
"L1 : Tensor(\"Conv2D_8:0\", shape=(?, 28, 28, 32), dtype=float32)\n",
"L1 : Tensor(\"Relu_14:0\", shape=(?, 28, 28, 32), dtype=float32)\n",
"L1 : Tensor(\"MaxPool_8:0\", shape=(?, 14, 14, 32), dtype=float32)\n",
"W2 : <tf.Variable 'Variable_20:0' shape=(3, 3, 32, 64) dtype=float32_ref>\n",
"L2 : Tensor(\"Conv2D_9:0\", shape=(?, 14, 14, 64), dtype=float32)\n",
"L2 : Tensor(\"MaxPool_9:0\", shape=(?, 7, 7, 64), dtype=float32)\n",
"W3 : <tf.Variable 'Variable_21:0' shape=(3136, 256) dtype=float32_ref>\n",
"L3 : Tensor(\"Reshape_4:0\", shape=(?, 3136), dtype=float32)\n",
"L3 : Tensor(\"Relu_16:0\", shape=(?, 256), dtype=float32)\n",
"W4 : <tf.Variable 'Variable_22:0' shape=(256, 10) dtype=float32_ref>\n",
"Epoch: 0001 Avg. cost = 0.759\n",
"Epoch: 0002 Avg. cost = 0.124\n",
"Epoch: 0003 Avg. cost = 0.105\n",
"Epoch: 0004 Avg. cost = 0.101\n",
"Epoch: 0005 Avg. cost = 0.101\n",
"Epoch: 0006 Avg. cost = 0.102\n",
"Epoch: 0007 Avg. cost = 0.103\n",
"Epoch: 0008 Avg. cost = 0.107\n",
"Epoch: 0009 Avg. cost = 0.107\n",
"Epoch: 0010 Avg. cost = 0.111\n",
"Epoch: 0011 Avg. cost = 0.105\n",
"Epoch: 0012 Avg. cost = 0.111\n",
"Epoch: 0013 Avg. cost = 0.117\n",
"Epoch: 0014 Avg. cost = 0.116\n",
"Epoch: 0015 Avg. cost = 0.118\n",
"최적화 완료!\n",
"정확도: 0.9768\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "D49sPJ_ZtO1V",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 17
}
],
"base_uri": "https://localhost:8080/",
"height": 374
},
"outputId": "a6d9254a-6b0c-4edb-e604-fb1fb5e084b7",
"executionInfo": {
"status": "ok",
"timestamp": 1521034525369,
"user_tz": -540,
"elapsed": 61768,
"user": {
"displayName": "sj jin",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
"userId": "107618035827318999649"
}
}
},
"cell_type": "code",
"source": [
"# This cell uses tensorflow.layers, a collection of utilities that simplify building networks.\n",
"# It restructures 01 - CNN.py, so compare the two sources.\n",
"# TensorFlow ships with many such easy-to-use functions and utilities.\n",
"# Still, it helps to get comfortable with the basics first, so later examples will mostly stick to the low-level functions.\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"mnist = input_data.read_data_sets(\"./mnist/data\", one_hot=True)\n",
"\n",
"#########\n",
"# Build the CNN model\n",
"######\n",
"X = tf.placeholder(tf.float32, [None, 28, 28, 1])\n",
"#print(\"X:\",X)\n",
"Y = tf.placeholder(tf.float32, [None, 10])\n",
"#print(\"Y:\",Y)\n",
"is_training = tf.placeholder(tf.bool)\n",
"\n",
"# Given just the inputs, the output size, and the kernel_size,\n",
"# tf.layers fills in the remaining details of the convolutional network for you\n",
"# (note: conv2d applies an activation only if you pass one via its activation argument).\n",
"# It also uses sensible defaults such as xavier_initializer for the weights,\n",
"# so you generally get an efficient network without much tuning.\n",
"L1 = tf.layers.conv2d(X, 32, [3, 3])\n",
"L1 = tf.layers.max_pooling2d(L1, [2, 2], [2, 2])\n",
"L1 = tf.layers.dropout(L1, 0.7, training=is_training)  # training= must be passed by keyword; rate is the drop probability\n",
"\n",
"L2 = tf.layers.conv2d(L1, 64, [3, 3])\n",
"L2 = tf.layers.max_pooling2d(L2, [2, 2], [2, 2])\n",
"L2 = tf.layers.dropout(L2, 0.7, training=is_training)\n",
"\n",
"L3 = tf.contrib.layers.flatten(L2)\n",
"L3 = tf.layers.dense(L3,256,activation=tf.nn.relu)\n",
"L3 = tf.layers.dropout(L3, 0.5, training=is_training)\n",
"\n",
"model = tf.layers.dense(L3, 10, activation=None)\n",
"\n",
"cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=model, labels=Y))\n",
"#optimizer = tf.train.AdamOptimizer(0.01).minimize(cost)\n",
"optimizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)\n",
"\n",
"#########\n",
"# Train the model\n",
"######\n",
"init = tf.global_variables_initializer()\n",
"sess = tf.Session()\n",
"sess.run(init)\n",
"\n",
"batch_size = 100\n",
"total_batch = int(mnist.train.num_examples / batch_size)\n",
"\n",
"for epoch in range(15):\n",
" total_cost = 0\n",
" \n",
" for i in range(total_batch):\n",
" batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n",
" batch_xs = batch_xs.reshape(-1, 28, 28, 1)\n",
" \n",
" _, cost_val = sess.run([optimizer, cost],feed_dict={X : batch_xs, Y : batch_ys, is_training: True})\n",
" total_cost += cost_val\n",
" \n",
" print('Epoch:', '%04d' % (epoch + 1), 'Avg. cost =','{:.3f}'.format(total_cost / total_batch))\n",
" \n",
"print('최적화 완료!')\n",
"\n",
"#########\n",
"# Check the results\n",
"######\n",
"is_correct = tf.equal(tf.argmax(model, 1), tf.argmax(Y, 1))\n",
"accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))\n",
"\n",
"print('정확도:',sess.run(accuracy,feed_dict={X:mnist.test.images.reshape(-1, 28, 28, 1),Y:mnist.test.labels,is_training:False}))"
],
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"text": [
"Extracting ./mnist/data/train-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/train-labels-idx1-ubyte.gz\n",
"Extracting ./mnist/data/t10k-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/t10k-labels-idx1-ubyte.gz\n",
"Epoch: 0001 Avg. cost = 0.584\n",
"Epoch: 0002 Avg. cost = 0.193\n",
"Epoch: 0003 Avg. cost = 0.199\n",
"Epoch: 0004 Avg. cost = 0.226\n",
"Epoch: 0005 Avg. cost = 0.223\n",
"Epoch: 0006 Avg. cost = 0.218\n",
"Epoch: 0007 Avg. cost = 0.242\n",
"Epoch: 0008 Avg. cost = 0.257\n",
"Epoch: 0009 Avg. cost = 0.239\n",
"Epoch: 0010 Avg. cost = 0.265\n",
"Epoch: 0011 Avg. cost = 0.264\n",
"Epoch: 0012 Avg. cost = 0.301\n",
"Epoch: 0013 Avg. cost = 0.301\n",
"Epoch: 0014 Avg. cost = 0.258\n",
"Epoch: 0015 Avg. cost = 0.281\n",
"최적화 완료!\n",
"정확도: 0.9875\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "nlWNVOmHwyoa",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 5. Deep Learning from Scratch (Chapter 7: CNN - Convolution)\n",
"\n",
"\n",
"---\n",
"\n",
"\n",
"Input and output data of a convolutional layer: feature maps\n",
"\n",
"Input data of a convolution: the input feature map\n",
"\n",
"Output data of a convolution: the output feature map\n",
"\n",
"![alt text](https://i.imgur.com/EUE2Sil.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"The convolution operation proceeds as shown above.\n",
"\n",
"\n",
"![alt text](https://i.imgur.com/fiGeQH2.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"Padding: filling the border of the input data with a fixed value (usually 0) before performing the convolution\n",
"\n",
"![alt text](https://i.imgur.com/CNpTYcC.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"Stride: the interval at which the filter is applied\n",
"\n",
"http://cs231n.github.io/convolutional-networks/\n",
"\n",
"- an animated demo of the convolution operation\n",
"\n",
"https://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html\n",
"\n",
"- the CNN pipeline on MNIST, visualized in the browser\n",
"\n",
"Properties of the pooling layer\n",
"\n",
"1. It has no parameters to learn\n",
"2. It does not change the number of channels\n",
"3. It is robust to small shifts in the input\n",
"\n",
"Summary\n",
"\n",
"* Representative CNNs include LeNet and AlexNet.\n",
"* Big data and GPUs contributed greatly to the advancement of deep learning.\n",
"\n",
"\n"
]
},
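The padding and stride definitions above fix the feature-map size via the standard formula OH = (H + 2P - FH) / S + 1. A minimal sketch of that arithmetic (the function name `conv_output_size` is illustrative, not from the book):

```python
def conv_output_size(input_size, filter_size, stride=1, pad=0):
    """Side length of the output of a square convolution: (H + 2P - FH) // S + 1."""
    return (input_size + 2 * pad - filter_size) // stride + 1

# 28x28 MNIST input, 3x3 filter:
print(conv_output_size(28, 3, stride=1, pad=0))  # 26 (no padding shrinks the map)
print(conv_output_size(28, 3, stride=1, pad=1))  # 28 ('SAME'-style padding keeps the size)
print(conv_output_size(28, 2, stride=2, pad=0))  # 14 (2x2 max pooling with stride 2 halves it)
```

This matches the shapes printed by the first CNN cell: 28x28 after the 'SAME' convolution, 14x14 after pooling.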
{
"metadata": {
"id": "2-bX17VceM3e",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 6. Deep Learning from Scratch (Chapter 8: Deep Learning)\n",
"\n",
"\n",
"---\n",
"\n",
"\n",
"## 6-1. VGG\n",
"\n",
"\n",
"![alt text](https://i.imgur.com/XW4Dxdf.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"VGG: a 'basic' CNN composed of convolutional and pooling layers\n",
"\n",
"Its distinguishing feature is depth: 16 (or 19) layers in total, counting convolutional and fully connected layers\n",
"\n",
"The variants are named 'VGG16' and 'VGG19' according to their depth\n",
"\n",
"It stacks convolutional layers with small 3x3 filters, inserting a pooling layer after every 2-4 convolutions to halve the spatial size, and finally passes the result through fully connected layers to produce the output\n",
"\n",
"## 6-2. GoogLeNet\n",
"\n",
"![alt text](https://i.imgur.com/4e2uF62.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"The structure of GoogLeNet; each rectangle represents a layer such as a convolutional or pooling layer\n",
"\n",
"GoogLeNet is distinctive in that it is deep not only vertically but also horizontally\n",
"\n",
"\n",
"![alt text](https://i.imgur.com/ehXolg6.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"This is GoogLeNet's inception module.\n",
"\n",
"(Inception: the horizontal 'width' of GoogLeNet)\n",
"\n",
"Inception: applying several filters of different sizes (and pooling) in parallel and combining the results.\n",
"\n",
"Why GoogLeNet uses 1x1 convolutional layers in many places: 1x1 convolutions reduce the number of parameters and speed up processing\n",
"\n",
"## 6-3. ResNet\n",
"\n",
"![alt text](https://i.imgur.com/5rf3LUn.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"In deep learning, making the network too deep often hurts training and can even degrade performance\n",
"\n",
"ResNet introduces skip connections to solve this problem\n",
"\n",
"Skip connection: a structure that carries the input past the convolutional layers and adds it directly to the output\n",
"\n",
"During backpropagation, the skip connection prevents the signal from attenuating, so training stays effective even as the network gets deeper.\n",
"\n",
"ResNet deepens the network by adding a skip connection around every two convolutional layers\n",
"\n",
"## 6-4. Applications of deep learning\n",
"\n",
"- Object detection: finding the location and class of objects in an image (R-CNN, Faster R-CNN)\n",
"\n",
"- Segmentation: classifying an image at the pixel level (FCN)\n",
"\n",
"- Image caption generation (NIC)\n",
"\n",
"## 6-5. The future of deep learning\n",
"\n",
"- Image style transfer (the Prisma app: http://prisma-ai.com/)\n",
"\n",
"- Image generation (DCGAN): trained on a large set of images, and after training it generates new images without any input image\n",
"\n",
"- Autonomous driving (SegNet)\n",
"\n",
"- Deep Q-Network (reinforcement learning): the agent learns on its own to earn better rewards\n",
"\n",
"![alt text](https://i.imgur.com/dlVBtOz.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"The reward is not fixed in advance but an 'expected reward'\n",
"\n",
"DQN is based on a reinforcement learning algorithm called Q-learning\n",
"\n",
"In Q-learning, the optimal action-value function determines the optimal action\n",
"\n",
"DQN approximates this function with a CNN\n",
"\n",
"![alt text](https://i.imgur.com/SYtc9Gp.png)\n",
"\n",
"Source: Deep Learning from Scratch\n",
"\n",
"The CNN used in DQN takes game video frames (4 consecutive frames) as input and finally outputs the 'value' of each action that controls the game (joystick movement, button presses)"
]
},
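The claim above that 1x1 convolutions reduce parameters can be checked with simple arithmetic. The channel counts below (192 -> 16 -> 32) are illustrative assumptions, not exact figures from GoogLeNet:

```python
def conv_params(kernel, c_in, c_out):
    """Weight count of a square conv layer: kernel * kernel * in_channels * out_channels (bias ignored)."""
    return kernel * kernel * c_in * c_out

# A direct 5x5 convolution from 192 channels to 32 channels:
direct = conv_params(5, 192, 32)                                # 153600 weights
# The same mapping via a 1x1 bottleneck down to 16 channels first:
bottleneck = conv_params(1, 192, 16) + conv_params(5, 16, 32)   # 3072 + 12800 = 15872 weights
print(direct, bottleneck)
```

Roughly a 10x reduction in weights, which is why the inception module places 1x1 convolutions before its larger filters.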
{
"metadata": {
"id": "0IsDlPvn9qF7",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 7. AutoEncoder\n",
"\n",
"---\n",
"\n",
"Autoencoder: one of the most widely used neural networks for unsupervised learning\n",
"\n",
"An autoencoder is a network whose output is trained to equal its input; characteristically, the middle (hidden) layer has fewer nodes than the input\n",
"\n",
"This structure compresses the input data, and it is also known to be very effective at removing noise\n",
"\n",
"The core idea: the encoder maps the data from the input layer to the hidden layer,\n",
"\n",
"the decoder maps the hidden-layer data to the output layer,\n",
"\n",
"and training finds weights that make the resulting output resemble the input\n",
"\n",
"Autoencoder variants: the Variational Autoencoder, the Denoising Autoencoder, and others"
]
},
{
"metadata": {
"id": "0VAxJWZa9pkt",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 22
},
{
"item_id": 23
}
],
"base_uri": "https://localhost:8080/",
"height": 680
},
"outputId": "5b17da26-b5b6-4875-9b63-24e7405f4d38",
"executionInfo": {
"status": "ok",
"timestamp": 1521090514240,
"user_tz": -540,
"elapsed": 35640,
"user": {
"displayName": "sj jin",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
"userId": "107618035827318999649"
}
}
},
"cell_type": "code",
"source": [
"import tensorflow as tf\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"mnist = input_data.read_data_sets(\"./mnist/data\", one_hot=True)\n",
"\n",
"#########\n",
"# Hyperparameter settings\n",
"######\n",
"learning_rate = 0.01\n",
"training_epoch = 20\n",
"batch_size = 100\n",
"# Network layer options\n",
"n_hidden = 256 # number of neurons in the hidden layer\n",
"n_input = 28*28 # input size - number of image pixels\n",
"\n",
"#########\n",
"# Build the autoencoder model\n",
"######\n",
"# There is no Y, because the input itself serves as the target.\n",
"X = tf.placeholder(tf.float32,[None, n_input])\n",
"\n",
"# Set up the weight and bias variables for the encoder and decoder layers.\n",
"# They build the chain of layers below:\n",
"# input -> encode -> decode -> output\n",
"W_encode = tf.Variable(tf.random_normal([n_input, n_hidden]))\n",
"b_encode = tf.Variable(tf.random_normal([n_hidden]))\n",
"\n",
"# Build the layer with the sigmoid function.\n",
"# sigmoid(X * W + b)\n",
"# Encoder layer\n",
"encoder = tf.nn.sigmoid(tf.add(tf.matmul(X, W_encode),b_encode))\n",
"\n",
"# Make the encoder's output smaller than the input to compress the data and extract features,\n",
"# and make the decoder's output the same size as the input so it reconstructs the input.\n",
"# Varying the hidden-layer design and the feature-extraction algorithm yields different kinds of autoencoders.\n",
"W_decode = tf.Variable(tf.random_normal([n_hidden, n_input]))\n",
"b_decode = tf.Variable(tf.random_normal([n_input]))\n",
"\n",
"# Decoder layer\n",
"# This decoder is the final model.\n",
"decoder = tf.nn.sigmoid(tf.add(tf.matmul(encoder, W_decode),b_decode))\n",
"\n",
"# The decoder must reproduce the input as closely as possible, so to evaluate the decoded result\n",
"# we use the input X as the ground truth and take its difference from the decoder output as the loss.\n",
"cost = tf.reduce_mean(tf.pow(X - decoder,2))\n",
"optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)\n",
"\n",
"#########\n",
"# Train the model\n",
"######\n",
"init = tf.global_variables_initializer()\n",
"sess = tf.Session()\n",
"sess.run(init)\n",
"\n",
"total_batch = int(mnist.train.num_examples / batch_size)\n",
"\n",
"for epoch in range(training_epoch):\n",
" total_cost = 0\n",
" \n",
" for i in range(total_batch):\n",
" batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n",
" _,cost_val = sess.run([optimizer, cost], feed_dict={X:batch_xs})\n",
" total_cost += cost_val\n",
" \n",
" print('Epoch:','%04d' % (epoch + 1),'Avg. cost = ','{:.4f}'.format(total_cost / total_batch))\n",
" \n",
"print('최적화 완료!')\n",
"\n",
"#########\n",
"# Check the results\n",
"# Visually compare the inputs (top row) with the model's reconstructions (bottom row).\n",
"######\n",
"sample_size = 10\n",
"\n",
"samples = sess.run(decoder, feed_dict={X:mnist.test.images[:sample_size]})\n",
"\n",
"fig, ax = plt.subplots(2, sample_size, figsize=(sample_size, 2))\n",
"\n",
"for i in range(sample_size):\n",
" ax[0][i].set_axis_off()\n",
" ax[1][i].set_axis_off()\n",
" ax[0][i].imshow(np.reshape(mnist.test.images[i],(28, 28)))\n",
" ax[1][i].imshow(np.reshape(samples[i], (28, 28)))\n",
" \n",
"plt.show()"
],
"execution_count": 1,
"outputs": [
{
"output_type": "stream",
"text": [
"Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.\n",
"Extracting ./mnist/data/train-images-idx3-ubyte.gz\n",
"Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.\n",
"Extracting ./mnist/data/train-labels-idx1-ubyte.gz\n",
"Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.\n",
"Extracting ./mnist/data/t10k-images-idx3-ubyte.gz\n",
"Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.\n",
"Extracting ./mnist/data/t10k-labels-idx1-ubyte.gz\n",
"Epoch: 0001 Avg. cost = 0.0598\n",
"Epoch: 0002 Avg. cost = 0.0362\n",
"Epoch: 0003 Avg. cost = 0.0315\n",
"Epoch: 0004 Avg. cost = 0.0285\n",
"Epoch: 0005 Avg. cost = 0.0269\n",
"Epoch: 0006 Avg. cost = 0.0262\n",
"Epoch: 0007 Avg. cost = 0.0254\n",
"Epoch: 0008 Avg. cost = 0.0248\n",
"Epoch: 0009 Avg. cost = 0.0245\n",
"Epoch: 0010 Avg. cost = 0.0243\n",
"Epoch: 0011 Avg. cost = 0.0240\n",
"Epoch: 0012 Avg. cost = 0.0237\n",
"Epoch: 0013 Avg. cost = 0.0236\n",
"Epoch: 0014 Avg. cost = 0.0233\n",
"Epoch: 0015 Avg. cost = 0.0230\n",
"Epoch: 0016 Avg. cost = 0.0230\n",
"Epoch: 0017 Avg. cost = 0.0229\n",
"Epoch: 0018 Avg. cost = 0.0228\n",
"Epoch: 0019 Avg. cost = 0.0228\n",
"Epoch: 0020 Avg. cost = 0.0228\n",
"최적화 완료!\n"
],
"name": "stdout"
},
{
"output_type": "display_data",
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAlAAAACNCAYAAAB43USdAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJztnXl8VNX5h59JAgERZVGURRAV11pE\n3Pe6VS0K7liwYpW6VK1bbRWt0uJeu1gXRKQqriguVfvTqpW6lLRuqCgaqaBUgkrZISHJZH5/zOc9\n986dS8iFJHMmfp9/EmbuhHPmnHvue77vclKZTCaDEEIIIYRoMiWFboAQQgghRLEhA0oIIYQQIiEy\noIQQQgghEiIDSgghhBAiITKghBBCCCESIgNKCCGEECIhMqCEEEIIIRIiA0oIIYQQIiEyoIQQQggh\nEiIDSgghhBAiITKghBBCCCESIgNKCCGEECIhMqCEEEIIIRJSVugGiIAHH3wQgJUrVwLw9ttvM2HC\nhJxrrrrqKg4++GAADjrooFZtnxBCCCGySIESQgghhEhIKpPJZArdiG875557LgB33XVXk67fcccd\nAXj99dcB2HjjjVumYQVk4cKFAPTo0QOAxx57DIDjjz++YG1aV2praxk3bhwA1157LZBVD5944gmg\nbY6fEMI/ampqAFi0aFHee926dQPgnnvuYddddwWgX79+APTq1auVWlhcSIESQgghhEiIYqAKzLnn\nnrtG5WnQoEFOcfn0008BuO+++/joo48AePzxxwE444wzWqGlrcsnn3wCQElJ1sbv06dPIZuzXixf\nvpzrr78eCPozbdo0XnnlFQCGDRtWsLatC/PmzQPge9/7HgCzZ89O9PmZM2fSt29fADbaaKPmbVwr\n88477wAwePBgAJ588kkAjjnmGDfWvmExliNHjgTggAMOAOD000+nS5cu6/Q3Tdn46KOPGDhwIACl\npaXr21TRDMyYMcMp+M888wwAH374Yd513/3udwGorKx042mk0+kWbmVxIgOqQHzxxRcATJw40b22\n++67A/D8888DsMEGG9C+fXsgmMCzZ8/mjTfeAAI3V1vkX//6FwCdO3cGYM899yxkc9aJVatWAXDq\nqacWuCXNy4svvgiQt8g2lccff5xvvvkGgNtvv73Z2tXaVFdXc9xxx+W8duyxxwJZt62PBlRNTQ1b\nb701ELhxevbsCbBOxpPNAXP5VFVVOYO6e/fu693e5mL16tUAXHfddbz33nsATJ06FWhbht6iRYvc\nhvy6664DsvO0KZE677//fou2rS3i3x0uhBBCCOE5XilQFRUVAPzxj38EoHfv3nTs2BGA0047DQgC\n3exnsWLqUSaTccrTSy+9BMCGG26Yd/29994LwJtvvuleGzp0aAu3sjBUVVVx9dVXA3DRRRcVuDXJ\nMdfqI488AgSKTZS//e1vQKAumoQ+YMCAlm7iOtHQ0AAEbqp1Zf/992fMmDFAVqkBnNJaTHzwwQd8\n/vnnOa+dd955AJSVebW0OjX0tNNOc+rfr371KwB3r60Lt956KxC43J977jmvlKdXX30VgB//+McA\nzJkzx71nc8+eMW2BhQsXcuWVVyb6zKBBg4DAA1IMmHq6dOlSIKsmvvDCC0CgKF522WUADBw4sMXm\npBQoIYQQQoiEeFXGYLvttgOCgOk4LOV7r732Svz3t9xySwAuv/xyABfIWkiWLl3qdt+N7YT23ntv\nAP7973+71ywQcPvtt2/BFrY+FRUV7LPPPgB8/PHHAGy77baFbFIibAfUWAxMQ0ND3vumPL3wwgts\nscUWLdfAdcTmmwUJ33zzzUBylfDhhx92cWHLli0DsvF+xUJ9fT0Ahx12GNOmTct5b8aMGUDwHfnC\nzJkzgdx2LV++HFj3737BggUuvd0SWW677TbK
y8vXp6nNgs0rWze+/vprAFKplLvGysfcdNNNRaVC\nrVq1ysXOWjFlU68/++wz9thjDyCIH12+fDknn3wyALvssguAW1/79+/v1FLfVeCqqiogGzd5zz33\nAPDVV1+t9XNlZWVOZTv88MMBuOaaa5ol9s0rnfmpp54CgkVop512cou2BRU//fTTQPYh079/fyBX\nljVsUliApGUOQWBI/eIXv2juLiRmbTWAJk+eDOACHyGYBBYM2tYYM2YM22yzDRCMVTFgWU3m6mqM\nHj16uAw0C7o1N8iWW27pXdZLVVWVq4Bvdch++tOfrtPfmjJlSrO1qxB8+eWXADnGk603vhlOlnH3\n8MMPu9fMdbw+hhPAbrvt5l6zue+D8QSBa9HclXHccccdQPa7sevN0PAxsNxcjkcccYRLJApvqAG2\n2mort55YUsDSpUvdWhM2IH1n/vz5QJBocueddwKwZMkSd43VqRoyZIh7Hv785z8Hgizhl156yc3Z\nhx56CIA99tiDo48+er3bKBeeEEIIIURCvFKgdthhh5yfEEiTp5xyCgA33HADAHPnznUK1GeffZb3\nt0yONAWqf//+bjdSLC6vd999l7POOgsI0nB79uzpguzbtWtXsLa1BLazeOWVV9y4+y4rG5WVlbz9\n9ttA4LqLc+FZgOfRRx/tJHYLMv/Zz37mrvvLX/4CZOsJ+cC4ceOcy8d2vUnHprq6GsgqzT6m+DcV\nS38PM3z48AK0ZO1YsL6tGQcddBD777//ev1NS2SZP38+l1xyCQAHHnjgev3N5mTp0qXccsstOa9Z\nCETfvn3zFNDFixe7gGO73+ISeQqFqdH2LHjjjTf4/e9/DwTPxzDRchTFeNLBmDFjmDRpEpDvpjvx\nxBOdK9LUpnDSxmuvvQbA+PHjAfjRj37kkgl69+4NZGvvra8LG6RACSGEEEIkxisFqil06NAByFWR\nwopVFIudWrhwoSvGaDFEvjN9+nSnPBlnn312UQVUJ8GqOgNeBlHHYarZwQcfvMaAxgEDBrg0alOZ\nwuqhnfdn6mpVVZWLKZkwYQKQ3XUVIi7DSos8+OCD7LzzzkAQd5AUU0FKSkpcAUpfYmaSYOVGIFDh\nbOx8w2JeTPHr169f4nlUV1cHBDv6X//61+5vWyKBT3z66acuvd0UJYuvra+vd/fihRdeCMCsWbNc\nXJsVQjUFuNDB5bW1tS5W6/777wdgs8024yc/+QnQdrwQlphhwfE33HCDK/65+eabA4F6f+aZZzaq\nfttYmnJ38803O6Uu6akJa0MKlBBCCCFEQopOgWoqln1iO4qGhgb+8Ic/AIXfVawN2yE9+uij7jVL\nFTdffVskXCR07NixBWxJ07FdTpz6ZHPv3nvvbdTPbjEKFtcwfPhwN38t3f/www8vSPFY2/WuWLGC\nK664Yp3+hql0f/rTn4BshtNvfvMb93uxYLGW//d//+deszg2i63wncmTJ7vYD4uVaawMxUsvveSy\n9qxQoWExOb5RW1vrlDeLATPKyso47LDDgKCApJVKgeBsRl/m5fTp012cmWWZvfXWW84T01aw813t\n+ZbJZFyZoX/84x9A48p3Q0ODK11x/vnnA7DvvvsC8L///c9dZ6rWhRde2Czqd5s1oKxyt6Uvdu/e\nfZ1dD63FihUrgGCBrqmpYbPNNgNwD69iCapOgj2Yfvvb3wLZStVxwZHFgqX733333UDTgxQPPfRQ\nIJt++/LLL7dM45qInXEWfmiua+X7P//5z0BgZA4ePLhoEjnCWJJAmKRVn1ubiy++GAiqx8+bN8+5\ns+xhYmtlHJlMJi/13cZu3Lhxzd3cZsFqBEEQ8B9XZTvuHrOHri/rbLiNduhzsR/AHYeVfgkHg9sY\nvPXWW0BQ/iR8ELKtre+88467P+2ZaWUQwljdsjFjxjSLkSwXnhBCCCFEQtqcAvWf//wHCHZexvTp\n010wmq+c
eOKJQFA1F+CCCy4Aiv/sv8awXZadDzhw4EDvzhJbG+HimWs6+25tmCKQTqfzinGOHTvW\nBWG3BuaatLPe1rVoJuSfLFBMZ26Fef3113P+3a1bN+du9xVLxjA31dy5c3n22WeBwF1i62KcK2/k\nyJF57snvf//7gL9r0hlnnOFUNUtptxI2s2fPdsUUbb3p1q2bc/PceOONAIwYMQII1IxCYSo2BMVQ\nBw8e7IpA9unTpyDtam522mknIAh7mDJlinuWn3DCCUBuEVBTj+IKDkeVp5KSElcp38pbNFeZCilQ\nQgghhBAJ8eosvObAAsVNgTJV56GHHvImMDCK+W73228/ICjZf9xxx/Hggw8C/vjkWwILRrXdVkVF\nhTvPyXdsxxqOhbG076RYvMbw4cOdAmXp51999VWr7vitD1byo6amhr///e9A05MwLBA+GrMxdepU\nhg0b1lxNbRVmz57tzuq0sdl6662bPS3aNxYvXuxOsrf16fnnnwf8Pb+wurraKW+LFy8GAnU3rGKc\ndNJJQPaoEItb/OCDD4DgvNRCx3mlUqnYorP2mq07dmzJ7NmzXVmfrbbayl1vcaam9PgeR1VTU+OO\ncLEjkzbddFMge9SVlfexxKNwaZEoV155pYshbu7g++Lyk6yFuro6FyxpEfbXX3894E9WRZTq6mp3\ns5rhZAwePLhNG06QDZw3l4KdI1YsxhPgDNx1YdWqVQD897//BXIrkRtWSb+156/Vl7HFeMKECU5e\nv/rqq9f4OavlVVlZ6RbtaBByMZ3HZSxZsiTPrWquhbbMuHHj3HjZWWS+Gk5Gx44dXeVpM/rMkIIg\nw9fW3bKyMk477TQAl/FmAcsXX3xxQV2VN954o2tnGJuLVpPLfq4Nc9faBsaMFN/o0KGDGwv7GYe5\nncMGlGU1P/LII0D20O+WOvlALjwhhBBCiIS0KQXqnnvucUGDP/zhD4FcGdNHxo8fn5dOa4Gp0UD4\ntsjjjz9OVVUVEJx3+G3hd7/7HRBf88qqzVtF5EKdZ3XNNdcAWRfI5MmTARo9S82CblOp1Borsx91\n1FHN28hWwPoOQfD0OeecU6jmtDjTp08HsrXJbO757vYJs+OOOwJB4L+V0ujWrZtTNMKJKueddx4A\nM2fOBILSDuPGjXP3aSG49NJLOfnkkwEYMmQIkPVUmLobVUXXhpX1ueuuuwDYZZddGD16dHM1t9Ww\n+nRxCtrTTz8NBGUfWhIpUEIIIYQQCWkTQeQzZswAsunRVhnYim/5rkB17NgxL/bJznHy6UTwluKq\nq67i2muvBXA/43z+vmIFP2fNmuVea0oQ+ciRI13yQFwgsqlxttPyAYvVsp9x7LXXXu53U1BvvfXW\nnGvs3KtiwKobd+3a1e32LUbPztlsi9gp97fccotTZ6Lj2BYxxerAAw8EstWvrXCjTydY2Hpja82l\nl14KxBcHbYxRo0blFB4tBp5//nmGDx8OBPcnBOVR3njjDYBWKYUjBUoIIYQQIiFFHQNVXV0NBLv1\ndDrtCqD5rjw1hh3psqbMAcswjBYTs9ROCL6buOKL9rkrrrii4Kd5h2NLLMurmDABNxyL8N577+Vc\nM3ToUObNm5fzWkNDQ6OZIT4pT4YV7Wtq8b4BAwbEvl5VVeWyC33HYmLC42trTFvGijZ26tTJqVHf\nBuwol3PPPReAO+64g/vuuw+As88+u2DtimLZsYbF/L788stOebFxO+uss9wxWbfddlsrtrJ5saK+\np5xySo7yBNn4PCuQ2ppFmIvWgGpoaOAHP/gBAJ988gmQnVTFcghtY6ztYFK7ke1cHwsMvOOOOxL/\nP2eeeeY6tHD9serUX375ZUH+/+bCDiu1Q38Bdt11VyDXAG6slksU389XaypmXEajBIrFeIKgWjUE\nAfKFumdag2eeeQYIqjn37NnT1VT6NmAlG375y18C2eBzq8JvZSs22WSTwj
SuEQ455BD3u7nIrYRP\nZWUlTzzxROznimlsLaHGQlwga+BD1p1uhy23JnLhCSGEEEIkpGgVqEWLFrkKpcbkyZO9PZ9pTYwY\nMcKl2DaV8ePHr/E9ky/DhRdHjRoFwN57751zrcnVhcCqbqfTaZcWb6n7xcSRRx4JZHfqVo6hqZgS\ns+eeewJBarElQhQ7tpsvxsKZxlNPPeV+t0rk5kJvi9xwww1AMGZhd6WFCNTU1ACFK63RGpgXYMKE\nCYwcORLAVbO+/fbbCx76EMUKZJ5zzjmu4Klhay0EzwVTzK1PPmPzzgLlw1jxYbs3WxspUEIIIYQQ\nCSk6Bcr8n+F06QceeACAQYMGFaRN68PEiRNdwa9oOQMIApLj4pssSHCbbbZxrx1zzDEA9OjRo9nb\n2hxY2u2jjz7qXrNjFFqq3H5LYrvwl19+mccffxxoegyTpYUX27lwTcUSGQyf0sDXhiVm2NloEMRb\n+HosVEtQWlrqjkWxI3xsnS1kgcnWYtiwYe78uIkTJwLZ4rIWf+oLpojdeOONLsDazq6sqqpy6v75\n558PBEHyPmPPQ1OXwuVhzJtihX4LRdHVgYoeFgwwd+5cAPr27VuIJokE2IPJqur27NnTua58k8XX\nlffffx8IDKT77rvPuVEvuOACIBtY3a9fP6DtukLMDWJBrZYBZAd8+4xl3V122WVAth6SnbvVlg0H\nc+tXVFQA2Xlq7jz7LqxOWzFVJl8fbNPetWtXIFu1vBhqYlmIyyuvvOLGzjYBxYDVydttt92A3FCA\njz76CIDtt9++9RsWovi2/EIIIYQQBaZoFChLe7dqo+E6EFKghPAPc82aYlHo3eK6sHz5ciB72r2p\nM23V5QrBOmtjdvDBB7tx7NChA/DtcmGGserXzz33nDs9wEpbiObH3HTRiv833XRTbEB5IZACJYQQ\nQgiRkKJRoCZNmgTkFrGzaqzm6910001bvV1CCCHaPpZOv/POO7tK7YMHDy5kk9o0/fv3BwIPk5V9\nqays9OacWClQQgghhBAJKboyBsY+++zDiy++CBRXerQQQojiwwqoVlZWFrgl3w7sKBo769ayeH1R\nn6CIXHhCCCGEEL4gF54QQgghREJkQAkhhBBCJEQGlBBCCCFEQmRACSGEEEIkRAaUEEIIIURCZEAJ\nIYQQQiREBpQQQgghREJkQAkhhBBCJEQGlBBCCCFEQmRACSGEEEIkRAaUEEIIIURCZEAJIYQQQiRE\nBpQQQgghREJkQAkhhBBCJEQGlBBCCCFEQmRACSGEEEIkRAaUEEIIIURCZEAJIYQQQiREBlSRceWV\nV+a9lk6nSafTBWiNEEIUN5lMJu81ramiKciAEkIIIYRISCoTZ34LLygtLc3bBcW9VsxkMhlSqVTO\naw0NDZSUtB3bvrq6mo4dO+a8NmjQIN59990Ctah5+TaMYRxx/S5m4vpTX19PWVlZgVrU/MT1MZ1O\nU1paWqAWNS91dXW0a9cu57W2NoY+0bZXOCGEEEKIFqDozNKwYBbdSTS2Iyym3WKvXr0AOOyww/jt\nb38LQNeuXQHo0qWL+33o0KEA3HvvvUBx9dFIpVI0NDS434358+cDsO222wKwYsUKoLiUjdraWiDb\n9s022wyA5cuXu/dvueUWAC688EIAtwsulj7utttuAFRUVLi219XVufdt12v3rF2zePFiN4eLhUwm\n45Tf8Dz98MMPAdh9990BWL16NVBcu/6f/exnANx8881OvbAxKykpcb/bfWrjWEzKjfUhk8m4320c\nrV+Au++KbR21uVlaWur6Y2vmL3/5S7788ksAXnrpJQBWrlwJQPfu3fnf//7X2s1dJxpzloXHKzq+\nLflc9P4Ot8kQ90CJ+0LDNwoEX2Iqlcp7zRds0bKHz5w5c4Bsn63f9fX1AIwdO5avvvoKgCOPPBKI\nv3mK4QEMuWMYXsjGjx8PQLdu3YDgu2
nXrp234xj97u1hOnr0aFatWgUERkX79u357LPPYj9XUlLi\n5ThWV1cDOHfkf//7XyA7/8Jtt5/R8bEx7Nq1KwcffDAAf//731u+4c1EdCzS6TTHHHMMEBjLNl8X\nLVrUuo1LQKdOnYDgIXrUUUcB8Wtk2OCwnzavy8vLc9Yen4j2w+beiBEj6NGjBwC33norkG27vW+f\nszXZ1zXV5lv79u2BoJ/19fVUVFQAcNxxxwHZeVpTU+N+h2Adqq+vd/0Kr78+EF7z10Z4nkb72JLP\nfn9mhBBCCCFEkeC1AvXss8/yyiuvAPDzn/8cgI033hjIWqVxgav2mlmhYes1KkH7QKdOnTjttNMA\n3C7B2hdup0mw8+fPd33cYYcdgNaRKluS6A5vyZIlPProowAccsghQO534bsCZcHho0aNAmDWrFmu\nzeFg8pkzZ+b83G677fKu8YXw3Npkk02AfBddmDj3jl2/ZMkSZs2a1ZLNbXbiXAQVFRVUVVXlvD9i\nxAggdy3yaZ6edNJJrv0XXXQRANdffz0QP45hVcJ+N9UjnU679fnQQw9tuUavA9ZWU0lNrZ81axYb\nbLABAJdccgkAffv2dXPV1DV7bqTT6Rx3pg+E55Z5JqxtlZWVbizsGZjJZNyaYj/tHu7VqxfbbLNN\n6zW+iTQ0NOStL2HXpPXXvDEPP/ww77zzDhCov0888QSQfcba9TZ3mws/ZoQQQgghRBHhfRmDqNVv\nVmi3bt3YY489gCBYbsGCBXkxQx06dABg6dKl7LrrrgA888wzLd/wBJiv13YVZnmn02nX3+9973sA\n/POf/3S7iPfeew+A/v37A1lL3XZSPu16GyPsuzamTZvG0UcfDcAXX3wBBLElYXzr45IlSwDo2bMn\nEMQMlZWVubbafEyn0+53m7P2ub/97W9suumm7rO+8I9//AMIVEG7x+rq6vLaGd4lR/n3v//tFIHF\nixe3VHOblfActfiTXXbZhU8++QQIxsl2xF26dHHX+zZPO3fuDODUsw033BDIjmecOh+NgbL5WlVV\nxV577QUE96kvLFu2DICtt94agG+++QbIxm2ZCvGd73wHyN5vpkpF42cgPwnCB37xi18AcPHFFwO4\nJJVp06a5mDbrQ7t27Tj22GMBuOCCC4DgewlfV15e3gotbzprmnfbbLMNX3/9NRA8O8OKVdRmqK2t\ndUkrCxYsaNY2+rM6x1BRUcHYsWMBmDJlCpDNGoDsZP7ggw+A4OaAYFE3Cda+zNraWp599lkAXn75\nZSB4EBSSTCazRqkS4LrrrgPgzTffBLKT3R68m2++ec7nwhPHVzdXHNZf+zl27FjXR1vcw/jYp0wm\nw2233QYEbgAbj/Lycnej2wK98cYbu35YFoxJzwcddJAzjqPB2YVi9erVHH/88UCQEWou5/Ly8rwA\n1HD2VtRAnjp1qvseLLjeHmC+kkqlXJttbKqqqlzfzCixAG1fad++vcvEihoNpaWljSbm2Hy1jcER\nRxzhDJW5c+cCsOWWW7ZY25tKOp3m1VdfBbIbZwiMg7KyMrfhtmDrAQMGMG7cOAAXThEmmq1X6Htx\nk002cW06++yzgcCo32+//ZxRb5vO0tLSNRoXYdd8NDC9kNTX17t19NJLLwXggQceALLttPXU+nXq\nqaey//77A3D11VcD5ATO28bGnpnNZUjJhSeEEEIIkRCvXXj19fV5qYnhYFzbCZkC1bFjR5eaa6mq\n9u9+/fo5C9tUKl+wnU1c+uUuu+wCwEcffQRkZWcLjttiiy1yPl9aWupl4OrasDE1d06fPn1cssC8\nefOAXPncx74tW7aM7bffHsjf3TQ0NDglzWp33XjjjW6H+NZbbwHZul92/RVXXAHgFNhCc8kll3Df\nffcBuMBhc4GE1RkjTg21ebrRRhu53aFvqdONYW0944wzAHjooYfcTthcJKaUh/vv03wdOnQoU6dO\nBf
LViJKSkrx1Ni5549NPPwWyLsx99tkHCFR9H6iurqZPnz5A4FbfeeedAZgxY4bzYpjiW1JS4gKp\n7V4MK4m+ufDat2/PySefDMCkSZOAoG3hEihGXAKEUV9f795vSrmA1mLSpEmcf/75QKAk2XNx9erV\n/OY3vwGyJWIANt10U3cvWmmUAQMGAPD++++765q7vIgUKCGEEEKIhHgdA5VKpdwuKRocnclknF/b\ndhupVMqlZxqmTtXU1Dj/vC++bMP6FC3AOGnSJBcsZ9b0TTfdRN++fXM+35S4Bd+Ia7PtNGpqalyh\nxWj7fe3P6NGjncIZrigO0Lt3by6//HIgSJ0OB2yaWmq73qVLl3LzzTcDwdy2HVShuOuuu1zZDFPa\n4tSjuPGx66w0RXV1tVMYfYq7iCM8T02dsdjLuro6148//elPQG7/fZyr7dq1c9+5xRlaH8OKv83h\ncAX2119/HYAhQ4YA2f49/fTTrdf4JjJ//nx3L9m9ZfFOdXV1/POf/wRg+PDhQFZRs6K2++67b871\nZWVleYUmC/3c2HDDDZk4cSKQn2QSLrsQp5hFT32IS47w4V4877zz3HPQ4pbuv/9+IDuGV155Zd5n\n7LuorKwEgrjEVatWuTHbaKONgCDJYH3xw4IQQgghhCgivFagIN/qD/+7MZXFMi2s1EFDQ4M7t6rQ\nO4g1ES1ZMGPGDLeTsjPxDjzwQHe97Rjs50YbbeS98mSEd0K2w/3Vr34FZLOD7rzzTsDfsYry2muv\nuYwf64+pRx9//LHbHcUVn7SzD++44w4guzO2cS+08mTU1dW50gPW9vBRC9HYu/DxCXYvXnXVVe7z\nlhHrw263McKZhKZmWxYe4GJnrOyE7zGIzz//vFOwTzrpJADOPPNMIBtTafelxVx+/fXXrtCmrZ82\nZqtWrWr0fLJC8cUXXzh1zeL2rM0lJSWu7IvFb82cOdP129L7LR6oXbt27n72ZS166623XPvi4p3s\ntTj11FQdK4VTXl7uYoJ9KplSU1PjYnwtLs1iRsN9NFKplPNgzJ49G4Df//73AHz++edujlv/mwuv\ng8gbGhpiz7QzolXHS0pK3GSw+hgmrW+44YZuIfeJ8ESwlG5blL/55hvXN6uo26tXr7zJE64b5Uug\n49qIOxTajIYBAwbwr3/9C8g/68lXwga9Ld5WZ6dz5855br1w/+13G8e+fft6V1enc+fOLhU4Gmya\nSqViDUPrq9WPMrfsZptt5v6W74QNKHuAhvtvRu+Pf/xjwJ9A4zXRsWNH9xCJlmBIp9Mu6SZcG8iS\ndaJlKaqrq72smn/AAQcwffp0IChVY7X/SkpK3BjZONbX17t71koAmJHp45o6f/58VzMubsMcfS6G\nf7efNm51dXVeBY8bqVTKPQ/msgiCAAANV0lEQVQsucZqWK1YscLVCbQkKwgSjuw+tQ1PeDPX3Phh\nUgshhBBCFBH+aHbku+viVIfGLMmGhga3g7Cd4e233w7A3Xff7SS+8EnbrU2comYuOCviZoUVy8vL\n3fk+AwcOdNdH2x1O81xTVXNfiBs/GzNj33339a7dUaInhadSKfedW6FCC1gE8twAcanFppAuW7Ys\nT3Ft7XR/Uyks2H3YsGF5aqB9B+3bt88LTg3//thjjwHw/e9/H8gqjJZObBW7fXGPGNF5mslknFvZ\nxrJ79+6MHDkS8Fd5iq43H3/8sati/Ze//AUIxviQQw5h2LBhQFDAd++99+aYY44Bgvn5wx/+EMgm\nuUQVVR/W1BUrVjhPhJXcsNIvW265pTtX1daYzp07O+XtrLPOAuDJJ58Esve3KXDW19Yulmpqkyna\nDzzwgAuAt/fCJ3DEnQoQTcay7yecJGD9K4QiFQ1g32CDDZxH5qmnngJwCQtlZWV5amh43YmutSUl\nJS705bnnngPiCzSvC36tWkIIIYQQRYBX2/zoOXZhtSVabDJ87EB4
l37PPffk/E0LFBw+fLjbaRUy\nniaaPtrQ0ODaZX5662P//v1d+QJ7Lax0GLZjCB8L42vMUHTn0NDQwE033QQEqscpp5zibfsN+85t\nN15aWup2UbZzMsLHJcTtmOw1U+Kqq6sLpjwZNictODp8HES0EG1c/zKZjPs+bDdv9/epp57q0v99\nU56MqAK1aNEiV4rC3tt22229Oz8sSnQe9enTh0ceeQQgr+xG3D1nygwEx2jZGhvueyHH0dpt83L0\n6NGcc845QNA3u0/D54Xa0R9TpkzhgAMOAAL12JIchg4d6vpZqD6a8mT9LC0t5bLLLgOCuMJTTjkF\nyKqC5smwchObbLIJy5cvz/mbpo6HyzQUUkUNJyZANlHDYoFt3tk4jB492sUEm7IY7p/dn7ZGjxw5\nkgkTJgDN75HxyoAywme7NVZ7I+6Gty8qapSUl5fnLYo+PKQXLlzoJootaDbI55xzjgtuDAcGRhe8\ncIZiYw9qH7C2mkFYV1fnAj4tsHH33Xd310fbH35Y+8DChQuBXOncFvKwlByutQO5N7JVdrZq1nV1\nde57skNCCxV0bXXVPv30U3dPmaFrCRpDhgxx89SMpcrKSldvx2qZmetj2bJledXJfTOkolm/CxYs\ncK4cY9CgQWsMKfBtnhqZTCbPPWuu2JKSEtdvCwu47LLL3Lpk58WF60f5dE6crYsjR450rrg417mF\ndZx44olA9ns44ogjgCCD69prrwWy959lcvuS0JJOp/PW+XAogWWJmsHfuXPnvEO7rR7d5ZdfnmdU\nFNKQsmfAVltt5YwiC2mxf2+wwQZuXG1dfOGFF9zaavPztddeA7InJrRUn/xatYQQQgghigAvyxjE\npbjHueuiO4HZs2e7Ksn77bcfEJzRFD4nzgdsx3bxxRfzhz/8Acjv67Jly2JPqbfPRncOce4934ju\nVCdPnszpp58OBJLzE0884X09HWO77bYDsgqN9c2CbS1QMa5kQTqdZtasWUBQ98vqSKVSKZda3txn\nNyXF/v9Bgwbx+eefA8Eu0RSZzTffPEc1huwu0Ha9JsvbSQAvvviiq/Him/JkRNebww8/3AUk29jM\nmTPHuUJ8n6c2N+vr652SYiqT7djr6urcmjJ37lwgO+6mbliauH0+XNrBh/6H3d3WrldffRXAuegu\nuugiV1MvrLzZeI8aNQrAuTm33nprd58WOt3f5h0Eaoy1KZzQYWpL+FzDaOiEjeGTTz7pSj0Uun9h\nwuqmrR/2LEyn025c7ZmxatUqd70pjD/5yU+A3BNNmhs/Vy8hhBBCCI/xUoEy1tY0e//LL78EYIcd\ndnC7ENstde3aFYjfIRVy12T+2s0339z5eG2HYX7dDh06xKaHR3f7cdjnfEmvjo6ltW/MmDEuiNwK\npd1yyy1e7Wwbw3ZF1dXV7ndTkqIJEBDsCsePH+8qr0fPZerQoYOLN7EKuuGSCK1JOHbLFKiZM2cC\nQQr1Flts4VQpi3Oqqalx6eIWdGznkv3nP/9xqodP87Sx8xl79erFkiVLgOBcypkzZ+bEnvhMOFU9\num7EjcGFF14IwG233eYClK2qd7jwq08xUOG1Mpp8EVc1Pxxzau224GSLS914441dUdtoPGprs2DB\nAiAbg2ixl3bfnXDCCUD23rK+m8rUrl07N7amGlvfjz32WKe2+bDmRpWyOJYtW+aqydvamclk3Ppi\n61ScUtrcSIESQgghhEiIVwpUY5ljcUqM7Q7tjJza2lqXXWCF4MzyDKevFpLG+mi7CdtphOOfLHOm\nXbt2eX5so7a21u2SfMXG0X7269fP9dcKLg4bNsz7GCjb5fTr1y/vPcv8tJifSy65hLfffhsIYhc6\nderkYqWiu66OHTsW/NihxhSFaLbomu5XK4AXLTY5Z84cVx7BV6z/8+fPB7JZQaYa33333QCcfvrp\n3sZwGdE4pzBxZQysj5b9mclk
ePPNN4HgnLjwGuZD/xubq+HyL419LnoPWsHFUaNG5WWwtTbh0j1r\nes/ae8IJJ7i+WjzXmDFjuPTSS3OuM0Wud+/ezJkzp+Ua30QaG0N7z9Sm73znO660g41XeXm5y/oN\nF522a1pqnnpVxiA6ycPpmtH36urqOOqoo4BgkejevTtjxowBGpeSC5niH/0/e/Xq5QwIu0F22mkn\nIFuDxRY5W9jKy8vZbbfdAJyLxFx/hxxyiDO0ogHphSQcxGjtsjGzvgMMHjw45zPh630jajiFDx01\nw9ceWitXrsw7xHL58uVujtr4RSvSF5Lo/ZPJZJrkbgtXprb+GOaG9LV2UnieGuYqSaVSzugbMWKE\ne833eRo1nFavXu3cjnEGsFV7Njd0p06d3GHX0bW4rq7ObeKilflbkziXpN1v0U1m3HoYLg1jf8vK\nGoT/thnT9n20FtH7bdGiRW6NsXvJzosLn11oCVQLFy50xoWNk/3N8N82V6WdA9iaRMcwfI6fzTcr\nd/PVV1+5620OT5w4ke9+97s516/pbzcnhd8+CCGEEEIUGV4pUEZ4p7smBeXdd99l2rRpQLDjf+65\n51wQa2MuIB92i9av8MnaFqRqO4ilS5fmtbW2ttalcFqw+YcffgiQo3L4oDwZ4Z26je2UKVPcv63S\nrLkN4kpU+Iql+YdTwC3t1lS2cEqu7Yhra2vd7nHo0KEA/PnPf269hjeRsPsgGmQadntElZhMJuOS\nI6zquO3gfaOx87QmT54MZMfL+hGu/G/4Pl+tP+GCwtGwiPr6epfIYdcMHDgwJxgXgh19WN3xIQU+\n7AaKqg7h8Ym6i8KV9e06uzdtDkPrK09RTD3q1q2bc/FbH6y9u+66K6+//joQJFLNmTPHXWcqcLhf\nRiGUpyhxz35Tsi08J51Ou/lm8/XYY4/Nuwdb456UAiWEEEIIkRAvFahw4LdhlqlZ4cOGDXPX/frX\nvwayRd98SMVsCuGjWWyX+8YbbwCw4447AlllyVQpU53mzZvnVDZTnnwnrCjZbsLOckqlUi7N3XYV\nPgSmNhVLYFi5cqXbDdkRJ3Hn2NlrnTt3dtfb/PURi5GIuxfD91icUvrwww8DuXFuPhJWzaJp+Q8+\n+KC7bquttor9XDEQLqkRPQLKSKfTeYV7Fy9eTEVFBRCcHecr1p9Vq1bllRwIq6VR70S4HIPP2Pq4\nevXqNfbvr3/9K1dddRUQlJ0IJxd98803rdrmpIRVQfPEWHFsU7BTqZQrPmxH7oTncmuOpVdZeHGE\nKzdDcL7NkCFDnMvH6tJ07NgxT2b2PcgTgr6Zyycsy8adBWfYdYUM4Gwq1m7ro50v1bt3b5e1FnfQ\no8/jFsXG8f333weCWjoffPABhx56KAA/+tGPANhzzz3p0qULEIxb3Dl5PtGUc9/M8Kiurnauacs8\n/OlPfwoElYJ9JDpP9957byC7WTn11FOB4LzNcABuMc1TIzqe9fX17mFlAcU9evRwWU1//etfgcYz\nUH0hmu0bJmpAxhlVPtS1aoxov+JcX7ZxefPNN936Y5vvYnguvvDCCwD84Ac/AIL1tUuXLsyYMQPA\nnWgAjYfttBR+zg4hhBBCCI/xXoGKcvzxxwMwderU2Pd93zmILI3VNoHi2CGtjW/7XNx9990BXB0h\n3xW2ONam7raFebq2PqxcuRII1ItiJOk4Fdu4rq29xdafOGz+2XyMsrZnSkvw7VzZhRBCCCHWg6JT\noIQQQgghCo0UKCGEEEKIhMiAEkIIIYRIiAwoIYQQQoiEyIASQgghhEiIDCghhBBCiITIgBJCCCGE\nSIgMKCGEEEKIhMiAEkIIIYRIiAwoIYQQQoiEyIASQgghhEiIDCghhBBCiITIgBJCCCGESIgMKCGE\nEEKIhMiAEkIIIYRIiAwoIYQQQoiEyIASQgghhEiIDCghhBBCiITIgBJCCCGESIgMKCGEEEKIhM
iA\nEkIIIYRIiAwoIYQQQoiEyIASQgghhEiIDCghhBBCiITIgBJCCCGESIgMKCGEEEKIhMiAEkIIIYRI\niAwoIYQQQoiEyIASQgghhEjI/wPwtkEtTW0EEgAAAABJRU5ErkJggg==\n",
"text/plain": [
"<matplotlib.figure.Figure at 0x7fcc27dc6e10>"
]
},
"metadata": {
"tags": []
}
}
]
},
{
"metadata": {
"id": "dLYlB3OBHTXG",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 8. AutoEncoder summary\n",
"\n",
"Autoencoder training procedure\n",
"\n",
"1. Multiply the input by the input-to-hidden weights and pass the result through a sigmoid\n",
"2. Multiply the result of step 1 by the output-layer weights and pass it through a sigmoid\n",
"3. Compute the MSE (Mean Squared Error) from the value in step 2\n",
"4. Minimize the resulting loss with an optimizer such as SGD\n",
"5. Update the weights via backpropagation\n",
"\n",
"Stacked AutoEncoder: an autoencoder built by stacking several hidden layers\n",
"\n",
"Denoising AutoEncoder: an autoencoder that can remove noise (the basic training procedure is modified to strengthen its reconstruction ability)"
]
}
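Steps 1-3 of the training procedure above (encode, decode, MSE against the input) can be sketched without TensorFlow. This is a minimal NumPy forward pass with random, untrained weights; the batch size and the 0.01 weight scale are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_input, n_hidden = 784, 256            # same sizes as the autoencoder cell above

X = rng.random((100, n_input))          # a batch of flattened 28x28 "images"
W_enc = rng.normal(scale=0.01, size=(n_input, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(scale=0.01, size=(n_hidden, n_input))
b_dec = np.zeros(n_input)

hidden = sigmoid(X @ W_enc + b_enc)     # step 1: encode to the smaller hidden layer
output = sigmoid(hidden @ W_dec + b_dec)  # step 2: decode back to input size
loss = np.mean((X - output) ** 2)       # step 3: MSE against the input itself
print(output.shape, float(loss))
```

Steps 4-5 (optimization and backpropagation) are what `tf.train.AdamOptimizer(...).minimize(cost)` handles automatically in the cell above.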
]
}