@seongjoojin
Created March 29, 2018 14:13
study180331
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "study180331.ipynb",
"version": "0.3.2",
"views": {},
"default_view": {},
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"metadata": {
"id": "ib7E3JRvRizY",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 1. Autoencoder \n",
"\n",
"Notes from the book *3-Minute Deep Learning* (3분 딥러닝)\n",
"\n",
"- A neural network widely used in unsupervised learning\n",
"\n",
"- A network trained to reproduce its input at its output\n",
"\n",
"- Its defining feature is a middle layer with fewer nodes than the input layer\n",
"\n",
"- This compresses the input data, which also makes it effective for removing noise\n",
"\n",
"- The encoder maps the input layer to the hidden layer, the decoder maps the hidden layer to the output, and training finds the weights that make the output resemble the input\n",
"\n",
"- Variants include the Variational Autoencoder, the Denoising Autoencoder, and others"
]
},
{
"metadata": {
"id": "hj3hoRRwPPfP",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 22
},
{
"item_id": 23
}
],
"base_uri": "https://localhost:8080/",
"height": 651
},
"outputId": "0e2e930a-c477-40f3-d9a5-efae567a00b0",
"executionInfo": {
"status": "ok",
"timestamp": 1521900055598,
"user_tz": -540,
"elapsed": 35324,
"user": {
"displayName": "sj jin",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
"userId": "107618035827318999649"
}
}
},
"cell_type": "code",
"source": [
"import tensorflow as tf\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"mnist = input_data.read_data_sets(\"./mnist/data\", one_hot=True)\n",
"\n",
"#########\n",
"# Options\n",
"######\n",
"learning_rate = 0.01\n",
"training_epoch = 20\n",
"batch_size = 100\n",
"# Network layer options\n",
"n_hidden = 256  # number of neurons in the hidden layer\n",
"n_input = 28*28  # input size - number of image pixels\n",
"\n",
"#########\n",
"# Build the model\n",
"######\n",
"# There is no Y, because the input itself serves as the target.\n",
"X = tf.placeholder(tf.float32, [None, n_input])\n",
"\n",
"# Weight and bias variables for the encoder and decoder layers.\n",
"# They form the chain of layers below:\n",
"# input -> encode -> decode -> output\n",
"W_encode = tf.Variable(tf.random_normal([n_input, n_hidden]))\n",
"b_encode = tf.Variable(tf.random_normal([n_hidden]))\n",
"\n",
"# Build the layers with the sigmoid activation:\n",
"# sigmoid(X * W + b)\n",
"# Encoder layer\n",
"encoder = tf.nn.sigmoid(tf.add(tf.matmul(X, W_encode), b_encode))\n",
"\n",
"# The encoder output is smaller than the input, compressing the data and extracting features;\n",
"# the decoder output has the same size as the input so it can reproduce the input.\n",
"# Varying the hidden-layer design and the feature-extraction algorithm yields different autoencoders.\n",
"W_decode = tf.Variable(tf.random_normal([n_hidden, n_input]))\n",
"b_decode = tf.Variable(tf.random_normal([n_input]))\n",
"\n",
"# Decoder layer\n",
"# This decoder is the final model.\n",
"decoder = tf.nn.sigmoid(tf.add(tf.matmul(encoder, W_decode), b_decode))\n",
"\n",
"# The decoder should reproduce the input as closely as possible, so we use\n",
"# the input X as the ground truth and take its difference from the decoder output as the loss.\n",
"cost = tf.reduce_mean(tf.pow(X - decoder, 2))\n",
"optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)\n",
"\n",
"#########\n",
"# Train the model\n",
"######\n",
"init = tf.global_variables_initializer()\n",
"sess = tf.Session()\n",
"sess.run(init)\n",
"\n",
"total_batch = int(mnist.train.num_examples / batch_size)\n",
"\n",
"for epoch in range(training_epoch):\n",
"    total_cost = 0\n",
"\n",
"    for i in range(total_batch):\n",
"        batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n",
"        _, cost_val = sess.run([optimizer, cost], feed_dict={X: batch_xs})\n",
"        total_cost += cost_val\n",
"\n",
"    print('Epoch:', '%04d' % (epoch + 1), 'Avg. cost = ', '{:.4f}'.format(total_cost / total_batch))\n",
"\n",
"print('Optimization finished!')\n",
"\n",
"#########\n",
"# Check results\n",
"# Visually compare the inputs (top row) with the model's reconstructions (bottom row).\n",
"######\n",
"sample_size = 10\n",
"\n",
"samples = sess.run(decoder, feed_dict={X: mnist.test.images[:sample_size]})\n",
"\n",
"fig, ax = plt.subplots(2, sample_size, figsize=(sample_size, 2))\n",
"\n",
"for i in range(sample_size):\n",
"    ax[0][i].set_axis_off()\n",
"    ax[1][i].set_axis_off()\n",
"    ax[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))\n",
"    ax[1][i].imshow(np.reshape(samples[i], (28, 28)))\n",
"\n",
"plt.show()"
],
"execution_count": 1,
"outputs": [
{
"output_type": "stream",
"text": [
"Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.\n",
"Extracting ./mnist/data/train-images-idx3-ubyte.gz\n",
"Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.\n",
"Extracting ./mnist/data/train-labels-idx1-ubyte.gz\n",
"Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.\n",
"Extracting ./mnist/data/t10k-images-idx3-ubyte.gz\n",
"Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.\n",
"Extracting ./mnist/data/t10k-labels-idx1-ubyte.gz\n",
"Epoch: 0001 Avg. cost = 0.0627\n",
"Epoch: 0002 Avg. cost = 0.0349\n",
"Epoch: 0003 Avg. cost = 0.0300\n",
"Epoch: 0004 Avg. cost = 0.0275\n",
"Epoch: 0005 Avg. cost = 0.0257\n",
"Epoch: 0006 Avg. cost = 0.0246\n",
"Epoch: 0007 Avg. cost = 0.0241\n",
"Epoch: 0008 Avg. cost = 0.0233\n",
"Epoch: 0009 Avg. cost = 0.0227\n",
"Epoch: 0010 Avg. cost = 0.0224\n",
"Epoch: 0011 Avg. cost = 0.0222\n",
"Epoch: 0012 Avg. cost = 0.0220\n",
"Epoch: 0013 Avg. cost = 0.0218\n",
"Epoch: 0014 Avg. cost = 0.0214\n",
"Epoch: 0015 Avg. cost = 0.0211\n",
"Epoch: 0016 Avg. cost = 0.0210\n",
"Epoch: 0017 Avg. cost = 0.0209\n",
"Epoch: 0018 Avg. cost = 0.0208\n",
"Epoch: 0019 Avg. cost = 0.0208\n",
"Epoch: 0020 Avg. cost = 0.0207\n",
"Optimization finished!\n"
],
"name": "stdout"
},
{
"output_type": "display_data",
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAlAAAACNCAYAAAB43USdAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJztnXmcFMX5h59dVlyIJyqKEBARL6LI\n4a3x1ngbFEUlolGj4H0rmIgJKl5JNIKGqNEQRfGIRv2JCtEIBlQ8UAMBEQ/U9QSRYxfYnfn90Z+3\nuqend9mG3Zma9fv8s8tM71A1VV391vc9qiybzWYRQgghhBCNprzYDRBCCCGEKDVkQAkhhBBCpEQG\nlBBCCCFESmRACSGEEEKkRAaUEEIIIURKZEAJIYQQQqREBpQQQgghREpkQAkhhBBCpEQGlBBCCCFE\nSmRACSGEEEKkRAaUEEIIIURKZEAJIYQQQqREBpQQQgghREoqit0AEfLAAw8AsHTpUgDeeOMNxowZ\nk3PNr3/9a/bff38A9t1334K2TwghhBABUqCEEEIIIVJSls1ms8VuxA+dIUOGAPDnP/+5Uddvv/32\nAEyZMgWA9ddfv3kaVkS++eYbANq3bw/AI488AsCxxx5btDatLitWrGDEiBEAXHfddUCgHj7++ONA\nyxw/IYR/1NTUALBgwYK899q1awfAPffcQ+/evQHo0qULAJtvvnmBWlhaSIESQgghhEiJYqCKzJAh\nQ+pVnnr16uUUl/fffx+A+++/n5kzZwLw6KOPAnD66acXoKWFZfbs2QCUlwc2fqdOnYrZnDVi8eLF\n3HDDDUDYn5deeokXX3wRgGOOOaZobVsd5s+fD8B+++0HwNy5c1P9/XvvvUfnzp0BWG+99Zq2cQXm\nzTffBKBPnz4A/OMf/wDgqKOOcmPtGxZjOXDgQAB++tOfAnDaaaexwQYbrNZnmrIxc+ZMevbsCUCr\nVq3WtKmiCXj77bedgv/UU08B8N///jfvuh133BGAOXPmuPE06urqmrmVpYkMqCLxySefAHD33Xe7\n13beeWcAJkyYAEDbtm1p3bo1EE7guXPn8sorrwChm6sl8uqrrwKw7rrrArDrrrsWszmrxbJlywD4\nxS9+UeSWNC0vvPACQN4i21geffRRvv76awBGjRrVZO0qNNXV1fTr1y/ntZ///OdA4Lb10YCqqamh\nW7duQOjG6dChA8BqGU82B8zlU1VV5QzqjTbaaI3b21QsX74cgOuvv54ZM2YA8NhjjwEty9BbsGCB\n25Bff/31QDBPGxOp88477zRr21oi/t3hQgghhBCe45UCNW3aNABuu+02ADp27EibNm0AGDRoEBAG\nutnPUsXUo2w265SniRMnArDOOuvkXX/fffcB8Prrr7vXjj766GZuZXGoqqrimmuuAeCiiy4qcmvS\nY67Vhx56CAgVmzjPP/88EKqLJqF37969uZu4WmQyGSB0U60ue++9N8OGDQMCpQZwSmsp8e677/Lx\nxx/nvHbuuecCUFHh1dLq1NBBgwY59e83v/kNgLvXVofbb78dCF3uzzzzjFfK08svvwzAL3/5SwA+\n/PBD957NPXvGtAS++eYbrr766lR/06tXLyD0gJQCpp4uWrQICNTE5557DggVxcsvvxyAnj17Ntuc\nlAIlhBBCCJESr8oYbLPNNkAYMJ2EpXzvtttuqT9/iy22AOCqq64CcIGsxWTRokVu993QTmj33XcH\n4LXXXnOvWSDgtttu24wtLDzTpk1jjz32AOB///sfAFtvvXUxm5QK2wE1FAOTyWTy3jfl6bnnnuPH\nP/5x8zVwNbH5ZkHCN998M5BeJRw3bpyLC/v++++BIN6vVKitrQXgoIMO4qWXXsp57+233wbC78gX\n3nvvPSC3XYsXLwZW/7v/4osvXHq7JbLccccd
rL322mvS1CbB5pWtG1999RUAZWVl7horH3PTTTeV\nlAq1bNkyFztrxZRNvZ43bx677LILEMaPLl68mBNOOAGAnXbaCcCtr127dnVqqe8qcFVVFRDETd5z\nzz0AfPnll6v8u4qKCqeyHXzwwQAMHz68SWLfvNKZn3jiCSBchHr06OEWbQsqfvLJJ4HgIdO1a1cg\nV5Y1bFJYgKRlDkFoSF1xxRVN3YXUrKoG0NixYwFc4COEk8CCQVsaw4YNY6uttgLCsSoFLKvJXF0N\n0b59e5eBZkG35gbZYostvMt6qaqqchXwrQ7ZOeecs1qfNX78+CZrVzH47LPPAHKMJ1tvfDOcLONu\n3Lhx7jVzHa+J4QTQt29f95rNfR+MJwhdi+auTGL06NFA8N3Y9WZo+BhYbi7Hn/3sZy6RKLqhBthy\nyy3demJJAYsWLXJrTdSA9J3PP/8cCBNN7rzzTgC+++47d43VqTriiCPc8/Cyyy4DwizhiRMnujn7\n4IMPArDLLrtw5JFHrnEb5cITQgghhEiJVwrUdtttl/MTQmnyxBNPBGDkyJEAfPTRR06BmjdvXt5n\nmRxpClTXrl3dbqRUXF5vvfUWZ511FhCm4Xbo0MEF2a+11lpFa1tzYDuLF1980Y2777KyMWfOHN54\n4w0gdN0lufAswPPII490ErsFmV9wwQXuun/+859AUE/IB0aMGOFcPrbrTTs21dXVQKA0+5ji31gs\n/T3KgAEDitCSVWPB+rZm7Lvvvuy9995r9JmWyPL5559zySWXALDPPvus0Wc2JYsWLeLWW2/Nec1C\nIDp37pyngC5cuNAFHNv9lpTIUyxMjbZnwSuvvMIf/vAHIHw+RomXoyjFkw6GDRvGvffeC+S76fr3\n7+9ckaY2RZM2Jk+eDMBdd90FwCmnnOKSCTp27AgEtffW1IUNUqCEEEIIIVLjlQLVGCorK4FcFSmq\nWMWx2KlvvvnGFWO0GCLfmTp1qlOejLPPPrukAqrTYFWdAS+DqJMw1Wz//fevN6Cxe/fuLo3aVKao\nemjn/Zm6WlVV5WJKxowZAwS7rmLEZVhpkQceeIAddtgBCOMO0mIqSHl5uStA6UvMTBqs3AiEKpyN\nnW9YzIspfl26dEk9j1auXAmEO/rf/va37rMtkcAn3n//fZfeboqSxdfW1ta6e/HCCy8EYNasWS6u\nzQqhmgJc7ODyFStWuFitv/3tbwBsuumm/OpXvwJajhfCEjMsOH7kyJGu+Odmm20GhOr9GWec0aD6\nbWNpyt3NN9/slLq0pyasCilQQgghhBApKTkFqrFY9ontKDKZDH/84x+B4u8qVoXtkB5++GH3mqWK\nm6++JRItEnrttdcWsSWNx3Y5SeqTzb377ruvQT+7xShYXMOAAQPc/LV0/4MPPrgoxWNt17tkyRKG\nDh26Wp9hKt2f/vQnIMhw+t3vfud+LxUs1vLZZ591r1kcm8VW+M7YsWNd7IfFyjRUhmLixIkua88K\nFRoWk+MbK1ascMqbxYAZFRUVHHTQQUBYQNJKpUB4NqMv83Lq1KkuzsyyzKZPn+48MS0FO9/Vnm/Z\nbNaVGfr3v/8NNKx8ZzIZV7rivPPOA2DPPfcE4Ntvv3XXmap14YUXNon63WINKKvcbemLG2200Wq7\nHgrFkiVLgHCBrqmpYdNNNwVwD69SCapOgz2YbrnlFiCoVJ0UHFkqWLr/X/7yF6DxQYoHHnggEKTf\nTpo0qXka10jsjLPoQ3N1K9//9a9/BUIjs0+fPiWTyBHFkgSipK36XGguvvhiIKweP3/+fOfOsoeJ\nrZVJZLPZvNR3G7sRI0Y0dXObBKsRBGHAf1KV7aR7zB66vqyz0Tbaoc+lfgB3Elb6JRoMbmMwffp0\nICx/Ej0I2dbWN998092f9sy0MghRrG7ZsGHDmsRIlgtPCCGEECIlLU6B+uCDD4Bw52VMnTrVBaP5\nSv/+/YGw
ai7A+eefD5T+2X8NYbssOx+wZ8+e3p0ltiqixTPrO/tuVZgiUFdXl1eM89prr3VB2IXA\nXJN21tvqFs2E/JMFSunMrShTpkzJ+Xe7du2cu91XLBnD3FQfffQRTz/9NBC6S2xdTHLlDRw4MM89\necghhwD+rkmnn366U9Uspd1K2MydO9cVU7T1pl27ds7Nc+ONNwJw8sknA6GaUSxMxYawGGqfPn1c\nEchOnToVpV1NTY8ePYAw7GH8+PHuWX7ccccBuUVATT1KKjgcV57Ky8tdpXwrb9FUZSqkQAkhhBBC\npMSrs/CaAgsUNwXKVJ0HH3zQm8DAOOa73WuvvYCwZH+/fv144IEHAH988s2BBaPabmvatGnuPCff\nsR1rNBbG0r7TYvEaAwYMcAqUpZ9/+eWXBd3xWx+s5EdNTQ3/+te/gMYnYVggfDxm47HHHuOYY45p\nqqYWhLlz57qzOm1sunXr1uRp0b6xcOFCd5K9rU8TJkwA/D2/sLq62ilvCxcuBEJ1N6piHH/88UBw\nVIjFLb777rtAeF5qseO8ysrKEovO2mu27tixJXPnznVlfbbcckt3vcWZmtLjexxVTU2NO8LFjkza\nZJNNgOCoKyvvY4lH0dIica6++moXQ9zUwfel5SdZBStXrnTBkhZhf8MNNwD+ZFXEqa6udjerGU5G\nnz59WrThBEHgvLkU7ByxUjGeAGfgrg7Lli0D4NNPPwVyK5EbVkm/0PPX6svYYjxmzBgnr19zzTX1\n/p3V8pozZ45btONByKV0Hpfx3Xff5blVzbXQkhkxYoQbLzuLzFfDyWjTpo2rPG1GnxlSEGb42rpb\nUVHBoEGDAFzGmwUsX3zxxUV1Vd54442unVFsLlpNLvu5KsxdaxsYM1J8o7Ky0o2F/UzC3M5RA8qy\nmh966CEgOPS7uU4+kAtPCCGEECIlLUqBuueee1zQ4EknnQTkypg+ctddd+Wl01pgajwQviXy6KOP\nUlVVBYTnHf5Q+P3vfw8k17yyavNWEblY51kNHz4cCFwgY8eOBWjwLDULui0rK6u3Mvthhx3WtI0s\nANZ3CIOnBw8eXKzmNDtTp04FgtpkNvd8d/tE2X777YEw8N9KabRr184pGtFElXPPPReA9957DwhL\nO4wYMcLdp8Xg0ksv5YQTTgDgiCOOAAJPham7cVV0VVhZnz//+c8A7LTTTpx55plN1dyCYfXpkhS0\nJ598EgjLPjQnUqCEEEIIIVLSIoLI3377bSBIj7bKwFZ8y3cFqk2bNnmxT3aOk08ngjcXv/71r7nu\nuusA3M8kn7+vWMHPWbNmudcaE0Q+cOBAlzyQFIhsapzttHzAYrXsZxK77bab+90U1Ntvvz3nGjv3\nqhSw6sYbbrih2+1bjJ6ds9kSsVPub731VqfOxMexJWKK1T777AME1a+tcKNPJ1jYemNrzaWXXgok\nFwdtiFNPPTWn8GgpMGHCBAYMGACE9yeE5VFeeeUVgIKUwpECJYQQQgiRkpKOgaqurgbC3XpdXZ0r\ngOa78tQQdqRLfZkDlmEYLyZmqZ0QfjdJxRft74YOHVr007yjsSWW5VVKmIAbjUWYMWNGzjVHH300\n8+fPz3ktk8k0mBnik/JkWNG+xhbv6969e+LrVVVVLrvQdywmJjq+tsa0ZKxo449+9COnRv0QsKNc\nhgwZAsDo0aO5//77ATj77LOL1q44lh1rWMzvpEmTnPJi43bWWWe5Y7LuuOOOArayabGivieeeGKO\n8gRBfJ4VSC1kEeaSNaAymQyHH344ALNnzwaCSVUqh9A2xKoOJrUb2c71scDA0aNHp/5/zjjjjNVo\n4Zpj1ak/++yzovz/TYUdVmqH/gL07t0byDWAG6rlEsf389UaixmX8SiBUjGeIKxWDWGAfLHumULw\n1FNPAWE15w4dOriaSj8ErGTDlVdeCQTB51aF38pWbLzxxsVpXAMccMAB7n
dzkVsJnzlz5vD4448n\n/l0pja0l1FiICwQGPgTudDtsuZDIhSeEEEIIkZKSVaAWLFjgKpQaY8eO9fZ8pvo4+eSTXYptY7nr\nrrvqfc/ky2jhxVNPPRWA3XffPedak6uLgVXdrqurc2nxlrpfShx66KFAsFO3cgyNxZSYXXfdFQhT\niy0RotSx3XwpFs40nnjiCfe7VSI3F3pLZOTIkUA4ZlF3pYUI1NTUAMUrrVEIzAswZswYBg4cCOCq\nWY8aNarooQ9xrEDm4MGDXcFTw9ZaCJ8Lpphbn3zG5p0Fykex4sN2bxYaKVBCCCGEECkpOQXK/J/R\ndOm///3vAPTq1asobVoT7r77blfwK17OAMKA5KT4JgsS3GqrrdxrRx11FADt27dv8rY2BZZ2+/DD\nD7vX7BiF5iq335zYLnzSpEk8+uijQONjmCwtvNTOhWsslshg+JQGviosMcPORoMw3sLXY6Gag1at\nWrljUewIH1tni1lgslAcc8wx7vy4u+++GwiKy1r8qS+YInbjjTe6AGs7u7Kqqsqp++eddx4QBsn7\njD0PTV2Klocxb4oV+i0WJVcHKn5YMMBHH30EQOfOnYvRJJECezBZVd0OHTo415Vvsvjq8s477wCh\ngXT//fc7N+r5558PBIHVXbp0AVquK8TcIBbUahlAdsC3z1jW3eWXXw4E9ZDs3K2WbDiYW3/atGlA\nME/NnWffhdVpK6XK5GuCbdo33HBDIKhaXgo1sSzE5cUXX3RjZ5uAUsDq5PXt2xfIDQWYOXMmANtu\nu23hGxah9Lb8QgghhBBFpmQUKEt7t2qj0ToQUqCE8A9zzZpiUezd4uqwePFiIDjt3tSZlupyhXCd\ntTHbf//93ThWVlYCPywXZhSrfv3MM8+40wOstIVoesxNF6/4f9NNNyUGlBcDKVBCCCGEECkpGQXq\n3nvvBXKL2Fk1VvP1brLJJgVvlxBCiJaPpdPvsMMOrlJ7nz59itmkFk3Xrl2B0MNkZV/mzJnjzTmx\nUqCEEEIIIVJScmUMjD322IMXXngBKK30aCGEEKWHFVCdM2dOkVvyw8COorGzbi2L1xf1CUrIhSeE\nEEII4Qty4QkhhBBCpEQGlBBCCCFESmRACSGEEEKkRAaUEEIIIURKZEAJIYQQQqREBpQQQgghREpk\nQAkhhBBCpEQGlBBCCCFESmRACSGEEEKkRAaUEEIIIURKZEAJIYQQQqREBpQQQgghREpkQAkhhBBC\npEQGlBBCCCFESmRACSGEEEKkRAaUEEIIIURKZEAJIYQQQqREBpQQQgghREpkQAkhhBBCpEQGlBBC\nCCFESmRAeczaa69d7CaIJqCurq7YTWhWamtri92EZiebzRa7CUKsEs3TwiIDSgghhBAiJRXFbsCq\nMIu6rKws8fWk9xrz9z7Rpk0bAKqrq3Nev/LKK/nkk08AOOOMMwCYOHEirVq1ynlt1KhRAJSXl549\nnLRjymazfPnllwBst912ALz55psAbLnlloVrXErqm2vV1dV0794dgO+++85dM23aNAB69OiR83fR\ncfRp/mYyGSB/nmUyGfdeFGuzvWc/11prLXeNT/1bFfG5ms1m+eqrr4BwDC+66CIArr766sI2LgVL\nly4F4Ec/+lHO65lMxo2D9bW2ttaNV1xpjI5jfXPDN7LZLCtXrgTCttp6au9Hf0bf86mPDd039p4p\n3y+//DLDhg0DwnXUvoPoffvLX/4SgHvvvbeZWt28xL+TpPFq6vWmLOu55meToDGTtqysLHGRM3yY\n+El06tQJgN122w2Av//970CwQFmbly1bBkDHjh1ZtGgRAIcccggAzzzzDJC8EPj+YIqOT/Rmvuqq\nqwC4//77Afjss8+AoI/xRd6XPsYXXxuzX/3qVzz11FMA1NTUALDOOuu4vh166KFAOH7ZbDbnd/Cj\nj3Yv3nPPPQD86U9/AmD69Om0bt067/
r6FrR3332XnXbaKeczo3PXV+Jry/Lly+nWrRsAVVVVAPz4\nxz8GYO7cuc7A8GkMAbbeemsgbPMLL7wAwC677OKuSVp3rR82jmVlZa6Pm222GQBffPFFcza90cTb\nav355z//yUcffQTA4MGDgWADG98ARO8/+w58MqDMmDVjeJ111nHvjR07FgjWHQjGydqctLmz76pt\n27ZAuG75RnTeGdF7siGxpbnW0+LPBCGEEEKIEsN7F55Zjh9++CEAHTp0AOp3A8RVANt5lJWVuc+q\nqPCr259++ikQyqrWzqiV/N577wGwYsUKJ72bAhBXZOJ/6zvx3e7ixYudOnPWWWflvAfh9+SbK8j+\nb5urp5xyCgBvvfWWa7MlBqxcuZLRo0cD4fj3798fgA033DBRmSl2H60t1113HRD2JWkcor/HlZt5\n8+ZxzDHHADg1IHqdj3M3SaifPXs2X3/9NRC22VzOZWVlXikWUebMmQMEagzkKk9GQ26PqCJj92dU\nefKh39ZWWzd/8YtfADBr1iw23nhjAM4880wgWWWKfo4P/YljzzBTnqy/X331lVOebM2B8N6NJyZV\nVFS450lUeRozZgwQqljFJEnJtnAXW2v/8pe/8OKLLwLh8+SKK64A4PDDD3f9Xnfddd3nNoX67c+M\nEEIIIYQoEbyPgbLdrVmfZnGvt956Lu5io402AoI4GbveLPRvvvnG/fvAAw8E4IEHHihQ6xuH+bEr\nKyuB3B247Qp69+4NwPvvv++us0Br+07q6upcv33cxSeRzWbzdhhPP/00xx9/PBDGaay33nru+qRg\nax9YsmQJANtvvz0An3/+ORC02RIFli9fDgTz0XZANo9/8pOfAHDHHXe4GCGf1NIpU6YAcMQRRwBh\n7EyfPn3cNVHVKR6Ia/3t168fM2bMAHBJEqWAKREWf7L99tszb948IIwfee211wDYZpttvJ2nHTt2\nBODjjz8GcscnrsRkMpk8RdzGdeLEiZx44okALFiwoECtbxyWrGH30bfffgsEMT+2Xp533nlAkIyz\n6aabAuTF8tXV1bn++zSOFr916623AuGz44MPPmCPPfYAcpWVo48+GoDzzz8fCJ+ZixYtokuXLjmf\n4Qvx5BNbXw888EBmz54NhGtKTU1NnofJ/m7lypVssMEGQNPPUy8NqLlz5wKw1VZbMXz4cCCQ6ACX\n9dK6dWsny9kXtWzZsryb3L7M2tpa92VHfxYreDVqNCTJxhD064ILLgDCzIjly5ez7777AvB///d/\nQK4LxbCbvba21quHcJykIPJ99tmH119/HcAFzCf1MRoYWCyDMTqOhx12GBBkvUTJZDJOTrd2VlZW\n5gW42jXrrbeey5bZYost3P9TjD5G+2fuKXOjT5gwAVi1O93abf3s1auXcyOZQVnMMWwM0Xlqm7LN\nN9/cGVPmGrBNTWVlpfsbHx68V155JQAjR45083P33XcHcscnHvietCmzRIhu3bq5/kbncNK9Wmjs\nuWHu5vXXXx8InhErVqwAcp8RFjJw3HHH5XxONADZxrFYczXqSjSDb/r06UBoFC9fvtz1y4zB6DPA\n2h1tf/w1X+5FM5IGDhwIBKEQhn0XtnEZMmSIS+C4/PLLgXA9bdu2rZuzNvajRo3inHPOWeM2Fv/O\nFkIIIYQoMbxUoKKWtjUvulOFwLqMq0wLFy50rh7bBZmC0bt3bxfoOGTIECBwlRSLJAUqrp7NmTOH\nXr16AeGur2PHjs79seGGG+Zcn/QZvqeHR9tuu4ONN97YBXq+//77QO4uyYcdvREdR6tRZa6RqDvE\n1FKrtTJ06FA3RydNmgSEga4VFRXccsstAJx++ulA8eR1u+/WXnttttlmGyCsJWO7v2j9ICOpfIi5\nozfbbDOn3Jirxfeq+9ls1qksf/3rX4EgwcH6beNkiR1RV5APu/mtttoKCNT9eGV8G4tsNpsYAhAv\nQ/
HYY48BgRvJlDdTC4rpBorei5tvvjkQqoV2b02ePJn58+cD4dxu1aoVP/3pTwEYP348EM7taH+K\nrSjac2v06NGu9t9pp50GhO2sqalxc8/W/uXLlye6Ju2aJFWqWETH0FytFuJi6+W6667r3JSWiNOx\nY0enOFnguyUJLFiwgH79+gFhv5vK7PHnSSSEEEIIUSJ4qUBFiae4N6S2RF8zFi5cCEDnzp3dZ8Ur\nfhcbU5fiwW+DBw92vnmzvp999ln23ntvIPxOopV1k0og+Ew0/ut///sfADvuuCN77rknEKoz0YBc\nX/sWL9Zm/+7SpQs33ngjgIuTiioupgAcddRRQBicDX6do2cK1H//+18g7F9UgYqOTTy278knnwTg\nhBNOcEqxBff6TjabdUGsffv2BQJ11O5ZUx3bt28P+D1PbUcfV/0ymYybiza2rVu3dnPwjTfeAHDJ\nOBUVFcyaNQsIC2n6giljdlqDxULV1dW5+EoLgP/+++9dvy3Y2ooZR78jn8YzXhU+Gmht6pkpbHV1\ndS4GLN6HJKXRF6w97dq1A8J1cdttt81TOsvKylx/rXzBpZdeCgSKqZ0Q0NTmjhQoIYQQQoiUeJme\nFd25JpVvj/87qdCZqTK2q48W37JYm6TjJwpFNN3bdrG263vppZeAYMdulrbFWOyxxx55WU3W7+iu\n1+eihJDbPuvHb37zGyD4Hm677Tb3PlDvUQTFxubZJptskvedm1L4/PPP56mF0Tlu425HpGy99dZO\nJbBsG8t2KjQ77rgjADNnzuRf//oXkJ9VV9+8s9dsZ3jZZZcBgZpqypOPRQoh//7JZrMudsayhLPZ\nrItDtJg9X2MOba2rrKx0O3qLC7K4vDZt2rh595///AcI0uIt08uy90zhuO2225zy5MN6Y2146623\nXHFIK9hq62h5ebmLdzKV6ttvv3Xzz16LZhL6sqaamjZmzBg3z+xZZvfiOuusk5OBDcnxhUnKU7H7\nB2FbV6xY4corvPrqq0BYeiH6zLA+rrXWWu7+tLi3gw8+GAiK9SZl5TcFXhpQDaVTRgP5koL6bEE2\nd4F9+ZbiCMU1nOJEg6LjZ9zV1dWx6667AnDzzTcDwQSIu4iSZEnfDI36KCsrc64Rq4zcqVMnV0vJ\nV8PJsPm2dOlSNw5WF+mJJ54AgjGLGwpJ5znZg80eUFA8w8mIVgo3t1t8LKL3YrRKvF1n34MtcNZP\n+1sfSeqjbWyim7prr70W8NdwMqza+IwZM5y78frrrwdgxIgRQDAPbYztARMt9WIPLTMajzzySPf5\nPtyfNgcffPBBd99Y+YbJkycDwTha36L3lj0TbNOT1J9i99FqWkG+sGD/jrpczViInqnakAur2P2D\n5HIhEydOBMKEk3POOcfNPevwem41AAANEUlEQVT3okWLXIkjW4OshmB0s9pUhpPh5+olhBBCCOEx\nXipQRpK1nGRJR91h5howl5els9o5OZBrkRZrB5xUcDB+7lvbtm1dMGO0nfWd25SkxEWL4/lEtFCf\npYXbzmHvvff2uvhnFNsVZbNZ5zZ4/vnnAdy/V65c6XY+SdWp42fGWVIBhLvIhx56KK/IXyGwtlRW\nVtKjRw8gf6eayWTy1ODovL7rrruA0C3SqVMn97e+n99o7Vu2bBlPP/00ELazTZs2DBo0KOc1X3n2\n2WeB4NxFc5Xba8auu+7q+rPffvsBwf1pqeKm5p900klA8tpZzCKM9v9uu+227jVLeLCK99FCxOau\nLC8vdyU2hg4dCoR9bN26dZ6LK3p2XiG55JJLgKC4rqXxm1odPWPT2hY9xcPabutQ9DkaL55qf1MM\nunfvDgQJRbb2nHrqqUCoEl588cWJLkx7P35OoIUQ2HXQdOVwpEAJIYQQQqTEyzIG8TNwIL/cfJKF\nnM1m+d3vfgeEaatmoS5fvjwn2BqKG7cQDcC1nZCliZuqcfTRR3Pf
ffcBJO4SSjkWytqayWQ4+eST\nAdwOf/Lkyc7f72v7DTti58wzz3SxIRZbEVUK4wXcokHWNkdNsdpggw1cPJy9ZnFihcZ2cxtssIE7\nfsVSopNUtGhshs1jK+JoJUVuuukmLrzwwpzP9zUWyvpVVVXl4vJsbDp06ODiiXxUeaNEy2HE42aS\nysHYeCxZssSd0WgJBXafVldXO+XDp/GrqalxRW0tDiaqjNrv5557LgDDhg1zfbQ4mujRNPHnRrEV\ntrKyMlcuY4cddgDCY3m6d+/u1CkL8M9kMi7xoXPnzkAYkB2NCfKhBI6thTU1Ndx5550ArpSPxecN\nHjzYnTP6t7/9DYB58+bVWyD22GOP5cEHHwRCFaup+uilARUl3rzozR7PGshkMm6CfP/990AgWUP4\n5Uc/M3rOUTH5+c9/DoQLk7myxo0b59w20ckdd93ZvysqKvIWw+hhoD4QD6Zevny5O+jRDJCPP/64\n3kORfTmnKQlrv7mRo5l30TGCoF82zjfccEPOz9raWjd+Np/feeednDlcDGyRNperuUqeeuopd820\nadOAIPjcAnftHjQjY/bs2S7DJsmg9IH42vLJJ5+4h7It1FdccYULwI67nH2ep3FjNxokHnf1nH32\n2Tz88MNAkJEH4YPZx0Brw4x9C543Y76srMy5MO1neXk5I0eOzHnNsvY++eQTtz750jfI3zRHz4aL\nr/dJ9b322WcfIEjcSQqX8KGv0cOAIXRT1tbWuv6Ye++ZZ55xLj/b4Nnm9pBDDslLHGuq/vmzbRBC\nCCGEKBG8itSNB9MmKUTR9+JqyyuvvOLcJ6bqWF2PKI1J6WwuzKq2//uJJ57gH//4R841pkRccskl\nDQZTW/+j1/hQy6Mh7Lu3Xfwf//hHJ9vuvPPOOdf4jH3PPXv2BIJg1biEHHWxxhW1TCbjgnNvvfVW\nIPdMMtsBWxmBuOrY3FjbbWxOOukkpk6dmnONKWIrV650rkZzDfzkJz/h66+/zvksU6A6duyYeAK8\nT1i7bExOPPFENwYWnDp06NB61V3f7r+kAOh4eYIo5j4ZP368m7u2lvqmPEUTUiDo19Zbbw2EypNV\npd5///256qqrgNx15uKLLwbCoHNT3c444wweeeQRoHh9NAVmr732AoJ2x8fMxigpaaht27ZuHltA\ntdV0GzhwIGPHjgVC91Yx78UkN7IpT1HvhSUFWHJYTU2NW4/GjRsHwAEHHAA0b7V1/59UQgghhBCe\n4ZUCFd+VRq3sJGXFXjN/989+9jPnrzb/Z33B5vW919zE1ZVTTjmFY489Fgh3ALbrt50HNBxQn0Q8\nwLyYZDKZvNIL1q7//Oc/rh8Wr9BQPIwvO3trxzvvvAPkFr+M7oQhN83f+v/ss8/yhz/8Iee16O7L\ndk/RMgKFxPpnuz/bkUNwBhyEcYbz5s1zcQcWBwZw+eWXA4HKCMlKafz/KyZRVTses/baa6+560xl\na9u2rRftbgzxfkH+PI3GlVrR0JUrV3LBBRcAuXPcPtMH5TCuxEfPZrT+3nLLLUCgKMY9ENFUflP/\nLR715ZdfdskPVgC20GNubbMyEtEyJ6ZQm8IULbRs/ezevbtTTa2avH1XVVVVOdW8obj3YnTc7Pdo\ncV6AqVOncsIJJwBhYkomk3EFjK1MRSHO+ZMCJYQQQgiREq8UqDitWrVqMNXW0v8PP/xwIPDvDh8+\nHAjTv5PwwV9vbVi6dGleKredOt23b1+3SzRVav3113eWeLy0QTTl1vAhvTp61EdcXfz3v//tfjcl\nrhSIZxMuW7bM7fJMrTA19JBDDnEZTNHSBXHFMRorZHEXhVae6iO6s7WyBPbvrl27Jp4dN2DAAABG\njRoFhPdrbW2tF/MyTpKi8vbbb+ddZ/E0pRSrZ+OyqjXVfrd4p3bt2rkzDOP99UF9gvx7MbreGFFF\n3hSNaFFbe79Dhw4ArhTAhAkT
3FmBjz/+uPuMQhLvX2VlpWuvHVE2adIkIPecOHsuDh8+3KnGtjZZ\nLFS7du0KHl/ZGMrLy/O8FdaHww8/3MW22bOzffv2LpYrKSPWaOpnv5cGVFIgWVIF1T333BMIXAgQ\nSOvnn38+0PDhw8XE2mGVbzt37pyXYnrooYe6f9vNYG6Duro6V63VDgOdOXMmEJwBtWDBgkJ0IxVJ\nrhGrn7No0SI3prYY+DJWDWFttgOAzz77bLcwt2nTBshPw4XQDVJZWekWMbvOjIpx48Z5YzhFF+/6\nxqW+B6nVrrJ++lBnpiGS5qmdz1hXV+fG7rTTTitOA1eDhmo8Rasy2zVWX8dceHvttZdzzzb0+cUk\nybCrLzkhmpgUDSuIf8Zvf/tbINjg2bmqxSL+DCwrK8vrg9Uoi46HlQ+57777XNC4uf/s8OuddtrJ\nbfx8IGme2r1oIQFLlixx/TS36uTJk11trPhnNWeJFP+3UEIIIYQQnuGlApUUKB6vArt06VKXymhy\n84cffuiCXpM+yyfMfXPWWWfRrVs3IJQeowqG7RJNWaqrq2P69OlAUJAw+t4dd9xRoNanI+oasf6Y\nHF5RUeGUp/jYlQL9+/cHcgvSxRWltdZaK89lVVFR4RQNm79WLdgnGuOmqi+t3QJcoy5m34kXknzr\nrbeA4Hsw12VUSfO9bEichtqczWa5/fbbgbDa+plnnpl3RqOtT76SFDzc0PhE57jN0b59+wKh29kH\noqpa/GxNS97o168fjz32GACzZs0CYMaMGTnuSghPS/CV6Dy1JLE33ngDCNbXTTbZBAhONQDo1q1b\nXnhIIVytUqCEEEIIIVLipQKVhFmX1dXVQBC4ajEWliYdDRz3MU06ilnH1113HRdddBEQqlKWmllZ\nWenOcrK0zaqqKqdcHHzwwQBcffXVAO5YCV9I2unaDuj6668Hgt2CnVOUpDz6Nm5xTD168sknOeKI\nI4CwuFs0Xi9eTHKjjTZyiqGVLCiVPseJ9i+KpV0bFsdnR934RvR7NzXx9ddfB5KLTZYSDakyUYVw\n/vz5QBijOXz4cCZMmAAE8UAQJEUA7vgTn2mojE2UeMCyJXbEjwDxgYqKiryYYGvnvffeyxZbbAHg\nCvVmMhmX0GJFRC0u0WfV39RAez7aszCbzbpjpeyos6QyP4WgZAwoW8CGDRsGBEaGGRJmXED+jeAr\n0UG2QDgjGtRnUqVVyK2rq3OLeyn10bAb385C69mzJ71798653udzxOJE22kV5S3rzCTn559/3t3w\n11xzDRBI7nbOXSlkczUW+z5qamqYMmUK4E+2VmOIt9UqWn/xxRcuQ6uh0wFKgfiJD9E6SvHK96++\n+qpbexYvXlzglq4+9W1GogHFSQkStqb6uLZG2xTvlz0f1157bbeRtoOGn3vuOVdzzjYxvhLtl62V\nlpVufWzdurXrY/Qw63i4TyFoOSu3EEIIIUSBKMuWyPbQKj7beWl1dXVOzbCKyMU+rV6smngweXl5\neV66aqkrMvGyFNHzx0rVTddYWkr/rB8WTH3bbbe5HXEhg1SbE3NTJZ2j1qNHDyBUvkuVpMebzc1S\ncGOtiiQ1MUlhK2XiZzeec845LmwnWoG9KG0ryv8qhBBCCFHClIwCJYQQQgjhC1KghBBCCCFSIgNK\nCNHkSNgWpYLmqlhd5MITQgghhEiJFCghhBBCiJTIgBJCCCGESIkMKCGEEEKIlMiAEkIIIYRIiQwo\nIYQQQoiUyIASQgghhEiJDCghhBBCiJTIgBJCCCGESIkMKCGEEEKIlMiAEkIIIYRIiQwoIYQQQoiU\nyIASQgghhEiJDCghhBBCiJTIgBJCCCGESIkMKCGEEEKIlMiAEkIIIYRIiQwoIYQQQoiUyIASQggh\nhEiJDCghhBBCiJTIgBJCCCGESIkMKCGEEEKIlMiAEkIIIYRIiQwoIYQQQoiUyIASQgghhEiJDC
gh\nhBBCiJT8P5edMAgqaEAoAAAAAElFTkSuQmCC\n",
"text/plain": [
"<matplotlib.figure.Figure at 0x7fcde37b08d0>"
]
},
"metadata": {
"tags": []
}
}
]
},
{
"metadata": {
"id": "XQuq1rmNXWVv",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 1-1. Convolutional Autoencoder (CAE)\n",
"\n",
"![CNN architecture](https://i.imgur.com/q00uX4g.png)\n",
"\n",
"Source: http://elidavid.com/pubs/deeppainter.pdf\n",
"\n",
"The figure above shows a traditional CNN architecture.\n",
"\n",
"The paper above uses a technique called unpooling to adapt this CNN structure to an autoencoder.\n",
"\n",
"![Unpooling](https://i.imgur.com/p3l3j6A.png)\n",
"\n",
"Source: http://elidavid.com/pubs/deeppainter.pdf\n",
"\n",
"My understanding is that max pooling remembers the position of each maximum, and unpooling restores the compressed values back to those positions, expanding the dimensions again.\n",
"\n",
"![Deconvolution](https://i.imgur.com/KgdWKFB.png)\n",
"\n",
"Source: https://arxiv.org/pdf/1505.04366.pdf\n",
"\n",
"While researching deconvolution, I got a rough sense that it works as shown above.\n",
"\n",
"[what-are-deconvolutional-layers](https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers)\n",
"\n",
"Reading through the link above, I came to understand that deconvolutional layers essentially work in reverse: padding is added so that the spatial dimensions grow.\n",
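"\n",
"The unpooling idea above can be sketched in a few lines of NumPy. This is a minimal illustration of the concept (remember each max position during pooling, then restore the values there), not the implementation from the paper:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def max_pool_with_indices(x, k=2):\n",
"    # k x k max pooling that also records where each maximum came from\n",
"    h, w = x.shape\n",
"    pooled = np.zeros((h // k, w // k))\n",
"    idx = np.zeros((h // k, w // k, 2), dtype=int)\n",
"    for i in range(h // k):\n",
"        for j in range(w // k):\n",
"            win = x[i*k:(i+1)*k, j*k:(j+1)*k]\n",
"            r, c = np.unravel_index(np.argmax(win), win.shape)\n",
"            pooled[i, j] = win[r, c]\n",
"            idx[i, j] = (i*k + r, j*k + c)\n",
"    return pooled, idx\n",
"\n",
"def unpool(pooled, idx, shape):\n",
"    # place each pooled value back at its remembered position; zeros elsewhere\n",
"    out = np.zeros(shape)\n",
"    for i in range(pooled.shape[0]):\n",
"        for j in range(pooled.shape[1]):\n",
"            r, c = idx[i, j]\n",
"            out[r, c] = pooled[i, j]\n",
"    return out\n",
"\n",
"x = np.array([[1., 3., 2., 1.],\n",
"              [4., 2., 0., 5.],\n",
"              [0., 1., 7., 2.],\n",
"              [2., 0., 1., 3.]])\n",
"p, idx = max_pool_with_indices(x)\n",
"u = unpool(p, idx, x.shape)  # 4x4 again, with each maximum restored in place\n",
"```\n",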
"\n",
"![Deconvolutional autoencoder](https://i.imgur.com/Jlva4gy.png)\n",
"\n",
"Source: http://elidavid.com/pubs/deeppainter.pdf\n",
"\n",
"Using unpooling and deconvolution, it becomes possible to reconstruct the original image, just as an autoencoder does.\n",
"\n",
"![Segmentation results](https://i.imgur.com/nPnBwr6.png)\n",
"\n",
"Source: https://arxiv.org/pdf/1505.04366.pdf\n",
"\n",
"As the paper above shows, this can also capture object features more distinctly.\n",
"\n",
"![Comparison with previous method](https://i.imgur.com/awAjYXA.png)\n",
"\n",
"Source: https://arxiv.org/pdf/1505.04366.pdf\n",
"\n",
"You can see that it captures outlines and other details more clearly than the previous method.\n",
"\n",
"Some applications of convolutional autoencoders:\n",
"\n",
"https://github.com/alexjc/neural-enhance#1-examples--usage\n",
"\n",
"Turning low-resolution photos into high-resolution ones, like something out of a CSI episode!\n",
"\n",
"https://github.com/richzhang/colorization\n",
"\n",
"Colorizing black-and-white photos!"
]
},
{
"metadata": {
"id": "2FjWRZMuBCxt",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 1-2. Types of Autoencoders\n",
"\n",
"VAE (Variational Autoencoder): https://www.slideshare.net/ssuser06e0c5/variational-autoencoder-76552518, http://jaejunyoo.blogspot.com/2017/04/auto-encoding-variational-bayes-vae-1.html, https://arxiv.org/pdf/1606.05908.pdf\n",
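"\n",
"The key idea in the VAE, the reparameterization trick, can be sketched in NumPy. The encoder outputs below are made-up numbers standing in for a hypothetical trained encoder; the point is that the sampling step is rewritten so all randomness lives in an external noise variable:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"\n",
"# pretend encoder outputs for a batch of 2 samples, latent dim 3 (made-up numbers)\n",
"mu = np.array([[0.0, 1.0, -1.0], [0.5, 0.0, 2.0]])\n",
"log_var = np.array([[0.0, -2.0, 1.0], [0.0, 0.0, -1.0]])\n",
"\n",
"# z = mu + sigma * eps with eps ~ N(0, 1):\n",
"# gradients can flow through mu and log_var because eps carries all the randomness\n",
"eps = rng.standard_normal(mu.shape)\n",
"z = mu + np.exp(0.5 * log_var) * eps\n",
"\n",
"# KL divergence between N(mu, sigma^2) and N(0, 1), the VAE's regularization term\n",
"kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1)\n",
"```\n",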
"\n",
"CVAE (Conditional Variational Autoencoder): https://wiseodd.github.io/techblog/2016/12/17/conditional-vae/, https://arxiv.org/pdf/1406.5298.pdf"
]
},
{
"metadata": {
"id": "qRHrWg9WrZgs",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 2. GAN (Generative Adversarial Network)\n",
"\n",
"Notes from *3-Minute Deep Learning* (3분 딥러닝)\n",
"\n",
"- Learns how to generate outputs by pitting two adversarial neural networks against each other\n",
"\n",
"- A common GAN analogy: a counterfeiter (the generator) versus the police (the discriminator). The counterfeiter tries as hard as possible to fool the police, while the police try as hard as possible to detect the counterfeit bills.\n",
"\n",
"- Through this competition of forging and detecting, both sides improve, until eventually the counterfeiter can produce bills almost indistinguishable from real ones.\n",
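"\n",
"The original GAN paper formalizes this competition as a minimax objective (formula added here for reference):\n",
"\n",
"$$\\min_G \\max_D V(D, G) = \\mathbb{E}_{x \\sim p_{data}(x)}[\\log D(x)] + \\mathbb{E}_{z \\sim p_z(z)}[\\log(1 - D(G(z)))]$$\n",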
"\n",
"![GAN structure](https://i.imgur.com/WI9qPYr.png)\n",
"\n",
"The basic structure of a GAN\n",
"\n",
"- The discriminator is given real images and trained to judge that they are real.\n",
"\n",
"- Then the generator creates an arbitrary image from noise, and the same discriminator judges whether that image is real.\n",
"\n",
"- The essence of a GAN: the generator is trained to fool the discriminator into seeing its images as real, while the discriminator is trained to classify the generator's images as fake as reliably as possible"
]
},
{
"metadata": {
"id": "_lP9S9rRiOYt",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 65
},
{
"item_id": 103
}
],
"base_uri": "https://localhost:8080/",
"height": 1802
},
"outputId": "ac1313b5-eed6-43fe-ef5b-8c3097d18a5a",
"executionInfo": {
"status": "ok",
"timestamp": 1521947588061,
"user_tz": -540,
"elapsed": 314828,
"user": {
"displayName": "sj jin",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
"userId": "107618035827318999649"
}
}
},
"cell_type": "code",
"source": [
"# An implementation of the Generative Adversarial Network (GAN),\n",
"# one of the most talked-about unsupervised learning methods of 2016.\n",
"# https://arxiv.org/abs/1406.2661\n",
"import tensorflow as tf\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"mnist = input_data.read_data_sets(\"./mnist/data/\", one_hot=True)\n",
"\n",
"#########\n",
"# Options\n",
"######\n",
"total_epoch = 100\n",
"batch_size = 100\n",
"learning_rate = 0.0002\n",
"# Network layer options\n",
"n_hidden = 256\n",
"n_input = 28 * 28\n",
"n_noise = 128  # size of the noise vector fed to the generator\n",
"\n",
"#########\n",
"# Build the model\n",
"######\n",
"# GAN is also unsupervised, so like the autoencoder it does not use Y.\n",
"X = tf.placeholder(tf.float32, [None, n_input])\n",
"# Noise Z is used as the input.\n",
"Z = tf.placeholder(tf.float32, [None, n_noise])\n",
"\n",
"# Variables for the generator network.\n",
"G_W1 = tf.Variable(tf.random_normal([n_noise, n_hidden], stddev=0.01))\n",
"G_b1 = tf.Variable(tf.zeros([n_hidden]))\n",
"G_W2 = tf.Variable(tf.random_normal([n_hidden, n_input], stddev=0.01))\n",
"G_b2 = tf.Variable(tf.zeros([n_input]))\n",
"\n",
"# Variables for the discriminator network.\n",
"D_W1 = tf.Variable(tf.random_normal([n_input, n_hidden], stddev=0.01))\n",
"D_b1 = tf.Variable(tf.zeros([n_hidden]))\n",
"# The discriminator's final output is a single scalar judging how close the input is to real.\n",
"D_W2 = tf.Variable(tf.random_normal([n_hidden, 1], stddev=0.01))\n",
"D_b2 = tf.Variable(tf.zeros([1]))\n",
"\n",
"\n",
"# Build the generator (G) network.\n",
"def generator(noise_z):\n",
"    hidden = tf.nn.relu(\n",
"        tf.matmul(noise_z, G_W1) + G_b1)\n",
"    output = tf.nn.sigmoid(\n",
"        tf.matmul(hidden, G_W2) + G_b2)\n",
"\n",
"    return output\n",
"\n",
"\n",
"# Build the discriminator (D) network.\n",
"def discriminator(inputs):\n",
"    hidden = tf.nn.relu(\n",
"        tf.matmul(inputs, D_W1) + D_b1)\n",
"    output = tf.nn.sigmoid(\n",
"        tf.matmul(hidden, D_W2) + D_b2)\n",
"\n",
"    return output\n",
"\n",
"\n",
"# Generate random noise (Z).\n",
"def get_noise(batch_size, n_noise):\n",
"    return np.random.normal(size=(batch_size, n_noise))\n",
"\n",
"\n",
"# Generate a random image from the noise.\n",
"G = generator(Z)\n",
"# The discriminator's verdict on the generated image.\n",
"D_gene = discriminator(G)\n",
"# The discriminator's verdict on a real image.\n",
"D_real = discriminator(X)\n",
"\n",
"# According to the paper, optimizing a GAN means maximizing loss_G and loss_D.\n",
"# Since loss_D and loss_G are coupled, the two losses will not always rise together:\n",
"# they compete, so for loss_D to rise, loss_G must fall, and vice versa.\n",
"# In the expression below, maximizing loss_D pushes D_gene down,\n",
"# because the discriminator is trained to output a high value for real images: tf.log(D_real)\n",
"# and a high value of tf.log(1 - D_gene) for fake images.\n",
"# In other words, the discriminator learns to judge the generator's images as fake.\n",
"loss_D = tf.reduce_mean(tf.log(D_real) + tf.log(1 - D_gene))\n",
"# Conversely, maximizing loss_G pushes D_gene up: the generator is trained\n",
"# so that the discriminator judges its fake images as real as often as possible.\n",
"# The paper has the generator minimize the same expression as loss_D,\n",
"# but since that amounts to maximizing D_gene, we can use the form below.\n",
"loss_G = tf.reduce_mean(tf.log(D_gene))\n",
"\n",
"# When optimizing loss_D, update only the discriminator's variables;\n",
"# when optimizing loss_G, update only the generator's variables.\n",
"D_var_list = [D_W1, D_b1, D_W2, D_b2]\n",
"G_var_list = [G_W1, G_b1, G_W2, G_b2]\n",
"\n",
"# The paper's equations maximize these losses, but since we use a minimizing optimizer,\n",
"# we negate loss_D and loss_G.\n",
"train_D = tf.train.AdamOptimizer(learning_rate).minimize(-loss_D,\n",
"                                                         var_list=D_var_list)\n",
"train_G = tf.train.AdamOptimizer(learning_rate).minimize(-loss_G,\n",
"                                                         var_list=G_var_list)\n",
"\n",
"#########\n",
"# Train the model\n",
"######\n",
"sess = tf.Session()\n",
"sess.run(tf.global_variables_initializer())\n",
"\n",
"total_batch = int(mnist.train.num_examples/batch_size)\n",
"loss_val_D, loss_val_G = 0, 0\n",
"\n",
"for epoch in range(total_epoch):\n",
"    for i in range(total_batch):\n",
"        batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n",
"        noise = get_noise(batch_size, n_noise)\n",
"\n",
"        # Train the discriminator and the generator separately.\n",
"        _, loss_val_D = sess.run([train_D, loss_D],\n",
"                                 feed_dict={X: batch_xs, Z: noise})\n",
"        _, loss_val_G = sess.run([train_G, loss_G],\n",
"                                 feed_dict={Z: noise})\n",
"\n",
"    print('Epoch:', '%04d' % epoch,\n",
"          'D loss: {:.4}'.format(loss_val_D),\n",
"          'G loss: {:.4}'.format(loss_val_G))\n",
"\n",
"    #########\n",
"    # Periodically generate and save images to watch training progress\n",
"    ######\n",
"    if epoch == 0 or (epoch + 1) % 10 == 0:\n",
"        sample_size = 10\n",
"        noise = get_noise(sample_size, n_noise)\n",
"        samples = sess.run(G, feed_dict={Z: noise})\n",
"\n",
"        fig, ax = plt.subplots(1, sample_size, figsize=(sample_size, 1))\n",
"\n",
"        for i in range(sample_size):\n",
"            ax[i].set_axis_off()\n",
"            ax[i].imshow(np.reshape(samples[i], (28, 28)))\n",
"\n",
"        plt.savefig('drive/sample/{}.png'.format(str(epoch).zfill(3)), bbox_inches='tight')\n",
"        plt.close(fig)\n",
"\n",
"print('Optimization finished!')"
],
"execution_count": 10,
"outputs": [
{
"output_type": "stream",
"text": [
"Extracting ./mnist/data/train-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/train-labels-idx1-ubyte.gz\n",
"Extracting ./mnist/data/t10k-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/t10k-labels-idx1-ubyte.gz\n",
"Epoch: 0000 D loss: -0.5558 G loss: -2.002\n",
"Epoch: 0001 D loss: -0.5231 G loss: -2.208\n",
"Epoch: 0002 D loss: -0.1094 G loss: -3.621\n",
"Epoch: 0003 D loss: -0.4748 G loss: -1.641\n",
"Epoch: 0004 D loss: -0.2574 G loss: -2.2\n",
"Epoch: 0005 D loss: -0.4035 G loss: -2.358\n",
"Epoch: 0006 D loss: -0.2761 G loss: -2.291\n",
"Epoch: 0007 D loss: -0.2453 G loss: -2.965\n",
"Epoch: 0008 D loss: -0.5022 G loss: -2.252\n",
"Epoch: 0009 D loss: -0.3237 G loss: -2.639\n",
"Epoch: 0010 D loss: -0.4251 G loss: -1.969\n",
"Epoch: 0011 D loss: -0.5817 G loss: -2.035\n",
"Epoch: 0012 D loss: -0.4711 G loss: -2.031\n",
"Epoch: 0013 D loss: -0.4874 G loss: -2.197\n",
"Epoch: 0014 D loss: -0.4399 G loss: -2.377\n",
"Epoch: 0015 D loss: -0.5376 G loss: -2.084\n",
"Epoch: 0016 D loss: -0.4366 G loss: -2.166\n",
"Epoch: 0017 D loss: -0.5343 G loss: -2.478\n",
"Epoch: 0018 D loss: -0.5699 G loss: -2.086\n",
"Epoch: 0019 D loss: -0.2971 G loss: -2.599\n",
"Epoch: 0020 D loss: -0.2664 G loss: -2.784\n",
"Epoch: 0021 D loss: -0.4597 G loss: -2.497\n",
"Epoch: 0022 D loss: -0.3805 G loss: -2.302\n",
"Epoch: 0023 D loss: -0.4185 G loss: -2.698\n",
"Epoch: 0024 D loss: -0.4464 G loss: -2.338\n",
"Epoch: 0025 D loss: -0.449 G loss: -2.335\n",
"Epoch: 0026 D loss: -0.4199 G loss: -2.6\n",
"Epoch: 0027 D loss: -0.4857 G loss: -2.657\n",
"Epoch: 0028 D loss: -0.6752 G loss: -2.176\n",
"Epoch: 0029 D loss: -0.6672 G loss: -2.373\n",
"Epoch: 0030 D loss: -0.5185 G loss: -2.313\n",
"Epoch: 0031 D loss: -0.5914 G loss: -2.079\n",
"Epoch: 0032 D loss: -0.5496 G loss: -2.196\n",
"Epoch: 0033 D loss: -0.5406 G loss: -2.078\n",
"Epoch: 0034 D loss: -0.6039 G loss: -2.458\n",
"Epoch: 0035 D loss: -0.73 G loss: -1.967\n",
"Epoch: 0036 D loss: -0.4596 G loss: -2.128\n",
"Epoch: 0037 D loss: -0.5531 G loss: -2.039\n",
"Epoch: 0038 D loss: -0.4586 G loss: -2.249\n",
"Epoch: 0039 D loss: -0.4681 G loss: -2.406\n",
"Epoch: 0040 D loss: -0.5894 G loss: -2.022\n",
"Epoch: 0041 D loss: -0.6075 G loss: -1.999\n",
"Epoch: 0042 D loss: -0.5924 G loss: -2.142\n",
"Epoch: 0043 D loss: -0.7234 G loss: -2.189\n",
"Epoch: 0044 D loss: -0.628 G loss: -2.123\n",
"Epoch: 0045 D loss: -0.6904 G loss: -1.822\n",
"Epoch: 0046 D loss: -0.7167 G loss: -1.722\n",
"Epoch: 0047 D loss: -0.7571 G loss: -1.678\n",
"Epoch: 0048 D loss: -0.6259 G loss: -1.986\n",
"Epoch: 0049 D loss: -0.7109 G loss: -1.872\n",
"Epoch: 0050 D loss: -0.875 G loss: -2.081\n",
"Epoch: 0051 D loss: -0.6315 G loss: -2.048\n",
"Epoch: 0052 D loss: -0.9139 G loss: -2.174\n",
"Epoch: 0053 D loss: -0.7971 G loss: -2.105\n",
"Epoch: 0054 D loss: -0.8814 G loss: -1.731\n",
"Epoch: 0055 D loss: -0.8196 G loss: -1.789\n",
"Epoch: 0056 D loss: -0.5786 G loss: -1.973\n",
"Epoch: 0057 D loss: -0.7821 G loss: -1.75\n",
"Epoch: 0058 D loss: -0.7066 G loss: -2.036\n",
"Epoch: 0059 D loss: -0.7845 G loss: -2.11\n",
"Epoch: 0060 D loss: -0.7671 G loss: -1.958\n",
"Epoch: 0061 D loss: -0.7667 G loss: -1.896\n",
"Epoch: 0062 D loss: -0.8751 G loss: -1.837\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"Epoch: 0063 D loss: -0.8811 G loss: -1.76\n",
"Epoch: 0064 D loss: -0.8577 G loss: -1.619\n",
"Epoch: 0065 D loss: -0.8113 G loss: -1.961\n",
"Epoch: 0066 D loss: -0.8002 G loss: -1.749\n",
"Epoch: 0067 D loss: -0.6915 G loss: -1.787\n",
"Epoch: 0068 D loss: -0.7332 G loss: -1.863\n",
"Epoch: 0069 D loss: -0.7654 G loss: -1.899\n",
"Epoch: 0070 D loss: -0.7906 G loss: -1.975\n",
"Epoch: 0071 D loss: -0.7264 G loss: -1.835\n",
"Epoch: 0072 D loss: -0.6903 G loss: -1.975\n",
"Epoch: 0073 D loss: -0.8463 G loss: -1.705\n",
"Epoch: 0074 D loss: -0.8756 G loss: -1.732\n",
"Epoch: 0075 D loss: -0.7775 G loss: -1.792\n",
"Epoch: 0076 D loss: -0.8721 G loss: -1.79\n",
"Epoch: 0077 D loss: -0.6937 G loss: -2.054\n",
"Epoch: 0078 D loss: -0.941 G loss: -1.997\n",
"Epoch: 0079 D loss: -0.8532 G loss: -1.897\n",
"Epoch: 0080 D loss: -0.6867 G loss: -1.87\n",
"Epoch: 0081 D loss: -0.8426 G loss: -1.668\n",
"Epoch: 0082 D loss: -0.7468 G loss: -1.869\n",
"Epoch: 0083 D loss: -0.7337 G loss: -1.86\n",
"Epoch: 0084 D loss: -0.7979 G loss: -1.863\n",
"Epoch: 0085 D loss: -0.8709 G loss: -1.823\n",
"Epoch: 0086 D loss: -0.6681 G loss: -2.123\n",
"Epoch: 0087 D loss: -0.6939 G loss: -1.95\n",
"Epoch: 0088 D loss: -0.6603 G loss: -1.803\n",
"Epoch: 0089 D loss: -0.6357 G loss: -1.865\n",
"Epoch: 0090 D loss: -0.8916 G loss: -1.982\n",
"Epoch: 0091 D loss: -0.7315 G loss: -1.798\n",
"Epoch: 0092 D loss: -0.7277 G loss: -1.892\n",
"Epoch: 0093 D loss: -0.6864 G loss: -1.987\n",
"Epoch: 0094 D loss: -0.7388 G loss: -1.727\n",
"Epoch: 0095 D loss: -0.5766 G loss: -2.086\n",
"Epoch: 0096 D loss: -0.7962 G loss: -2.141\n",
"Epoch: 0097 D loss: -0.7953 G loss: -2.072\n",
"Epoch: 0098 D loss: -0.7547 G loss: -2.063\n",
"Epoch: 0099 D loss: -0.8343 G loss: -1.86\n",
"최적화 완료!\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "qE43q-rYJfST",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"결과값\n",
"\n",
"![대체 텍스트](https://i.imgur.com/gtCyHOW.gif)\n",
"\n",
"![대체 텍스트](https://i.imgur.com/fsBGRqb.png)"
]
},
{
"metadata": {
"id": "_88Sx706JgHS",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 66
},
{
"item_id": 103
}
],
"base_uri": "https://localhost:8080/",
"height": 1802
},
"outputId": "c4576bd5-8732-4d0b-b5be-20e8114e0406",
"executionInfo": {
"status": "ok",
"timestamp": 1521948057664,
"user_tz": -540,
"elapsed": 354081,
"user": {
"displayName": "sj jin",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
"userId": "107618035827318999649"
}
}
},
"cell_type": "code",
"source": [
"# GAN 모델을 이용해 단순히 랜덤한 숫자를 생성하는 아닌,\n",
"# 원하는 손글씨 숫자를 생성하는 모델을 만들어봅니다.\n",
"# 이런 방식으로 흑백 사진을 컬러로 만든다든가, 또는 선화를 채색한다든가 하는 응용이 가능합니다.\n",
"import tensorflow as tf\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"mnist = input_data.read_data_sets(\"./mnist/data/\", one_hot=True)\n",
"\n",
"tf.reset_default_graph() \n",
"\n",
"#########\n",
"# 옵션 설정\n",
"######\n",
"total_epoch = 100\n",
"batch_size = 100\n",
"n_hidden = 256\n",
"n_input = 28 * 28\n",
"n_noise = 128\n",
"n_class = 10\n",
"\n",
"#########\n",
"# 신경망 모델 구성\n",
"######\n",
"X = tf.placeholder(tf.float32, [None, n_input])\n",
"# 노이즈와 실제 이미지에, 그에 해당하는 숫자에 대한 정보를 넣어주기 위해 사용합니다.\n",
"Y = tf.placeholder(tf.float32, [None, n_class])\n",
"Z = tf.placeholder(tf.float32, [None, n_noise])\n",
"\n",
"\n",
"def generator(noise, labels):\n",
" with tf.variable_scope('generator'):\n",
" # noise 값에 labels 정보를 추가합니다.\n",
" inputs = tf.concat([noise, labels], 1)\n",
"\n",
" # TensorFlow 에서 제공하는 유틸리티 함수를 이용해 신경망을 매우 간단하게 구성할 수 있습니다.\n",
" hidden = tf.layers.dense(inputs, n_hidden,\n",
" activation=tf.nn.relu)\n",
" output = tf.layers.dense(hidden, n_input,\n",
" activation=tf.nn.sigmoid)\n",
"\n",
" return output\n",
"\n",
"\n",
"def discriminator(inputs, labels, reuse=None):\n",
" with tf.variable_scope('discriminator') as scope:\n",
" # 노이즈에서 생성한 이미지와 실제 이미지를 판별하는 모델의 변수를 동일하게 하기 위해,\n",
" # 이전에 사용되었던 변수를 재사용하도록 합니다.\n",
" if reuse:\n",
" scope.reuse_variables()\n",
"\n",
" inputs = tf.concat([inputs, labels], 1)\n",
"\n",
" hidden = tf.layers.dense(inputs, n_hidden,\n",
" activation=tf.nn.relu)\n",
" output = tf.layers.dense(hidden, 1,\n",
" activation=None)\n",
"\n",
" return output\n",
"\n",
"\n",
"def get_noise(batch_size, n_noise):\n",
" return np.random.uniform(-1., 1., size=[batch_size, n_noise])\n",
"\n",
"# 생성 모델과 판별 모델에 Y 즉, labels 정보를 추가하여\n",
"# labels 정보에 해당하는 이미지를 생성할 수 있도록 유도합니다.\n",
"G = generator(Z, Y)\n",
"D_real = discriminator(X, Y)\n",
"D_gene = discriminator(G, Y, True)\n",
"\n",
"# 손실함수는 다음을 참고하여 GAN 논문에 나온 방식과는 약간 다르게 작성하였습니다.\n",
"# http://bamos.github.io/2016/08/09/deep-completion/\n",
"# 진짜 이미지를 판별하는 D_real 값은 1에 가깝도록,\n",
"# 가짜 이미지를 판별하는 D_gene 값은 0에 가깝도록 하는 손실 함수입니다.\n",
"loss_D_real = tf.reduce_mean(\n",
" tf.nn.sigmoid_cross_entropy_with_logits(\n",
" logits=D_real, labels=tf.ones_like(D_real)))\n",
"loss_D_gene = tf.reduce_mean(\n",
" tf.nn.sigmoid_cross_entropy_with_logits(\n",
" logits=D_gene, labels=tf.zeros_like(D_gene)))\n",
"# loss_D_real 과 loss_D_gene 을 더한 뒤 이 값을 최소화 하도록 최적화합니다.\n",
"loss_D = loss_D_real + loss_D_gene\n",
"# 가짜 이미지를 진짜에 가깝게 만들도록 생성망을 학습시키기 위해, D_gene 을 최대한 1에 가깝도록 만드는 손실함수입니다.\n",
"loss_G = tf.reduce_mean(\n",
" tf.nn.sigmoid_cross_entropy_with_logits(\n",
" logits=D_gene, labels=tf.ones_like(D_gene)))\n",
"\n",
"# TensorFlow 에서 제공하는 유틸리티 함수를 이용해\n",
"# discriminator 와 generator scope 에서 사용된 변수들을 쉽게 가져올 수 있습니다.\n",
"vars_D = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,\n",
" scope='discriminator')\n",
"vars_G = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,\n",
" scope='generator')\n",
"\n",
"train_D = tf.train.AdamOptimizer().minimize(loss_D,\n",
" var_list=vars_D)\n",
"train_G = tf.train.AdamOptimizer().minimize(loss_G,\n",
" var_list=vars_G)\n",
"\n",
"#########\n",
"# 신경망 모델 학습\n",
"######\n",
"sess = tf.Session()\n",
"sess.run(tf.global_variables_initializer())\n",
"\n",
"total_batch = int(mnist.train.num_examples/batch_size)\n",
"loss_val_D, loss_val_G = 0, 0\n",
"\n",
"for epoch in range(total_epoch):\n",
" for i in range(total_batch):\n",
" batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n",
" noise = get_noise(batch_size, n_noise)\n",
"\n",
" _, loss_val_D = sess.run([train_D, loss_D],\n",
" feed_dict={X: batch_xs, Y: batch_ys, Z: noise})\n",
" _, loss_val_G = sess.run([train_G, loss_G],\n",
" feed_dict={Y: batch_ys, Z: noise})\n",
"\n",
" print('Epoch:', '%04d' % epoch,\n",
" 'D loss: {:.4}'.format(loss_val_D),\n",
" 'G loss: {:.4}'.format(loss_val_G))\n",
"\n",
" #########\n",
" # 학습이 되어가는 모습을 보기 위해 주기적으로 레이블에 따른 이미지를 생성하여 저장\n",
" ######\n",
" if epoch == 0 or (epoch + 1) % 10 == 0:\n",
" sample_size = 10\n",
" noise = get_noise(sample_size, n_noise)\n",
" samples = sess.run(G,\n",
" feed_dict={Y: mnist.test.labels[:sample_size],\n",
" Z: noise})\n",
"\n",
" fig, ax = plt.subplots(2, sample_size, figsize=(sample_size, 2))\n",
"\n",
" for i in range(sample_size):\n",
" ax[0][i].set_axis_off()\n",
" ax[1][i].set_axis_off()\n",
"\n",
" ax[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))\n",
" ax[1][i].imshow(np.reshape(samples[i], (28, 28)))\n",
"\n",
" plt.savefig('drive/samples2/{}.png'.format(str(epoch).zfill(3)), bbox_inches='tight')\n",
" plt.close(fig)\n",
"\n",
"print('최적화 완료!')"
],
"execution_count": 11,
"outputs": [
{
"output_type": "stream",
"text": [
"Extracting ./mnist/data/train-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/train-labels-idx1-ubyte.gz\n",
"Extracting ./mnist/data/t10k-images-idx3-ubyte.gz\n",
"Extracting ./mnist/data/t10k-labels-idx1-ubyte.gz\n",
"Epoch: 0000 D loss: 0.008381 G loss: 7.819\n",
"Epoch: 0001 D loss: 0.002816 G loss: 8.38\n",
"Epoch: 0002 D loss: 0.004621 G loss: 8.365\n",
"Epoch: 0003 D loss: 0.02734 G loss: 7.128\n",
"Epoch: 0004 D loss: 0.05892 G loss: 5.778\n",
"Epoch: 0005 D loss: 0.04257 G loss: 7.798\n",
"Epoch: 0006 D loss: 0.07385 G loss: 5.412\n",
"Epoch: 0007 D loss: 0.1051 G loss: 7.021\n",
"Epoch: 0008 D loss: 0.08916 G loss: 6.726\n",
"Epoch: 0009 D loss: 0.121 G loss: 5.985\n",
"Epoch: 0010 D loss: 0.1702 G loss: 5.48\n",
"Epoch: 0011 D loss: 0.1457 G loss: 5.325\n",
"Epoch: 0012 D loss: 0.3287 G loss: 4.135\n",
"Epoch: 0013 D loss: 0.3641 G loss: 4.339\n",
"Epoch: 0014 D loss: 0.3754 G loss: 4.196\n",
"Epoch: 0015 D loss: 0.4576 G loss: 4.049\n",
"Epoch: 0016 D loss: 0.3181 G loss: 3.975\n",
"Epoch: 0017 D loss: 0.4755 G loss: 3.883\n",
"Epoch: 0018 D loss: 0.5816 G loss: 3.482\n",
"Epoch: 0019 D loss: 0.331 G loss: 3.589\n",
"Epoch: 0020 D loss: 0.6673 G loss: 2.806\n",
"Epoch: 0021 D loss: 0.6348 G loss: 2.594\n",
"Epoch: 0022 D loss: 0.471 G loss: 2.719\n",
"Epoch: 0023 D loss: 0.6326 G loss: 2.598\n",
"Epoch: 0024 D loss: 0.7446 G loss: 2.77\n",
"Epoch: 0025 D loss: 0.6136 G loss: 2.396\n",
"Epoch: 0026 D loss: 0.6266 G loss: 2.414\n",
"Epoch: 0027 D loss: 0.709 G loss: 2.317\n",
"Epoch: 0028 D loss: 0.7416 G loss: 2.718\n",
"Epoch: 0029 D loss: 0.7774 G loss: 1.999\n",
"Epoch: 0030 D loss: 0.8911 G loss: 2.296\n",
"Epoch: 0031 D loss: 0.5947 G loss: 2.635\n",
"Epoch: 0032 D loss: 0.6868 G loss: 1.982\n",
"Epoch: 0033 D loss: 0.793 G loss: 2.16\n",
"Epoch: 0034 D loss: 0.59 G loss: 2.538\n",
"Epoch: 0035 D loss: 0.5976 G loss: 2.489\n",
"Epoch: 0036 D loss: 0.8132 G loss: 2.483\n",
"Epoch: 0037 D loss: 0.7541 G loss: 2.184\n",
"Epoch: 0038 D loss: 0.658 G loss: 2.47\n",
"Epoch: 0039 D loss: 0.9847 G loss: 2.007\n",
"Epoch: 0040 D loss: 0.7038 G loss: 2.114\n",
"Epoch: 0041 D loss: 0.7773 G loss: 2.065\n",
"Epoch: 0042 D loss: 0.7423 G loss: 2.042\n",
"Epoch: 0043 D loss: 0.698 G loss: 2.058\n",
"Epoch: 0044 D loss: 0.6892 G loss: 2.026\n",
"Epoch: 0045 D loss: 0.6513 G loss: 2.281\n",
"Epoch: 0046 D loss: 0.7602 G loss: 2.186\n",
"Epoch: 0047 D loss: 0.6106 G loss: 2.33\n",
"Epoch: 0048 D loss: 0.656 G loss: 2.172\n",
"Epoch: 0049 D loss: 0.8573 G loss: 2.139\n",
"Epoch: 0050 D loss: 0.8015 G loss: 2.166\n",
"Epoch: 0051 D loss: 0.7882 G loss: 1.967\n",
"Epoch: 0052 D loss: 0.712 G loss: 2.051\n",
"Epoch: 0053 D loss: 0.6346 G loss: 2.158\n",
"Epoch: 0054 D loss: 0.8106 G loss: 1.987\n",
"Epoch: 0055 D loss: 0.6558 G loss: 2.16\n",
"Epoch: 0056 D loss: 0.5882 G loss: 2.297\n",
"Epoch: 0057 D loss: 0.8101 G loss: 1.683\n",
"Epoch: 0058 D loss: 0.7171 G loss: 2.087\n",
"Epoch: 0059 D loss: 0.7361 G loss: 2.194\n",
"Epoch: 0060 D loss: 0.7318 G loss: 2.112\n",
"Epoch: 0061 D loss: 0.7172 G loss: 2.126\n",
"Epoch: 0062 D loss: 0.7897 G loss: 1.78\n",
"Epoch: 0063 D loss: 0.8808 G loss: 1.851\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"Epoch: 0064 D loss: 0.781 G loss: 1.901\n",
"Epoch: 0065 D loss: 0.776 G loss: 1.824\n",
"Epoch: 0066 D loss: 0.8589 G loss: 1.757\n",
"Epoch: 0067 D loss: 0.8871 G loss: 1.842\n",
"Epoch: 0068 D loss: 0.8163 G loss: 1.941\n",
"Epoch: 0069 D loss: 0.8028 G loss: 1.93\n",
"Epoch: 0070 D loss: 0.7555 G loss: 2.299\n",
"Epoch: 0071 D loss: 0.6915 G loss: 1.749\n",
"Epoch: 0072 D loss: 0.7556 G loss: 2.059\n",
"Epoch: 0073 D loss: 0.8928 G loss: 1.804\n",
"Epoch: 0074 D loss: 0.5796 G loss: 2.053\n",
"Epoch: 0075 D loss: 0.7524 G loss: 2.198\n",
"Epoch: 0076 D loss: 0.6148 G loss: 2.297\n",
"Epoch: 0077 D loss: 0.8725 G loss: 2.054\n",
"Epoch: 0078 D loss: 0.6748 G loss: 2.139\n",
"Epoch: 0079 D loss: 0.6214 G loss: 2.029\n",
"Epoch: 0080 D loss: 0.8045 G loss: 1.854\n",
"Epoch: 0081 D loss: 0.8058 G loss: 2.216\n",
"Epoch: 0082 D loss: 0.8828 G loss: 1.685\n",
"Epoch: 0083 D loss: 0.5637 G loss: 2.233\n",
"Epoch: 0084 D loss: 0.6604 G loss: 2.062\n",
"Epoch: 0085 D loss: 0.8072 G loss: 1.908\n",
"Epoch: 0086 D loss: 0.7107 G loss: 1.955\n",
"Epoch: 0087 D loss: 0.764 G loss: 2.233\n",
"Epoch: 0088 D loss: 0.721 G loss: 2.409\n",
"Epoch: 0089 D loss: 0.8139 G loss: 2.203\n",
"Epoch: 0090 D loss: 0.9588 G loss: 2.022\n",
"Epoch: 0091 D loss: 0.7603 G loss: 1.894\n",
"Epoch: 0092 D loss: 0.7426 G loss: 2.0\n",
"Epoch: 0093 D loss: 0.7321 G loss: 2.102\n",
"Epoch: 0094 D loss: 0.8118 G loss: 1.551\n",
"Epoch: 0095 D loss: 0.6821 G loss: 1.776\n",
"Epoch: 0096 D loss: 0.8436 G loss: 2.168\n",
"Epoch: 0097 D loss: 0.8298 G loss: 1.904\n",
"Epoch: 0098 D loss: 0.8047 G loss: 1.753\n",
"Epoch: 0099 D loss: 0.6291 G loss: 2.39\n",
"최적화 완료!\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "IMMg5phZL_fd",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"실행결과\n",
"\n",
"![대체 텍스트](https://i.imgur.com/3RS0X9c.gif)"
]
},
{
"metadata": {
"id": "ff8JUHpcMes5",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"### GAN 응용\n",
"\n",
"https://arxiv.org/pdf/1611.07004v1.pdf\n",
"\n",
"image-to-image 변환\n",
"\n",
"![대체 텍스트](https://i.imgur.com/vi0aEKx.png)\n",
"\n",
"출처 : https://arxiv.org/pdf/1611.07004v1.pdf\n",
"\n",
"Paints Chainer\n",
"\n",
"http://paintschainer.preferred.tech/index_en.html\n",
"\n",
"![대체 텍스트](https://i.imgur.com/eflNgrA.png)\n",
"\n",
"출처 : https://github.com/pfnet/PaintsChainer\n",
"\n",
"GAN 이론 참고\n",
"\n",
"http://jaejunyoo.blogspot.com/2017/01/generative-adversarial-nets-1.html\n",
"\n",
"http://jaejunyoo.blogspot.com/2017/01/generative-adversarial-nets-2.html"
]
},
{
"metadata": {
"id": "ArsVo99QmBFJ",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 2-1. GAN 더 이해해보기\n",
"\n",
"![대체 텍스트](https://i.imgur.com/oi8phoL.png)\n",
"\n",
"출처 : https://arxiv.org/pdf/1406.2661.pdf\n",
"\n",
"초록색 선이 가짜 값이고 검은 도트 선이 진짜 값입니다. 초록색 도트 선은 구분자입니다.\n",
"\n",
"초록색 선을 보시면 가짜의 값이 처음에는 아예 맞지 않다가 점점 구분자를 통해서 가짜값이 점점 진짜의 값과 거의 구분할 수 없는 정도의 값이 되어간다고 설명해주고 있습니다.\n",
"\n",
"구분자의 선은 0-1사이의 확률의 그래프라고 보시면 될거 같습니다.\n",
"\n",
"위의 과정 참고 동영상 url : https://youtu.be/0r3g7-4bMYU"
]
},
{
"metadata": {
"id": "qwIc0_n769GH",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Generative Adversarial Network for approximating a 1D Gaussian distribution\n",
"\n",
"url : https://github.com/hwalsuklee/tensorflow-GAN-1d-gaussian-ex\n",
"\n",
"![대체 텍스트](https://i.imgur.com/ffvFSmB.png)\n",
"\n",
"트레이닝 100번 했을 시 \n",
"\n",
"파랜색선은 진짜 데이터이고 빨간선이 생성자가 만든 가짜 데이터라고 생각하시면 됩니다!\n",
"\n",
"![대체 텍스트](https://i.imgur.com/y2vFF0X.png)\n",
"\n",
"트레이닝 1000번 했을 시 \n",
"\n",
"![대체 텍스트](https://i.imgur.com/i0BYWeV.png)\n",
"\n",
"트레이닝 3000번 했을 시 \n",
"\n",
"![대체 텍스트](https://i.imgur.com/gQnUkuM.png)\n",
"\n",
"트레이닝 5000번 했을 시\n",
"\n",
"![대체 텍스트](https://i.imgur.com/7nmTCEs.png)\n",
"\n",
"트레이닝 10000번 했을시\n",
"\n",
"![대체 텍스트](https://i.imgur.com/wawFJX1.png)\n",
"\n",
"트레이닝 100000번 했을시\n",
"\n",
"십만번 트레이닝 했을시에는 거의 진짜 데이터와 다르지 않음을 보실 수 있습니다.\n",
"\n",
"정규분포가 궁금하신 분 : https://ko.wikipedia.org/wiki/%EC%A0%95%EA%B7%9C%EB%B6%84%ED%8F%AC"
]
},
{
"metadata": {
"id": "VXvKERFC64EG",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"![대체 텍스트](https://i.imgur.com/fz4F2p7.png)\n",
"\n",
"원본 이미지 출처 : https://arxiv.org/pdf/1406.2661.pdf\n",
"\n",
"G(Generator)는 생성용 벡터 z로부터 데이터를 생성\n",
"\n",
"D(Discriminator)는 대상 데이터가 진짜(데이터 세트)인가 가짜(G에 의해 생성)를 식별 \n",
"\n",
"![대체 텍스트](https://i.imgur.com/etwzVQU.png)\n",
"\n",
"원본 이미지 출처 : https://arxiv.org/pdf/1406.2661.pdf\n",
"\n",
"discriminator object function(loss)\n",
"\n",
"![대체 텍스트](https://i.imgur.com/JkWwfif.png)\n",
"\n",
"원본 이미지 출처 : https://arxiv.org/pdf/1406.2661.pdf\n",
"\n",
"generator object function\n",
"\n",
"제 식대로 논문에 나온 수식을 이해하자면 서로 값을 크게하는 목표 / 값을 작게하는 목표 라는 상반된 목표를 가졌기에\n",
"\n",
"위조지폐범(생성자)과 경찰(구분자)에 대한 이야기. 위조지폐범은 경찰을 최대한 속이려고 노력하고, 경찰은 위조한 지폐를 최대한 감별하려고 노력한다는 비유가 어느 정도는 이해가 되는듯 싶습니다."
]
},
{
"metadata": {
"id": "UNJkjOkh-RZY",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 2-2. GAN 응용\n",
"\n",
"https://github.com/Crpediem/icml2016\n",
"\n",
"Generative Adversarial Text-to-Image Synthesis\n",
"\n",
"http://mattya.github.io/chainer-DCGAN/\n",
"\n",
"chainer-DCGAN"
]
},
{
"metadata": {
"id": "Xpy_ST59v8dF",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 2-3. DCGAN (Deep Convolutional Generative Adversarial Nets)\n",
"\n",
"![대체 텍스트](https://i.imgur.com/dahVLr4.png)\n",
"\n",
"출처 : https://arxiv.org/pdf/1511.06434.pdf\n",
"\n",
"기존의 GAN과 DCGAN의 차이점은 fully-connected 가 CNN 구조로 대체된 것이 차별점입니다.\n",
"\n",
"(DCGAN의 전체이름이 Deep Convolutional Generative Adversarial Nets도 바로 이런 이유가 아닌가 싶습니다.)\n",
"\n",
"![대체 텍스트](https://i.imgur.com/5eOVjDE.png)\n",
"\n",
"출처 : https://arxiv.org/pdf/1511.06434.pdf\n",
"\n",
"다섯번 에폭시를 돌려 학습한 후 생성된 침실 사진이라고 합니다.\n",
"\n",
"https://github.com/znxlwm/tensorflow-MNIST-GAN-DCGAN\n",
"\n",
"mnist를 GAN과 DCGAN을 비교한 결과인데요 DCGAN이 epoch이 적더라도 더 좋은 결과를 보여줌을 보실 수 있습니다.\n",
"\n",
"![대체 텍스트](https://i.imgur.com/kf6vJhJ.png)\n",
"\n",
"출처 : https://arxiv.org/pdf/1511.06434.pdf\n",
"\n",
"또한 DCGAN을 통해서 사진에서 일정한 특징을 뽑아서 다른 사진에 그 특징을 더할 수 있음을 논문에서는 보여주고 있습니다.\n",
"\n",
"(위의 이미지들도 다 DCGAN으로 생성된 가짜 이미지들입니다.)"
]
},
{
"metadata": {
"id": "CjW6Smx9JLLT",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 2-4. 그 외 GAN 종류들\n",
"\n",
"- infoGAN : https://arxiv.org/pdf/1606.03657.pdf, https://medium.com/emergent-future/learning-interpretable-latent-representations-with-infogan-dd710852db46, https://wiseodd.github.io/techblog/2017/01/29/infogan/\n",
"\n",
"- ConditionalGAN : https://arxiv.org/pdf/1411.1784.pdf, http://t-lab.tistory.com/29\n",
"\n",
"- Wasserstein GAN : https://arxiv.org/pdf/1701.07875.pdf, https://tensorflow.blog/2017/02/06/wasserstein-gan-1701-07875/\n",
"\n",
"- Least Squares GAN : https://arxiv.org/pdf/1611.04076v2.pdf, http://jaejunyoo.blogspot.com/2017/03/lsgan-1.html, https://wiseodd.github.io/techblog/2017/03/02/least-squares-gan/\n",
"\n",
"- Energy Based GAN : https://arxiv.org/pdf/1609.03126.pdf, http://jaejunyoo.blogspot.com/2018/02/energy-based-generative-adversarial-nets-1.html\n",
"\n",
"- f-GAN : https://arxiv.org/pdf/1606.00709.pdf, http://jaejunyoo.blogspot.com/2017/06/f-gan.html\n",
"\n",
"- DiscoGAN : https://arxiv.org/pdf/1703.05192.pdf, https://github.com/ilguyi/discoGAN.tensorflow.slim, https://angrypark.github.io/DiscoGAN-paper-reading/\n",
"\n",
"- CycleGAN : https://github.com/junyanz/CycleGAN, https://arxiv.org/pdf/1703.10593.pdf, https://taeoh-kim.github.io/blog/gan%EC%9D%84-%EC%9D%B4%EC%9A%A9%ED%95%9C-image-to-image-translation-pix2pix-cyclegan-discogan/\n",
"\n",
"- Boundary Equilibrium GAN : https://arxiv.org/pdf/1703.10717.pdf, http://jaejunyoo.blogspot.com/2017/04/began-boundary-equilibrium-gan-1.html"
]
}
]
}