@takatakamanbou
Last active July 13, 2020 15:44
PIP2020-14-note2.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "PIP2020-14-note2.ipynb",
"provenance": [],
"collapsed_sections": [],
"toc_visible": true,
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/takatakamanbou/9d58b99eb782b27602ca7efc37235790/pip2020-14-note2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NI3mY1FGZWgV",
"colab_type": "text"
},
"source": [
"# Pattern Information Processing 2020, Lecture 14, Material Part 2\n",
"\n",
"Course website: https://www-tlab.math.ryukoku.ac.jp/wiki/?PIP/2020\n",
"\n",
"![hoge](https://www-tlab.math.ryukoku.ac.jp/~takataka/course/PIP/PIP-logo-96x96.png)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ffaGVK2o5VWe",
"colab_type": "text"
},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0IpujRN_Wuib",
"colab_type": "text"
},
"source": [
"### What is this?\n",
"\n",
"This document is a notebook created with Google Colaboratory (hereafter Google Colab), a service provided by Google. It lets you run Python programs on a virtual Linux machine built in the cloud."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QRNjWashW4pr",
"colab_type": "text"
},
"source": [
"### How to run this notebook\n",
"\n",
"Later cells in this notebook may use variables and functions defined in earlier cells, so running a cell without first running the ones above it can cause errors. Run the cells in order from the top.\n",
"\n",
"You can also run everything at once with 'Run all' in the 'Runtime' menu."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1jfSXIY6VNxP",
"colab_type": "text"
},
"source": [
"## Setup\n",
"\n",
"The following cell sets up the environment for running the programs. Run it before moving on."
]
},
{
"cell_type": "code",
"metadata": {
"id": "IFYQk7ONZENz",
"colab_type": "code",
"colab": {}
},
"source": [
"# make NumPy, the scientific computing library, available under the name np\n",
"import numpy as np\n",
"\n",
"# import pyplot from matplotlib, the plotting library, under the name plt\n",
"import matplotlib.pyplot as plt\n",
"# seaborn makes matplotlib plots prettier and more convenient\n",
"import seaborn as sns\n",
"sns.set()\n",
"\n",
"# import cv2, the module of OpenCV, a computer vision / image processing library\n",
"import cv2\n",
"\n",
"# import Keras, a deep learning library\n",
"import keras\n",
"\n",
"# datetime, for measuring elapsed time\n",
"import datetime"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "HXUMt-TsY-2B",
"colab_type": "text"
},
"source": [
"## ★ Classifying handwritten digit images (2)\n",
"\n",
"Let's experiment with classifying the MNIST handwritten digit images, this time using logistic regression and neural networks.\n",
"\n",
"Reference: http://yann.lecun.com/exdb/mnist/"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_EAfO4znc1vS",
"colab_type": "text"
},
"source": [
"### Setup"
]
},
{
"cell_type": "code",
"metadata": {
"id": "TMq9Hz4LYiOK",
"colab_type": "code",
"colab": {}
},
"source": [
"# fetch the four files from the site above and gunzip them\n",
"! for fn in train-images-idx3-ubyte t10k-images-idx3-ubyte train-labels-idx1-ubyte t10k-labels-idx1-ubyte; do if [ ! -e ${fn} ]; then wget -nc http://yann.lecun.com/exdb/mnist/${fn}.gz ; gunzip ${fn}.gz; fi; done\n",
"! ls -l"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "KJcd4JC0Ytql",
"colab_type": "code",
"colab": {}
},
"source": [
"# class definition for reading MNIST\n",
"\n",
"import struct\n",
"import os\n",
"import numpy as np\n",
"\n",
"class MNIST:\n",
"\n",
"    def __init__(self):\n",
"\n",
"        fnImageL = 'train-images-idx3-ubyte'\n",
"        fnImageT = 't10k-images-idx3-ubyte'\n",
"        fnLabelL = 'train-labels-idx1-ubyte'\n",
"        fnLabelT = 't10k-labels-idx1-ubyte'\n",
"\n",
"        if not os.path.exists(fnImageL) or not os.path.exists(fnImageT) or not os.path.exists(fnLabelL) or not os.path.exists(fnLabelT):\n",
"            print('Please get the MNIST files first.')\n",
"            return\n",
"\n",
"        self.fnImage = {'L': fnImageL, 'T': fnImageT}\n",
"        self.fnLabel = {'L': fnLabelL, 'T': fnLabelT}\n",
"        self.nrow = 28\n",
"        self.ncol = 28\n",
"        self.nclass = 10\n",
"\n",
"    def getLabel(self, LT):\n",
"\n",
"        return _readLabel(self.fnLabel[LT])\n",
"\n",
"    def getImage(self, LT):\n",
"\n",
"        return _readImage(self.fnImage[LT])\n",
"\n",
"\n",
"##### reading the label file\n",
"#\n",
"def _readLabel(fnLabel):\n",
"\n",
"    with open(fnLabel, 'rb') as f:\n",
"\n",
"        ### header (two 4B integers: magic number (2049) & number of items)\n",
"        #\n",
"        header = f.read(8)\n",
"        mn, num = struct.unpack('>2i', header)  # MSB first (big-endian)\n",
"        assert mn == 2049\n",
"\n",
"        ### labels (unsigned bytes)\n",
"        #\n",
"        label = np.array(struct.unpack('>%dB' % num, f.read()), dtype=int)\n",
"\n",
"    return label\n",
"\n",
"\n",
"##### reading the image file\n",
"#\n",
"def _readImage(fnImage):\n",
"\n",
"    with open(fnImage, 'rb') as f:\n",
"\n",
"        ### header (four 4B integers: magic number (2051), #images, #rows, and #cols)\n",
"        #\n",
"        header = f.read(16)\n",
"        mn, num, nrow, ncol = struct.unpack('>4i', header)  # MSB first (big-endian)\n",
"        assert mn == 2051\n",
"\n",
"        ### pixels (unsigned bytes)\n",
"        #\n",
"        npixel = ncol * nrow\n",
"        pixel = np.empty((num, npixel))\n",
"        for i in range(num):\n",
"            buf = struct.unpack('>%dB' % npixel, f.read(npixel))\n",
"            pixel[i, :] = np.asarray(buf)\n",
"\n",
"    return pixel\n"
],
"execution_count": null,
"outputs": []
},
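{
"cell_type": "markdown",
"metadata": {},
"source": [
"The readers above parse each file's header with big-endian `struct` formats ('>2i', '>4i'). A minimal sketch of that idea, packing and then unpacking a fake label-file header in memory (the values are illustrative, not read from the actual files):\n",
"\n",
"```python\n",
"import struct\n",
"\n",
"# pack a fake label-file header: magic number 2049 and item count 60000,\n",
"# as two 4-byte big-endian integers (MSB first)\n",
"header = struct.pack('>2i', 2049, 60000)\n",
"\n",
"# unpack it the same way _readLabel does\n",
"mn, num = struct.unpack('>2i', header)\n",
"assert mn == 2049 and num == 60000\n",
"```"
]
},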
{
"cell_type": "code",
"metadata": {
"id": "9B7_wzfnYaYd",
"colab_type": "code",
"colab": {}
},
"source": [
"# function to visualize the data\n",
"\n",
"def display(data, nx, ny, nrow=28, ncol=28, gap=4):\n",
"\n",
"    assert data.shape[0] == nx*ny\n",
"    assert data.shape[1] == nrow*ncol\n",
"\n",
"    # width and height of the tiled image\n",
"    width = nx * (ncol + gap) + gap\n",
"    height = ny * (nrow + gap) + gap\n",
"\n",
"    # build the image\n",
"    img = np.zeros((height, width), dtype=int) + 128\n",
"    for iy in range(ny):\n",
"        lty = iy*(nrow + gap) + gap\n",
"        for ix in range(nx):\n",
"            ltx = ix*(ncol + gap) + gap\n",
"            img[lty:lty+nrow, ltx:ltx+ncol] = data[iy*nx+ix].reshape((nrow, ncol))\n",
"\n",
"    # show the image\n",
"    plt.axis('off')\n",
"    plt.imshow(img, cmap='gray')\n",
"    plt.show()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "3p3d_5hDagv1",
"colab_type": "text"
},
"source": [
"### Preparing the training data"
]
},
{
"cell_type": "code",
"metadata": {
"id": "ZPjOgsYvZAgX",
"colab_type": "code",
"colab": {}
},
"source": [
"# MNIST training data\n",
"mn = MNIST()\n",
"datL = mn.getImage('L')\n",
"labL = mn.getLabel('L')\n",
"NL = datL.shape[0]\n",
"print('# number of training samples: ', NL)\n",
"D = mn.nrow * mn.ncol\n",
"K = 10"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "jvpXmhq2ZNr9",
"colab_type": "code",
"colab": {}
},
"source": [
"# visualize the first 50 training samples\n",
"display(datL[:50, :], 10, 5)\n",
"\n",
"# their ground-truth labels\n",
"for i in range(5):\n",
"    print(labL[i*10:((i+1)*10)])"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "D-boZcagmGZW",
"colab_type": "code",
"colab": {}
},
"source": [
"# MNIST test data\n",
"datT = mn.getImage('T')\n",
"labT = mn.getLabel('T')\n",
"NT = datT.shape[0]\n",
"print('# number of test samples: ', NT)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "9faxTKNdr7I1",
"colab_type": "text"
},
"source": [
"### Classification with logistic regression and neural networks"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hdprDDL2LOqD",
"colab_type": "text"
},
"source": [
"Define the networks and how to train them."
]
},
{
"cell_type": "code",
"metadata": {
"id": "IUNa4Pmjr8B4",
"colab_type": "code",
"colab": {}
},
"source": [
"### function that defines the network\n",
"#\n",
"def makeNetwork(model_type, D, K, H1=256, H2=128):\n",
"\n",
"    network = keras.models.Sequential()\n",
"\n",
"    if model_type == 1:    # logistic regression\n",
"        print('# Logistic regression: {}-dimensional input, {}-dimensional output.'.format(D, K))\n",
"        network.add(keras.layers.Dense(K, input_shape=(D,)))\n",
"        network.add(keras.layers.Activation('softmax'))\n",
"    elif model_type == 2:  # 2-layer MLP\n",
"        print('# 2-layer MLP: {}-dimensional input, {} hidden neurons, {} output neurons.'.format(D, H1, K))\n",
"        network.add(keras.layers.Dense(H1, input_shape=(D,)))\n",
"        network.add(keras.layers.Activation('relu'))\n",
"        network.add(keras.layers.Dense(K))\n",
"        network.add(keras.layers.Activation('softmax'))\n",
"    elif model_type == 3:  # 3-layer MLP\n",
"        print('# 3-layer MLP: {}-dimensional input, {} - {} - {} neurons in (1st hidden) - (2nd hidden) - (output) order.'.format(D, H1, H2, K))\n",
"        network.add(keras.layers.Dense(H1, input_shape=(D,)))\n",
"        network.add(keras.layers.Activation('relu'))\n",
"        network.add(keras.layers.Dense(H2))\n",
"        network.add(keras.layers.Activation('relu'))\n",
"        network.add(keras.layers.Dense(K))\n",
"        network.add(keras.layers.Activation('softmax'))\n",
"    else:\n",
"        raise ValueError('makeNetwork: model_type error')\n",
"\n",
"    # optimizer: stochastic gradient descent (SGD) with momentum\n",
"    optimizer = keras.optimizers.SGD(lr=0.1, momentum=0.9)\n",
"\n",
"    network.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])\n",
"\n",
"    return network"
],
"execution_count": null,
"outputs": []
},
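{
"cell_type": "markdown",
"metadata": {},
"source": [
"Every model above ends with a `softmax` activation, which turns the K output values into a probability distribution. A minimal NumPy sketch of the formula (not Keras's implementation, just the computation it performs):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def softmax(z):\n",
"    # subtract the max for numerical stability, then exponentiate and normalize\n",
"    e = np.exp(z - np.max(z))\n",
"    return e / np.sum(e)\n",
"\n",
"p = softmax(np.array([2.0, 1.0, 0.1]))\n",
"assert np.isclose(np.sum(p), 1.0)  # the outputs sum to 1\n",
"assert np.argmax(p) == 0           # the largest input gets the largest probability\n",
"```"
]
},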
{
"cell_type": "markdown",
"metadata": {
"id": "_9LwnfGVLX67",
"colab_type": "text"
},
"source": [
"Run the training and evaluate on the test data. You can change the network architecture by changing the first argument of makeNetwork on the third line below."
]
},
{
"cell_type": "code",
"metadata": {
"id": "-ZmNppti0_bf",
"colab_type": "code",
"colab": {}
},
"source": [
"### build the network\n",
"#\n",
"network = makeNetwork(1, D, K)  # first argument: 1 = logistic regression, 2 = 2-layer MLP, 3 = 3-layer MLP\n",
"\n",
"### training\n",
"#\n",
"batchsize = 128\n",
"XL = datL / 255\n",
"YL = keras.utils.to_categorical(labL, num_classes=K)\n",
"\n",
"print('# iteration  loss  error rate [%]')\n",
"start = datetime.datetime.now()\n",
"\n",
"for i in range(10000+1):  # training loop\n",
"\n",
"    if (i < 500 and i % 100 == 0) or (i % 500 == 0):\n",
"        lossL, accL = network.evaluate(XL, YL, batch_size=batchsize, verbose=0)  # evaluation\n",
"        print('{} {:.3f} {:.2f}'.format(i, lossL, (1-accL)*100))\n",
"\n",
"    ii = np.random.randint(XL.shape[0], size=batchsize)\n",
"    network.train_on_batch(XL[ii], YL[ii])  # one training step\n",
"\n",
"print('# elapsed time:', datetime.datetime.now() - start)\n",
"\n",
"lossL, accL = network.evaluate(XL, YL, batch_size=batchsize, verbose=0)\n",
"print('# loss and error rate [%] on the training data: {:.3f} {:.2f}'.format(lossL, (1-accL)*100))\n",
"\n",
"### test\n",
"#\n",
"XT = datT / 255\n",
"YT = keras.utils.to_categorical(labT, num_classes=K)\n",
"lossT, accT = network.evaluate(XT, YT, batch_size=batchsize, verbose=0)\n",
"print('# loss and error rate [%] on the test data: {:.3f} {:.2f}'.format(lossT, (1-accT)*100))"
],
"execution_count": null,
"outputs": []
},
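{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell above relies on two small ingredients: `keras.utils.to_categorical`, which one-hot encodes the labels, and `np.random.randint`, which draws a random minibatch. A NumPy-only sketch of both, using a hypothetical tiny label array:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"labels = np.array([3, 0, 2])  # hypothetical class labels, K = 4 classes\n",
"K = 4\n",
"\n",
"# one-hot encoding: row i has a 1 in column labels[i], zeros elsewhere\n",
"onehot = np.zeros((labels.size, K))\n",
"onehot[np.arange(labels.size), labels] = 1\n",
"assert onehot[0, 3] == 1 and onehot.sum() == labels.size\n",
"\n",
"# minibatch sampling: batchsize random row indices, drawn with replacement\n",
"batchsize = 2\n",
"ii = np.random.randint(labels.size, size=batchsize)\n",
"assert ii.shape == (batchsize,) and (ii < labels.size).all()\n",
"```"
]
},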
{
"cell_type": "markdown",
"metadata": {
"id": "7CKOPjZVLorx",
"colab_type": "text"
},
"source": [
"### ★★★ Do the following ★★★\n",
"\n",
"(1) Run the training and test experiment with logistic regression, and note the first line of the cell output above (the '# Logistic regression...' line) and the last three lines (from the elapsed time on). Note that logistic regression initializes its parameters with random numbers, and the course of training depends on those values, so the results change from run to run (the same applies to the MLPs below). Ideally you would repeat the experiment several times, but a single run is enough here.\n",
"\n",
"(2) Run the training and test experiment with the 2-layer MLP and note the results as in (1).\n",
"\n",
"(3) Run the training and test experiment with the 3-layer MLP and note the results as in (1).\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9TGB_01AUB9C",
"colab_type": "text"
},
"source": [
"## ★ GPGPU and convolutional neural networks (bonus exercise)"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Zm33Pz88UG_D",
"colab_type": "code",
"colab": {}
},
"source": [
"### function that defines the network\n",
"#\n",
"def makeCNN(K):\n",
"\n",
"    cnn = keras.models.Sequential()\n",
"    cnn.add(keras.layers.Conv2D(32, 5, input_shape=(28, 28, 1)))  # convolutional layer: 5x5 kernel, 32 channels\n",
"    cnn.add(keras.layers.Activation('relu'))\n",
"    cnn.add(keras.layers.MaxPooling2D())  # 2x2 max pooling\n",
"    cnn.add(keras.layers.Conv2D(64, 5))  # convolutional layer: 5x5 kernel, 64 channels\n",
"    cnn.add(keras.layers.Activation('relu'))\n",
"    cnn.add(keras.layers.MaxPooling2D())  # 2x2 max pooling\n",
"    cnn.add(keras.layers.Flatten())\n",
"    cnn.add(keras.layers.Dense(1024))  # fully connected layer, 1024 neurons\n",
"    cnn.add(keras.layers.Activation('relu'))\n",
"    cnn.add(keras.layers.Dense(K))  # output layer\n",
"    cnn.add(keras.layers.Activation('softmax'))\n",
"\n",
"    # optimizer: Adam\n",
"    optimizer = keras.optimizers.Adam(lr=0.001)\n",
"\n",
"    cnn.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])\n",
"\n",
"    return cnn"
],
"execution_count": null,
"outputs": []
},
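{
"cell_type": "markdown",
"metadata": {},
"source": [
"The shapes inside `makeCNN` can be checked by hand: a 5x5 convolution with the default 'valid' padding shrinks each spatial dimension by 4, and a 2x2 max pooling halves it. A sketch of that arithmetic for the 28x28 input:\n",
"\n",
"```python\n",
"size = 28\n",
"size = size - 5 + 1  # Conv2D(32, 5), valid padding -> 24\n",
"size = size // 2     # MaxPooling2D 2x2 -> 12\n",
"size = size - 5 + 1  # Conv2D(64, 5) -> 8\n",
"size = size // 2     # MaxPooling2D 2x2 -> 4\n",
"flattened = size * size * 64  # Flatten outputs 4 * 4 * 64 features\n",
"assert flattened == 1024\n",
"```"
]
},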
{
"cell_type": "code",
"metadata": {
"id": "UgYGSZgMWBQa",
"colab_type": "code",
"colab": {}
},
"source": [
"hoge  # this deliberately raises an error so the cell is not run by accident; delete this line when doing the bonus exercise\n",
"\n",
"### build the CNN\n",
"#\n",
"cnn = makeCNN(K)\n",
"\n",
"### training\n",
"#\n",
"batchsize = 128\n",
"XL = (datL / 255).reshape((-1, mn.nrow, mn.ncol))[:, :, :, np.newaxis]\n",
"YL = keras.utils.to_categorical(labL, num_classes=K)\n",
"\n",
"print('# iteration  loss  error rate [%]')\n",
"start = datetime.datetime.now()\n",
"\n",
"for i in range(10000+1):  # training loop\n",
"\n",
"    if (i < 500 and i % 100 == 0) or (i % 500 == 0):\n",
"        lossL, accL = cnn.evaluate(XL, YL, batch_size=batchsize, verbose=0)  # evaluation\n",
"        print('{} {:.3f} {:.2f}'.format(i, lossL, (1-accL)*100))\n",
"\n",
"    ii = np.random.randint(XL.shape[0], size=batchsize)\n",
"    cnn.train_on_batch(XL[ii], YL[ii])  # one training step\n",
"\n",
"print('# elapsed time:', datetime.datetime.now() - start)\n",
"\n",
"lossL, accL = cnn.evaluate(XL, YL, batch_size=batchsize, verbose=0)\n",
"print('# loss and error rate [%] on the training data: {:.3f} {:.2f}'.format(lossL, (1-accL)*100))\n",
"\n",
"### test\n",
"#\n",
"XT = (datT / 255).reshape((-1, mn.nrow, mn.ncol))[:, :, :, np.newaxis]\n",
"YT = keras.utils.to_categorical(labT, num_classes=K)\n",
"lossT, accT = cnn.evaluate(XT, YT, batch_size=batchsize, verbose=0)\n",
"print('# loss and error rate [%] on the test data: {:.3f} {:.2f}'.format(lossT, (1-accT)*100))"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "xYxl7KI0b-tg",
"colab_type": "text"
},
"source": [
"### ★★★ (Bonus exercise) Answer the following questions ★★★\n",
"\n",
"Send your answers for this exercise directly to takataka via Teams chat.\n",
"\n",
"(1) Look up GPU (Graphics Processing Unit) and GPGPU (General-Purpose computing on GPU). Briefly summarize what each of them is.\n",
"\n",
"(2) In the cell for the 3-layer MLP experiment, changing\n",
"```\n",
"network = makeNetwork(3, D, K)\n",
"```\n",
"to\n",
"```\n",
"network = makeNetwork(3, D, K, H1=1024)\n",
"```\n",
"gives a 3-layer MLP with 1024-128-10 neurons. Run the experiment under this condition (training on the CPU) and report the elapsed time. It will probably take three to four minutes.\n",
"\n",
"(3) Google Colaboratory can be configured to use a GPU. The deep learning library used in this exercise supports GPUs and uses one automatically when available. So, try the following:\n",
"\n",
"1. In the menu at the top of the page, open 'Runtime' > 'Change runtime type', change 'Hardware accelerator' from 'None' to 'GPU', and save.\n",
"1. Re-run all the cells.\n",
"1. Redo the 3-layer MLP experiment and record the elapsed time.\n",
"1. Report that elapsed time together with the CPU time from the experiment in (2).\n",
"\n",
"\n",
"(4) Run the CNN (Convolutional Neural Network) experiment above using the GPU and report the results. Discuss how they differ from those of logistic regression and the 2- and 3-layer MLPs. The CNN also runs without a GPU (CPU only), but very slowly; unless you really want to waste your youth, there is no need to try the CPU-only version.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "IsAJTYuj1NJa",
"colab_type": "code",
"colab": {}
},
"source": [
""
],
"execution_count": null,
"outputs": []
}
]
}