@tsu-nera
Created August 30, 2018 20:22
motewan.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "motewan.ipynb",
"version": "0.3.2",
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"[View in Colaboratory](https://colab.research.google.com/gist/tsu-nera/b86d6c188feb02885bbabd8bbad78c5b/motewan.ipynb)"
]
},
{
"metadata": {
"id": "K9jpWXvl7ePt",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# tfjs-motewan-study\n",
"\n"
]
},
{
"metadata": {
"id": "0nfYwF2C7sFf",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"- https://github.com/HCIILAB/SCUT-FBP5500-Database-Release"
]
},
{
"metadata": {
"id": "67j3NQLh50Cw",
"colab_type": "code",
"colab": {
"resources": {
"http://localhost:8080/nbextensions/google.colab/files.js": {
"data": "Ly8gQ29weXJpZ2h0IDIwMTcgR29vZ2xlIExMQwovLwovLyBMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxpY2Vuc2UiKTsKLy8geW91IG1heSBub3QgdXNlIHRoaXMgZmlsZSBleGNlcHQgaW4gY29tcGxpYW5jZSB3aXRoIHRoZSBMaWNlbnNlLgovLyBZb3UgbWF5IG9idGFpbiBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKLy8KLy8gICAgICBodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKLy8KLy8gVW5sZXNzIHJlcXVpcmVkIGJ5IGFwcGxpY2FibGUgbGF3IG9yIGFncmVlZCB0byBpbiB3cml0aW5nLCBzb2Z0d2FyZQovLyBkaXN0cmlidXRlZCB1bmRlciB0aGUgTGljZW5zZSBpcyBkaXN0cmlidXRlZCBvbiBhbiAiQVMgSVMiIEJBU0lTLAovLyBXSVRIT1VUIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4KLy8gU2VlIHRoZSBMaWNlbnNlIGZvciB0aGUgc3BlY2lmaWMgbGFuZ3VhZ2UgZ292ZXJuaW5nIHBlcm1pc3Npb25zIGFuZAovLyBsaW1pdGF0aW9ucyB1bmRlciB0aGUgTGljZW5zZS4KCi8qKgogKiBAZmlsZW92ZXJ2aWV3IEhlbHBlcnMgZm9yIGdvb2dsZS5jb2xhYiBQeXRob24gbW9kdWxlLgogKi8KKGZ1bmN0aW9uKHNjb3BlKSB7CmZ1bmN0aW9uIHNwYW4odGV4dCwgc3R5bGVBdHRyaWJ1dGVzID0ge30pIHsKICBjb25zdCBlbGVtZW50ID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnc3BhbicpOwogIGVsZW1lbnQudGV4dENvbnRlbnQgPSB0ZXh0OwogIGZvciAoY29uc3Qga2V5IG9mIE9iamVjdC5rZXlzKHN0eWxlQXR0cmlidXRlcykpIHsKICAgIGVsZW1lbnQuc3R5bGVba2V5XSA9IHN0eWxlQXR0cmlidXRlc1trZXldOwogIH0KICByZXR1cm4gZWxlbWVudDsKfQoKLy8gTWF4IG51bWJlciBvZiBieXRlcyB3aGljaCB3aWxsIGJlIHVwbG9hZGVkIGF0IGEgdGltZS4KY29uc3QgTUFYX1BBWUxPQURfU0laRSA9IDEwMCAqIDEwMjQ7Ci8vIE1heCBhbW91bnQgb2YgdGltZSB0byBibG9jayB3YWl0aW5nIGZvciB0aGUgdXNlci4KY29uc3QgRklMRV9DSEFOR0VfVElNRU9VVF9NUyA9IDMwICogMTAwMDsKCmZ1bmN0aW9uIF91cGxvYWRGaWxlcyhpbnB1dElkLCBvdXRwdXRJZCkgewogIGNvbnN0IHN0ZXBzID0gdXBsb2FkRmlsZXNTdGVwKGlucHV0SWQsIG91dHB1dElkKTsKICBjb25zdCBvdXRwdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIC8vIENhY2hlIHN0ZXBzIG9uIHRoZSBvdXRwdXRFbGVtZW50IHRvIG1ha2UgaXQgYXZhaWxhYmxlIGZvciB0aGUgbmV4dCBjYWxsCiAgLy8gdG8gdXBsb2FkRmlsZXNDb250aW51ZSBmcm9tIFB5dGhvbi4KICBvdXRwdXRFbGVtZW50LnN0ZXBzID0gc3RlcHM7CgogIHJldHVybiBfdXBsb2FkRmlsZXNDb250aW51ZShvdXRwdXRJZCk7Cn0KCi8vIFRoaXMgaXMgcm91Z2hseSBhbiBhc3luYyBnZW5lcmF
0b3IgKG5vdCBzdXBwb3J0ZWQgaW4gdGhlIGJyb3dzZXIgeWV0KSwKLy8gd2hlcmUgdGhlcmUgYXJlIG11bHRpcGxlIGFzeW5jaHJvbm91cyBzdGVwcyBhbmQgdGhlIFB5dGhvbiBzaWRlIGlzIGdvaW5nCi8vIHRvIHBvbGwgZm9yIGNvbXBsZXRpb24gb2YgZWFjaCBzdGVwLgovLyBUaGlzIHVzZXMgYSBQcm9taXNlIHRvIGJsb2NrIHRoZSBweXRob24gc2lkZSBvbiBjb21wbGV0aW9uIG9mIGVhY2ggc3RlcCwKLy8gdGhlbiBwYXNzZXMgdGhlIHJlc3VsdCBvZiB0aGUgcHJldmlvdXMgc3RlcCBhcyB0aGUgaW5wdXQgdG8gdGhlIG5leHQgc3RlcC4KZnVuY3Rpb24gX3VwbG9hZEZpbGVzQ29udGludWUob3V0cHV0SWQpIHsKICBjb25zdCBvdXRwdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIGNvbnN0IHN0ZXBzID0gb3V0cHV0RWxlbWVudC5zdGVwczsKCiAgY29uc3QgbmV4dCA9IHN0ZXBzLm5leHQob3V0cHV0RWxlbWVudC5sYXN0UHJvbWlzZVZhbHVlKTsKICByZXR1cm4gUHJvbWlzZS5yZXNvbHZlKG5leHQudmFsdWUucHJvbWlzZSkudGhlbigodmFsdWUpID0+IHsKICAgIC8vIENhY2hlIHRoZSBsYXN0IHByb21pc2UgdmFsdWUgdG8gbWFrZSBpdCBhdmFpbGFibGUgdG8gdGhlIG5leHQKICAgIC8vIHN0ZXAgb2YgdGhlIGdlbmVyYXRvci4KICAgIG91dHB1dEVsZW1lbnQubGFzdFByb21pc2VWYWx1ZSA9IHZhbHVlOwogICAgcmV0dXJuIG5leHQudmFsdWUucmVzcG9uc2U7CiAgfSk7Cn0KCi8qKgogKiBHZW5lcmF0b3IgZnVuY3Rpb24gd2hpY2ggaXMgY2FsbGVkIGJldHdlZW4gZWFjaCBhc3luYyBzdGVwIG9mIHRoZSB1cGxvYWQKICogcHJvY2Vzcy4KICogQHBhcmFtIHtzdHJpbmd9IGlucHV0SWQgRWxlbWVudCBJRCBvZiB0aGUgaW5wdXQgZmlsZSBwaWNrZXIgZWxlbWVudC4KICogQHBhcmFtIHtzdHJpbmd9IG91dHB1dElkIEVsZW1lbnQgSUQgb2YgdGhlIG91dHB1dCBkaXNwbGF5LgogKiBAcmV0dXJuIHshSXRlcmFibGU8IU9iamVjdD59IEl0ZXJhYmxlIG9mIG5leHQgc3RlcHMuCiAqLwpmdW5jdGlvbiogdXBsb2FkRmlsZXNTdGVwKGlucHV0SWQsIG91dHB1dElkKSB7CiAgY29uc3QgaW5wdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQoaW5wdXRJZCk7CiAgaW5wdXRFbGVtZW50LmRpc2FibGVkID0gZmFsc2U7CgogIGNvbnN0IG91dHB1dEVsZW1lbnQgPSBkb2N1bWVudC5nZXRFbGVtZW50QnlJZChvdXRwdXRJZCk7CiAgb3V0cHV0RWxlbWVudC5pbm5lckhUTUwgPSAnJzsKCiAgY29uc3QgcGlja2VkUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICBpbnB1dEVsZW1lbnQuYWRkRXZlbnRMaXN0ZW5lcignY2hhbmdlJywgKGUpID0+IHsKICAgICAgcmVzb2x2ZShlLnRhcmdldC5maWxlcyk7CiAgICB9KTsKICB9KTsKCiAgY29uc3QgY2FuY2VsID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnYnV0dG9uJyk7CiAgaW5wdXRFbGVtZW50LnBhcmVudEVsZW1lbnQ
uYXBwZW5kQ2hpbGQoY2FuY2VsKTsKICBjYW5jZWwudGV4dENvbnRlbnQgPSAnQ2FuY2VsIHVwbG9hZCc7CiAgY29uc3QgY2FuY2VsUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICBjYW5jZWwub25jbGljayA9ICgpID0+IHsKICAgICAgcmVzb2x2ZShudWxsKTsKICAgIH07CiAgfSk7CgogIC8vIENhbmNlbCB1cGxvYWQgaWYgdXNlciBoYXNuJ3QgcGlja2VkIGFueXRoaW5nIGluIHRpbWVvdXQuCiAgY29uc3QgdGltZW91dFByb21pc2UgPSBuZXcgUHJvbWlzZSgocmVzb2x2ZSkgPT4gewogICAgc2V0VGltZW91dCgoKSA9PiB7CiAgICAgIHJlc29sdmUobnVsbCk7CiAgICB9LCBGSUxFX0NIQU5HRV9USU1FT1VUX01TKTsKICB9KTsKCiAgLy8gV2FpdCBmb3IgdGhlIHVzZXIgdG8gcGljayB0aGUgZmlsZXMuCiAgY29uc3QgZmlsZXMgPSB5aWVsZCB7CiAgICBwcm9taXNlOiBQcm9taXNlLnJhY2UoW3BpY2tlZFByb21pc2UsIHRpbWVvdXRQcm9taXNlLCBjYW5jZWxQcm9taXNlXSksCiAgICByZXNwb25zZTogewogICAgICBhY3Rpb246ICdzdGFydGluZycsCiAgICB9CiAgfTsKCiAgaWYgKCFmaWxlcykgewogICAgcmV0dXJuIHsKICAgICAgcmVzcG9uc2U6IHsKICAgICAgICBhY3Rpb246ICdjb21wbGV0ZScsCiAgICAgIH0KICAgIH07CiAgfQoKICBjYW5jZWwucmVtb3ZlKCk7CgogIC8vIERpc2FibGUgdGhlIGlucHV0IGVsZW1lbnQgc2luY2UgZnVydGhlciBwaWNrcyBhcmUgbm90IGFsbG93ZWQuCiAgaW5wdXRFbGVtZW50LmRpc2FibGVkID0gdHJ1ZTsKCiAgZm9yIChjb25zdCBmaWxlIG9mIGZpbGVzKSB7CiAgICBjb25zdCBsaSA9IGRvY3VtZW50LmNyZWF0ZUVsZW1lbnQoJ2xpJyk7CiAgICBsaS5hcHBlbmQoc3BhbihmaWxlLm5hbWUsIHtmb250V2VpZ2h0OiAnYm9sZCd9KSk7CiAgICBsaS5hcHBlbmQoc3BhbigKICAgICAgICBgKCR7ZmlsZS50eXBlIHx8ICduL2EnfSkgLSAke2ZpbGUuc2l6ZX0gYnl0ZXMsIGAgKwogICAgICAgIGBsYXN0IG1vZGlmaWVkOiAkewogICAgICAgICAgICBmaWxlLmxhc3RNb2RpZmllZERhdGUgPyBmaWxlLmxhc3RNb2RpZmllZERhdGUudG9Mb2NhbGVEYXRlU3RyaW5nKCkgOgogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAnbi9hJ30gLSBgKSk7CiAgICBjb25zdCBwZXJjZW50ID0gc3BhbignMCUgZG9uZScpOwogICAgbGkuYXBwZW5kQ2hpbGQocGVyY2VudCk7CgogICAgb3V0cHV0RWxlbWVudC5hcHBlbmRDaGlsZChsaSk7CgogICAgY29uc3QgZmlsZURhdGFQcm9taXNlID0gbmV3IFByb21pc2UoKHJlc29sdmUpID0+IHsKICAgICAgY29uc3QgcmVhZGVyID0gbmV3IEZpbGVSZWFkZXIoKTsKICAgICAgcmVhZGVyLm9ubG9hZCA9IChlKSA9PiB7CiAgICAgICAgcmVzb2x2ZShlLnRhcmdldC5yZXN1bHQpOwogICAgICB9OwogICAgICByZWFkZXIucmVhZEFzQXJyYXlCdWZmZXIoZmlsZSk7CiAgICB9KTsKICAgIC8vIFdhaXQgZm9yIHRoZSBkYXRhIHRvIGJlIHJ
lYWR5LgogICAgbGV0IGZpbGVEYXRhID0geWllbGQgewogICAgICBwcm9taXNlOiBmaWxlRGF0YVByb21pc2UsCiAgICAgIHJlc3BvbnNlOiB7CiAgICAgICAgYWN0aW9uOiAnY29udGludWUnLAogICAgICB9CiAgICB9OwoKICAgIC8vIFVzZSBhIGNodW5rZWQgc2VuZGluZyB0byBhdm9pZCBtZXNzYWdlIHNpemUgbGltaXRzLiBTZWUgYi82MjExNTY2MC4KICAgIGxldCBwb3NpdGlvbiA9IDA7CiAgICB3aGlsZSAocG9zaXRpb24gPCBmaWxlRGF0YS5ieXRlTGVuZ3RoKSB7CiAgICAgIGNvbnN0IGxlbmd0aCA9IE1hdGgubWluKGZpbGVEYXRhLmJ5dGVMZW5ndGggLSBwb3NpdGlvbiwgTUFYX1BBWUxPQURfU0laRSk7CiAgICAgIGNvbnN0IGNodW5rID0gbmV3IFVpbnQ4QXJyYXkoZmlsZURhdGEsIHBvc2l0aW9uLCBsZW5ndGgpOwogICAgICBwb3NpdGlvbiArPSBsZW5ndGg7CgogICAgICBjb25zdCBiYXNlNjQgPSBidG9hKFN0cmluZy5mcm9tQ2hhckNvZGUuYXBwbHkobnVsbCwgY2h1bmspKTsKICAgICAgeWllbGQgewogICAgICAgIHJlc3BvbnNlOiB7CiAgICAgICAgICBhY3Rpb246ICdhcHBlbmQnLAogICAgICAgICAgZmlsZTogZmlsZS5uYW1lLAogICAgICAgICAgZGF0YTogYmFzZTY0LAogICAgICAgIH0sCiAgICAgIH07CiAgICAgIHBlcmNlbnQudGV4dENvbnRlbnQgPQogICAgICAgICAgYCR7TWF0aC5yb3VuZCgocG9zaXRpb24gLyBmaWxlRGF0YS5ieXRlTGVuZ3RoKSAqIDEwMCl9JSBkb25lYDsKICAgIH0KICB9CgogIC8vIEFsbCBkb25lLgogIHlpZWxkIHsKICAgIHJlc3BvbnNlOiB7CiAgICAgIGFjdGlvbjogJ2NvbXBsZXRlJywKICAgIH0KICB9Owp9CgpzY29wZS5nb29nbGUgPSBzY29wZS5nb29nbGUgfHwge307CnNjb3BlLmdvb2dsZS5jb2xhYiA9IHNjb3BlLmdvb2dsZS5jb2xhYiB8fCB7fTsKc2NvcGUuZ29vZ2xlLmNvbGFiLl9maWxlcyA9IHsKICBfdXBsb2FkRmlsZXMsCiAgX3VwbG9hZEZpbGVzQ29udGludWUsCn07Cn0pKHNlbGYpOwo=",
"ok": true,
"headers": [
[
"content-type",
"application/javascript"
]
],
"status": 200,
"status_text": ""
}
},
"base_uri": "https://localhost:8080/",
"height": 72
},
"outputId": "463cf68b-d4fc-4172-ccad-8bc3ac3c8d85"
},
"cell_type": "code",
"source": [
"from google.colab import files\n",
"uploaded = files.upload()"
],
"execution_count": 26,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <input type=\"file\" id=\"files-b41af0e6-96d2-4bdd-b9db-6a9c09e9dfe4\" name=\"files[]\" multiple disabled />\n",
" <output id=\"result-b41af0e6-96d2-4bdd-b9db-6a9c09e9dfe4\">\n",
" Upload widget is only available when the cell has been executed in the\n",
" current browser session. Please rerun this cell to enable.\n",
" </output>\n",
" <script src=\"/nbextensions/google.colab/files.js\"></script> "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {
"tags": []
}
},
{
"output_type": "stream",
"text": [
"Saving SCUT-FBP5500_v2.1.zip to SCUT-FBP5500_v2.1.zip\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "QrFFmuVwGvYz",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"!unzip SCUT-FBP5500_v2.1.zip"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "UlpYw7x5H6Tz",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 66
},
"outputId": "fd78087a-e32d-444e-d3db-d88ef75a87c1"
},
"cell_type": "code",
"source": [
"!ls"
],
"execution_count": 78,
"outputs": [
{
"output_type": "stream",
"text": [
"All_Ratings.xlsx Images\t model.zip train_test_files\r\n",
"data\t\t Images_Sources.xlsx README.txt train_test_files.zip\r\n",
"facial landmark model\t\t small_last4.h5\r\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "CREe6mg-Hiqe",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"import pandas as pd\n",
"ratings=pd.read_table(f\"/content/SCUT-FBP5500_v2/train_test_files/All_labels.txt\", delimiter=\" \", names=[\"image\", \"rating\"])"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "usaKgVJ5IJdF",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"ratings[\"race\"] = ratings[\"image\"].apply(lambda x: \"Asian\" if x[0]==\"A\" else \"Caucasian\")\n",
"ratings[\"gender\"] = ratings[\"image\"].apply(lambda x: \"Male\" if x[1]==\"M\" else \"Female\")\n",
"ratings[\"target\"] = ratings[\"rating\"]/5"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "42TrIX6mIRMX",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"ratings_M=ratings[ratings.gender==\"Male\"]"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "OK3ZgWAjIa29",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 197
},
"outputId": "26540632-7fd8-44bb-e668-34ea65d17c8b"
},
"cell_type": "code",
"source": [
"ratings_M.head()"
],
"execution_count": 33,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>image</th>\n",
" <th>rating</th>\n",
" <th>race</th>\n",
" <th>gender</th>\n",
" <th>target</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>AM1384.jpg</td>\n",
" <td>2.466667</td>\n",
" <td>Asian</td>\n",
" <td>Male</td>\n",
" <td>0.493333</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>AM1234.jpg</td>\n",
" <td>2.150000</td>\n",
" <td>Asian</td>\n",
" <td>Male</td>\n",
" <td>0.430000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>AM1774.jpg</td>\n",
" <td>3.750000</td>\n",
" <td>Asian</td>\n",
" <td>Male</td>\n",
" <td>0.750000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>AM704.jpg</td>\n",
" <td>2.483333</td>\n",
" <td>Asian</td>\n",
" <td>Male</td>\n",
" <td>0.496667</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>AM1172.jpg</td>\n",
" <td>2.266667</td>\n",
" <td>Asian</td>\n",
" <td>Male</td>\n",
" <td>0.453333</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" image rating race gender target\n",
"1 AM1384.jpg 2.466667 Asian Male 0.493333\n",
"2 AM1234.jpg 2.150000 Asian Male 0.430000\n",
"3 AM1774.jpg 3.750000 Asian Male 0.750000\n",
"6 AM704.jpg 2.483333 Asian Male 0.496667\n",
"7 AM1172.jpg 2.266667 Asian Male 0.453333"
]
},
"metadata": {
"tags": []
},
"execution_count": 33
}
]
},
{
"metadata": {
"id": "YsDS5HasIjfC",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 33
},
"outputId": "52c53815-252b-449f-f518-b81c26fe4259"
},
"cell_type": "code",
"source": [
"%cd SCUT-FBP5500_v2/"
],
"execution_count": 34,
"outputs": [
{
"output_type": "stream",
"text": [
"/content/SCUT-FBP5500_v2\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "yOh-5iC9JOfY",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"%mkdir -p data/train/Images\n",
"%mkdir -p data/val/Images\n",
"%mkdir -p Images/like\n",
"%mkdir -p Images/nope\n",
"%mkdir -p data/train/Images/like\n",
"%mkdir -p data/train/Images/nope\n",
"%mkdir -p data/val/Images/like\n",
"%mkdir -p data/val/Images/nope"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "cHz8N-JII3Wx",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"import glob\n",
"import numpy as np\n",
"import shutil\n",
"g = glob.glob('Images/?M*.jpg')\n",
"shuf = np.random.permutation(g)\n",
"for i in range(2750):\n",
" if i<2000:\n",
" shutil.copy(shuf[i], 'data/train/' + shuf[i])\n",
" else:\n",
" shutil.copy(shuf[i], 'data/val/' + shuf[i])"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "6fVUwO1gJGPa",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"for row in ratings_M.iterrows():\n",
" if row[1][4] > 0.6:\n",
" shutil.move(\"Images/\"+row[1][0], \"Images/like/\"+row[1][0])\n",
" else:\n",
" shutil.move(\"Images/\"+row[1][0], \"Images/nope/\"+row[1][0])"
],
"execution_count": 0,
"outputs": []
},
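{
"metadata": {
"id": "added_count_split",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# (Added sketch, not in the original gist.) Sanity-check the like/nope\n",
"# split above by counting the files moved into each class directory;\n",
"# assumes the move cell has just been run.\n",
"import glob\n",
"print(len(glob.glob('Images/like/*.jpg')), len(glob.glob('Images/nope/*.jpg')))"
],
"execution_count": 0,
"outputs": []
},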
{
"metadata": {
"id": "AdzzR_U7Jqf-",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"import glob\n",
"g = glob.glob('Images/like/?M*.jpg')\n",
"shuf = np.random.permutation(g)\n",
"for i in range(877):\n",
" if i>877/5:\n",
" shutil.copy(shuf[i], 'data/train/' + shuf[i])\n",
" else:\n",
" shutil.copy(shuf[i], 'data/val/' + shuf[i])"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "gNOJBntTLb-5",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"import glob\n",
"g = glob.glob('Images/nope/?M*.jpg')\n",
"shuf = np.random.permutation(g)\n",
"for i in range(1873):\n",
" if i>1873/5:\n",
" shutil.copy(shuf[i], 'data/train/' + shuf[i])\n",
" else:\n",
" shutil.copy(shuf[i], 'data/val/' + shuf[i])"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "LYUgx1-wOq_F",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"## build model"
]
},
{
"metadata": {
"id": "-Wa9B3WQnHbB",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 316
},
"outputId": "e688e5ff-cd49-43b0-baa1-fb90805fba49"
},
"cell_type": "code",
"source": [
"!pip install keras==2.2.2"
],
"execution_count": 45,
"outputs": [
{
"output_type": "stream",
"text": [
"Collecting keras==2.2.2\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/34/7d/b1dedde8af99bd82f20ed7e9697aac0597de3049b1f786aa2aac3b9bd4da/Keras-2.2.2-py2.py3-none-any.whl (299kB)\n",
"\u001b[K 100% |████████████████████████████████| 307kB 15.7MB/s \n",
"\u001b[?25hRequirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras==2.2.2) (2.8.0)\n",
"Requirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from keras==2.2.2) (1.11.0)\n",
"Requirement already satisfied: numpy>=1.9.1 in /usr/local/lib/python3.6/dist-packages (from keras==2.2.2) (1.14.5)\n",
"Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from keras==2.2.2) (3.13)\n",
"Requirement already satisfied: scipy>=0.14 in /usr/local/lib/python3.6/dist-packages (from keras==2.2.2) (0.19.1)\n",
"Collecting keras-preprocessing==1.0.2 (from keras==2.2.2)\n",
" Downloading https://files.pythonhosted.org/packages/71/26/1e778ebd737032749824d5cba7dbd3b0cf9234b87ab5ec79f5f0403ca7e9/Keras_Preprocessing-1.0.2-py2.py3-none-any.whl\n",
"Collecting keras-applications==1.0.4 (from keras==2.2.2)\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/54/90/8f327deaa37a71caddb59b7b4aaa9d4b3e90c0e76f8c2d1572005278ddc5/Keras_Applications-1.0.4-py2.py3-none-any.whl (43kB)\n",
"\u001b[K 100% |████████████████████████████████| 51kB 23.5MB/s \n",
"\u001b[?25hInstalling collected packages: keras-preprocessing, keras-applications, keras\n",
" Found existing installation: Keras 2.1.6\n",
" Uninstalling Keras-2.1.6:\n",
" Successfully uninstalled Keras-2.1.6\n",
"Successfully installed keras-2.2.2 keras-applications-1.0.4 keras-preprocessing-1.0.2\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "gKhgRtTKO0Qz",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 50
},
"outputId": "f09a053e-d02e-4b98-c62c-946e779f7a42"
},
"cell_type": "code",
"source": [
"import keras\n",
"mobile_conv = keras.applications.mobilenet.MobileNet(weights='imagenet', include_top=False, input_shape=(160, 160, 3))"
],
"execution_count": 52,
"outputs": [
{
"output_type": "stream",
"text": [
"Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.6/mobilenet_1_0_160_tf_no_top.h5\n",
"17227776/17225924 [==============================] - 4s 0us/step\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "pmc_-O5wQdpD",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1616
},
"outputId": "5a2d0efa-cee1-4c75-fa91-30889e7d762f"
},
"cell_type": "code",
"source": [
"# Freeze the layers except the last 4 layers\n",
"for layer in mobile_conv.layers[:-4]:\n",
" layer.trainable = False\n",
" \n",
"# Check the trainable status of the individual layers\n",
"for layer in mobile_conv.layers:\n",
" print(layer, layer.trainable)"
],
"execution_count": 53,
"outputs": [
{
"output_type": "stream",
"text": [
"<keras.engine.topology.InputLayer object at 0x7fec820c9b38> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec8212d6d8> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec8212d630> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec8212dac8> False\n",
"<keras.layers.core.Activation object at 0x7fec8212df98> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7feceb114908> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7feceb114978> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec68001828> False\n",
"<keras.layers.core.Activation object at 0x7fec68010f60> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec67743e48> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7feceb1bb4e0> False\n",
"<keras.layers.core.Activation object at 0x7fec8212d0b8> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec8215ba58> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec67699f28> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec82128e48> False\n",
"<keras.layers.core.Activation object at 0x7fec676ba7f0> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec67625eb8> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec675bc128> False\n",
"<keras.layers.core.Activation object at 0x7fec67562e80> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec67516c50> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec6749aeb8> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec67516cc0> False\n",
"<keras.layers.core.Activation object at 0x7fec67462a58> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec6740a780> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec6740a0b8> False\n",
"<keras.layers.core.Activation object at 0x7fec673eb630> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec67355c18> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec672f4048> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec67355470> False\n",
"<keras.layers.core.Activation object at 0x7fec672dc358> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec6724cd30> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec6724c668> False\n",
"<keras.layers.core.Activation object at 0x7fec6722c470> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec6718fe80> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec6713e940> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec67164d30> False\n",
"<keras.layers.core.Activation object at 0x7fec671247b8> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec6708ff28> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec67029ba8> False\n",
"<keras.layers.core.Activation object at 0x7fec66fd3fd0> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec66f87da0> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec66f87ba8> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec66fad940> False\n",
"<keras.layers.core.Activation object at 0x7fec66ec6eb8> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec66ef68d0> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec66e77da0> False\n",
"<keras.layers.core.Activation object at 0x7fec66e594a8> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec66deb358> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec66d670f0> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec66dc9d68> False\n",
"<keras.layers.core.Activation object at 0x7fec66d504a8> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec66cc1ef0> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec66cc17b8> False\n",
"<keras.layers.core.Activation object at 0x7fec66c7cb70> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec66c2d940> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec66bb2eb8> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec66c2ddd8> False\n",
"<keras.layers.core.Activation object at 0x7fec66b01d68> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec66b2a470> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec66a9ec88> False\n",
"<keras.layers.core.Activation object at 0x7fec66a4afd0> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec66a79cc0> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec669a2048> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec66a79e10> False\n",
"<keras.layers.core.Activation object at 0x7fec6693feb8> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec6696ea20> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec668f0e80> False\n",
"<keras.layers.core.Activation object at 0x7fec668d1748> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec668614a8> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec66821a20> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec668bba20> False\n",
"<keras.layers.core.Activation object at 0x7fec667c7048> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec667b6cf8> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec666d0048> False\n",
"<keras.layers.core.Activation object at 0x7fec666f3cc0> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec666a6a90> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec66628d68> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec666a6f60> False\n",
"<keras.layers.core.Activation object at 0x7fec665f8e10> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec6659e5c0> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec6659e128> False\n",
"<keras.layers.core.Activation object at 0x7fec664fe780> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec664f12b0> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec664f1e48> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec664ae2b0> False\n",
"<keras.layers.core.Activation object at 0x7fec664761d0> False\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec663e3b70> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec663e34a8> False\n",
"<keras.layers.core.Activation object at 0x7fec66347e10> False\n",
"<keras.layers.convolutional.ZeroPadding2D object at 0x7fec662d85f8> False\n",
"<keras.layers.convolutional.DepthwiseConv2D object at 0x7fec66331860> False\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec662cc240> False\n",
"<keras.layers.core.Activation object at 0x7fec66227e10> True\n",
"<keras.layers.convolutional.Conv2D object at 0x7fec661ce518> True\n",
"<keras.layers.normalization.BatchNormalization object at 0x7fec6618acc0> True\n",
"<keras.layers.core.Activation object at 0x7fec661adc18> True\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "BjoB5iEkQy52",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 300
},
"outputId": "c38fe1bd-8322-40d0-b904-3dcbc4a278f3"
},
"cell_type": "code",
"source": [
"from keras import models\n",
"from keras import layers\n",
"from keras import optimizers\n",
" \n",
"# Create the model\n",
"model = models.Sequential()\n",
" \n",
"# Add the vgg convolutional base model\n",
"model.add(mobile_conv)\n",
" \n",
"# Add new layers\n",
"model.add(layers.Flatten())\n",
"model.add(layers.Dense(1024, activation='relu'))\n",
"model.add(layers.Dropout(0.25))\n",
"model.add(layers.Dense(2, activation='softmax'))\n",
" \n",
"# Show a summary of the model. Check the number of trainable parameters\n",
"model.summary()"
],
"execution_count": 98,
"outputs": [
{
"output_type": "stream",
"text": [
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"mobilenet_1.00_160 (Model) (None, 5, 5, 1024) 3228864 \n",
"_________________________________________________________________\n",
"flatten_7 (Flatten) (None, 25600) 0 \n",
"_________________________________________________________________\n",
"dense_13 (Dense) (None, 1024) 26215424 \n",
"_________________________________________________________________\n",
"dropout_7 (Dropout) (None, 1024) 0 \n",
"_________________________________________________________________\n",
"dense_14 (Dense) (None, 2) 2050 \n",
"=================================================================\n",
"Total params: 29,446,338\n",
"Trainable params: 27,268,098\n",
"Non-trainable params: 2,178,240\n",
"_________________________________________________________________\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "XuhNoQBORcd2",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 50
},
"outputId": "ef291b48-6171-48e9-9f1e-415f34d9bd80"
},
"cell_type": "code",
"source": [
"from keras.preprocessing.image import ImageDataGenerator\n",
"train_datagen = ImageDataGenerator(\n",
" rescale=1./255)\n",
" \n",
"validation_datagen = ImageDataGenerator(rescale=1./255)\n",
" \n",
"# Change the batchsize according to your system RAM\n",
"train_batchsize = 100\n",
"val_batchsize = 10\n",
"\n",
"train_dir = \"/content/SCUT-FBP5500_v2/data/train/Images\"\n",
"validation_dir = \"/content/SCUT-FBP5500_v2/data/val/Images\"\n",
"image_size=160\n",
"\n",
"train_generator = train_datagen.flow_from_directory(\n",
" train_dir,\n",
" target_size=(image_size, image_size),\n",
" batch_size=train_batchsize,\n",
" class_mode='categorical')\n",
" \n",
"validation_generator = validation_datagen.flow_from_directory(\n",
" validation_dir,\n",
" target_size=(image_size, image_size),\n",
" batch_size=val_batchsize,\n",
" class_mode='categorical',\n",
" shuffle=False)"
],
"execution_count": 99,
"outputs": [
{
"output_type": "stream",
"text": [
"Found 2199 images belonging to 2 classes.\n",
"Found 551 images belonging to 2 classes.\n"
],
"name": "stdout"
}
]
},
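{
"metadata": {
"id": "added_class_indices",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# (Added sketch, not in the original gist.) Inspect the label-to-index\n",
"# mapping that flow_from_directory inferred from the like/nope directory\n",
"# names; the 2-unit softmax output order follows this mapping.\n",
"print(train_generator.class_indices)"
],
"execution_count": 0,
"outputs": []
},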
{
"metadata": {
"id": "K3QMr6iTS0ZN",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 683
},
"outputId": "569554d7-e188-4697-a60a-315dd332afe3"
},
"cell_type": "code",
"source": [
"# Compile the model\n",
"sgd = optimizers.SGD(lr=0.0001)\n",
"model.compile(loss='binary_crossentropy',\n",
" optimizer=sgd,\n",
" metrics=['accuracy'])\n",
"# Train the model\n",
"history = model.fit_generator(\n",
" train_generator,\n",
" steps_per_epoch=train_generator.samples/train_generator.batch_size ,\n",
" epochs=20,\n",
" validation_data=validation_generator,\n",
" validation_steps=validation_generator.samples/validation_generator.batch_size,\n",
" verbose=1)\n",
" \n",
"# Save the model\n",
"model.save('small_last4.h5')"
],
"execution_count": 100,
"outputs": [
{
"output_type": "stream",
"text": [
"Epoch 1/20\n",
"22/21 [==============================] - 16s 714ms/step - loss: 1.2249 - acc: 0.5553 - val_loss: 0.7168 - val_acc: 0.6388\n",
"Epoch 2/20\n",
"22/21 [==============================] - 9s 391ms/step - loss: 0.9686 - acc: 0.6494 - val_loss: 0.6886 - val_acc: 0.6479\n",
"Epoch 3/20\n",
"22/21 [==============================] - 9s 413ms/step - loss: 0.7681 - acc: 0.7122 - val_loss: 0.6723 - val_acc: 0.6588\n",
"Epoch 4/20\n",
"22/21 [==============================] - 9s 425ms/step - loss: 0.6675 - acc: 0.7444 - val_loss: 0.6546 - val_acc: 0.6624\n",
"Epoch 5/20\n",
"22/21 [==============================] - 9s 396ms/step - loss: 0.6541 - acc: 0.7585 - val_loss: 0.6501 - val_acc: 0.6842\n",
"Epoch 6/20\n",
"22/21 [==============================] - 9s 413ms/step - loss: 0.6402 - acc: 0.7653 - val_loss: 0.6320 - val_acc: 0.6715\n",
"Epoch 7/20\n",
"22/21 [==============================] - 9s 404ms/step - loss: 0.6066 - acc: 0.7813 - val_loss: 0.6219 - val_acc: 0.6788\n",
"Epoch 8/20\n",
"22/21 [==============================] - 9s 405ms/step - loss: 0.5716 - acc: 0.7876 - val_loss: 0.6151 - val_acc: 0.6806\n",
"Epoch 9/20\n",
"22/21 [==============================] - 9s 402ms/step - loss: 0.5364 - acc: 0.7913 - val_loss: 0.6136 - val_acc: 0.6969\n",
"Epoch 10/20\n",
"22/21 [==============================] - 9s 403ms/step - loss: 0.4971 - acc: 0.8190 - val_loss: 0.6049 - val_acc: 0.7042\n",
"Epoch 11/20\n",
"22/21 [==============================] - 9s 405ms/step - loss: 0.5154 - acc: 0.8031 - val_loss: 0.6038 - val_acc: 0.7005\n",
"Epoch 12/20\n",
"22/21 [==============================] - 9s 405ms/step - loss: 0.5012 - acc: 0.8067 - val_loss: 0.5923 - val_acc: 0.7042\n",
"Epoch 13/20\n",
"22/21 [==============================] - 9s 403ms/step - loss: 0.4648 - acc: 0.8254 - val_loss: 0.5886 - val_acc: 0.7042\n",
"Epoch 14/20\n",
"22/21 [==============================] - 9s 414ms/step - loss: 0.4401 - acc: 0.8286 - val_loss: 0.5886 - val_acc: 0.7078\n",
"Epoch 15/20\n",
"22/21 [==============================] - 9s 406ms/step - loss: 0.4374 - acc: 0.8367 - val_loss: 0.5829 - val_acc: 0.7132\n",
"Epoch 16/20\n",
"22/21 [==============================] - 9s 403ms/step - loss: 0.4178 - acc: 0.8395 - val_loss: 0.5791 - val_acc: 0.7114\n",
"Epoch 17/20\n",
"22/21 [==============================] - 9s 410ms/step - loss: 0.4130 - acc: 0.8463 - val_loss: 0.5778 - val_acc: 0.7132\n",
"Epoch 18/20\n",
"22/21 [==============================] - 9s 399ms/step - loss: 0.4092 - acc: 0.8376 - val_loss: 0.5761 - val_acc: 0.7187\n",
"Epoch 19/20\n",
"22/21 [==============================] - 9s 404ms/step - loss: 0.3878 - acc: 0.8499 - val_loss: 0.5741 - val_acc: 0.7132\n",
"Epoch 20/20\n",
"22/21 [==============================] - 9s 410ms/step - loss: 0.3688 - acc: 0.8577 - val_loss: 0.5707 - val_acc: 0.7260\n"
],
"name": "stdout"
}
]
},
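{
"metadata": {
"id": "plot-history-sketch",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"A minimal sketch (not part of the original run) to visualize the `history` object returned by `fit_generator` above; it assumes `matplotlib` is available in the runtime:"
]
},
{
"metadata": {
"id": "plot-history-code",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"# history.history maps each metric name to a per-epoch list of values\n",
"for metric in ('loss', 'val_loss', 'acc', 'val_acc'):\n",
"    plt.plot(history.history[metric], label=metric)\n",
"plt.xlabel('epoch')\n",
"plt.legend()\n",
"plt.show()"
],
"execution_count": 0,
"outputs": []
},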
{
"metadata": {
"id": "TFK7GqZRMBrb",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 183
},
"outputId": "59e906c6-f24b-4f5d-fea9-a8754e019c81"
},
"cell_type": "code",
"source": [
"model.predict_proba(validation_generator.next()[0])"
],
"execution_count": 87,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([[0.18870194, 0.8112981 ],\n",
" [0.20778562, 0.79221433],\n",
" [0.35169077, 0.64830923],\n",
" [0.4806354 , 0.51936454],\n",
" [0.65494126, 0.3450587 ],\n",
" [0.12902428, 0.8709757 ],\n",
" [0.26034087, 0.73965913],\n",
" [0.59259474, 0.4074053 ],\n",
" [0.48254815, 0.5174518 ],\n",
" [0.03605931, 0.9639407 ]], dtype=float32)"
]
},
"metadata": {
"tags": []
},
"execution_count": 87
}
]
},
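{
"metadata": {
"id": "label-probs-sketch",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"The two columns returned above are per-class probabilities, ordered by `train_generator.class_indices`. A hedged sketch for mapping each row back to its class name:"
]
},
{
"metadata": {
"id": "label-probs-code",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# Invert the name -> index mapping built by flow_from_directory\n",
"labels = {v: k for k, v in train_generator.class_indices.items()}\n",
"probs = model.predict(validation_generator.next()[0])\n",
"print([labels[int(row.argmax())] for row in probs])"
],
"execution_count": 0,
"outputs": []
},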
{
"metadata": {
"id": "ic7CF5Y-Tnjl",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"!pip install -q tensorflowjs==0.5.7"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "rfYd04m2s8_Y",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 33
},
"outputId": "1564dd78-0eb2-4140-9be9-e40de9f54a04"
},
"cell_type": "code",
"source": [
"!tensorflowjs_converter --input_format keras small_last4.h5 model"
],
"execution_count": 101,
"outputs": [
{
"output_type": "stream",
"text": [
"Using TensorFlow backend.\r\n"
],
"name": "stdout"
}
]
},
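{
"metadata": {
"id": "check-converted-sketch",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"A quick sanity check (a sketch, not from the original run) that the converter wrote a `model.json` manifest plus binary weight shards into `model/`:"
]
},
{
"metadata": {
"id": "check-converted-code",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"import json, os\n",
"\n",
"# model.json describes the topology and references the weight shard files\n",
"with open('model/model.json') as f:\n",
"    manifest = json.load(f)\n",
"print(len(os.listdir('model')), 'files in model/')"
],
"execution_count": 0,
"outputs": []
},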
{
"metadata": {
"id": "UpxUmOo7tKEB",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1566
},
"outputId": "0a1a5a3c-0a51-4b16-b6b2-d8165bba8290"
},
"cell_type": "code",
"source": [
"!zip -r model.zip model && ls -l\n",
"import google.colab\n",
"google.colab.files.download('model.zip')"
],
"execution_count": 103,
"outputs": [
{
"output_type": "stream",
"text": [
"updating: model/ (stored 0%)\n",
"updating: model/group1-shard9of29 (deflated 8%)\n",
"updating: model/group1-shard5of29 (deflated 8%)\n",
"updating: model/group1-shard17of29 (deflated 8%)\n",
"updating: model/group1-shard27of29 (deflated 7%)\n",
"updating: model/model.json (deflated 94%)\n",
"updating: model/group1-shard24of29 (deflated 8%)\n",
"updating: model/group1-shard8of29 (deflated 8%)\n",
"updating: model/group1-shard10of29 (deflated 8%)\n",
"updating: model/group1-shard14of29 (deflated 8%)\n",
"updating: model/group1-shard18of29 (deflated 8%)\n",
"updating: model/group1-shard21of29 (deflated 8%)\n",
"updating: model/group1-shard1of29 (deflated 8%)\n",
"updating: model/group1-shard6of29 (deflated 8%)\n",
"updating: model/group1-shard22of29 (deflated 8%)\n",
"updating: model/group1-shard23of29 (deflated 8%)\n",
"updating: model/group1-shard26of29 (deflated 7%)\n",
"updating: model/group1-shard11of29 (deflated 8%)\n",
"updating: model/group1-shard28of29 (deflated 7%)\n",
"updating: model/group1-shard29of29 (deflated 7%)\n",
"updating: model/group1-shard13of29 (deflated 8%)\n",
"updating: model/group1-shard16of29 (deflated 8%)\n",
"updating: model/group1-shard4of29 (deflated 8%)\n",
"updating: model/group1-shard7of29 (deflated 8%)\n",
"updating: model/group1-shard15of29 (deflated 8%)\n",
"updating: model/group1-shard20of29 (deflated 8%)\n",
"updating: model/group1-shard25of29 (deflated 8%)\n",
"updating: model/group1-shard19of29 (deflated 8%)\n",
"updating: model/group1-shard3of29 (deflated 8%)\n",
"updating: model/group1-shard2of29 (deflated 8%)\n",
"updating: model/group1-shard12of29 (deflated 8%)\n",
"total 349024\n",
"-rw-rw-rw- 1 root root 20837072 Apr 15 14:31 All_Ratings.xlsx\n",
"drwxr-xr-x 4 root root 4096 Aug 30 16:11 data\n",
"drwxrwxr-x 2 root root 147456 May 11 13:38 facial landmark\n",
"drwxrwxr-x 4 root root 147456 Aug 30 16:11 Images\n",
"-rw-rw-rw- 1 root root 252839 Apr 15 14:30 Images_Sources.xlsx\n",
"drwxr-xr-x 2 root root 4096 Aug 30 16:27 model\n",
"-rw-r--r-- 1 root root 108785096 Aug 30 19:52 model.zip\n",
"-rw-rw-rw- 1 root root 1837 May 11 13:40 README.txt\n",
"-rw-r--r-- 1 root root 227027200 Aug 30 19:52 small_last4.h5\n",
"drwxrwxr-x 4 root root 4096 Apr 15 14:01 train_test_files\n",
"-rw-rw-r-- 1 root root 179562 Apr 24 13:04 train_test_files.zip\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"----------------------------------------\n",
"Exception happened during processing of request from ('::ffff:127.0.0.1', 59128, 0, 0)\n",
"Traceback (most recent call last):\n",
" File \"/usr/lib/python3.6/socketserver.py\", line 317, in _handle_request_noblock\n",
" self.process_request(request, client_address)\n",
" File \"/usr/lib/python3.6/socketserver.py\", line 348, in process_request\n",
" self.finish_request(request, client_address)\n",
" File \"/usr/lib/python3.6/socketserver.py\", line 361, in finish_request\n",
" self.RequestHandlerClass(request, client_address, self)\n",
" File \"/usr/lib/python3.6/socketserver.py\", line 696, in __init__\n",
" self.handle()\n",
" File \"/usr/lib/python3.6/http/server.py\", line 418, in handle\n",
" self.handle_one_request()\n",
" File \"/usr/lib/python3.6/http/server.py\", line 406, in handle_one_request\n",
" method()\n",
" File \"/usr/lib/python3.6/http/server.py\", line 639, in do_GET\n",
" self.copyfile(f, self.wfile)\n",
" File \"/usr/lib/python3.6/http/server.py\", line 800, in copyfile\n",
" shutil.copyfileobj(source, outputfile)\n",
" File \"/usr/lib/python3.6/shutil.py\", line 82, in copyfileobj\n",
" fdst.write(buf)\n",
" File \"/usr/lib/python3.6/socketserver.py\", line 775, in write\n",
" self._sock.sendall(b)\n",
"ConnectionResetError: [Errno 104] Connection reset by peer\n",
"----------------------------------------\n",
"----------------------------------------\n",
"Exception happened during processing of request from ('::ffff:127.0.0.1', 40540, 0, 0)\n",
"Traceback (most recent call last):\n",
" File \"/usr/lib/python3.6/socketserver.py\", line 317, in _handle_request_noblock\n",
" self.process_request(request, client_address)\n",
" File \"/usr/lib/python3.6/socketserver.py\", line 348, in process_request\n",
" self.finish_request(request, client_address)\n",
" File \"/usr/lib/python3.6/socketserver.py\", line 361, in finish_request\n",
" self.RequestHandlerClass(request, client_address, self)\n",
" File \"/usr/lib/python3.6/socketserver.py\", line 696, in __init__\n",
" self.handle()\n",
" File \"/usr/lib/python3.6/http/server.py\", line 418, in handle\n",
" self.handle_one_request()\n",
" File \"/usr/lib/python3.6/http/server.py\", line 406, in handle_one_request\n",
" method()\n",
" File \"/usr/lib/python3.6/http/server.py\", line 639, in do_GET\n",
" self.copyfile(f, self.wfile)\n",
" File \"/usr/lib/python3.6/http/server.py\", line 800, in copyfile\n",
" shutil.copyfileobj(source, outputfile)\n",
" File \"/usr/lib/python3.6/shutil.py\", line 82, in copyfileobj\n",
" fdst.write(buf)\n",
" File \"/usr/lib/python3.6/socketserver.py\", line 775, in write\n",
" self._sock.sendall(b)\n",
"ConnectionResetError: [Errno 104] Connection reset by peer\n",
"----------------------------------------\n"
],
"name": "stderr"
}
]
},
{
"metadata": {
"id": "bhGbUJk4qZ5U",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"!cp model.zip ../drive/"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "RRpsAK2QtzUm",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 66
},
"outputId": "40b0d3d6-45e0-47c6-a4f0-512a883bf91a"
},
"cell_type": "code",
"source": [
"!ls"
],
"execution_count": 67,
"outputs": [
{
"output_type": "stream",
"text": [
"All_Ratings.xlsx Images\t model.zip train_test_files\r\n",
"data\t\t Images_Sources.xlsx README.txt train_test_files.zip\r\n",
"facial landmark model\t\t small_last4.h5\r\n"
],
"name": "stdout"
}
]
}
]
}
@jaecheoljung

I'll use this as a reference for my studies. Thank you.
