@seisvelas
Last active May 29, 2019 11:00
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "My First Neural Network.ipynb",
"version": "0.3.2",
"provenance": [],
"include_colab_link": true
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/seisvelas/0d68e3639012356fe2b7169229409869/copy-of-regression_in_keras.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ugA8Ml_kNxsI",
"colab_type": "text"
},
"source": [
"# Ridiculously Simple Regression In Keras\n",
"\n",
"After the 998,232,212,863th guided tour of MNIST in Keras, I decided that the only way I'd ever get anywhere was to just start building my own models that do things, even if those things are really simple.\n",
"\n",
"My regression does virtually nothing cool. It learns the basic mapping y=2x+0. But I'm really happy because I actually made it work!"
]
},
{
"cell_type": "code",
"metadata": {
"id": "daUr8eUpNxsM",
"colab_type": "code",
"colab": {}
},
"source": [
"import numpy as np\n",
"import random\n",
"from keras.optimizers import Adam\n",
"from keras.layers import Dense\n",
"from keras.models import Sequential"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "8Lq3epIAHKIc",
"colab_type": "text"
},
"source": [
"Then I make my dataset to regress upon.\n",
"\n",
"```\n",
"x = 0, 1, 2, ..., 5000\n",
"y = 2x + 0\n",
"```"
]
},
{
"cell_type": "code",
"metadata": {
"id": "RYX70jBoNxsU",
"colab_type": "code",
"colab": {}
},
"source": [
"X_train = np.array([np.array([i]) for i in range(5000)])\n",
"y_train = np.array([np.array([i[0]*2]) for i in X_train])"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "ZaRRLlQPGgfy",
"colab_type": "text"
},
"source": [
"Input is 1-dimensional, of course (just a number!). The first Dense layer has 8 neurons, which is completely arbitrary: with other neuron counts, the loss came out higher. There is nothing scientific about what I'm doing here. Instead of the scientific method, this follows the Wiccan edict Do What Works."
]
},
{
"cell_type": "code",
"metadata": {
"id": "0quzn0ErNxss",
"colab_type": "code",
"colab": {}
},
"source": [
"model = Sequential()\n",
"model.add(Dense(8, input_dim=1, activation='relu'))\n",
"model.add(Dense(1))"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "uG5RZ2kdNxs3",
"colab_type": "code",
"colab": {}
},
"source": [
"model.compile(optimizer=Adam(lr=0.0001),\n",
" loss='mse', # mean squared error\n",
" metrics=['mae'])"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "FZPVGdTgNxs8",
"colab_type": "code",
"outputId": "1d5f35f4-f4da-45a3-8a86-af69374696a2",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1394
}
},
"source": [
"model.fit(X_train, y_train, \n",
" batch_size=2, epochs=40, verbose=1)"
],
"execution_count": 297,
"outputs": [
{
"output_type": "stream",
"text": [
"Epoch 1/40\n",
"5000/5000 [==============================] - 6s 1ms/step - loss: 37647198.6214 - mean_absolute_error: 5309.0790\n",
"Epoch 2/40\n",
"5000/5000 [==============================] - 4s 703us/step - loss: 26650483.8375 - mean_absolute_error: 4463.3974\n",
"Epoch 3/40\n",
"5000/5000 [==============================] - 4s 702us/step - loss: 15605274.0208 - mean_absolute_error: 3400.6906\n",
"Epoch 4/40\n",
"5000/5000 [==============================] - 3s 695us/step - loss: 5516206.8437 - mean_absolute_error: 1987.0250\n",
"Epoch 5/40\n",
"5000/5000 [==============================] - 3s 698us/step - loss: 674401.6671 - mean_absolute_error: 633.8765\n",
"Epoch 6/40\n",
"5000/5000 [==============================] - 3s 698us/step - loss: 4431.7969 - mean_absolute_error: 35.7851\n",
"Epoch 7/40\n",
"5000/5000 [==============================] - 4s 712us/step - loss: 1.2563 - mean_absolute_error: 0.9356\n",
"Epoch 8/40\n",
"5000/5000 [==============================] - 3s 696us/step - loss: 1.2503 - mean_absolute_error: 0.9317\n",
"Epoch 9/40\n",
"5000/5000 [==============================] - 3s 699us/step - loss: 1.2342 - mean_absolute_error: 0.9267\n",
"Epoch 10/40\n",
"5000/5000 [==============================] - 3s 699us/step - loss: 1.1808 - mean_absolute_error: 0.9071\n",
"Epoch 11/40\n",
"5000/5000 [==============================] - 4s 703us/step - loss: 1.0541 - mean_absolute_error: 0.8623\n",
"Epoch 12/40\n",
"5000/5000 [==============================] - 3s 699us/step - loss: 0.8334 - mean_absolute_error: 0.7737\n",
"Epoch 13/40\n",
"5000/5000 [==============================] - 3s 697us/step - loss: 0.5908 - mean_absolute_error: 0.6510\n",
"Epoch 14/40\n",
"5000/5000 [==============================] - 3s 695us/step - loss: 0.3911 - mean_absolute_error: 0.5267\n",
"Epoch 15/40\n",
"5000/5000 [==============================] - 3s 695us/step - loss: 0.2475 - mean_absolute_error: 0.4149\n",
"Epoch 16/40\n",
"5000/5000 [==============================] - 3s 684us/step - loss: 0.1446 - mean_absolute_error: 0.3072\n",
"Epoch 17/40\n",
"5000/5000 [==============================] - 3s 697us/step - loss: 0.0673 - mean_absolute_error: 0.2036\n",
"Epoch 18/40\n",
"5000/5000 [==============================] - 3s 697us/step - loss: 0.0495 - mean_absolute_error: 0.1520\n",
"Epoch 19/40\n",
"5000/5000 [==============================] - 3s 695us/step - loss: 0.0736 - mean_absolute_error: 0.1088\n",
"Epoch 20/40\n",
"5000/5000 [==============================] - 3s 698us/step - loss: 0.0441 - mean_absolute_error: 0.0784\n",
"Epoch 21/40\n",
"5000/5000 [==============================] - 3s 695us/step - loss: 0.1893 - mean_absolute_error: 0.0905\n",
"Epoch 22/40\n",
"5000/5000 [==============================] - 3s 694us/step - loss: 8.0481e-05 - mean_absolute_error: 0.0057\n",
"Epoch 23/40\n",
"5000/5000 [==============================] - 4s 700us/step - loss: 0.0657 - mean_absolute_error: 0.0682\n",
"Epoch 24/40\n",
"5000/5000 [==============================] - 4s 700us/step - loss: 0.0068 - mean_absolute_error: 0.0177\n",
"Epoch 25/40\n",
"5000/5000 [==============================] - 4s 710us/step - loss: 0.0629 - mean_absolute_error: 0.0661\n",
"Epoch 26/40\n",
"5000/5000 [==============================] - 4s 704us/step - loss: 0.0357 - mean_absolute_error: 0.0417\n",
"Epoch 27/40\n",
"5000/5000 [==============================] - 3s 694us/step - loss: 0.0509 - mean_absolute_error: 0.0581\n",
"Epoch 28/40\n",
"5000/5000 [==============================] - 3s 699us/step - loss: 0.0171 - mean_absolute_error: 0.0398\n",
"Epoch 29/40\n",
"5000/5000 [==============================] - 3s 699us/step - loss: 0.0236 - mean_absolute_error: 0.0555\n",
"Epoch 30/40\n",
"5000/5000 [==============================] - 3s 692us/step - loss: 0.1105 - mean_absolute_error: 0.0536\n",
"Epoch 31/40\n",
"5000/5000 [==============================] - 3s 686us/step - loss: 0.0528 - mean_absolute_error: 0.0436\n",
"Epoch 32/40\n",
"5000/5000 [==============================] - 3s 692us/step - loss: 0.0302 - mean_absolute_error: 0.0345\n",
"Epoch 33/40\n",
"5000/5000 [==============================] - 3s 696us/step - loss: 0.0168 - mean_absolute_error: 0.0431\n",
"Epoch 34/40\n",
"5000/5000 [==============================] - 3s 694us/step - loss: 0.0258 - mean_absolute_error: 0.0381\n",
"Epoch 35/40\n",
"5000/5000 [==============================] - 4s 703us/step - loss: 0.0557 - mean_absolute_error: 0.0442\n",
"Epoch 36/40\n",
"5000/5000 [==============================] - 3s 692us/step - loss: 0.0222 - mean_absolute_error: 0.0463\n",
"Epoch 37/40\n",
"5000/5000 [==============================] - 4s 706us/step - loss: 0.0302 - mean_absolute_error: 0.0581\n",
"Epoch 38/40\n",
"5000/5000 [==============================] - 3s 696us/step - loss: 0.0209 - mean_absolute_error: 0.0494\n",
"Epoch 39/40\n",
"5000/5000 [==============================] - 3s 691us/step - loss: 0.0163 - mean_absolute_error: 0.0481\n",
"Epoch 40/40\n",
"5000/5000 [==============================] - 3s 697us/step - loss: 0.0449 - mean_absolute_error: 0.0315\n"
],
"name": "stdout"
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<keras.callbacks.History at 0x7fa52584e668>"
]
},
"metadata": {
"tags": []
},
"execution_count": 297
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "MRzg1K4QNxtK",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "5ee7a4d4-b520-48ca-ee28-ae1296cc7e97"
},
"source": [
"model.predict(np.array([17.5]))[0][0]"
],
"execution_count": 298,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"35.000267"
]
},
"metadata": {
"tags": []
},
"execution_count": 298
}
]
},
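{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick extra sanity check (my addition, not part of the original run): since the target mapping is y = 2x, each prediction should come out close to twice its input. `predict` accepts a whole batch at once, so we can spot-check several values in one call; the exact outputs will drift slightly from run to run. The input values below are arbitrary picks."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Hypothetical check: each prediction should be roughly 2 * input\n",
"test_inputs = np.array([[3.0], [100.0], [2500.0]])\n",
"model.predict(test_inputs)"
],
"execution_count": 0,
"outputs": []
},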
{
"cell_type": "markdown",
"metadata": {
"id": "A9h0xCxeIwKb",
"colab_type": "text"
},
"source": [
"# Woah!\n",
"\n",
"Testing our model on 17.5, it returns a prediction of 35.000267, which is close enough to the truth for me to count it!"
]
}
]
}