ANs_101 by @RodolfoFerro
Last active November 12, 2022
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "ANs_101",
"provenance": [],
"collapsed_sections": [],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/RodolfoFerro/b8ce7d5c4f436846e8f738022e929e99/introducci-n-a-las-neuronas-artificiales.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "R6jO_1gISKxk"
},
"source": [
"# Introduction to artificial neurons 🧠\n",
"\n",
"## Contents\n",
"\n",
"### Section I\n",
"\n",
"1. Brief history\n",
"2. Threshold Logic Unit (TLU)\n",
"3. Activation and bias – The perceptron\n",
"\n",
"### Section II\n",
"\n",
"4. How neurons learn\n",
"5. Training a neuron\n",
"6. Predictions\n",
"\n",
"### Section III – Challenge!\n",
"\n",
"7. The dataset\n",
"8. Data preparation\n",
"9. Model creation\n",
"10. Model training\n",
"11. Evaluation and prediction\n",
"\n",
"### Awards\n",
"\n",
"| Points | Activity |\n",
"| ------ | ---------------------------------------- |\n",
"| 5 | Generalization to how neural networks work. |\n",
"| 4 | Best score obtained in the challenge. |\n",
"| 3 | Computing the derivative of a function. |\n",
"| 2 | Deriving the linearity of TLUs. |\n",
"| 1 | Best _workflow_ for the challenge data. |\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xNVG2PnSEtQN"
},
"source": [
"## **Section I**"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tPk1Rkc4FZ5g"
},
"source": [
"### **History of neural networks**\n",
"\n",
"We could say the story begins with the McCulloch and Pitts neuron model of 1943, the **Threshold Logic Unit (TLU)**, also called the **Linear Threshold Unit**, which was the first modern neuron model and has inspired the development of later neuron models. (You can read more [here](https://es.wikipedia.org/wiki/Neurona_de_McCulloch-Pitts).)\n",
"\n",
"After the TLU, the story continues with the development of a type of artificial neuron with an **activation function**, called the **perceptron**. It was developed between the 1950s and 1960s by the scientist **Frank Rosenblatt**."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uehq48zoSocy"
},
"source": [
"### **So, what is an artificial neuron?**\n",
"\n",
"An artificial neuron is a mathematical function conceived as a model of biological neurons. (You can read a bit more [here](https://en.wikipedia.org/wiki/Artificial_neuron).)\n",
"\n",
"The general model of an **artificial neuron** takes several **inputs** $x_1, x_2,..., x_n $ and produces an **output**. The inputs were proposed to have associated **weights** $w_1, w_2, ..., w_n$, real numbers that we can interpret as expressing the importance of each piece of input information for computing the neuron's output value. The output of the neuron, $0$ or $1$, is determined by whether the weighted sum\n",
"\n",
"$$\\displaystyle\\sum_{j}w_jx_j,$$\n",
"\n",
"<!-- $\\textbf{w}_{Layer}\\cdot\\textbf{x} = \n",
"\\begin{bmatrix}\n",
"w_{1, 1} & w_{1, 2} & \\cdots & w_{1, n}\\\\\n",
"w_{2, 1} & w_{2, 2} & \\cdots & w_{2, n}\\\\\n",
"\\vdots & \\vdots & \\ddots & \\vdots\\\\\n",
"w_{m, 1} & w_{m, 2} & \\cdots & w_{m, n}\\\\\n",
"\\end{bmatrix} \\cdot\n",
"\\begin{bmatrix}\n",
"x_1\\\\\n",
"x_2\\\\\n",
"\\vdots\\\\\n",
"x_n\n",
"\\end{bmatrix}$ -->\n",
"\n",
"(for $j \\in \\{1, 2, ..., n\\}$) is less than or greater than a **limit value** that for now we will call the **threshold**. (Here we begin the formalization of what a TLU is and how it works.)\n",
"\n",
"Seen another way, an artificial neuron can be interpreted as a system that makes decisions based on the evidence presented to it."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "q33kCpXyFgJ_"
},
"source": [
"#### **Let's implement a TLU**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "cLBMuek3lBHd"
},
"source": [
"import numpy as np\n",
"\n",
"\n",
"# First we create our TLU class\n",
"class TLU():\n",
" def __init__(self, inputs, weights):\n",
" \"\"\"Class constructor.\n",
" \n",
" Parameters\n",
" ----------\n",
" inputs : list\n",
" List of input values.\n",
" weights : list\n",
" List of weight values.\n",
" \"\"\"\n",
"\n",
" self.inputs = None # TODO: np.array <- inputs\n",
" self.weights = None # TODO: np.array <- weights\n",
" \n",
" def decide(self, threshold):\n",
" \"\"\"Function that operates inputs @ weights.\n",
" \n",
" Parameters\n",
" ----------\n",
" threshold : int\n",
" Threshold value for decision.\n",
" \"\"\"\n",
"\n",
" # TODO: Inner product of data\n",
" pass"
],
"execution_count": null,
"outputs": []
},
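One possible completion of the TODOs above, as a minimal sketch (not the official solution): store the inputs and weights as NumPy arrays, and fire (return `1`) exactly when the inner product reaches the threshold. The example values at the end are made up.

```python
import numpy as np


class TLU:
    def __init__(self, inputs, weights):
        # Store inputs and weights as NumPy arrays
        self.inputs = np.array(inputs)
        self.weights = np.array(weights)

    def decide(self, threshold):
        # Inner product of inputs and weights, compared against the threshold
        weighted_sum = self.inputs @ self.weights
        return 1 if weighted_sum >= threshold else 0


neuron = TLU([1, 0, 1], [2, 3, 1])
print(neuron.decide(2))  # weighted sum is 3, so this prints 1
```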
{
"cell_type": "code",
"metadata": {
"id": "t42O74IdmKIw"
},
"source": [
"# Now, we need to set inputs and weights\n",
"inputs, weights = [], []\n",
"\n",
"questions = [\n",
" \"· What is the speed? \",\n",
" \"· Heart rate? \",\n",
" \"· Breathing? \"\n",
"]\n",
"\n",
"for question in questions:\n",
" i = int(input(question))\n",
" w = int(input(\"· And its associated weight is... \"))\n",
" inputs.append(i)\n",
" weights.append(w)\n",
" print()\n",
"\n",
"threshold = int(input(\"· And our threshold/limit will be: \"))"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "ZHjy-k33oNFm"
},
"source": [
"artificial_neuron = TLU(inputs, weights)  # Instantiate the TLU\n",
"artificial_neuron.decide(threshold)  # Apply the decision function with the threshold"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "gUCCwUG6DgCX"
},
"source": [
"### **Bias and activation functions – The perceptron**\n",
"\n",
"_Before continuing, let's introduce two more concepts: the **bias** and the **activation function**._\n",
"\n",
"The mathematical operation the neuron performs for the thresholding decision can be written as:\n",
"\n",
"$$ f(\\textbf{x}) = \n",
" \\begin{cases}\n",
" 0 & \\text{if $\\displaystyle\\sum_{j}w_jx_j <$ threshold} \\\\\n",
" 1 & \\text{if $\\displaystyle\\sum_{j}w_jx_j \\geq$ threshold} \\\\\n",
" \\end{cases},$$\n",
"\n",
"where $j \\in \\{1, 2, ..., n\\}$, and thus $\\textbf{x} = (x_1, x_2, ..., x_n)$.\n",
"\n",
"From the above, we can move the threshold to the other side of the inequality and write it as $b$, obtaining:\n",
"\n",
"$$ f(\\textbf{x}) = \n",
" \\begin{cases}\n",
" 0 & \\text{if $\\displaystyle\\sum_{j}w_jx_j + b < 0$} \\\\\n",
" 1 & \\text{if $\\displaystyle\\sum_{j}w_jx_j + b \\geq 0$} \\\\\n",
" \\end{cases},$$\n",
"\n",
"where $\\textbf{x} = (x_1, x_2, ..., x_n)$ and $j \\in \\{1, 2, ..., n\\}$.\n",
"\n",
"This $b$ is also known as the **bias**, and it describes *how likely the neuron is to __fire__*.\n",
"\n",
"Curiously, this mathematical description matches a step function (the [_Heaviside_](https://es.wikipedia.org/wiki/Funci%C3%B3n_escal%C3%B3n_de_Heaviside) function), which is an **activation function**: a function that gates the flow of information according to the inputs and weights, letting the processed result fire toward the output. The step function looks as follows:\n",
"\n",
"<center>\n",
" <img src=\"https://upload.wikimedia.org/wikipedia/commons/4/4a/Funci%C3%B3n_Cu_H.svg\" width=\"40%\" alt=\"Heaviside step function\">\n",
"</center>\n",
"\n",
"However, we can make a neuron even more sensitive to its data (inputs, weights, bias) by using a [sigmoid](https://es.wikipedia.org/wiki/Funci%C3%B3n_sigmoide) function instead. This was one of Rosenblatt's additions when developing his perceptron proposal. The sigmoid function looks as follows:\n",
"\n",
"<center>\n",
" <img src=\"https://upload.wikimedia.org/wikipedia/commons/6/66/Funci%C3%B3n_sigmoide_01.svg\" width=\"40%\" alt=\"Sigmoid function\">\n",
"</center>\n",
"\n",
"This function is smooth, so it has a different \"sensitivity\" to abrupt changes in values. Also, its inputs, instead of being only $1$'s or $0$'s, can take any real value. The sigmoid function is described by the following mathematical expression:\n",
"\n",
"$$f(z) = \\dfrac{1}{1+e^{-z}}$$\n",
"\n",
"Or, written in terms of inputs, weights, and bias:\n",
"\n",
"$$f(z) = \\dfrac{1}{1+\\exp{\\left\\{-\\left(\\displaystyle\\sum_{j}w_jx_j +b\\right)\\right\\}}}$$"
]
},
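As a quick numeric illustration of the difference (a sketch with made-up values, not part of the exercise): the step function jumps from 0 to 1 at $z = 0$, while the sigmoid changes smoothly around it.

```python
import numpy as np


def heaviside(z):
    # Step activation: 0 below zero, 1 at or above it
    return np.where(z >= 0, 1, 0)


def sigmoid(z):
    # Smooth activation: f(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))


z = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])
print(heaviside(z))              # [0 0 1 1 1]
print(np.round(sigmoid(z), 3))   # values rising smoothly through 0.5
```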
{
"cell_type": "markdown",
"metadata": {
"id": "0G1MY4HQFsEd"
},
"source": [
"#### **Back to the example**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "qSn8VaEoDtHo"
},
"source": [
"# We modify the class to add the activation function\n",
"class Perceptron():\n",
" def __init__(self, inputs, weights):\n",
" \"\"\"Class constructor.\n",
" \n",
" Parameters\n",
" ----------\n",
" inputs : list\n",
" List of input values.\n",
" weights : list\n",
" List of weight values.\n",
" \"\"\"\n",
"\n",
" self.inputs = None # TODO: np.array <- inputs\n",
" self.weights = None # TODO: np.array <- weights\n",
" \n",
" def decide(self, bias):\n",
" \"\"\"Function that operates inputs @ weights.\n",
" \n",
" Parameters\n",
" ----------\n",
" bias : int\n",
" The bias value for operation.\n",
" \"\"\"\n",
"\n",
" # TODO: Inner product of data + bias\n",
" # TODO: Apply sigmoid function f(z) = 1 / (1 + e^(-z))\n",
" pass"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "ogPy6NpfERfJ"
},
"source": [
"bias = int(input(\"· The new bias will be: \"))\n",
"perceptron = Perceptron(inputs, weights)\n",
"perceptron.decide(bias)"
],
"execution_count": null,
"outputs": []
},
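One possible completion of the `Perceptron` TODOs above, as a hedged sketch (not the official solution): the inner product of the data plus the bias is passed through the sigmoid. The example inputs and weights are made up.

```python
import numpy as np


class Perceptron:
    def __init__(self, inputs, weights):
        self.inputs = np.array(inputs)
        self.weights = np.array(weights)

    def decide(self, bias):
        # Inner product of data plus bias, passed through the sigmoid
        z = self.inputs @ self.weights + bias
        return 1.0 / (1.0 + np.exp(-z))


perceptron = Perceptron([1.0, 1.0], [0.5, -0.5])
print(perceptron.decide(0.0))  # z = 0, so this prints 0.5
```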
{
"cell_type": "markdown",
"metadata": {
"id": "mRGlbVZsFxdk"
},
"source": [
"> This is the neuron we will use for the topics that follow."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NvmIk2G9EgOQ"
},
"source": [
"<center>\n",
" *********\n",
"</center>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YnY-np7LE3lS"
},
"source": [
"## **Section II**"
]
},
{
"cell_type": "markdown",
"source": [
"### How neurons learn\n",
"\n",
"Let's see how a single neuron can be trained to make a prediction.\n",
"\n",
"For this problem we will build a simple neuron in the spirit of the McCulloch & Pitts model, but using the sigmoid activation function.\n",
"\n",
"#### **Problem statement:**\n",
"\n",
"We want to show a simple neuron a set of examples so it can learn how a function behaves. The set of examples is the following:\n",
"\n",
"- `(1, 0)` should return `1`.\n",
"- `(0, 1)` should return `1`.\n",
"- `(0, 0)` should return `0`.\n",
"\n",
"Then, if we feed the neuron the value `(1, 1)`, it should be able to predict the number `1`.\n",
"\n",
"> Can you guess the function?\n",
"\n",
"#### What do we need to do?\n",
"\n",
"Program and train a neuron to make predictions.\n",
"\n",
"Concretely, we are going to do the following:\n",
"\n",
"- Build the class and its constructor.\n",
"- Define the sigmoid function and its derivative.\n",
"- Define the number of epochs for training.\n",
"- Solve the problem and predict the value for the desired input.\n",
"\n",
"**3 Points – Computing the derivative of a function.**"
],
"metadata": {
"id": "I7-Ja9DK9cIA"
}
},
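For reference, the derivative of the sigmoid can be worked out as follows (writing $\sigma(z) = 1/(1+e^{-z})$):

```latex
\sigma'(z)
= \frac{d}{dz}\left(1 + e^{-z}\right)^{-1}
= \frac{e^{-z}}{\left(1 + e^{-z}\right)^{2}}
= \frac{1}{1 + e^{-z}} \cdot \frac{e^{-z}}{1 + e^{-z}}
= \sigma(z)\,\bigl(1 - \sigma(z)\bigr)
```

This is why the derivative can be computed as `x * (1 - x)` when `x` already holds the sigmoid's output.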
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"\n",
"\n",
"class TrainableNeuron():\n",
" def __init__(self, n):\n",
" \"\"\"Class constructor.\n",
" \n",
" Parameters\n",
" ----------\n",
" n : int\n",
" Input size.\n",
" \"\"\"\n",
" \n",
" np.random.seed(123)\n",
" self.synaptic_weights = None # TODO. Use 2 * np.random.random((n, 1)) - 1 to generate values in (-1, 1)\n",
"\n",
" def __sigmoid(self, x):\n",
" \"\"\"Sigmoid function.\n",
" \n",
" Parameters\n",
" ----------\n",
" x : float\n",
" Input value to sigmoid function.\n",
" \"\"\"\n",
" \n",
" # TODO: Return result of sigmoid function f(z) = 1 / (1 + e^(-z))\n",
" return None\n",
"\n",
" def __sigmoid_derivative(self, x):\n",
" \"\"\"Derivative of the Sigmoid function.\n",
" \n",
" Parameters\n",
" ----------\n",
" x : float\n",
" Input value to evaluated sigmoid function.\"\"\"\n",
"\n",
" # TODO: Return the derivative of the sigmoid in terms of its output: x * (1 - x)\n",
" return None\n",
"\n",
" def train(self, training_inputs, training_output, iterations):\n",
" \"\"\"Training function.\n",
" \n",
" Parameters\n",
" ----------\n",
" training_inputs : list\n",
" List of features for training.\n",
" training_output : list\n",
" List of labels for training.\n",
" iterations : int\n",
" Number of iterations for training.\n",
" \n",
" Returns\n",
" -------\n",
" history : list\n",
" A list containing the training history.\n",
" \"\"\"\n",
"\n",
" history = []\n",
" \n",
" for iteration in range(iterations):\n",
" output = self.predict(training_inputs)\n",
" error = training_output.reshape((len(training_inputs), 1)) - output\n",
" #error = - training_output.reshape((len(training_inputs), 1)) * np.log(output) \\\n",
" # - (1 - training_output.reshape((len(training_inputs), 1))) * output\n",
" #error /= len(output)\n",
" adjustment = np.dot(training_inputs.T, error *\n",
" self.__sigmoid_derivative(output))\n",
" self.synaptic_weights += adjustment\n",
"\n",
" history.append(np.linalg.norm(error))\n",
" \n",
" return history\n",
"\n",
" def predict(self, inputs):\n",
" \"\"\"Prediction function. Applies input function to inputs tensor.\n",
" \n",
" Parameters\n",
" ----------\n",
" inputs : list\n",
" List of inputs to apply sigmoid function.\n",
" \"\"\"\n",
" # TODO: Apply self.__sigmoid to np.dot of (inputs, self.synaptic_weights)\n",
" return None"
],
"metadata": {
"id": "2NKx40hxqmo4"
},
"execution_count": null,
"outputs": []
},
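The update rule inside `train` can be sketched end to end with plain NumPy on the example data. This is a minimal sketch, assuming the sigmoid TODOs are filled with the standard sigmoid; it is not the official solution, and `np.random.default_rng` stands in for the seeded `np.random` used in the class.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def sigmoid_derivative(s):
    # Derivative of the sigmoid written in terms of its output s
    return s * (1.0 - s)


rng = np.random.default_rng(123)
weights = 2 * rng.random(2) - 1  # random weights in (-1, 1)

# Training samples from the problem statement
X = np.array([[0, 1], [1, 0], [0, 0]])
y = np.array([1, 1, 0])

for _ in range(10000):
    output = sigmoid(X @ weights)                          # forward pass
    error = y - output                                     # prediction error
    weights += X.T @ (error * sigmoid_derivative(output))  # weight update

pred = sigmoid(np.array([1, 1]) @ weights)
print(pred)  # close to 1
```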
{
"cell_type": "markdown",
"source": [
"### Generating the samples\n",
"\n",
"We can now generate a list of examples based on the problem description."
],
"metadata": {
"id": "Ym_oEzbhxYKT"
}
},
{
"cell_type": "code",
"source": [
"# Training samples:\n",
"input_values = [(0, 1), (1, 0), (0, 0)] # TODO. Define the input values as a list of tuples\n",
"output_values = [1, 1, 0] # TODO. Define the desired outputs\n",
"\n",
"training_inputs = np.array(input_values)\n",
"training_output = np.array(output_values).T.reshape((3, 1))"
],
"metadata": {
"id": "BYW9aYSCxc1q"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Training the neuron\n",
"\n",
"To run the training, we first define a neuron. By default, it will contain random weights (since it has not been trained yet):"
],
"metadata": {
"id": "DJUYV8H-xf7Y"
}
},
{
"cell_type": "code",
"source": [
"# Initialize Sigmoid Neuron:\n",
"neuron = TrainableNeuron(2)\n",
"print(\"Initial random weights:\")\n",
"neuron.synaptic_weights"
],
"metadata": {
"id": "cThkcQGMxrX8"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO.\n",
"# We can modify the number of epochs to see how it performs.\n",
"epochs = 10000\n",
"\n",
"# We train the neuron a number of epochs:\n",
"history = neuron.train(training_inputs, training_output, epochs)\n",
"print(\"New synaptic weights after training: \")\n",
"neuron.synaptic_weights"
],
"metadata": {
"id": "WnuCP6eHxtQk"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We can evaluate the neuron's training."
],
"metadata": {
"id": "7KFucScQncbe"
}
},
{
"cell_type": "code",
"source": [
"import matplotlib.pyplot as plt\n",
"plt.style.use('seaborn')\n",
"\n",
"x = np.arange(len(history))\n",
"y = history\n",
"\n",
"plt.plot(x, y)"
],
"metadata": {
"id": "8vhWL1nLnZ-R"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Making predictions"
],
"metadata": {
"id": "7vPb5a65x0bA"
}
},
{
"cell_type": "code",
"source": [
"# We predict to verify the performance:\n",
"one_one = np.array((1, 1))\n",
"print(\"Prediction for (1, 1): \")\n",
"neuron.predict(one_one)"
],
"metadata": {
"id": "YlhaCvTeyeYt"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "0IHtR4uPEaCO"
},
"source": [
"**2 Points – Deriving the linearity of TLUs.**\n",
"\n",
"<center>\n",
" *********\n",
"</center>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7NE4e3KuEVst"
},
"source": [
"## **Section III – Challenge!**"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1Z7JrTygMDSx"
},
"source": [
"### The dataset: Oranges vs. Apples\n",
"\n",
"The dataset is an adaptation of data found on [Kaggle](https://www.kaggle.com/datasets/theblackmamba31/apple-orange). It consists of sets of images of oranges and apples that will be used to train an artificial neuron.\n",
"\n",
"**1 Point – Best proposed workflow for working with the images.**\n"
]
},
{
"cell_type": "markdown",
"source": [
"To load the data, we will first download it from a repository where I previously prepared it for you. \n",
"\n",
"You can explore the source files directly in the [GitHub repository – `apple-orange-dataset`](https://github.com/RodolfoFerro/apple-orange-dataset).\n",
"\n",
"You can also explore the [script]() I used to prepare the data."
],
"metadata": {
"id": "UVg0AU2-Fqzr"
}
},
{
"cell_type": "code",
"source": [
"!wget https://raw.githubusercontent.com/RodolfoFerro/apple-orange-dataset/main/training_data.csv\n",
"!wget https://raw.githubusercontent.com/RodolfoFerro/apple-orange-dataset/main/testing_data.csv"
],
"metadata": {
"id": "1S81FXVEFzQo"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "CxfNdPU3NQge"
},
"source": [
"### Data preparation\n"
]
},
{
"cell_type": "code",
"source": [
"import pandas as pd\n",
"\n",
"\n",
"training_df = pd.read_csv('training_data.csv')\n",
"testing_df = pd.read_csv('testing_data.csv')\n",
"\n",
"training_df"
],
"metadata": {
"id": "4fh3DURvLBvA"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"R, G, B = [], [], []\n",
"\n",
"for color in training_df['color']:\n",
" rgb = color[1:-1].split(', ')\n",
" r, g, b = [int(value) for value in rgb]\n",
" R.append(r)\n",
" G.append(g)\n",
" B.append(b)\n",
"\n",
"training_df['r'] = R\n",
"training_df['g'] = G\n",
"training_df['b'] = B\n",
"training_df['class_str'] = training_df['class'].astype('str')\n",
"training_df"
],
"metadata": {
"id": "8IWxRHjQ4GS4"
},
"execution_count": null,
"outputs": []
},
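The per-channel parsing above can also be done in one pass. This is a sketch on a toy frame: the `color` column name comes from the dataset, but the example values are made up, and it assumes the column holds tuple-like strings such as `'(183, 49, 39)'`.

```python
import ast

import pandas as pd

# Toy frame mimicking the 'color' column (example values are made up)
df = pd.DataFrame({'color': ['(183, 49, 39)', '(240, 150, 20)']})

# ast.literal_eval safely parses each string into a tuple of ints,
# and zip(*...) transposes the tuples into three columns
parsed = df['color'].map(ast.literal_eval)
df['r'], df['g'], df['b'] = zip(*parsed)
print(df)
```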
{
"cell_type": "markdown",
"source": [
"### Data exploration"
],
"metadata": {
"id": "h7SGMNlqx8Dx"
}
},
{
"cell_type": "code",
"source": [
"import plotly.express as px\n",
"\n",
"\n",
"fig = px.scatter_3d(training_df, x='r', y='g', z='b',\n",
" color='class_str', symbol='class_str',\n",
" opacity=0.5)\n",
"fig.show()"
],
"metadata": {
"id": "RXINRt1ox_-G"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Creating an artificial neuron\n"
],
"metadata": {
"id": "npjrVs7jUBC3"
}
},
{
"cell_type": "code",
"source": [
"neuron = None #TODO: Create a neuron instance\n",
"neuron.synaptic_weights"
],
"metadata": {
"id": "eHmZ4nnccToB"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Training the model\n",
"\n",
"To train the model, we simply use the model's `.train()` method."
],
"metadata": {
"id": "B4DmYPVAUJ2d"
}
},
{
"cell_type": "code",
"source": [
"training_inputs = training_df[['r', 'g', 'b']].values / 255.\n",
"training_output = training_df['class'].values\n",
"\n",
"training_inputs, training_output"
],
"metadata": {
"id": "_0o5NZsB7ORw"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"history = None #TODO: Train a neuron"
],
"metadata": {
"id": "KX3X_t7B73NV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Evaluation and prediction\n",
"\n",
"We can evaluate the neuron's training."
],
"metadata": {
"id": "_2oyTh_jMAIM"
}
},
{
"cell_type": "code",
"source": [
"import matplotlib.pyplot as plt\n",
"plt.style.use('seaborn')\n",
"\n",
"x = np.arange(len(history))\n",
"y = history\n",
"\n",
"plt.plot(x, y)"
],
"metadata": {
"id": "buRgAf7xLvln"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"To predict an example color:"
],
"metadata": {
"id": "ZsC5ELq7Ad-F"
}
},
{
"cell_type": "code",
"source": [
"neuron.predict([0.73333333, 0.19215686, 0.15294118])"
],
"metadata": {
"id": "kLqvq2cnUfdD"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Or we can use scikit-learn functions to evaluate the results on the test set. (Use [`sklearn.metrics.accuracy_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html#sklearn.metrics.accuracy_score).)"
],
"metadata": {
"id": "8ubrtbZdoJ-m"
}
},
{
"cell_type": "markdown",
"source": [
"<center>\n",
" *********\n",
"</center>"
],
"metadata": {
"id": "hMCddqlrYosR"
}
},
{
"cell_type": "code",
"source": [
"R, G, B = [], [], []\n",
"\n",
"for color in testing_df['color']:\n",
" rgb = color[1:-1].split(', ')\n",
" r, g, b = [int(value) for value in rgb]\n",
" R.append(r)\n",
" G.append(g)\n",
" B.append(b)\n",
"\n",
"testing_df['r'] = R\n",
"testing_df['g'] = G\n",
"testing_df['b'] = B\n",
"testing_df['class_str'] = testing_df['class'].astype('str')\n",
"\n",
"\n",
"fig = px.scatter_3d(testing_df, x='r', y='g', z='b',\n",
" color='class_str', symbol='class_str',\n",
" opacity=0.5)\n",
"fig.show()"
],
"metadata": {
"id": "20x0UwqUAtdz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"testing_inputs = testing_df[['r', 'g', 'b']].values / 255.\n",
"testing_output = testing_df['class'].values\n",
"\n",
"predictions = []\n",
"for test_input in testing_inputs:\n",
" if neuron.predict(test_input)[0] <= 0.5:\n",
" prediction = 0\n",
" else:\n",
" prediction = 1\n",
" predictions.append(prediction)\n",
"predictions = np.array(predictions)\n",
"\n",
"predictions"
],
"metadata": {
"id": "tccP9w_EBGvG"
},
"execution_count": null,
"outputs": []
},
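The thresholding loop above can also be vectorized. This is a sketch with made-up sigmoid outputs (the 0.5 cut-off is the same one used in the loop):

```python
import numpy as np

# Hypothetical sigmoid outputs for six test samples (made-up values)
outputs = np.array([0.91, 0.12, 0.48, 0.77, 0.05, 0.63])
labels = np.array([1, 0, 1, 1, 0, 1])

predictions = (outputs > 0.5).astype(int)   # class 1 when output exceeds 0.5
accuracy = (predictions == labels).mean()   # fraction of correct predictions
print(predictions.tolist(), accuracy)
```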
{
"cell_type": "code",
"source": [
"from sklearn.metrics import accuracy_score\n",
"\n",
"\n",
"accuracy_score(testing_output, predictions)"
],
"metadata": {
"id": "JZvNFNY4B-Z9"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**4 Points – Best score obtained in the challenge.**\n",
"\n",
"> **Things you can explore:**\n",
"> - Use 1 to 3 of the given variables.\n",
"> - Research and implement a different function to estimate the error.\n",
"> - Apply transformations to the data.\n",
"> - Train for more epochs.\n",
"> - Move the threshold that defines the class.\n",
"> - Explore other activation functions.\n",
"> - Generate a new dataset from the original images.\n",
"\n",
"**5 Points – Generalization to how neural networks work.**"
],
"metadata": {
"id": "QKp_PZ_NDqbS"
}
},
{
"cell_type": "markdown",
"source": [
"--------\n",
"\n",
"> Content created by **Rodolfo Ferro**, 2022. <br>\n",
"> You can reach me on Instagram ([@rodo_ferro](https://www.instagram.com/rodo_ferro/)) or Twitter ([@rodo_ferro](https://twitter.com/rodo_ferro))."
],
"metadata": {
"id": "hSdbQU3e6-Ky"
}
}
]
}