
Gaston Mazzei (GastonMazzei)

@GastonMazzei
GastonMazzei / csv2table.py
Last active August 31, 2020 21:19
CSV to Table
import plotly.graph_objects as go
import pandas as pd
df = pd.read_csv('path/to/file.csv')
# You might want to change the "wanted columns"
wanted_columns = ['time']
wanted_columns += [f'Question {x+1}' for x in range(6)]
wanted_columns += [f'Answer {x+1}' for x in range(6)]
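The preview stops after building `wanted_columns`; a minimal sketch of the remaining step, using a small in-memory DataFrame as a stand-in for the real CSV (which isn't available here) and preparing the two lists that plotly's `go.Table` consumes:

```python
import pandas as pd

# Hypothetical stand-in for pd.read_csv('path/to/file.csv')
df = pd.DataFrame({
    'time': [1, 2],
    'Question 1': ['Who?', 'What?'],
    'Answer 1': ['Me', 'This'],
})

wanted_columns = ['time', 'Question 1', 'Answer 1']
table = df[wanted_columns]

# go.Table expects one list of header names and one list of values per column
header_values = list(table.columns)
cell_values = [table[c].tolist() for c in table.columns]
```

With plotly installed, the figure would then be built as `go.Figure(data=[go.Table(header=dict(values=header_values), cells=dict(values=cell_values))])`.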
#********************************************************************************
# We are using the conditional_scope of the keras_tuner HyperParameters class
#
# link: https://keras.io/api/keras_tuner/hyperparameters/#hyperparameters-class
#
# example by Gaston Mazzei, https://gastonmazzei.github.io/
#********************************************************************************
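The gist body isn't shown here; as a rough illustration of the idea behind `conditional_scope` (see the linked docs), here is a toy stand-in class — not the real keras_tuner API — in which hyperparameters declared inside a scope are only active when the parent hyperparameter takes one of the listed values:

```python
from contextlib import contextmanager

class HyperParameters:
    """Toy mimic of keras_tuner's HyperParameters, for illustration only."""
    def __init__(self):
        self.values = {}
        self._active = [True]  # stack of scope-activity flags

    def Choice(self, name, options):
        # Register the hyperparameter with its first option as default
        return self.values.setdefault(name, options[0])

    @contextmanager
    def conditional_scope(self, parent_name, parent_values):
        # The scope is active only if the parent's current value matches
        active = self._active[-1] and self.values.get(parent_name) in parent_values
        self._active.append(active)
        try:
            yield
        finally:
            self._active.pop()

    def Int(self, name, lo, hi):
        if not self._active[-1]:
            return None  # inactive scope: the hyperparameter is skipped
        return self.values.setdefault(name, lo)

hp = HyperParameters()
model_type = hp.Choice('model_type', ['mlp', 'cnn'])
with hp.conditional_scope('model_type', ['mlp']):
    units = hp.Int('units', 32, 128)      # active: model_type == 'mlp'
with hp.conditional_scope('model_type', ['cnn']):
    filters = hp.Int('filters', 16, 64)   # skipped: model_type != 'cnn'
```

In the real library the same pattern lets a tuner skip hyperparameters that are irrelevant for the currently selected branch of the search space.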
GastonMazzei / importancia_relativa_gradient_boosting.py
Created July 30, 2021 17:27
Relative importance using Scikit-Learn's "Gradient Boosting Classifier" algorithm
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
# Define the seed (SEED) and the number of rows (N)
SEED=1234
N = 2500
np.random.seed(SEED)
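The preview ends at the seed; a minimal sketch of how the rest might go, with synthetic data invented for this example (4 features, of which only the first two drive the label — an assumption, not the gist's data):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

SEED = 1234
N = 2500
np.random.seed(SEED)

# Hypothetical data: feature 0 matters most, feature 1 half as much, 2-3 are noise
X = np.random.randn(N, 4)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = GradientBoostingClassifier(random_state=SEED).fit(X, y)
importances = clf.feature_importances_   # normalized: sums to 1.0
order = np.argsort(importances)[::-1]    # features ranked by relative importance
```

The `importances` array can then be drawn as a bar chart with matplotlib, which is presumably what the gist does.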
GastonMazzei / cute_plot.py
Created October 20, 2021 17:51
cute_plot.py
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 15, 100)
y = np.sin(x)
plt.scatter(x, y)
plt.plot(x, y)
plt.title('PMF of the $2^n$ different possible phases')
plt.ylabel('Probability (0-1)')
GastonMazzei / NLP_MaximumEntropy_MinimumSTD.py
Created January 25, 2022 19:50
When looking for the most degenerate word in a corpus tagged with classes, this short script shows that minimizing the coefficient of variation is equivalent to maximizing the entropy of the random-variable model associated with the Multinomial/Dirichlet conjugate scheme, approximated by maximum likelihood.
import numpy as np
"""
Short script showing that minimizing the coefficient of variation (i.e. proportional to the standard deviation)
is equivalent to maximizing the entropy of the probability distribution of counts over the total,
i.e. the Dirichlet distribution for multinomial variables approximated by maximum likelihood :-)
Mathematical equivalence, in LaTeX/MathJax:
\max_x\left(-\sum_{i=1}^{4}\frac{N_i(x)}{N_{tot}(x)}\log\frac{N_i(x)}{N_{tot}(x)}\right) = \min_x\left(\sum_{i=1}^{4}\left(\frac{N_i(x)-\bar{N}(x)}{\bar{N}(x)}\right)^2\right)
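A quick numerical check of the stated equivalence, using hypothetical count vectors over 4 classes: the uniform vector should simultaneously maximize the entropy and minimize the dispersion term.

```python
import numpy as np

# Hypothetical count vectors N_i for four candidate words (rows) over 4 classes
counts = np.array([
    [10, 10, 10, 10],   # uniform: most "degenerate" word
    [25,  5,  5,  5],
    [ 1,  1,  1, 37],
    [12,  8, 11,  9],
], dtype=float)

# Entropy of p_i = N_i / N_tot (the max-likelihood estimate)
p = counts / counts.sum(axis=1, keepdims=True)
entropy = -(p * np.log(p)).sum(axis=1)

# Dispersion term: sum of ((N_i - mean) / mean)^2, proportional to the squared CV
mean = counts.mean(axis=1, keepdims=True)
dispersion = (((counts - mean) / mean) ** 2).sum(axis=1)
```

Both criteria pick the same row, consistent with the gist's claim that the two optimizations agree.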
GastonMazzei / Q-Learning-Soccer.py
Created February 11, 2022 23:50
Code for the Q-Learning presentation at Paris-Saclay (Dr. Abdel Lisser's Game Theory course)
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
fig = plt.figure()
im = Image.open('ball.jpg')
L = 15
im = np.asarray(im.resize((im.size[0] // L, im.size[1] // L))).astype(float) / 255  # np.float was removed from NumPy; use float
CENTER = fig.bbox.xmax//2, fig.bbox.ymax//4
DX, DY = -190, 280  # alternatively: fig.bbox.xmax//10, fig.bbox.ymax//5
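The preview only shows plotting setup; to illustrate the update rule the presentation is about, here is a minimal tabular Q-learning sketch on a hypothetical 5-state corridor (not the gist's soccer environment):

```python
import numpy as np

np.random.seed(0)
n_states, n_actions = 5, 2           # states 0..4; actions: 0 = left, 1 = right
GOAL = n_states - 1                  # reward 1 only on reaching the goal state
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

for _ in range(2000):                # off-policy: behave randomly, learn greedy Q
    s = 0
    for _ in range(50):              # cap episode length
        a = np.random.randint(n_actions)
        s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        if s2 == GOAL:
            break
        s = s2

greedy = Q.argmax(axis=1)            # learned policy: "right" in every non-goal state
```

Because Q-learning is off-policy, even a purely random behavior policy converges here to the optimal greedy policy.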
GastonMazzei / distributed-self-stabilizing-ring-node-counter.py
Created March 8, 2022 14:19
Each processor (node of a ring network) applies a set of local rules and is able to count the number of nodes in the network. It is self-stabilizing, as it supports random initialization.
import numpy as np
import matplotlib.pyplot as plt
from uuid import uuid4
"""
Distributed self-stabilizing algorithms:
each node locally computes the ring's size,
assuming all variables are initialized randomly.
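The gist's actual local rules aren't shown in this preview; as a toy sketch of one common way to count a ring's nodes (not necessarily the gist's protocol), elect the node with the largest random id as leader, then circulate a token that each hop increments until it returns:

```python
from uuid import uuid4

def ring_size(n_nodes):
    """Hypothetical hop-counting protocol, simulated sequentially."""
    ids = [uuid4().int for _ in range(n_nodes)]          # random per-node ids
    leader = max(range(n_nodes), key=lambda i: ids[i])   # leader = largest id
    count, pos = 0, leader
    while True:
        pos = (pos + 1) % n_nodes   # pass the token to the next node in the ring
        count += 1
        if pos == leader:           # token came back: count equals the ring size
            return count
```

A genuinely self-stabilizing version would additionally recover from arbitrary initial token/counter states, which this sequential sketch does not model.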