Ivan Nardini (@IvanNardini)
@IvanNardini
IvanNardini / Target_design_flaw.py
Created August 1, 2020 15:31
Back to the Machine Learning fundamentals: How to write code for Model deployment (Part 3/3)
def train():
    # Read data
    data = pd.read_csv(DATA_INGESTION['data_path'])
    target = DATA_INGESTION['data_map']['target']
    variables = DATA_INGESTION['data_map']['variables']
    # Preprocessing: keep only records with a non-negative umbrella limit
    flt = data['umbrella_limit'] >= 0
    data = data[flt]
    # Encode the target with the configured mapping
    data[target] = data[target].map(FEATURES_ENGINEERING['target_encoding'])
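The snippet reads everything from module-level config dicts. A minimal sketch of what those dicts might look like, with the preprocessing steps replayed on an in-memory frame (the paths, column names, and encoding values are assumptions for illustration, not the article's actual config):

```python
import pandas as pd

# Hypothetical config dicts matching the keys train() reads
DATA_INGESTION = {
    'data_path': 'data/claims.csv',
    'data_map': {
        'target': 'fraud_reported',
        'variables': ['umbrella_limit', 'incident_severity'],
    },
}
FEATURES_ENGINEERING = {
    'target_encoding': {'Y': 1, 'N': 0},
}

# Same preprocessing as train(), on a toy DataFrame
data = pd.DataFrame({
    'umbrella_limit': [0, -1000000, 5000000],
    'incident_severity': ['Minor', 'Major', 'Total Loss'],
    'fraud_reported': ['N', 'Y', 'Y'],
})
target = DATA_INGESTION['data_map']['target']
data = data[data['umbrella_limit'] >= 0]  # drop negative limits
data[target] = data[target].map(FEATURES_ENGINEERING['target_encoding'])
```

Keeping the config in dicts like these is what lets the same `train()` body run unchanged when paths or encodings change.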
@IvanNardini
IvanNardini / pipeline.py
Created August 1, 2020 14:42
Back to the Machine Learning fundamentals: How to write code for Model deployment (Part 3/3)
'''
pipeline.py defines the training pipeline object.
'''
import data_preprocessing as Data_Prep
import feature_engineering as Feat_Eng
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
# Utils
@IvanNardini
IvanNardini / encoder_transformer.py
Created August 1, 2020 13:48
Back to the Machine Learning fundamentals: How to write code for Model deployment (Part 3/3)
class Encoder(BaseEstimator, TransformerMixin):
    """A transformer that returns a DataFrame
    with the variable encoded.

    Parameters
    ----------
    encoding_meta : list, default=None
    """
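The gist shows only the class signature and docstring. A minimal sketch of how such a transformer might be completed, where the `fit`/`transform` bodies and the assumption that `encoding_meta` is a list of `(variable, mapping)` pairs are mine, not the author's:

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

class Encoder(BaseEstimator, TransformerMixin):
    """Map categorical variables to numeric codes via supplied mappings.

    encoding_meta is assumed to be a list of (variable, mapping) pairs.
    """
    def __init__(self, encoding_meta=None):
        self.encoding_meta = encoding_meta or []

    def fit(self, X, y=None):
        # Stateless: the mappings are provided up front
        return self

    def transform(self, X):
        X = X.copy()
        for var, mapping in self.encoding_meta:
            if var in X.columns:
                X[var] = X[var].map(mapping)
        return X
```

Wrapping the encoding in a transformer is what lets it slot into the imblearn/sklearn pipeline instead of living in ad-hoc preprocessing code.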
@IvanNardini
IvanNardini / encoder.py
Last active August 1, 2020 13:40
Back to the Machine Learning fundamentals: How to write code for Model deployment (Part 3/3)
def encoder(data, var, mapping):
    '''
    Encode a variable for training
    :params: data, var, mapping
    :return: encoded pandas Series
    '''
    # Fail loudly: the original `pass` fell through to a KeyError anyway
    if var not in data.columns:
        raise KeyError(f"Variable '{var}' not found in data")
    return data[var].map(mapping)
@IvanNardini
IvanNardini / trasformer.py
Last active July 25, 2020 07:54
Back to the Machine Learning fundamentals: How to write code for Model deployment (Part 3/3)
from sklearn.base import BaseEstimator, TransformerMixin

class Transformer(BaseEstimator, TransformerMixin):
    """Identity (pass-through) transformer, used as a pipeline placeholder."""
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X
@IvanNardini
IvanNardini / pyenv.sh
Last active June 8, 2020 08:47
MLOps series #2 : Deploy a Recommendation System as Hosted Interactive Web Service on AWS
# Update local packages
sudo apt-get update -y
# Install dependencies
sudo apt-get install -y python3-pip python3-dev python3-venv
# Create the Python environment
python3 -m venv pyenv
# Activate the virtual environment
source ./pyenv/bin/activate
# Install packages
pip install -r ./src/score_interactive_endpoint/requirements.txt
@IvanNardini
IvanNardini / run_job.sh
Created June 7, 2020 15:51
MLOps series #1 : Batch scoring with Mlflow Model (Mleap flavor) on Google Cloud Platform
#!/bin/bash
# Pass CLUSTER_NAME, REGION and BUCKET parameters (or use the defaults)
CLUSTER_NAME=${1:-cluster-00000}
REGION=${2:-europe-west6}
BUCKET=${3:-cloud-demo-databrick-gcp}
# Run the job
gcloud dataproc jobs submit pyspark \
    --cluster ${CLUSTER_NAME} \
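The command is truncated in the gist. For orientation, a typical full invocation might look like the sketch below; the driver script path is hypothetical, not the gist's actual value (only `--cluster` appears in the snippet):

```shell
# Hypothetical completion: script path and extra flags are assumptions
gcloud dataproc jobs submit pyspark \
    gs://${BUCKET}/scripts/batch_scoring.py \
    --cluster ${CLUSTER_NAME} \
    --region ${REGION}
```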
@IvanNardini
IvanNardini / setup_cluster.sh
Last active June 7, 2020 15:50
MLOps series #1 : Batch scoring with Mlflow Model (Mleap flavor) on Google Cloud Platform
#!/bin/bash
# setup_cluster.sh
# Create a plain vanilla cluster with config if it doesn't exist.
# REGION - Region name (default eu)
# BUCKET - Bucket name (default cloud-demo-databrick-gcp)
# Pass all parameters (or use the defaults)
CLUSTER_NAME=${1:-cluster-00000}
@IvanNardini
IvanNardini / setup.sh
Created June 7, 2020 15:49
MLOps series #1 : Batch scoring with Mlflow Model (Mleap flavor) on Google Cloud Platform
#!/bin/bash
# setup.sh
# Create a bucket if it doesn't exist
# and load the deployment scripts (.sh, .py).
# REGION - Region name (default eu)
# BUCKET - Bucket name (default cloud-demo-databrick-gcp)
# Pass REGION and BUCKET names (or use the defaults)
@IvanNardini
IvanNardini / databricks_bash.txt
Created June 7, 2020 15:48
MLOps series #1 : Batch scoring with Mlflow Model (Mleap flavor) on Google Cloud Platform
%sh
rm -rf /tmp/mleap_python_model_export
mkdir /tmp/mleap_python_model_export
ls -la /tmp/mleap_python_model_export

# Serialize the model to an MLeap bundle (Python cell)
lrModel.serializeToBundle("jar:file:/tmp/mleap_python_model_export/lrModel.zip", predictions)

%sh
ls -la /tmp/mleap_python_model_export/

# Copy the bundle to DBFS (Python cell)
dbutils.fs.cp("file:/tmp/mleap_python_model_export/lrModel.zip", "dbfs:/example/lrModel.zip")
display(dbutils.fs.ls("dbfs:/example"))