@Nanthini10
Nanthini10 / retrieve_explanation.py
Created June 3, 2021 03:05
Retrieve explanation from client on Azure
from azureml.interpret import ExplanationClient
# Get model explanation data
client = ExplanationClient.from_run(run)
global_explanation = client.download_model_explanation()
local_importance_values = global_explanation.local_importance_values
expected_values = global_explanation.expected_values
# Or you can use the saved run.id to retrieve the feature importance values
client = ExplanationClient.from_run_id(ws, experiment_name, run.id)
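The `local_importance_values` retrieved above hold one row of per-feature attributions per scored sample; a common way to summarize them into a global ranking is to average their absolute values per feature. A minimal, Azure-free sketch with hypothetical values:

```python
# Hypothetical local importances: one inner list per scored row,
# one value per feature (shape: n_rows x n_features).
local_importance_values = [
    [0.2, -0.5, 0.1],
    [0.4, -0.3, 0.0],
]

# A common global summary: mean absolute importance per feature.
n_features = len(local_importance_values[0])
global_importance = [
    sum(abs(row[j]) for row in local_importance_values) / len(local_importance_values)
    for j in range(n_features)
]
# Here feature 1 dominates the ranking.
```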
@Nanthini10
Nanthini10 / custom_docker.py
Created June 3, 2021 02:58
RAPIDS env using custom Docker Image on Azure ML
from azureml.core import Environment
environment_name = "rapids"
env = Environment(environment_name)
env.docker.enabled = True
env.docker.base_image = None
env.docker.base_dockerfile = """
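The Dockerfile string assigned to `base_dockerfile` is cut off above; a minimal sketch of what such a string might contain, assuming a RAPIDS base image (the `rapidsai/rapidsai` tag here is an assumption; check Docker Hub for a current release):

```python
# Hypothetical contents for env.docker.base_dockerfile.
# The image tag below is an assumption; pick a current RAPIDS release.
rapids_dockerfile = """
FROM rapidsai/rapidsai:21.06-cuda11.0-runtime-ubuntu18.04-py3.8
RUN pip install azureml-defaults
"""
```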
@Nanthini10
Nanthini10 / optuna_rapids.ipynb
Created November 6, 2020 20:21
Optuna Notebook for the Mini-blog
Sorry, something went wrong. Reload?
Sorry, we cannot display this file.
Sorry, this file is invalid so it cannot be displayed.
@Nanthini10
Nanthini10 / run.py
Created May 13, 2020 16:41
Running a Tune experiment
analysis = tune.run(
    WrappedTrainable,
    name=exp_name,
    scheduler=sched,
    search_alg=search,
    stop={"training_iteration": CV_folds, "is_bad": True},
    resources_per_trial={"cpu": cpu_per_sample, "gpu": int(compute == "GPU")},
    num_samples=num_samples,
    checkpoint_at_end=True,
    keep_checkpoints_num=1,
)
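The `stop` dict passed to `tune.run` ends a trial as soon as any listed metric reaches its threshold; a minimal, Ray-free sketch of that semantics (`should_stop` is a hypothetical helper, not a Tune API):

```python
def should_stop(result, stop):
    # Mirrors Tune's dict stopping criterion: a trial stops when ANY
    # reported metric meets or exceeds its threshold.
    return any(result[key] >= threshold for key, threshold in stop.items())

stop = {"training_iteration": 5, "is_bad": True}
```

With this rule, a trial stops either after `CV_folds` iterations or as soon as a result reports `is_bad`.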
@Nanthini10
Nanthini10 / train.py
Created May 13, 2020 16:39
Creating a RayTune Trainable model and evaluating performance
def _train(self):
    iteration = getattr(self, "iteration", 0)
    if compute == "GPU":
        # split data
        X_train, X_test, y_train, y_test = train_test_split(
            X=self._dataset,
            y=self._y_label,
            train_size=0.8,
            shuffle=True,
        )
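The `_train` pattern above runs one cross-validation fold per Tune iteration; a stripped-down, Ray-free sketch of that bookkeeping (class name and per-fold scores are hypothetical):

```python
class FoldTrainable:
    """Each _train() call scores one CV fold and reports a running mean,
    mimicking the Trainable above without Ray or cuML."""

    def __init__(self, fold_scores):
        self.fold_scores = fold_scores  # hypothetical per-fold accuracies
        self.iteration = 0
        self.total = 0.0

    def _train(self):
        score = self.fold_scores[self.iteration]
        self.iteration += 1
        self.total += score
        return {
            "training_iteration": self.iteration,
            "mean_accuracy": self.total / self.iteration,
            "is_bad": score != score,  # flags a NaN fold score
        }

trainable = FoldTrainable([0.8, 0.9])
first = trainable._train()
second = trainable._train()
```

Reporting `training_iteration` and `is_bad` this way is what lets the `stop` dict in `run.py` terminate a trial after all folds or on a failed fold.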