Number of features selected    Negative logarithmic loss
1                              -0.1883
2                              -0.1473
5                              -0.1103
10                             -0.1044
20                             -0.1032
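The scores above are negative log loss values, so values closer to zero indicate better probabilistic predictions; here the score improves as more features are kept. Below is a minimal, illustrative sketch (not the original code) of how such scores can be produced: keep the k best features, then evaluate a classifier with cross-validation using scikit-learn's "neg_log_loss" scoring. The dataset, classifier, and feature selector are placeholders.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder dataset; the original experiment's data is not shown here
X, y = make_classification(n_samples=1000, n_features=50, n_informative=10, random_state=0)

for k in [1, 2, 5, 10, 20]:
    # Keep the k best features (ANOVA F-test), then fit a simple classifier
    pipeline = make_pipeline(SelectKBest(f_classif, k=k), LogisticRegression(max_iter=1000))
    scores = cross_val_score(pipeline, X, y, scoring="neg_log_loss", cv=5)
    print(f"{k} features selected: mean neg_log_loss = {scores.mean():.4f}")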
# Importing relevant libraries and modules
import torch
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch.nn as nn
import torch.nn.functional as F
# Importing the Fashion MNIST Dataset as Pandas dataframes
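# A purely illustrative loading step, assuming the common CSV distribution of
# Fashion MNIST (a "label" column followed by 784 pixel columns); the file
# paths and variable names below are hypothetical, not the original code.
train_df = pd.read_csv("fashion-mnist_train.csv")  # hypothetical path
test_df = pd.read_csv("fashion-mnist_test.csv")    # hypothetical path
X_train = train_df.drop(columns=["label"]).to_numpy(dtype="float32") / 255.0
y_train = train_df["label"].to_numpy()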
# Importing Concrete ML's utility for compiling PyTorch models into FHE circuits
from concrete.ml.torch.compile import compile_torch_model
# Compiling the model into an FHE circuit using Concrete ML
quantised_compiled_module = compile_torch_model(
    model,
    X_train,
    n_bits=3,  # Quantisation bit width; 3 bits is sufficient in this case
)
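# A hedged sketch of how the compiled module could then be used for inference.
# The exact API depends on the Concrete ML version: recent releases return a
# quantized module whose forward() accepts an `fhe` argument ("disable" for
# clear inference, "simulate" for FHE simulation, "execute" for real FHE
# execution). X_test is assumed to be a NumPy array of test samples.
y_pred_clear = quantised_compiled_module.forward(X_test, fhe="disable")
y_pred_fhe = quantised_compiled_module.forward(X_test[:1], fhe="execute")  # FHE execution is slow, so start with one sample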
# Importing relevant libraries
import numpy as np
import torch
from torch import nn
# Defining the neural network using PyTorch
# It has 2 hidden layers; both hidden layers and the output layer have 3 neurons each
# The scikit-learn wine dataset is simple enough that such a small neural network
# is already sufficient for achieving high accuracy
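# A minimal sketch matching the description above (not necessarily the author's
# exact definition): 13 input features from the wine dataset, two 3-neuron
# hidden layers, and a 3-neuron output layer for the 3 wine classes.
# The ReLU activations and the class/variable names are assumptions.
class TinyWineNet(nn.Module):
    def __init__(self, n_features=13, n_classes=3):
        super().__init__()
        self.fc1 = nn.Linear(n_features, 3)
        self.fc2 = nn.Linear(3, 3)
        self.fc3 = nn.Linear(3, n_classes)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

model = TinyWineNet()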
# Parameter grid used for GridSearchCV
param_grid = {
    "max_depth": list(range(1, 5)),
    "n_estimators": list(range(1, 201, 20)),
    "learning_rate": [0.01, 0.1, 1],
}