GitHub gists of Abien Fred Agarap (AFAgarap)
@AFAgarap
AFAgarap / autoencoder-pytorch.ipynb
Last active April 28, 2024 12:30
PyTorch implementation of an autoencoder.
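The notebook itself does not render in the gist viewer. As a stand-in, here is a minimal sketch of the kind of PyTorch autoencoder it covers; the 784 -> 128 layer sizes are assumed from the companion gists further down, not read from the notebook:

import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_shape=784):
        super().__init__()
        # 784 -> 128 bottleneck, mirroring the AE class in the gists below
        self.encoder = nn.Sequential(nn.Linear(input_shape, 128), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(128, input_shape), nn.ReLU())

    def forward(self, features):
        # compress to the latent code, then reconstruct the input
        return self.decoder(self.encoder(features))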
@AFAgarap
AFAgarap / polynomial-least-squares.py
Last active March 11, 2024 10:22
A program implementation of Polynomial Least Squares written in Python.
# Copyright 2017 Abien Fred Agarap. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
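Only the license header survives the preview cutoff. As a sketch of the technique the gist names, polynomial least squares reduces to solving the normal equations over a Vandermonde matrix; the function name, degree, and sample data below are illustrative, not the gist's actual API:

import numpy as np

def polynomial_least_squares(x, y, degree):
    """Fit a degree-`degree` polynomial to (x, y) by least squares."""
    # build the Vandermonde matrix [1, x, x^2, ..., x^degree]
    A = np.vander(x, degree + 1, increasing=True)
    # solve min ||A @ coeffs - y||_2 for the coefficients
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 9.0, 19.0])
print(polynomial_least_squares(x, y, degree=2))  # ~[1, 0, 2] for y = 2x^2 + 1

np.linalg.lstsq solves the minimization directly, which is numerically safer than forming the normal-equation product A^T A explicitly.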
@AFAgarap
AFAgarap / load_mnist.py
Last active March 6, 2024 03:07
Loading the MNIST dataset and creating torch.utils.data.DataLoader objects for it.
import torch
import torchvision

transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
train_dataset = torchvision.datasets.MNIST(
    root="~/torch_datasets", train=True, transform=transform, download=True
)
test_dataset = torchvision.datasets.MNIST(
    root="~/torch_datasets", train=False, transform=transform, download=True
)
# wrap the datasets in data loaders; batch sizes follow the training gist below
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=32, shuffle=False)
@AFAgarap
AFAgarap / paging.py
Created October 1, 2016 13:28
A Python implementation of the paging memory management scheme used in operating systems.
#!/usr/bin/env python3
# Paging, a memory management scheme in operating systems
# Author: A.F. Agarap
def main():
page_mapping = []
physical_frame_table = []
virtual_page_table = []
size = int(input("How many words will you enter? "))
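The preview cuts off after reading the input size. A rough sketch of how such a simulation can proceed; the sequential frame-assignment policy and page size below are assumptions, not the gist's actual logic:

def paging_demo(words, page_size=4, num_frames=8):
    """Map each word's virtual page to a physical frame."""
    page_mapping = {}
    for address, word in enumerate(words):
        virtual_page = address // page_size
        if virtual_page not in page_mapping:
            # assign the next physical frame to the new virtual page
            page_mapping[virtual_page] = len(page_mapping) % num_frames
        frame = page_mapping[virtual_page]
        offset = address % page_size
        print(f"{word!r}: virtual page {virtual_page} -> frame {frame}, offset {offset}")

paging_demo(["the", "quick", "brown", "fox", "jumps"])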
@AFAgarap
AFAgarap / autoencoder-full.py
Last active March 27, 2022 14:56
TensorFlow 2.0 implementation of a vanilla autoencoder. Link to tutorial: https://medium.com/@abien.agarap/implementing-an-autoencoder-in-tensorflow-2-0-5e86126e9f7
"""TensorFlow 2.0 implementation of vanilla Autoencoder."""
import numpy as np
import tensorflow as tf
__author__ = "Abien Fred Agarap"
np.random.seed(1)
tf.random.set_seed(1)
batch_size = 128
epochs = 10
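The preview stops at the hyperparameters. A minimal sketch of the vanilla autoencoder the linked tutorial builds, with an encoder and a decoder made of single Dense layers; the intermediate dimension of 64 is an assumption:

import tensorflow as tf

class Autoencoder(tf.keras.Model):
    def __init__(self, intermediate_dim=64, original_dim=784):
        super().__init__()
        # the encoder compresses the input; the decoder reconstructs it
        self.encoder = tf.keras.layers.Dense(intermediate_dim, activation=tf.nn.relu)
        self.decoder = tf.keras.layers.Dense(original_dim, activation=tf.nn.relu)

    def call(self, input_features):
        code = self.encoder(input_features)
        return self.decoder(code)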
@AFAgarap
AFAgarap / autoencoder.py
Last active December 25, 2020 18:34
PyTorch implementation of a vanilla autoencoder model.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        self.encoder_hidden_layer = nn.Linear(in_features=kwargs["input_shape"], out_features=128)
        self.encoder_output_layer = nn.Linear(in_features=128, out_features=128)
        self.decoder_hidden_layer = nn.Linear(in_features=128, out_features=128)
        self.decoder_output_layer = nn.Linear(in_features=128, out_features=kwargs["input_shape"])

    def forward(self, features):
        # encode the input down to the latent code, then decode it back
        activation = torch.relu(self.encoder_hidden_layer(features))
        code = torch.relu(self.encoder_output_layer(activation))
        activation = torch.relu(self.decoder_hidden_layer(code))
        reconstructed = torch.relu(self.decoder_output_layer(activation))
        return reconstructed
@AFAgarap
AFAgarap / train_ae.py
Created January 7, 2020 12:55
Training a model in PyTorch.
# model, optimizer, and criterion (nn.MSELoss) come from the instantiation gist below
for epoch in range(epochs):
    loss = 0
    for batch_features, _ in train_loader:
        # reshape mini-batch data to a [N, 784] matrix
        # and load it to the active device
        batch_features = batch_features.view(-1, 784).to(device)
        # reset the gradients back to zero;
        # PyTorch accumulates gradients on subsequent backward passes
        optimizer.zero_grad()
        # compute reconstructions and the reconstruction loss
        outputs = model(batch_features)
        train_loss = criterion(outputs, batch_features)
        # compute gradients and update the parameters
        train_loss.backward()
        optimizer.step()
        loss += train_loss.item()
    # report the mean loss over the epoch
    print(f"epoch : {epoch + 1}/{epochs}, loss = {loss / len(train_loader):.6f}")
@AFAgarap
AFAgarap / vae.py
Created May 16, 2019 09:33
TensorFlow 2.0 implementation of a variational autoencoder model.
class VariationalAutoencoder(tf.keras.Model):
    def __init__(self, latent_dim, original_dim):
        super().__init__()
        self.encoder = Encoder(latent_dim=latent_dim)
        self.decoder = Decoder(original_dim=original_dim)

    def call(self, input_features):
        z_mean, z_log_var, latent_code = self.encoder(input_features)
        reconstructed = self.decoder(latent_code)
        # KL(q(z|x) || N(0, I)) = 0.5 * sum(exp(log_var) + mean^2 - 1 - log_var);
        # the factor must be a positive 0.5 (the original -5e-2 flips the sign,
        # which would reward increasing the divergence)
        kl_divergence = 5e-1 * tf.reduce_sum(
            tf.exp(z_log_var) + tf.square(z_mean) - 1 - z_log_var
        )
        self.add_loss(kl_divergence)
        return reconstructed
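With the KL term registered on the model via add_loss, a custom training step can combine it with the reconstruction error. A sketch with tf.GradientTape; the MSE reconstruction loss and Adam settings below are assumptions, not the gist's code:

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def train_step(model, batch):
    with tf.GradientTape() as tape:
        reconstructed = model(batch)
        # reconstruction error plus the KL term collected from add_loss()
        loss = tf.reduce_mean(tf.square(batch - reconstructed)) + sum(model.losses)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss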
@AFAgarap
AFAgarap / instantiate_objects.py
Last active January 14, 2020 12:59
Object instantiation for training an autoencoder written in PyTorch.
# use gpu if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# create a model from the `AE` autoencoder class
# and load it to the specified device, either gpu or cpu
model = AE(input_shape=784).to(device)
# create an optimizer object:
# Adam with learning rate 1e-3
optimizer = optim.Adam(model.parameters(), lr=1e-3)
# mean-squared error loss for the reconstruction, used by the training loop above
criterion = nn.MSELoss()
A companion snippet (its gist header is missing from the listing above) computes trust scores over the autoencoder's latent representations:

from trustscore import TrustScore

# fit the trust scorer on the encoded training features and their labels
ts = TrustScore(alpha=5e-2)
ts.fit(encoded_train_features, train_labels)
# score the encoded test features against the classifier's predictions;
# also returns the closest class other than the predicted one
trust_score, closest_class_not_predicted = ts.score(
    encoded_test_features, predictions, k=5
)