// A mock reference monitor design for a mock security system, enforcing the following access control policy:
// Only the user 'Alice', who identifies herself on the requested inputs
// with her name and her birthday of '01.01.1980', is granted access to
// the top secret recipe file, stored on disk.
// This reference monitor design provides (some) security towards the given access policy
// by a) storing a hash, rather than the plain values, of the user inputs (name, birthday) to compare against,
// and b) deriving a symmetric key from the salted valid inputs, used for AES-encrypting the top secret recipe file beforehand
// and for decrypting it before returning it to Alice.
//
// Of course these comments would not be kept in the source code.
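The scheme described above can be sketched in Python. The names (`request_recipe`, `derive_key`), the fixed salt, and the choice of PBKDF2-HMAC-SHA256 for key derivation are illustrative assumptions; the AES encryption/decryption of the recipe file itself is stubbed out, so only the hash comparison and key derivation are shown:

```python
import hashlib
import hmac

SALT = b"demo-salt"  # assumption: in practice a random, secret salt

# Stored at setup time: salted hash of the one valid (name, birthday) pair.
STORED_HASH = hashlib.sha256(SALT + b"Alice|01.01.1980").digest()

def derive_key(name: str, birthday: str) -> bytes:
    """Derive a 32-byte symmetric key from the inputs (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", f"{name}|{birthday}".encode(), SALT, 100_000)

def request_recipe(name: str, birthday: str):
    """Reference monitor: return a decryption key only on a credential match."""
    candidate = hashlib.sha256(SALT + f"{name}|{birthday}".encode()).digest()
    if not hmac.compare_digest(candidate, STORED_HASH):
        return None  # access denied
    # With the key in hand, the AES-encrypted recipe file would be
    # decrypted and returned here (encryption step omitted in this sketch).
    return derive_key(name, birthday)
```

Note that `hmac.compare_digest` is used for the comparison to avoid timing side channels.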
# pseudocode impl
# Algorithm 1 Pseudocode in a PyTorch-like style.
# for x in loader:  # x: batch with B sequences
#     # Split image into patches
#     # B x C x T x H x W -> B x C x T x N x h x w
#     x = unfold(x, (patch_size, patch_size))
#     x = spatial_jitter(x)
#     # Embed patches (B x C x T x N)
#     v = l2_norm(resnet(x))
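The `unfold` step above can be reproduced in plain NumPy. This is a sketch under the assumption of non-overlapping patches with H and W divisible by `patch_size`; `unfold_patches` is an illustrative name, not the PyTorch API:

```python
import numpy as np

def unfold_patches(x, patch_size):
    """Split each frame into non-overlapping patches:
    (B, C, T, H, W) -> (B, C, T, N, h, w) with N = (H // h) * (W // w)."""
    B, C, T, H, W = x.shape
    h = w = patch_size
    x = x.reshape(B, C, T, H // h, h, W // w, w)
    x = x.transpose(0, 1, 2, 3, 5, 4, 6)  # bring the two patch-grid axes together
    return x.reshape(B, C, T, -1, h, w)
```

Patches are enumerated row-major over the patch grid, so patch `n = 0` is the top-left `h x w` block of each frame.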
""" | |
backprop algorithm on F[a] = xa^2 + a | |
For each node function f_i | |
- method to calculate value on input, f_i(x_i) | |
- method to calculate derivative value on input, K_ij | |
- method to calculate parameter derivative on input, xi_i | |
Suppose a single input x in R. Suppose the functional to be | |
optimized is F[a] = xa^2 + a |
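For this one-parameter functional the backward pass reduces to a single derivative, dF/da = 2xa + 1, so gradient descent can be sketched directly, skipping the node/graph machinery (learning rate and iteration count are arbitrary choices); for x > 0 it converges to the stationary point a* = -1/(2x):

```python
def F(a, x):
    """The functional F[a] = x a^2 + a."""
    return x * a ** 2 + a

def dF_da(a, x):
    """Derivative of F with respect to the parameter a: 2 x a + 1."""
    return 2 * x * a + 1

x, a, lr = 1.0, 0.0, 0.1
for _ in range(200):
    a -= lr * dF_da(a, x)  # plain gradient descent on a
# for x = 1 this converges to a* = -1/(2x) = -0.5
```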
# -*- coding: utf-8 -*-
"""
Standard discriminative Gaussian model
y ~ N(f(x), sigma^2)
as well as the heteroscedastic model
y ~ N(f(x), sigma^2(x)),
training on a dataset requiring the heteroscedastic model:
x in R, y in R^2
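The difference between the two noise models can be sketched by evaluating the Gaussian negative log-likelihood under each. The toy data and the noise function 0.1 + |x| below are illustrative assumptions, and the mean function is taken as known rather than learned:

```python
import numpy as np

def gaussian_nll(y, mu, sigma):
    """Per-sample -log p for y ~ N(mu, sigma^2)."""
    return np.log(sigma) + 0.5 * np.log(2 * np.pi) + (y - mu) ** 2 / (2 * sigma ** 2)

# toy 1-D data with x-dependent noise: y = 2x + eps, std(eps) = 0.1 + |x|
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = 2 * x + (0.1 + np.abs(x)) * rng.standard_normal(1000)

mu = 2 * x  # take the mean function f(x) = 2x as known
nll_homo = gaussian_nll(y, mu, (y - mu).std()).mean()     # single best constant sigma
nll_hetero = gaussian_nll(y, mu, 0.1 + np.abs(x)).mean()  # true x-dependent sigma
```

On such data the heteroscedastic model attains a strictly lower average NLL than any single constant sigma, which is what makes the dataset "require" it.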
# gmm_gd.py
"""
Direct gradient descent on a 2-state Gaussian mixture model.
Not the best way to do this; one would typically use the EM algorithm instead.
Training is highly unstable.
model:
p(x) = pi * phi_1 + (1 - pi) * phi_2
phi_1, phi_2 ~ normal
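A sketch of the direct-gradient approach (numerical central differences stand in for autodiff here; the toy data and hyperparameters are assumptions). Reparameterizing through a sigmoid and exponentials keeps pi in (0, 1) and the standard deviations positive, which is one way to make plain gradient descent applicable:

```python
import numpy as np

def gmm_nll(params, x):
    """Mean negative log-likelihood of a 2-component 1-D Gaussian mixture.
    params = (logit_pi, mu1, mu2, log_s1, log_s2), all unconstrained."""
    logit_pi, mu1, mu2, ls1, ls2 = params
    pi = 1 / (1 + np.exp(-logit_pi))
    s1, s2 = np.exp(ls1), np.exp(ls2)
    phi1 = np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / (s1 * np.sqrt(2 * np.pi))
    phi2 = np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / (s2 * np.sqrt(2 * np.pi))
    return -np.log(pi * phi1 + (1 - pi) * phi2 + 1e-12).mean()

def numeric_grad(f, p, eps=1e-5):
    """Central-difference gradient of f at p."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
p0 = np.array([0.0, -1.0, 1.0, 0.0, 0.0])  # crude symmetric initialization
p = p0.copy()
for _ in range(500):
    p -= 0.1 * numeric_grad(lambda q: gmm_nll(q, x), p)
```

With well-separated clusters and a reasonable initialization this run is stable; the instability noted above shows up with overlapping components or unlucky starts, where one component's sigma can collapse.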
# -*- coding: utf-8 -*-
"""
Generative Gaussian model, minimizing <-log p>_data wrt. mu, sigma
with gradient descent.
-log p = log sigma + log sqrt(2 pi) + (x - mu)^2 / (2 sigma^2)
So
grad_mu (-log p) = -(x - mu) / sigma^2
grad_sigma (-log p) = 1 / sigma - (x - mu)^2 / sigma^3
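The two gradient formulas above translate directly into a descent loop (the toy data, learning rate, and iteration count are illustrative choices). At the fixed point, mu and sigma equal the sample mean and the population standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(3.0, 2.0, 2000)  # toy data, true mu = 3, sigma = 2

mu, sigma = 0.0, 1.0
lr = 0.05
for _ in range(2000):
    g_mu = -np.mean(x - mu) / sigma ** 2                       # grad_mu <-log p>
    g_sigma = 1 / sigma - np.mean((x - mu) ** 2) / sigma ** 3  # grad_sigma <-log p>
    mu -= lr * g_mu
    sigma -= lr * g_sigma
```

Setting the averaged gradients to zero recovers the closed-form MLE: mu = mean(x) and sigma^2 = mean((x - mu)^2), so the loop is only a pedagogical detour around those formulas.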
# array_reinforcement_learning.py
"""
array_reinforcement_learning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Reinforcement learning is performed on a 1-dimensional
finite state space ("array") of k elements:
S = {1,...,k}
There are two possible actions: move right (a = 1) or move left (a = -1),
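A sketch of tabular Q-learning on such an array. The reward structure (+1 for reaching the right end) and all hyperparameters are assumptions, since the preview cuts off before the original reward definition:

```python
import numpy as np

k = 5                 # states 0..k-1; state k-1 is terminal with reward +1
Q = np.zeros((k, 2))  # Q[s, a]: a = 0 moves left, a = 1 moves right
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for _ in range(500):  # episodes, each starting at the left end
    s = 0
    while s != k - 1:
        # epsilon-greedy action selection, breaking exact ties at random
        if rng.random() < eps or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(2))
        else:
            a = int(Q[s].argmax())
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == k - 1 else 0.0
        target = r + gamma * Q[s2].max() * (s2 != k - 1)  # no bootstrap past terminal
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
```

After training, the greedy policy moves right from every non-terminal state, and Q[s, 1] approaches gamma^(k-2-s).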
""" | |
attempting to learn query completion as product of all next character models | |
P(x_c|x_q) = prod_i P(x_i | x_1:i-1) | |
Does not scale well for any reasonable sized text document(s). Need smaller length distributions, approximations. | |
""" | |
import numpy as np | |
import random | |
import tensorflow.keras as keras |
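As a scaled-down sketch of the product-of-next-character-models idea, here is a count-based bigram model; truncating the context to one character is an approximation of the full P(x_i | x_1:i-1), not the neural model the imports above suggest:

```python
from collections import defaultdict

def train_bigram(text):
    """Count next-character occurrences: counts[prev][ch]."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def completion_prob(counts, query, completion):
    """P(completion | query) as a product of next-character probabilities."""
    p, context = 1.0, query
    for ch in completion:
        prev = counts[context[-1]]
        total = sum(prev.values())
        p *= prev[ch] / total if total else 0.0
        context += ch
    return p

counts = train_bigram("the theme of the thesis")
```

For example, on this tiny corpus every 'h' is followed by 'e', so `completion_prob(counts, "th", "e")` is 1.0, while unseen continuations get probability 0 — illustrating why smoothing and better length/approximation schemes are needed at scale.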
""" | |
--------------------------------------------------- | |
Output: | |
epoch loss: 78.85499735287158 | |
epoch loss: 0.0008048483715437094 | |
epoch loss: 7.917497569703835e-06 | |
epoch loss: 7.784523854692527e-08 | |
epoch loss: 1.082900831506084e-09 |
""" | |
Currently trains with decreasing loss | |
*** epoch: 0 epoch loss: 276.47448682785034 | |
*** epoch: 1 epoch loss: 216.9058997631073 | |
*** epoch: 2 epoch loss: 190.01888144016266 | |
*** epoch: 3 epoch loss: 171.68642991781235 | |
*** epoch: 4 epoch loss: 157.7317717075348 | |
*** epoch: 5 epoch loss: 145.89844578504562 | |
... |