While doing my PhD, I have also written code replicating other papers and contributing to other projects:
- Collaborated on the code for "Moonshine: Distilling with Cheap Convolutions"
- Replication of "Variational Dropout and the Local Reparameterization Trick"
- Replication of "Variational Dropout Sparsifies Deep Neural Networks"
- Replication of "The Shattered Gradients Problem: If resnets are the answer, then what is the question?".
- Replication of "SGDR: Stochastic Gradient Descent with Warm Restarts" (predating its inclusion in PyTorch)
- Replication of "Adaptive dropout for training deep neural networks"
- Replication of "Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization"
- Contributed code that sped up an implementation of relational networks by a factor of 10
- Replication of "ACDC: A Structured Efficient Linear Layer"
- A PyTorch interface for loading ImageNet sequentially from disk
- A way to use Autograd inside TensorFlow
- Notes showing relational networks can be used for few-shot learning (didn't follow this up, but later found it would have been concurrent with "Learning to Compare: Relation Network for Few-Shot Learning")
- A JupyterHub deployment for the University of Edinburgh course "Data, Design and Society"
- Contributed to Neuroglycerin's entries to the AES seizure prediction competition and the first National Data Science Bowl
- A tutorial on Black Box Stochastic Variational Inference for a postgrad study group (a minimal sketch of the estimator follows this list)
- Ordered discrete distributions with nested Gumbel-softmaxes (see the Gumbel-softmax sketch after this list)
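
For the SGDR item, the schedule is cosine annealing with periodic warm restarts, which PyTorch later added as `torch.optim.lr_scheduler.CosineAnnealingWarmRestarts`. The snippet below is a minimal illustration of that schedule using the built-in scheduler, not my replication code; the toy model, optimiser settings, `T_0`, `T_mult` and `eta_min` are arbitrary.

```python
import torch
from torch import nn, optim

# Toy model and optimiser; the learning-rate schedule is the point, not the model.
model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Cosine annealing with warm restarts (the SGDR schedule), now built into PyTorch.
# Within each cycle the LR follows a half-cosine from the base LR down to eta_min,
# then jumps back up; T_0 is the first cycle length (epochs), T_mult grows each cycle.
scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=10, T_mult=2, eta_min=1e-4
)

for epoch in range(70):
    # One trivial training step per epoch; a real training loop would go here.
    loss = model(torch.randn(8, 10)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the schedule; the LR restarts at each cycle boundary
    print(epoch, optimizer.param_groups[0]["lr"])
```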
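The Black Box SVI tutorial was built around the score-function (REINFORCE) estimator of the ELBO gradient. As a reminder of what that looks like, here is a self-contained toy example written for this page (not the tutorial notebook itself): fitting a Gaussian q(z) to the posterior of a conjugate Normal-Normal model, with a simple mean baseline and AdaGrad steps as basic variance and step-size controls. The model, data, and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_i ~ N(z, 1) with prior z ~ N(0, 1); fit q(z) = N(mu, sigma^2),
# sigma = exp(log_sigma). Additive constants are dropped throughout; they cancel
# in the centred gradient estimate below.
x = rng.normal(2.0, 1.0, size=20)

def log_joint(z):
    """log p(x, z) = log p(z) + log p(x | z) for the toy model (up to a constant)."""
    return -0.5 * z ** 2 - 0.5 * np.sum((x - z) ** 2)

def log_q(z, mu, log_sigma):
    return -0.5 * ((z - mu) / np.exp(log_sigma)) ** 2 - log_sigma

def grad_log_q(z, mu, log_sigma):
    """Score function: gradient of log q w.r.t. (mu, log_sigma)."""
    sigma = np.exp(log_sigma)
    return np.array([(z - mu) / sigma ** 2, ((z - mu) / sigma) ** 2 - 1.0])

params = np.array([0.0, 0.0])   # (mu, log_sigma)
grad_hist = np.zeros(2)         # AdaGrad accumulator
lr, n_samples = 0.1, 32
for step in range(3000):
    mu, log_sigma = params
    zs = mu + np.exp(log_sigma) * rng.normal(size=n_samples)
    weights = np.array([log_joint(z) - log_q(z, mu, log_sigma) for z in zs])
    weights -= weights.mean()   # simple baseline to reduce variance
    scores = np.array([grad_log_q(z, mu, log_sigma) for z in zs])
    grad = (scores * weights[:, None]).mean(axis=0)     # score-function ELBO gradient
    grad_hist += grad ** 2
    params += lr * grad / (np.sqrt(grad_hist) + 1e-8)   # AdaGrad ascent step

mu, log_sigma = params
print(f"fitted q(z): mean {mu:.3f}, sd {np.exp(log_sigma):.3f} "
      f"(exact posterior: mean {x.sum() / 21:.3f}, sd {1 / np.sqrt(21):.3f})")
```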
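The last item builds on the Gumbel-softmax (Concrete) relaxation. The sketch below shows only that basic building block, drawing a soft one-hot sample that stays differentiable with respect to the logits; it is not the nested/ordered construction from the notes themselves. PyTorch now ships an equivalent as `torch.nn.functional.gumbel_softmax`.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0):
    """Draw a soft one-hot sample from a categorical with the given logits.

    y = softmax((logits + g) / tau), g ~ Gumbel(0, 1); as tau -> 0 the sample
    approaches a hard one-hot vector while remaining differentiable in the logits.
    """
    gumbels = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return F.softmax((logits + gumbels) / tau, dim=-1)

logits = torch.tensor([0.5, 1.0, -0.3], requires_grad=True)
y = gumbel_softmax_sample(logits, tau=0.5)
print(y, y.sum())   # a soft one-hot vector over 3 categories, summing to 1
y[1].backward()     # gradients flow back to the logits through the sample
print(logits.grad)
```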