First you'll have to install the Git command line tool on your machine, following these instructions. Then find the repository that you want to contribute to, copy its address from the green "Clone or Download" button, and on your local machine run e.g.
git clone https://github.com/dmurfet/difflinearlogic.git
To see a list of what has changed (optional) run
git status
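Then, once you have made and saved your edits, the usual next steps (this is the standard Git workflow, nothing specific to this repository, and it assumes you have write access rather than working from a fork) are to stage, commit, and push your changes:
git add .
git commit -m "a short description of your changes"
git push
Here git add . stages every modified file in the current directory; you can instead name individual files if you only want to commit some of your changes.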
Spaces of programs and synthesis
written in December 2018.
The practical development of deep learning and its associated infrastructure has initiated a broad re-examination of the practice of computer programming. In this document we briefly survey how this discussion has evolved over the past few years, and then describe our point of view on the underlying mathematics.
We begin with some appeals to authority, in the form of the following references:
Some thoughts on supervision
As a PhD student you are optimising for a goal with a long time horizon (in the first instance to complete a PhD, but then perhaps also to obtain a permanent research position, which could take much longer) and it is hard to determine the correlation between any given intermediate action and eventual success (whatever you define that to be, but two large components could be to prove beautiful theorems and to get a job). This brute fact lies at the root of much stress and uncertainty. How does one prove beautiful theorems? How does one get a job?
Well, who knows, but certainly not by trying to directly optimise for a goal with a decade-long time horizon and this degree of uncertainty! You have to develop shorter-term proxy goals, and it seems to me that part of the job of a supervisor is to assist in that development. If you want to prove beautiful theorems and get a job, it is difficult to infer from first principles the algorithm for doing either of those things.
The rough area at the moment is moduli of A-infinity structures in geometry.
- Homological algebra, category theory
- General category theory (Borceux, Mitchell, Stenström, Mac Lane-Moerdijk)
- General homological algebra (Weibel, Hilton-Stammbach)
- Hochschild homology and cohomology (Loday, Lipman)
- Coalgebras (Sweedler)
- Triangulated categories (Neeman)
In early 2019 I decided to try to understand the University of Melbourne a little better. I have recorded some observations here in case they are useful for other academics. For updates in early 2020 see further down the page. The notes are taken from various University of Melbourne (UoM) official documents, primarily the report discussed in the next paragraph.
To a first approximation, if you want to understand the University I think you should read the report, ignore the glossy bits, and pay close attention to the statistics on p.13 and the financial data reported beginning on p.124. All references in this section are to the report, unless specified otherwise.
- (Student Demographics) The percentage of international students has increased from 28.9% in 2013 to 39.8% in 2017. The overall number of students has increased from 40,455 in 2013 (median ATAR 94.30) to 50,270 in 2017 (median ATAR 93.65); the quick calculation below shows what this shift means in absolute terms.
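As a back-of-envelope check on those figures (my own calculation, assuming the reported percentages apply to the headline totals):

total_2013, total_2017 = 40455, 50270
intl_2013 = 0.289 * total_2013     # about 11,700 international students in 2013
intl_2017 = 0.398 * total_2017     # about 20,000 international students in 2017
dom_2013 = total_2013 - intl_2013  # about 28,800 domestic students in 2013
dom_2017 = total_2017 - intl_2017  # about 30,300 domestic students in 2017

So on these figures almost all of the growth in enrolments over 2013 to 2017 came from international students; domestic numbers were roughly flat.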
Constructing A-infinity categories of matrix factorisations
I am making publicly available my hand-written working notes for the paper "Constructing A-infinity categories of matrix factorisations", in the same spirit that I made available the other notes on my webpage, The Rising Sea. Obviously you should not expect these notes to be as coherent, or as readable, as the final paper, but those marked on the first page as (checked) are indeed checked, to the same level of rigour that I apply to any of my published papers. And they often contain more detail than the paper. I hope you find them useful!
Notes directly used in writing the paper
The main references, written in the same notation and from the same outlook as the final paper, are given below. You should probably start with (ainfmf28). Some of these PDF files are large; you have been warned.
Optimisation algorithms for deep RL
The optimisation algorithm used in most of DeepMind's deep RL papers is RMSProp (e.g. in the Mnih et al. Atari paper, in the IMPALA paper, in the RL experiments of the PBT paper, and in the Zambaldi et al. paper). I have seen speculation online that this is because RMSProp may be well-suited to deep learning on non-stationary distributions. In this note I try to examine the RMSProp algorithm, and specifically the significance of the epsilon hyperparameter.
Often in the literature RMSProp is presented as a variation of AdaGrad (e.g. in the deep learning textbook and in Karpathy's class). However, I think this is misleading, and that the explanation in Hinton's lecture is (not surprisingly) the better one.
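To make the comparison concrete, here is a minimal sketch of both updates in NumPy (the function names and default hyperparameter values are mine, not taken from any of the papers above):

import numpy as np

def adagrad_step(w, g, v, lr=1e-2, eps=1e-8):
    # AdaGrad: v is the running *sum* of squared gradients, so the
    # effective per-parameter step sizes only ever shrink.
    v = v + g ** 2
    w = w - lr * g / (np.sqrt(v) + eps)
    return w, v

def rmsprop_step(w, g, v, lr=1e-3, decay=0.9, eps=1e-8):
    # RMSProp: v is an exponential moving average of squared
    # gradients, so the denominator can shrink again when the
    # gradients do; plausibly the relevant property for the
    # non-stationary objectives arising in deep RL.
    v = decay * v + (1 - decay) * g ** 2
    # Note the placement of eps: some implementations instead use
    # np.sqrt(v + eps), which behaves differently when v is small
    # relative to eps.
    w = w - lr * g / (np.sqrt(v) + eps)
    return w, v

The only structural difference is the accumulator for v. The significance of eps shows up in the regime where v is small: there sqrt(v) is negligible, the denominator is dominated by eps, and the effective step is roughly (lr / eps) * g, so eps caps how much small gradients get magnified.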