Cedric Chee (cedrickchee)

@cedrickchee
cedrickchee / README.md
Last active January 15, 2024 23:06
Machine learning/deep learning: how to get notifications of 'end of training' on your mobile phone.

How to get notifications of 'end of training' on your mobile phone

I often train machine learning/deep learning models, and they take a very long time to finish. Even one epoch of a moderately complex model takes nearly half an hour to train, so I constantly need to check on (babysit) the training process.

To reduce the pain, I need a way to be notified of the training metrics. The idea: send the training metrics (as messages) to my mobile phone as push notifications, triggered from PyTorch callbacks.

I have written some Python code snippets that help me send my training metrics log as mobile push notifications using the Pushover service. Pushover has a limit of 7,500 requests per month per user, which is fine for my use case.

If you'd like something like this, feel free to grab these little hacky scripts.
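
As a rough illustration of the idea, here is a minimal sketch that posts a message to Pushover's REST API using the requests library. The token and user key are placeholders for your own Pushover credentials, and notify is a hypothetical helper name, not the gist's actual code:

import requests

PUSHOVER_TOKEN = "your-application-token"  # placeholder: your Pushover app token
PUSHOVER_USER = "your-user-key"            # placeholder: your Pushover user key

def notify(message):
    # Pushover's REST API takes one POST per notification
    requests.post(
        "https://api.pushover.net/1/messages.json",
        data={"token": PUSHOVER_TOKEN, "user": PUSHOVER_USER, "message": message},
    )

# For example, call it at the end of a training loop or from a callback:
# notify("Training done. val_loss: %.4f" % val_loss)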

@cedrickchee
cedrickchee / nalu.py
Last active August 4, 2018 09:50
Keras implementation of DeepMind's Neural Arithmetic Logic Units (NALU). Paper: https://arxiv.org/abs/1808.00508
import numpy as np
import keras.backend as K
from keras.layers import *
from keras.models import *
import tensorflow as tf

class Nalu(Layer):
    def __init__(self, units, krnl_init="glorot_uniform", **kwargs):
        # Mirror Keras' own input_dim/input_shape convention so the layer
        # can be used as the first layer of a model (preview truncated here)
        if "inp_shp" not in kwargs and "inp_dim" in kwargs:
            kwargs["inp_shp"] = (kwargs.pop("inp_dim"),)
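
For reference, here is a minimal sketch of the NALU forward pass as described in the paper, not the gist's exact code; w_hat, m_hat, and g_weights are assumed to be learned weight tensors of shape (input_dim, units):

import keras.backend as K

def nalu_forward(x, w_hat, m_hat, g_weights, eps=1e-7):
    # NAC weight matrix: tanh * sigmoid biases entries toward {-1, 0, 1}
    w = K.tanh(w_hat) * K.sigmoid(m_hat)
    a = K.dot(x, w)                             # additive path (add/subtract)
    # Multiplicative path: addition in log space implements multiply/divide
    m = K.exp(K.dot(K.log(K.abs(x) + eps), w))
    g = K.sigmoid(K.dot(x, g_weights))          # learned gate between the two paths
    return g * a + (1 - g) * m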
@cedrickchee
cedrickchee / DOCS.md
Last active August 12, 2018 15:50
fastai v1—the rewrite of the fast.ai deep learning library: https://github.com/fastai/fastai_v1

About fastai_v1

fastai_v1 is the codename for version 1.0 of the fastai deep learning library. It marks the beginning of the new version of the library.

From the fast.ai forums:

We’re doing a rewrite of the fastai library, with the following goals:

  • Support consistent API for classification, regression, localization, and generation, across all of: vision, NLP, tabular data, time series, and collaborative filtering
  • Clear and complete documentation for both new and experienced users
@cedrickchee
cedrickchee / blog_post_draft.md
Last active August 14, 2018 06:33
A Deep Learner's Notes

Hey folks. I hope your day is going well.

Today, I am excited to share my complete course notes for fast.ai's "Cutting Edge Deep Learning for Coders". These are my personal notes on the 2018 edition of fast.ai Deep Learning Part 2. The notes are mainly a written text transcript of each video lesson, and they are partially time-coded. Thanks to our fellow student Hiromi Suenaga for manually transcribing (the old-fashioned way) the full set of videos.

Benefits of this:

  • We can refer back to the transcripts without having to play the videos all the time.
  • This will save us a ton of time and help us learn more effectively.
    • The transcript files are extremely helpful for quickly searching the content.
  • For those whose first language is not English, a major impediment to understanding the content is the lack of a written transcript or course notes.

For a long time I held off starting my own side projects because of how much I didn’t know how to do.

For every project I could think of, there were several features I had absolutely no idea how to build. I would always ask myself how I could start working on something when I didn’t even know half of what it took to finish it. I was convinced I needed to learn more before I could build anything of my own.

So, instead of building my own projects, I got stuck in what I’ll call “tutorial purgatory.” Since I felt like I learned best this way, I read and watched every tutorial I could find that seemed interesting and that I thought might apply to my own projects one day. I spent month after month doing this, filling my nights with endless videos on YouTube, Udemy, and whatever other tutorial site I stumbled across. I learned a lot, and forgot nearly as much in the process.

Don’t get me wrong. I love tutorials, and I think learning the basics from tutorials is a great way to get started. But if you’re not careful…

@cedrickchee
cedrickchee / note_to_self.md
Created September 23, 2018 09:18
What are some things that only someone who has been programming 20-50 years would know?
  1. Everything in software development has already been invented. People just keep rediscovering it and pretending they invented it. Whatever you think is so cool and new, was copied from Smalltalk, or HAKMEM, or Ivan Sutherland, or Douglas Engelbart, or early IBM, or maybe Bell Labs.

  2. Don’t trust the compiler. Don’t trust the tools. Don’t trust the documentation. Don’t trust yourself.

  3. We don’t need any more computer languages. Still, you will run right off and invent another one. Let me guess, your amazing new language uses IEEE-754 math and fixed-precision integers. Your amazing new language is broken. (A small illustration follows this list.)

  4. Maintaining code is harder than writing it. Writing lots and lots of new code can be a mark of laziness.

  5. You have been taught to program as though memory, processor time, and network bandwidth are all free and infinite. It isn’t, it isn’t, and it isn’t. Read the rest of Knuth’s paragraph about premature optimization.
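
To make point 3 concrete, here is a small Python illustration of why IEEE-754 floats and fixed-precision integers surprise language designers (the NumPy line wraps around, typically with a RuntimeWarning):

print(0.1 + 0.2 == 0.3)   # False: 0.1 and 0.2 have no exact binary representation
print(0.1 + 0.2)          # 0.30000000000000004

import numpy as np
print(np.int32(2**31 - 1) + np.int32(1))  # fixed-precision overflow: wraps to -2147483648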

@cedrickchee
cedrickchee / life_pro_tips_day_1.md
Last active December 15, 2018 11:54
What did you learn the hard way?

Worse is better

To put it a bit more optimistically—usable now is better than perfect later.

I have found that, if I disappear behind a curtain and spend a long time trying to make something really well-polished and feature rich, that just gives the user a lot of time to build up their expectations, and also to get frustrated by the delay.

By the time you ship, they will be actively looking for ways to find fault. When I YAGNI my way into an 80% or 90% solution and turn it around quickly, though, more often than not they will initially just be impressed at how quickly I was able to help them. Requests for changes will come, but they are generally small, so it's usually relatively easy to turn those around quickly as well.

@cedrickchee
cedrickchee / summary.md
Last active December 24, 2018 13:11
Prof. Michael Levin's talk, "What Bodies Think About: Bioelectric Computation Outside the Nervous System"

Overview

At NeurIPS 2018, Prof. Michael Levin presented, "What Bodies Think About: Bioelectric Computation Outside the Nervous System" (video recording).

Summary by Guillermo Valle:

Bodies have bioelectrical patterns that store information ("memories") separately from genomic and anatomical states. These bioelectrical patterns play a huge role in developmental processes, so being able to control them is basically a holy grail of regenerative medicine. It also offers interesting new insights for Artificial Intelligence and cognition/neuroscience.

Opinion

@cedrickchee
cedrickchee / advantage_capsule_layer.md
Last active December 25, 2018 02:41
Text classification (NLP) using Capsule Network (aka CapsNet) layer and GRU/LSTM block

We will look at the advantage of the Capsule layer in text classification.

CapsNet Model

The architecture of our model with CapsNet is very similar to the general architecture, except for an additional Capsule layer; a rough sketch follows.
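
As an illustrative sketch only (the layer names and hyperparameters here are assumptions, and Capsule stands for a custom capsule layer with dynamic routing, which stock Keras does not provide):

from keras.layers import Input, Embedding, Bidirectional, GRU, Flatten, Dense
from keras.models import Model

MAXLEN, VOCAB, EMBED, CLASSES = 100, 20000, 300, 5  # illustrative hyperparameters

inp = Input(shape=(MAXLEN,))
x = Embedding(VOCAB, EMBED)(inp)
x = Bidirectional(GRU(128, return_sequences=True))(x)  # contextual features per token
# `Capsule` is an assumed custom layer (dynamic routing), replacing the usual pooling
x = Capsule(num_capsule=10, dim_capsule=16, routings=3)(x)
x = Flatten()(x)
out = Dense(CLASSES, activation="softmax")(x)
model = Model(inp, out)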

Advantage of Capsule Layer in Text Classification

@cedrickchee
cedrickchee / pytorch_distributed.md
Created December 25, 2018 17:06
Set up PyTorch 1.0 stable distributed training

The distrib_train function does all the setup required for distributed training:

import torch

def distrib_train(gpu):
    if gpu is None:
        return gpu
    gpu = int(gpu)
    torch.cuda.set_device(gpu)  # bind this process to its GPU
    # NCCL backend; rank and world size are read from environment variables
    torch.distributed.init_process_group(backend='nccl', init_method='env://')
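
For context, a minimal single-process sketch of how this might be invoked; init_method='env://' reads the rendezvous settings from these environment variables, and the values here are illustrative:

import os

os.environ.setdefault('MASTER_ADDR', '127.0.0.1')  # rendezvous host
os.environ.setdefault('MASTER_PORT', '29500')      # rendezvous port
os.environ.setdefault('RANK', '0')                 # this process's global rank
os.environ.setdefault('WORLD_SIZE', '1')           # total number of processes

distrib_train(0)  # set up the process group on GPU 0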