Mike Dusenberry (dusenberrymw). Machine learning ∩ medicine. Research engineer @ Google Brain.
Example ImageNet-style resnet training scenario with synthetic data and using the tf.Estimator API
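The gist's file contents aren't reproduced here, but a rough sketch of what such a setup might look like follows. This is an illustration under stated assumptions, not the gist's actual code: the single conv layer plus dense head below is a stand-in for a real ResNet, and all shapes, names, and hyperparameters are placeholders.

    import tensorflow as tf

    def synthetic_input_fn():
        # One random ImageNet-shaped batch, repeated forever. Reusing a single
        # synthetic batch is a common way to benchmark the training path
        # without touching the real input pipeline.
        images = tf.random.uniform([32, 224, 224, 3])
        labels = tf.random.uniform([32], maxval=1000, dtype=tf.int32)
        return tf.data.Dataset.from_tensors((images, labels)).repeat()

    def model_fn(features, labels, mode):
        # Placeholder for a real ResNet: one conv layer, global average
        # pooling, and a 1000-way classification head.
        net = tf.compat.v1.layers.conv2d(features, 64, 7, strides=2,
                                         activation=tf.nn.relu)
        net = tf.reduce_mean(net, axis=[1, 2])  # global average pooling
        logits = tf.compat.v1.layers.dense(net, 1000)
        loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits))
        optimizer = tf.compat.v1.train.MomentumOptimizer(0.1, momentum=0.9)
        train_op = optimizer.minimize(
            loss, global_step=tf.compat.v1.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

    estimator = tf.estimator.Estimator(model_fn=model_fn,
                                       model_dir="/tmp/resnet_synth")
    estimator.train(input_fn=synthetic_input_fn, steps=100)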
Example ImageNet-style resnet training scenario with synthetic data
In Neovim, the following command opens a small terminal split that continuously recompiles the current file to PDF whenever it changes:
:sp | resize 5 | term latexmk -pdf -pvc %
Best practices
Always include \usepackage[utf8]{inputenc} in the preamble of every document.
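For instance, a minimal preamble might look like the following (the extra fontenc line is just a common companion, not a requirement; note that UTF-8 has been the default input encoding since the 2018 LaTeX kernel, so the declaration mainly keeps older toolchains happy):

    \documentclass{article}
    \usepackage[utf8]{inputenc}  % input encoding
    \usepackage[T1]{fontenc}     % font encoding, for proper accented output
    \begin{document}
    Hello, world.
    \end{document}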
Interesting Machine Learning / Deep Learning Scenarios
This gist aims to explore interesting scenarios that may be encountered while training machine learning models.
Increasing validation accuracy and loss
Let's imagine a scenario in which validation accuracy and validation loss both begin to increase. Intuitively, this seems like it should not happen, since loss and accuracy appear to have an inverse relationship. Let's explore it in the context of a binary classification problem in which a model parameterizes a Bernoulli distribution (i.e., it outputs the probability of the true class) and is trained with the associated negative log likelihood as the loss function (i.e., the "logistic loss", also known as "log loss" or "binary cross entropy").
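For reference, with label y ∈ {0, 1} and predicted probability p, the per-example loss is

    \ell(y, p) = -\left[ y \log p + (1 - y) \log(1 - p) \right]

which grows without bound as the model becomes confidently wrong (e.g., -log(0.01) ≈ 4.6).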
Imagine that the model predicts a probability of 0.99 for a "true" class: it is correct (assuming a decision threshold of 0.5) and incurs a low loss, since it can't do much better on that example. Now, imagine that the model predicts 0.99 for another example whose true class is "false": it is confidently wrong, and that single example contributes a large loss of -log(0.01) ≈ 4.6. Because accuracy only counts whether each prediction lands on the right side of the threshold, while log loss weighs how confident each prediction is, the model can cross the threshold correctly on more examples (accuracy goes up) while becoming very confidently wrong on a few others (mean loss also goes up).
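A tiny numerical sketch of this effect (the labels and predictions below are made up purely for illustration):

    import numpy as np

    def log_loss(y, p):
        # Mean negative log likelihood of a Bernoulli model.
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    y = np.array([1, 1, 1, 1, 0])

    # Hypothetical validation predictions at two checkpoints.
    p_early = np.array([0.6, 0.6, 0.4, 0.4, 0.4])   # hesitant model
    p_late  = np.array([0.9, 0.9, 0.9, 0.9, 0.99])  # confident; one confidently wrong

    for name, p in [("early", p_early), ("late", p_late)]:
        acc = np.mean((p >= 0.5) == y)
        print(f"{name}: accuracy={acc:.2f}, mean loss={log_loss(y, p):.3f}")

    # early: accuracy=0.60, mean loss=0.673
    # late:  accuracy=0.80, mean loss=1.005  -> both accuracy and loss increased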
A solid Git pull request workflow will keep you out of trouble when contributing to projects of interest. The core idea is simple: keep your local master branch solely as a means of pulling the latest updates from the project's official repo, and create a new branch from it for each change you want to work on. Always open PRs from these feature branches. Once a PR is merged into the official repo, move back to master, pull the newly merged changes, and check out a fresh branch for the next item you wish to work on, as sketched below.
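In commands, the loop might look like this, assuming a fork-based setup in which the official repo is the upstream remote and your fork is origin (remote and branch names are placeholders):

    git checkout master
    git pull upstream master         # sync local master with the official repo
    git checkout -b my-feature       # branch off for a new piece of work
    # ...commit your changes, then publish the branch and open a PR from it...
    git push -u origin my-feature
    # After the PR is merged upstream:
    git checkout master
    git pull upstream master         # pick up the merged changes
    git checkout -b next-feature     # start the next item from a fresh branch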