Regularization ML DL

Regularization

Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error.

To some extent, we are trying to fit a square peg (the data-generating process) into a round hole (our model family).

Strategies for creating a large, deep model that is appropriately regularized:

Parameter Norm Penalties
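
As a one-line reminder of the general form (standard notation, not spelled out in the original note), a parameter norm penalty adds a weighted term to the unregularized objective:

  J̃(θ; X, y) = J(θ; X, y) + α Ω(θ),  with α ≥ 0,

where Ω(θ) = ½‖w‖₂² gives L² regularization and Ω(θ) = ‖w‖₁ gives L¹.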

L² Parameter Regularization
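
A minimal NumPy sketch of what L² regularization looks like in practice (my own toy example, not from the note): the penalty gradient α·w shrinks the weights on every update, which is why it is also called weight decay.

```python
import numpy as np

# Toy linear regression trained by gradient descent with an L2 penalty.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

w = np.zeros(5)
alpha, lr = 1e-2, 1e-1                        # alpha: penalty strength (hyperparameter)

for _ in range(500):
    grad_data = X.T @ (X @ w - y) / len(y)    # gradient of the MSE term
    grad_penalty = alpha * w                  # gradient of (alpha/2) * ||w||_2^2
    w -= lr * (grad_data + grad_penalty)      # weights decay toward zero
```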

L¹ Regularization
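
A minimal sketch of the L¹ case (again my own example): one common way to apply the penalty is a proximal / soft-thresholding step after the ordinary gradient step, which pushes small weights exactly to zero and yields sparse solutions.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1: shrink each weight toward zero by t.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# Inside a training loop (names follow the L2 sketch above):
#   w -= lr * grad_data                 # plain gradient step on the data loss
#   w = soft_threshold(w, lr * alpha)   # then apply the L1 shrinkage
```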

Norm Penalties as Constrained Optimization

Regularization and Under-Constrained Problems

Dataset Augmentation
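
A minimal sketch of image-style augmentation (my example; the transforms and sizes are arbitrary): random horizontal flips and random crops applied per training example.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, crop=28):
    # img: (H, W, C) array; returns a randomly flipped and randomly cropped copy.
    if rng.random() < 0.5:
        img = img[:, ::-1, :]                     # horizontal flip
    h, w, _ = img.shape
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    return img[top:top + crop, left:left + crop]  # random crop
```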

Noise Robustness

Injecting Noise at the Output Targets
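
A minimal sketch of one way to do this, label smoothing (my example): replace the hard 0/1 targets with a mixture of the one-hot vector and the uniform distribution over classes.

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    # y: integer class labels, shape (N,)
    onehot = np.eye(num_classes)[y]                    # hard one-hot targets
    return (1.0 - eps) * onehot + eps / num_classes    # softened targets

print(smooth_labels(np.array([2]), num_classes=4))     # [[0.025 0.025 0.925 0.025]]
```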

Semi-Supervised Learning

Multitask Learning

Early Stopping
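
A minimal sketch of the usual recipe (my example; `train_step` and `eval_loss` are hypothetical callables): keep the parameters with the best validation loss and stop once it has not improved for a fixed number of evaluations (the patience).

```python
import copy

def train_with_early_stopping(model, train_step, eval_loss, patience=10):
    best_loss, best_model, waited = float("inf"), None, 0
    while waited < patience:
        train_step(model)                    # one epoch (or chunk) of updates
        val_loss = eval_loss(model)          # loss on the held-out validation set
        if val_loss < best_loss:
            best_loss, best_model, waited = val_loss, copy.deepcopy(model), 0
        else:
            waited += 1
    return best_model, best_loss             # parameters from the best point
```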

Parameter Tying and Parameter Sharing

Convolutional Neural Networks

Sparse Representations

Bagging and Other Ensemble Methods

Dropout
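
A minimal sketch of inverted dropout on a layer's activations (my example): at training time, zero each unit with probability p and rescale the survivors by 1/(1-p), so no rescaling is needed at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, training=True):
    # h: activations of one layer; p: probability of dropping a unit.
    if not training or p == 0.0:
        return h
    mask = rng.random(h.shape) >= p          # keep each unit with prob 1 - p
    return h * mask / (1.0 - p)              # inverted-dropout rescaling
```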

Adversarial Training

Tangent Distance, Tangent Prop, and Manifold Tangent Classifier
