
jdsgomes / Tips.md
Last active May 31, 2018 22:31
Development cheat sheet

PyCharm

Format code

Ctrl + Alt + L

Show/hide the project explorer side panel

Alt + 1

Conda

conda upgrade libgcc (libstdc++.so)
jdsgomes / Coding.md
Last active August 24, 2016 13:35
Coding
jdsgomes / CPP.md
Last active August 22, 2016 09:50
C++

C++

C++11 smart pointers

  • shared_ptr
    • A raw pointer can be co-owned by several shared pointers; a reference count is kept and the memory is released when the count reaches 0.
  • unique_ptr
    • A raw pointer can be owned by only one unique pointer; it cannot be copied, only moved. No reference count is needed; the memory is released when the pointer goes out of scope.
  • weak_ptr
    • Does not grant direct access to the pointed-to data; it is a non-owning view that can be used to query whether the data still exists and to create a shared pointer from it (see the sketch after this list).
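
A minimal sketch of the three pointer types (the `Widget` type here is made up for the example, and `std::make_unique` assumes C++14):

```cpp
#include <iostream>
#include <memory>

// Widget is a made-up type just for this sketch.
struct Widget {
    ~Widget() { std::cout << "Widget released\n"; }
};

int main() {
    // shared_ptr: several owners, reference counted.
    std::shared_ptr<Widget> a = std::make_shared<Widget>();
    std::shared_ptr<Widget> b = a;                  // count goes to 2
    std::cout << "count: " << a.use_count() << "\n";

    // unique_ptr: single owner, movable but not copyable.
    std::unique_ptr<Widget> u = std::make_unique<Widget>();
    std::unique_ptr<Widget> v = std::move(u);       // ownership transferred, u is now empty

    // weak_ptr: non-owning view, must be locked before use.
    std::weak_ptr<Widget> w = a;
    if (std::shared_ptr<Widget> locked = w.lock())  // promotes to shared_ptr if still alive
        std::cout << "widget still alive\n";

    a.reset();
    b.reset();                                      // count hits 0 -> first "Widget released"
    std::cout << (w.expired() ? "expired\n" : "alive\n");
}                                                   // v destroyed here -> second "Widget released"
```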

STL

jdsgomes / DeepLearningResources.md
Last active August 22, 2016 09:50
Deep Learning Resources
jdsgomes / MachineLearningResources.md
Last active August 22, 2016 09:51
Machine learning resources

Machine Learning

  • Coursera Machine Learning course by Andrew Ng
    • Linear regression
    • Logistic regression
    • Neural networks (basics)
    • Machine learning tips (how to apply in real situations) and example application
    • SVMs
    • Unsupervised learning
    • Anomaly detection
    • Large-scale learning
jdsgomes / VeryDeepLearning.md
Last active May 30, 2016 09:57
Training very deep neural networks

Very deep neural networks (May 2016)

  • Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Deep Residual Learning for Image Recognition (2015)
    • Uses identity shortcut connections that skip one or more layers and merge back by adding to the output of the last skipped layer (the shortcut formulation is written out after this list). The point of such networks is to train much deeper models without the well-known vanishing gradient problem. They show that residual networks are easier to optimize and gain accuracy from increased depth. The same architecture is used successfully for classification, feature extraction, object detection and segmentation.
  • Sergey Ioffe, Christian Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (2015)
    • An approach that reduces internal covariate shift by fixing the input distribution of each layer, which allows much faster training without vanishing/exploding gradients (see the normalization formula after this list).
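
For reference, the residual block in He et al. computes the element-wise sum of a learned residual mapping and the identity shortcut:

$$
\mathbf{y} = \mathcal{F}(\mathbf{x}, \{W_i\}) + \mathbf{x}
$$

where $\mathcal{F}$ is the function learned by the skipped layers (e.g. two convolutional layers) and $\mathbf{x}$ is the input to the block.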
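
The batch normalization transform normalizes each activation with the mini-batch mean $\mu_\mathcal{B}$ and variance $\sigma_\mathcal{B}^2$, then restores representational power with learned scale and shift parameters $\gamma$ and $\beta$:

$$
\hat{x}_i = \frac{x_i - \mu_\mathcal{B}}{\sqrt{\sigma_\mathcal{B}^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta
$$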
jdsgomes / DeepLearningSpeedAndCompression.md
Last active August 11, 2019 13:45
Speeding up deep learning

Speed Improvements and Compression for Deep Learning

Notes

jdsgomes / DeepLearningFaces.md
Last active January 6, 2020 07:01
Deep Learning for Face Recognition

Deep Learning for Face Recognition (May 2016)

Popular architectures

  • FaceNet (Google)
    • They use a triplet loss with the goal of keeping L2 intra-class distances low and inter-class distances high (the loss is written out after this list)
  • DeepID (Hong Kong University)
    • They use verification and identification signals to train the network. After each convolutional layer there is an identity layer connected to the supervisory signals, so that each layer is trained more directly (on top of normal backprop); a common form of both signals is given after this list
  • DeepFace (Facebook)
    • Convolutional layers, followed by locally connected layers, followed by fully connected layers
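
The triplet loss referred to above, for anchor/positive/negative embeddings $f(x^a)$, $f(x^p)$, $f(x^n)$ and margin $\alpha$:

$$
L = \sum_i \big[ \, \lVert f(x_i^a) - f(x_i^p) \rVert_2^2 \; - \; \lVert f(x_i^a) - f(x_i^n) \rVert_2^2 + \alpha \, \big]_+
$$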
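
A common form of the two DeepID training signals (a typical formulation, not necessarily the exact one in the DeepID papers): identification is softmax cross-entropy over identities, and verification is a contrastive-style term on a feature pair $(f_i, f_j)$ with same/different label $y_{ij}$ and margin $m$:

$$
\mathrm{Ident}(f, t) = -\log p_t, \qquad
\mathrm{Verif}(f_i, f_j, y_{ij}) =
\begin{cases}
\tfrac{1}{2}\lVert f_i - f_j \rVert_2^2 & y_{ij} = 1 \\
\tfrac{1}{2}\max\!\big(0,\, m - \lVert f_i - f_j \rVert_2\big)^2 & y_{ij} = -1
\end{cases}
$$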