Machine Learning Loss Functions

A loss function is a method of evaluating how well a specific algorithm models the given data. If predictions deviate too much from the actual results, the loss function produces a large positive number. Gradually, with the help of an optimization function, the model makes better predictions and reduces the overall loss.

The cost function is the average of the losses. You first calculate one loss per data point, based on your prediction and your ground-truth label. Then you average these losses; that average is the cost.
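A minimal sketch of the loss/cost distinction, using squared error as the per-example loss (the function names here are illustrative, not from any particular library):

```python
def squared_loss(y_pred, y_true):
    # loss for a single data point: penalizes large deviations quadratically
    return (y_pred - y_true) ** 2

def cost(preds, labels):
    # cost = average of the per-example losses over the whole dataset
    losses = [squared_loss(p, t) for p, t in zip(preds, labels)]
    return sum(losses) / len(losses)

print(cost([2.0, 4.0], [1.0, 6.0]))  # (1 + 4) / 2 = 2.5
```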

Microservices

Summary of: https://medium.com/free-code-camp/microservices-from-idea-to-starting-line-ae5317a6ff02

  • Poor cohesion and tight coupling are traditionally the technical debt grasping at our feet, slowing us down.
  • Complexity comes from low cohesion and high coupling. Microservices provide the structure to keep that at bay.
  • Benefits can include horizontal scalability, testability, reliability, observability, replaceability, and language independence.
  • The downside for microservices is that to achieve these benefits, you must provide an underlying infrastructure which supports them. Without that support, you can easily find yourself with an unreliable and opaque system — or you find yourself reinventing the reliability wheel in every single service.

Requirements:

| Type | Complexity | Weights per layer | Sequential Operations |
|------|------------|-------------------|-----------------------|

Behavioral pattern

Strategy

Instead of implementing a single algorithm directly, the code receives run-time instructions as to which algorithm in a family to use. This helps reduce the amount of inheritance needed: instead of creating a convoluted multi-level inheritance hierarchy to share code, you can use Strategy to reduce code complexity and share code across sibling classes.

@FunctionalInterface
interface BillingStrategy {
    // use a price in cents to avoid floating point round-off error
    int getActPrice(int rawPrice);
}
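A sketch of how such a strategy interface might be used (the method name `getActPrice`, the class name, and the discount values are illustrative assumptions): two billing behaviors are swapped at run time instead of via subclassing.

```java
public class StrategyDemo {
    @FunctionalInterface
    interface BillingStrategy {
        // use a price in cents to avoid floating point round-off error
        int getActPrice(int rawPrice);
    }

    // Two interchangeable strategies, expressed as lambdas — no inheritance needed
    static final BillingStrategy NORMAL = rawPrice -> rawPrice;
    static final BillingStrategy HAPPY_HOUR = rawPrice -> rawPrice / 2;

    // The caller picks the algorithm at run time by passing a strategy in
    static int checkout(BillingStrategy strategy, int rawPrice) {
        return strategy.getActPrice(rawPrice);
    }

    public static void main(String[] args) {
        System.out.println(checkout(NORMAL, 1000));     // prints 1000
        System.out.println(checkout(HAPPY_HOUR, 1000)); // prints 500
    }
}
```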

Travis CI

Travis CI is a hosted, distributed continuous integration service used to build and test software projects hosted at GitHub.

Installation

Create a .travis.yml file and put it in the root directory of the GitHub project.
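A minimal .travis.yml sketch — the language, version, and test command below are illustrative assumptions for a Python project, not required values:

```yaml
# Run the project's test suite on every push
language: python
python:
  - "3.8"
install:
  - pip install -r requirements.txt
script:
  - pytest
```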

Configurations

@wael34218
wael34218 / Books.md
Last active December 16, 2021 16:08

12 Rules for Life (Jordan Peterson)

  1. Stand up straight with your shoulders back: man up, we are all on this Earth to suffer.
  2. Treat yourself like someone you are responsible for helping: Stop waiting for other people to dig you out of your pitiful hole.
  3. Make friends with people who want the best for you: If people are determined to screw up, let them; otherwise they will drag you down to their level. So stick with the winners.
  4. Compare yourself to who you were yesterday, not to who someone else is today: Happiness is the PROGRESS toward winning. Winning itself doesn't mean that much. Don't compare yourself with the smartest man on earth.
  5. Do not let your children do anything that makes you dislike them:
  6. Set your house in perfect order before you criticize the world
  7. Pursue what is meaningful (not what is expedient): So quit looking for short cuts and start reading Nietzsche.
  8. Tell the truth – or, at least, don't lie

TMux Command

| Command | Description |
|---------|-------------|
| `tmux new -s NAME` | Create a new tmux session |
| `tmux ls` | List all sessions started on the machine/user |
| `tmux a -t NAME` | Reattach a tmux session |
| `tmux kill-session -t NAME` | Terminate a session |

AWK Basics

Introduction

Awk, written by Alfred Aho, Peter Weinberger, and Brian Kernighan, is a utility that enables a programmer to write tiny but effective programs as statements that define text patterns to be searched for in each line of a document and the action to be taken when a match is found within a line.

AWK Operations:

  • Scans a file line by line
  • Splits each input line into fields
  • Compares input line/fields to pattern
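The scan/split/match loop above can be sketched with a one-liner (the input lines here are made up for illustration):

```shell
# awk scans line by line, splits each line into fields ($1, $2, ...),
# and runs the action block only on lines matching the pattern:
# print the second field of every line whose first field is "error".
printf 'error disk\nok net\nerror cpu\n' | awk '$1 == "error" { print $2 }'
# prints:
# disk
# cpu
```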

05/05/2018

2018: Speech2Vec: A Sequence-to-Sequence Framework for Learning Word Embeddings from Speech

Projects audio files that contain one word of speech into a high-dimensional space, just like Word2Vec. Uses "Forced Alignment" to split audio into words (which requires a transcript). Pads the audio segments with zeros, computes MFCCs, and feeds them into an encoder-decoder trained with RMSE loss. They also add noise to the signal and make the network denoise it. Trained on 500 hours of LibriSpeech audio. Not sure how it can be incorporated into ASR or TTS systems: the audio file has to be paired with text, otherwise Speech2Vec cannot split the audio file into words using the "Forced Alignment" method. It is used to query whether a spoken word is similar to an existing word in the corpus.

2016: Neural Machine Translation of Rare Words with Subword Units (BPE)

BPE is a data compression technique that replaces the most frequent pair of bytes with a single new one. It works well with named entities, loanwords, and morphologically complex words, and handles OOVs and rare words well. You can