Machine Learning Algorithms In Layman’s Terms

I’ll be going over:

  • Gradient Descent / Line of Best Fit
  • Linear Regression
  • Ridge & LASSO Regression
  • Logistic Regression
  • Decision Trees
  • Random Forest

“a model is like a Vending Machine, which given an input (money), will give you some output (a soda can maybe) . . . An algorithm is what is used to train a model, all the decisions a model is supposed to take based on the given input, to give an expected output. For example, an algorithm will decide based on the dollar value of the money given, and the product you chose, whether the money is enough or not, how much balance you are supposed to get [back], and so on.”

To summarize, an algorithm is the mathematical life force behind a model. What differentiates models are the algorithms they employ, but without a model, an algorithm is just a mathematical equation hanging out with nothing to do. With that, onwards!

Gradient Descent / Line of Best Fit

(While this first one isn’t traditionally thought of as a machine-learning algorithm, understanding gradient descent is vital to understanding how many machine learning algorithms work and are optimized.)

“Basically, gradient descent helps us get the most accurate predictions based on some data.

Let me explain a bit more – let’s say you have a big list of the height and weight of every person you know. And let’s say you graph that data.

Now let’s say there’s a local guessing competition where the person who guesses someone’s weight correctly, given their height, gets a cash prize. Besides using your eyes to size the person up, you’d have to rely pretty heavily on the list of heights and weights you have at your disposal, right?

So, based on the graph of your data above, you could probably make some pretty good predictions if only you had a line on the graph that showed the trend of the data. With such a line, if you were given someone’s height, you could just find that height on the x-axis, go up until you hit your trend line, and then see what the corresponding weight is on the y-axis, right?

But how in the world do you find that perfect line? You could probably do it manually, but it would take forever. That’s where gradient descent comes in!

It does this by trying to minimize something called RSS (the residual sum of squares), which is basically the sum of the squares of the differences between our dots and our line, i.e. how far away our real data (dots) is from our line. We get a smaller and smaller RSS by changing where our line is on the graph, which makes intuitive sense — we want our line to be wherever it’s closest to the majority of our dots.

We can actually take this further and graph each different line’s parameters on something called a cost curve. Using gradient descent, we can get to the bottom of our cost curve. At the bottom of our cost curve is our lowest RSS!

There are more granular aspects of gradient descent like the “learning rate”, which controls how big a step we take toward the bottom of our skateboard ramp on each move (the direction of each step comes from the gradient itself), but in essence: gradient descent gets our line of best fit by minimizing the space between our dots and our line of best fit. Our line of best fit, in turn, allows us to make predictions!”
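
To make that concrete, here is a minimal sketch (not from the original article) of gradient descent nudging a line’s slope and intercept until the RSS stops shrinking. The height/weight numbers, step size, and iteration count are all invented purely for illustration:

```python
# A minimal sketch of gradient descent finding a line of best fit by shrinking the RSS.
# The height/weight numbers, step size, and iteration count are made up for illustration.
import numpy as np

heights = np.array([150, 160, 165, 170, 175, 180, 185], dtype=float)  # cm
weights = np.array([55, 60, 63, 68, 72, 77, 82], dtype=float)         # kg

x = heights - heights.mean()   # centring the heights helps plain gradient descent converge
slope, intercept = 0.0, 0.0    # start with an arbitrary (bad) line
step_size = 0.0008             # how far we move down the cost curve on each step

for _ in range(5000):
    residuals = (slope * x + intercept) - weights   # gap between each dot and the line
    rss = np.sum(residuals ** 2)                    # residual sum of squares (the cost)
    # The gradient of RSS tells us which way to nudge slope/intercept to shrink it
    slope -= step_size * 2 * np.sum(residuals * x)
    intercept -= step_size * 2 * np.sum(residuals)

# Because x was centred, the intercept is the predicted weight at the average height
print(f"slope={slope:.3f} kg per cm, intercept={intercept:.1f} kg, RSS={rss:.1f}")
```

Each pass through the loop is one step down the cost curve; once the gradients are close to zero we’re at the bottom, and the slope and intercept describe our line of best fit.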

Linear Regression

“Super simply, linear regression is a way we analyze the strength of the relationship between 1 variable (our “outcome variable”) and 1 or more other variables (our “independent variables”).

A hallmark of linear regression, like the name implies, is that the relationship between the independent variables and our outcome variable is linear. For our purposes, all that means is that when we plot the independent variable(s) against the outcome variable, we can see the points start to take on a line-like shape.

(If you can’t plot your data, a good way to think about linearity is by answering the question: does a certain amount of change in my independent variable(s) result in the same amount of change in my outcome variable? If yes, your data is linear!)

Another important thing to know about linear regression is that the outcome variable, or the thing that changes depending on how we change our other variables, is always continuous. But what does that mean?

Let’s say we wanted to measure what effect elevation has on rainfall in London: our outcome variable (or the variable we care about seeing a change in) would be rainfall, and our independent variable would be elevation. With linear regression, that outcome variable would have to be specifically how many inches of rainfall, as opposed to just a True/False category indicating whether or not it rained at x elevation. That is because our outcome variable has to be continuous — meaning that it can be any number (including fractions) in a range of numbers.
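As a hedged sketch of that elevation-and-rainfall example (the numbers below are invented, not from the article), here is what fitting such a model looks like with scikit-learn’s LinearRegression:

```python
# A small sketch of simple linear regression: elevation (metres) as the independent
# variable and rainfall (inches) as the continuous outcome. The numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

elevation = np.array([[10], [25], [40], [60], [85], [120]])   # independent variable
rainfall = np.array([22.1, 23.0, 23.8, 25.2, 26.9, 29.4])     # continuous outcome

model = LinearRegression().fit(elevation, rainfall)
print("extra inches of rain per metre of elevation:", model.coef_[0])
print("predicted rainfall at 100 m:", model.predict([[100]])[0])
```

Because the outcome is continuous, the model can predict any value in a range (including fractions of an inch), which is exactly what separates this from the categorical case covered under logistic regression below.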

Ridge & LASSO Regression

“So linear regression’s not that scary. It’s just a way to see what effect something has on something else.

Now that we know about simple linear regression, there are even better linear regression-like things we can discuss, like ridge regression.

Just as gradient descent was the back-story to linear regression, there’s one back-story we need to cover to understand ridge regression, and that’s regularisation. Simply put, data scientists use regularisation methods to make sure that their models only pay attention to independent variables that have a significant impact on their outcome variable.
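Here is a rough sketch of what regularisation does in practice, on synthetic data where only two of five independent variables actually matter; the data and the alpha (regularisation strength) values are assumptions chosen purely for illustration:

```python
# A rough sketch of regularisation: Ridge shrinks coefficients a little,
# LASSO can push unimportant ones all the way to zero. Synthetic data, illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                                        # 5 independent variables...
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=100)   # ...only 2 actually matter

for name, model in [("plain", LinearRegression()),
                    ("ridge", Ridge(alpha=1.0)),
                    ("lasso", Lasso(alpha=0.1))]:
    model.fit(X, y)
    print(name, np.round(model.coef_, 3))  # lasso tends to zero out the irrelevant coefficients
```

Ridge nudges every coefficient toward zero, while LASSO tends to push the coefficients of the unimportant variables all the way to zero, which is exactly that “only pay attention to variables that matter” behaviour described above.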

Logistic Regression

“So, we understand linear regression. Linear regression = what effect some variable(s) has on another variable, assuming that 1) the outcome variable is continuous and 2) the relationship(s) between the variable(s) and the outcome variable is linear.

But what if your outcome variable is “categorical”? That’s where logistic regression comes in!

Categorical variables are just variables that can only fall within a single category. Good examples are days of the week: if you have a bunch of data points about things that happened on certain days of the week, there is no possibility that you’ll ever get a data point that could have happened sometime between Monday and Tuesday. If something happened on Monday, it happened on Monday, end of story.
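As a small, hedged example (the humidity readings and the rained/didn’t-rain labels below are made up), here is a logistic regression predicting a categorical yes/no outcome:

```python
# A small sketch of logistic regression predicting a categorical outcome
# (did it rain: yes/no) from a continuous input. The data is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

humidity = np.array([[20], [35], [50], [60], [70], [80], [90], [95]])  # %
rained = np.array([0, 0, 0, 0, 1, 1, 1, 1])                            # categorical outcome

model = LogisticRegression().fit(humidity, rained)
print("probability of rain at 65% humidity:", model.predict_proba([[65]])[0, 1])
print("predicted class (0 = no rain, 1 = rain):", model.predict([[65]])[0])
```

The model outputs a probability between 0 and 1 and then snaps it to one of the categories, rather than predicting a continuous amount of rainfall.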

Decision Trees

A decision tree is a super simple structure we use in our heads every day. It’s just a representation of how we make decisions, like an if-this-then-that game. First, you start with a question. Then you write out possible answers to that question and some follow-up questions, until every question has an answer.
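As a toy illustration (not from the original article), here is that if-this-then-that game written out as plain code, using an invented walk-or-bus decision:

```python
# A toy decision tree as nested if-this-then-that questions: should I walk or take the bus?
# The question and thresholds are invented purely for illustration.
def walk_or_bus(raining: bool, distance_km: float) -> str:
    if raining:                    # first question
        return "take the bus"
    else:                          # follow-up question
        if distance_km < 2:
            return "walk"
        else:
            return "take the bus"

print(walk_or_bus(raining=False, distance_km=1.5))  # -> "walk"
```

A decision-tree algorithm such as scikit-learn’s DecisionTreeClassifier learns questions like these automatically from data, instead of us writing them out by hand.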

Random Forest

Random forest, like its name implies, consists of a large number of individual decision trees that operate as an ensemble. Each individual tree in the random forest spits out a class prediction, and the class with the most votes becomes our model’s prediction.

The fundamental concept behind random forest is a simple but powerful one — the wisdom of crowds. In data science speak, the reason that the random forest model works so well is:

A large number of relatively uncorrelated models (trees) operating as a committee will outperform any of the individual constituent models.
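As a small sketch of that idea (using scikit-learn’s built-in iris dataset purely for illustration), here is a forest of 100 trees voting together, alongside a peek at one of the individual trees:

```python
# A minimal sketch of a random forest as an ensemble of trees voting on a class.
# Uses scikit-learn's toy iris dataset purely for illustration.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy of 100 trees voting together:", forest.score(X_test, y_test))

# Each individual tree makes its own prediction; the forest takes the majority vote
print("one tree's predictions:", forest.estimators_[0].predict(X_test[:5]))
```

Any single tree can be wrong on its own, but as long as the trees are reasonably uncorrelated, their majority vote tends to be more reliable than any one of them.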

Choosing the right estimator

Often the hardest part of solving a machine learning problem can be finding the right estimator for the job.

Different estimators are better suited for different types of data and different problems.

The flowchart in scikit-learn’s “Choosing the right estimator” guide is designed to give a rough guide on how to approach problems with regard to which estimators to try on your data.
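One practical, hedged way to act on that advice is simply to cross-validate a few candidate estimators on your data and compare their scores; the dataset and the shortlist of candidates below are illustrative assumptions:

```python
# A rough sketch of "trying estimators on your data": cross-validate a few candidates
# and compare their mean accuracy. Dataset and candidate list are illustrative only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(),
}

for name, estimator in candidates.items():
    scores = cross_val_score(estimator, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```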
