@Deepayan137
Last active August 9, 2018 14:32
RMSE vs. MAE

Root Mean Square Error vs. Mean Absolute Error

  • RMSE is defined as the square root of the mean of the squared differences between the predicted and real values: sqrt( (1/n) * Σ_i [Yi - (W^T · Xi)]^2 ). We square the difference before summing so that positive and negative errors don’t cancel each other out, which would otherwise leave us with an error much smaller than the actual error. We then find W by differentiating this loss function w.r.t. W and setting the derivative to zero.

  • However, it is interesting to note that squaring the difference is not the only way to get rid of the negative values. The same can be achieved by taking the absolute value of the difference between the predicted and real values. This is known as the Mean Absolute Error, or MAE: (1/n) * Σ_i | Yi - (W^T · Xi) |.

  • One could ask why we use RMSE at all when MAE is less computationally expensive.
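The two metrics above can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic data (the array names and sizes are made up for the example, not taken from the gist); the least-squares W comes from the closed-form solution obtained by setting the squared-loss gradient to zero, here via `np.linalg.lstsq`.

```python
import numpy as np

# Synthetic regression data: Y ≈ W^T · X plus noise (illustrative values)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])  # hypothetical ground-truth weights
Y = X @ true_w + rng.normal(scale=0.1, size=100)

# Closed-form least-squares fit: the W that zeroes the squared-loss gradient
W = np.linalg.lstsq(X, Y, rcond=None)[0]
pred = X @ W

# RMSE: square root of the mean squared difference
rmse = np.sqrt(np.mean((Y - pred) ** 2))
# MAE: mean of the absolute differences
mae = np.mean(np.abs(Y - pred))
print(f"RMSE={rmse:.4f}  MAE={mae:.4f}")
```

Note that RMSE is always at least as large as MAE on the same residuals, since the root-mean-square of a set of numbers is never smaller than their mean absolute value.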

The link below explains beautifully why we do that. Do give it a read.

RMSE vs MAE
