@mGalarnyk
Created June 22, 2017 07:57
Machine Learning (Stanford) Coursera Regularization Quiz (Week 3, Quiz 2) for the GitHub repo: https://github.com/mGalarnyk/datasciencecoursera/tree/master/Stanford_Machine_Learning

Machine Learning Week 3 Quiz 2 (Regularization) Stanford Coursera

GitHub repo for the course: Stanford Machine Learning (Coursera)
The quiz needs to be viewed at the repo, because the image solutions can't be viewed as part of a gist.

Question 1

| True/False | Statement | Explanation |
| --- | --- | --- |
| False | Adding many new features to the model helps prevent overfitting on the training set. | Adding many new features gives us more expressive models which are able to better fit our training set. If too many new features are added, this can lead to overfitting of the training set. |
| False | Introducing regularization to the model always results in equal or better performance on examples not in the training set. | If we introduce too much regularization, we can underfit the training set, and this can lead to worse performance even for examples not in the training set. |
| False | Introducing regularization to the model always results in equal or better performance on the training set. | If we introduce too much regularization, we can underfit the training set and have worse performance on the training set. |
| True | Adding a new feature to the model always results in equal or better performance on the training set. | Adding many new features gives us more expressive models which are able to better fit our training set. If too many new features are added, this can lead to overfitting of the training set. |
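The trade-off behind these answers is easy to see numerically: each added feature can only match or improve the training fit, while the fit on unseen data eventually degrades. Below is a minimal Python sketch of this (my own illustration, not part of the quiz; the dataset and degrees are made up):

```python
# Sketch: adding polynomial features keeps lowering TRAINING error,
# but past some degree the TEST error rises again (overfitting).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=30).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.3, size=30)          # noisy training set
x_test = rng.uniform(-3, 3, size=100).reshape(-1, 1)
y_test = np.sin(x_test).ravel() + rng.normal(0, 0.3, size=100)

for degree in (1, 3, 9, 15):
    poly = PolynomialFeatures(degree)
    model = LinearRegression().fit(poly.fit_transform(x), y)
    train_mse = mean_squared_error(y, model.predict(poly.transform(x)))
    test_mse = mean_squared_error(y_test, model.predict(poly.transform(x_test)))
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```

Training error falls monotonically with the degree, which is why the fourth statement is true, while the growing test error is the overfitting the explanations warn about.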

Question 2

| Answer | Explanation |
| --- | --- |
| Answer Image | |
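The answer image is in the repo. For context, regularized linear regression of the kind this question asks about can be solved in closed form with the regularized normal equation from the lectures (note the intercept θ₀ is not penalized):

```latex
% Regularized normal equation; the 0 in the top-left corner of the
% penalty matrix means the intercept \theta_0 is not regularized.
\theta = \Bigl( X^{\top}X + \lambda
        \begin{bmatrix} 0 & 0 & \cdots & 0 \\
                        0 & 1 & \cdots & 0 \\
                        \vdots & & \ddots & \vdots \\
                        0 & 0 & \cdots & 1 \end{bmatrix}
        \Bigr)^{-1} X^{\top} y
```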

Question 3

| True/False | Statement | Explanation |
| --- | --- | --- |
| False | Using a very large value of λ cannot hurt the performance of your hypothesis; the only reason we do not set λ to be too large is to avoid numerical problems. | Using a very large value of λ can lead to underfitting of the training set. |
| False | Because regularization causes J(θ) to no longer be convex, gradient descent may not always converge to the global minimum (when λ > 0, and when using an appropriate learning rate α). | Regularized logistic regression and regularized linear regression are both convex, and thus gradient descent will still converge to the global minimum. |
| True | Using too large a value of λ can cause your hypothesis to underfit the data. | A large value of λ results in a large regularization penalty and thus a strong preference for simpler models, which can underfit the data. |
| False | Because logistic regression outputs values 0 ≤ hθ(x) ≤ 1, its range of output values can only be "shrunk" slightly by regularization anyway, so regularization is generally not helpful for it. | None needed. |
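For reference, the standard regularized cost functions from the course that these statements are about have the form below (θ₀ is excluded from the penalty):

```latex
% Regularized linear regression cost:
J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^2
          + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2

% Regularized logistic regression cost:
J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Bigl[ y^{(i)}\log h_\theta(x^{(i)})
          + \bigl(1-y^{(i)}\bigr)\log\bigl(1-h_\theta(x^{(i)})\bigr) \Bigr]
          + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2
```

Both base costs are convex in θ, and the penalty term is convex, so the regularized J(θ) stays convex; a very large λ simply makes the penalty dominate, driving every θⱼ (for j ≥ 1) toward zero and the hypothesis toward underfitting.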

Question 4

| Answer | Explanation |
| --- | --- |
| Answer Image | The hypothesis follows the data points very closely and is highly complicated, indicating that it is overfitting the training set. |

Question 5

| Answer | Explanation |
| --- | --- |
| Answer Image | The hypothesis does not predict many data points well, and is thus underfitting the training set. |
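Questions 4 and 5 are the two ends of the same λ trade-off. A small sketch of sweeping λ (my own illustration; scikit-learn's Ridge stands in for the course's regularized linear regression, with alpha playing the role of λ, and the data and values are made up):

```python
# Sketch: tiny lambda -> very low train error, possible overfit (Question 4);
# huge lambda -> high train AND test error, underfit (Question 5).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 40).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.2, 40)   # noisy training set
x_test = np.linspace(-3, 3, 200).reshape(-1, 1)
y_test = np.sin(x_test).ravel()                  # true function for evaluation

for lam in (1e-6, 1e-2, 1.0, 1e4):
    model = make_pipeline(PolynomialFeatures(12), StandardScaler(), Ridge(alpha=lam))
    model.fit(x, y)
    print(f"lambda={lam:g}  "
          f"train MSE={mean_squared_error(y, model.predict(x)):.3f}  "
          f"test MSE={mean_squared_error(y_test, model.predict(x_test)):.3f}")
```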
@dazzysakb

2: θ = [2.75, 1.32] is true.
1: The answer is not correct.

@hiteshn97

Q3(b): A doubt about the reasoning: the new cost function, made by adding the squared terms, could have two local minima. So why is this option wrong?
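The squared penalty terms are each convex, and a nonnegative weighted sum of convex functions is itself convex, so adding them to an already convex J(θ) cannot create a second local minimum. In symbols (the standard argument, my notation):

```latex
% If f and g are convex and \lambda \ge 0, their sum stays convex:
% for any t \in [0,1],
(f + \lambda g)\bigl(t\,\theta_1 + (1-t)\,\theta_2\bigr)
  \le t\,(f + \lambda g)(\theta_1) + (1-t)\,(f + \lambda g)(\theta_2)
% Each penalty term \theta_j^2 is convex, so the regularized J(\theta)
% is still convex and has a single global minimum.
```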

@jatinrajani

Q1(b): Where is it written "too much regularisation"? Why can't we use a normal amount of regularisation? Then option (b) would be true.

ghost commented Feb 26, 2018

Do not refer to this material; the answers given here are wrong.

@rraghu214

Only the first one seems to be incorrect... The True or False section of this quiz is very confusing.

@sajantonge

> Q1(b): Where is it written "too much regularisation"? Why can't we use a normal amount of regularisation? Then option (b) would be true.

Yes, you can, but the given statement is not valid for all possibilities, hence it is false.

@satingaux

Question 1 has the wrong solution.

@xFAOLx commented Sep 17, 2019

> Question 1 has the wrong solution.

I agree.

@alexandergribenchenko

Hi everyone! I have reviewed question 1 many times and studied the regularization section again, and I can't find the correct answer according to the Coursera grading. Could anyone discuss the solution that Coursera marked as correct? I'm really interested in finding out what's wrong and fixing it. I know about the honor code, but I think it's important to understand this.

@zhaotianzi

There is only one correct answer, but the question is given in multiple-choice form, hahaha.
