@harveyslash
Created April 19, 2018 02:10
\documentclass[12pt]{article}
\pagestyle{empty}
\usepackage{tikz}
\usepackage{comment}
\usepackage{amsmath}
\usepackage{booktabs}
\begin{document}
\begin{center}
{\bf 331 -- Intro to Artificial Intelligence\\
Homework 06\\
Due: Wednesday, April 18th, 2018, 11:59pm in myCourses dropbox}
\end{center}
\begin{itemize}
\item Be sure to put your NAME and Section number on the first page.
\item You must submit your solution to myCourses in .pdf format.
\item Only the last thing submitted to the dropbox will be accepted.
\item No late homework will be accepted.
\item {\bf This is the last graded hw.}
\end{itemize}
\vspace{.75cm}
\begin{enumerate}
\item {\bf (12 Points)} (Decision Tree. Shannon Entropy.)
Recall our table for the Restaurant problem from R\&N pg 700: \\
\begin{center}
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c || c |}
\hline
Num & Alt & Bar & Fri & Hun & Pat & Price & Rain & Res & Type & Est & Wait \\ \hline \hline
$x_{1}$ & yes & no & no & yes & some & \$\$\$ & no & yes & French & 0-10 & yes \\ \hline
$x_{2}$ & yes & no & no & yes & full & \$ & no & no & Thai & 30-60 & no \\ \hline
$x_{3}$ & no & yes & no & no & some & \$ & no & no & Burger & 0-10 & yes \\ \hline
$x_{4}$ & yes & no & yes & yes & full & \$ & yes & no & Thai & 10-30 & yes \\ \hline \hline
$x_{5}$ & yes & no & yes & no & full & \$\$\$ & no & yes & French & $>$60 & no \\ \hline
$x_{6}$ & no & yes & no & yes & some & \$\$ & yes & yes & Italian & 0-10 & yes \\ \hline
$x_{7}$ & no & yes & no & no & none & \$ & yes & no & Burger & 0-10 & no \\ \hline
$x_{8}$ & no & no & no & yes & some & \$\$ & yes & yes & Thai & 0-10 & yes \\ \hline \hline
$x_{9}$ & no & yes & yes & no & full & \$ & yes & no & Burger & $>$60 & no \\ \hline
$x_{10}$ & yes & yes & yes & yes & full & \$\$\$ & no & yes & Italian & 10-30 & no \\ \hline
$x_{11}$ & no & no & no & no & none & \$ & no & no & Thai & 0-10 & no \\ \hline
$x_{12}$ & yes & yes & yes & yes & full & \$ & no & no & Burger & 30-60 & yes \\ \hline
\end{tabular}
\end{center}
Using the Shannon Entropy formula and our formula for $Gain$, calculate the amount of information obtained by choosing the attribute of (a) Hungry and (b) Bar. Show your work to receive full credit. (Hint: it may be helpful to draw out a small decision tree showing attribute ``Hungry" at the root node to keep track of your $p$, $n$, $p_k$ and $n_k$ values.)
\\ \\
\textbf{Answer} The full dataset has $p = n = 6$, so its entropy is $B(\frac{6}{12}) = 1$ bit, where $B(q) = -\left(q\log_2 q + (1-q)\log_2(1-q)\right)$.
Splitting on Hungry ($p_1 = 5$, $n_1 = 2$ for yes; $p_2 = 1$, $n_2 = 4$ for no):
\begin{equation}
\begin{split}
Remainder(Hungry)&= \frac{5+2}{12}B\Big(\frac{5}{5+2}\Big) + \frac{1+4}{12}B\Big(\frac{1}{1+4}\Big) \\
&= \frac{7}{12}\times 0.8631 + \frac{5}{12}\times 0.7219\\
&= 0.5035 + 0.3008\\
&= 0.8043
\end{split}
\end{equation}
\begin{equation}
\begin{split}
Gain(Hungry) &= 1 - Remainder(Hungry) \approx 0.1957
\end{split}
\end{equation}
Splitting on Bar ($p_k = n_k = 3$ in both branches):
\begin{equation}
\begin{split}
Remainder(Bar)&= \frac{3+3}{12}B\Big(\frac{3}{3+3}\Big) + \frac{3+3}{12}B\Big(\frac{3}{3+3}\Big) \\
&= \frac{6}{12}\times 1 + \frac{6}{12}\times 1 \\
&= 0.5 + 0.5\\
&= 1
\end{split}
\end{equation}
\begin{equation}
\begin{split}
Gain(Bar) &= 1 - Remainder(Bar) = 0
\end{split}
\end{equation}
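The arithmetic above can be spot-checked numerically. The sketch below (helper names `B` and `remainder` are mine, mirroring the course notation) recomputes both gains from the $(p_k, n_k)$ counts:

```python
from math import log2

# Numeric check of the Gain calculations above (a sketch; helper names are mine).
# B(q) is the entropy of a Boolean variable that is true with probability q.
def B(q):
    if q in (0, 1):
        return 0.0
    return -(q * log2(q) + (1 - q) * log2(1 - q))

def remainder(splits, total=12):
    """splits: one (p_k, n_k) pair per attribute value."""
    return sum((p + n) / total * B(p / (p + n)) for p, n in splits)

gain_hungry = B(6 / 12) - remainder([(5, 2), (1, 4)])  # Hungry = yes / no
gain_bar = B(6 / 12) - remainder([(3, 3), (3, 3)])     # Bar = yes / no
print(round(gain_hungry, 4), round(gain_bar, 4))       # 0.1957 0.0
```

Bar splits the examples into two half-positive, half-negative branches, so it carries no information at all, while Hungry buys about 0.196 bits.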
% \vspace {1cm}
\pagebreak
\item
{\bf (12 Points)} (Bayes Nets.) In the following example $B = BrokeElectionLaw$, $I = Indicted$,
$M = PoliticallyMotivatedProsecutor$, $G = FoundGuilty$ and $J = Jailed$.
\begin{center}
\begin{tikzpicture}[scale=0.2]
\tikzstyle{every node}+=[inner sep=0pt]
\draw [black] (21,-16.2) circle (3);
\draw (21,-16.2) node {$B$};
\draw (11, -16.2) node {
\begin{tabular}{| c |}
\hline
$P(B)$ \\ \hline \hline
.9 \\ \hline
\end {tabular}
};
\draw [black] (38.6,-16.2) circle (3);
\draw (38.6,-16.2) node {$I$};
\draw (38.6, -4) node {
\begin{tabular}{| c | c | c | }
\hline
B & M & $P(I)$ \\ \hline \hline
true & true & .9\\ \hline
true & false & .5 \\ \hline
false & true & .5 \\ \hline
false & false & .1 \\ \hline
\end {tabular}
};
\draw [black] (55.1,-16.2) circle (3);
\draw (55.1,-16.2) node {$M$};
\draw (65, -16.2) node {
\begin{tabular}{| c |}
\hline
$P(M)$ \\ \hline \hline
.1 \\ \hline
\end {tabular}
};
\draw [black] (38.6,-28.3) circle (3);
\draw (38.6,-28.3) node {$G$};
\draw (20, -38.3) node {
\begin{tabular}{| c | c | c | c | }
\hline
B & I & M & $P(G)$ \\ \hline \hline
true & true & true & .9\\ \hline
true & true & false & .8 \\ \hline
true & false & true & .0 \\ \hline
true & false & false & .0 \\ \hline \hline
false & true & true & .2 \\ \hline
false & true & false & .1 \\ \hline
false & false & true & .0 \\ \hline
false & false & false & .0 \\ \hline
\end {tabular}
};
\draw [black] (38.6,-41) circle (3);
\draw (38.6,-41) node {$J$};
\draw (52, -41) node{
\begin{tabular}{| c | c |}
\hline
G & $P(J)$ \\ \hline \hline
true & .9 \\ \hline
false & .0 \\ \hline
\end {tabular}
};
\draw [black] (24,-16.2) -- (35.6,-16.2);
\fill [black] (35.6,-16.2) -- (34.8,-15.7) -- (34.8,-16.7);
\draw [black] (23.47,-17.9) -- (36.13,-26.6);
\fill [black] (36.13,-26.6) -- (35.75,-25.74) -- (35.19,-26.56);
\draw [black] (52.68,-17.97) -- (41.02,-26.53);
\fill [black] (41.02,-26.53) -- (41.96,-26.46) -- (41.37,-25.65);
\draw [black] (52.1,-16.2) -- (41.6,-16.2);
\fill [black] (41.6,-16.2) -- (42.4,-16.7) -- (42.4,-15.7);
\draw [black] (38.6,-19.2) -- (38.6,-25.3);
\fill [black] (38.6,-25.3) -- (39.1,-24.5) -- (38.1,-24.5);
\draw [black] (38.6,-31.3) -- (38.6,-38);
\fill [black] (38.6,-38) -- (39.1,-37.2) -- (38.1,-37.2);
\end{tikzpicture}
\end{center}
\begin{enumerate}
\item
Calculate the initial probability of $G$ given the data from the table.
\item
Calculate the value of $P(b, i, \neg m, g, j)$.
\item
Calculate the probability that someone goes to jail given that they
broke the law, have been indicted, and face a politically motivated prosecutor.
\end{enumerate}
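Because the network is tiny, all three quantities can be obtained by brute-force enumeration of the full joint $P(B)P(M)P(I\mid B,M)P(G\mid B,I,M)P(J\mid G)$. The sketch below encodes the CPTs from the figure (the dict and helper names are my own, not from the assignment):

```python
from itertools import product

# CPTs from the figure above; each dict stores P(var = true | parents).
P_I = {(True, True): 0.9, (True, False): 0.5,
       (False, True): 0.5, (False, False): 0.1}          # P(I | B, M)
P_G = {(True, True, True): 0.9, (True, True, False): 0.8,
       (True, False, True): 0.0, (True, False, False): 0.0,
       (False, True, True): 0.2, (False, True, False): 0.1,
       (False, False, True): 0.0, (False, False, False): 0.0}  # P(G | B, I, M)
P_J = {True: 0.9, False: 0.0}                            # P(J | G)

def pr(p_true, v):
    """Turn P(var = true) into P(var = v)."""
    return p_true if v else 1.0 - p_true

def joint(b, i, m, g, j):
    # Chain rule over the network: priors P(B) = .9 and P(M) = .1.
    return (pr(0.9, b) * pr(0.1, m) * pr(P_I[(b, m)], i)
            * pr(P_G[(b, i, m)], g) * pr(P_J[g], j))

tf = (True, False)
# (a) prior P(G = true): marginalize out B, I, M, J
p_g = sum(joint(b, i, m, True, j) for b, i, m, j in product(tf, repeat=4))
# (b) the single full-joint entry P(b, i, not-m, g, j)
p_entry = joint(True, True, False, True, True)
# (c) P(J = true | B = true, I = true, M = true)
num = sum(joint(True, True, True, g, True) for g in tf)
den = sum(joint(True, True, True, g, j) for g, j in product(tf, repeat=2))
p_j_given = num / den
print(round(p_g, 4), round(p_entry, 4), round(p_j_given, 4))  # 0.3988 0.2916 0.81
```

For (b), enumeration is overkill: the chain rule gives $.9 \times .9 \times .5 \times .8 \times .9 = .2916$ directly.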
\vspace{3cm}
\item
{\bf (10 Points)} A classic experiment in the 1960s eloquently illustrated the technical side of conservatism bias. The researchers presented subjects with two urns -- one containing 3 blue balls and 7 red balls, the other containing 7 blue balls and 3 red ones. Subjects were given this information and then told that someone had drawn randomly 12 times from one of the urns [with replacement]. Subjects were told that this draw yielded 8 reds and 4 blues. They were then asked, ``What is the probability that the draw was made from the first urn?"
Formulate the problem in terms of Bayes' Rule. State all of your assumptions and all assignments explicitly. Solve the probability question asked.
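One Bayes'-rule formulation can be sketched as follows, with the usual assumptions made explicit (they are mine, not stated in the problem): equal priors on the two urns, independent draws with replacement, and $U_1$ denoting the 3-blue/7-red urn so that $P(\text{red}\mid U_1) = 0.7$:

```python
from math import comb

# Assumptions (mine): P(U1) = P(U2) = 0.5; draws independent with
# replacement; U1 is the 3-blue/7-red urn, so P(red | U1) = 0.7.
prior_u1 = prior_u2 = 0.5
like_u1 = comb(12, 8) * 0.7**8 * 0.3**4   # P(8 red, 4 blue | U1), binomial
like_u2 = comb(12, 8) * 0.3**8 * 0.7**4   # P(8 red, 4 blue | U2)
posterior_u1 = (like_u1 * prior_u1) / (like_u1 * prior_u1 + like_u2 * prior_u2)
print(round(posterior_u1, 4))             # the binomial coefficient cancels
```

The binomial coefficient and the common factor $0.7^4\,0.3^4$ cancel, leaving $0.7^4/(0.7^4 + 0.3^4) \approx 0.967$; subjects in the original study typically answered far lower, which is the conservatism bias being illustrated.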
\vspace{2cm}
\item {\bf (12 Points)} (Perceptron.)
Use the Perceptron Training Rule to show how a perceptron can learn the logical AND function.
Use the following values for threshold and learning rate: \\
$t$ = 1 \\
$\alpha = 0.3$ \\
And initial weights of: \\
$w_{1} = -0.4$ \\
$w_{2} = 0.5$ \\
Hint: Remember for your step function to activate, the input must be greater than (not greater than or equal to) the threshold value. You should need 4 or 5 epochs depending on the order of your inputs. Show your work to receive full credit.
\textbf{Answer} Each row shows the weights \emph{after} processing that input. The step function outputs 1 only when $w_1X_1 + w_2X_2 > t$ (strictly greater), so every error is $0$ or $\pm 1$ and each update is $\Delta w_i = \alpha \times Error \times X_i$. No weights change during epoch 5, so training has converged.
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|r|}
\hline
Epoch & \(X_1\) & \(X_2\) & Expected Y & Actual Y & Error & \(w_1\) & \(w_2\)\\
\hline
\hline
1 & 0 & 0 & 0 & 0 & 0 & -0.4 & 0.5\\
1 & 0 & 1 & 0 & 0 & 0 & -0.4 & 0.5\\
1 & 1 & 0 & 0 & 0 & 0 & -0.4 & 0.5\\
1 & 1 & 1 & 1 & 0 & 1 & -0.1 & 0.8\\
\hline
2 & 0 & 0 & 0 & 0 & 0 & -0.1 & 0.8\\
2 & 0 & 1 & 0 & 0 & 0 & -0.1 & 0.8\\
2 & 1 & 0 & 0 & 0 & 0 & -0.1 & 0.8\\
2 & 1 & 1 & 1 & 0 & 1 & 0.2 & 1.1\\
\hline
3 & 0 & 0 & 0 & 0 & 0 & 0.2 & 1.1\\
3 & 0 & 1 & 0 & 1 & -1 & 0.2 & 0.8\\
3 & 1 & 0 & 0 & 0 & 0 & 0.2 & 0.8\\
3 & 1 & 1 & 1 & 0 & 1 & 0.5 & 1.1\\
\hline
4 & 0 & 0 & 0 & 0 & 0 & 0.5 & 1.1\\
4 & 0 & 1 & 0 & 1 & -1 & 0.5 & 0.8\\
4 & 1 & 0 & 0 & 0 & 0 & 0.5 & 0.8\\
4 & 1 & 1 & 1 & 1 & 0 & 0.5 & 0.8\\
\hline
5 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0.8\\
5 & 0 & 1 & 0 & 0 & 0 & 0.5 & 0.8\\
5 & 1 & 0 & 0 & 0 & 0 & 0.5 & 0.8\\
5 & 1 & 1 & 1 & 1 & 0 & 0.5 & 0.8\\
\hline
\end{tabular}
\end{center}
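The strict-threshold training loop the hint describes can be sketched as below (illustrative, not part of the assignment). Using exact rationals matters here: in epoch 3 the input $(1,1)$ produces a weighted sum of exactly $1.0$, which must \emph{not} activate the unit, and binary floating point would get that comparison wrong.

```python
from fractions import Fraction

# Perceptron training rule for AND with t = 1, alpha = 0.3 (a sketch).
# Fraction keeps the arithmetic exact, so the strict threshold test
# (a sum of exactly 1 does NOT activate) behaves as it does on paper.
t = Fraction(1)                        # threshold
alpha = Fraction(3, 10)                # learning rate 0.3
w = [Fraction(-2, 5), Fraction(1, 2)]  # initial w1 = -0.4, w2 = 0.5
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

epoch = 0
while True:
    epoch += 1
    updates = 0
    for (x1, x2), y in data:
        out = 1 if w[0] * x1 + w[1] * x2 > t else 0  # strict step function
        err = y - out
        if err != 0:
            w[0] += alpha * err * x1   # w_i += alpha * error * x_i
            w[1] += alpha * err * x2
            updates += 1
    if updates == 0:                   # one full clean pass: converged
        break

print(epoch, w)  # converges during epoch 5 with w1 = 1/2, w2 = 4/5
```

With this input order the loop makes its last weight change in epoch 4 and passes cleanly in epoch 5, matching the "4 or 5 epochs" hint.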
\end{enumerate}
\end{document}