{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\\DeclareMathOperator*{\\argmin}{arg\\,min}$$\n",
"$$\\DeclareMathOperator*{\\argmax}{arg\\,max}$$\n",
"\n",
"# Reinforcement Learning by David Silver"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Intro\n",
"- RL borrows from many fields\n",
" - Neuroscience: dopamine reward systems\n",
" - Engineering: control\n",
" - Psychology: conditioning\n",
" - Math: operations research\n",
" - Economics: game theory\n",
"- RL vs SL\n",
" - Reward signal vs labels\n",
" - Delayed feedback vs immediate feedback\n",
" - Sequential data vs I.I.D data\n",
" - Actions influencing subsequent data vs independence\n",
"- Rewards\n",
" - All goals can be described as maximization of expected cumulative reward\n",
" - There can be either intermediate or end-of-episode reward\n",
" - If the goal is to be as fast as possible, appropriate reward mechanism is -1 per time step\n",
" - **Poker**: no intermediate reward; end-of-episode net $$$\n",
"- Sequential decision making\n",
" - Actions maximize total future reward\n",
" - Actions have long term consequences\n",
" - Sometimes, we forfeit immediate reward for future gain (depending on our discount factor)\n",
"- State\n",
" - History $= [(O_1,A_1,R_1),...,(O_t,A_t,R_t)]$\n",
" - State = info used to determine \"what next?\"\n",
" - $S=f(H)$\n",
" - Env state = environment's private representation of the world\n",
" - Agent state = agent's internal representation\n",
" - ${S^a}\\subset{S^e}$\n",
" - Markov state $=p(S_{t+1}|S_t)=p(S_{t+1}|S_{1},...,S_{t})$\n",
"- **Poker** is a POMDP\n",
"- RL agents can be based on:\n",
" - Policy: the agent's behaviour function\n",
" - Value function: $Q(s,a)$\n",
" - Model: agent's representation of the environment\n",
"- Policy-based\n",
" - Map from state to action\n",
" - Deterministic $\\pi(s)=a$ OR stochastic $\\pi(a|s)=P(A=a|S=s)$\n",
"- Value-based\n",
" - Value function predicts future reward, we evaluate states based on this\n",
" - $V_{\\pi}(s) = \\mathbb{E}_{\\pi}[\\sum_{t=0}^{\\infty}{\\gamma^{t}R_t}|S_t=s]$\n",
"- Model-based\n",
" - Model predicts the environment will do next\n",
" - P predicts the next state\n",
" - $P_{ss'}^{a} = p(S'=s'|S=s,A=a)$\n",
" - R predicts the immediate reward\n",
" - $R_{s}^{a} = E[R|S=s,A=a]$\n",
"- Model-free vs model-based\n",
" - Free: works on policy and/or value function\n",
" - Based: works on model + policy + value function\n",
"- Explore and exploit\n",
" - Explore: give up reward to learn more about environment\n",
" - Exploit: take good reward when it is known\n",
" - Exploiting allows us to explore down more **optimal** paths"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Markov Decision Process\n",
"- MDP: formal definition for environment for RL\n",
"- ALmost all RL is MDPs\n",
" - optimal control == continuous action MDP\n",
" - MDPs can have 1 state\n",
"- State transition matrix P: Markov chain\n",
" - $P_{ss'} = p(S_{t+1}=s'|s_t=s)$\n",
" - $P = \\begin{bmatrix}\n",
" from(1)to(1) & from(1)to(2) \\\\\n",
" from(2)to(1) & from(2)to(1)\n",
" \\end{bmatrix}$\n",
"- Markov reward process (MRP): Markov chain with values\n",
" - $R_s = \\mathbb{E}[R_{t+1}|S_t=s]$\n",
" - $return = G_t = \\sum_{k=0}^{\\infty}\\gamma^{k}R_{t+k+1}$\n",
" - Why discount?\n",
" - uncertainty about future rewards; dollar now > dollar tomorrow\n",
" - the reward diverges to infinity if we don't discount\n",
" - If all sequences terminate, discounting isn't necessary\n",
"- Bellman equation\n",
" - Value function: $V(s) = \\mathbb{E}[G_t|S_t=s]$\n",
" - Bellman equation: $V(s) = \\mathbb{E}[R_{t+1}+{\\gamma}v(S_t)|S_t=s]$\n",
" - $v = R + {\\gamma}Pv$\n",
" - $v = \\begin{bmatrix} v(1) \\\\ \\vdots \\\\ v(n) \\end{bmatrix}$\n",
" - $v = \\begin{bmatrix} R(1) \\\\ \\vdots \\\\ R(n) \\end{bmatrix} + {\\gamma} \\begin{bmatrix} p_{11} & \\cdots & p_{1n} \\\\ \\vdots & \\ddots & \\vdots \\\\ p_{n1} & \\cdots & p_{nn} \\end{bmatrix} \\begin{bmatrix} v(1) \\\\ \\vdots \\\\ v(n) \\end{bmatrix}$\n",
" - Bellman solvable: $v = R+{\\gamma}Pv = (I-{\\gamma}P)^{-1}R$\n",
" - Complexity is $O(n^3)$, so this is impractical\n",
"- MRP to MDP: from rewards to decisions\n",
" - $(S,P,R,\\gamma) \\rightarrow (S,\\textbf{A},P,R,\\gamma)$\n",
" - $P_{ss'}^{a} = p(S_{t+1}=s'|S=s_t, A=a$\n",
" - $R_{s}^{a} = \\mathbb{E}[R_{t+1}|S_t=s,A_t=a]$\n",
" - sometimes, $(r|a)=r$, in that case:\n",
" - $R_{s} = \\mathbb{E}[R_{t+1}|S_t=s]$\n",
" - MDP policy is time-independent, stationary\n",
" - $V(s)_{\\pi}$ is subbed by $\\pi$ because the policy influences the value of the state; a good policy that is taking full advantage of the state will give it a larger value\n",
" - $Q(s,a) = \\mathbb{E}[G_t|S_t=s,A_t=a]$ = action value\n",
" - For ANY MDP, $\\exists \\pi^{\\star} \\ge \\pi \\forall \\pi$\n",
"- Extensions to MDP\n",
" - Infinite and continuous MDPs\n",
" - POMDP\n",
" - Undiscounted, average reward MDP"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Planning by Dynamic Programming\n",
"- Breaking down the term\n",
" - Dynamic -> sequential or temporal problem\n",
" - Programming -> a policy, a mathematical program (not, like, a C program)\n",
"- Dynamic programming requires:\n",
" - Optimal substructure\n",
" - You can break down the problem into pieces, solve the pieces, and put them together to solve the main problem\n",
" - Overlapping subproblems\n",
" - The subproblems occur many times\n",
" - we can cache the subproblem solutions and use them again and again\n",
" - e.g. shortest path:\n",
" - $sp(a_{0},a_{n}) = sp(a_{0},a_{1}) + sp(a_{1},a_{2}) + \\cdots + sp(a_{n-1}, a_{n})$\n",
"- Does an MDP satisfy the requirements of DP?\n",
" - The Bellman equation is the recursive decomposition that gives us optimal substructure; it's the immediate problem, plus the future problem\n",
" - Value function stores and re-uses the subproblems; it's the cache\n",
" - So yes. MDPs can be solved with DP using value iteration\n",
"- DP used for: prediction, control\n",
" - Prediction\n",
" - Input:\n",
" - MDP $\\langle S,A,P,R,\\gamma \\rangle$ and policy $\\pi$, or\n",
" - MRP $\\langle S,P^{\\pi}, R^{\\pi}, \\gamma \\rangle$\n",
" - Output:\n",
" - Value function $V_{\\pi}$\n",
" - We want to predict the value of each state based on following the policy in the MDP\n",
" - Control\n",
" - Input:\n",
" - MDP $\\langle S,A,P,R,\\gamma \\rangle$\n",
" - Output:\n",
" - optimal value function $V_{\\star}$\n",
" - optimal policy $\\pi_{\\star}$ (implied from optimal value function)\n",
" - We want to know what the best possible policy is for this MDP\n",
"- Policy evaluation\n",
" - Problem: evaluate a given policy $\\pi$\n",
" - Solution: iterative application of Bellman expectation backup\n",
" - $v_1 \\rightarrow v_2 \\rightarrow \\cdots \\rightarrow v_\\pi$\n",
" - Using synchronous backups, at each iteration $k+1$, for all states $s \\in S$, update $v_{k+1}(s)$ from $v_{k}(s')$, where s' is a successor state of s\n",
" - Full math:\n",
" - $v_{k+1}(s) = \\sum_{a \\in A}{\\pi(a|s)[R_{s}^{a} + \\gamma \\sum_{s' \\in S}{P_{ss'}^{a}v_{k}(s')}]}$\n",
" - Vector: $\\textbf{v}^{k+1} = \\textbf{R}^{\\pi} + \\gamma \\textbf{P}^{\\pi}\\textbf{v}^{k}$\n",
" - No math:\n",
" - Compute the value for a state by looking at all possible actions, and taking the weighted sum of the values of the possible next states (weighted by probability of that stochastic action actually taking you to that state), plus the immediate reward of the action\n",
"- Policy iteration\n",
" - Intuition:\n",
" - Evaluate the policy $\\pi$\n",
" - $v_{\\pi}(s) = \\mathbb{E}[R_{t+1} + \\gamma R_{t+2} + \\cdots | S_t=s]$\n",
" - Improve the policy by acting greedily w.r.t $\\pi$\n",
" - $\\pi' = greedy(v_\\pi)$\n",
" - Formal:\n",
" - Consider a deterministic policy $a=\\pi(s)$\n",
" - We improve the policy by acting greedily\n",
" - $\\pi'(s) = \\argmax_{a \\in A}{q_{\\pi}(s,a)}$\n",
" - This improves the value from any state s over one step:\n",
" - $q_{\\pi}(s, \\pi'(s)) = \\max\\limits_{a \\in A}q_\\pi(s,a) \\ge q_\\pi(s,\\pi(s)) = v_{\\pi}(s)$\n",
" - $q_{\\pi}(s, \\pi'(s))$: the value of following our new greedy policy for this one step, then following $\\pi$ afterwards (max action, then $\\pi$)\n",
" - $v_{\\pi}(s)$: the value of just following $\\pi$ all the time (no max action now, just $\\pi$ forever)\n",
" - We can say that it's always at least as good because:\n",
" - At all future states, they're the same policy\n",
" - At this current state, the new one is the max; it's the best. The old one could only possibly be equal to it, but it can't be better\n",
" - If improvements stop, then the Bellman optimality equation is satisfied, and therefore we've reached the optimal policy\n",
" - $q_{\\pi}(s, \\pi'(s)) = \\max\\limits_{a \\in A}q_\\pi(s,a) = q_\\pi(s,\\pi(s)) = v_{\\pi}(s)$\n",
" - $v_{\\pi}(s) = \\max\\limits_{a \\in A}q_\\pi(s,a)$\n",
" - $v_{\\pi}(s) = v_{\\star}(s) \\forall s \\in S$\n",
"- Early stopping of policy evaluation\n",
" - Does policy evaluation need to converge to $v_{\\pi}$?\n",
" - $\\epsilon$-convergence of value function would work\n",
" - if the value doesn't change by more than $\\epsilon$, stop iterating\n",
" - Stop evaluating after k iterations, enter policy iteration; loop around like this\n",
" - Update policy every iteration, i.e. $k = 1$\n",
" - This is exactly value iteration\n",
"- Value iteration\n",
" - If we know the solution to subproblems $v_{\\star}(s')$, then solution $v_{\\star}(s)$ can be found by one-step lookahead\n",
" - $v_{\\star}(s) \\leftarrow \\max\\limits_{a \\in A}R^{a}_{s} + \\gamma \\sum_{s' \\in S}P^{a}_{ss'}v_{\\star}(s')$\n",
" - We start by throwing some value in for $v(s')$ and saying it's optimal, even if it isn't; we back that value up to s, s becomes s', and we do this iteratively again and again until we converge to the actual value\n",
"- Value vs Policy iteration\n",
" - Value iteration is policy iteration, but where we act on the policy after every step\n",
" - Rather than update the value, update the policy, update the value based on the updated policy... we just jump from value function to value function\n",
"\n",
" | Problem | Bellman Equation | Algorithm |\n",
" |------------|-------------------------------------------------|-----------------------------|\n",
" | Prediction | Bellman expectation | Iterative policy evaluation |\n",
" | Control | Bellman expectation + greedy policy improvement | Policy iteration |\n",
" | Control | Bellman optimality | Value iteration |\n",
"\n",
"- Extensions to dynamic programming (asynchronous backups)\n",
" - Back up states individually, in some order (what is the best order?)\n",
" - In-place dynamic programming\n",
" - Instead of storing $V_{old}$ and $V_{new}$, computing each $V_{new}(s)$ based on $V_{old}$, then replacing $V_{old} = V_{new}$...\n",
" - Just update the state right away: $v(s) \\leftarrow \\max_{a \\in A}(R_{s}^{a} + \\gamma \\sum_{s' \\in S}p_{ss'}^{a}v(s'))$\n",
" - Now every update is constantly using the most recent values, not the values from the last iteration; when we update state \\#5, and then use it to compute state \\#4, state 4 is getting the brand new value we just computed for state 5\n",
" - Prioritized sweeping\n",
" - Which state is the most important? The one that has the greatest change in value (Bellman Error)\n",
" - $|\\max_{a \\in A}(R_{s}^{a}+ \\gamma \\sum_{s' \\in S}P_{ss'}^{a}v(s')) - v(s)|$\n",
" - Backup the state with the largest remaining Bellman error\n",
" - Update Bellman error of affected states after each backup\n",
" - Maintain this in a priority queue\n",
" - Real-time dynamic programming\n",
" - Let the agent interact and choose the states to update; where the agent is, that's where you update"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model-Free Prediction\n",
"- This is used to estimate the value function of an unknown MDP\n",
"- Monte Carlo Learning\n",
" - MC methods learn directly from episodes of experience (entire episodes, entire games)\n",
" - MC is model-free; no knowledge of MDP transitions / rewards necessary\n",
" - MC learns from *complete* episodes; no bootstrapping\n",
" - Simplest possible value calculation: the mean reward over all episodes from that state\n",
" - Can only be applied to episodic MDPs (must terminate)\n",
" - Policy evaluation\n",
" - We want to learn $v_{\\pi}$ from episodes of experience of using $\\pi$\n",
" - We know that $v_{\\pi}(s) = \\mathbb{E}_{\\pi}[R_{t+1} + \\gamma R_{t+2} + \\cdots + \\gamma^{T-1}R_{T} | S_t = s]$\n",
" - MC policy evaluation uses the mean of the returns it gets in place of the expectation\n",
" - How do we do this for all states when we don't get to reset ourselves to each state a bunch of times?\n",
" - First-visit MC policy evaluation\n",
" - $S(s) = 0$, $N(s) = 0$\n",
" - On first visit to s: $N(s) \\leftarrow N(s) + 1$, $S(s) \\leftarrow S(s) + G_t$\n",
" - $V(s) = S(s)/N(s)$\n",
" - Law of large numbers: as $N(s) \\rightarrow \\infty$, then $V(s) \\rightarrow v_\\pi(s)$\n",
" - Basically, we're taking a mean over all episodes of the total discounted reward seen at the state at each episode\n",
" - Since this is policy *evaluation*, we can assume that the policy we're evaluating is visiting the states it cares about\n",
" - This exercise isn't about finding the best states, it's about getting a value function to approximate the value under the policy we're evaluating\n",
" - Every-visit MC policy evaluation\n",
" - Instead of only including the first visit, you include every visit\n",
" - Increment every visit, add G every visit\n",
" - Updating the MC value incrementally\n",
" - Incremental mean calculation:\n",
" - Calculating a mean doesn't have to happen all at once; it can happen incrementally\n",
" ```\n",
" u = 0\n",
" nums = range(1,11)\n",
" for i,n in enumerate(nums): u += 1/(i+1.) * (n-u)\n",
" ```\n",
" - Incremental MC updates\n",
" - Update V(s) incrementally after each episode using this mean update\n",
" - $N(S_t) \\leftarrow N(S_t) + 1$\n",
" - $V(S_t) \\leftarrow V(S_t) + \\frac{1}{N(S_t)}(G_t-V(S_t))$\n",
" - In non-stationary problems, may want to forget old episodes\n",
" - $V(S_t) \\leftarrow V(S_t) + \\alpha(G_t - V(S_t))$\n",
" - Most problems are actually non-stationary, because it's w.r.t. the policy, which is improving\n",
"- Temporal difference learning\n",
" - Learn directly from episodes; model-free\n",
" - Learn from incomplete episodes by bootstrapping\n",
" - Take what you have, estimate the remainder of the episode\n",
" - TD(0)\n",
" - Update $V(S_t)$ toward estimated return $R_{t+1}+\\gamma V(S_{t+1})$\n",
" - $V(S_t) \\leftarrow V(S_t) + \\alpha(R_{t+1}+\\gamma V(S_{t+1}) - V(S_t))$\n",
" - It learns before the final outcome, or even without the final outcome (non-terminating environments)\n",
"- MC vs TD\n",
" - MC\n",
" - High bias, zero variance\n",
" - Good convergence\n",
" - Not very sensitive to initial values\n",
" - Does not exploit Markov property\n",
" - TD\n",
" - Some bias, low variance\n",
" - TD(0) converges to $v_\\pi(s)$\n",
" - Usually more efficient than MC\n",
" - More sensitive to initial values\n",
" - Exploits Markov property\n",
"- Properties\n",
" - Bootstrapping\n",
" - Do we use the actual returns, or our own estimate of the returns?\n",
" - MC = no bootstrapping (actual returns)\n",
" - TD = bootstrapping (estimate returns)\n",
" - Sampling\n",
" - Do we sample a trajectory of the tree, or take an expectation over the whole tree?\n",
" - MC = sampling\n",
" - TD = sampling\n",
" - DP = no sampling (requires full MDP knowledge)\n",
" - Bootstrapping: how deep down the tree do we drill down?\n",
" - Sampling: how wide across the tree do we drill down?\n",
"- TD(lambda)\n",
" - TD(0) is the case where our bootstrapping is a 1 step lookahead; we take the value function one action in the future, and back that up to s\n",
" - What if we went 2 steps ahead, and backed that up? Or 3 steps? Or 4? Or n? That's controlled by $\\lambda$\n",
" - We don't know what the best n is, so let's average over all of them\n",
" - The proper way to average over all n (and still be able to compute it efficiently) is to use a gemoetric series that, while a sum, can be computed in closed form (so it's the same complexity as TD(0))\n",
" - $G^{\\lambda}_{t} = (1-\\lambda)\\sum_{n=1}^{\\infty}\\lambda^{n-1}G_{t}^{(n)}$\n",
" - This form, called forward-view still requires, like MC, to be computed at the end of an episode (so that it has available all N-step returns)\n",
"- Backward view TD lambda\n",
" - To use TD-lambda and still update online, every step, from incomplete sequences, we need a way around requiring all N steps\n",
" - Eligibility traces\n",
" - In credit assignment, we want to balance between the states that were most recent from the point of reward, and the states that were most frequent along the whole trip\n",
" - $E_0(s)=0$\n",
" - $E_t(s) = \\gamma\\lambda E_{t-1}(s) + I(S_t=s)$\n",
" - Exponentially decay its eligibility over time t (first term), but spike it by 1 every time it shows up again\n",
" - Keep an eligibility trace for every state s\n",
" - Udate value V(s) for all states s in proportion to TD-error $\\delta_t$ and eligibility trace $E_t(s)$:\n",
" - $\\delta_t = R_{t+1} + \\gamma V(S_{t+1}) - V(S_t)$\n",
" - $V(s) \\leftarrow V(s) + \\alpha \\delta_t E_t(s)$\n",
" - The states that we think are most responsible for the error get updated the most\n",
" - $\\lambda = 0$ means we only update the current state; this is TD(0)\n",
" - $\\lambda = 1$ means credit is deferred until the end of the episode; an episodic environment with offline updates\n",
" - equivalent update to MC\n",
"- The sum of offline updates is identical for forward-view and backward-view\n",
" - $\\sum_{t=1}^{T}{\\alpha \\delta_t E_t(s)} = \\sum_{t=1}^{T}\\alpha(G_{t}^{\\lambda} - V(S_t))I(S_t=s)$\n",
" - If you're adding up all the rewards after everything has been seen, it doesn't matter if you go forward or backward"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model-Free Control\n",
"- On-polcy vs off-policy learning\n",
" - On: learn about $\\pi$ from experience sampled from $\\pi$\n",
" - Off: learn about $\\pi$ from experience sampled from $\\mu$\n",
" - off-policy is like if you show a robot how to vaccuum, and then give it a vaccuum; it learns from you\n",
" - on-policy is like if you just hand a robot a vaccuum; it has to learn on its own\n",
"- Evaluation --> improvement --> evaluation --> improvement\n",
" - Framework: iterative policy evaluation, greedy policy improvement\n",
" - Can we plug in Monte Carlo evaluation here? Evaluate episodes by their mean reward --> act greedily --> evaluate episodes by mean reward? No.\n",
" - This won't explore well at all, since we're acting greedily\n",
" - It will take a long time, because we're getting episodic reward\n",
" - Acting greedily requires finding V(s) which requires knowing the dynamics of the MDP (the transition model). We don't know that because we're model free, so we can't work with a value function\n",
"- Can we fix the greedy property?\n",
" - Simplest (but most effective) fix: $\\epsilon$-greedy exploration\n",
" - $p(a^*) = \\frac{\\epsilon}{m} + 1 - \\epsilon$\n",
" - $p(a_{random}) = \\frac{\\epsilon}{m}$\n",
" - this always results in a policy that is $\\ge$ the previous policy\n",
"- Monte Carlo evaluation + $\\epsilon$-greedy improvement; how can we improve it?\n",
" - Instead of doing policy evaluation to convergence, we can just estimate it by doing one episode at a time\n",
" - One episode of evaluation, then improve; one episode of evaluation, then improve, etc.\n",
"- Greedy in the limit with infinite exploration\n",
" - Two properties should be satisfied by our exploration technique:\n",
" - $\\lim_{k\\rightarrow\\infty}\\pi_k(a|s) = N_k(s,a) = \\infty$; it sees each state infinitely many times\n",
" - $lim_{k\\rightarrow\\infty}\\pi_k(a|s) = 1(a = \\argmax_{a'\\in A}Q_k(s,a'))$ it converges on the greedy policy\n",
"- GLIE with Monte Carlo control algorithm\n",
" - Sample an episode according to $\\pi$\n",
" - For each state/action pair in the episode:\n",
" - $N(S_t,A_t) \\leftarrow N(S_t,A_t)+1$\n",
" - $Q(S_t,A_t) \\leftarrow Q(S_t,A_t) + \\frac{1}{N(S_t,A_t)}(G_t-Q(S_t,A_t))$\n",
" - Improve policy based on new action-value function Q\n",
" - $\\epsilon \\leftarrow 1/k$\n",
" - $\\pi \\leftarrow greedy_\\epsilon(Q)$\n",
" - This converges to the optimal Q*, but it's horribly inefficient; we need to replace Monte Carlo with TD\n",
"- Replace MC with TD\n",
" - TD is lower variance, it's online, and it works with incomplete sequences\n",
" - Apply TD to Q(s,a), use $\\epsilon$-greedy policy improvement still, update every time step\n",
"- SARSA (state/action -> reward -> next state / next action)\n",
" - $Q(S,A) \\leftarrow Q(S,A) + \\alpha(R+\\gamma Q(S',A') - Q(S,A))$\n",
" - Every time step, we evaluate the policy, and act epsilon-greedily\n",
"- Forward and backward view SARSA\n",
" - $Q(S_t,A_t) \\leftarrow Q(S_t,A_t) + \\alpha(q_t^\\lambda - Q(S_t,A_t))$\n",
" - $q_t^\\lambda = (1-\\lambda)\\sum_{n=1}^{\\infty}\\lambda^{n-1}q_t^{(n)}$\n",
" - Backward view: replace $q_t^\\lambda$ with eligibility trace as before with prediction (eligiblity traces are 1 per state/action pair)\n",
"- Off-policy learning\n",
" - evaluate target policy while following different behaviour policy\n",
" - Why?\n",
" - Learn from observing other humans/agents\n",
" - re-use experience from previous policies\n",
" - learn about optimal policy while following exploratory policy\n",
" - learn about multiple policies while following one policy\n",
" - Q learning is an off-policy algorithm\n",
" - we choose actions according to an epsilon-greedy (exploratory) policy, but we update our Q values with future Q values that are assuming the optimal policy\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Value Function Approximation\n",
"- Large / continuous state spaces are impossible to store in a table\n",
" - We instead approximate a function mapping from state to value\n",
" - $v(s,\\textbf{w}) \\approx v_\\pi(s)$\n",
" - $q(s,a,\\textbf{w}) \\approx q_\\pi(s,a)$\n",
" - Instead of updating state values, we update the parameters of this function (with MC or TD)\n",
" - 3 ways to approach it:\n",
" - from s to v(s)\n",
" - from s,a to q(s,a)\n",
" - from s to q(s,a1) ... q(s,aN)\n",
" - Linear function approximation or non-linear using NNs (differentiable a requirement)\n",
"- We optimize w by stochastic gradient descent, just like with NNs\n",
" - value function is represented by a linear combination of the features of the state\n",
" - for linear, the objective function is quadratic, so convex, so global minimum is attainable\n",
" - its gradient is very simple:\n",
" - if $v(S,\\textbf{w}) = x(S)^T\\textbf{w}$, then $\\nabla_\\textbf{w}{v(S,\\textbf{w})} = x(S)$\n",
" - then update = step_size * prediction_error * feature_value\n",
" - $\\nabla{\\textbf{w}} = \\alpha(v_\\pi(S) - v(S,\\textbf{w}))\\textbf{x}(S)$\n",
"- Table lookup is a special case of linear function approximation\n",
" - $x(S) = \\begin{pmatrix} I(S=s_1) \\\\ \\vdots \\\\ I(S=s_n) \\end{pmatrix}$\n",
" - $v(S,\\textbf{w}) = \\begin{pmatrix} I(S=s_1) \\\\ \\vdots \\\\ I(S=s_n) \\end{pmatrix} \\cdot \\begin{pmatrix} w_1 \\\\ \\vdots \\\\ w_n \\end{pmatrix}$\n",
" - In this case we're just picking out the weight that corresponds to the state and updating only that\n",
"- We need to approximate $v_\\pi(S)$, because we don't actually have it\n",
" - MC: total return $G_t$\n",
" - TD(0): $R_{t+1} + \\gamma v(S_{t+1},\\textbf{w})$\n",
" - TD($\\lambda$): discounted return $G_t^\\lambda$\n",
"- Convergence\n",
" - MC converges whether linear or non-linear\n",
" - Linear TD(0) converges close to the global optimum, but the rest don't\n",
" - TD($\\lambda$) is more difficult to get to converge, because it's not following the actual gradient (we just plugged in G and hoped it was close to V); Gradient TD converges\n",
"- State/action value functions q are the same idea\n",
" - our feature vector can now include features extracted from both the state and the actions (e.g. stack size if bet 50)\n",
"- Batch methods use the data more efficiently by applying gradient descent over batches and replaying the batches over and over to convergence\n",
"- Deep-Q Networks, the Atari method\n",
" - DQN = Q learning + experience replay + fixed-Q targets\n",
" - Q learning: the loss that the weight's gradient is pointed at is the error between the current Q and the (reward + max_future_Q)\n",
" - experience replay: compute the gradient with respect to some minibatch of samples, rather than the whole memory or just the last sample\n",
" - fixed Q: freeze a \"past Q\" to compute the error against, for maybe 1000 time steps; this is better than always using the last Q, because that moving target causes divergence\n",
"- SGD is least squares, which means for linear approximation it has a closed form solution, but with many state features the computation is inefficient because it's a matrix inversion which is O(n^3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Policy Gradient Methods\n",
"- From value function based to policy based\n",
" - $V_\\theta(s) \\approx V^\\pi(s)$ and $Q_\\theta(s,a) \\approx Q^\\pi(s,a)$ become\n",
" - $\\pi_\\theta(s,a) = P[a|s,\\theta]$\n",
" - No value function needed for policy-based\n",
" - Actor-critic combines learning a value function and learning a policy\n",
"- Pros and cons of policy-based\n",
" - Better convergence , effective in continuous action spaces (no max), and can learn stochastic policies (poker)\n",
" - Typically converges to a local optimum, and evaluation is inefficient and high variance\n",
" - Stochastic policies are great for games, and great in situations where our features don't fully distinguish different states (they would end up with the same policy, and they may not warrant the same policy)\n",
"- Policy objective functions: given a policy $\\pi_\\theta(s,a)$, find the best $\\theta$; how do we evaluate which is the \"best\"?\n",
" - Start value (episodic environments given a start state): $J_1(\\theta) = V^{\\pi_\\theta}(s_1) = \\mathbb{E}_{\\pi_\\theta}[v_1]$\n",
" - Average value (continuing environments): $J_{avV}(\\theta) = \\sum_s d^{\\pi_\\theta}(s)V^{\\pi_\\theta}(s)$ (where d is the probability of getting to that state, a stationary distribution)\n",
" - Average reward per time step: $J_{avR}(\\theta) = \\sum_s d^{\\pi_\\theta}(s) \\sum_a \\pi_\\theta(s,a)R^a_s$\n",
"- Policy-based RL is an optimization problem\n",
" - Genetic algorithms, simulated annealing, hill climbing, gradient descent, all possible tools for policy evaluation\n",
"- Finite differences for computing / evaluating gradients\n",
" - For each dimension, estimate the partial by moving a bit in that direction and taking the slope\n",
" - Must be done n times for n dimensions, on every evaluation\n",
" - Wildly inefficient and noisy, but for some reason it works sometimes\n",
"- If we compute the gradient analytically, what is the score function?\n",
" - Assume we know the gradient (because we designed the cost function), so we have $\\nabla_\\theta \\pi_\\theta(s,a)$\n",
" - Use likelihood ratios:\n",
" - $\\nabla_\\theta \\pi_\\theta(s,a) = \\pi_\\theta(s,a) \\frac{\\nabla_\\theta \\pi_\\theta(s,a)}{\\pi_\\theta(s,a)}$\n",
" - $\\nabla_\\theta \\pi_\\theta(s,a) = \\pi_\\theta(s,a)\\nabla_\\theta\\log \\pi_\\theta(s,a)$\n",
" - Score function: softmax policy\n",
" - weight actions using a linear combination of features: $\\phi(s,a)^T\\theta$ ($\\phi$ gives the feature vector)\n",
" - probability of action is proportional to exponentiated weight: $\\pi_\\theta(s,a) \\propto e^{\\phi(s,a)^T\\theta}$\n",
" - score function is: $\\nabla_\\theta\\log\\pi_\\theta(s,a) = \\phi(s,a) - \\mathbb{E}_{\\pi_\\theta}[\\phi(s,\\cdot)]$\n",
" - so the score function is just, \"how far away are these features from typical?\"\n",
" - Score function: Gaussian policy (for continuous action spaces)\n",
" - The mean of the Gaussian is the linear combination of features: $\\mu(s) = \\phi(s)^T\\theta$\n",
" - Variance can be a fixed $\\sigma^2$ or can be parameterized\n",
" - Policy draws actions from a Gaussian\n",
" - Score function: $\\nabla_\\theta\\log_{\\pi_\\theta}(s,a) = \\frac{(a-\\mu(s))\\phi(s)}{\\sigma^2}$\n",
" - score function is again, \"how much more is this action happening than normal?\"\n",
"- Policy gradients for one-step MDP\n",
" - Start in state s, take one step, get one reward, done\n",
" - Compute the policy gradient with likelihood ratios\n",
" - $J(\\theta) = \\mathbb{E}_{\\pi_\\theta}[r]$\n",
" - $J(\\theta) = \\sum_{s \\in S}d(s)\\sum_{a \\in A}\\pi_\\theta(s,a)R_{s,a}$\n",
" - The cost is, over all states, the expected reward over actions\n",
" - $\\nabla_\\theta J(\\theta) = \\sum_{s \\in S}d(s)\\sum_{a \\in A}\\pi_\\theta(s,a)\\nabla_\\theta\\log\\pi_\\theta(s,a)R_{s,a}$\n",
" - $\\nabla_\\theta J(\\theta) = \\mathbb{E}_{\\pi_\\theta}[\\nabla_\\theta\\log\\pi_\\theta(s,a)r]$\n",
" - The gradient is, in expectation over all states, the gradient of the log policy (which we simplified above) by the reward; it's score*reward\n",
"- Generalize to any MDP (policy gradient theorem)\n",
" - Replace r with Q(s,a)\n",
" - $\\nabla_\\theta J(\\theta) = \\mathbb{E}_{\\pi_\\theta}[\\nabla_\\theta\\log\\pi_\\theta(s,a)Q^{\\pi_\\theta}(s,a)]$\n",
"- MC policy gradient\n",
" - update parameters by stochastic gradient ascent using the policy gradient theorem, with return $v_t$ as an unbiased sample of $Q^{\\pi_\\theta}(s_t,a_t)$\n",
" - $\\nabla\\theta_t = \\alpha\\nabla_\\theta\\log\\pi_\\theta(s_t,a_t)v_t$\n",
" - algorithm\n",
" - initialize $\\theta$\n",
" - for each episode:\n",
" - for t in range(T):\n",
" - $\\theta \\leftarrow \\theta + \\alpha\\nabla_\\theta\\log\\pi_\\theta(s_t,a_t)v_t$\n",
" - $v_t$ is the return, an estimate of Q\n",
" - MC policy gradient is very slow and high variance, but it's a good idea in general\n",
"- Actor-critic policy gradient\n",
" - To reduce variance, we estimate Q with another network / function approximator: $Q_w(s,a) \\approx Q^{\\pi_\\theta}(s,a)$\n",
" - We maintain these two networks:\n",
" - Critic: updates action-value function (Q) parameters (w)\n",
" - Actor: updates policy parameters $\\theta$, in direction suggested by critic\n",
" - Actor-critic follows an approximate policy gradient\n",
" - $\\nabla_\\theta J(\\theta) \\approx \\mathbb{E}_{\\pi_\\theta}[\\nabla_\\theta\\log\\pi_\\theta(s,a) Q_w(s,a)]$\n",
" - $\\Delta\\theta = \\alpha\\nabla_\\theta\\log\\pi_\\theta(s,a)Q_w(s,a)$\n",
" - The critic's job of policy evaluation has been discussed in previous sections\n",
"- Actor-critic can be high variance, so to counter that, we subtract a baseline\n",
" - If we subtract $V^{\\pi_\\theta}(s)$, it can be stable; the expectation doesn't change, but the variance is reduced\n",
" - $A^{\\pi_\\theta}(s,a) = Q^{\\pi_\\theta}(s,a) - V^{\\pi_\\theta}(s)$\n",
" - A is \"how much better is this action in this state than the state in general? Does this action make the state better?\n",
" - $\\nabla_\\theta J(\\theta) = \\mathbb{E}_{\\pi_\\theta}[\\nabla_\\theta\\log\\pi_\\theta(s,a)A^{\\pi_\\theta}(s,a)]$\n",
" - how do we estimate the advantage function? we don't have access to V(s)\n",
" - observe that A is really just an expectation of the TD error\n",
" - TD error: $\\delta^{\\pi_\\theta} = r + \\gamma V^{\\pi_\\theta}(s') - V^{\\pi_\\theta}(s)$\n",
" - $\\mathbb{E}_{\\pi_\\theta}[\\delta^{\\pi_\\theta}|s,a] = \\mathbb{E}_{\\pi_\\theta}[r + \\gamma V^{\\pi_\\theta}(s') | s,a] - V^{\\pi_\\theta}(s)$\n",
" - $\\mathbb{E}_{\\pi_\\theta}[\\delta^{\\pi_\\theta}|s,a] = Q^{\\pi_\\theta}(s,a) - V^{\\pi_\\theta}(s)$\n",
" - $\\mathbb{E}_{\\pi_\\theta}[\\delta^{\\pi_\\theta}|s,a] = A^{\\pi_\\theta}(s,a)$\n",
" - and plug that into the gradient\n",
" - $\\nabla_\\theta J(\\theta) = \\mathbb{E}_{\\pi_\\theta}[\\nabla_\\theta\\log\\pi_\\theta(s,a)\\delta^{\\pi_\\theta}]$\n",
"- Actor-critic with eligiblity traces and TD(lambda)\n",
" - $\\Delta\\theta = \\alpha(v_t^\\lambda - V_v(s_t))\\nabla_\\theta\\log\\pi_\\theta(s_t,a_t)$\n",
" - $\\delta = r_{t+1} + \\gamma V_v(s_{t+1}) - V_v(s_t)$\n",
" - $e_{t+1} = \\lambda e_t + \\nabla_\\theta\\log\\pi_\\theta(s,a)$\n",
" - $\\Delta\\theta = \\alpha\\delta e_t$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Integrating Learning and Planning\n",
"- Learn a model directly from experience; plan with that model to construct a value function or policy\n",
" - Model predicts: transition probabilities P, reward function R\n",
" - Use the model to simulate the environment and generate experience without actually interacting with the world\n",
" - It's still online; we use the active experience to build the model, and solve with the model (with e.g. tree search)\n",
"- Advantages and disadvantages\n",
" - Good: In games with sharp value functions where a small change in state is a big change in reward, we can model that instead of learning with huge gradients\n",
" - Good: We can efficiently learn the model with supervised learning methods\n",
" - Good: We can reason about model uncertainty (offer Bayesian approaches to action selection)\n",
" - Bad: We now have two layers of uncertainity; a noisy model of the world, and a noisy value function\n",
"- Model architecture\n",
" - Learner 1: input $\\{(s_i,a_i)\\}_{i=1}^n$, output $\\{r_i\\}_{i=1}^n$ (regression problem)\n",
" - Learner 2: input $\\{(s_i,a_i)\\}_{i=1}^n$, output $\\{s_{i+1}\\}_{i=1}^n$ (density estimation problem)\n",
" - Example cost functions: RMSE for regression, KL Divergence for density estimation\n",
"- Toy example model, a table lookup\n",
" - Could take mean of rewards from s,a as model reward for s,a; take count of s' from s,a as probability of s' from s,a\n",
" - Could sample from experience tuples for s,a to get an s'; memory inefficient\n",
"- Sample-based planning\n",
" - Use our model to generate sample experience (s,a,r,s')\n",
" - Apply model-free RL e.g. Q-learning to sample experience\n",
" - This gives us a source of infinite data that we can learn over for as long as we want\n",
" - Our agent will only perform as well as our model of the environment\n",
"- Combining real and sampled experience\n",
" - Dyna is an architecture for:\n",
" - learning a model from real experience\n",
" - learning and planning with a value function / policy from combined real and simulated experience\n",
" - Dyna algorithm:\n",
" - Initialize Q(s,a) and model(s,a)\n",
" - while True:\n",
" - pick and execute action epsilon-greedily\n",
" - $Q(s,a) \\leftarrow Q(s,a) + \\alpha[R + \\gamma\\max_A{Q(s',A)} - Q(s,a)]$\n",
" - $model(s,a) \\leftarrow r,s'$\n",
" - for i in range(N):\n",
" - S <- random previously observed state\n",
" - A <- random previously taken action from S\n",
" - R,S' from model(S,A)\n",
" - $Q(S,A) \\leftarrow Q(S,A) + \\alpha[R + \\gamma\\max_a{Q(S',a)} - Q(S,A)]$\n",
" - Dyna idea: update Q with real experience, use updated Q to generate fake experience, continue to update Q with that experience\n",
" - Efficient re-use of data to learn faster over the same amount of real experience\n",
" - The more times we hallucinate data (the larger N is), the faster we will converge on the optimal value function / policy\n",
"- Dyna Q+\n",
" - The + is a way of incentivizing exploration, in case the environment changes and we're still stuck on our policy\n",
" - Say you're playing poker and suddenly a whole new table of players sits down; if you're not incentivized to explore, you'll just act according to your previously-optimal policy for a long time before figuring out that it sucks\n",
"- Forward search algorithms for simulation-based search\n",
" - Forward search chooses the best action by lookahead\n",
" - Build a search tree rooted at the current state (don't care about the past or about far off states)\n",
" - Use our model (approximation to the MDP) to roll forward the dynamics and do lookahead\n",
" - Solve the sub-MDP starting from the current state\n",
"- Forward search with sample-based planning\n",
" - Generate samples using the dynamics to roll-forward episodes\n",
" - Use those samples in a model-free setting and learn over those\n",
"- Simple MC search\n",
" - Given a model and a simulation policy $\\pi$ (doesn't have to be a great policy, but we will improve it)\n",
" - For each action a in A:\n",
" - simulate K episodes from current state, evaluate by mean return\n",
" - Select best A from simulated mean return\n",
"- Monte Carlo Tree Search\n",
" - Given a model\n",
" - Simulate K episodes from current state\n",
" - Build a search tree containing visited states and actions (tree that has state nodes and action nodes mapped out)\n",
" - Evaluate states by mean return of episodes starting with s,a\n",
" - Select best A based on maximum return from search tree\n",
"- MCTS simulation policy\n",
" - $\\pi$ improves because we're always (epsilon) maximizing over Q, always going down the best path and exploring the best parts of the tree\n",
" - When we get to unexplored areas down the tree (further in the simulation) where they're all still at their initial, equivalent values, we pick actions randomly\n",
"- Replacing MC with TD\n",
" - apply SARSA to sub-MDP from current state, instead of mean return\n",
" - as always, TD is usually more efficient; it reduces variance, but increases bias\n",
"- Dyna-2; long + short term memory\n",
" - long-term is updated with TD-learning over real experience\n",
" - short-term is updated with TD-search over simulated experience from current state\n",
" - value function is just the sum of the two value functions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploration and Exploitation\n",
"- Are there better ways than epsilon-greedy to explore effectively?\n",
" - Random exploration (this is epsilon-greedy)\n",
" - Optimism in the face of uncertainty\n",
" - Estimate uncertainty on value\n",
" - Prefer to explore states/actions with the highest uncertainty (Dyna+)\n",
" - State A is always +10, State B could be 5-20; try B. If it's better it's better, if not whatever\n",
" - Information state space\n",
" - Consider agent's information as part of its state\n",
" - State variable is # of times we've been in this state\n",
"- State-action vs parameter exploration\n",
" - State-action: systematically explore state-space and action-space; e.g. pick new A each time S is visited\n",
" - Parameter: parameterize policy, randomly start fiddling with the parameters to try some new combination out\n",
" - Advantage: exploration is consistent, not random; we change our parameters a bit so we're still kind of following the same path, but still kind of exploring\n",
" - Disadvantage: doesn't know about state-action space (doesn't know what we've already tried)\n",
"- Example problem: multi-arm bandits\n",
" - No states, just rewards and actions\n",
" - Each bandit is an arm we can pull (action we can take) for some reward. We want to maximize our reward over many pulls\n",
" - One step episodes\n",
"- Cost function for actions: regret\n",
" - $L_t = \\mathbb{E}[\\sum_{\\tau=1}^t v_\\star - q(A_\\tau)]$\n",
" - The expected total difference over all actions between the optimal value and what we're actually getting\n",
" - The \"gap\" for an action $\\Delta_a$ is $\\Delta_a = v_\\star- q(a)$; the difference between that action and the best action\n",
" - The \"count\" for an action is the expected number of uses of that action\n",
" - Regret is a function of gaps and counts: $L_t = \\sum_{a \\in A}\\mathbb{E}[N_t(a)]\\Delta_a$\n",
" - Optimizing regret thus means causing small counts for large gaps\n",
"- If we never explore, our regret will be huge, because we'll lock onto a sub-optimal action and incur regret forever\n",
" - Exploration technique: optimistic initialization\n",
" - Initialize every Q value to be the max reward possible\n",
" - Everything is 100; we pull A, it gets 10, now everything but A is the best possible; we're guaranteed to try everything at least once, before deciding it sucks\n",
" - Exploration technique: decaying epsilon for epsilon-greedy\n",
" - pretend we know $v_\\star$, the optimal value, and therefore know $\\Delta_a$\n",
" - pick this decaying schedule: $\\epsilon_t = \\min(1, \\frac{c|A|}{d^2t})$, where $d = \\min_{a|\\Delta_a>0}\\Delta_a$\n",
" - this schedule guarantees logarithmic asymptotic total regret (it flatlines out to infinity instead of growing forever\n",
"- How can we replicate this decayed epsilon when we don't know $\\Delta_a$, which we never do?\n",
" - Use upper confidence bounds on each of the actions\n",
" - with high probability, $q(a) \\le Q_t(a) + U_t(a)$; thus, $U_t(a)$ acts as a sort of 95% confidence interval on the mean reward\n",
" - U is inversely proportional to N; the more times we pick an action, the smaller it gets\n",
"- How do we set U?\n",
" - Pick some probability p for our \"confidence interval\" (e.g. 5%); this is the probability that the true value exceeds the upper confidence bound\n",
" - Hoeffding's inequality: $P[\\mathbb{E}[X] > \\overline{X}_t + u] \\le e^{-2tu^2}$\n",
" - $P[q(a) > Q_t(a) + U_t(a)] \\le e^{-2N_t(a)U_t(a)^2}$\n",
" - $U_t(a) = \\sqrt{\\frac{-\\log{p}}{2N_t(a)}}$\n",
" - Reduce p as we observe more rewards, e.g. $p = t^{-4}$ (be more confident that we're right as we experience)\n",
"- UCB1 algorithm\n",
" - $A_t = \\argmax_{a \\in A}Q_t(a) + \\sqrt{\\frac{2\\log t}{N_t(a)}}$\n",
" - This will achieve the logarithmic asymptotic reward we wanted\n",
"- Bayesian approach when assuming / given prior distribution for actions\n",
" - Bayesian UCB\n",
" - $p[Q|w]$, we find a distribution for the action-value function (the reward) and parameterize it by w, which could be e.g. a vector of the means and variances of all the arms\n",
" - use Bayes theroem to update the means and SDs, then pick a t-score and evaluate the actions at that t value (e.g. mean + 3 SDs, max a for that)\n",
" - Bayesian probability matching with Thompson sampling\n",
" - $\\pi(a) = \\mathbb{E}[1(Q(a) = \\max_{a'}Q(a')) | R_1,...,R_{t-1}]$\n",
" - Now we want to find the probability that each action is the best action\n",
" - We just sample one reward from each action's distribution, and take the best action from those samples\n",
" - This simple idea achieves the lower bound, and there are no hyperparameters like SD\n",
"- Information state exploration\n",
" - Treat the reward distributions for actions as states, generate an MDP from that\n",
" - The actions of the MDP are still the actions, and each child is an outcome from that action\n",
" - If we do action A and it succeeds, that's a state; if we do action A and it fails, that's a state\n",
" - If we use a beta distribution for each action, that's parameterized very easily by the counts of successes and failures (rewards of 0 and 1)\n",
" - This is intractable without sampling and tree search (because we just exploded our state space size)\n",
"- How do we apply all of this to MDPs with states?\n",
" - We just do it. This all generalizes when we add in s\n",
" - e.g. $A_t = \\argmax_{a \\in A}Q(S_t,a) + U(S_t,a)$\n",
" - One thing it doesn't account for is that our Q improves; U doesn't know how close Q is to the optimal policy, so it might still be suggesting to explore when we're almost done\n",
" - Rmax algorithm\n",
" - For any state we don't know about, set it to $R_max$; this will cause the MDP to guide us to that state, so that we can figure out its true value, then we do that again with another state\n",
" - The MDP systematically guides us to each state"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [Root]",
"language": "python",
"name": "Python [Root]"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.12"
}
},
"nbformat": 4,
"nbformat_minor": 0
}