@jamii
Created March 5, 2018 13:50

doesn't include actions

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0006421

mountain car in detail

policies are functions of sensory input

state entropy is bounded by sensory entropy + a transfer term

ergodic assumption: sensory entropy = the time average of surprise under the equilibrium density, i.e. min sensory entropy = min surprise

minimizing surprise in a dynamic environment requires controlling the environment - can't just wait it out

but surprise can't be computed directly, so it has to be bounded (by free energy)

'The recognition density is a slightly mysterious construct because it is an arbitrary probability density specified by the internal states of the agent. Its role is to induce free-energy, which is a function of the internal states and sensory inputs.'

policies are functions of sensation and of the internal states that parameterize q

'Note that the true states depend on action, whereas the generative model has no notion of action; it just produces predictions that action tries to fulfil.'

here the generative model fully specifies the location, and we pick actions to minimize the prediction error of that model

precision is the inverse variance of the prior? it governs whether discrepancies are resolved by action or by perception? (how do we choose precisions? see the sketch below)

the utility function is given as a desired equilibrium density
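A minimal scalar sketch of that action-vs-perception split (my own toy, not the paper's mountain-car model; eta, pi_prior, pi_sensory and the world model s = a are assumptions for illustration): free energy is a precision-weighted sum of squared prediction errors, perception descends its gradient with respect to the internal estimate mu, and action descends its gradient with respect to the state it can actually change.

```python
# Toy active-inference loop (a scalar sketch, not the paper's mountain-car model).
# Generative model: prior mu ~ N(eta, 1/pi_prior), sensation s ~ N(mu, 1/pi_sensory).
# World: the sensed state simply equals the action, s = a, so ds/da = 1.
# Free energy (dropping constants) = precision-weighted sum of squared prediction errors.

eta = 1.0         # prior expectation - the state the agent expects/"desires"
pi_prior = 8.0    # precision (inverse variance) of the prior on mu
pi_sensory = 2.0  # precision of the sensory mapping
lr = 0.05         # gradient step size

def free_energy(s, mu):
    return 0.5 * pi_sensory * (s - mu) ** 2 + 0.5 * pi_prior * (mu - eta) ** 2

a, mu = -1.0, -1.0
for _ in range(500):
    s = a                # sensation generated by the world, given the current action
    eps_s = s - mu       # sensory prediction error
    eps_p = mu - eta     # prior prediction error
    # Perception: descend dF/dmu. With low pi_prior, mu gets dragged towards s,
    # i.e. the discrepancy is resolved by changing beliefs.
    mu -= lr * (-pi_sensory * eps_s + pi_prior * eps_p)
    # Action: descend dF/da = pi_sensory * eps_s * ds/da. With high pi_prior,
    # mu stays near eta, the sensory error persists, and action does the work,
    # i.e. the discrepancy is resolved by changing the world.
    a -= lr * pi_sensory * eps_s

print(round(a, 3), round(mu, 3), round(free_energy(a, mu), 6))
```

In the paper the generative model also carries the car's dynamics and the precisions are fixed noise parameters; everything is collapsed to scalars here just to show how the precision ratio decides whether a sensory discrepancy is absorbed by belief updating or by acting on the world.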

'In brief, learning entails immersing an agent in a controlled environment that furnishes the desired equilibrium density. The agent learns the causal structure of this training environment and encodes it through perceptual learning as described above. This learning induces prior expectations that are retained when the agent is replaced in an uncontrolled or test environment. Because the agent samples the environment actively, it will seek out the desired sensory states that it has learned to expect.'

so rather than creating priors by hand, we just put the agent in an environment where it makes the right choices (toy version sketched below). can we learn e.g. regret minimization for a bandit?

'The key thing here is that the free-energy principle reduces the problem of learning an optimum policy to the much simpler and well-studied problem of perceptual learning, without reference to action.'

^ this seems like a key idea

'Increasing the relative precision of empirical priors on motion causes more confident behaviour, whereas reducing it subverts action, because prior expectations are overwhelmed by sensory input and are therefore not expressed at the level of sensory predictions.'

optimal control does some kind of gradient-climbing thing, where the value of each state has to be worked out from which end states are reachable from it? hard to solve. free energy instead allows simply training on optimal trajectories and letting the agent work out the correct policies

optimal control requires knowing the hidden states?

the paper acknowledges that they put the solution into the training environment, but still didn't need to learn a policy (what were the parameters that were learned? not clear to me. could be the entire trajectory? theta = mu looks like it learns the environmental dynamics - the mapping from states to senses and from states to future states. oh, also the parameters of the control polynomial. so it learnt a policy that was then directly implemented. not that impressive?)
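The 'put it in an environment where it makes the right choices' move can be caricatured in the same toy setting (again my sketch, with made-up numbers - the target 1.0, noise 0.25, pi_sensory and the learning rate are not from the paper): the controlled environment clamps the agent near the desired state, perceptual learning is just fitting a Gaussian to the visited states, and that fitted density becomes the prior that action later fulfils in the test environment.

```python
import numpy as np

# Toy version of learning the desired equilibrium density from a controlled
# training environment (a sketch; target 1.0 and noise 0.25 are made up).
rng = np.random.default_rng(0)
training_states = 1.0 + 0.25 * rng.standard_normal(1000)  # controlled environment holds the agent near 1.0

# "Perceptual learning" here is just density estimation over the visited states:
eta = training_states.mean()            # learned prior expectation
pi_prior = 1.0 / training_states.var()  # learned prior precision

# Test phase: uncontrolled environment, same perception/action gradient descent
# as in the previous sketch, with the learned prior plugged in. Nothing
# policy-like was ever fitted; acting to fulfil the prior does the work.
pi_sensory, lr = 2.0, 0.02
a, mu = -1.0, -1.0
for _ in range(1000):
    s = a
    eps_s, eps_p = s - mu, mu - eta
    mu -= lr * (-pi_sensory * eps_s + pi_prior * eps_p)  # perception
    a -= lr * pi_sensory * eps_s                         # action

print(round(a, 3), "vs learned prior", round(eta, 3))  # the agent steers itself to the trained state
```

The point of the sketch is the one in the quote above: what gets learned is a density over states, and the "policy" falls out of acting to realize it.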


precisions? in the mountain car example, precision is set by the assumed sensory/motor noise (its inverse variance)

can we do some picoeconomics?

q is really the desired world, not just an approximation. we act to make the real world look like the desired world.
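This reading drops out of the standard variational decomposition of free energy (textbook identities, not notation specific to this paper):

```latex
% F: variational free energy; q(\theta): recognition density over hidden causes;
% s: sensory input; p: the agent's generative model.
\begin{aligned}
F &= \mathbb{E}_{q(\theta)}\!\big[\ln q(\theta) - \ln p(s,\theta)\big] \\
  &= \underbrace{-\ln p(s)}_{\text{surprise}}
     + D_{\mathrm{KL}}\!\big[q(\theta)\,\|\,p(\theta \mid s)\big]
     && \text{(perception tightens the bound on surprise)} \\
  &= \underbrace{D_{\mathrm{KL}}\!\big[q(\theta)\,\|\,p(\theta)\big]}_{\text{complexity}}
     - \underbrace{\mathbb{E}_{q(\theta)}\!\big[\ln p(s \mid \theta)\big]}_{\text{accuracy}}
     && \text{(action changes } s \text{ to increase accuracy)}
\end{aligned}
```

Action enters only through s, so minimizing F with respect to action maximizes the accuracy term - it drives sensations towards those predicted under q. Minimizing with respect to the internal states that parameterize q tightens the bound on surprise. Read this way, q doubles as the desired world that action tries to realize.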
