@PtrMan
Last active August 29, 2015 14:21

There are no dimensions

A typical weakness of NNs is the curse of dimensionality. In reality (all?) information doesn't depend on the dimensions it is mapped to (see the sketch after the following list). For example:

  • a big letter A stays a big letter A in a smaller font
  • a big circular touch sensation stays a circular touch sensation even if the circle is smaller
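
A minimal sketch of the second bullet (the grid size and helper name are made up for illustration): the same circular touch pattern at two radii lands on quite different raw-input vectors, while a dimension-free description ("a circle", plus centre and radius) barely changes.

```python
import numpy as np

def circle_mask(grid_size, radius):
    # binary "touch image" of a filled circle centred on the grid
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    c = (grid_size - 1) / 2.0
    return ((xs - c) ** 2 + (ys - c) ** 2 <= radius ** 2).astype(float)

big = circle_mask(16, 6).ravel()    # big circular touch sensation
small = circle_mask(16, 2).ravel()  # same concept, smaller circle

# to a raw-pixel NN these are just two rather distant points in R^256 ...
cos = big @ small / (np.linalg.norm(big) * np.linalg.norm(small))
print("cosine similarity of the raw vectors:", round(cos, 3))
# ... while the higher-level description ("circle", centre, radius) is stable
```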

The way to not run into this problem is to avoid it in the first place, for example with

  • early binding of raw data into higher-level representations (line fitting, ...; see the sketch after this list)
  • other representations, like in NEAT, etc.
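
A sketch of what early binding could look like (line fitting is the example above; the helper name is invented): whether the raw signal arrives as 20 or 200 samples, it is bound to the same low-dimensional description before any NN sees it.

```python
import numpy as np

def bind_to_line(xs, ys):
    # early binding: replace N raw samples with a 2-parameter description
    slope, intercept = np.polyfit(xs, ys, deg=1)
    return slope, intercept

for n in (20, 200):
    xs = np.linspace(0.0, 1.0, n)
    ys = 3.0 * xs + 0.5 + np.random.normal(scale=0.05, size=n)
    print(n, "raw dimensions ->", bind_to_line(xs, ys))  # ~ (3.0, 0.5) either way
```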

RNNs vs. other approaches

  • RNNs map badly to problems with unbounded data structures, like lists, maps, trees, workspaces, etc.
  • Operators and/or scaffolds evolved with GA/GP/EA map better to these problems, and the parts are composable/decomposable (see the sketch after this list)
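
A toy sketch (all operator names are invented) of why the operator-based view composes so naturally: a "program" is just a sequence of small operators over an ordinary list, and sub-sequences can be spliced in or out, which is exactly the kind of structure GA/GP/EA-style search can manipulate.

```python
def op_dedup(xs):   return list(dict.fromkeys(xs))  # drop duplicates, keep order
def op_sort(xs):    return sorted(xs)
def op_reverse(xs): return xs[::-1]

def run(program, data):
    for op in program:
        data = op(data)
    return data

program = [op_dedup, op_sort, op_reverse]       # composed from small parts
print(run(program, [3, 1, 2, 3, 1]))            # -> [3, 2, 1]
print(run(program[:2], [3, 1, 2, 3, 1]))        # decomposed sub-program -> [1, 2, 3]
```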

As for the claim that it's possible to actually train an NN with some architecture on any real-world task: I doubt that, for good reasons:

  • (a) specific architectures map to specific problem categories
  • (b) learning doesn't scale well if everything is an RNN, see (a)
  • (c) RNNs are Turing complete (with some limitations, like limited storage space/neurons), but see (b): other formalisms map better to certain problems than RNNs (for example, an operator-based formulation can work with stacks, lists, heaps, trees, workspaces, etc.). Yes, it's equivalent, but that doesn't mean it's practically trainable or that it can adapt as fast as other solutions
  • (d) RNNs, and backprop in general, only work when the parts are homogeneously an RNN; if there is a non-RNN component inside, it can't learn (see the sketch after this list)
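
A minimal numeric sketch of point (d), using a finite-difference gradient as a stand-in for backprop (the pipeline itself is invented for illustration): a hard, non-differentiable component in the middle gives a zero gradient almost everywhere, so the weight in front of it receives no learning signal.

```python
import numpy as np

def loss(w, x, hard=True):
    h = w * x                            # smooth, trainable part
    if hard:
        h = (h > 0.5).astype(float)      # non-RNN / non-differentiable component
    return ((h - 1.0) ** 2).sum()

x = np.array([0.2, 0.3, 0.4])
w, eps = 1.0, 1e-4
for hard in (True, False):
    grad = (loss(w + eps, x, hard) - loss(w - eps, x, hard)) / (2 * eps)
    print("with hard component:" if hard else "smooth pipeline:   ", "dLoss/dw =", grad)
```

The smooth pipeline yields a non-zero gradient, while the hard component blocks it entirely, which is the homogeneity requirement in (d).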