
@micimize
Last active October 13, 2020 22:22
An ill-formed model for language that could potentially be useful in AGI work

My Model of Language

My current model is that natural language can, and should, be modeled as a programming language, and that doing so could prove fruitful, possibly even vital, for producing AGI.

Expression Rewriting Interpreter

If language is code, it is fundamentally a pure expression rewrite system, like the Wolfram Language. This is the root of the "signs only point to signs" problem: concretion is only possible at runtime. Language can afford its looseness because of the robustness and extensibility of its interpreter.
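The rewrite-system idea can be sketched very minimally. Here the rules and symbols are made-up illustrations, but the mechanism (symbols rewriting to other symbols until a ground term is reached) is the one the paragraph describes:

```python
# A toy pure expression rewrite system, in the spirit of the Wolfram
# Language's ReplaceAll. Symbols only "point to" other symbols until
# rewriting halts; concretion happens only when a ground term is reached.

RULES = {
    "greeting": ["hello", "listener"],
    "listener": ["you"],
}

def rewrite(expr, rules):
    """Recursively rewrite symbols until no rule applies (a fixed point)."""
    if isinstance(expr, list):
        return [rewrite(e, rules) for e in expr]
    if expr in rules:
        return rewrite(rules[expr], rules)
    return expr  # ground term: this is where "runtime" concretion occurs

print(rewrite("greeting", RULES))  # → ['hello', ['you']]
```

Note that nothing in the rules themselves is concrete; only the interpreter's base symbols ("hello", "you") ground the expression.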

All language concepts are available, present in their final form. The Haskell type system has nothing on our type system. Expressions can be curried, transformed, inverted, and dereferenced. None of this is surprising: programming-language constructs are mostly formalizations derived from natural language (mostly English).
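Currying, at least, translates directly. As a hedged sketch, a sentence frame can be treated as a function that is partially applied one argument at a time; the `saw` frame below is an illustrative assumption built from a sentence used later in this essay:

```python
from functools import partial

# A sentence frame as a curryable function. Each partial application
# fixes one slot, mirroring how "Steve saw bears..." narrows a frame.
def saw(subject, obj, place):
    return f"{subject} saw {obj} in the {place}"

steve_saw = partial(saw, "Steve")              # curry the subject
steve_saw_bears = partial(steve_saw, "bears")  # curry the object

print(steve_saw_bears("east"))  # → Steve saw bears in the east
```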

Evaluation at Runtime

The Concrete

How are statements in language processed?

Foundationally, we can resolve symbols against the present context. "Look to your right, bears, run!" can be clearly mapped to a set of runtime modules. The automaton will parse the symbols into visual, directional, memory, and movement modules, and, assuming they are all intact, execute the desired procedure.

These modules are highly interconnected. Much as Wolfram has poured effort into coherent grammars within and across domains, we strive for what we already have: "Steve saw bears in the east" logs to the memory module, and so on.

Some particularly important modules to drive this home are the aforementioned memory module, as well as the mentalization, simulation, and reasoning modules.
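The symbol-to-module resolution above can be sketched as a dispatch table. The module names come from the essay; the table itself and the tokenization are illustrative assumptions, not a serious parser:

```python
# A hedged sketch of resolving an utterance's symbols to runtime modules.
# Only content words are mapped; function words ("to", "your") pass through.

MODULES = {
    "look": "visual",
    "right": "directional",
    "bears": "memory",     # "bears" resolves against stored concepts
    "run": "movement",
}

def parse_to_modules(utterance):
    """Return (symbol, module) pairs for every symbol the table grounds."""
    tokens = utterance.lower().replace(",", "").replace("!", "").split()
    return [(tok, MODULES[tok]) for tok in tokens if tok in MODULES]

plan = parse_to_modules("Look to your right, bears, run!")
print(plan)
# → [('look', 'visual'), ('right', 'directional'),
#    ('bears', 'memory'), ('run', 'movement')]
```

Executing the plan would then be a matter of calling each module in sequence, assuming all are intact.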

An example, "For sale: baby shoes, never worn"
  1. I can map the symbols of sales, baby shoes, and wearing to known concepts from memory.
  2. My concept of sales includes that they can happen to new and secondhand items.
  3. I know there is no reason to specify wear if the item is new.
  4. Thus, from reason I know the item is secondhand, but that a baby never wore them.
  5. Thus, the purchaser was never able to give the shoes to a baby to wear.
  6. Mainly from mentalization, I know that this is why baby shoes are purchased.
  7. From simulation I can generate a number of possible scenarios as to why this happened,
  8. But the first one generated (likely due to word-vector proximity) is that the mother is reselling the shoes.
  9. This, again, has multiple possible implications, but mentalization has us hone in on the most poignant.
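Steps 1 through 4 above can be sketched as toy forward-chaining over facts, with the reasoning module's rules hand-encoded. The fact and rule names are illustrative assumptions; a real reasoning module would be nothing this tidy:

```python
# A toy forward-chaining sketch of steps 1-4: parsed facts plus simple
# rules derive that the shoes are secondhand yet never worn by a baby.

facts = {"for_sale", "baby_shoes", "never_worn"}
rules = [
    # sales concept: items for sale are new or secondhand
    ({"for_sale"}, "new_or_secondhand"),
    # no reason to specify wear on a new item, so it must be secondhand
    ({"never_worn", "new_or_secondhand"}, "secondhand"),
]

def forward_chain(facts, rules):
    """Apply rules until no new conclusions appear (a fixed point)."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain(set(facts), rules)
print("secondhand" in derived)  # → True
```

Steps 5 through 9 would then hand "secondhand" off to the mentalization and simulation modules, which this sketch does not attempt.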

This is still a wild oversimplification. Also, my mentalization module is making me regret choosing such a morbid example.

Note: Mentalization is the ability to understand mental states, one's own or others'.

The Metaphorical

This accounts for a great deal, but it is rather trivial to say these modules exist and that we utilize them in language. How are these modules constructed and glued together? Why can we make statements like "do you see what I'm saying?" My position is that the faculty pivotal to language is metaphor, which allows us to map between and amidst different modules. We perform a kind of metaphorical algebra, one that likely existed before language as well, but which, due to its integral role in language, becomes far more robust after language develops.

Actually, my suspicion is that the other modules were developed solely from the initial stuff of experience and metaphor, which could be viewed as synonymous with abstraction.
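One hedged way to make "metaphorical algebra" concrete is vector arithmetic over embeddings, the classic word-analogy trick. The tiny hand-made vectors below are illustrative assumptions, not trained embeddings; they exist only to show the mapping from the perceptual domain into the mental one ("do you see what I'm saying?"):

```python
import math

# Toy 2-d "embeddings": the first axis is roughly perception vs. nothing,
# the second axis is roughly mental vs. physical. Purely illustrative.
VECS = {
    "see":        [1.0, 0.0],  # perception domain
    "understand": [1.0, 1.0],  # perception word lifted to the mental domain
    "hear":       [0.0, 0.0],
    "grasp":      [0.0, 1.0],  # "do you grasp what I'm saying?"
}

def nearest(target, exclude):
    """Closest known word to the target vector, ignoring the analogy's inputs."""
    candidates = (w for w in VECS if w not in exclude)
    return min(candidates, key=lambda w: math.dist(VECS[w], target))

# see : understand :: hear : ?  — map another perception word across domains
a, b, c = VECS["see"], VECS["understand"], VECS["hear"]
target = [c[i] + (b[i] - a[i]) for i in range(2)]
print(nearest(target, exclude={"see", "understand", "hear"}))  # → grasp
```

The algebra here is literal (addition and subtraction of vectors); whether anything like it underlies human metaphor is exactly the open question this essay gestures at.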

Consequence

So what's the point? Obviously we won't articulate the rules of such a complex language anytime soon.

Well, the consequence for the science of intelligence is that if we want humanoid intelligence, we should focus on the following fundamentals:

  1. Embodiment: The automata must have our fundamental physical affordances, at least in part.
  2. Experiential Parsing: The ability to resolve a statement's symbols into concrete runtime modules grounded in experience
  3. Metaphorical Parsing: The ability to perform metaphorical algebra on symbols and the world

Of those, I believe 1 and 2 are being seriously pursued... and honestly 3 probably is also. I have no idea if any of this is novel or actionable. But I haven't found anything elsewhere about the "metaphorical algebra," so.

Comment from @micimize (author):

This idea might be too unrefined to be truly useful, but I really think further interrogation of the metaphorical algebra idea could bear actual fruit.
