@parmentf
Created August 31, 2018 15:03
ECTOR technical notes with Markdeep
          <meta charset="utf-8" emacsmode="-*- markdown -*-">

                        **ECTOR technical notes**

Concept Network

BAsCET is the fundamental architecture underlying ECTOR. It is described in my PhD thesis (in French only).

The Concept Network is BAsCET's model; it corresponds to COPYCAT's Slipnet. It is dynamic: it evolves to adapt to the problem. We can call this model a hybrid one, between a semantic network and an artificial neural network. From the semantic network, it gets the symbolic nature of its nodes; from the neural net, it gets activation propagation through weighted links.

One can say it works by node association: when a node is activated and is strongly bound to another one, it activates this other node. Furthermore, by building a Concept Network in a certain manner, one can say it works by symbol association and concept emergence.

Concept notion

In BAsCET's frame (and in ECTOR's too), a concept is a set of nodes in the Concept Network that are strongly bound together and simultaneously activated. Thus, when nodes named network, neural, Widrow, Kohonen, and Hoff are in a Concept Network, one can assume that they constitute one concept, different from the one constituted by Markov and hidden, though the network node could belong to the hidden Markov network concept.


                   ###########                  ######
                  ## Kohonen ###                ######  Neural Network Node
     ~ Markov------network ##|####### Widrow ###
     ~~|~~~~~~  ##|#######|#######/###########  ~~~~~~
     ~ HMM-------hidden # neural-----------'#####
                                                ------  Links

[Figure [concept]: two concepts (Neural network and Markov Network)]

This is true only if the Concept Network was designed to associate conceptually close symbols.

Let's detail the Concept Network's components.

Network components

The Concept Network is a net made of nodes and weighted links between them. It can be fully connected.

Nodes

Each node has these fields:

symbol (S) : contains the symbolic information.

conceptual importance (CI) : the more abstract a node, the greater its conceptual importance.

activation (A) : indicates how strongly the node is currently activated.

decay rate (DR) : governs how quickly the node deactivates. By default, it depends on the number of incoming links and on the node's CI.

agents (Ag) : agents that the node can launch into the Coderack when its activation exceeds an activation threshold. They generally have to identify or locate an instantiation of this node in the Blackboard. In ECTOR's case, they have to select the appropriate answer to the user's input.

Each instantiation created in the Blackboard activates its father node (the father's activation then becomes maximal). An instantiation also lowers the father's decay rate, which is divided by the number of its instantiations in the Blackboard, plus one.
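As a sketch, the fields above could be modelled like this. This is a hypothetical Python rendering: the field names, types, and the `instantiate` helper are my assumptions, not ECTOR's actual code.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    symbol: str                 # S: the symbolic information
    conceptual_importance: int  # CI: higher for more abstract nodes
    activation: float = 0.0     # A: current activation value (0..100)
    decay_rate: float = 1.0     # DR: how quickly activation decays
    agents: List[Callable] = field(default_factory=list)  # Ag: agents to launch
    instantiations: int = 0     # instantiations of this node in the Blackboard

    def instantiate(self) -> None:
        """An instantiation in the Blackboard activates the father node
        to its maximum and slows down its decay."""
        self.instantiations += 1
        self.activation = 100.0

    def effective_decay_rate(self) -> float:
        # DR divided by the number of instantiations in the Blackboard, plus one.
        return self.decay_rate / (self.instantiations + 1)
```

For instance, a `birthday` node with DR = 2 keeps that decay rate while uninstantiated; after one instantiation in the Blackboard, its effective decay rate drops to 1.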

Links

!!! Note TODO

Activation propagation

In BAsCET, the activation value is propagated from one semantic node to another.

Let's start from the example shown in the link explanation.


    .----------------.                            .--------------.
    | S: birthday    |                            | S: cake      |
    +----------------+                            +--------------+
    | CI: 80         |    .--------+--------.     | CI: 70       |
    | AV: 100 %      +----+ T: eat | W: 95% +-----+ AV: 0%       |
    | DR: 2          |    '--------+--------'     | DR: 5        |
    | Ag: ComputeDay |                            | Ag: CookCake |
    '----------------'                            '--------------'

Propagating the activation value of birthday through the link eat to the node cake will give the following Concept Network:


    .----------------.                            .--------------.
    | S: birthday    |                            | S: cake      |
    +----------------+                            +--------------+
    | CI: 80         |    .--------+--------.     | CI: 70       |
    | AV: 98 %       +----+ T: eat | W: 95% +-----+ AV: 86%      |
    | DR: 2          |    '--------+--------'     | DR: 5        |
    | Ag: ComputeDay |                            | Ag: CookCake |
    '----------------'                            '--------------'

The activation value $AV^{t+1}_i$ of a node $i$ at the instant $t + 1$, after propagation, is expressed as the sum of its old activation value $AV^t_i$ and the other nodes' influence $I_i$, minus a deactivation $D_i$, depending on its decay rate $DR_i$.

$AV^{t+1}_i = AV^t_i + I_i - D_i$

The influence of other nodes could be given by this classical formula:

$I_i = \frac{\sum_{j \neq i}{AV^t_j \times W_{ji}}}{100}$

but it leaves the influence unbalanced between nodes with many neighbours (influencing them) and nodes with almost none.

That's the reason why BAsCET uses a slightly more complicated influence formula, with a logarithmic behaviour.

As the birthday node has no incoming link, $I_{birthday}$ is null.

$D_{birthday} = \frac{100 \times 2}{100} = 2$

So, its new activation value is $AV^1_{birthday} = AV^0_{birthday} + I - D = 100 + 0 - 2 = 98$.

As far as the cake node is concerned, the situation is different: its activation value is null, and so is its deactivation. However, it is influenced by birthday:

$I_{cake} = \frac{A^0_{birthday} \times W_{birthday,cake}}{100 \times Div_{cake}} = \frac{100 \times 95}{100 \times Div_{cake}} = \frac{95}{Div_{cake}}$

$Div_{cake} = \frac{\ln{4}}{\ln{3}} \approx 1.0986$, so $I_{cake} \approx 86$.

Finally, $AV^1_{cake} = 0 + 86 - 0 = 86$.
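The two updates above can be checked numerically. This is a minimal sketch: the deactivation formula $D_i = AV_i \times DR_i / 100$ and the divisor value $Div_{cake} \approx 1.0986$ are taken directly from the worked example (the general logarithmic rule is not reproduced here), and the function names are mine.

```python
def deactivation(av: float, dr: float) -> float:
    # D_i = AV_i * DR_i / 100 (e.g. 100 * 2 / 100 = 2 for birthday)
    return av * dr / 100.0

def influence(incoming, div: float) -> float:
    # I_i = sum_j(AV_j * W_ji) / (100 * Div_i); incoming = [(source AV, weight %)]
    return sum(av * w for av, w in incoming) / (100.0 * div)

# birthday has no incoming link: only deactivation applies.
av_birthday = 100.0 + 0.0 - deactivation(100.0, 2.0)   # 98.0

# cake starts at 0 (so its deactivation is 0) and is influenced by birthday.
div_cake = 1.0986  # divisor value stated in the text
av_cake = 0.0 + influence([(100.0, 95.0)], div_cake) - 0.0   # ~86
```

Rounded to an integer, `av_cake` gives the 86% shown in the second diagram.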

In this example, one could add a candle node, associated with the cake and birthday symbols. One could add a link from cake to candle labeled birthday; then, when birthday and cake are both activated, this link activates candle.

Simple influence

!!! TODO

Labels

!!! WARNING No longer implemented in recent versions of the Concept Network.

Labels are nodes whose activation modifies the influence of a node on another one.


                      .----------+-.
                      | opposite |3|
                      '-----+----+-'
                            |
        .-------+-.         |         .---------+-.
        | happy |1|<--------+-------->| unhappy |2|
        '-------+-'                   '---------+-'

Let $C_1$ be the occurrence count of word 1 (happy).

Let $C_{12}$ be the co-occurrence count of words 1 and 2.

Let $A_3$ be the activation level of word 3 (opposite, the label node).

Let $I_{12}$ be the percentage of influence of node 1 on node 2.

$I_{12} = \frac{C_{12}}{C_1} + \left(1 - \frac{C_{12}}{C_1}\right) \times A_3$
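A small sketch of this formula, under the interpolation reading: the influence starts at the co-occurrence ratio $C_{12}/C_1$ and is pushed toward 1 as the label node's activation $A_3$ rises. The function name and the $[0, 1]$ scale for $A_3$ are assumptions.

```python
def label_influence(c1: int, c12: int, a3: float) -> float:
    # base influence: co-occurrence ratio of words 1 and 2
    base = c12 / c1
    # the label's activation a3 (in [0, 1]) raises the influence toward 1
    return base + (1.0 - base) * a3
```

With $C_1 = 10$ and $C_{12} = 3$: an inactive label leaves the influence at 0.3, while a fully active "opposite" label raises it to 1.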

Concepts/expression creation

!!! Tip Sometimes called "multiterms".

Words often have several meanings; that's why a concept is a combination of several words. Actually, an expression gives a particular sense to each of the words it contains.

Moreover, some words are often related and take on a particular sense when associated (like neural and network). That's why we can create a new semantic node in the Concept Network when a sequence of words co-occurs with (nearly) equal frequency.


                                                 +---(2)---> many(4)
                                                 |
                                                 v
    there (5) <----(4)---> has (10) <----(4)---> been (6)
                                                 ^
                         |                       |
                         |                       +---(1)--> several(1)
                         |
                         | CREATION OF "there has been"
                         v

                   there has been(4)
                    ^      ^      ^
                    |      |      |
           +--(4)---+     (4)     +--(4)---+   +--(2)--> many(4)
           |               |               |   |
           v               |               v   v
     there (5) <--(4)--> has(10) <--(4)--> been (6)
                                               ^
                                               |
                                               +--(1)--> several(1)

When a phrase is added, the higher-level concepts (higher than words) are also considered. Example: adding the phrase there has been something.


              there has been(5)
              ^      ^      ^
              |      |      |
     +--(5)---+     (5)     +--(5)---+   +--(2)--> many(4)
     |               |               |   |
     v               |               v   v
    there(6) <--(5)--> has(11) <--(5)--> been (7) <--(1)--> something(1)
                                         ^
                                         |
                                         +--(1)--> several(1)

In order to create a concept, all the nodes to be integrated must have equivalent ($95\%$) co-occurrence counts.

The created node then has a number of occurrences equal to the minimum of the co-occurrences considered.

"Creating" nodes influence the new one, but this one influences only the first and last constituents of the string.

A created node can be a sub-concept of another existing one (for example a sentence, or another created concept).


             s e n t e n c e
             |   |    |   |
             |   |    |   |
         .--'    |    |    '--.
        |        |    |        |
        |        |    |        |
        word1-----word2---word3----word4

[Figure [before]: Concept Network before concept creation]


            s e n t e n c e
                  |      |
                  |      |
              created     '--- word4 ---.
                node                    |
               | | |                    |
        word1 ---' | '---- word3 -------'
                 |
               word2

[Figure [after]: New concept with words 1 to 3]
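The creation rule described above can be sketched as follows. The helper name, the tuple return value, and the exact equivalence test (a ratio against the $95\%$ threshold quoted earlier) are assumptions; only the "equivalent co-occurrences, minimum count" rule comes from the text.

```python
def maybe_create_concept(words, cooccurrences, tolerance=0.95):
    # Create a multiterm node only if the consecutive co-occurrence
    # counts are equivalent, i.e. within `tolerance` of each other.
    if min(cooccurrences) / max(cooccurrences) < tolerance:
        return None
    # The created node's occurrence count is the minimum co-occurrence.
    return (" ".join(words), min(cooccurrences))
```

In the "there has been" example, the co-occurrences (there, has) = 4 and (has, been) = 4 are equivalent, so the node "there has been" is created with 4 occurrences; a pair like (been, several) with counts 4 and 1 would not qualify.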

Sequence of words integration

Let's see, through an example, in what order words are integrated into the Concept Network. Here is part of the initial Concept Network:


                                                +-------(1)-----> program (1)
                                                |
    ECTOR (5) <----(2)----> is (3) <----(3)----> a (3) <----(1)----> bot (2)
                            |
    Achille (1) <----(1)----+

Now, let's consider the sentence to integrate: Achille is a program.

  1. Addition of occurrences and co-occurrences:

         Achille (2) <----(2)----> is (4) <----(4)----> a (4) <----(2)----> program (2)

  2. Expression/concept detection (co-occ >= MinCo-occ = 4):

           +--------------> Achille is a program (1) <--------------+
           |                          ^                             |
           |                          |                             |
           |                         (1)                            |
          (1)                         |                            (1)
           |                          v                             |
           |                +-(4)--> is a (4) <--(4)-+              |
           |                |                        |              |
           v                v                        v              v
         Achille (2) <--(2)--> is (4) <-----(4)-----> a (4) <--(2)--> program (2)

  3. Labels detection (simplified figure):

           +-----> Achille is a program (1) <----+
           |               ^                     |
           |               |                     |
          (1)             (1)                   (1)
           |               |                     |     ======>            is a (4)
           v               v                     v                           |
         Achille(2) <--(2)--> is a (4) <--(2)--> program (2)   Achille (2) <--(1)--> program (2)
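Step 1 above (updating occurrences and co-occurrences) can be sketched like this. The counter names and the undirected-pair representation are my assumptions; the real ECTOR would also run the concept and label detection steps afterwards.

```python
from collections import Counter

def add_sentence(occ: Counter, cooc: Counter, words: list) -> None:
    # Step 1: each word's occurrence count and each adjacent pair's
    # co-occurrence count are incremented by one.
    occ.update(words)
    for a, b in zip(words, words[1:]):
        cooc[frozenset((a, b))] += 1

# Counts taken from the initial Concept Network shown above.
occ = Counter({"ECTOR": 5, "is": 3, "a": 3, "bot": 2,
               "program": 1, "Achille": 1})
cooc = Counter({frozenset(("ECTOR", "is")): 2, frozenset(("is", "a")): 3,
                frozenset(("a", "bot")): 1, frozenset(("a", "program")): 1,
                frozenset(("Achille", "is")): 1})

add_sentence(occ, cooc, ["Achille", "is", "a", "program"])
# Achille: 2, is: 4, a: 4, program: 2 -- as in step 1 of the example
```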

<style class="fallback">body{visibility:hidden}</style><script src="https://casual-effects.com/markdeep/latest/markdeep.min.js?"></script>