Computational system implementing expression logic
// under construction //

expression logic

[Abstract]

Languages can be seen as streams of symbols used to carry, process, and exchange information. Expression logic is also a language, but it is a general kind of metalanguage capable of describing and hosting any other language. Expression logic is also able to perform any intermediate data processing that hosted languages are supposed to perform. Being a general kind of metalanguage, expression logic represents all of the following:

  • Expression recognizer and generator
  • SMT solver
  • Deductive system
  • Term rewriting framework
  • Metatheory language formalization

These descriptions render expression logic as a general problem solver and solution verifier, which seems to be a required minimum property when processing different kinds of formal languages. In this short exposition, we explore the basic principles and reveal the initial thoughts behind expression logic.


1. introduction

Expression logic is a declarative metalanguage. It is designed to be a computationally complete hosting environment for any other formal language. Formal languages may be represented by a set of rules guiding correct expression formation within a language's terms, together with their semantic definition. Expression logic should be able to understand a considerable number of different languages, once we provide their definitions.

Being a general kind of language subsumes an ability to perform computations of any kind, requiring expression logic to embrace Turing completeness as a measure of its applicability. Indeed, expression logic is Turing complete, reflecting its algorithmic universality: it can interpret discrete steps of computation within a general class of languages such as mathematics, logic, or different kinds of computational and programming systems which, at their own meta-level, may or may not fall into the Turing complete category.

Probably the clearest analogy to expression logic is to see it as a form of deductive system. Let's briefly survey the most used deductive systems to compare them to expression logic. Hilbert-style deduction, natural deduction, and sequent calculus all belong to the class of deductive systems. They are characterized by a successively increasing number of included primitive logical operators. Hilbert-style deduction incorporates only implication, where all injected rules take the form A -> B. Natural deduction adds conjunctions to the left implication side, so that rules take the form A1 /\ A2 /\ ... /\ An -> B. Sequent calculus further extends the basic language by including right-side disjunctions, as in A1 /\ A2 /\ ... /\ An -> B1 \/ B2 \/ ... \/ Bn.

The price paid for the simple syntax of Hilbert-style deduction is that complete formal proofs tend to get extremely long. A positive consequence is the relatively simple software implementation of Hilbert-style deduction. In contrast, a more complex deductive system syntax, as in natural deduction or sequent calculus, leads to shorter formal proofs. A negative consequence is a more complicated software implementation, which increases the chance of implementation errors and can reduce speed and memory performance.

Expression logic combines the positive properties of sequent calculus (proof construction simplicity) with the positive properties of Hilbert-style deduction (software implementation simplicity). Thus, expression logic includes all the basic logic operators, but relies on a kind of resolution rule as its one and only rule of inference. This combination seems to be just enough for efficiently performing the tasks which expression logic aims to formalize: solution finding and verification.
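For reference, here is the classical propositional resolution rule, written in this document's ASCII notation; the exact variant the expression logic engine uses is not spelled out here, so this is only the standard formulation:

// from the two clauses
A1 \/ ... \/ Am \/ x
~ x \/ B1 \/ ... \/ Bn
// infer the resolvent
A1 \/ ... \/ Am \/ B1 \/ ... \/ Bn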

2. expression logic

Expression logic is designed for describing problems and reasoning about solutions to them. A problem in expression logic is stated as a single formula consisting of atomic expressions connected by primitive or compound operators.

Expression logic formalizes the assertion of arbitrary language rules needed to recognize or generate target expressions. Considering the rules to be a medium for reaching the target expressions, we may form an expression tree rooted at the starting assumptions, branching over language rules applied iteratively to the assumptions and their results, and finally producing tree leaves representing the target expressions as an output of the whole process.

Expression logic may be used in two similar, but very broad scenarios:

  1. as an expression recognizer. In this scenario, we may ask whether an expression is part of a particular language. The same process that answers this question may also be used to determine whether a theorem is part of some theory. As a general use of the same process, we may verify whether a solution satisfies some problem.
  2. as an expression generator. In this scenario, by providing some constraints along with particular language rules, we actually generate an instance statement of that language. The same process may be used to generate a theorem of some theory. As a general use of the same process, we may produce a solution to some problem, if such a solution exists.

In both cases, expression logic bases its functioning on a specific satisfiability modulo theories solver that utilizes a kind of term rewriting system controlled by normalized expression logic structures. The term rewriting system verifies or produces possible solutions to stated problems by a custom combinatorial algorithm.

2.1. introductory syntax

Expression logic syntax may be represented by the following relaxed kind of BNF notation:

start := << exp >>

exp   := exp <-> exp
       | exp -> exp
       | exp /\ exp
       | exp \/ exp
       | ~ exp
       | exp exp
       | exp _ exp    
       | << exp >>
       | ( exp )
       | atom

atom  := top
       | literal
       | identifier
       | @identifier

The above grammar is prescribed to be unambiguous, with expression grouping priority following the top to bottom order of the rule cases: cases listed earlier bind more loosely than cases listed later. Under the general exp rule, the exp <-> exp and exp -> exp cases associate to the right, the exp /\ exp and exp \/ exp cases are commutative, while the exp exp and exp _ exp cases associate in a way defined by the surrounding logic rules. Identifiers are represented by any other strings of characters, possibly prefixed by the @ character. To this point, expression logic syntax looks exactly like that of propositional logic.
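For instance, under this reading, the following groupings hold:

a -> b /\ c    // reads as a -> ( b /\ c ), since /\ binds tighter than ->
a \/ b c       // reads as a \/ ( b c ), since sequencing binds tighter than \/
a -> b -> c    // reads as a -> ( b -> c ), by right association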

Now we extend propositional logic syntax with some new formations. The << exp >> form of expression introduces scoping notation, the exp exp and exp _ exp forms introduce propositional sequences, the @identifier atomic expression introduces implicit equality into expression logic, while the literal atomic expression introduces quoted strings and numeric values.

Further, when performing inference, we use the only hard-wired constant symbol: top. All inference that is to be performed is required to be a successive branch of the top atom. This requirement reduces computational cost by a whole order of magnitude, which is a very desirable property when implementing expression logic in a programming language of our choice.

The following section explains the semantics of expression logic. The first three sub-sections use the above introductory syntax, while the last, fourth sub-section extends the current syntax to support scope controlled axiomatization.

2.2. semantics

We place the foundations of expression logic on propositional logic, from which we inherit logical reasoning about expressions connected by logical operators. Logical constructs in expression logic are intended to be internally converted to conjunctive normal form before conducting the inference mechanism. The inference mechanism is then implemented as a simple resolution rule answering which possible solutions satisfy the questioned problem setup (see the implementation details about the specific engine behind expression logic in the accompanying document).
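As a minimal sketch of that pipeline (the clause forms below are the standard CNF reading, and the unit clause top reflects the requirement that all inference branches from top; neither is notation taken verbatim from this document), the setup ( top -> a ) /\ ( a -> 'x' ) converts to the clauses:

( ~ top \/ a ) /\ ( ~ a \/ 'x' ) /\ top

// resolving top against ( ~ top \/ a ) yields a
// resolving a against ( ~ a \/ 'x' ) yields the literal 'x', the solution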

We are about to introduce three extensions to propositional logic (namely scoping notation, propositional sequences, and implicit equality), and learn some problem patterns we can use to guide forming solutions of our interest. After bringing in each extension, we will immediately overview some examples which the extension makes possible to express. But before that, to get a feeling of how problem setups are intended to behave, let's overview a simple "Hello world" example:

<<
    (
        a -> 'Hello'
    ) /\ (
        b -> 'world'
    ) /\ (
        top -> a b '!'
    )
>>

This expression logic problem setup produces the string Hello world ! as a solution. The rule of thumb is that only sequences of string and number literals are exposed as the solution, while identifiers and expressions involving logical operators are used only as helpers to reach the literal sequences. This policy makes the last assertion, a b '!' below the top symbol, the main expression from which the target solution is produced.

2.2.1. scoping notation

As a first extension to propositional logic, scoping notation introduces distinct abstraction areas in which we can place our expression logic formulas. We introduce these areas by enclosing them within << and >> double angle brackets. Inside a specific scope area, as in propositional logic, identically written atoms stand for the same notions. When introducing a new scope, newly introduced atoms are not visible outside the scope, while atoms from parent scopes remain visible within the child scope. This forms a namespace system where parent scopes do not "see" child scopes, and sibling scopes do not "see" each other.

However, to be of any use, scopes have to expose something to the outer world. The only atoms they expose are literal atoms, which include numbers and strings. In other words, the input to a scope may consist of any atom or sequence from parent scopes, newly created atoms and sequences within the scope are hidden scope internals, while the scope's output is formed only of the literals incorporated within the scope.

expressing scoped substitution

In a simple example, to substitute parameters a := 1 and b := 2 in the expression a + b, we make the following assertion: << ( a -> 1 ) /\ ( b -> 2 ) /\ ( a '+' b ) >>. This model of execution conforms to the expression logic solution finding model, yielding 1 + 2 as a possible solution of the previous example.

In a more complex example, substitution may span different nested scopes:

<<
    top -> <<
        <<
            '(' x '+' z '+' 2 ')' /\
            ( x -> 0 )
        >> '-' <<
            '(' z '+' x '+' y ')' /\
            ( x -> 2 ) /\
            ( y -> 3 )
        >> /\
        ( z -> 1 )
    >>
>>

which results in:

`( 0 + 1 + 2 ) - ( 1 + 2 + 3 )`

In the above example, atom x stands for a different notion in the two sibling scopes, atom y is not visible outside the scope of its use, while atom z is inherited from the parent scope into the child scopes.

By hiding atoms within scopes, we may reuse the same atom names as entirely different values in different scopes. In a way, scoping frees us from the burden of explicitly avoiding unwanted atom name collisions, so we don't need to care about finding different names for different atoms. Instead, by introducing scopes, implicit atom renaming is done automatically for us.

2.2.2. propositional sequences

As a second extension to propositional logic, propositional sequences are expressions of the form a b c ..., and may contain atoms or other expressions. Just like individual atoms, sequences evaluate either to true or to false. The truth value of sequences is somewhat similar to the truth value of conjunctions: a sequence is true if all of its elements are true; a negation of a sequence is true if any of the sequence elements is false; otherwise, a sequence evaluates to false. In the syntax definition, we noted two kinds of sequences: exp exp and exp _ exp. The only difference between them is that exp exp requires whitespace between the expressions, while exp _ exp joins the two expressions together without a gap.
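The following minimal sketch contrasts the two sequence forms (the string values are only illustrative):

<< top -> 'abra' 'kadabra' >>      // a solution: abra kadabra
<< top -> 'abra' _ 'kadabra' >>    // a solution: abrakadabra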

Sequences also allow us to construct compound expressions, forming a kind of sequence tree. Sequence trees may interact in the inference process similarly to term rewriting. For example, if we have a pair of expressions a b c and b -> b1 b2, then it can be shown that a (b1 b2) c follows from a b c. Among other uses, this method is the basis for converting expression logic expressions to sequence normalized form.

An expression is said to be sequence normalized if every sequence in the expression consists only of atoms or negated atoms. Every expression of expression logic can be transformed to its sequence normalized form. Normalization is performed as in the following example: a compound sequence a ...comp... c may be replaced by the expression a b c /\ ( b -> ...comp... ), where b is a fresh atom and ...comp... may be any compound expression like (a1 <-> a2), (a1 -> a2), (a1 \/ a2), (a1 /\ a2), and similar variations and combinations including the ~ operator.
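As a sketch of a single normalization step, where b stands for a fresh helper atom introduced by the normalizer:

// compound sequence
a ( a1 \/ a2 ) c

// its sequence normalized counterpart
a b c /\ ( b -> ( a1 \/ a2 ) )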

We will overview a possible use of propositional sequences in the parse grammar rules example in the next section.

expressing parse grammar rules

Parsing, syntax analysis, or syntactic analysis is the process of analysing a string of symbols, either in natural language, computer languages or data structures, conforming to the rules of a formal grammar. Within computational linguistics the term is used to refer to the formal analysis by a computer of a sentence or other string of words into its constituents, resulting in a parse tree showing their syntactic relation to each other, which may also contain semantic and other information.

In formal language theory, a context-free grammar (CFG) is a certain type of formal grammar: a set of production rules that describe all possible strings in a given formal language. Production rules are simple replacements. For example, the rule

A -> α

where A is a nonterminal symbol and α is a sequence of symbols, replaces A with α. In CFGs, alternations are expressed by repeating the left rule side, placing each alternative on the right side. A mechanism similar to handling production rules in CFGs is also available in expression logic, but to express alternations, we may reach for disjunctions:

A -> ( α \/ β \/ γ )

where α, β, and γ are sequences of strings. As an example of a context-free grammar, we present a grammar for parsing simple mathematical operations in terms of expression logic:

<<
    (
        mathExp -> <<
            (
                sum
            ) /\ (
                sum -> (
                    sum '+' mul \/
                    mul
                )
            ) /\ (
                mul -> (
                    mul '*' primary \/
                    primary
                )
            ) /\ (
                primary -> (
                    '(' sum ')' \/
                    int
                )
            ) /\ (
                int -> (
                    int _ digit \/
                    digit
                )
            ) /\ (
                digit -> (
                    '1' \/ '2' \/
                    '3' \/ '4' \/
                    '5' \/ '6' \/
                    '7' \/ '8' \/
                    '9' \/ '0'
                )
            )
        >>
    ) /\ (
        top -> mathExp
    )
>>

The starting symbol of our mathExp grammar is sum. The above expression results in an infinite set of all valid combinations of these mathematical operations. For example, one of the solutions is 1 + 2 * ( 3 + 4 ). Note that the above grammar takes care of correct mathematical operator precedence: the '+' rule sits above the '*' rule, so '*' binds more tightly.

Defining parse grammar rules introduces flexible syntax into expression logic. This includes forming flexible-notation expressions that represent a starting point for transformations towards searched solutions, using rules otherwise stated in the problem setup.

2.2.3. implicit equality

We are ready to perform the third and last extension to propositional logic. When forming expressions, we may encounter the problem of expressing the necessary appearance of the same inference steps across different positions in formulas. For example, in the expression ( x -> int ) /\ ( x + x <-> 2 * x ), we want the last three occurrences of x to be replaced by the same int value. As propositional logic alone doesn't provide a mechanism suitable for this requirement, we extend it with implicit equality expressions. With implicit equality, we pose the restriction that all appearances of the same symbol prefixed by @ within the same scope share the same inference tree, forcing them to take the same inferred value at all positions. The boundaries of the equality restriction are determined by the current scope. Thus, we would correctly write our example as << ( x -> int ) /\ ( @x + @x <-> 2 * @x ) >>.
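The following minimal sketch shows the intended contrast between plain and @-prefixed atoms:

<< ( x -> ( '1' \/ '2' ) ) /\ ( top -> x x ) >>
// each x may resolve independently: 1 1, 1 2, 2 1, or 2 2

<< ( x -> ( '1' \/ '2' ) ) /\ ( top -> @x @x ) >>
// all @x occurrences share one inference: only 1 1 or 2 2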

We will overview an example use of implicit equality in the next section.

expressing functional programming

Functional programming is a programming paradigm based on the notion of functions known from lambda calculus. Although one of its main characteristics is currying, in this section we present a version of functional programming based on tuples. We will write parameter application as an expression of a sequence representing a tuple, connecting an entire function call to its result by an implication.

Let's consider an example of calculating Fibonacci numbers:

<<
    <<
        (
            x -> int
        ) /\ (
            'fib' @x -> (
                (
                    @x == 0 -> 0
                ) /\ (
                    @x == 1 -> 1
                ) /\ (
                    @x >= 2 -> (
                        ( fib ( @x - 1 ) ) + ( fib ( @x - 2 ) )
                    )
                )
            )
        )
    >> /\ (
        top -> fib 7
    )
>>

This example presumes the existence of several constructs:

  1. the infix numeric comparison functions >= and == are a part of the input language
  2. the infix arithmetic operations + and - are a part of the input language
  3. the atom int is defined as an integer expression

We defined the fib function in the first conjunct. The definitions for fib 0 and fib 1 are straightforward, while they accept the atom fib from the parent scope. But the definition for the other Fibonacci numbers is of more interest to us: it is a recursive definition which says that a certain Fibonacci number, indexed by an integer x, is the sum of the previous two Fibonacci numbers, indexed as x - 1 and x - 2, under the condition that x is greater than or equal to two. Within this scope, all the occurrences of @x stand for the same integer, generally constrained by x -> int. With each recursive function call, a fresh set of mutually equal @x instances is initialized.

Finally, the second conjunct applies the function fib to the value 7, returning the seventh Fibonacci number: 13, which is the solution of the whole example problem.
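For verification, the recurrence defined above unfolds as:

fib: 0, 1, 1, 2, 3, 5, 8, 13    // indices 0 through 7, so fib 7 = 13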

2.2.4. axiomatization divergence

In the early part of the 20th century, the mathematician David Hilbert had an inspiring idea of grounding all existing mathematical theories in a finite, complete set of axioms, and providing a proof that these axioms were consistent (see Hilbert's program). Unfortunately, it turned out that this is not possible, due to Gödel's incompleteness theorems. Because of Gödel's incompleteness theorems, it is possible that different areas of mathematics have different axiomatic setups, and sometimes these setups may contradict each other, or may even be inconsistent alone, just to achieve completeness or other required properties of the theories they aim to describe.

Although Hilbert's program cannot be completely realized, the idea of having all theories described by the same language still seems very valuable. An obvious advantage of such a language is having parts of different theories mutually interact to produce implicitly contained information. Because of Gödel's incompleteness theorems, expression logic allows its expressions to explicitly choose between particular axiomatic setups. Thus, expression logic presents a framework that bundles different axiomatic schemata with different expressions, while taking care of which axioms fit which purpose and area of interest.

To provide this functionality, we introduce a new starting binding operator <- that takes axioms from its right side and applies them to a general expression on its left side. Let's see how the new, final structure of expression logic with the binding operator looks in relaxed BNF:

binder := << binder >> <- << axioms >>
        | binder binder
        | expression

In the above definition, << axioms >> stands for the expression logic constructs described in the previous sections, while expression stands for an arbitrary sequence that has to be a result of the << axioms >> construct. Note that ... <- ... isn't a deduction performing operator, but rather a deduction verifying operator. It verifies whether the left side follows from the right side, using the right side's deductive formalism. If the verification fails, an error may be reported. Although expression has to follow from << axioms >>, the same expression might not be the final terminating result of growing axiomatic combinatorics; it may lie somewhere between << axioms >> and the final result of the computation that we want to perform utilizing << axioms >>. The output of the whole system should represent an expression that we get when all the axioms are combined and branched out, finally reaching their leaf axiomatic literals.

As we can see, binders terminate with expression leaves, can be composed in a sequence, and can be nested and branched to the left. In the following generalized example, we apply distinct axiom sets to a sequence of expressions, controlled by recursively nested bindings. The recursive structure:

...
<<
    ...
    << expression01 >> <- << axiom-set01 >>
    ...
    << expression02 >> <- << axiom-set02 >>
    ...
>> <- << axiom-set0 >>
...

determines and restricts the way in which different axiom sets are applied to different expressions. In this example, to expression01 we apply axiom-set01 and axiom-set0, while to expression02 we apply axiom-set02 and axiom-set0. expression01 and expression02 share the same axiom-set0, but diverge at applying axiom-set01 and axiom-set02. The system permits axiom-set01 and axiom-set02 to be mutually exclusive because we are not assuming a single flat axiom space; instead, we are forming a tree structure of axiom set wholes. If all the axioms were put in the same flat space, they might yield a falsehood, but here they are clearly separated by the binding operator, making them more expressive than when laid down in the original flat space.
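In standard entailment notation (an interpretation of the sketch above, not notation from this document), the verification conditions read:

axiom-set0 /\ axiom-set01 |- expression01
axiom-set0 /\ axiom-set02 |- expression02

and no condition ever combines axiom-set01 with axiom-set02 in a single conjunction.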

To explain how we detect whether certain expressions are valid, we'll take a look by moving in the direction from the single most general axiom set, over less general axiom sets, to terminal expressions. Imagine a line between the single most general axiom set and any single specific terminal expression. All the axiom sets lying on the line between those two are compounded into a conjunction, which actually forms a single flat unified axiom set that has to be consistent by definition. Now it is clear that the terminal expression at the end of the line has to follow from this flat unified axiom set; otherwise, an error may be reported.

To explain information sharing between different expression branches, we'll take a look in the other direction, this time by moving from terminal expressions, over less general axiom sets, to the single most general axiom set. Arbitrary expressions are organized in sequences. Each sequence is tied to a single axiomatization in a << expression-sequence >> <- << axioms >> pair. These pairs may form sequences again, which altogether may be tied to another single axiomatization, successively. The structure may repeat recursively, which provides a tree structure of << expression-sequence >> <- << axioms >> pairs, possibly recursed at the expression-sequence part. Thus, expressions from different branches may eventually share axiomatizations, from a branching joint onwards. These shared axiom sets are just what we need to exchange information between different branches. Shared axiom sets may define relations between different expression fragments. On the other side, non-shared axiom sets are forbidden to communicate with each other. This is a necessary condition because non-shared axiom sets may be mutually contradictory when put together.

To analyze further, we can nest binding operators only to the left. A natural question that may arise is: can we nest binding operators to the right, just to restrict different areas of particular axiom sets too? The motivation for this extension would be to introduce a very interesting property of meta-axiomatic logical reasoning about which super-axiom sets logically apply to which sub-axiom sets. However, this extension is not possible, because different sub-axiom sets would span the same axiomatic space. As different super-axiom sets may again be mutually inconsistent, when they map to sub-axiom sets in the same axiomatic space, they may render the space inconsistent, which is not acceptable for our purposes. Fortunately, this behavior doesn't apply when nesting the binding operator to the left, because the left side is merely a sequence of expressions, not a logical formula, and as such, it doesn't require its distinct branching elements to be mutually consistent.

To summarize axiomatization divergence, our syntax choice for the binding operator forms a structure which reads from the bottom up, right to left. Following this structure, each deepest terminal expression is pointed to by a distinct branch containing axiom sets. Branches flow backwards from the most general axiom set, over a number of intermediate axiom sets, to the most specific axiom set, terminating with an expression that has to hold within the current branch. Axioms from every such branch are combined into a conjunction and applied to the terminal expressions. Note that parent-child related axiom sets mutually interact, while sibling branches do not interfere with each other. This non-interference between divergent axiom scheme branches permits us to express a very comprehensive variety of expressions. Thus, in spite of Gödel's incompleteness theorems, terminal expressions are interpreted by structure-controlled axiom schemata which may otherwise even be contradictory if combined altogether in a flat space of axioms.

Considering the introduced properties, we believe that this kind of axiomatic system arrangement overcomes the unfortunate limitations of Gödel's incompleteness theorems, and that Hilbert's program should again be an open question deserving more careful attention. We hold that Hilbert's program is too valuable a concept to be easily disregarded, and we hope to see some scientific progress in that direction in the future, because it seems that things are not as insurmountable as they appeared to be in the recent past.

3. examples

We bring three theoretic examples using only the constructs learned in section 2. expression logic. We will show how to express (1) Turing machines, (2) lambda calculus, and (3) classical logic in terms of expression logic. The choice of examples is representative of how expression logic handles different theories. All three examples possess the level of computational completeness described by the Church-Turing thesis, transferable to classical logic by the Curry-Howard correspondence.

3.1. Turing machines

A Turing machine is a mathematical model of computation that defines an abstract machine, which manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, given any computer algorithm, a Turing machine capable of simulating that algorithm's logic can be constructed.

The machine operates on an infinite memory tape divided into discrete cells. The machine has a head that positions over a single cell and can read or write a symbol to the cell. The machine keeps track of its current state from a finite set of states. Finally, the machine has a finite set of instructions which direct reading symbols, writing symbols, changing the current machine state, and moving the tape to the left or to the right. Each instruction consists of the following:

  1. reading the current state and reading the symbol at the position of the head
  2. depending on (1.), writing a symbol at the position of the head
  3. either moving the tape one cell to the left or to the right, and changing the current state

The machine repeats these steps until it encounters the halting instruction.

The following example shows a Turing machine for adding 1 to an n-digit binary number:

<<
    // set of instructions (state abbreviations: s=start, a=add one, f=finish, h=halt)
    (  s 0 => 0 R s  ) ,
    (  s 1 => 1 R s  ) ,
    ( s () => () L a ) ,
    
    (  a 1 => 0 L a  ) ,
    (  a 0 => 1 R f  ) ,
    
    (  f 0 => 0 R f  ) ,
    (  f 1 => 1 R f  ) ,
    
    ( f () => () R h ) :
    
    // initial tape setup
    ( () s 1 0 0 1 () )
    
>> <- <<
    // general Turing machine definition
    
    // top
    top -> seq ':' '(' tape ')' /\

    ( state -> ('s' \/ 'a' \/ 'f' \/ 'h' ) ) /\
    (   bit -> ( '0' \/ '1' \/ '()' )    ) /\
    
    // tape syntax
    ( head -> state bit             ) /\
    ( cell -> ( bit \/ head )       ) /\
    ( tape -> ( cell tape \/ cell ) ) /\

    // instructions syntax
    ( dir -> ( L \/ R )                    ) /\
    ( ins -> '(' head '=>' bit dir state ')' ) /\
    ( seq -> ( ins ',' seq \/ ins )        ) /\
    
    // extracting each instruction and the tape
    <<
        ( i -> ins  ) /\
        ( s -> seq  ) /\
        ( t -> tape ) /\
        ( @i , @s : @t -> ( instruction @i /\ @s : @t )       ) /\
        (      @i : @t -> ( instruction @i /\ outputTape @t ) )
    >> /\
    
    // tape operations
    <<
        ( preb -> bit   ) /\
        ( pres -> state ) /\
        ( sufb -> bit   ) /\
        ( sufs -> state ) /\
        ( newb -> bit   ) /\
        ( news -> state ) /\
        (    t -> tape  ) /\
        (
            // changing bit and state, and moving head to the right
            instruction '(' @pres @preb => @newb R @news ')' -> (
                ( ( @pres @preb ) @sufb @t -> tape ) -> ( @newb ( @news @sufb ) @t -> tape )
            )
        ) /\ (
            // changing bit and state, and moving head to the left
            instruction '(' @sufs @sufb => @newb L @news ')' -> (
                ( @preb ( @sufs @sufb ) @t -> tape ) -> ( ( @news @preb ) @newb @t -> tape )
            )
        )
    >> /\
    
    // extracting the solution
    <<
        (
            t -> tape
        ) /\ (
            outputTape @t -> @t
        )
    >>
>>

Expectedly, the solution to this example is 1 0 1 0 h: the binary input 1 0 0 1 incremented by one, with the halting state h at the final head position.

3.2. lambda calculus

Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It is a very simple but powerful system. Among other uses, it has served as an inspiration for many functional programming languages.

The syntax of lambda calculus is surprisingly simple. A lambda term is one of, or a combination of, the following:

  • variable in a form of: x
  • abstraction in a form of: λx.M (where x is a variable; M is a lambda term)
  • application in a form of: (M N) (where M and N are lambda terms)

The semantics of lambda calculus, written in a relaxed language, include:

  • α-conversion: (λx.M) -> (λy.M[x:=y])
  • β-reduction: ((λx.M) N) -> (M[x:=N])

α-conversion is the renaming of variables, used to avoid name collisions. β-reduction is the actual operation that carries out the process of replacing bound variables with the argument expression in the body of the abstraction.
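As a worked trace, here is how β-reduction unfolds on the example term used in the definition below:

( λ x . ( x x ) ) ( ( λ x . ( x x ) ) 2 )
    -> ( λ x . ( x x ) ) ( 2 2 )    // β-reduce the inner application
    -> ( 2 2 ) ( 2 2 )              // β-reduce the outer application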

Lambda calculus is defined in expression logic as follows:

<<
    // example lambda expression
    ( λ x . ( x x ) ) ( ( λ x . ( x x ) ) 2 )
    
>> <- <<
    // lambda calculus definition
    
    // top
    top -> λterm /\

    // syntax
    (
        λterm -> (
            'λ' λvar '.' λterm \/
            λterm primary \/
            primary
        )
    ) /\ (
        primary -> (
            '(' λterm ')' \/
            λvar
        )
    ) /\ (
        λvar -> /[_A-Za-z][_0-9A-Za-z]*/
    ) /\

    // semantics
    <<
        ( M -> λterm ) /\
        ( N -> λterm ) /\
        ( x -> λvar  ) /\
        ( y -> λvar  ) /\
        
        // α-conversion
        λ @x . @M -> ( @x -> @y /\ α-λ @y . @M ) /\
        
        // β-reduction
        ( α-λ @x . @M ) @N -> ( @x -> @N /\ @M )
    >>
>>

Expectedly, the solution to this example is 2 2 2 2.

3.3. classical logic

Logic is the systematic study of the form of valid inference, and of the most general laws of truth. A valid inference is one where there is a specific relation of logical support between the assumptions of the inference and its conclusion. In ordinary discourse, inferences may be signified by words such as therefore, thus, hence, ergo, and so on. Classical logic (or standard logic) is the most intensively studied and most widely used class of logics.

An important aspect of our logic definition is the handling of variables. We use universal and existential quantifiers to bind variables, which we extrude from related constant expressions by implications that introduce fresh variables (@x in the following definition). As expected in higher order logic, variables may stand for constants, functions, or predicates.

The other important aspect of our logic inference process is that it allows infinitely recursive deduction. Such behavior causes the underlying inference process to loop until memory resources are exhausted. To avoid this behavior, we are obligated to control it explicitly by putting constraints on the left sides of implications, transferable to any forms of them, including expressions like a ∧ ( ¬ a ∨ b ). In short, we have to make sure the process never enters an infinite loop, just like when calling functions in mainstream application programming. Some cases may be detected, such as when the right side of an implication eventually takes the exact form of the left side, but if the two sides never become the same, the current implementation of expression logic enters an infinite loop.
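As a minimal sketch, the first setup below loops forever because no alternative ever terminates the recursion, while the second provides a terminating base case:

<< ( a -> 'x' a ) /\ ( top -> a ) >>               // never reaches a pure literal sequence
<< ( a -> ( 'x' a \/ 'x' ) ) /\ ( top -> a ) >>    // terminates: x, x x, x x x, ...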

Taking care of this last requirement and making sure to avoid infinite loops, we define classical logic as follows:

<<
    // example logic expression
    ∀ x . ∀ y . ¬ ( x ∧ y )
    
>> <- <<
    // classical logic definition

    // top
    top -> logic /\
    
    // syntax
    (
        logic -> (
            '∃' atom '.' eq \/
            '∀' atom '.' eq \/
            eq
        )
    ) /\ (
        eq -> (
            impl '↔' eq \/
            impl
        )
    ) /\ (
        impl -> (
            and '→' impl \/
            and
        )
    ) /\ (
        and -> (
            and '∧' or \/
            or
        )
    ) /\ (
        or -> (
            or '∨' not \/
            not
        )
    ) /\ (
        not -> (
            '¬' pred \/
            pred
        )
    ) /\ (
        pred -> (
            atom '(' seq ')' \/
            prim
        )
    ) /\ (
        seq -> (
            atom ',' seq \/
            atom
        )
    ) /\ (
        prim -> (
            '(' top ')' \/
            atom
        )
    ) /\ (
        atom -> /[_A-Za-z][_0-9A-Za-z]*/
    ) /\
    
    // semantics
    <<
        ( a -> top  ) /\
        ( b -> top  ) /\
        ( x -> atom ) /\
        
        (    @a ↔ @b -> ( @a <-> @b )  ) /\
        (    @a → @b -> ( @a -> @b )   ) /\
        (    @a ∧ @b -> ( @a /\ @b )   ) /\
        (    @a ∨ @b -> ( @a \/ @b )   ) /\
        (       ¬ @a ->  ~ @a          ) /\
        (  ∀ @x . @a -> ( @x -> @a )   ) /\
        (  ∃ @x . @a -> ( @x /\ @a )   ) /\
        
        (    (@a /\ ~ @a ) -> 'false' ) /\
        (  (@a /\ 'true' ) -> @a      ) /\
        ( (@a /\ 'false' ) -> 'false' ) /\
        
        (    (@a \/ ~ @a ) -> 'true'  ) /\
        (  (@a \/ 'true' ) -> 'true'  ) /\
        ( (@a \/ 'false' ) -> @a      )
    >>
>>

A solution to this example may be ∀ x . ∀ y . ¬ x ∨ ¬ y, which follows from ¬ ( x ∧ y ) by De Morgan's law.

3.4. about self definition

One might consider that the ultimate test of metaprogramming would be defining the entire expression logic in its own terms. However, this is not possible by making direct mappings from simulated operators to their counterparts in expression logic. This restriction does not apply only to expression logic, but (provably) to any consistent logical system. The proof of this limitation can be found by examining Tarski's undefinability theorem. Tarski's undefinability theorem, stated and proved by Alfred Tarski in 1933, is an important limitative result in mathematical logic, the foundations of mathematics, and formal semantics. Informally, the theorem states that arithmetical truth cannot be defined in arithmetic. We hold that this is transferable to scientific fields other than arithmetic.

However, this limitation applies only to the shallowest meta-level, in which a direct mapping to existing operators is made. This means that an actual self definition of expression logic is possible on deeper meta-levels. For example, we may define formalisms like Turing machines or lambda calculus, or even use the programming abilities of expression logic, just to define expression logic within those deeper formalisms. Different meta-levels can be understood through the following programming language example: "It is possible to implement a javascript interpreter within javascript. It is also possible to implement the same javascript interpreter within the implemented javascript interpreter. The sequence continues, making it possible to implement a javascript interpreter in a javascript implementation of a javascript implementation of ...". In the same way, expression logic is finally definable in expression logic, but at some deeper meta-level.

For now, we will not make an effort to implement self definition within expression logic, but interested readers are invited to take a look at the open source code at the official exp-log project github pages, once the project is finished. Until then, exp-log represents merely a work in progress with occasional partial source code updates.

4. conclusion

Our aim was to utilize a certain kind of satisfiability modulo theories (SMT) solver algorithm as an arbitrary formal language processing and general problem solving mechanism. To achieve this, we placed our system foundations on propositional logic as a fruitful base for building an advanced logic computational platform. There are four important extensions to a propositional logic SMT solver that this exposition suggests we establish:

  1. suggestion to extend our system by scoping expressions. Following this direction, we are able to reuse atom names without caring about atom name collisions. This extension introduces a comfortable workflow similar to mainstream programming languages.
  2. suggestion to extend our system by propositional sequences. Following this direction, we are able to represent parse grammar rules in terms of logic. This extension introduces reasoning about the flexible syntax expressions which parse grammar rules bring into the system.
  3. suggestion to extend our system by implicit equality. Following this direction, we are able to represent functional programming expressions in terms of logic. This extension introduces the computational completeness that functional programming brings into the system.
  4. suggestion to extend our system by axiomatization divergence. Following this direction, we are able to compose theories whose axiomatic system fragments may be contradictory under a single flat axiomatic space, but do not negatively interfere with each other when applied separately to different expression fragments in a successive branching manner. This extension introduces a framework for combining different axiomatic systems under the same umbrella, even if fragments of those systems contradict each other.

Considering the four suggested extensions to propositional logic, we may find ourselves in possession of a system which is able to programmatically describe any kind of textual expression, as long as it can be computed with a finite amount of resources. This should be just enough to fulfill the role of the general problem solving system that we had in mind.
