Mathematical foundations of programming languages

Mike Haertel writes:

> The one phrase that really sticks in my memory is "equality
> logic." I have never heard this term before or since, and
> I am not sure what flavor of formal logic the author was
> referring to.

There are at least two "equational logics" that functional languages can be based on:

(1) The lambda calculus. This was developed by Alonzo Church (who, I seem to recall, supervised Alan Turing at some point) in the 1930's. In printed typographic form, "lambda" is the Greek letter. The expressions of the language are formed from:

(a) variables.

(b) lambda abstractions: lambda x. E means "that function which maps x to E" (e.g. lambda x. (+ x 3) is the function "add 3").

(c) applications: E1 E2 means "apply the expression E1 (considered as a function) to the expression E2".

The rule which permits the lambda calculus to be a model for computation is *beta-reduction*, which says that any expression of the form

(lambda x. E1) E2

can be reduced to "E1 with E2 substituted for x". The definition of substitution is somewhat complicated by the handling of free variables.
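The beta rule and the capture-avoiding substitution it relies on can be sketched in a few lines of Python (my illustration, not from the original post; the term encoding is an assumption):

```python
# Terms: a variable is a string, ("lam", x, body) is lambda x. body,
# ("app", f, a) is the application f a.

def free_vars(t):
    if isinstance(t, str):
        return {t}
    if t[0] == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, s, fresh=[0]):
    """E[s/x]: replace free occurrences of x in t by s, renaming bound
    variables that would capture a free variable of s."""
    if isinstance(t, str):
        return s if t == x else t
    if t[0] == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:
        return t                      # x is shadowed here; nothing to do
    if y in free_vars(s):             # rename the binder to avoid capture
        fresh[0] += 1
        z = y + str(fresh[0])
        body = subst(body, y, z)
        y = z
    return ("lam", y, subst(body, x, s))

def beta(t):
    """One beta step at the root: (lambda x. E1) E2  ->  E1[E2/x]."""
    if isinstance(t, tuple) and t[0] == "app" and \
       isinstance(t[1], tuple) and t[1][0] == "lam":
        return subst(t[1][2], t[1][1], t[2])
    return t

print(beta(("app", ("lam", "x", "x"), "y")))   # prints y
```

The renaming branch is exactly the "somewhat complicated" part: substituting y for x in lambda y. x must not produce lambda y. y.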

Additional rules are required for proving things, e.g. alpha-conversion, which says in effect that lambda x. E is the same as lambda y. E[y/x] (E with y substituted for x), as long as you don't confuse free variables. Also, for practical computing, you introduce delta-rules, which introduce constants like 0, 1, 2, 3, etc. (you can in fact model these in the restricted "pure" calculus, but it is a pain for practical purposes).
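The claim that constants can be modelled in the pure calculus is usually shown with Church numerals, where the numeral n means "apply f, n times". A sketch using Python lambdas (my illustration; the decoding helper is a convenience, not part of the calculus):

```python
# Church numerals in the pure lambda calculus, written as Python lambdas.
zero = lambda f: lambda x: x                              # apply f 0 times
succ = lambda n: lambda f: lambda x: f(n(f)(x))           # one more f
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting the applications of f."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(add(three)(three)))   # prints 6
```

This is also why delta-rules exist: the encoding works, but doing arithmetic through it is the "pain for practical purposes" the post mentions.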

LISP is based loosely on the lambda calculus (it does not give the proper meaning to the expression (+ 3), for instance, which means "add 3", i.e. it should have the same meaning as the lambda expression given above).
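The curried reading of (+ 3) is easy to demonstrate: if + takes one argument and returns a function awaiting the next, then the partial application is itself the function "add 3". A sketch in Python (my illustration):

```python
# Curried addition, as in ML or Haskell: plus takes x and returns
# a function awaiting y.
plus = lambda x: lambda y: x + y

add3 = plus(3)       # the partial application (+ 3): the function "add 3"
print(add3(4))       # prints 7
```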

More modern functional languages (ML, Haskell) are based much more exactly on the lambda calculus, despite their different appearance, which arises from the application of "syntactic sugar" to taste (not everybody has the same taste in sweeteners...).

(2) Combinatory logic. This was developed in the 1920's by Schönfinkel. It is ingeniously simple, requiring only two "combinators", S and K, together with application, with the reduction rules, for all f, g, x, y:

S f g x -> f x (g x)
K x y -> x

No variables are required in the language (f, g, x, y in the above are META-variables which serve to define the reductions). Thus expressions need only contain S and K, and still form a complete basis for computation and proof. In particular, there are no hassles with free variables.
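The two reduction rules can be run directly by writing S and K as curried Python functions (my illustration). A classic check is that S K K behaves as the identity: S K K x -> K x (K x) -> x.

```python
# S f g x -> f x (g x)   and   K x y -> x, curried so each
# combinator takes one argument at a time.
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x

I = S(K)(K)      # S K K is the identity combinator I
print(I(42))     # prints 42
```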

Alas, this elegant simplicity is bought at a price: combinatory logic can easily be shown to be equivalent to the lambda calculus, BUT a lambda calculus expression of size O(n) translates to a combinatory expression of size O(n^2). More practical combinatory logics have additional combinators (I, B, C, B', C'), but these have not won mainstream support.
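The equivalence is shown by "bracket abstraction", the standard translation of lambda x. E into combinators; the size blowup comes from the S rule copying itself into every application node, once per enclosing lambda. A sketch in Python (my illustration, using I from the extended combinator set as a convenience; terms are strings or application pairs):

```python
# Bracket abstraction [x] e: build a combinator term t such that
# t a reduces to e with a substituted for x.
def abstract(x, e):
    if e == x:
        return "I"                    # [x] x      = I
    if isinstance(e, str):
        return ("K", e)               # [x] c      = K c   (x not free in c)
    return (("S", abstract(x, e[0])), abstract(x, e[1]))
                                      # [x] (e1 e2) = S ([x] e1) ([x] e2)

def size(t):
    """Count the leaves (variables and combinators) of a term."""
    return 1 if isinstance(t, str) else size(t[0]) + size(t[1])

# [x] (x x) = S I I: every application node in the body picks up an S,
# so abstracting over nested lambdas grows the term at each pass.
t = abstract("x", ("x", "x"))
print(t, size(t))
```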

The lambda calculus is powerful and elegant. It embodies *abstraction* (i.e. lambda abstraction) as one of its 2 primary constructs, which renders it much more suitable than Turing machines to form a basis for a theory of computation. Imperative languages can be modelled EITHER in an extended lambda calculus (see Landin's paper "A Correspondence Between ALGOL 60 and Church's Lambda-notation" [1965], CACM 8, pp. 89-101 and 158-165), OR (better) by modelling the mutable store within the lambda calculus.
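Modelling the store within the calculus usually means store-passing style: every command is a function from a store to a (result, new store) pair, so "assignment" is just building a new store. A sketch in Python (my illustration; the dict-based store and the helper names are assumptions):

```python
# Store-passing style: no variable is ever mutated; each command
# takes the current store and returns (result, updated store).
def read(loc):
    return lambda store: (store[loc], store)

def write(loc, val):
    return lambda store: (None, {**store, loc: val})

def seq(cmd1, mk_cmd2):
    """Sequencing: run cmd1, feed its result and new store onward."""
    def run(store):
        result, store2 = cmd1(store)
        return mk_cmd2(result)(store2)
    return run

# The imperative program  x := x + 1,  starting with x = 41.
prog = seq(read("x"), lambda v: write("x", v + 1))
value, final = prog({"x": 41})
print(final["x"])   # prints 42
```

The same sequencing shape is what ML-style references and Haskell's state monad make pleasant to write.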

Robin Popplestone.