Robert's Mistakes

Lambda calculus

Dec 24, 2013

Lambda calculus is the elegant sibling of the Turing machine. It is also much simpler to describe and in many ways easier to work with. Bear with me and I'll quickly show you how it works:

  1. Abstraction: (λx.t) defines an anonymous function and is a fancy way to write f(x) = t, where t is some term, without giving the function a name ("f"). It basically tells us that the variable in t is called x. If the notation confuses you, maybe it helps to remember that the Greek letter lambda "λ" and the dot "." are just punctuation marks.

  2. Application: You can concatenate two terms a and b and write ab. You then say (the function) a is applied to b.

  3. The two can be combined in beta reduction: The reduction of (λx.t)y is just t with x replaced by y.

An example: The function (λx.xx) duplicates its argument; applied to some y, it looks like this:

(λx.xx)y -> yy
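
If you want to play with this, Python's lambda mirrors the notation closely. Here is a minimal sketch (the names dup and identity are mine, not part of the calculus):

  # Abstraction: (λx.xx) as a Python lambda; "xx" means "apply x to itself".
  dup = lambda x: x(x)

  # Application and beta reduction: (λx.xx)y -> yy.
  # With y being the identity function (λz.z), we get (λz.z)(λz.z) -> (λz.z).
  identity = lambda z: z
  result = dup(identity)

  print(result is identity)  # True: we got the identity function back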

Interestingly, one can apply functions to functions:

(λx.xx)(λx.xx) -> (λx.xx)(λx.xx)

It means: Duplication applied to itself can be reduced to... duplication applied to itself! We see here a very primitive loop! Beta reduction is the dynamic element and you could stop here because that's the gist of it. But...
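
You can watch that loop in Python too. It doesn't literally run forever there; the interpreter gives up with a RecursionError, but the idea is the same (a rough sketch, with omega being the traditional name for this term):

  # (λx.xx)(λx.xx): self-application applied to itself.
  omega = lambda x: x(x)

  try:
      omega(omega)      # keeps reducing to itself, never reaches a value
  except RecursionError:
      print("no normal form: the reduction never stops")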

...there are two further refining rules that clarify (formalize) two intuitive ideas one might have silently assumed. We'll zoom in a bit on what happens when practicing our new game; I'm sure you'll find it interesting, so keep reading!

  1. Alpha conversion is just renaming a variable. Simple as that. Using our example, the effect looks like this:

(λx.xx) -> (λy.yy)
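
In programming terms that's just saying the parameter name doesn't matter. A tiny Python check (the names f and g are mine):

  f = lambda x: x(x)
  g = lambda y: y(y)

  identity = lambda z: z
  print(f(identity) is g(identity))  # True: same behaviour, different variable name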

  2. Eta conversion is more interesting. It allows us to remove a lambda construction that does nothing, even if no argument (application) is given. It's also called extensionality (as opposed to intensionality, referring to internal properties) and it says we can...

(λx.(fx)) -> f

...whenever x does not appear in f. More precisely, whenever x does not appear as a free variable in f (in contrast to a bound one). That precision is useful to make it okay for another abstraction inside f to introduce a local x. You see, the last two are more like book-keeping devices.

Eta's purpose is to capture a simple case where two functions ought to be equivalent: when, for all inputs, they give the exact same result. That is certainly the case here, as long as f does not itself use x.

Let me give you another angle: Eta conversion looks like a beta reduction lacking application to an argument. Inserting one, "y" here, gives us something where beta reduction works:

(λx.(fx))y -> fy
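
The same idea in Python (an illustrative sketch; len just stands in for some f that doesn't use x): wrapping a function in a lambda that merely forwards its argument changes nothing you can observe.

  # Eta: (λx.(fx)) behaves exactly like f itself.
  f = len
  eta_expanded = lambda x: f(x)

  # For any argument y the two agree, which is the beta step (λx.(fx))y -> fy made concrete.
  print(f("hello") == eta_expanded("hello"))  # True: both give 5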

Note that it's only about one case. The general problem - whether two functions are the same - is undecidable! Since eta conversion handles a degenerate beta case, it's mostly there to simplify some proofs. For example, try proving this without using eta:

(λy.(λx.(yx))) -> (λy.y)
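
A quick sanity check in Python (evidence, not a proof): both sides agree on anything you feed them.

  # Left-hand side: λy.(λx.(yx)), i.e. take y and return a function that forwards its argument to y.
  lhs = lambda y: (lambda x: y(x))
  # Right-hand side: λy.y, the identity.
  rhs = lambda y: y

  double = lambda n: 2 * n
  print(lhs(double)(21) == rhs(double)(21))  # True: both give 42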

Terms that differ by beta reduction are called beta equivalent. Similarly, two terms can be alpha or eta equivalent.

That's it! Invented by Alonzo Church (see the picture below), it was proven equivalent to Turing machines by Alan Turing in 1937. Both are equally capable of defining the modern notion of computable functions. For some reason, lambda calculus isn't the preferred way to introduce students to theoretical computer science today, but it very well could be!

It has been the basis for type theory, and it made modern proof assistants possible. It has wide applications in mathematics, philosophy, linguistics, and computer science.

The Wikipedia page on lambda calculus looks a bit patchy to me, but it does have example code for simple arithmetic and logic operations. Mind the fancy way to talk about the result of fx without knowing what f does...
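
To give you the flavour of those encodings, here is a rough Python sketch of Church booleans and numerals (the Python spelling is mine; the encodings themselves are the standard ones):

  # Church booleans: true picks its first argument, false its second.
  TRUE  = lambda a: lambda b: a
  FALSE = lambda a: lambda b: b
  AND   = lambda p: lambda q: p(q)(p)

  # Church numerals: the numeral n means "apply f to x, n times".
  ZERO = lambda f: lambda x: x
  SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
  PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

  # Convert a Church numeral to an ordinary int, just to look at it.
  to_int = lambda n: n(lambda k: k + 1)(0)

  one = SUCC(ZERO)
  two = SUCC(one)
  print(to_int(PLUS(two)(one)))           # 3
  print(AND(TRUE)(FALSE)(True)(False))    # False: "and" of true and false picks its second argument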

http://en.wikipedia.org/wiki/Lambda_calculus#.CE.B7-conversion

Over the next few days I'll be taking on a recent paper by +Marius Buliga and Louis Kauffman relating lambda calculus to chemistry; here's my second reaction: