## April 17, 2011

### Curry-Howard for Non-Dummies – Part 1

(Anyone who wants to learn is not a dummy)

At the risk of alienating the (small) audience that is used to me writing about not-too-technical topics, I would like to take a crack at an important but technical topic in theoretical computer science. This topic is often known as the Curry-Howard Isomorphism, though there are variations on its name. Despite the forbidding name, the principle it represents is worth understanding for anyone interested in logic, computation or programming languages, which obviously includes programmers of all skill levels. I hope to provide an introduction that anyone in this wider audience can understand.

#### Programs are Proofs

The very basic idea of Curry-Howard is that you can think of a program as a proof. A proof of what? A proof that, if you provide certain inputs to that program, you can produce the corresponding output.
Another way to think about this is that both programs and proofs are series of rule-following steps that allow you to transform what you started with into a new result.
We often take this generating capacity of programs for granted, but the very nature of programming requires a deep understanding of the structure of inputs, the structure of outputs and the details of the relationships between them. This is exactly the same kind of activity as creating a proof in math or science.
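To make this concrete, here is a small sketch in TypeScript (the function name is my own invention for illustration). Its type signature can be read as a proposition, and its body as the series of rule-following steps that proves it:

```typescript
// The type signature can be read as a proposition:
// "given any string, I can produce a number".
// The body is the "proof": a rule-following step that
// transforms the input into the promised output.
function lengthOf(s: string): number {
  return s.length; // each step follows the rules of the language
}

console.log(lengthOf("proof")); // 5
```

Running the program on an actual input carries out the proof: the output it produces is the evidence.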

#### Programs are Implications

In logic, the simplest way to state a proposition that captures this inputs-to-outputs behaviour of programs is as an implication: A → B (read as “A implies B”). This means that if I already know that A is true, then I will also know that B is true.
To see the parallel to a program, I have to understand that “A is true” is equivalent to “I can find a thing which fits the description of A”.
This last step is perhaps not quite so obvious, so let me give an example. Let’s say that A stands for “a purple unicorn” and B for “a horse race winner”. Then A → B could mean: “if I had a purple unicorn I would run it in a horse race, and since purple unicorns are magically fast, I would be sure to win”.
You can see that this is a kind of program: give me a purple unicorn as an input, and I can follow a series of steps to produce a horse race winner.
While this may be a good program, it can’t be run, since purple unicorns don’t exist. And because they don’t exist, A can’t be true.
If I change my program to match C → B, where C is “a well-bred horse”, I’m back in business: my program can be run again. When I run it (or rather, run the well-bred horse in the race) and produce a horse race winner, I will have proved the correctness of the implication.
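The unicorn story can be sketched in TypeScript (the type and function names are invented for illustration). TypeScript’s `never` type has no values at all, which makes it a good stand-in for “a purple unicorn”: the A → B program type-checks, but it can never be run, while the C → B program can.

```typescript
type PurpleUnicorn = never;            // no values exist: A can never be true
type WellBredHorse = { name: string }; // easy to produce: C is true
type RaceWinner = { winner: string };

// A -> B: a perfectly good program that can never be run,
// because no PurpleUnicorn can ever be supplied as input.
function unicornRace(u: PurpleUnicorn): RaceWinner {
  return u; // from an impossible input, anything follows
}

// C -> B: this program can actually be run; running it
// proves the implication.
function horseRace(h: WellBredHorse): RaceWinner {
  return { winner: h.name };
}

console.log(horseRace({ name: "Secretariat" }).winner);
```

Note that `unicornRace` is not wrong, exactly; it is a proof we can never put to use, because we can never supply its premise.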
To return to a slightly more programmatic example, let’s say that A is “a number” and B is “a number greater than A”, and my program adds one to its input. If I then feed it “1” as input, my program produces “2”, which is indeed greater than 1, and I’ve proven that, given the number “1”, I can produce a greater number.
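That “add one” program is short enough to write out in full (in TypeScript, with a name of my own choosing):

```typescript
// The type says: give me a number, and I'll give you back a number.
// The body guarantees the output is greater than the input.
function addOne(n: number): number {
  return n + 1;
}

console.log(addOne(1)); // 2, which is indeed greater than 1
```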

#### Types are Propositions

However, we aren’t quite done here. I just showed that for one particular thing that met the description of an A (the number 1) I could get a particular B (the number 2, which is greater than 1), but what I really need to show is that any thing which meets the description of an A can always be used to produce a B using my program. In order to do this, I need to think a bit more about what it means to be a “thing which meets the description of A or B”.
There are basically two ways to define “things that meet the description of A”: by listing all the individuals that meet the description (this is called an extensional definition) or by proposing a rule (typically an inductive rule) that allows me to produce or recognize an A at will (this is called an intensional definition).
It so happens that we already have a concept in programming languages that does this definitional work for us: types. Types can be defined by an exhaustive list. For example, the character type is typically defined by a list of ASCII or Unicode characters that is fully known in advance. Types can also be defined by a rule. For example, a number type might be defined as any sequence of digits. You wouldn’t have to list all the numbers (there are infinitely many) – you just have to check the simple formation rule to determine membership in the type.
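Both styles of definition can be sketched in TypeScript (the names here are my own illustration, not standard library features):

```typescript
// Extensional: the type is an exhaustive list of its members.
type Vowel = "a" | "e" | "i" | "o" | "u";

// Intensional: membership is decided by a rule rather than a list.
// Here, "a numeral" is any non-empty sequence of digits.
function isNumeral(s: string): boolean {
  return /^[0-9]+$/.test(s);
}

const v: Vowel = "e";         // the compiler checks membership by lookup
console.log(isNumeral("42")); // true: membership checked by rule
```

In both cases, producing a concrete member of the type is exactly what it means to show the corresponding proposition is “true”.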
Based on this, we can see that any type can be construed as a logical proposition that can be verified by lookup or by rule: just produce an individual item that meets the description and we know that our proposition is “true”.

#### The Type of Programs

We now have enough machinery to show that our program works for all of its inputs. We showed that A and B are each both a type and a proposition. Now we have to see that the implication A → B is also both a type and a proposition. To meet the description of the proposition A → B, we have to show that our program will take any A and produce a B. As with our simple propositions, we need either an exhaustive list or a rule that allows us to cover all the bases. The easiest way to do that is to start with our definition of A: looking at the structure of our program, we must see that it either enumerates the elements of A exhaustively or follows the formation rule for A, so that every possible value of A is covered, and then show that each resulting path produces a B.
This can be very complicated for complex programs, but it is often possible to break a bigger program up into smaller pieces, each of which can be checked in this manner. Generally, programming languages and type systems are designed so that the compiler can verify the type of a program automatically.
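Here is a small sketch of that kind of checking in TypeScript (the `Shape` type and `area` function are invented for illustration). The type lists every possible form an input can take; the program must handle each one, and the compiler verifies that every path produces the promised output:

```typescript
// An exhaustive listing of the forms an input can take.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

// Shape -> number: the switch covers every case of Shape, and the
// compiler checks that each path returns a number. In Curry-Howard
// terms, the compiler has verified this program as a proof of the
// implication "a Shape implies a number".
function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius * s.radius;
    case "square":
      return s.side * s.side;
  }
}

console.log(area({ kind: "square", side: 3 })); // 9
```

If we added a new kind of `Shape` without handling it in `area`, the compiler would reject the program: our “proof” would no longer cover all the cases.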
When we have completed the process of verifying our program, we will know that our program fulfills the description of the proposition A → B, that it conforms to the corresponding type, and that it is also a proof of that proposition.

If you have followed this far, you have grasped the basic idea of Curry-Howard. It is, I hope, a surprisingly simple idea, but with profound consequences for logic, mathematics and computer science. In Part 2, I hope to expand a bit on some of the elementary consequences.