## First steps with Agda: provable factoring

In my last post we looked at the problem of factoring an integer. We ran into an issue with the termination checker and resolved it by wrapping the recursion in a separate inductive data type that made the termination argument explicit. The idiom we used is present in Agda’s standard library and can be used “off the shelf” this way:

```
module factor-idiomatic where

open import Data.Fin as F
open import Induction.WellFounded
open import Induction.Nat
open import Data.List
open import Data.Nat
open import Data.Nat.DivMod
open import Data.Nat.Properties

factor : (n : ℕ) → Acc _<′_ n → List ℕ
factor 0 _ = 0 ∷ []
factor 1 _ = []
factor (suc (suc n)) (acc p) = helper n ≤′-refl
  where helper : (try : ℕ) → ((suc try) <′ (suc (suc n))) → List ℕ
        lemma : ∀ {f n} → ((suc f) <′ (suc (suc n))) → (f <′ (suc (suc n)))

        helper 0 _ = (suc (suc n)) ∷ []
        helper (suc f) proof with ((suc (suc n)) divMod' (suc (suc f)))
        ... | result q F.zero _ = q ∷ factor (suc (suc f)) (p (suc (suc f)) proof)
        ... | _ = helper f (lemma proof)

        lemma ≤′-refl = ≤′-step ≤′-refl
        lemma (≤′-step m≤′n) with ≤′⇒≤ (≤′-step m≤′n)
        lemma (≤′-step m≤′n) | s≤s m≤n = ≤⇒≤′ (≤-step m≤n)

factorize : ℕ → List ℕ
factorize n = factor n (<-well-founded n)

factors₃₀ = factorize 30
```

If you look at the sources, `Acc` is defined as

```
data Acc {a} {A : Set a} (_<_ : Rel A a) (x : A) : Set a where
  acc : (rs : WfRec _<_ (Acc _<_) x) → Acc _<_ x
```

which basically boils down to

```
data Acc {a} {A : Set a} (_<_ : Rel A a) (x : A) : Set a where
  acc : (rs : ∀ y → (y < x) → Acc _<_ y) → Acc _<_ x
```

… pretty close to what we defined and used. Importantly, Agda already defines `<-well-founded`, which is equivalent to the `generateTerminationProof` we defined earlier. We will use this more idiomatic version of the terminating `factor` definition in this post.

So now we wish to somehow verify that the `List ℕ` we returned really multiplies to `n`. One way to do this is by creating a new data type family

```
data Factorization : ℕ → Set where
  ...
```

and somehow ensure that `Factorization n` can only be constructed once we have a correct factorization for `n`. So, how about this:

```
data Factorization : ℕ → Set where
  factors : (n : ℕ) → (f : List ℕ) → (product f ≡ n) → Factorization n
```

`_≡_` is an interesting type. It is parameterized on a value and indexed on another value of the same type. The only constructor for `_≡_` is `refl`. In Agda, the values of the type parameters are the same for all of a datatype’s constructors; the constructors do, however, get to choose values for the type indices. So, for instance, the following code is illegal

```
open import Data.Nat

data K (a : ℕ) (b : ℕ) : Set where
  construct : K a 42
```

since `construct` can’t decide the values of `a` and `b`. However, this is allowed

```
open import Data.Nat

data K : ℕ → ℕ → Set where
  construct : K 42 42
```

Getting back to `_≡_`, the definition says

```
data _≡_ {a} {A : Set a} (x : A) : A → Set a where
  refl : x ≡ x
```

meaning given a value `x` of type `A`, `refl` will choose the index of the constructed `_≡_ x` to be `x`. Since `refl` is the only constructor `_≡_` has, `a ≡ b` will have a value only if Agda can somehow prove that `a` and `b` are equal. Moreover, this value, `refl` of type `a ≡ b`, is itself the proof of `a` and `b`‘s equality.

Getting back to defining `Factorization`, the type we last described takes an integer, `n`; a list of integers, `f`; and a proof of the fact that the product of the integers in `f` is `n`. Only after supplying all this can you construct a value of type `Factorization n`. You can also just as easily decompose a value of type `Factorization n` into `n`, `f` and the proof.

However, while it may certainly be possible to go ahead with this definition of `Factorization`, we choose one more amenable to composing; one in which we can easily augment proofs to create larger proofs. I’m talking about something like this:

```
data Factorization : ℕ → Set where
  singleton : Factorization 1
  augment : {old : ℕ} {n : ℕ} → (new-factor : ℕ) →
            (Factorization old) → ((new-factor * old) ≡ n) →
            Factorization n
```

When we factor `1`, we use the `singleton` constructor. `augment` (the equivalent of cons here) uses an old factorization and a new factor to construct a new `Factorization`. We also have to supply a value of the type `((new-factor * old) ≡ n)`, i.e. a proof that `(new-factor * old)` is equal to `n`. Therefore, at every step, our invariant stays valid: all the numbers in an object of type `Factorization n` multiply up to `n`. As an example, a valid value of the type `Factorization 20` is `augment 2 (augment 2 (augment 5 singleton proof0) proof1) proof2` where `proof2` is a value (proof) of type (proposition) `2 * 10 ≡ 20`, `proof1` is a value of type `2 * 5 ≡ 10` and so forth.

But how do we generate this value of type `((new-factor * old) ≡ n)`? How do we statically convince Agda that the `new-factor` we’re feeding to the type constructor actually satisfies this constraint? Fortunately, most of our work has already been done.

The `divMod'` function returns a value of type `DivMod'` which, apart from holding the quotient and the remainder, also holds a proof (value) of the proposition (type) `remainder + quotient * divisor ≡ dividend`. When the remainder is zero, this proof can be transformed into the one required by the `augment` constructor. This transformation is itself a proof; given a certain condition, it proves another. The Curry-Howard isomorphism maps implication to function types, and the proposition we’re seeking to prove is

```
lemma₁ : forall {r} → (q : ℕ) (d : ℕ) (n : ℕ) → (r ≡ 0) →
         (r + q * d ≡ n) → (q * d ≡ n)
```

which is the same as “given that (r ≡ 0) and (r + q * d ≡ n), we prove (q * d ≡ n)”. The definition for `lemma₁` is weirdly simple:

```
lemma₁ q d .(q * d) refl refl = refl
```

Agda understands definitional equalities. Since `zero + n ≡ n` is a definitional equality, we can get away with telling Agda that the truth of the proposition is “obvious”.

We’ll also need a proof for `n * 1 ≡ n` at some point, so let’s have a look:

```
identity* : (n : ℕ) → (n * 1 ≡ n)
identity* zero = refl
identity* (suc n') = cong suc (identity* n')
```

The first thing to observe is that `n * 1 ≡ n` is not a definitional equality and requires a proof. There are two cases: when `n` is `zero` and when it isn’t. Funnily, if you see the definition of `_*_`, you’ll notice `zero * n ≡ zero` is, in fact, a definitional equality, so we can get away with `refl` in the `identity* zero` clause.

The `identity* (suc n')` case is slightly more involved. First we recurse to prove `n' * 1 ≡ n'` and then use `cong` to turn it into a proof for `suc n'`. `cong` can be used to transform a proof of `a ≡ b` into a proof of `f a ≡ f b`; in this case, we set `f` to `suc`.

A small note on a lemma we looked at in the last post:

```
lemma : ∀ {f n} → ((suc f) <′ (suc (suc n))) → (f <′ (suc (suc n)))
lemma ≤′-refl = ≤′-step ≤′-refl
lemma (≤′-step m≤′n) with ≤′⇒≤ (≤′-step m≤′n)
lemma (≤′-step m≤′n) | s≤s m≤n = ≤⇒≤′ (≤-step m≤n)
```

Agda has two inductive definitions for the “less than” relation on natural numbers. The first one, `_<_`, says i) `zero` is less than any `suc n` and ii) `m < n` implies `(suc m) < (suc n)`. The second definition, `_<′_`, says i) `m <′ (suc m)` and ii) `m <′ n` implies `m <′ (suc n)`. Actually, the library version is a little more complex than this, since it first defines the “less than or equal” (`_≤_`) relation and then defines `_<_` in terms of it; similarly for `_≤′_` and `_<′_`. This definition of `lemma` uses both the `_<_` and the `_<′_` variants. I don’t know if there is a better implementation.

Here is the full code, stitched together:

```
module factor-provable where

open import Data.Fin as F hiding (_+_)
open import Induction.WellFounded
open import Induction.Nat
open import Data.List
open import Data.Nat
open import Data.Nat.DivMod
open import Data.Nat.Properties
open import Relation.Binary.PropositionalEquality

data Factorization : ℕ → Set where
  singleton : Factorization 1
  augment : {old : ℕ} {n : ℕ} → (new-factor : ℕ) →
            (Factorization old) → ((new-factor * old) ≡ n) →
            Factorization n

identity* : (n : ℕ) → (n * 1 ≡ n)
identity* zero = refl
identity* (suc n') = cong suc (identity* n')

factor : (n : ℕ) → Acc _<′_ n → Factorization n
factor 0 _ = augment 0 singleton (identity* 0)
factor 1 _ = singleton
factor (suc (suc n)) (acc p) = helper n ≤′-refl
  where helper : (try : ℕ) → ((suc try) <′ (suc (suc n))) → Factorization (suc (suc n))
        lemma₀ : ∀ {f n} → ((suc f) <′ (suc (suc n))) → (f <′ (suc (suc n)))
        lemma₁ : forall {r} → (q : ℕ) (d : ℕ) (n : ℕ) →
                 (r ≡ 0) → (r + q * d ≡ n) → (q * d ≡ n)

        helper 0 _ = augment (suc (suc n)) singleton (identity* (suc (suc n)))
        helper (suc f) proof-<′ with ((suc (suc n)) divMod' (suc (suc f)))
        ... | result q F.zero proof-div =
          let rest = factor (suc (suc f)) (p (suc (suc f)) proof-<′)
              proof-main = lemma₁ q (suc (suc f)) (suc (suc n)) refl
                                  (sym proof-div)
          in augment q rest proof-main
        ... | _ = helper f (lemma₀ proof-<′)

        lemma₀ ≤′-refl = ≤′-step ≤′-refl
        lemma₀ (≤′-step m≤′n) with ≤′⇒≤ (≤′-step m≤′n)
        lemma₀ (≤′-step m≤′n) | s≤s m≤n = ≤⇒≤′ (≤-step m≤n)

        lemma₁ q d .(q * d) refl refl = refl

factorize : (n : ℕ) → Factorization n
factorize n = factor n (<-well-founded n)

factors₅₀ = factorize 50
```

## First steps with Agda: well founded recursion

Last week I finally got time to explore Agda in some detail; something I had on my list for a while.

Agda is a dependently typed programming language. That means in Agda you can have a function of the type “Integer to Type to (Type of Types) to (Type, Integer)”. This allows you to express arbitrary constraints in the type system itself. Agda’s type safety and totality constraint (no crashes and guaranteed termination) allow us to invoke the Curry-Howard isomorphism and prove things by construction. I won’t be introducing Agda here (this is a good place to start) but will state and solve a small problem slightly more involved than the “vectors indexed by their lengths” variety.

The task is to create a provably correct function that factors a natural number. The notion of correct we’ll use here is that the returned list multiplies up to the number we were factoring. With this notion of correctness, both `[2, 2, 5]` and `[1, 20]` are correct factorings of `20`. Our function is expected to return only one such factoring.

If you know a bit of Agda you should be able to come up with this:

```
module factor-nonterminating where

open import Data.Fin as F
open import Data.List
open import Data.Nat
open import Data.Nat.DivMod

factor : (n : ℕ) → List ℕ
factor 0 = 0 ∷ []
factor 1 = []
factor (suc (suc n)) = helper n
  where helper : (try : ℕ) → List ℕ
        -- Check if (try + 1) divides (suc (suc n))
        helper 0 = (suc (suc n)) ∷ []
        helper (suc f) with ((suc (suc n)) divMod' (suc (suc f)))
        ... | result q F.zero _ = q ∷ factor (suc (suc f))
        ... | _ = helper f
```

The base cases of `factor`ing `0` and `1` are trivial. The general case (which has to be of the form `(suc (suc n))` since we’ve handled `0` and `1`) is factored by counting down `(suc n)` and checking each number for divisibility. The only caveat here is that when we call `helper n`, we actually get `helper` to check for divisibility with `(n + 1)`. If we had `helper n` check for divisibility with `n` we’d have to handle the `helper 0` case somehow and that would be ugly. In that case, with some work, we could also pass around a proof that `n` is greater than `0` and convince the typechecker that the `helper 0` clause could never be executed but we’ll keep it simple for now. The underscore in `result q F.zero _` is a proof of the division returned by `divMod'` (proof that the quotient times the divisor plus the remainder is the dividend); we’ll use this later to prove the correctness of our factorization.

Alas, the code above fails the termination checker. It shouldn’t be a surprise that Agda’s termination checker isn’t perfect — halting is undecidable. Since labelling a non-terminating term as terminating isn’t an option in a theorem-prover, Agda errs on the side of caution and sometimes flags terminating functions as non-terminating. In those cases we need to help Agda in figuring out the structure of the recursion and why it must terminate. In the above example, for instance, it isn’t obvious that the recursive call to `factor` has a structurally smaller argument. We know that, since we’ve ruled out division by `1` (by making `helper 0` a separate clause) and division by any integer other than `1` will result in a quotient smaller than the dividend, but Agda does not.

There are several ways to address this. One simple (but hacky, I think) way is to assign an upper limit to the number of times your function can recurse. Once you’ve done that, you can keep a “dummy” counter that you decrement (i.e. peel off a constructor from — this is the part the termination checker likes) every time you recurse. This looks especially bad if the upper limit is loose — Agda will force you to come up with a (possibly ad-hoc) answer for the case when your dummy counter runs out. However, there is another way.

The key property that allows the trick I’m about to describe is that an applied function is not considered larger than the function itself. I don’t know the theory behind why this works (I just asked on `#agda`) but this can be exploited to tell Agda that a recursion terminates. In a way this approach is a bit like keeping a “dummy” variable, but is more elegant and doesn’t require you to come up with an upper limit.

The high-level idea is that we create a type `data Terminates (x : ℕ) : Set` with a constructor that holds a function that gives us another `Terminates y`. We then recurse, passing ourselves the `Terminates y` we just produced. Since we peel off one constructor, Agda considers the recursion well-founded and hence terminating. With this idea we forge ahead:

```
module factor-terminating-draft where

open import Data.Fin as F
open import Data.List
open import Data.Nat
open import Data.Nat.DivMod

data Terminates (x : ℕ) : Set where
  termination-proof : (∀ y → Terminates y) → Terminates x

factor : (n : ℕ) → Terminates n → List ℕ
factor 0 _ = 0 ∷ []
factor 1 _ = []
factor (suc (suc n)) (termination-proof proof) = helper n
  where helper : (try : ℕ) → List ℕ
        helper 0 = (suc (suc n)) ∷ []
        helper (suc f) with ((suc (suc n)) divMod' (suc (suc f)))
        ... | result q F.zero _ = q ∷ factor (suc (suc f)) (proof (suc (suc f)))
        ... | _ = helper f

factorize : (n : ℕ) → List ℕ
factorize n = factor n ?
```

Pretty clever, eh? Note how we peel off the `termination-proof` constructor in the pattern match and, on the right hand side, apply `(suc (suc f))` to the function we’ve just unwrapped to make the recursive call to `factor` well-typed and terminating. We just need to fill in the `?` and we’re done. If you type this in emacs and load it into an Agda environment you won’t receive a non-termination error.

Unfortunately, this approach doesn’t work either — you cannot fill in the `?` with any meaningful value. You probably have guessed the reason already, but in case you haven’t I’ll give you a hint:

```
data ⊥ : Set where

bam : (x : ℕ) → Terminates x → ⊥
bam x (termination-proof proof) = bam x (proof x)
```

Moreover, we still haven’t used the fact that we recurse on a smaller number each time. No wonder we couldn’t fill in the `?`; without that crucial constraint, the recursion isn’t well-formed! To use the fact that the value we call ourselves recursively on is smaller than the current one, we change the definition of `Terminates` to

```
data Terminates (x : ℕ) : Set where
  termination-proof : (∀ y → (y <′ x) → Terminates y) → Terminates x
```

We now require a proof that the newer value is smaller than the current one in order to generate a new `Terminates` object. The idea is to use this proof so that we can actually fill in the `?` plaguing us. To that end, we write the function:

```
generateTerminationProof : (n : ℕ) → Terminates n
```

so that we can write

```
factorize : ℕ → List ℕ
factorize n = factor n (generateTerminationProof n)
```

The definition of `generateTerminationProof` is quite simple too — especially if you use `C-c C-t` and `C-c C-c` judiciously :P.

```
base-case : (∀ y → (y <′ zero) → Terminates y)
base-case _ ()

generateTerminationProof : (n : ℕ) → Terminates n

induction : (n : ℕ) → (y : ℕ) → (y <′ n) → Terminates y
induction n zero _ = termination-proof base-case
induction .(suc (suc y')) (suc y') ≤′-refl = generateTerminationProof (suc y')
induction .(suc n) (suc y') (≤′-step {n} m≤′n) = induction n (suc y') m≤′n

generateTerminationProof zero = termination-proof (base-case)
generateTerminationProof (suc limit) = termination-proof (induction (suc limit))
```

As you can see we are always able to recurse into structurally smaller elements. This isn’t incidental but is possible because we carry the proof of `y` being smaller than `x`. For instance, if we weren’t carrying the proof, we’d have to provide a body for `base-case`, which I don’t think is possible.

The entire source file (if you’re new to Agda, I strongly suggest you try to put the pieces together yourself) looks like this:

```
module factor-termination-proof where

open import Data.Fin as F
open import Data.List
open import Data.Nat
open import Data.Nat.DivMod
open import Data.Nat.Properties

data Terminates (x : ℕ) : Set where
  termination-proof : (∀ y → (y <′ x) → Terminates y) → Terminates x

factor : (n : ℕ) → Terminates n → List ℕ
factor 0 _ = 0 ∷ []
factor 1 _ = []
factor (suc (suc n)) (termination-proof p) = helper n ≤′-refl
  where helper : (try : ℕ) → ((suc try) <′ (suc (suc n))) → List ℕ
        lemma : ∀ {f n} → ((suc f) <′ (suc (suc n))) → (f <′ (suc (suc n)))

        helper 0 _ = (suc (suc n)) ∷ []
        helper (suc f) proof with ((suc (suc n)) divMod' (suc (suc f)))
        ... | result q F.zero _ = q ∷ factor (suc (suc f)) (p (suc (suc f)) proof)
        ... | _ = helper f (lemma proof)

        lemma ≤′-refl = ≤′-step ≤′-refl
        lemma (≤′-step m≤′n) with ≤′⇒≤ (≤′-step m≤′n)
        lemma (≤′-step m≤′n) | s≤s m≤n = ≤⇒≤′ (≤-step m≤n)

base-case : (∀ y → (y <′ zero) → Terminates y)
base-case _ ()

generateTerminationProof : (n : ℕ) → Terminates n

induction : (n : ℕ) → (y : ℕ) → (y <′ n) → Terminates y
induction n zero _ = termination-proof base-case
induction .(suc (suc y')) (suc y') ≤′-refl = generateTerminationProof (suc y')
induction .(suc n) (suc y') (≤′-step {n} m≤′n) = induction n (suc y') m≤′n

generateTerminationProof zero = termination-proof (base-case)
generateTerminationProof (suc limit) = termination-proof (induction (suc limit))

factorize : ℕ → List ℕ
factorize n = factor n (generateTerminationProof n)

factors₃₀ = factorize 30
```

In particular, notice how we carried the proof from the `factor` function to the `helper` function and how we maintained it using `lemma`. We’ll discuss transforming (and maintaining) proofs a little more in the next post.

In the next post, we’ll first see a more idiomatic version of the same code and then go on to make the implementation provably correct (for some notion of correct) as we had originally planned. And many thanks to the nice folks on `#agda` for putting up with my shameless curiosity.

## Movies to make you think, part II

Some less famous movies which I found nice (in no particular order):

• Waltz With Bashir
• Annie Hall (maybe not so exotic)
• Pan’s Labyrinth
• Teen Kanya (get a version with subtitles if you don’t speak Bangla) (edit: to clarify, I’m talking about this movie)
• Spirited Away (again, maybe not so exotic)
• Begotten
• Mulholland Dr.
• Paprika

For the record, the “Part I” of this list is here.


## An Interesting CPP Bug

The C preprocessor, being a dumb text-manipulating tool that does not understand the syntactic structure of the underlying program, is often a hiding place for subtle headache-inducing bugs. I’ve been bitten by such bugs myself, and one bug I had particular difficulty debugging is often not mentioned in lists such as [1].

Can you spot a bug in the following CPP macro? Specifically, when is its behaviour ill-defined?

```
#define FROBNICATE(__frobptr) do {                      \
        struct frobnicable *fptr = (__frobptr);         \
        assert(fptr && "Can't frobnicate NULL!");       \
        frobnicate_init(fptr);                          \
        frobnicate_finish(fptr);                        \
    } while(0)
```

One interesting thing I discovered about C while debugging this macro is that variable initializations like the following are valid:

```{
int foo = foo;
/* foo now has an unspecified value. */
}
```

which means the following piece of code won’t do what is expected of it:

```{
struct frobnicable *fptr = malloc(sizeof(struct frobnicable));
fptr->x = 42;
FROBNICATE(fptr);
}
```

The macro will expand to

```do {
struct frobnicable *fptr = (fptr);
/* ... and so on ...  `fptr` is undefined in the rest of the
* macro expansion. */
} while(0);
```

and the implications won’t be pleasant.

I haven’t yet discovered an elegant and effective way to deal with this issue. The `struct frobnicable *fptr = (__frobptr)` is needed to prevent multiple evaluations of `__frobptr`, so the only way I see of preventing this bug is by using an extremely unusual name for the `fptr` variable (or following a convention of prefixing such variables with some previously defined string). But that is neither elegant nor foolproof.


## Faking Dependent Types in C++

It is possible to have some kind of dependent typing in C++ using templates. The “solution” isn’t elegant or usable, but I found it mildly interesting so I’ll try to explain it here. The C++ type system with templates is Turing complete, so that something like this can be done shouldn’t be a surprise. The entire source file is here [1].

We will need to convert `int`s to types and vice versa (we’ll soon see why), so we might as well get that out of the way. One way to convert an `int` to a type is by exploiting the fact that templates are allowed to take integral values as parameters:

```
template<int N>
struct Integer {
};

typedef Integer<0> Zero;
typedef Integer<1> One;

// ... and so on
```

The issue with this is the lack of control over the types we’re producing. In this particular case, we want to prevent (at compile time) the creation of types corresponding to negative integers. We do this by following Peano’s definition of natural numbers:

```
template<typename T>
struct S {
  typedef T Previous;
  enum { ToInt = T::ToInt + 1 };
};

struct Z { enum { ToInt = 0 }; };

template<int N>
struct ToType {
  typedef S<typename ToType<N - 1>::TypeValue > TypeValue;
};

template<>
struct ToType<0> {
  typedef Z TypeValue;
};

#define P(n) typename n::Previous
#define TO_INT(n) n::ToInt
```

In this case, `S<S<S<Z > > >` represents `3` and it is obvious that using `S` and `Z`, it is possible to enumerate only the non-negative integers.

Secondly, we want some mechanism with which to “lift” normal integers into a corresponding type. This is done by `ToType`. The recursion is simple: for `0`, `ToType` “returns” `Z` and for `N`, it “returns” `S<` the “return value” of `ToType<N - 1>` `>`. `P` and `TO_INT` are macros to save typing.

Note that we’ve not really solved the problem of creation of negative integers — `ToType<-50>::TypeValue` will throw a “template depth exceeds maximum of XYZ” error and not a proper compile time error. All this mechanism does is isolate the issue of negative numbers to within `ToType` so that the rest of the code can work with a cleaner representation. I think this can be improved:

• Create a dummy `struct ZeroFlag` with one `int` template parameter.
• Specialize `ZeroFlag` on `0` and add a static field `DummyF` to it.
• Create a `struct Abs` that computes the absolute value of its `int` template parameter.
• Add `typedef` `ZeroFlag<Abs<N>::Value >::DummyF` to `ToType`.

This should give a compile time error on instantiating `ToType<0>`. However, right now, we concentrate elsewhere. Importantly, check the `Previous` typedef in `struct S`, which isn’t present when we specialize in `struct Z`. `Previous` will be our main weapon in generating compile time errors, as we shall see.

We now create a “linked list” which keeps track of the number of nodes in it at compile time:

```
// N denotes the size of the list here
template<typename T, typename N>
struct List {
  T data;
  List<T, P(N)> next;
};

template<typename T>
struct List<T, Z> {
};
```

We don’t need to have a pointer to a node since the `next` field always points to a smaller type — there is no circularity (and hence no possibility of a loop in the linked list). I’m not exactly sure of this, but I think, at least in terms of memory layout, `List<100>` should be equivalent to an array of `100` `int`s.

We can actually write up a `foldl` for this case. First we declare the type of a generic function (we have to use `struct`s everywhere because C++ does not allow partial template specialization on functions):

```
template<typename A, typename B, typename C>
struct Func2 {
  typedef C functionType(A, B);
};
```

Now, to fold a type `T`, we need a function of type `A->T->A` and an initial value of type `A`:

```
template<typename T, typename N>
struct List {
  T data;
  List<T, P(N)> next;

  template<typename A>
  A foldl(typename Func2<A, T, A>::functionType f, A a) {
    return next.foldl(f, f(a, data));
  }
};

template<typename T>
struct List<T, Z> {
  template<typename A>
  A foldl(typename Func2<A, T, A>::functionType f, A a) {
    return a;
  }
};
```

I was surprised how close C++ can get to Haskell at this point.

Right now we have a dependently typed (only in a loose sense of the term — I’ve not had enough rigour with type systems to formally claim this) linked list in C++. We can now write a “safe” sorting function using this:

(I’ve used a shitty algorithm here to focus on the things this post is actually supposed to focus on.)

```
template<typename T, typename N>
struct Sorter {
  static void sort(List<T, N> &list) {
    List<T, P(N)> rest = list.next;
    Sorter<T, P(N)>::sort(rest);
    if (list.data > rest.data) {
      std::swap(rest.data, list.data);
      Sorter<T, P(N)>::sort(rest);
    }
    list.next = rest;
  }
};

// List of length 1 is already sorted.
template<typename T>
struct Sorter<T, S<Z> > {
  static void sort(List<T, S<Z> > &list) { }
};
```

The cute part about this code is that if the specialization `Sorter<T, S<Z> >` is removed you get a compile time error, since the compiler tries to sort a list of one element using the general template, which then tries to sort a list of zero elements. Since the general template tries to access the `Previous` field (via the `P` macro), and `Z` has none, you get a compile time error. That the code compiles is “proof” that your algorithm will not try to walk past the end of an empty linked list.

It is easy to create a `struct ListCreator` which creates a linked list of a specified length, and printing is a matter of folding with the function

```
template<typename T>
std::vector<T> accumulate(std::vector<T> in, T t) {
  in.push_back(t);
  return in;
}
```

and an empty `std::vector`, then printing the resulting vector. The exact code is in [1].

The entire list and the sorting function could all be implemented inside the template system, by the way. However, in that case the elements of the list would all be baked in; in the present situation, we have the option of reading a (compile-time-constant, obviously) number of entries from a file or from the user. I think more interesting applications will be seen in “lifting” a partial amount of data and algorithms into the type system, providing a fuzzier boundary between what programmers have to keep in their heads and what the type system proves for them.

For dessert, let’s see how you can implement compile-time bound checking on the list. First a template to assert the less-than relation:

```
template<typename A, typename B>
struct Lt {
  typedef typename Lt<P(A), P(B)>::Flag Flag;
};

template<typename B>
struct Lt<Z, S<B> > {
  typedef bool Flag;
};
```

Essentially `A < B` iff `Lt<A, B>::Flag` exists. The non-existence of `Flag` can be made a compile-time error by `typedef`ing it in the template which requires the relation to be valid. So, in this case, `Z` is less than all integers which can be written as `S<A>` for some `A` and `A` is less than `B` iff `A - 1` is less than `B - 1`. Notice that these two axioms are both necessary and sufficient for comparing any two positive integers. Also note how we are inductively asserting in the general case — since we `typedef` `Lt<P(A), P(B)>::Flag` to `Flag`, `Lt<A, B>` will have `Flag` `typedef`ed iff `Lt<P(A), P(B)>` has it `typedef`ed. Stating a correct indexing operation is now trivial:

```
template<typename T, typename I, typename N>
struct Idx {
  typedef typename Lt<I, N>::Flag Flag;
  static T idx(List<T, N> list) {
    return list.data;
  }
};
```


## TSA in the sky with diamonds

An anti-drug article [1] was written and published by some IIT Kharagpur students in one of our campus newspapers. I think the article is rubbish and here’s why.

I think the article should, in principle, offend any self-respecting individual due to its tone and any rational person due to the “facts” it presents. From this point on, I tacitly assume that people reading this anti-(anti-drug-rant) rant possess both rationality and self-respect to some extent.

One interesting thing about the article is that after replacing “drug use” with “gay marriage” the reasons still remain “valid”, and become strikingly similar to what anti-gay lobbyists routinely come up with. The illegality of two men loving each other, the separation from normal society, “underground” gay groups and “the gateway effect”. One of the reasons routinely cited in support of anti-gay laws is that if we allow two women to marry, men will start sleeping with dogs and sisters will engage in incest. Needless to say, such fears are as baseless as the gateway drug theory [2] the authors talk about.

Perhaps the most objectionable aspect of this article is the way the authors instruct their readers on the “right” way to lead their lives. Whether someone spends their free time reading Shakespeare or worshipping bananas is entirely for them to decide. Advice can only be given conditionally: “If you want to be an astronaut-eating zombie then do the following …”, or in this particular case “If you’re giving advice to self-respecting people …”. Unconditional advice speaks poorly of how the authors view the people around them, and even more poorly of TSA for publishing such an article.

Getting down to the factual aspects:

It is true that most nations ban drugs, but more and more countries are taking steps towards legalization. London [4] is re-thinking their drug laws and Portugal decriminalized drug use [5] over a decade ago. In our own country we have licensed Bhang Shops [3]. Cocaine and heroin were both legal for several decades — does it mean it was “okay” to use them back then? The issue is, legality isn’t always a good indicator of whether a particular activity is harmful or not. Even practically, the only country I have heard of where such drug-laws are strictly enforced is the US of A, and I don’t think the reader will need to be told that their notion of what to “fix” and what not to is a bit skewed. Personally, no one in my social circle ever got into any legal trouble for smoking up once in a while.

The authors mention legality again, and their argument is flawed for the reason already mentioned. Going by this logic, DC++, anything but regular sex, homosexuals (in West Bengal) should all be avoided.

Keeping secrets from some people is not the same as getting disowned by society. Moreover, people choose friends and social circles based on their opinions, not the other way around.

You can’t vote in this country if you mind sponsoring corruption. 🙂 Plus, at least in West Bengal, marijuana is grown locally by extremely poor people. If anything, by buying pot you’re ensuring they have enough to eat for a day.

I agree with the authors on one thing and one thing only — presenting a distorted picture of drug use, mentioning the good while leaving out the bad and risky is never ethical. In fact, I find this article unethical for doing the exact opposite, which is just as bad.

Fact is, however much the authors try to pretend to be omniscient beings who transcend time and space, they are just like the readers they’re trying to advise, and I don’t think they know any better. People have a right to choose what they do with their lives; giving out unsolicited advice is vulgar and revolting.

Does this mean I condone drug usage? Well, it doesn’t matter what I think. Drugs can ruin your life, be the most meaningful experience you take out of this phase of your life or just something you do once in a while. Your life is yours to make and destroy, and I don’t get a say in it.

PS: This isn’t a personal attack against anyone.

PPS: I don’t contribute to any multiple-continent-spanning newspaper.


## Type-Safe Tic-tac-toe

I recently had occasion to screw around with Haskell’s type system. The end goal was to produce a Tic-tac-toe game in which all type-correct moves are legal and all illegal moves produce type errors. I decided to call the following moves illegal:

• attempts to move on a cell already occupied
• attempts to make a move on a game that has already been won

While it does not make sense to compile the program into an executable, the game is quite “playable” from the ghci REPL. The code is up on GitHub [1].

In fact, Haskell’s type system with the UndecidableInstances extension is Turing-complete, so it isn’t a surprise that such a thing is possible. But type-level programming has its own set of rules, and I had a lot of fun figuring things out in such a foreign domain (Type-Level Instant Insanity [2] was extremely helpful). This post is about some of the things I observed.

Tuples are the type-level lists

This seems obvious now that I say it, since a tuple is the most straightforward way to represent a list of types. However, the direct way of doing so is not very useful: instead of encoding the types A, B and C as the tuple type (A, B, C), it is clearer to represent them using the usual recursive cons idiom: (A, (B, (C, ()))). Here () is the nil element and cons X Y is (X, Y).
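To make this concrete, here is a minimal sketch of the encoding (the phantom types A, B and C and the Cons synonym are illustrative names of mine, not identifiers from the game’s code):

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- Phantom types standing in for arbitrary type-level "values".
data A
data B
data C

-- Nested pairs as a type-level list: () plays nil, (x, xs) plays cons.
type Nil       = ()
type Cons x xs = (x, xs)

-- The type-level list [A, B, C], i.e. (A, (B, (C, ()))).
type ABC = Cons A (Cons B (Cons C Nil))
```

Nothing here exists at runtime; the list is consumed purely during type checking, by instance selection.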

Functional dependencies

Haskell normally doesn’t allow typeclasses over more than one type; the MultiParamTypeClasses extension lifts this restriction, and adding proper functional dependencies makes such classes tractable [3]. A multi-parameter typeclass is essentially an n-ary relation between types, and a functional dependency constrains the relations that a multi-parameter typeclass can represent. This has interesting consequences when programming with types.

The functional dependency in X a b c | a b -> c constrains the relation it defines by forcing c to be a function of a and b. This can be used to define type-level functions. Specifically, a function that maps two types to a third type can be represented just like the typeclass X in the example above, with the body of the function given as a bunch of instance declarations. If you’re doing something like Peano arithmetic and representing, for instance, 2 by the type S (S ()) (S is a type constructor here), you can implement addition as shown below:

```{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
FlexibleInstances, UndecidableInstances #-}

data S x

class Add x y z | x y -> z where add :: x -> y -> z

-- Base case: 0 + y = y.
instance Add () y y

-- Inductive case: (1 + x) + y = 1 + (x + y).
instance Add x y z => Add (S x) y (S z)
```

The functional dependency in Add states that the output of Add is a function of the first two types, which is correct. Note how we’re building up the induction.

I had a hard time figuring out how predicates like “two things are foo if they are not bar” are to be represented. The reverse was easy — you use a context, as in instance (Foo x, Foo y) => Bar x y — but there isn’t any way in Haskell to say instance (NOT (Foo x), NOT (Foo y)) => Bar x y. If you think about it, there is no reason to have such a feature either: the only reason for saying “if x is in the typeclass X then it is also in the typeclass Y” is to be able to use X‘s functions in defining Y‘s, so having “x is not in the typeclass X” in the context is absolutely useless! The solution (which I read in [2]) is to instead have such predicates “return” a type (which is a value in this weird world we’re working in). For better readability, it is useful to define two types, data T and data F, for this purpose. [1], for instance, is littered with instances like

```-- When a is X then it isn't Y and vice versa.  Silly example, but I
-- think it demonstrates the point

instance (X a T) => Y a F
instance (X a F) => Y a T
```

Inverted Induction

Finally, the way you write “function bodies” at the type level is a bit different. Instead of breaking a function’s argument down into smaller bits and recursing on those, you conditionally build smaller structures into larger structures. For instance, to check whether an element is in a list, you say “if the object o is in a list l, or if x == o, then o is in the list cons x l, for all x”. It isn’t any different really, once you get used to it.
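The membership check described above can be sketched like this (a sketch only: the class and type names are mine, and depending on the GHC version the last two instances may need OverlappingInstances, as below, or per-instance OVERLAPPABLE pragmas):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances,
             OverlappingInstances #-}

-- Type-level booleans, as in the post.
data T
data F

-- Member o l b: b is T exactly when o occurs in the nested-pair list l.
class Member o l b | o l -> b

-- Nothing is a member of nil.
instance Member o () F

-- o is a member of any list whose head is o ...
instance Member o (o, rest) T

-- ... and membership in the tail lifts to membership in the whole list.
instance Member o rest b => Member o (x, rest) b
```

Note the “inverted” direction: each instance says how membership in a smaller list produces membership in a larger one, rather than destructing the list the way a value-level function would. In modern GHC one would write this as a closed type family instead and side-step the overlap machinery entirely.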

## [Proof] λ→ is normalizing

I studied a proof today that I had been putting off for some time now: that the simply typed lambda calculus is normalizing, i.e. every well-typed expression eventually reduces to a value. The proof is quite clever, and the exact wording of the proof in Types and Programming Languages is a bit terse, so I think it will make for a good, juicy blog post.

In case you haven’t already, you need to first read up (a little bit) on untyped and typed lambda calculus. Try reading to the point where you can comfortably state why $\lambda x \cdot x\;x$ cannot be well-typed. I’ll wait right here, I promise. 🙂

I’ll use the symbol $[x \mapsto s]\,t$ to mean “replace instances of $x$ by $s$ in the term $t$”. We’ll talk about a typed lambda calculus which has exactly one base type $A$ and only one value of that type, $a$. We’ll evaluate in normal order.

Firstly, note (informally) that all types will look like $((A \to A) \to (A \to A)) \to (A \to A)$. It is not hard to see (or prove) that the set of types is isomorphic to the set of binary trees with every internal node a $\to$ and every leaf an $A$. This is important since we’ll do structural induction on the binary tree corresponding to a type.
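Written out as a grammar, the types of this calculus are generated by:

```
T ::= A \mid T \to T
```

so a type is exactly a binary tree whose leaves are $A$ and whose internal nodes are $\to$.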

We now define a set of what are usually called reducibility candidates. They are predicates $R_T$ on terms of the lambda calculus, one for each type $T$, with the following conditions:

• $R_{A}(t)$ iff $t$ halts.
• $R_{T_1 \to T_2}(t)$ iff $t$ halts and, for every $s$ with $R_{T_1}(s)$, we have $R_{T_2}(t\;s)$.

Since $R_T$ is a predicate, we can safely say things like “$x$ is in $R_A$”, etc.

We aim to prove the following

1. Every term in a set $R_{T}$ (for any type $T$) is normalizable. Note that this follows from the definition of the reducibility candidates.
2. Every term of type $T$ is in $R_{T}$.

These two, combined, will allow us to draw the conclusion that every well-typed term in our simply typed lambda calculus halts.

Lemma 0: Every term in a reducibility candidate halts

This follows directly from the definition of a reducibility candidate.

Lemma 1: Reduction does not change the reducibility status of a term

Formally, we try to prove that if $t \to t'$, then $R_T(t) \iff R_T(t')$. For this, we use structural induction on the binary trees isomorphic to types that we talked about above. There are two cases.

T is the atom type

This is the base case for our induction and also pretty obvious. From $t \to t'$ we conclude that $t$ halts iff $t'$ halts. Since the only condition for being in $R_A$ is halting, $R_A(t) \iff R_A(t')$.

T is a function type

Let $T \equiv T_1 \to T_2$. In this case, $t'$ has to satisfy the additional condition $\forall s \in R_{T_1}.\; R_{T_2}(t'\;s)$. We begin by taking an arbitrary $s$ in $R_{T_1}$.

Notice how $t\;s \to t'\;s$ [1]? Also notice how $T_2$ is a substructure of $T \equiv (T_1 \to T_2)$? We can invoke structural induction here and say we already know $R_{T_2}(t\;s) \Rightarrow R_{T_2}(t'\;s)$ (statement 0). Also, since we’re given $R_T(t)$, we know that for all $s$ in $R_{T_1}$, $t\;s$ is in $R_{T_2}$ (a property of $R_T$) (statement 1). From statement 0 and statement 1 we conclude that for an arbitrary $s$ in $R_{T_1}$ we have $R_{T_2}(t'\;s)$. This is the extra condition we needed to show that $t'$ is in $R_T$.

Lemma 2: Every well-typed term of type $T$ is in $R_T$

This is proved via a stronger assertion: if $x_1 : T_1, \ldots, x_n : T_n \vdash t : T$ and $\emptyset \vdash v_i : T_i$ for all $i$, then $R_T([x_1 \mapsto v_1]\ldots[x_n \mapsto v_n]\,t)$. This is a stronger assertion because it also applies to terms with free variables, while at the top level we’ll only need to consider closed terms. The assertion will be invoked on open terms only via the induction.

To prove this assertion, we invoke induction on the size of the term: informally, $t_1\;t_2$ is larger than either $t_1$ or $t_2$, and $\lambda a \cdot x$ is larger than $x$. We make a case-by-case analysis of the various forms $t$ can take.

A variable

This is obvious, since $t = x_i$ for some $i$.

An application

This too is easy. Since $t \equiv t_1\;t_2$, we invoke the induction hypothesis on $t_1$ and $t_2$ separately, and then use $([a \mapsto b]c)\;([a \mapsto b]d) \equiv [a \mapsto b](c\;d)$ to get our result.

An abstraction

We know from its form that $t$ is a value. The other thing left to prove is that, keeping types compatible, $\forall s \in R_S.\; R_P(t\;s)$, where $t$ is of the form $\lambda x : S \cdot b$ and has type $S \to P$. Let $s \in R_S$. We know from lemma 0 that $s$ will reduce to a value $v$, and from lemma 1 we get $R_S(v)$. The condition we’re trying to prove is $([x_1 \mapsto v_1]\ldots[x_n \mapsto v_n]\,t)\;s \in R_P$. But this reduces: $[x_1 \mapsto v_1]\ldots[x_n \mapsto v_n](t\;s) \to [x_1 \mapsto v_1]\ldots[x_n \mapsto v_n](t\;v) \to [x_1 \mapsto v_1]\ldots[x_n \mapsto v_n]([x \mapsto v]\,b)$. Now $b$ is a smaller term satisfying the requirements of this proposition, so by induction we get $R_P([x_1 \mapsto v_1]\ldots[x_n \mapsto v_n]([x \mapsto v]\,b))$. From lemma 1 we get $R_P(([x_1 \mapsto v_1]\ldots[x_n \mapsto v_n]\,t)\;s)$, which is what we wanted to prove.

1. In normal order. I haven’t tried proving this for other evaluation strategies, but I think I could get around it by proving that the choice of evaluation strategy does not matter in deciding whether a program in this lambda calculus halts.

## Books, movies and music

I graduated out of high school an idiot. I hadn’t read anything, seen anything or heard anything (Everybody anyone?). Now that I’m becoming moderately literate in such things, I think I can make a generalization — the information content (sorry Dr. Shannon, I’m going to abuse that term here) of an art form is inversely proportional to the effect it has on me.

Contrast a book with a movie, for instance. The information a book contains is certainly more abstract than the information a movie rams into my head. I don’t know what Naoko in Norwegian Wood looked like, but Marla always conjures up the image of Helena Bonham Carter in a hat, smoking a freshly lit cigarette. Naoko‘s image is malleable and ethereal, but Marla will always be a dark girl with curly hair (this is not to criticize Fight Club; I loved the movie).

I think this makes books more relatable, and a well written text hits me at a very base level. Since the written word is so flexible and abstract, my mind can lift it up to a concrete representation that is very close to my sense of self. A book can drive me to insanity, a movie cannot.

I think the same applies to music too — I’ve always found solos by Pink Floyd their most interesting work. When a rapper talks about how he scored last night, there is only so much information my mind can add, there is only so much I can misinterpret. When I hear the crazy slide-guitar in Shine on you crazy diamond, VI-IX, my mind is forced to conjure up something using the only tools it has at its disposal, my imagination and my experience. And it always comes up with something sinister.


## New JIT Interface for GDB

I worked on GDB as a part of my internship at Igalia. The work I did is finally merged, and this blog post describes the problem that I tried to solve and the solution I implemented.

### The Problem

Let’s say `A` is a runtime function that sometimes gets called by JIT compiled code. When the control is inside `A`, the stack may look something like this:

```[  A  ]
[ Some JIT compiled function (X)  ]
[ Some other JIT compiled function (Y)  ]
[ Some function that launches JIT compiled code (B)  ]
[ Some function that does the JIT compilation (C)  ]
[   .... and so on ....  ]
[  First function for this thread == (D)  ]
```

Now, for GDB to be able to successfully unwind through the `X` and `Y` frames, it needs some extra information. For regular functions (like `A`, `B`, `C` and `D`) this information is embedded in a section of the ELF file, using the DWARF encoding. However, `X` and `Y` don’t have any such associated information, since they were generated on the fly. On trying to `backtrace` when the stack is in the state shown above, you’ll

• See a bunch of `??`s instead of `X` and `Y`, since GDB does not have the debug symbols.
• Possibly get an incorrect backtrace. From what I understand, GDB can unwind by interpreting the function prologue, but only up to a point.

The initial solution (still supported by GDB) was this: whenever the JIT compiler compiles a block of code, have it emit the debug information to an ELF file in memory. Then have the JIT compiler notify GDB of these in-memory ELF files using a published interface. This way, GDB knows how to display a pretty backtrace.

This solution (of generating ELF files in memory) is, however, not ideal. ELF files are complex and hard to generate. Generating ELF files bites even more if all that GDB needs to do is display a meaningful stack trace. My job was to find a less complex way to get the debugging information into GDB.

### The Solution

It was Tom Tromey’s idea that it probably makes more sense to load and use some sort of a plugin provided by the JIT vendors. I too thought this idea was viable and decided to implement it and see how well it worked.

The problem can be divided into two parts — providing the debug symbols and providing the unwind information. We don’t deal with providing type information and other such more sophisticated information, since we’re targeting people who just want to see a meaningful stack trace without putting in too much effort. The ELF handling code is still in GDB and anyone wishing to do something more exciting than looking at a backtrace can generate full-fledged ELF files.

For the debug symbols part, we

1. Have the JIT compiler generate the debug symbol information in any format it pleases. GDB does not need to understand this format.
2. Have the plugin parse this debug information (since it understands the format, being written by the JIT vendor) and emit the symbol tables GDB requires.

In the name of a clean interface and better backward compatibility, (2) is done by passing a bunch of callback functions to the plugin, along with the data it is supposed to parse (the data that the JIT compiler passed to GDB). The plugin then constructs the symbol tables using the callbacks it is given. There are callbacks to create a new symbol table (`symtab_open`), to add a mapping from line numbers to PCs (`line_mapping_add`) among others. All this is documented in the header `jit-reader.h`.

`jit-reader.h` is installed in `{includedir}/gdb/`, where `{includedir}` is the system’s default include directory. This header is the interface between the plugin and GDB, and needs to be included in the plugin source file. Because of where it is installed by default, a plain `#include <gdb/jit-reader.h>` is usually sufficient.

For the unwinding part we simply have the plugin provide an unwinder. The unwinder too gets a bunch of callbacks, one to read the value of a register in the current frame, one to tell GDB (write) the value of a register in the previous frame (the one that was called immediately before), one to read some data off the target process’ address space, one to generate a frame id (described in `jit-reader.h`) and so on. The unwinder is especially easy to write if the JIT compiled code stores interesting things at constant offsets from the base register. Pseudo code could be

```  READ_REGISTER(cb, AMD64_RBP, FramePtr);
```

I’ve checked in the code, and it is available in the current trunk. This was a very general bird’s-eye view of the situation; more information can be found in the GDB manual and the `jit-reader.h` file GDB installs. Thanks to everyone on the GDB mailing list who sent in their valuable reviews.