Initial Algebras: Part II

In the last post we looked at functor algebras and properties of morphisms (arrows) between them. In this post we’ll explore initial functor algebras (or initial F-Algebras).

The Cliffhanger

The last episode ended with a question on whether it is always possible to have a consistent morphism from (Int, intAlgebra) to an arbitrary algebra. The answer, it turns out, is no.

This can be illustrated with a very important counter-example, the algebra ([Int], intListAlgebra) with

intListAlgebra :: Func [Int] -> [Int]
intListAlgebra E = []
intListAlgebra (Func x xs) = x:xs

The limitation is actually more general — no Func-algebra of the form (Int, coerce) can have a morphism to ([Int], intListAlgebra). For an informal proof (using the terminology introduced in the last post), think about this:

Consider the case when the starting value of type Func Int is of the form Func a b, and call the candidate morphism frob :: Int -> [Int]. The first path morphs Func a b to frob (coerce (Func a b)) while the second path maps it to a:frob b. Let the data-type Int be able to represent N unique values and let frob map these N unique values to M unique values. In other words, let the cardinality of frob’s range be M. Then the number of possible values of frob (coerce (Func a b)) is at most M while the number of possible values of a:frob b is N * M. Since N > 1, the two paths cannot represent the same mapping — their ranges are different.
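To make the counting argument concrete, here is a sketch with one made-up choice of coerce and frob (any other choices fail in the same way); the two paths already disagree on Func 1 2:

```haskell
data Func x = E | Func Int x deriving (Show, Eq)

instance Functor Func where
  fmap f (Func i x) = Func i (f x)
  fmap _ E          = E

intListAlgebra :: Func [Int] -> [Int]
intListAlgebra E           = []
intListAlgebra (Func x xs) = x : xs

-- Hypothetical choices, purely for illustration:
coerce :: Func Int -> Int
coerce E          = 0
coerce (Func _ x) = x

frob :: Int -> [Int]
frob n = [n]

-- The two paths applied to the same starting value:
firstPath, secondPath :: [Int]
firstPath  = frob (coerce (Func 1 2))              -- [2]
secondPath = intListAlgebra (fmap frob (Func 1 2)) -- [1,2]
```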

Initial Algebras

However, the ([Int], intListAlgebra) algebra is special in a way — we can always find a function from ([Int], intListAlgebra) to any arbitrary Func-algebra. In Haskell, we should be able to implement this function:

createArrow :: (Func c -> c) -> [Int] -> c

The createArrow function is surprisingly easy to write. As earlier, we proceed with a case analysis of the Func [Int] we have to start with. The function we’re looking for is f :: [Int] -> c, where algC :: Func c -> c is the target algebra’s function.

When the initial value is E, we require f (intListAlgebra E) = algC ((fmap f) E). Using the definitions of fmap and intListAlgebra (which we already know), we get f [] = algC E.

When the initial value is Func x y, we require f (intListAlgebra (Func x y)) = algC ((fmap f) (Func x y)). This simplifies to (again, using the known, fixed definitions) f (x:y) = algC (Func x (f y)).

The above two observations have just given us a nice, recursive, definition of f! 🙂 The definition of createArrow automatically follows:

createArrow algC [] = algC E
createArrow algC (x:xs) = algC (Func x (createArrow algC xs))

Moreover, it looks a lot like a foldr:

createArrow algC xs = foldr (\a b -> algC (Func a b)) (algC E) xs
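To see createArrow in action, here is a hypothetical target algebra, sumAlgebra (my name, not from the post), whose induced arrow computes the sum of a list:

```haskell
data Func x = E | Func Int x

createArrow :: (Func c -> c) -> [Int] -> c
createArrow algC []     = algC E
createArrow algC (x:xs) = algC (Func x (createArrow algC xs))

-- The foldr form of the same function:
createArrow' :: (Func c -> c) -> [Int] -> c
createArrow' algC xs = foldr (\a b -> algC (Func a b)) (algC E) xs

-- A made-up target algebra that adds things up:
sumAlgebra :: Func Int -> Int
sumAlgebra E          = 0
sumAlgebra (Func x y) = x + y

total :: Int
total = createArrow sumAlgebra [1, 2, 3]  -- 1 + (2 + (3 + 0)) = 6
```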

Such F-algebras, which have a (unique) morphism to every other F-algebra, are called initial algebras, and ([Int], intListAlgebra) is one. It also is a terminal co-algebra (a co-algebra which has a morphism from every other co-algebra) but we won’t talk about those now. This dualism exists because Haskell doesn’t differentiate between inductive and co-inductive data structures. I wish it did. 🙂

Lambek’s Lemma

Lambek’s lemma states that if a functor has an initial algebra, that initial algebra is a fixed point of the functor. Above, we can see that [Int] is, indeed, a fixed point of Func. Alternatively, the lemma says the function intListAlgebra :: Func [Int] -> [Int] we came up with is an isomorphism — it has an inverse, which is also apparent from the above example.
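Concretely, the inverse is there for the taking (unListAlgebra is my name for it):

```haskell
data Func x = E | Func Int x deriving (Show, Eq)

intListAlgebra :: Func [Int] -> [Int]
intListAlgebra E           = []
intListAlgebra (Func x xs) = x : xs

-- The inverse witnessing Lambek's lemma: peel a list back apart.
unListAlgebra :: [Int] -> Func [Int]
unListAlgebra []     = E
unListAlgebra (x:xs) = Func x xs
```

Composing the two in either order gives the identity, so intListAlgebra is indeed an isomorphism.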

Why I find F-algebras interesting

I decided to read up a little bit on this topic when I realized that I could use arrows between F-algebras to represent correctness preserving compilations. For instance, consider a simple evaluator for arithmetic expressions:

data Arith a = Add a a | Sub a a | If a a a | Atom Int
-- Define fmap the usual way
data StackLang = PushSL Int | AddSL | SubSL | IfSL

stackAlg :: Arith [StackLang] -> [StackLang]
stackAlg (Add a b) = b ++ a ++ [AddSL]   -- operands go first so runStack finds them on the stack
stackAlg (Sub a b) = b ++ a ++ [SubSL]
stackAlg (If a b c) = c ++ b ++ a ++ [IfSL]
stackAlg (Atom a) = [PushSL a]

runStack :: [StackLang] -> Int
runStack stack = head $ horse stack []
  where horse (PushSL a:r) s = horse r (a:s)
        horse (AddSL:r) (x:y:s) = horse r (x + y:s)
        horse (SubSL:r) (x:y:s) = horse r (x - y:s)
        horse (IfSL:r) (x:t:f:s) =
          horse r ((if x /= 0 then t else f):s)
        horse [] s = s

eval :: Arith Int -> Int
eval a = {- usual -}

Now, if we can show that runStack is an arrow from ([StackLang], stackAlg) to (Int, eval), we will have proved that one step of compiling a top level expression to a stack-machine language preserves semantics. By structural induction, we can then prove the whole compilation process semantics-preserving.
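Here is a self-contained sketch of that check. I’ve used a postfix operand order in stackAlg so that each operator finds its operands on the stack, and filled in eval with the obvious semantics; treat both as my guesses at the intended definitions:

```haskell
data Arith a = Add a a | Sub a a | If a a a | Atom Int

instance Functor Arith where
  fmap g (Add a b)  = Add (g a) (g b)
  fmap g (Sub a b)  = Sub (g a) (g b)
  fmap g (If a b c) = If (g a) (g b) (g c)
  fmap _ (Atom n)   = Atom n

data StackLang = PushSL Int | AddSL | SubSL | IfSL

stackAlg :: Arith [StackLang] -> [StackLang]
stackAlg (Add a b)  = b ++ a ++ [AddSL]
stackAlg (Sub a b)  = b ++ a ++ [SubSL]
stackAlg (If a b c) = c ++ b ++ a ++ [IfSL]
stackAlg (Atom a)   = [PushSL a]

runStack :: [StackLang] -> Int
runStack stack = head (horse stack [])
  where horse (PushSL a:r) s     = horse r (a:s)
        horse (AddSL:r) (x:y:s)  = horse r (x + y : s)
        horse (SubSL:r) (x:y:s)  = horse r (x - y : s)
        horse (IfSL:r) (x:t:f:s) = horse r ((if x /= 0 then t else f) : s)
        horse [] s               = s
        horse _ _                = error "ill-formed program"

eval :: Arith Int -> Int
eval (Add a b)  = a + b
eval (Sub a b)  = a - b
eval (If a b c) = if a /= 0 then b else c
eval (Atom n)   = n

-- The morphism condition, checked at one value of type Arith [StackLang]:
isArrowAt :: Arith [StackLang] -> Bool
isArrowAt v = runStack (stackAlg v) == eval (fmap runStack v)
```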

We know that we always have an arrow (rather, a unique arrow) from the fixed point of Arith (finite expressions in this toy expression language) to any arbitrary algebra. Specifically, the arrow to (Int, eval) should tell us something about how we recursively combine results of evaluating sub-trees into a value.

I’m still quite new to all this, but, so far, Category Theory looks rather interesting and I’m probably going to spend some more time on it.


Initial Algebras: Part I

Meta-post

This post is light on math and uses Haskell to express formalisms. If you’re clear on what a Functor is, you’re good to go.

Functors and Algebras

There is this regular old functor, with the type constructor Func. We’ve made it an instance of Functor with an appropriate fmap following the functor laws.

An F-algebra is a tuple consisting of a Haskell type A (i.e. something with kind *) and a function f :: (Func A) -> A.

To make things concrete:

import Data.Char

data Func x = E | Func Int x deriving(Show)

instance Functor Func where
  fmap f (Func i x) = Func i (f x)
  fmap _ E = E

charAlgebra :: Func Char -> Char
charAlgebra E = chr 0
charAlgebra (Func _ x) = x

We have an algebra now: (Char, charAlgebra). Algebras are specific to the functor they operate on, so we call the above a Func-algebra. We can have a different Func-algebra, (Char, charAlgebra2) using a different function from Func Char to Char:

charAlgebra2 :: Func Char -> Char
charAlgebra2 E = chr 0
charAlgebra2 (Func i _) = chr i

We will play with both of these.

Morphisms (arrows) between Algebras

We can also have morphisms that transform one Func-algebra to another. A morphism from Func-algebra (A, f) to (B, g) is a function morph :: A -> B with some special constraints.

You see, given a Func-algebra (A, f), the values we have to deal with are either of type A or of type Func A. The constraint on morph tells us that the two ways we have of translating a value of type Func A to a value of type B are equivalent. Concretely, say we have a value v :: Func A. There are two ways we can get a value of type B out of it:

  1. Reduce v to v' :: A using f :: Func A -> A. Then convert v' :: A to v'' :: B using morph.
  2. Convert v to v' :: Func B using fmap morph. Then reduce v' :: Func B to v'' :: B using g.

The restriction is that these two methods must give us the same v'' :: B. If morph satisfies this, we say that morph is a morphism between (A, f) and (B, g). It can be succinctly formalized by saying that the following diagram commutes in the Hask category. Even if you don’t speak Category Theory, the meaning of the diagram should be quite apparent.
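In ASCII, the commuting square looks like this (f and g being the two algebras’ functions):

```
            fmap morph
   Func A ------------> Func B
     |                    |
     | f                  | g
     v                    v
     A ------- morph ---> B
```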

In this post (and later ones too) we’ll refer to the mapping from Func A to A to B as the first path and the mapping from Func A to Func B to B as the second path.

In Haskell, the restriction implies that isArrow is true for all values of sampleA:

isArrow :: (Eq b) => (Func a -> a) -> (Func b -> b) -> (a -> b) -> (Func a) -> Bool
isArrow algA algB arrow sampleA = (arrow (algA sampleA)) == algB ((fmap arrow) sampleA)

isArrow, while not a proof, will come in handy for quickly generating counter-examples.

To play with isArrow, we first create two more Func-algebras, (Int, intAlgebra) and (Int, intAlgebra2):

intAlgebra :: Func Int -> Int
intAlgebra E = 0
intAlgebra (Func i _) = i

intAlgebra2 :: Func Int -> Int
intAlgebra2 E = 0
intAlgebra2 (Func _ i) = i

and some values of type Func Int:

intSamples = [E, Func 0 42, Func 42 0]

and then try to rule out some possibilities. We use the chr :: Int -> Char function as the morph function here. The choice of chr is only incidental — it just seems to be the most canonical way to convert an Int to a Char.

isArrow0 = isArrow intAlgebra charAlgebra chr
values0 = map isArrow0 intSamples

isArrow1 = isArrow intAlgebra2 charAlgebra chr
values1 = map isArrow1 intSamples

isArrow2 = isArrow intAlgebra charAlgebra2 chr
values2 = map isArrow2 intSamples

isArrow3 = isArrow intAlgebra2 charAlgebra2 chr
values3 = map isArrow3 intSamples

Out of the four lists values0 through values3, only values1 and values2 are True for all inputs. Therefore, they are the only ones that warrant our time.
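All the pieces above fit in one self-contained file; running it confirms the claim (results, and its expected value, are mine):

```haskell
import Data.Char (chr)

data Func x = E | Func Int x

instance Functor Func where
  fmap f (Func i x) = Func i (f x)
  fmap _ E          = E

charAlgebra, charAlgebra2 :: Func Char -> Char
charAlgebra  E          = chr 0
charAlgebra  (Func _ x) = x
charAlgebra2 E          = chr 0
charAlgebra2 (Func i _) = chr i

intAlgebra, intAlgebra2 :: Func Int -> Int
intAlgebra  E          = 0
intAlgebra  (Func i _) = i
intAlgebra2 E          = 0
intAlgebra2 (Func _ i) = i

isArrow :: Eq b => (Func a -> a) -> (Func b -> b) -> (a -> b) -> Func a -> Bool
isArrow algA algB arrow sampleA =
  arrow (algA sampleA) == algB (fmap arrow sampleA)

intSamples :: [Func Int]
intSamples = [E, Func 0 42, Func 42 0]

results :: [Bool]  -- conjunctions of values0 .. values3
results = [ and [ isArrow intAlgebra  charAlgebra  chr s | s <- intSamples ]  -- values0
          , and [ isArrow intAlgebra2 charAlgebra  chr s | s <- intSamples ]  -- values1
          , and [ isArrow intAlgebra  charAlgebra2 chr s | s <- intSamples ]  -- values2
          , and [ isArrow intAlgebra2 charAlgebra2 chr s | s <- intSamples ] ]-- values3
-- results == [False, True, True, False]
```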

values1 tells us that it is possible that chr is a function from (Int, intAlgebra2) to (Char, charAlgebra). Let’s see if we can convince ourselves that this is actually the case:

We start with a value v :: Func Int. If v is E then the conclusion (the restriction on chr) is trivially true. The only other possibility is that v is of the form Func (i::Int) (x::Int). In the first path, we apply intAlgebra2, which reduces v to x, after which we apply chr on it, making the end result chr x. In the second path we first apply fmap chr on v, which gives us Func i (chr x). Now, charAlgebra simply picks up the second field, and the end result is chr x again; same in both the paths. This proves that chr is indeed a morphism from (Int, intAlgebra2) to (Char, charAlgebra).

A similar argument works for isArrow2: for v = Func i x, the first path gives chr (intAlgebra (Func i x)) = chr i, while the second path gives charAlgebra2 (Func i (chr x)) = chr i as well. So chr is also a morphism from (Int, intAlgebra) to (Char, charAlgebra2).

We have seen that chr can’t be an arrow from intAlgebra to charAlgebra. In fact there is an arrow (and only one arrow) from intAlgebra to charAlgebra. To find it, assume that the arrow is arr, and observe that in the case v = Func i x, the first path first maps v to i and then to arr i. The second path first maps it to Func i (arr x) and then to arr x. The only way arr i can be equal to arr x for all values of i and x is if arr doesn’t depend on its argument at all! Thus we establish that arr is of the form \_ -> constantChar.

We get the value of constantChar by setting v to E. The first path first maps E to 0 and then to arr 0 = constantChar. The second path first maps E to E and then E to chr 0. Thus, the value of constantChar is chr 0.
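As a sanity check, the constant function we just derived really does satisfy the morphism condition (check is my name; the extra sample Func 7 7 is arbitrary):

```haskell
import Data.Char (chr)

data Func x = E | Func Int x

instance Functor Func where
  fmap f (Func i x) = Func i (f x)
  fmap _ E          = E

intAlgebra :: Func Int -> Int
intAlgebra E          = 0
intAlgebra (Func i _) = i

charAlgebra :: Func Char -> Char
charAlgebra E          = chr 0
charAlgebra (Func _ x) = x

-- The unique arrow from (Int, intAlgebra) to (Char, charAlgebra):
arr :: Int -> Char
arr _ = chr 0

-- Both paths agree on every sample:
check :: Bool
check = all ok [E, Func 0 42, Func 42 0, Func 7 7]
  where ok v = arr (intAlgebra v) == charAlgebra (fmap arr v)
```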

Is it always possible, perhaps with some effort, to define an arrow from (Int, intAlgebra) to some arbitrary algebra? We’ll explore this and more in the next post(s).


Logic Puzzles in Agda

This post assumes basic knowledge of Agda.

A few days ago, Paul Snively published a post which set me thinking on how Agda (which is the closest thing to Coq I have some knowledge of) fares in similar scenarios. Pretty well, it turns out.

I’ll start by solving the problem Paul solves in his post, which is also the first problem in the book Logical Labyrinths by Raymond M. Smullyan:

Context: Abercrombie is on a strange island where all knaves lie and all knights are truthful.

On the day of his arrival, Abercrombie came across three inhabitants, whom we will call A, B and C. He asked A: “Are you a knight or a knave?” A answered, but so indistinctly that Abercrombie could not understand what he said. He then asked B: “What did he say?” B replied: “He said that he is a knave.” At this point, C piped up and said: “Don’t believe that; it’s a lie!”. Was C a knight or a knave?

We begin by defining _≡_ (equality) and ⊥ (a type with no values):

data _≡_ {A : Set} (a : A) : A → Set where
  refl : a ≡ a

data ⊥ : Set where

Now a Person can either be a knight or knave. We model that

data Person : Set where
  knight : Person
  knave : Person

and the fact that a knight always speaks the truth and that a knave always lies

says : Person → Set → Set
says knight p = p
says knave p = p → ⊥

The intuition behind says is that says p q is the proposition that person p can consistently assert q: for a knight it is q itself, and for a knave it is the negation q → ⊥. This is actually enough to represent and solve the first problem — we represent the problem as a type such that a value of that type is the solution we need:

data Solution₀ : Set where
  soln₀ : (a : Person) → (b : Person) → (c : Person) →
          (says b (says a (a ≡ knave))) → (says c (b ≡ knave)) →
          Solution₀

We encode the constraints in the problem here in a very straightforward way; it is possible to construct a value of type Solution₀ if and only if we have proofs for (says b (says a (a ≡ knave))) and (says c (b ≡ knave)). We now ask Agda to search for a solution:

solution₀ : Solution₀
solution₀ = ?

and Agda fills up the ?:

solution₀ = soln₀ knight knave knight (λ ()) refl
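For comparison, the same search can be done as a plain Boolean brute force in Haskell (my encoding; the Boolean says collapses the Set-valued Agda version into a consistency test):

```haskell
data Person = Knight | Knave deriving (Eq, Show)

-- Person p can consistently assert q iff knights state truths
-- and knaves state falsehoods.
says :: Person -> Bool -> Bool
says Knight q = q
says Knave  q = not q

solutions :: [(Person, Person, Person)]
solutions =
  [ (a, b, c)
  | a <- [Knight, Knave], b <- [Knight, Knave], c <- [Knight, Knave]
  , says b (says a (a == Knave))  -- B: "A said that he is a knave"
  , says c (b == Knave)           -- C: "don't believe that; it's a lie"
  ]
-- In every solution C is a knight (and B a knave), whatever A is.
```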

As cool as this undoubtedly is, for this particular problem the solution is the proof of the solution, and that isn’t as involved as I’d like it to be. Let’s look at something deeper (problem 1.11 from the same book).

Suppose that you visit the Island of Knights and Knaves because you have heard a rumor that there is gold buried there. You meet a native and you wish to find out from him whether there really is gold there, but you don’t know whether he is a knight or a knave. You are allowed to ask him only one question answerable by yes or no. What question would you ask?

The answer, of course, is to ask “if I were to ask you whether there is gold buried somewhere here, what would you say?”. The knights answer truthfully and the knaves end up answering truthfully since they lie about a lie and hence negate a negation. Is Agda smart enough to figure this out?
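Before asking Agda, the classical claim itself is just a four-row truth table; in the same Boolean model (my encoding):

```haskell
data Person = Knight | Knave deriving (Eq, Show)

says :: Person -> Bool -> Bool
says Knight q = q
says Knave  q = not q

-- The answer to "if I asked you whether there is gold, what would you say?"
answer :: Person -> Bool -> Bool
answer p gold = says p (says p gold)

-- Whoever answers, the answer matches the truth of the matter:
alwaysTruthful :: Bool
alwaysTruthful =
  and [ answer p g == g | p <- [Knight, Knave], g <- [False, True] ]
```

The Knave row is exactly the double negation: not (not gold).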

A question to which only two answers are possible is basically a predicate and can be represented by a Set. However, since we’d like to be able to ask questions containing the phrase “you” or even questions like “if you are a knight then is the sky red else are you a knave?”, we represent questions as

Prp : Set₁
Prp = Person → Set

We also declare some standard ammo

record _∧_ (A : Set) (B : Set) : Set where
  constructor _,_
  field
    proj₁ : A
    proj₂ : B

prj₁ : {A : Set} {B : Set} → A ∧ B → A
prj₁ = _∧_.proj₁
prj₂ : {A : Set} {B : Set} → A ∧ B → B
prj₂ = _∧_.proj₂


_⇔_ : Set → Set → Set
_⇔_ a b = (a → b) ∧ (b → a)

to represent logical conjunction and “if and only if”. With these, we are ready to represent our problem. We represent a slightly more general version of the problem — “is gold buried in your island” is really an arbitrary predicate and we model it as such

data PredicateTransform : Set₁ where
  predicateTrans : (f : Prp → Prp) →
                   ((OldP : Prp) → (p : Person) →
                    (OldP p) ⇔ (says p ((f OldP) p))) →
                   PredicateTransform

The way to construct a PredicateTransform value is by providing a function that maps a predicate to a new one, and proving that the old predicate is equivalent to an arbitrary Person asserting the new one. So, in the specific case of our question, the transform would hopefully map “is gold buried in your island” to a knight/knave-hood agnostic question and provide a proof for the same. We try our old trick again

soln : PredicateTransform
soln = ?

which unfortunately doesn’t work this time. The reason why it doesn’t is fundamental — remember how, for knaves, our solution would work by eliminating double negation? You can’t do that in constructive logic (and hence Agda). But all is not lost, we have the mighty ? to the rescue:

elim-double-neg : {A : Set} → ((A → ⊥) → ⊥) → A
elim-double-neg = ?

We can’t provide a body for elim-double-neg but that doesn’t stop us from keeping it around and using it to prove things. We just won’t get a complete proof, since such a proof lies outside the realm of constructive logic.

The “solution” we came up with basically transforms predicate “P” to “Would you assert the truth of predicate P”? Succinctly,

soln₁ : PredicateTransform
soln₁ = predicateTrans f ?
  where f : Prp → Prp
        f q p = says p (q p)

where ? is something we have to (interactively) fill in. We proceed the usual, high-school way — proving both implications separately and putting them together to prove the iff:

soln₁ : PredicateTransform
soln₁ = predicateTrans f proof
  where f : Prp → Prp
        f q p = says p (q p)
        proof₀ : (A : Prp) → (p : Person) → (A p) → (says p ((f A) p))
        proof₀ = ?
        proof₁ : (A : Prp) → (p : Person) → (says p ((f A) p)) → (A p)
        proof₁ = ?
        proof : (A : Prp) → (p : Person) → (A p) ⇔ (says p ((f A) p))
        proof A p = proof₀ A p , proof₁ A p

Note that a person can either be a knight or a knave. We tell Agda this and ask it to search for the individual proof terms:

proof₀ : (A : Prp) → (p : Person) → (A p) → (says p ((f A) p))
proof₀ A knight prf = prf
proof₀ A knave prf = λ z → z prf
proof₁ : (A : Prp) → (p : Person) → (says p ((f A) p)) → (A p)
proof₁ A knight prf = prf
proof₁ A knave prf = ?

The only place Agda gets stuck is the place where it would have to use elim-double-neg. The type of prf here is (A knave → ⊥) → ⊥ (in emacs hit C-c C-d) and the type of the hole is A knave. We can use elim-double-neg here

proof₁ A knave prf = elim-double-neg prf

and we’re done!

Another interesting solution to the problem is transforming the predicate “P” to “Is P true iff you are a knight?”:

soln₂ : PredicateTransform
soln₂ = predicateTrans f ?
  where f : Prp → Prp
        f q p = (p ≡ knight) ⇔ (q p)

I leave this as an exercise. 🙂

The full source (and the solution to the “exercise”) is here. If you’ve made it this far please do leave a comment, especially if you think I’m mistaken somewhere.


First steps with Agda: provable factoring

In my last post we looked at the problem of factoring an integer. We ran into an issue with the termination checker and resolved it by wrapping the recursion into a separate inductive data-type that made it explicit. The idiom used is present in Agda’s standard library and can be used “off the shelf” this way:

module factor-idiomatic where

open import Data.Fin as F
open import Induction.WellFounded
open import Induction.Nat
open import Data.List
open import Data.Nat
open import Data.Nat.DivMod
open import Data.Nat.Properties

factor : (n : ℕ) → Acc _<′_ n → List ℕ
factor 0 _ = 0 ∷ []
factor 1 _ = []
factor (suc (suc n)) (acc p) = helper n ≤′-refl
  where helper : (try : ℕ) → ((suc try) <′ (suc (suc n))) → List ℕ
        lemma : ∀ {f n} → ((suc f) <′ (suc (suc n))) → (f <′ (suc (suc n)))

        helper 0 _ = (suc (suc n)) ∷ []
        helper (suc f) proof with ((suc (suc n)) divMod' (suc (suc f)))
        ... | result q F.zero _ = q ∷ factor (suc (suc f)) (p (suc (suc f)) proof)
        ... | _ = helper f (lemma proof)

        lemma ≤′-refl = ≤′-step ≤′-refl
        lemma (≤′-step m≤′n) with ≤′⇒≤ (≤′-step m≤′n)
        lemma (≤′-step m≤′n) | s≤s m≤n = ≤⇒≤′ (≤-step m≤n)

factorize : ℕ → List ℕ
factorize n = factor n (<-well-founded n)

factors₃₀ = factorize 30

If you look at the sources, Acc is defined as

data Acc {a} {A : Set a} (_<_ : Rel A a) (x : A) : Set a where
  acc : (rs : WfRec _<_ (Acc _<_) x) → Acc _<_ x

which basically boils down to

data Acc {a} {A : Set a} (_<_ : Rel A a) (x : A) : Set a where
  acc : (rs : ∀ y → (y < x) → Acc _<_ y) → Acc _<_ x

… pretty close to what we defined and used. Importantly, Agda already defines <-well-founded, which is equivalent to generateTerminationProof we defined earlier. We will use this more idiomatic version of the terminating factor definition in this post.

So now we wish to somehow verify that the List ℕ we returned really multiplies to n. One way to do this is by creating a new data type family

  data Factorization : ℕ → Set where
    ...

and somehow ensure that Factorization n can only be constructed once we have a correct factorization for n. So, how about this:

  data Factorization : ℕ → Set where
    factors : (n : ℕ) → (f : List ℕ) → (product f ≡ n) → Factorization n

_≡_ is an interesting type. It is parameterized on a value and is indexed on another value of the same type. The only constructor for ≡ is refl. In Agda, the values of the type parameters are the same for all the type constructors. However, the constructors do get to choose values for the type indices. So, for instance, the following code is illegal

open import Data.Nat

data K (a : ℕ) (b : ℕ) : Set where
  construct : K a 42

since construct can’t decide the values of a and b. However, this is allowed

open import Data.Nat

data K : ℕ → ℕ → Set where
  construct : K 42 42

Getting back to _≡_, the definition says

data _≡_ {a} {A : Set a} (x : A) : A → Set a where
    refl : x ≡ x

meaning given a value x of type A, refl will choose the index of the constructed _≡_ x to be x. Since refl is the only constructor _≡_ has, a ≡ b will have a value only if Agda can somehow prove that a and b are equal. Moreover, this value, refl of type a ≡ b, is itself the proof of a and b‘s equality.

Getting back to defining Factorization, the type we last described takes an integer, n; a list of integers, f; and a proof of the fact that the product of the integers in f is n. Only after supplying all this can you construct a value of type Factorization n. You can also just as easily decompose a value of type Factorization n into n, f and the proof.

However, while it may certainly be possible to go ahead with this definition of Factorization, we choose one more amenable to composing; one in which we can easily augment proofs to create larger proofs. I’m talking about something like this:

data Factorization : ℕ → Set where
  singleton : Factorization 1
  augment : {old : ℕ} {n : ℕ} → (new-factor : ℕ) →
            (Factorization old) → ((new-factor * old) ≡ n) →
            Factorization n

When we factor 1, we use the singleton constructor. augment (the equivalent of cons here) uses an old factorization and a new factor to construct a new Factorization. We also have to supply a value of the type ((new-factor * old) ≡ n) — a proof that (new-factor * old) is equal to n. Therefore, at every step, our invariant that all the numbers in an object of type Factorization n multiply up to n stays valid. As an example, a valid value of the type Factorization 20 is augment 2 (augment 2 (augment 5 singleton proof0) proof1) proof2 where proof2 is a value (proof) of type (proposition) 2 * 10 ≡ 20, proof1 is a value of type 2 * 5 ≡ 10 and proof0 is a value of type 5 * 1 ≡ 5.

But how do we generate this value of type ((new-factor * old) ≡ n)? How do we statically convince Agda that the new-factor we’re feeding to the type constructor actually satisfies this constraint? Fortunately, most of our work has already been done.

The divMod' function returns a value of type DivMod' which, apart from holding the quotient and the remainder, also holds a proof (value) of the proposition (type) remainder + quotient * divisor ≡ dividend. When the remainder is zero, this proof can be transformed into the one required by the augment constructor. This transformation is itself a proof; given a certain condition, it proves another. The Curry-Howard isomorphism maps implication to function types, and the proposition we’re seeking to prove is

lemma₁ : forall {r} → (q : ℕ) (d : ℕ) (n : ℕ) → (r ≡ 0) →
                       (r + q * d ≡ n) → (q * d ≡ n)

which is the same as “given that (r ≡ 0) and (r + q * d ≡ n), we prove (q * d ≡ n)”. The definition for lemma₁ is weirdly simple:

  lemma₁ q d .(q * d) refl refl = refl

Agda understands definitional equalities. Since zero + n ≡ n is a definitional equality, we can get away with telling Agda that the truth of the proposition is “obvious”.

We’ll also need a proof for n * 1 ≡ n at some point, so let’s have a look:

identity* : (n : ℕ) → (n * 1 ≡ n)
identity* zero = refl
identity* (suc n') = cong suc (identity* n')

The first thing to observe is that n * 1 ≡ n is not a definitional equality and requires a proof. There are two cases: when n is zero and when it isn’t. Funnily, if you look at the definition of _*_, you’ll notice zero * n ≡ zero is, in fact, a definitional equality, so we can get away with refl in the identity* zero case.

The identity* (suc n') case is slightly more involved. First we recurse to prove n' * 1 ≡ n' and then use cong to turn it into a proof of suc n' * 1 ≡ suc n'. cong can be used to transform a proof of a ≡ b into a proof of f a ≡ f b, and in this case, we set f to suc.

A small note on a lemma we looked at in the last post

lemma : ∀ {f n} → ((suc f) <′ (suc (suc n))) → (f <′ (suc (suc n)))
lemma ≤′-refl = ≤′-step ≤′-refl
lemma (≤′-step m≤′n) with ≤′⇒≤ (≤′-step m≤′n)
lemma (≤′-step m≤′n) | s≤s m≤n = ≤⇒≤′ (≤-step m≤n)

Agda has two inductive definitions for the “less than” relation on natural numbers. The first one, _<_, says i) zero is less than any suc n and ii) m < n implies (suc m) < (suc n). The second definition, _<′_, says i) m <′ (suc m) and ii) m <′ n implies m <′ (suc n). Actually, the library version is a little bit more complex than this since it first defines the “less than or equal” (_≤_) relation and then defines _<_ in terms of _≤_. Similarly for _≤′_ and _<′_. This definition of lemma uses both the _<_ and the _<′_ variants. I don’t know if there is a better implementation.

Here is the full code, stitched together:

module factor-provable where

open import Data.Fin as F hiding(_+_)
open import Induction.WellFounded
open import Induction.Nat
open import Data.List
open import Data.Nat
open import Data.Nat.DivMod
open import Data.Nat.Properties
open import Relation.Binary.PropositionalEquality

data Factorization : ℕ → Set where
  singleton : Factorization 1
  augment : {old : ℕ} {n : ℕ} → (new-factor : ℕ) →
            (Factorization old) → ((new-factor * old) ≡ n) →
            Factorization n

identity* : (n : ℕ) → (n * 1 ≡ n)
identity* zero = refl
identity* (suc n') = cong suc (identity* n')


factor : (n : ℕ) → Acc _<′_ n → Factorization n
factor 0 _ = augment 0 singleton (identity* 0)
factor 1 _ = singleton
factor (suc (suc n)) (acc p) = helper n ≤′-refl
  where helper : (try : ℕ) → ((suc try) <′ (suc (suc n))) → Factorization (suc (suc n))
        lemma₀ : ∀ {f n} → ((suc f) <′ (suc (suc n))) → (f <′ (suc (suc n)))
        lemma₁ : forall {r} → (q : ℕ) (d : ℕ) (n : ℕ) →
                 (r ≡ 0) → (r + q * d ≡ n) → (q * d ≡ n)

        helper 0 _ = augment (suc (suc n)) singleton (identity* (suc (suc n)))
        helper (suc f) proof-<′ with ((suc (suc n)) divMod' (suc (suc f)))
        ... | result q F.zero proof-div =
                let rest = factor (suc (suc f)) (p (suc (suc f)) proof-<′)
                    proof-main = lemma₁ q (suc (suc f)) (suc (suc n)) refl
                                 (sym proof-div)
                in augment q rest proof-main
        ... | _ = helper f (lemma₀ proof-<′)

        lemma₀ ≤′-refl = ≤′-step ≤′-refl
        lemma₀ (≤′-step m≤′n) with ≤′⇒≤ (≤′-step m≤′n)
        lemma₀ (≤′-step m≤′n) | s≤s m≤n = ≤⇒≤′ (≤-step m≤n)

        lemma₁ q d .(q * d) refl refl = refl


factorize : (n : ℕ) → Factorization n
factorize n = factor n (<-well-founded n)

factors₅₀ = factorize 50

First steps with Agda: well founded recursion

Last week I finally got time to explore Agda in some detail; something I had on my list for a while.

Agda is a dependently typed programming language. That means in Agda you can have a function of the type “Integer to Type to (Type of Types) to (Type, Integer)”. This allows you to express arbitrary constraints in the type system itself. Agda’s type safety and totality constraint (no crashes and guaranteed termination) allow us to invoke the Curry-Howard isomorphism and prove things by construction. I won’t be introducing Agda here (this is a good place to start) but will state and solve a small problem slightly more involved than the “vectors indexed by their lengths” variety.

The task is to create a provably correct function that factors a natural number. The notion of correct we’ll use here is that the returned list multiplies up to the number we were factoring. With this notion of correctness, both [2, 2, 5] and [1, 20] are correct factorings of 20. Our function is expected to return only one such factoring.

If you know a bit of Agda you should be able to come up with this:

module factor-nonterminating where

open import Data.Fin as F
open import Data.List
open import Data.Nat
open import Data.Nat.DivMod

factor : (n : ℕ) → List ℕ
factor 0 = 0 ∷ []
factor 1 = []
factor (suc (suc n)) = helper n
  where helper : (try : ℕ) → List ℕ
        -- Check if (try + 1) divides (suc (suc n))
        helper 0 = (suc (suc n)) ∷ []
        helper (suc f) with ((suc (suc n)) divMod' (suc (suc f)))
        ... | result q F.zero _ = q ∷ factor (suc (suc f))
        ... | _ = helper f

The base cases of factoring 0 and 1 are trivial. The general case (which has to be of the form (suc (suc n)) since we’ve handled 0 and 1) is factored by counting down (suc n) and checking each number for divisibility. The only caveat here is that when we call helper n, we actually get helper to check for divisibility with (n + 1). If we had helper n check for divisibility with n we’d have to handle the helper 0 case somehow and that would be ugly. In that case, with some work, we could also pass around a proof that n is greater than 0 and convince the typechecker that the helper 0 clause could never be executed but we’ll keep it simple for now. The underscore in result q F.zero _ is a proof of the division returned by divMod' (proof that the quotient times the divisor plus the remainder is the dividend); we’ll use this later to prove the correctness of our factorization.

Alas, the code above fails the termination checker. It shouldn’t be a surprise that Agda’s termination checker isn’t perfect — halting is undecidable. Since labelling a non-terminating term as terminating isn’t an option in a theorem-prover, Agda errs on the side of caution and sometimes flags terminating functions as non-terminating. In those cases we need to help Agda in figuring out the structure of the recursion and why it must terminate. In the above example, for instance, it isn’t obvious to Agda that the recursive call to factor has a structurally smaller argument. We know that it does — the divisor suc (suc f) we recurse on is never larger than suc n and hence is smaller than suc (suc n) — but Agda does not.
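For contrast, the same countdown-by-divisor scheme in plain Haskell, where nothing asks us to justify termination (a sketch; I pass the candidate divisor d itself instead of a try with divisor try + 1):

```haskell
factor :: Integer -> [Integer]
factor 0 = [0]
factor 1 = []
factor n = helper (n - 1)
  where
    -- helper d tries divisors d, d - 1, ..., 2; reaching 1 means n is prime
    helper 1 = [n]
    helper d
      | n `mod` d == 0 = (n `div` d) : factor d  -- emit the quotient, recurse on the divisor
      | otherwise      = helper (d - 1)
```

Note that, like the Agda version, on success we cons the quotient onto the factorization of the divisor, so the product of the result is always n.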

There are several ways to address this. One simple (but hacky, I think) way is to assign an upper limit to the number of times your function can recurse. Once you’ve done that, you can keep a “dummy” counter that you decrement (i.e. peel off a constructor from — this is the part that the termination checker likes) every time you recurse. This looks especially bad if the upper limit is loose — Agda will force you to come up with a (possibly ad-hoc) answer for the case when your dummy counter runs out. However, there is another way.

The key property that allows the trick I’m about to describe is that an applied function is not considered larger than the function itself. I don’t know the theory behind why this works (I just asked on #agda) but this can be exploited to tell Agda that a recursion terminates. In a way this approach is a bit like keeping a “dummy” variable, but is more elegant and doesn’t require you to come up with an upper limit.

The high-level idea is that we create a type data Terminates (x : ℕ) : Set with a constructor that holds a function that gives us another Terminates y. We then recurse passing to ourselves Terminates y we just produced instead. Since we peel off one constructor Agda considers the recursion well-founded and hence terminating. With this idea we forge ahead:

module factor-terminating-draft where

open import Data.Fin as F
open import Data.List
open import Data.Nat
open import Data.Nat.DivMod

data Terminates (x : ℕ) : Set where
  termination-proof : (∀ y → Terminates y) → Terminates x

factor : (n : ℕ) → Terminates n → List ℕ
factor 0 _ = 0 ∷ []
factor 1 _ = []
factor (suc (suc n)) (termination-proof proof) = helper n
  where helper : (try : ℕ) → List ℕ
        helper 0 = (suc (suc n)) ∷ []
        helper (suc f) with ((suc (suc n)) divMod' (suc (suc f)))
        ... | result q F.zero _ = q ∷ factor (suc (suc f)) (proof (suc (suc f)))
        ... | _ = helper f

factorize : (n : ℕ) → List ℕ
factorize n = factor n ?

Pretty clever, eh? Note how we peel off the termination-proof constructor in the pattern match and apply (suc (suc f)) to the function we've just unwrapped, making the recursive call to factor both well-typed and, as far as Agda is concerned, terminating. We just need to fill in the ? and we're done. If you type this into emacs and load it into an Agda environment, you won't receive a non-termination error.

Unfortunately, this approach doesn’t work either — you cannot fill in the ? with any meaningful value. You probably have guessed the reason already, but in case you haven’t I’ll give you a hint:

data ⊥ : Set where

bam : (x : ℕ) → Terminates x → ⊥
bam x (termination-proof proof) = bam x (proof x)

Moreover, we still haven't used the fact that we recurse on a smaller number each time. No wonder we couldn't fill in the ?: without that crucial constraint, Terminates x has no inhabitants at all (bam above is a total function from Terminates x to ⊥). To use the fact that the value we call ourselves recursively on is smaller than the current one, we change the definition of Terminates to

data Terminates (x : ℕ) : Set where
  termination-proof : (∀ y → (y <′ x) → Terminates y) → Terminates x

We now require a proof that the newer value is smaller than the current one in order to produce a new Terminates object. The idea is to use this proof to actually fill in the ? plaguing us. To do so, we write the function:

generateTerminationProof : (n : ℕ) → Terminates n

so that we can write

factorize : ℕ → List ℕ
factorize n = factor n (generateTerminationProof n)

The definition of generateTerminationProof is quite simple too — especially if you use C-c C-t and C-c C-c judiciously :P.

base-case : (∀ y → (y <′ zero) → Terminates y)
base-case _ ()

generateTerminationProof : (n : ℕ) → Terminates n

induction : (n : ℕ) → (y : ℕ) → (y <′ n) → Terminates y
induction n zero _ = termination-proof base-case
induction .(suc (suc y')) (suc y') ≤′-refl =  generateTerminationProof (suc y')
induction .(suc n) (suc y') (≤′-step {n} m≤′n) = induction n (suc y') m≤′n

generateTerminationProof zero = termination-proof (base-case)
generateTerminationProof (suc limit) = termination-proof (induction (suc limit))

As you can see, we are always able to recurse on structurally smaller elements. This isn't incidental; it is possible because we carry the proof that y is smaller than x. For instance, if we weren't carrying the proof, we'd have to provide a body for base-case, which I don't think is possible.

The entire source file (if you’re new to Agda, I strongly suggest you try to put the pieces together yourself) looks like this:

module factor-termination-proof where

open import Data.Fin as F
open import Data.List
open import Data.Nat
open import Data.Nat.DivMod
open import Data.Nat.Properties

data Terminates (x : ℕ) : Set where
  termination-proof : (∀ y → (y <′ x) → Terminates y) → Terminates x

factor : (n : ℕ) → Terminates n → List ℕ
factor 0 _ = 0 ∷ []
factor 1 _ = []
factor (suc (suc n)) (termination-proof p) = helper n ≤′-refl
  where helper : (try : ℕ) → ((suc try) <′ (suc (suc n))) → List ℕ
        lemma : ∀ {f n} → ((suc f) <′ (suc (suc n))) → (f <′ (suc (suc n)))

        helper 0 _ = (suc (suc n)) ∷ []
        helper (suc f) proof with ((suc (suc n)) divMod' (suc (suc f)))
        ... | result q F.zero _ = q ∷ factor (suc (suc f)) (p (suc (suc f)) proof)
        ... | _ = helper f (lemma proof)

        lemma ≤′-refl = ≤′-step ≤′-refl
        lemma (≤′-step m≤′n) with ≤′⇒≤ (≤′-step m≤′n)
        lemma (≤′-step m≤′n) | s≤s m≤n = ≤⇒≤′ (≤-step m≤n)

base-case : (∀ y → (y <′ zero) → Terminates y)
base-case _ ()

generateTerminationProof : (n : ℕ) → Terminates n

induction : (n : ℕ) → (y : ℕ) → (y <′ n) → Terminates y
induction n zero _ = termination-proof base-case
induction .(suc (suc y')) (suc y') ≤′-refl =  generateTerminationProof (suc y')
induction .(suc n) (suc y') (≤′-step {n} m≤′n) = induction n (suc y') m≤′n

generateTerminationProof zero = termination-proof (base-case)
generateTerminationProof (suc limit) = termination-proof (induction (suc limit))

factorize : ℕ → List ℕ
factorize n = factor n (generateTerminationProof n)

factors₃₀ = factorize 30

In particular, notice how we carried the proof from the factor function into the helper function, and how we maintained it using lemma. We'll say a little more about transforming (and maintaining) proofs in the next post.

In the next post, we’ll first see a more idiomatic version of the same code and then go on to make the implementation provably correct (for some notion of correct) as we had originally planned. And many thanks to the nice folks on #agda for putting up with my shameless curiosity.


Movies to make you think, part II

Some less famous movies which I found nice (in no particular order):

  • Waltz With Bashir
  • Annie Hall (maybe not so exotic)
  • Pan’s Labyrinth
  • Teen Kanya (get a version with subtitles if you don’t speak Bangla) (edit: to clarify, I’m talking about this movie)
  • Spirited Away (again, maybe not so exotic)
  • Eraserhead (ditto)
  • Begotten
  • Mulholland Dr.
  • Paprika

For the record, the “Part I” of this list is here.


An Interesting CPP Bug

The C preprocessor, being a dumb text-manipulation tool that does not understand the syntactic structure of the underlying program, is often a hiding place for subtle, headache-inducing bugs. I’ve been bitten by such bugs myself, and one that I found particularly difficult to debug is often not mentioned in lists such as [1].

Can you spot a bug in the following CPP macro? Specifically, when is its behaviour ill-defined?

#define FROBNICATE(__frobptr) do {              \
    struct frobnicable *fptr = (__frobptr);     \
    assert(fptr && "Can't frobnicate NULL!");   \
    frobnicate_init(fptr);                      \
    frobnicate_finish(fptr);                    \
  } while(0)

One interesting thing I discovered about C while debugging this macro is that variable initializations like the following are valid:

{
  int foo = foo;
  /* foo now has an indeterminate value. */
}

which means the following piece of code won’t do what is expected of it:

{
  struct frobnicable *fptr = malloc(sizeof(struct frobnicable));
  fptr->x = 42;
  FROBNICATE(fptr);
}

The macro will expand to

do {
  struct frobnicable *fptr = (fptr);
  /* ... and so on ... `fptr` holds an indeterminate value in the
   * rest of the macro expansion. */
} while(0);

and the implications won’t be pleasant.

I haven’t yet discovered an elegant and effective way to deal with this issue. The struct frobnicable *fptr = (__frobptr) line is needed to prevent multiple evaluations of __frobptr, so the only way I see of preventing this bug is to use an extremely unusual name for the fptr variable (or to follow a convention of prefixing such variables with some previously agreed-upon string). But that is neither elegant nor foolproof.

[1] http://spin.atomicobject.com/2012/03/30/preprocessing-with-caution/
