I’ve been studying Haskell for a while now, and in the past few months I’ve gone from “Monads are impure” to “all Haskell functors are endofunctors” and “join f = f >>= id”. Some small things I noticed during this experience (YMMV):
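That last equivalence can be checked directly. A minimal sketch (the name joinViaBind is mine; join itself comes from the standard Control.Monad):

```haskell
import Control.Monad (join)

-- join collapses one layer of monadic structure;
-- for any monad, join m behaves the same as m >>= id
joinViaBind :: Monad m => m (m a) -> m a
joinViaBind m = m >>= id

-- e.g. on Maybe: joinViaBind (Just (Just 3)) == Just 3
--      on lists: joinViaBind [[1,2],[3]]     == [1,2,3]
```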
Haskell is essentially a System F (fancy name for polymorphic lambda calculus) implementation with syntactic sugar and, for this reason, learning Haskell without knowing the relevant theory can get confusing. I found reading about denotational semantics (excluding analysis of more complex stateful languages), lambda calculus (untyped, simply-typed, polymorphic) and some basic type theory (sum-of-product types, recursive types, Hindley-Milner type systems) very helpful. None of these topics are difficult; in fact, they become fairly intuitive once studied.
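To make the “sum-of-product types, recursive types” part concrete, here is how both show up in ordinary Haskell declarations (a small sketch; the type and function names are mine, not from any particular text):

```haskell
-- A sum of products: Shape is a sum (|) of two product constructors.
data Shape
  = Circle Double          -- a product of one field
  | Rect   Double Double   -- a product of two fields
  deriving (Show, Eq)

-- A recursive type: IntList is defined in terms of itself.
data IntList = Nil | Cons Int IntList
  deriving (Show, Eq)

-- Pattern matching handles each summand of the sum type.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```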
Currently I’m melting my brain on Category Theory, hoping that’ll help me better understand recursion schemes.
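As a taste of where that leads, the simplest recursion scheme, a catamorphism (generalized fold), can be sketched without any library, following the usual Fix/cata definitions from the recursion-schemes literature (the helper names are mine):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The fixed point of a functor: an arbitrarily deep nesting of f.
newtype Fix f = Fix (f (Fix f))

-- One layer of a list of Ints; the recursion lives in the r slot.
data ListF a r = NilF | ConsF a r deriving Functor

-- A catamorphism tears down a Fix f given an algebra f b -> b.
cata :: Functor f => (f b -> b) -> Fix f -> b
cata alg (Fix x) = alg (fmap (cata alg) x)

-- Summing a list is just a choice of algebra.
sumList :: Fix (ListF Int) -> Int
sumList = cata alg
  where
    alg NilF        = 0
    alg (ConsF n s) = n + s

-- Helper to build a Fix-encoded list from an ordinary one.
fromList :: [Int] -> Fix (ListF Int)
fromList = foldr (\x r -> Fix (ConsF x r)) (Fix NilF)
```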
Monads are a design pattern
Despite what the volume of the literature on the topic would suggest, Monads are just a bunch of cleverly written types and (higher-order) functions, and do blocks are just syntactic sugar. There is no impure or effectful code in Haskell – every expression is referentially transparent. The right way to look at a Haskell program is that it builds a description of an imperative program (a program that is allowed to have effects on the environment in which it is executed). This is the reason the type of main is IO (): main is itself just such a description, which the runtime then executes.
I was initially confused by literature that treats Monads as some magical switch which, all of a sudden, allows Haskell to have effects, when, in reality, all it does is manage the plumbing that makes the code appear sequential and effectful. There is, as it turns out, no spoon.
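One way to see the “description, not execution” point is that an IO value is an ordinary value you can build and combine without anything happening. A small sketch using Data.IORef to observe when effects actually run (the names bump and twice are mine):

```haskell
import Data.IORef

-- An effect description: when run, increments the counter.
bump :: IORef Int -> IO ()
bump ref = modifyIORef ref (+1)

-- Combining two descriptions yields a bigger description;
-- (>>) sequences them but runs nothing by itself.
twice :: IO a -> IO a
twice act = act >> act
```

Binding `let action = twice (twice (bump ref))` constructs a description of four increments, yet the counter stays at zero until `action` is actually executed.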
Fast data structures are hard
Haskell makes certain things extremely easy. Writing fast, composed data structures is not one of them. This is probably true of most purely functional languages – you can’t do in-place updates, which makes efficient updates to a data structure complex (one way around this is Zippers, as I’m finding out). Immutability automatically guarantees persistence; but sometimes, at 2 AM, you just want to do a node = node->right and be done with it.
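The Zipper workaround can be sketched for a binary tree: keep the focused subtree plus a trail of breadcrumbs back to the root, so moving and editing at the focus is O(1) and rebuilding shares everything off the path (a minimal sketch; all names are mine):

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a) deriving (Show, Eq)

-- Each crumb records the part of the parent we did NOT descend into.
data Crumb a = WentLeft  a (Tree a)  -- parent value and right sibling
             | WentRight a (Tree a)  -- parent value and left sibling
             deriving (Show, Eq)

type Zipper a = (Tree a, [Crumb a])

-- The moral equivalent of node = node->right.
goRight :: Zipper a -> Maybe (Zipper a)
goRight (Node l x r, bs) = Just (r, WentRight x l : bs)
goRight (Leaf, _)        = Nothing

-- Rebuild one level of the tree from a breadcrumb.
goUp :: Zipper a -> Maybe (Zipper a)
goUp (t, WentLeft  x r : bs) = Just (Node t x r, bs)
goUp (t, WentRight x l : bs) = Just (Node l x t, bs)
goUp (_, [])                 = Nothing

-- Replace the focused subtree; untouched subtrees are shared.
modifyFocus :: (Tree a -> Tree a) -> Zipper a -> Zipper a
modifyFocus f (t, bs) = (f t, bs)
```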
The hard way is the right way
Learning Haskell the hard way (why does that sound familiar?) is the only way that worked for me. I could never figure much out by firing up GHCi, typing in a bunch of statements and seeing what happened. Such experimentation certainly helps, but only when backed by relevant theory. As an aside, I’ve found the book “Learn You a Haskell for Great Good!” extremely well-written in this “not jumping into things too soon” respect.