Parametricity Tutorial (Part 1)

Saturday, 23 May 2015, by Edsko de Vries.
Filed under coding.

A powerful feature of Haskell’s type system is that we can deduce properties of functions by looking only at their type. For example, a function of type

f :: ∀a. a -> a

can only be the identity function: since it must return something of type a, for any type a, the only thing it can do is return the argument of type a that it was given (or crash). Similarly, a function of type

f :: ∀a. a -> a -> a

can only do one of two things: either return the first argument, or return the second. This kind of reasoning is becoming more and more important with the increasing use of types such as this definition of a “lens”:

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

Since a lens is just a function of a particular type, the only thing we can conclude about such a function is whatever we can deduce from its type.
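
To make these opening examples concrete, here are (ignoring undefined) the only possible implementations of the first two types above, written out as a small sketch:

f1 :: a -> a          -- i.e. ∀a. a -> a (the quantifier is implicit in Haskell)
f1 x = x              -- must be the identity

f2, f3 :: a -> a -> a
f2 x _ = x            -- either return the first argument ...
f3 _ y = y            -- ... or the second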

To reason about the properties of functions based only on their types we make use of the theory of parametricity, which tells how to derive a so-called “free theorem” from a type. This blog post is a tutorial on how to do this; it won’t explain why the theory works, only how it works. A Haskell practitioner’s guide to parametricity, if you will.

This is a two-part blog post. In part 1 (this post) we will cover the basics: constant types, functions and polymorphism (over types of kind *).

In part 2 (to be published shortly) we will deal with more advanced material: type constructors (types of kind * -> *), type classes, polymorphism over type constructors, and type constructor classes.

The Basics

The main theorem of parametricity is the following:

if f :: t then f ℛ(t) f

When t is a closed type, ℛ(t) is a relation between two terms of type t (we shall see later that the type of ℛ is actually slightly more general). In words, parametricity states that any term f of type t is related to itself by ℛ(t). Don’t worry if this all looks incredibly abstract! We shall see lots and lots of examples.

Constant types (types of kind *)

For any constant type C, the relation ℛ(C) is the identity relation. In other words,

     x ℛ(C) x'
iff  x ≡ x'

(We will use ≡ throughout to mean mathematical equality, to distinguish it from Haskell’s equality function (==).)

Let’s see an example. Suppose that x :: Int. Then parametricity tells us that

     x ℛ(Int) x
iff  x ≡ x

I.e., it tells us that x is equal to itself. Not very insightful! Intuitively, this makes sense: if all we know about x is that it is an integer, we cannot tell anything about its value.

TOOLING. Many of the examples in this blog post (though sadly not all) can also be auto-derived by one of two tools. On the #haskell IRC channel we can ask lambdabot to derive free theorems for any types not involving type classes or type constructor classes. If you ask

@free x :: Int

lambdabot will reply with

x = x

(I recommend starting a private conversation with lambdabot so you avoid spamming the whole #haskell channel.)

Alternatively, you can also try the online free theorem generator. This free theorem generator is a bit more precise than lambdabot (which takes some shortcuts sometimes), and supports type classes, but cannot work with type constructors (lambdabot can work with unknown type constructors but not with quantification over type constructors, unfortunately).

Functions

For functions we map related arguments to related results:

     f ℛ(A -> B) f'
iff  forall x, x'.
       if x ℛ(A) x' then f x ℛ(B) f' x'

(The types of x and x' here depend on what precisely A is; see The type of ℛ, below.)

Example. Suppose f :: Int -> Bool. By parametricity

     f ℛ(Int -> Bool) f
iff  forall x :: Int, x' :: Int.
       if x ℛ(Int) x' then f x ℛ(Bool) f x'
-- both Int and Bool are constant types
iff  forall x :: Int, x' :: Int.
       if x ≡ x' then f x ≡ f x'
-- simplify
iff  f ≡ f

Again, not very informative. Parametricity doesn’t tell us anything about functions between constant types. Time to look at something more interesting!

Polymorphism (over types of kind *)

The definition for polymorphic values is

     f ℛ(∀a. t) f'
iff  forall A, A', a :: A ⇔ A'.
       f@A ℛ(t) f'@A'           -- where 'a' can occur free in t

That is, whenever we pick two types A and A', and some relation a between A and A', the function f@A obtained by instantiating the type variable by A must be related to the function f'@A' obtained by instantiating the type variable by A'. In what follows we will write explicit type instantiation like this only if the type is not clear from the context; specifically, we will omit it when we supply arguments to the function.

The type of ℛ.

∀ab. a -> b -> a is an example of a closed type: all type variables are bound by a universal quantifier. An open type is a type with free type variables such as ∀b. a -> b -> a or even a -> b -> a. (Note that this distinction is harder to see in Haskell where universal quantifiers are often implicit. We will not follow that convention in this article.)

We said in the introduction that if t is a closed type, ℛ(t) relates two terms of type t. As we saw, in order to be able to give a meaning to open types we need a mapping from any free variable a to a relation a :: A ⇔ A'. In this article we somewhat informally maintain this mapping simply by using the same name for the type variable and the relation.

Given two relations a :: A ⇔ A' and b :: B ⇔ B', ℛ(a -> b -> a) relates terms of type A -> B -> A with terms of type A' -> B' -> A'. It is important to realize that ℛ can therefore relate terms of different types. (For a more precise treatment, see my Coq formalization of a part of this blog post.)

The interpretation of ℛ for a free type variable a is defined in terms of the corresponding relation:

     x ℛ(a) x'     -- the type variable
iff  (x, x') ∈ a    -- the relation

Example: ∀a. a -> a

Let’s consider a number of examples, starting with an f :: ∀a. a -> a:

     f ℛ(∀a. a -> a) f
-- parametricity
iff  forall A, A', a :: A ⇔ A'.
       f@A ℛ(a -> a) f@A'
-- definition for function types
iff  forall A, A', a :: A ⇔ A', x :: A, x' :: A'.
       if x ℛ(a) x' then f x ℛ(a) f x'

It might not be immediately evident from the last line what this actually allows us to conclude about f, so let’s look at this a little closer. A function g is a special kind of relation, relating any argument x to g x; since the property holds for any kind of relation a :: A ⇔ A', it must also hold for a function a⃯ :: A -> A':

     forall x, x'.
       if x ℛ(a⃯) x' then f x ℛ(a⃯) f x'
-- x ℛ(a⃯) x' iff a⃯ x ≡ x'
iff  forall x :: A, x' :: A'.
       if a⃯ x ≡ x' then a⃯ (f x) ≡ f x'
-- simplify
iff  forall x :: A.
       a⃯ (f x) ≡ f (a⃯ x)

We can apply this result to show that any f :: ∀a. a -> a must be the identity function: picking a⃯ = const x, we get const x (f x) ≡ f (const x x), i.e. x ≡ f x, as required.
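
Purely as an illustration (not part of the argument), we can spot-check one instance of the law a⃯ (f x) ≡ f (a⃯ x) with QuickCheck, assuming the QuickCheck package:

import Test.QuickCheck

f :: a -> a
f = id   -- the only total inhabitant of ∀a. a -> a

-- one instance of the free theorem: g (f x) ≡ f (g x) for any function g
prop_freeTheorem :: Int -> Bool
prop_freeTheorem x = g (f x) == f (g x)
  where
    g :: Int -> Integer
    g = fromIntegral . (* 3)

main :: IO ()
main = quickCheck prop_freeTheorem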

NOTE. We are doing fast and loose reasoning in this tutorial and will completely ignore any totality issues. See Automatically Generating Counterexamples to Naive Free Theorems, or the associated web interface, for some insights about what wrong conclusions we can draw by ignoring totality.

Example: ∀a. a -> a -> a

Intuitively, there are only two things a function f :: ∀a. a -> a -> a can do: it can either return its first argument, or it can return its second argument. What does parametricity tell us? Let’s see:

     f ℛ(∀a. a -> a -> a) f
iff  forall A, A', a :: A ⇔ A'.
       f@A ℛ(a -> a -> a) f@A'
-- applying the rule for functions twice
iff  forall A, A', a :: A ⇔ A', x :: A, x' :: A', y :: A, y' :: A'.
       if x ℛ(a) x', y ℛ(a) y' then f x y ℛ(a) f x' y'

Let’s again specialize the last line to pick a function a⃯ :: A -> A' for the relation a:

     forall x :: A, x' :: A', y :: A, y' :: A'.
       if x ℛ(a⃯) x', y ℛ(a⃯) y' then f x y ℛ(a⃯) f x' y'
-- a⃯ is a function
iff  forall x :: A, x' :: A', y :: A, y' :: A'.
       if a⃯ x ≡ x' and a⃯ y ≡ y' then a⃯ (f x y) ≡ f x' y'
-- simplify
iff  forall x :: A, y :: A.
       a⃯ (f x y) ≡ f (a⃯ x) (a⃯ y)

So parametricity allows us to “push in” or “distribute” a⃯ over f. The fact that f must either return its first or its second argument follows from parametricity, but not in a completely obvious way; see the reddit thread How to use free theorems.

Example: ∀ab. a -> b

Other than undefined (which we are ignoring), there can be no function f :: ∀ab. a -> b. Let’s suppose that one did exist; what does parametricity tell us?

     f ℛ(∀ab. a -> b) f
-- applying the rule for universal quantification, twice
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
       f@A,B ℛ(a -> b) f@A',B'
-- applying the rule for functions
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B', x :: A, x' :: A'.
       if x ℛ(a) x' then f x ℛ(b) f x'

Picking two functions a⃯ :: A -> A' and b⃯ :: B -> B' for a and b, we get

b⃯ . f ≡ f . a⃯

It’s not too difficult to derive a contradiction from this (remember that you can pick any two functions a⃯ and b⃯ between any types of your choosing). For instance, pick A ≡ A' ≡ B ≡ (), B' ≡ Bool, and a⃯ = id: the right-hand side f . a⃯ is then a fixed function () -> Bool that does not mention b⃯ at all, while the left-hand side b⃯ . f returns True when we pick b⃯ = const True and False when we pick b⃯ = const False, giving True ≡ False. Hence, such a function cannot exist.

Example: ∀ab. (a -> b) -> a -> b

The only thing a function of this type can do is apply the supplied function to the supplied argument (alternatively, if you prefer, this must be the identity function). Let’s spell this example out in a bit of detail because it is our first example of a higher order function.

     f ℛ(∀ab. (a -> b) -> a -> b) f
-- apply rule for polymorphism, twice
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
       f@A,B ℛ((a -> b) -> a -> b) f@A',B'
-- apply rule for functions, twice
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
     forall g :: A -> B, g' :: A' -> B', x :: A, x' :: A'.
       if g ℛ(a -> b) g' and x ℛ(a) x' then f g x ℛ(b) f g' x'

Let’s expand what that premise g ℛ(a -> b) g' means:

     g ℛ(a -> b) g'
iff  forall y :: A, y' :: A'.
       if y ℛ(a) y' then g y ℛ(b) g' y'

For the special case that we pick functions a⃯ :: A -> A' and b⃯ :: B -> B' for a and b, that premise collapses to

     forall y :: A, y' :: A'.
       if y ℛ(a⃯) y' then g y ℛ(b⃯) g' y'
-- a⃯ and b⃯ are functions
iff  forall y :: A, y' :: A'.
       if a⃯ y ≡ y' then b⃯ (g y) ≡ g' y'
-- simplify
iff  forall y :: A.
       b⃯ (g y) ≡ g' (a⃯ y)
-- simplify (extensionality)
iff  b⃯ . g ≡ g' . a⃯

So that the free theorem for f :: ∀ab. (a -> b) -> a -> b becomes

if b⃯ . g ≡ g' . a⃯ then b⃯ . f g ≡ f g' . a⃯

Picking b⃯ = const (g x), g' = g, and a⃯ = const x, for an arbitrary x :: A (verify that the premise holds), the conclusion reads g x ≡ f g x; since x was arbitrary, we get that indeed g ≡ f g, as expected.
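
As a quick sanity check, we can also spot-check the theorem for the canonical inhabitant f = ($): instantiating a⃯ = id and g' = b⃯ . g (so that the premise holds trivially), the conclusion becomes b⃯ (f g x) ≡ f (b⃯ . g) x. A small runnable sketch:

f :: (a -> b) -> a -> b
f = ($)

-- with a⃯ = id and g' = b⃯ . g, the free theorem says  b⃯ (f g x) ≡ f (b⃯ . g) x
prop_apply :: Int -> Bool
prop_apply x = b (f g x) == f (b . g) x
  where
    g = (+ 1) :: Int -> Int
    b = show  :: Int -> String

main :: IO ()
main = print (all prop_apply [0 .. 100])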

Useful shortcut. This pattern is worth remembering:

      g ℛ(a⃯ -> b⃯) g'
iff   b⃯ . g ≡ g' . a⃯
whenever a⃯ and b⃯ are function(al relation)s.

Example: ∀ab. (∀c. c -> String) -> a -> b -> String

A function f :: ∀ab. (∀c. c -> String) -> a -> b -> String is not only higher order, but has a rank-2 type: it insists that the function it is given is itself polymorphic. This makes it possible to write, for example

f g x y = g x ++ g y

Note that since x and y have different types, it is important that g is polymorphic. What is the free theorem for f?

     f ℛ(∀ab. (∀c. c -> String) -> a -> b -> String) f
-- apply rule for polymorphism, twice
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
       f ℛ((∀c. c -> String) -> a -> b -> String) f
-- apply rule for functions three times, and simplify ℛ(String) to (≡)
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
     forall g :: ∀c. c -> String, g' :: ∀c. c -> String.
     forall x :: A, x' :: A', y :: B, y' :: B'.
       if
         g ℛ(∀c. c -> String) g', x ℛ(a) x', y ℛ(b) y'
       then
         f g x y ≡ f g' x' y'

Specializing this theorem to functions a⃯ :: A -> A' and b⃯ :: B -> B' we get

forall A, A', B, B', a⃯ :: A -> A', b⃯ :: B -> B'.
forall g :: ∀c. c -> String, g' :: ∀c. c -> String.
forall x :: A, y :: B.
  if
    g ℛ(∀c. c -> String) g'
  then
    f g x y ≡ f g' (a⃯ x) (b⃯ y)

But that is somewhat surprising, because it seems to say that the values of x and y cannot matter at all! What is going on? Expanding the first premise:

     g ℛ(∀c. c -> String) g'
iff  forall C, C', c :: C ⇔ C'.
       g ℛ(c -> String) g'
iff  forall C, C', c :: C ⇔ C', z :: C, z' :: C'.
       if z ℛ(c) z' then g z ≡ g' z'

Let’s stop for a moment to ponder what this requirement for g and g' really says: given any relation c, and any elements z and z' that are related by c—in other words, for any z and z' at all—we must have that g z and g' z' give us equal results. This means that g and g' must be constant functions, and the same constant function at that. As a consequence, for any function f of the above type, f g must itself be constant in x and y. In part two we will see a more useful variation which uses the Show type class.
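
A small runnable illustration of this conclusion (just a sketch): any total g :: ∀c. c -> String has to be a constant function, so f g cannot depend on x or y.

{-# LANGUAGE RankNTypes #-}

g :: c -> String
g _ = "konst"                    -- the only shape a ∀c. c -> String function can take

f :: (forall c. c -> String) -> a -> b -> String
f h x y = h x ++ " " ++ h y

main :: IO ()
main = do
  putStrLn (f g True 'x')        -- "konst konst"
  putStrLn (f g [1 :: Int] ())   -- also "konst konst": the values of x and y cannot matter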

Incidentally, note that this quantification over an arbitrary relation c is a premise to the free theorem, not a conclusion; hence we cannot simply choose to consider only functions c⃯.

TOOLING. Indeed, if you enter

(forall c. c -> String) -> a -> b -> String

in the online free theorem generator you will see that it first gives the free theorem using relations only, and then says it will reduce all “permissible” relation variables to functions; in this example, that is all relations except for c. lambdabot doesn’t make this distinction and always reduces relations to functions, which is not correct.

To be continued

In part 2 (to be published shortly) we will cover type constructors, type classes and type constructor classes. Meanwhile, here are some links to papers on the subject if you want to read more.


Recent Hackage improvements

Monday, 18 May 2015, by Duncan Coutts.
Filed under cabal, community.

You may or may not have noticed but over the last few months we’ve had a number of incremental improvements to the Hackage site, with patches contributed by numerous people.

I’m very pleased that we’ve had contributions from so many people recently. Apart from one patch that took a long time to merge we’ve generally been reasonably good at getting patches merged. Currently there’s just 1 outstanding pull request on hackage-server’s github site.

I gave a talk a couple months ago at the London Haskell User Group about getting started with hacking on Cabal and Hackage. Unfortunately the video isn’t yet available. (I’ll try and chase that up and link it here later).

An idea we floated at that talk was to run a couple hackathons dedicated to these and related infrastructure projects. If you want to help organise or have a venue in London, please get in touch. If you can’t get to London, fear not as we’d also welcome people attending online. Of course there’s also the upcoming ZuriHac where I expect there will be plenty of infrastructure work going on.

If you do want to get involved, the github site is the place to start. Discussion of features happens partly in issues on github and the #hackage channel on IRC. So those are good places to get feedback if you decide to start working on a bug or feature.

Recent changes

Visible changes

A few boring but important ones

Miscellaneous small changes


Improving Hackage security

Thursday, 16 April 2015, by Duncan Coutts.
Filed under industrial-haskell-group, cabal, community.

The members of the Industrial Haskell Group are funding work to improve the security of packages downloaded from Hackage. The IHG members agreed to fund this task some time ago and my colleague Austin and I have been working on the design and implementation.

In this post I will explain the problem and the design of our solution.

TL;DR

We’re implementing a system to significantly improve Hackage security. It’s based on a sensible design (The Update Framework) by proper crypto experts. The basic system is fully automatic and covers all packages on Hackage. A proposed extension would give further security improvements for individual packages at the cost of a modest effort from package authors.

It will also allow the secure use of untrusted public Hackage mirrors, which is the simplest route to better Hackage reliability. As a bonus we’re including incremental index downloads to reduce cabal update wait times. And it’s all fully backwards compatible.

Goals

The general aim is to make a significant incremental improvement to Hackage security, given limited resources to spend on the project. So that means covering as many users, packages and likely threats as we can.

We have had to think carefully about the implementation effort and what improvements will give the biggest security benefits. Our general approach has been to go for the biggest improvements first, but to leave the door open to further tightening later.

Crypto-humility and initial thoughts

Both Austin and I have some experience with cryptography and computer security. Austin used to work for a computer security company, and many years ago I worked at an anti-virus company on the problem of securing the software update process. So we know the general options. Importantly, we also know enough to know that we don’t know everything. Hopefully the combination will be enough to avoid security disasters like the recent Docker image download vulnerability.

The basic options are:

Our initial thoughts were that having the server sign the index gives a bigger bang for the buck than a scheme with author signing. This is because with the server doing the signing we can cover all packages and not require any knowledge or effort by either the package authors or the developers downloading the packages. With author signing we fear we would not be able to cover the majority of packages, at least not in the near term. Without covering all the packages we cannot have untrusted Hackage mirrors. Our rough estimate was also that index signing would be less effort to implement. Of course we’re well aware that index signing provides shallower security than the end-to-end verification with author signing.

So while we started with a general idea of the kind of solutions that would fit with Hackage and our budget, we looked around for designs and analysis from better experts than ourselves.

Austin pointed out some prior art called The Update Framework (abbreviated TUF). TUF is an architecture for securing software update systems, though there is also a reference implementation in Python. It has been designed by a number of academics, based on a few published papers and practical experience with the Tor Project’s update system. There is ongoing work to use TUF to secure Python and Ruby’s package distribution systems (both of which are fairly similar to Hackage).

What is good about TUF is that it considers the problem as a whole, with a wide range of threats. It has what security researchers call a “threat model”, which is needed to be able to discuss with clarity what kinds of threats a system can protect against. As an architecture it is also flexible enough to cover a fairly wide range of update systems: one can use subsets and still be clear about what protections one has.

The elements of TUF are still based on the basic ideas of end-to-end signing and index signing, but it is put together in a comprehensive way. There are several attacks that TUF considers and protects against that we did not consider with our initial thoughts on index signing.

Indeed even if one goes for a system based only on end-to-end author-based signing (e.g. individual GPG-signed packages) then there are still several possible attacks. In retrospect this is clear, but it was a bit of a surprise since packages individually GPG-signed by their authors is often considered as the gold standard.

The Update Framework

TUF is designed to protect against adversaries that can interfere with network traffic between you and the repository server, for example someone operating a wifi network you’re connected to, or in control of a proxy or operating a mirror of the repository. To a degree it is also designed to protect against adversaries that can compromise a repository server itself.

In a full implementation of the architecture, TUF subsumes both server-based index signing and end-to-end verification with author signing.

It relies on a number of different cryptographic keys with different roles. There are one or more root keys and a few subordinate keys for particular purposes. It does not rely on an external PKI, like the public certificate authority system that SSL/TLS uses.

The key idea with having multiple keys with different roles is that one can design a system in such a way that compromising one key is not enough to break the whole system. This relies on different keys living in different places so that compromising one doesn’t automatically imply compromising the others. Some keys have to live on public facing servers and so are at greatest risk, while others can be kept offline or on private servers and so are much harder to compromise.

Attacks

TUF provides protection against a number of attacks. If you want to imagine a concrete example, consider that the attacker runs or has compromised a mirror of Hackage that you use, and then think about each of these attacks:

Interestingly, a naive approach based only on authors signing packages only solves the first of the attacks above. All the others are still possible, and several of them can lead directly to compromised client systems, while others can force you to use old possibly vulnerable software, or just DOS your system.

Roles for keys

TUF defines a number of roles for keys:

Client verification process

In a full implementation, including delegated target keys, the client download and verification process goes like this:

At this point we know (with cryptographic evidence) whether the package index has changed and what size and hash to expect when we download it. We also have a bound on the freshness of this information because the timestamp signature includes a short (e.g. day or two) expiry time.

At this point we have downloaded the package index successfully and know (again with cryptographic evidence) that it has not been tampered with since being created on the server and so we can trust all the metadata in the index.

At this point we have downloaded an individual package and we know (with cryptographic evidence) that the package tarball has not been modified since it was created on the author’s system, and that this author is authorised to provide this named package.

Discussion

The different keys provide assurance of slightly different (but somewhat overlapping) things. If we only care about man-in-the-middle attacks then we only need the timestamp key and signature, since it includes the hash of the index which includes hashes of the packages.

The addition of the snapshot key to sign the index allows the snapshot key to be kept on a different system from the public-facing index server (think of an organisation like Red Hat with an internal system only available to people uploading packages). This timestamp/snapshot separation would allow the public-facing index server to be compromised with the damage being limited to freeze attacks (and the duration of this is bounded).

The addition of the target key (and delegated target keys) gives additional protection in the case that the index server is compromised. When target keys are being used, an attacker that has taken control of the index server can perform freeze attacks, rollback and mix and match attacks, but cannot change the contents of individual packages, nor provide packages that they are not authorised to provide.

Again, it is important to emphasise that all the keys need to be kept in separate places for them to have any security benefit.

Note that slow download and endless data attacks can be prevented because except for the initial tiny timestamp file, the client knows (securely) the expected download size and so carefully written download code can at least notice slow download and endless data attacks.

The Update Framework for Hackage

TUF is an architecture (albeit with a reference implementation) designed to be flexible enough to cover many use cases (distros, individual apps etc) and so we are free to adapt it for our particular use case.

For the case of Hackage as currently designed, the timestamp/snapshot key distinction does not add anything, as both keys would need to be on the same system. This is because the Hackage server is public facing both for download and upload. There would need to be a separate more secure upload server for the timestamp/snapshot distinction to add value. So for the Hackage implementation we will merge the timestamp and snapshot keys into a single package index signing key.

We also need to adapt the format and layout of the various bits of TUF metadata to fit the existing Hackage index format. For the most part this is fairly straightforward. The timestamp file can be provided as a separate file over HTTP, since it needs to be small and downloaded before the package index. All the other files that TUF specifies can be included inside the Hackage index file. This is straightforward because the Hackage index is simply a tarball and is designed to be extensible with extra file entries.

Our development plan had us start with the package index signing but keeping in mind the target keys feature so that it can be added later without any compatibility problems. We have enough funding to complete the package index signing and we hope that with this clear design roadmap we can secure either further funding or community volunteer effort.

Implementing target keys later would involve client tools (like cabal) generating keys and an out-of-band mechanism to submit them to be signed by the holder of the central target key (in practice the Hackage admins). For updating who is allowed to upload which package, a semi-manual process will be needed for the central target key holders to check and authorise re-signing the key delegation information. Tools like cabal would also need to be extended to sign tarballs and upload the signatures. So there would be some extra work for package authors but it could be kept fairly minimal. The main cost would be that changing the set of maintainers for a package on Hackage would become an asynchronous process involving a Hackage admin re-signing something.

As a final point, Hackage allows package authors and Hackage trustees to edit .cabal file metadata after release. So when we add target keys then these parts of the index should be protected with the target keys, not just the main package index key.

Comparing with other approaches

HTTPS

HTTPS is often mentioned as a way to improve Hackage security. As the TUF papers make clear, relying on the public PKI for security leaves a lot to be desired, given all the well-documented problems with trusting all the public certificate authorities in the world. In addition, relying on HTTPS does not allow for untrusted mirrors, whereas TUF’s index signing approach does allow that. Finally, on a practical note HTTPS is unfortunately complicated to support sanely on all platforms (OpenSSL is horrible, and distributing it for platforms that don’t come with it is painful). By contrast, TUF only relies on a crypto hash and signature verification algorithm. There are many choices for these including good choices that are small enough to embed existing audited C code directly into a Haskell library or tool. We plan to use ed25519 and in particular the C implementation extracted from NaCl.

GnuPG

Another approach that has been discussed and had some development effort is one based on author signing with GnuPG by Nikita Karetnikov. It was designed to fit with the existing Hackage index format with signatures being included in the package index tarball. The primary difficulty with a GnuPG-based system is in establishing the web-of-trust to a sufficient degree that signature verification can be done fully automatically without supervision from the user.

More GnuPG

Another recent GnuPG-based proposal comes from Chris Done. This proposal is under active discussion so in the spirit of constructive criticism it’s worth comparing it with TUF.

The first general point is that TUF has been designed by academic experts in the subject, based both on research and existing real-world systems. Our crypto-humility should cover not just crypto algorithms but extend to whole system designs. As I mentioned at the beginning of this post, though my initial thoughts on package index signing looked quite similar to the root+snapshot key part of the TUF system, there are many extra details that TUF covers that I would not have considered. With these in mind we can already spot underspecified areas in the proposal. For example one part of the spec says “Downloads the index from the web server or loads from cache if not modified”, but this leaves open the question of how we know securely if the index has been modified or not. TUF addresses this with the signed timestamp file. This kind of detail is not something I would have considered either, but TUF identifies the problem (that relying on HTTP cache control headers would be vulnerable to MITM) and has a solution. Generally in a system designed by crypto amateurs like us we would expect to have such holes. So we can do better by picking an existing expert design. TUF seems like a reasonable choice since it is designed to be somewhat flexible to fit different situations, and it’s being considered for both Python and Ruby’s package systems.

More specifically, the GnuPG-based proposal relies on a web of trust in which individual end users have to decide who they trust and configure their tools appropriately. A more recent version of the proposal includes signed mappings from packages to signers, which are basically statements of “I trust so and so to upload package foo”, but you still need to trust the author of that statement.

The system seems to be designed primarily for enterprise users who are using a carefully selected set of packages where a level of manual supervision to decide which authors to trust is justified. By contrast TUF’s target key delegation system is designed to be fully automatic for end users.

With TUF’s target key delegation, the chain of trust is very clear and simple: the root key signs the target key, which signs delegated keys and delegation information, and the delegated keys sign individual packages.

I also think it’s unfortunate that the signatures are not integrated as part of the Hackage index as in Nikita’s system (though it is in keeping with the general 3rd party design). On the positive side, the system has the flexibility to have 3rd parties make claims of trustworthiness, such as “I have manually audited this code” or “My CI system says this builds ok without obviously attempting to attack the system”. That extra complexity adds to the amount of manual supervision needed and seems to have been left out of the latest version of the proposal.

Finally because the proposal has nothing corresponding to the TUF snapshot key signature then all the corresponding attacks are possible, including rollback, freeze and mix-and-match attacks.

A central authority

A suggestion that has come up in recent discussion is that there should be a central authority for who is allowed to upload which package and that that information should be transparent. The criticism is that while Hackage is currently that central authority, it does not generate any verifiable evidence.

With TUF the target key delegation information is precisely that cryptographic evidence. The holders of the target key are that central authority, the signed delegation information is the cryptographic evidence of who is allowed to upload what, and that information is distributed to all clients.

Incremental updates

There has been a design for incremental updates of the Hackage index floating around for some years. With the TUF approach of putting more signatures into the index the pain of slow Hackage index downloads would become critical so our design incorporates that existing design for incremental updates to the index. This means most index updates will be 10s of kilobytes rather than multiple megabytes. This is actually even better than rsync (and likely better than git). The trick is that the tar format was originally designed to be append only (for tape drives) and so if the server simply updates the index in an append only way then the clients only need to download the tail (with appropriate checks and fallback to a full update). Effectively the index becomes an append only transaction log of all the package metadata changes. This is also fully backwards compatible.
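
As a rough illustration of the append-only idea (a sketch only, not the actual cabal-install code; the real client also checks the expected size and hash from the signed timestamp before trusting what it downloaded), an incremental fetch could look something like this, assuming the http-conduit package:

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString.Char8 as BS
import qualified Data.ByteString.Lazy  as BL
import           Network.HTTP.Simple
import           System.Directory      (getFileSize)

-- Ask only for the bytes beyond what we already have and append them to the
-- local copy, falling back to a full download if the server ignores the
-- Range header.
updateIndex :: String -> FilePath -> IO ()
updateIndex url localIndex = do
  haveBytes <- getFileSize localIndex
  req       <- parseRequest url
  let range = BS.pack ("bytes=" ++ show haveBytes ++ "-")
  resp <- httpLBS (addRequestHeader "Range" range req)
  case getResponseStatusCode resp of
    206 -> BL.appendFile localIndex (getResponseBody resp)  -- got just the tail
    200 -> BL.writeFile  localIndex (getResponseBody resp)  -- full index instead
    s   -> fail ("unexpected HTTP status: " ++ show s)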

Conclusion

We think implementing TUF for Hackage will be a major improvement to security. The initial implementation without target keys will be completely transparent to both end users and authors, and then the later addition of per-author target keys will further improve security by guarding against certain attacks even if the central server is compromised.

We accept that although we are programming experts, we are not crypto experts. We are hopeful that by using TUF we are reusing an existing expert crypto design and will not fall into the traps that typically bedevil our fellow crypto amateurs.


OverloadedRecordFields revived

Monday, 30 March 2015, by Adam Gundry.
Filed under coding, ghc, well-typed.

Way back in the summer of 2013, with support from the Google Summer of Code programme, I implemented a GHC extension called OverloadedRecordFields to address the oft-stated desire to improve Haskell’s record system. This didn’t get merged into GHC HEAD at the time, because the implementation cost outweighed the benefits. Now, however, I’m happy to report that Well-Typed are sponsoring the work required to get an improved version into GHC. Moreover, the first part of the work is already up for review on Phabricator.

The crucial thing that has enabled OverloadedRecordFields to get going again is that we’ve found a way to factor the design differently, so that we get a much better power-to-weight ratio for the extension by splitting it into two parts.

Part 1: Records with duplicate field labels

The first step is to cut down the core OverloadedRecordFields extension as much as possible. The essential idea is the same as it ever was, namely that a single module should be able to use the same field name in multiple datatypes, as in this example:

data Person  = Person  { personId :: Int, name :: String }
data Address = Address { personId :: Int, address :: String }

These definitions are forbidden in normal Haskell, because personId is defined twice, but the OverloadedRecordFields extension will permit them and instead postpone name conflict checking to use sites. The basic extension will require that fields are used in such a way that the relevant datatype is always unambiguously determined, and the meanings of record construction, pattern matching, selection and update will not change. This means that the extension can always be enabled for an existing module and it will continue to compile unchanged, an important property that was not true of the previous design.

The Haskell syntax for record construction and pattern-matching is always unambiguous, because it mentions the data constructor, which means that code like this is perfectly fine:

p = Person { personId = 1, name = "Donald" }
getId (Person { personId = i }) = i

On the other hand, record selector functions are potentially ambiguous. The name and address selectors can be used without restrictions, and with their usual types, but it will not be possible to use personId as a record selector if both versions are in scope (although we will shortly see an alternative).

Record update is a slightly more interesting case, because it may or may not be ambiguous depending on the fields being updated. In addition, since updates are a special syntactic form, the ambiguity can be resolved using a type signature. For example, this update would be ambiguous and hence rejected by the compiler:

f x = x { personId = 0 } -- is x a Person or an Address?

On the other hand, all these updates are unambiguous:

g :: Person -> Person
g x = x { personId = 0 }                 -- top-level type signature

h x = x { personId = 0 } :: Person       -- type signature outside

k x = (x :: Person) { personId = 0 }     -- type signature inside

l x = x { personId = 0, name = "Daffy" } -- only Person has both fields

Overall, this extension requires quite a bit of rewiring inside GHC to distinguish between field labels, which may be overloaded, and record selector function names, which are always unambiguous. However, it requires nothing conceptually complicated. As mentioned above, the implementation of this part is available for review on Phabricator.

Part 2: Polymorphism over record fields

While the OverloadedRecordFields extension described in part 1 is already useful, even though it is a relatively minor relaxation of the Haskell scoping rules, another important piece of the jigsaw is some way to refer to fields that may belong to multiple datatypes. For example, we would like to be able to write a function that selects the personId field from any type that has such a field, rather than being restricted to a single datatype. Much of the unavoidable complexity of the previous OverloadedRecordFields design came from treating all record selectors in an overloaded way.

But since this is new functionality, it can use a new syntax, tentatively a prefix # sign (meaning that use of # as an operator will require a space afterwards when the extension is enabled). This means that it will be possible to write #personId for the overloaded selector function. Since we have a syntactic cue, it is easy to identify such overloaded uses of selector functions, without looking at the field names that are in scope.

Typeclasses and type families will be used to implement polymorphism over fields belonging to record types, though the details are beyond the scope of this blog post. For example, the following definition is polymorphic over all types r that have a personId :: Int field:

getId :: r { personId :: Int } => r -> Int
getId x = #personId x

Moreover, we are not limited to using #personId as a selector function. The same syntax can also be given additional interpretations, allowing overloaded updates and making it possible to produce lenses for fields without needing Template Haskell. In fact, the syntax is potentially useful for things that have nothing to do with records, so it will be available as a separate extension (implied by, but distinct from, OverloadedRecordFields).
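
To give a rough idea of how a constraint like the r { personId :: Int } above might be encoded, here is a hand-rolled sketch using an ordinary typeclass (the class, its method and the instance below are made up for illustration; the real extension generates this machinery automatically and the actual GHC design may differ):

{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}

import Data.Proxy (Proxy(..))
import GHC.TypeLits (Symbol)

-- hypothetical stand-in for the "r { personId :: Int }" constraint
class HasField (name :: Symbol) r a | name r -> a where
  getField :: Proxy name -> r -> a

data Person = Person { _personId :: Int, _name :: String }

instance HasField "personId" Person Int where
  getField _ = _personId

-- a use of #personId could then elaborate to roughly this
getId :: HasField "personId" r Int => r -> Int
getId = getField (Proxy :: Proxy "personId")

main :: IO ()
main = print (getId (Person 1 "Donald"))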

Further reading

More details of the redesigned extensions are available on the GHC wiki, along with implementation notes for GHC hackers. Last year, I gave a talk about the previous design which is still a good guide to how the types work under the hood, even though it predates the redesign.


Qualified Goals in the Cabal Solver

Friday, 27 March 2015, by Edsko de Vries.
Filed under cabal, coding, industrial-haskell-group.

When you ask cabal-install to install one or more packages, it needs to solve a constraint satisfaction problem: select a version for each of the packages you want to install, plus all their dependencies, such that all version constraints are satisfied. Those version constraints come both from the user (“please install lens version 4”) and from the packages themselves (“this package relies on mtl between version 2.1 and 2.2”). Internally cabal-install uses a modular constraint solver, written by Andres Löh. It was first presented at the Haskell Implementors’ Workshop in 2011, and is based on the paper Modular lazy search for Constraint Satisfaction Problems.

For the Industrial Haskell Group, Well-Typed has recently extended the constraint solver to deal with qualified goals. In this blog post we will explain how this constraint solver works by means of a running example, and look at how the new qualified goals change things.

Overview of the Solver

The ability to decouple generating solutions from finding the right one is one of the classic reasons why functional programming matters, and cabal-install’s constraint solver makes very heavy use of this. It first builds a tree with lots of “solutions”; solutions in quotes because many of these solutions will not be valid. It then validates these solutions, marking any invalid ones. This might still leave many possible solutions, so after this we apply some preferences (we prefer newer packages over older ones, for example) and heuristics (we want to pick the version of base that is already installed, no point considering others), and then we use the first solution we find (if any).

It is very important to realize throughout all this that these trees are never built in their entirety. We depend on laziness to only evaluate as much as necessary. A key design principle throughout the solver is that we must have enough information at each node in the tree to be able to make local decisions. Any step that would require looking at the tree in its entirety would be a big no-no.

In the remainder of this section we will see what these different steps look like using a running example.

Building the tree

Suppose we have a package database with base version 4.0, mtl versions 1.0 and 2.0, both of which depend on base, and a package foo that depends on both base and mtl version 2.0.

When we ask cabal to install package foo, it constructs a search tree that looks something like this:

GoalChoice nodes, shown as G nodes in the diagram, represent points where we decide on which package to solve for next. Initially we have only one option: we need to solve for package foo. Similarly, PChoice nodes P represent points where we decide on a package version. Since there is only one version of foo available, we again only have one choice.

Once we have chosen foo version 1.0, we need to solve for foo's dependencies: base and mtl. We don’t know which we should solve for first; the order in which we consider packages may affect how quickly we find a solution, and which solution we return (since we will eventually report the first solution we find). When we build the tree we essentially make an arbitrary decision (depending on which order we happen to find the dependencies), and we record the decision using a GoalChoice node. Later we can traverse the tree and apply local heuristics to these GoalChoice nodes (for instance, we might want to consider base before mtl).

In the remainder of the tree we then pick a version for mtl (here we do have a choice in version), and then a version for base, or the other way around. Note that when we build the tree, package constraints are not yet applied: in the tree so far there is nothing that reflects the fact that foo wants version 2.0 of mtl, and every path ends with a Done node D, indicating success. Indeed, we would get precisely the same tree if we have a package DB

where foo depends on either version of mtl.

Validation

Once we have built the tree, we then walk over the tree and verify package constraints. As soon as we detect a violation of a constraint on a given path we replace that node in the tree with a failure node. For the above example this gives us the following tree:

Paths through the tree that lead to failure are not removed from the tree, but are replaced by explicit failure F. This helps with generating a good error message if we fail to find a solution. In this case, both failures are essentially the same problem: we cannot pick version 1.0 for mtl because foo needs version 2.0.

Heuristics and Preferences

After validation we apply a number of heuristics to the tree. For example, we prefer to pick a version of base early because there is generally only one version of base available in the system. In addition, we apply user preferences; for example, we try newer versions of packages before older versions. For our example this gives us

Finding a solution

After applying the heuristics we throw away all but the first choice in each GoalChoice node (but keeping all choices in the PChoice nodes)

and traverse the tree depth first to find a solution, returning the first solution we find. In our running example, this means that we will use version 1.0 of foo, 4.0 of base and 2.0 of mtl.

Whenever we encounter a Fail node we backtrack. This backtracking is aided by so-called conflict sets. I haven’t shown these conflict sets in the diagrams, but each Fail node in the tree is annotated with a conflict set which records why this path ended in failure. In our running example the conflict set for both Fail nodes is the set {foo, mtl}, recording that there is a conflict between the versions of mtl and the version of foo that we picked. The conflict set is used to guide the backtracking; any choice that we encounter while backtracking that does not involve any variables in the conflict set does not need to be reconsidered, as it would lead to the same failure.
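
To make this concrete, here is a much-simplified sketch of such a search tree and of conflict-set-guided backtracking (the types and the traversal are hypothetical and far removed from the real cabal-install data structures):

type Package     = String
type Version     = Int
type ConflictSet = [Package]

data Tree =
    GoalChoice [(Package, Tree)]       -- which package do we solve for next?
  | PChoice Package [(Version, Tree)]  -- which version do we pick for it?
  | Done                               -- success
  | Fail ConflictSet                   -- failure, with the packages to blame

-- Depth-first search: return Right () on the first success, otherwise a
-- conflict set explaining the failure.  If a child's conflict set does not
-- mention the package we are currently choosing, trying its other versions
-- cannot help, so we backjump past them.
explore :: Tree -> Either ConflictSet ()
explore Done               = Right ()
explore (Fail cs)          = Left cs
explore (GoalChoice alts)  =
  case alts of
    []           -> Right ()           -- no goals left in this sketch means we are done
    ((_, t) : _) -> explore t          -- after the heuristics only the first goal remains
explore (PChoice pkg alts) = go [] alts
  where
    go acc []               = Left acc
    go acc ((_v, t) : rest) =
      case explore t of
        Right ()             -> Right ()
        Left cs
          | pkg `notElem` cs -> Left cs                  -- backjump: other versions cannot help
          | otherwise        -> go (acc ++ cs) rest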

If we cannot find any solution, then we must report an error. Reporting a good error here however is difficult: after all, we have a potentially very large tree, with lots of different kinds of failures. Constructing an informative error from this is difficult, and this is one area where cabal-install might still be improved.

Qualified goals

Motivation

Normally cabal-install can only pick a single version for each package. For example, if we have a situation

we cannot install package D because it would require installing both versions 1.0 and 2.0 of package A (this is known as the diamond problem).

Setup scripts

Cabal packages can however have their own custom Setup scripts, when necessary, which are Cabal’s equivalent of the traditional ./configure scripts. In principle there should be no problem building these Setup scripts using different (and possibly conflicting) dependencies than the library itself; after all, the Setup script is completely independent from the library.

From Cabal 1.23 and up these setup scripts can have their own list of dependencies. Let’s suppose that in our running example the Setup script of foo has a dependency on any version of mtl:

We want to allow the setup script to be compiled against a different version of mtl than foo itself, but of course we would prefer not to, in order to avoid unnecessary additional compilation time.

Qualification

In order to allow picking a different version, we introduce qualified goals for each of the setup dependencies. In our running example, this means that cabal-install will now try to solve for the variables foo, mtl, and base, as well as foo.setup.mtl and foo.setup.base. This makes it possible to pick one version for mtl and another for foo.setup.mtl.

Linking

But how do we make sure that we pick the same version when possible? One (non-) option is to look at the entire search tree and find the solution that installs the smallest number of packages. While that might work in theory, it violates the earlier design principle we started with: we only ever want to make local decisions. The search tree can be quite large; indeed, the addition of the single setup dependency to foo already makes the tree much larger, as we shall see shortly. We certainly never want to inspect the entire search tree (or, worse yet, have the entire search tree in memory).

Instead, we introduce the concept of linking. This means that when we select a version for foo.setup.mtl (say), in addition to being able to pick either version 1.0 or 2.0, we can also say “link the version of foo.setup.mtl to the version of mtl” (provided that we already picked a version for mtl).

Then we can make local decisions: when we pick a version, we prefer to link whenever possible. Of course, this is subject to certain constraints. In the remainder of this section we shall see how qualified goals and linking works using our running example, and identify some of these linking constraints.

Building the tree

The code to build the initial tree is modified to introduce qualified constraints for setup dependencies, but does not itself deal with linking. Instead, it builds the tree as normal, and we then add additional linking options into the tree as a separate phase.

The search tree for our running example gets much bigger now due to combinational explosion: we have two additional variables to solve for, and linking means we have more choices for package versions. Here’s part of the initial search tree:

Let’s follow along the spine of this tree to see what’s going on. We first consider foo and pick version 1.0, just like before (there is no other choice). Then, on this path, we first consider foo.setup.mtl, and we have two options: we can either pick version 1.0 or version 2.0. We pick version 1.0, and continue with foo.setup.base and pick version 4.0 (only one option).

But when we now consider mtl things are more interesting: in addition to picking versions 1.0 and 2.0, we can also decide to link the version of mtl against the version of foo.setup.mtl (indicated by a red label in the tree). Similarly, when we pick a version for base, we can now choose to link base against the version of foo.setup.base in addition to picking version 4.0.

Validation

When we link mtl against foo.setup.mtl, we are really saying “please use the exact same package instance for both mtl and foo.setup.mtl”. This means that the dependencies of mtl must also be linked against the dependencies of foo.setup.mtl.

In addition, ghc enforces a so-called single instance restriction. This means that (in a single package database) we can only have one instance of a particular package version. So, for example, we can have both mtl version 1.0 and mtl version 2.0 installed in the same package database, but we cannot have two instances of mtl version 2.0 (for instance, one linked against version 3.0 of transformers and one linked against version 4.0 of transformers) installed at the same time. Lifting this restriction is an important step towards solving Cabal Hell, but for now we have to enforce it. In our terminology, this means that when we have to solve for both (say) mtl and foo.setup.mtl, we can either pick two different versions, or we can link one to the other, but we cannot pick the same version for both goals.

So, in addition to the regular validation phase which verifies package constraints, we introduce a second validation phase that verifies these kinds of “linking constraints”. We end up with a tree such as

In this part of the tree, the two failures for mtl are because we picked version 1.0 for foo.setup.mtl, but since foo itself wants mtl version 2.0, we cannot pick version 1.0 for goal mtl nor can we link mtl to foo.setup.mtl. The two failures for base are due to the single instance restriction: since we picked version 4.0 for foo.setup.base, we must link base to foo.setup.base.

Heuristics

If we picked the first solution we found in the tree above, we would select version 1.0 of mtl for foo’s setup script and version 2.0 of mtl for foo itself. While that is not wrong per se, it means we do more work than necessary. So, we add an additional heuristic that says that we should consider setup dependencies after regular (library) dependencies. After we apply this heuristic (as well as all the other heuristics) we end up with

In this part of the tree we see one failure for foo.setup.mtl and two failures for foo.setup.base. The failure for foo.setup.mtl comes from the single instance restriction again: since we picked version 2.0 for mtl, we cannot pick an independent instance for foo.setup.mtl. The failure for foo.setup.base on the right is due to the same reason, but there is an additional reason for the left failure: since we chose to link foo.setup.mtl to mtl, its dependencies (in this case, foo.setup.base) must also be linked.

Finding a solution

As before, after applying the heuristics we prune

and we report the first solution we find. In this case, this means that we will pick version 2.0 for mtl and link foo.setup.mtl to mtl, and foo.setup.base to base.

Conclusions

Although we skimmed over many details, we covered the most important design principles behind the solver and the new implementation of qualified goals. One thing we did not talk about in this blog post is flag assignments; when the solver needs to decide on the value for a flag, it introduces an FChoice node into the tree with two subtrees for true and false, and then proceeds as normal. When we link package P to package Q, we must then also verify that their flag assignments match.

Qualified goals and linking are now used for setup dependencies, but could also be used to deal with private dependencies, to split library dependencies from executable dependencies, to deal with the base-3 shim package, and possibly other purposes. The modular design of the solver means that such features can be added (mostly) as independent units of code. That doesn’t of course necessarily mean the code is also easy; making sure that all decisions remain local can be a subtle problem. Hopefully this blog post will make the solver code easier to understand and to contribute to.


Monads: From Web 2.0 to Hardware Drivers

Tuesday, 03 February 2015, by Edsko de Vries.
Filed under coding.

Monads are often considered to be a stumbling block for learning Haskell. Somehow they are thought of as scary and hard to understand. They are however nothing more than a design pattern, and a rather simple one at that. Monads give us a standardized way to glue a particular kind of computations together, passing the result of each computation to the next. As such they are useful well beyond functional programming.

JavaScript

My favourite example is the well-known “callback hell” in JavaScript. JavaScript functions often take a callback as argument, a function that they invoke on completion. Functions that take a callback argument are called asynchronous functions. The problem arises when we want to call another asynchronous function inside the callback; we end up with deeply nested callbacks.

Let’s consider an example. Suppose we want to collect configuration files for our application in the current directory, in the user’s home directory, and in the system-wide configuration directory. We might do something like

function getConfigFiles(callback) {
  function extractConfigFiles(filess) { ... }

  fs.readdir(".", function(hereErr, here) {
    fs.readdir(process.env.HOME, function(homeErr, home) {
      fs.readdir("/etc", function(etcErr, etc) {
        callback(extractConfigFiles([here, home, etc]));
      })
    })
  });
}

Since readdir is an asynchronous function, and we need to call it three times, we end up with this deeply nested structure. You can see how this might get out of hand. By contrast, in Haskell we would recognize that these kinds of asynchronous functions form a monad (the continuation monad, to be precise); after all, this fits the pattern: we want to glue asynchronous functions together, passing the result of each function to the next. In Haskell we would write the above code as

getConfigFiles = do
    here <- readdir "."
    home <- readdir homedir
    etc  <- readdir "/etc"
    return $ extractConfigFiles [here, home, etc]
  where
    extractConfigFiles = ...

This looks and feels like simple sequential code, but it isn’t; this code is precisely equivalent to the JavaScript example above it. Note that there are tons of attempts to address callback hell in JavaScript; many of which are in fact inspired by monads.
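
To make the connection explicit, callback-taking functions are the continuation monad specialised to callbacks in IO. Here is a minimal sketch of that monad (readdirCB below is a hypothetical callback-style wrapper, not a real library function):

newtype Async a = Async { runAsync :: (a -> IO ()) -> IO () }

instance Functor Async where
  fmap f (Async g) = Async (\k -> g (k . f))

instance Applicative Async where
  pure x = Async (\k -> k x)
  Async mf <*> Async mx = Async (\k -> mf (\f -> mx (k . f)))

instance Monad Async where
  Async mx >>= f = Async (\k -> mx (\x -> runAsync (f x) k))

-- Given a hypothetical readdirCB :: FilePath -> ([FilePath] -> IO ()) -> IO ()
-- the example above becomes:
--
--   getConfigFiles = do
--     here <- Async (readdirCB ".")
--     home <- Async (readdirCB "/home/me")
--     etc  <- Async (readdirCB "/etc")
--     pure (extractConfigFiles [here, home, etc])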

Ziria

Let’s now move from the mainstream and high level to the more esoteric and low level. Ziria is a domain specific language designed at Microsoft Research specifically for the development of Software-defined radios. Well-Typed have been working with Microsoft Research over the last few months to improve the Ziria compiler (itself written in Haskell), primarily the front end (internal code representation, parser, scope resolution and type checking) and the optimizer.

Ziria is a two-level language. The expression language is a fairly standard imperative language. For example, we can write a function that computes factorial as

fun fac(n : int) {
  var result : int := 1;
  while(n > 0) {
    result := result * n;
    n := n - 1;
  }
  return result
}

The expression language contains all the usual suspects (mutable variables, arrays, structs, control flow constructs, etc.) as well as good support for bit manipulation and complex numbers.

The more interesting part of the language however is the computation language. Ziria programs define stream computations: programs that transform an incoming stream of bits into an outgoing stream of bits. In particular, Ziria is designed so that we can write such stream computations in a high level way and yet end up with highly performant code; Ziria’s flagship example is an implementation of the WiFi 802.11a/g protocol that is able to operate in realtime.

The simplest way to turn our simple factorial computation into a stream transformer is to map our fac function:

let comp streamFac = map fac

But there are also other ways to build up stream computations. There are two fundamental primitive stream computations: take reads an element from the input stream, and emit writes an element to the output stream. Of course now we need a way to glue stream computations together; you guessed it, they form a monad. For example, here is a stream processor which creates an output stream such that each element of the output stream is the sum of all preceding elements of the input stream, starting with some initial value init:

fun comp sum(init : int) {
  var total : int := init;
  repeat {
    x <- take;
    do { total := total + x }
    emit total;
  }
}
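
To make the claim that take and emit “form a monad” concrete, here is a minimal Haskell model of my own (a sketch, not Ziria’s actual implementation): a stream computation either takes an input, emits an output, or finishes with a result, and bind simply sequences computations, passing results along.

-- A stream computation reading elements of type i, writing elements of
-- type o, and eventually returning a value of type a.
data StreamComp i o a
  = Take (i -> StreamComp i o a)    -- read an element from the input stream
  | Emit o (StreamComp i o a)       -- write an element to the output stream
  | Done a                          -- terminate with a result

instance Functor (StreamComp i o) where
  fmap f (Take k)   = Take (fmap f . k)
  fmap f (Emit o k) = Emit o (fmap f k)
  fmap f (Done a)   = Done (f a)

instance Applicative (StreamComp i o) where
  pure = Done
  mf <*> mx = mf >>= \f -> fmap f mx

instance Monad (StreamComp i o) where
  Take k   >>= f = Take (\i -> k i >>= f)   -- keep waiting for input
  Emit o k >>= f = Emit o (k >>= f)         -- keep emitting output
  Done a   >>= f = f a                      -- pass the result to the next computation

-- The two primitives (take is primed to avoid clashing with Prelude.take):
take' :: StreamComp i o i
take' = Take Done

emit :: o -> StreamComp i o ()
emit o = Emit o (Done ())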

As a second example, consider this function which copies the first n elements from the input stream to the output stream and then terminates, returning the last element it wrote to the output stream:

fun comp prefix(n : int) {
  var last : int;
  times n {
    x <- take;
    do { last := x }
    emit x;
  }
  return last
}

This kind of monadic composition where the result of one stream computation is passed to another is known as “control path composition” in the Ziria literature. We also have “data path composition”, where the output stream of one processor becomes the input stream of another. For example, consider

let comp compositionExample = {
  var total : int := 0;
  repeat {
    newTotal <- sum(total) >>> prefix(5);
    do { total := newTotal - 1 }
  }
}

We use data path composition (>>>) to make the output stream of the sum stream transformer the input stream of the prefix stream computer. We then use control path composition to update a local variable with the last value that was written minus one, and we repeat. The effect is that we sum the input stream, but decrement the running total by one every 5 elements. So, given an input stream

1,2,3,4,5,6,7,8,9,10

we output

1,3,6,10,15,20,27,35,44,54

Ziria also offers a variant operator (|>>>|) which makes sure that the computations performed by the two stream processors happen in parallel.

A Haskell perspective. Stream computers can be compared to Haskell pipes, where a stream computer is something of type Pipe a b IO, with an input stream of type a and an output stream of type b; do is the equivalent of liftIO. Control path composition corresponds to monadic or “horizontal” composition, while data path composition corresponds to vertical composition.
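
To make the comparison concrete, here is a rough sketch of my own of what the sum computation might look like using the pipes package (an assumption-laden analogy, not code from the post):

import Control.Monad (forever)
import Control.Monad.IO.Class (liftIO)
import Data.IORef (newIORef, modifyIORef', readIORef)
import Pipes (Pipe, await, yield)

-- Analogue of the Ziria 'sum' computation: keep a running total,
-- read an element, update the total, and emit the new total, forever.
sumPipe :: Int -> Pipe Int Int IO r
sumPipe initial = do
    total <- liftIO (newIORef initial)
    forever $ do
      x <- await                          -- take
      liftIO (modifyIORef' total (+ x))   -- do { total := total + x }
      liftIO (readIORef total) >>= yield  -- emit total

Control path composition is then ordinary do-notation in the Pipe monad, while data path composition roughly corresponds to the >-> operator from pipes.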

Optimization

Monads come with laws: properties that must hold true for all monadic code. The Ziria compiler takes advantage of these laws in the optimizer. For example, suppose you write

x <- take
y <- return (x + 1)
emit y

At this point the so-called “left identity” law kicks in, and the compiler rewrites this to

x <- take
let y = x + 1
emit y

which will then subsequently be cleaned up by the inliner. Other optimizations applied by the Ziria compiler include loop unrolling, partial evaluation, etc. It also uses a simple SMT solver to remove unnecessary conditionals, following the approach we described in a previous blogpost.
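
For reference, the left identity law that justifies this rewrite states that binding the result of return is the same as applying the rest of the computation to the value directly:

return x >>= f   ≡   f x

In do-notation: do { y <- return e; k y } is the same as k e. The compiler expresses the substitution with a let binding rather than substituting e directly, and leaves the rest to the inliner.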

One optimization deserves special mention, although it’s not related to monads per se. The vectorization optimization turns a stream computer that takes single elements from the input stream and outputs single elements to the output stream into one that takes arrays of elements from the input stream and outputs arrays of elements to the output stream. The resulting code can then be further optimized to operate on multiple elements simultaneously, and the overhead involved in reading and writing to stream buffers is reduced.

For example, consider the following Ziria program:

fun sumArray(xs : arr int) {
  var total : int := 0;
  for i in [0, length(xs)] {
    total := total + xs[i];
  }
  return total;
}

let comp sum4 = {
  repeat {
    xs <- takes 4;
    emit sumArray(xs);
  }
}

let comp stutter = {
  repeat {
    x <- take;
    emit x;
    emit x;
  }
}

let comp stutterSum = sum4 >>> stutter

Computation sum4 takes 4 elements from the input stream, sums them up, and emits the result; we say that the cardinality of sum4 is 4:1. Computation stutter writes every element in the input stream twice to the output stream; we say that its cardinality is 1:2. The cardinality of the composition stutterSum is therefore 2:1. The optimizer turns this program into the following (cleaned up for readability only):

fun sum4_vect(xs : arr[288] int) {
  var ys : arr[72] int
  for i in [0, 72] {
    let xs : arr[4] int = xs[i*4:+4]
    var total : int
    total := xs[0];
    total := total+xs[1];
    total := total+xs[2];
    total := total+xs[3];
    ys[i] := total
  };
  return ys
}

fun stutter_vect(xs : arr[72] int) {
  var ys : arr[144] int
  for i in [0, 72] {
    ys[i*2]   := xs[i];
    ys[i*2+1] := xs[i]
  };
  return ys
}

let comp stutterSum = map sum4_vect >>> map stutter_vect

The optimizer has done an excellent job here. Both sum4 and stutter have become expressions, rather than computations, that are passed as arguments to map, which can be optimized better in the code generator. sum4_vect now takes arrays of 288 elements and returns arrays of 72 elements (4:1), while stutter_vect takes arrays of 72 elements and returns arrays of 144 elements (2:1); the inner loop in stutter has also been unrolled.

This ability of the Ziria compiler to optimize the code for different pipeline widths is one of the reasons that it is possible to write software-defined radios in Ziria in a high-level manner; with other approaches such as Sora this kind of optimization had to be done by hand. The Ziria compiler also does a number of other optimizations; for a more detailed discussion, see Ziria: A DSL for wireless systems programming.

Conclusion

Monads aren’t an obscure concept that has been invented just to work around peculiarities of Haskell. They are a very general and universal design principle with applications everywhere. The concept of monads has been found useful in JavaScript, C++, Java, Scala, C#, Ruby, Rust, Go, and many other languages. Recognizing the monad pattern when it arises in your code can lead to nicer, more readable and easier to maintain designs, and recognizing the monad pattern in language design helps programmers do exactly that. In Ziria, monads turn out to be precisely the right abstraction: they make it possible to have a language in which we can write software-defined radios at a very high level of abstraction, and which can nevertheless be compiled down to highly efficient code.


How we might abolish Cabal Hell, part 2

Wednesday, 28 January 2015, by Duncan Coutts.
Filed under cabal, community.

In part 1 we looked at an overview of all the various problems that make up “Cabal hell”. We also looked at an overview of a few solutions and how they overlap.

In part 2 and part 3 we’ll look in more detail at the two major solutions to Cabal hell. In this post we’ll look at Nix-style package management, and in the next post we’ll look at curated package collections.

A reminder about what the problems are

You might recall from part 1 this slightly weird diagram of the symptoms of Cabal Hell:

Cabal Hell: the symptoms 

In this part we’re going to pick out a couple of the problems and look at them in a bit more detail:

Breaking re-installations

There are situations where Cabal’s chosen solution would involve reinstalling an existing version of a package but built with different dependencies.

For example, suppose I have an application app and it depends on zlib and bytestring. The zlib package also depends on bytestring. Initially the versions I am using for my app-1.0 are zlib-0.5 and bytestring-0.9 (versions simplified for presentation). Now I decide in the next version of my app, app-1.1, that I want to use a nice new feature in bytestring version 0.10. So I ask cabal to build my application using a more recent bytestring. The cabal tool will come up with a solution using the new bytestring-0.10 and the same old version of zlib-0.5. Building this solution will involve installing bytestring-0.10 and rebuilding zlib-0.5 against it.

Example breakage 1 

What is the problem here?

The problem is the rebuilding of zlib-0.5.

Why is this a problem?

It is a problem because when we install the instance “zlib-0.5 built against bytestring-0.10” we have to delete the pre-existing instance “zlib-0.5 built against bytestring-0.9”. Anything that depended on that previous instance now has a dangling reference and so is effectively broken.

Why do we have to delete the previous instance?

The way installed packages are managed is such that each GHC package database can only have one instance of each package version. Note that having two different versions would be allowed, e.g. zlib-0.4 and zlib-0.5, but having two instances of zlib-0.5 is not allowed. Not deleting the previous instance would involve having two instances of zlib-0.5 (each instance built against different versions of bytestring).

So the root of the problem is having to delete – or if you like, mutate – package instances when installing new packages. Mutable state strikes again! And this is due to the limitation of only being able to have one instance of a package version installed at once.

Type errors when using packages together

The second, orthogonal, problem is that it is possible to install two packages and then load them both in GHCi and find that you get type errors when composing things defined in the two different packages. Effectively you cannot use these two particular installed packages together.

The reason is that the two packages have been built using different versions of some common dependency. For example, I might have zlib built against bytestring-0.9 and binary built against bytestring-0.10. Now if I try to use them together in GHCi (e.g. compressing the result of binary serialisation) then I will get a type error from GHC saying that bytestring-0.9:Data.ByteString.ByteString is not the same type as bytestring-0.10:Data.ByteString.ByteString. And GHC is not wrong here: we really cannot pretend that these are the same types (at least not in general).

Example breakage 2 
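
To make this concrete, here is a small example of my own (the modules and functions are from the real zlib and binary packages, but the broken-environment scenario is hypothetical): compressing the result of binary serialisation.

import qualified Data.ByteString.Lazy as BSL
import Codec.Compression.Zlib (compress)   -- from the zlib package
import Data.Binary (encode)                -- from the binary package

-- Serialise a value and compress the result.
example :: BSL.ByteString
example = compress (encode "some value")

In a consistent environment this compiles, because encode produces and compress consumes the same lazy ByteString type. If zlib and binary had been built against different versions of bytestring, GHC would reject the definition of example with exactly the sort of type error described above.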

This is a variant on the classic “diamond dependency problem” that cabal solved years ago. So why do we still get it? In fact we never hit this problem when building a package with cabal, because cabal’s solver looks for “consistent” solutions. (Recall from part 1, that when we say a “consistent” solution we mean one that uses only one version of any package.)

We can still hit this problem when we install things separately and then use the packages directly with ghc or ghci. This is because cabal does not enforce consistency in the developer’s environment. It only enforces consistency within any set of packages it installs simultaneously.

The fundamental problem is that developers expect to be able to use combinations of their installed packages together, but the package tools do not enforce consistency of the developer’s environment.

In practice this class of problem is currently relatively rare. In part that is because one would often hit the problem above involving re-installing and breaking packages. If however we lifted the limitation on installing multiple instances of the same version of a package, then we would always be able to install new versions and we would instead hit this problem much more frequently.

Nix-style package management

The ideas behind Nix are now over 10 years old. When first reading the published papers on Nix some years ago I was struck by how simple and elegant they are. They also appear to work well in practice, with a full Linux distribution based on them, plus a number of other well-regarded tools.

The key ideas are:

  1. A persistent package store: installed packages are immutable, and installing a new package only ever adds to the store; nothing is overwritten or deleted.

  2. Views, or environments: a view is a subset of the packages in the store that makes up the working environment. Many environments can exist side by side, and they are cheap to create and to switch between.

Contrast this to a traditional approach, e.g. what we have now with Cabal, or Linux distros based on .rpm or .deb. In the traditional approach there is a single environment which is exactly the same as the full set of installed packages. So compared to a traditional approach, we have an extra level of indirection. Having views lets us have a much larger collection of packages installed than we have available in the working environment, and allows multiple environments.

A good illustration of what these ideas give us is to see what happens when we want to add a new package:

  1. We start with our initial environment which points to a bunch of packages from the store.

  2. We compile and install a new package into the store. So far this changes very little, which is a big feature! In particular it cannot break anything. The new installed package can co-exist with all the existing ones. There can be no conflicts (the install paths are arranged to guarantee this). No existing packages have been modified or broken. No environments have yet been changed.

  3. Now we have choices about what to do with environments. Our existing environment is unchanged and does not contain the new package. We can create a new environment that consists of the old environment with the extra package added. In principle both the old and the new environments exist. This is very much like a persistent functional data structure:

    let env' = Map.insert pkgname pkg env

    Both exist, the old one is not changed, and we can decide if we are only interested in the new one or if we want to keep both.

  4. Finally we can, if we want, switch our “current” environment to be the new environment with the new package. So while multiple environments can exist, only one of them is active in our current shell session.

So what have we gained from the added indirection of a persistent store + views?

The multiple environments effectively give us sandboxes for free. In fact it’s better, because we can easily share artefacts between sandboxes when we want to. That means far fewer rebuilds, and easy global installation of things built in an isolated environment.

Nix has a few other good ideas, like its functional package description language and some clever tricks for dealing with the messy details of system packages. The key ideas however are the ones I’ve just outlined, and they are the ones that we should steal for GHC/Cabal.

Mechanisms, policies and workflows

The package store and the multiple environments are just a mechanism, not a user interface. The mechanisms are also mostly policy free.

Generally we should start from user requirements, use cases, workflows and the like, and work out a user interface and then decide on mechanisms that will support that user interface. That said, I think it is clear that the Nix mechanisms are sound and sufficiently general and flexible that they will cover pretty much any user interface we decide we want.

So I think our design process should be:

  1. Look at the packaging tool requirements, use cases, workflows etc, and work out a user interface design.

  2. Then figure out how the actions in that user interface translate into operations on the Nix package store and environments.

Addressing the Cabal Hell problems

The Nix approach deals nicely with the problem of breaking re-installations. The Nix mechanisms guarantee that installed packages are never mutated, so no existing installed packages ever break.

The Nix mechanisms do not deal directly with the issue of type errors when using packages together. As we noted before, that requires enforcing the consistency of the developer’s environment. In Nix terms this is a policy about how we manage the Nix environment(s). The policy would be that each environment contains only one version of each package and it would guarantee that all packages in an environment can be used together.

Without wishing to prejudge the future user interface for cabal, I think this is a policy that we should adopt.

Enforcing consistency does have implications for the user interface. There will be situations where one wants to install a new package, but it is impossible to add it to the current environment while keeping all of the existing packages. For example, suppose we have two different web stacks that have many packages in common but that require different versions of some common package. In that case we could not have a consistent environment that contains both. Thus the user interface will have to do something when the user asks to add the second web stack to an environment that already contains the first. The user interface could minimise the problem by encouraging a style of use where most environments are quite small, but it cannot be avoided in general.

While what I am suggesting for consistency is relatively strong, we cannot get away without enforcing some restrictions on the environment. For example if our environment did contain two instances of the same version of a package then which one would we get when we launch GHCi? So my view is that given that we cannot avoid the user interface issues with environment consistency, it is better to go for the stronger and more useful form.

In fact we’ve already been experimenting in this direction. The current cabal sandbox feature does enforce consistency within each sandbox. This seems to work ok in practice because each sandbox environment is relatively small and focused on one project. Interestingly, we have had some pressure to relax this constraint due to the cost, in compilation time, of creating new sandboxes. (Allowing some inconsistencies in the environment allows the common packages to be shared and thus only compiled once.) Fortunately this issue is one that is nicely solved by Nix environments, which are extremely cheap to create because they allow sharing of installed packages.

Implementation progress

We’ve been making small steps towards the Nix design for many years now. Several years ago we changed GHC’s package database to use the long opaque package identifiers that are necessary to distinguish package instances.

More recently Philipp Schuster did a GSoC project looking into the details of what we need to do to incorporate the Nix ideas within GHC and Cabal. You can see the slides and video of his HiW presentation. We learned a lot, including that there’s quite a lot of work left to do.

Last year Edward Yang and Simon PJ (with advice from Simon Marlow and myself) started working on implementing the “Backpack” package system idea within GHC and Cabal. Backpack also needs to be able to manage multiple instances of packages with the same version (but different deps) and so it overlaps quite a lot with what we need for Nix-style package management in GHC. So Edward’s work has dealt with many of the issues in GHC that will be needed for Nix-style package management.

Another small step is that in GHC 7.10 we finally have the ability to register multiple instances of the same version of a package, and we have the basic mechanism in GHC to support multiple cheap environments (using environment files). Both of these new GHC features are opt-in so that they do not break existing tools.

The remaining work is primarily in the cabal tool. In particular we have to think carefully about the new user interface and how it maps into the Nix mechanisms.

So there has been a lot of progress and if we can keep making small useful steps then I think we can get there. Of course it would help to focus development efforts on it, perhaps with a hackathon or two.

Conclusion

Implementing the Nix approach in GHC/Cabal would cover a major part of the Cabal Hell problems.

In the next post we’ll look at curated package collections, which solves a different (but slightly overlapping) set of Cabal Hell problems. Nix-style package management and curated package collections are mostly complementary and we want both.


Simple SMT solver for use in an optimizing compiler

Wednesday, 31 December 2014, by Edsko de Vries.
Filed under coding.

This is the second blog post in a series about engineering optimizing compilers; the previous was Quasi-quoting DSLs for free. In this blog post we will see how we might write a very simple SMT solver that will allow us to write an optimization pass that can turn something like

if a == 0 then
  if !(a == 0) && b == 1 then
    write 1
  else
    write 2
else
  write 3

into

if a == 0 then
  write 2
else
  write 3

without much effort at all.

Preliminaries

For the sake of this blog post we will consider a very simple imperative object language, defined as

type VarName = String         -- variable names are just strings

data Expr =
    -- Arithmetic expressions
    ExprInt Integer           -- ^ Integer constants
  | ExprAdd Expr Expr         -- ^ Addition
    -- Boolean expressions
  | ExprBool Bool             -- ^ Boolean constants
  | ExprEq Expr Expr          -- ^ Equality
  | ExprNot Expr              -- ^ Negation
  | ExprAnd Expr Expr         -- ^ Logical conjunction
  | ExprOr Expr Expr          -- ^ Logical disjunction
    -- Variables
  | ExprVar VarName           -- ^ Read from a variable
  | ExprAssign VarName Expr   -- ^ Write to a variable
    -- Control flow
  | ExprSeq Expr Expr         -- ^ Sequential composition
  | ExprIf Expr Expr Expr     -- ^ Conditional
    -- Input/output
  | ExprRead                  -- ^ Read an integer from the console
  | ExprWrite Expr            -- ^ Write an integer to the console
  deriving (Eq, Show)         -- Eq is needed later, when we record guesses per expression

We will assume the existence of a quasi-quoter for this language so that we can write Haskell fragments such as

[expr| if a == 0 then read else write b |]

instead of

ExprIf (ExprEq (ExprVar "a") (ExprInt 0))
       ExprRead
       (ExprWrite (ExprVar "b"))

How you can write such a quasi-quoter was the topic of the previous blog post. You should however be able to read this blog post without having read the previous post; hopefully the mapping from the concrete syntax to the constructors of Expr is pretty obvious.

Simplifying assumption

Our goal will be to write a function

provable :: Expr -> Bool

Let’s consider some examples. We certainly want provable [expr| True |] to return True. On the other hand, provable [expr| a == 0 |] should return False, since the expression is not true for all values of a; but !(a == 0) is not true for all values of a either, so provable [expr| !(a == 0) |] should return False as well.

Note that this means that it’s perfectly possible for both an expression and the negation of that expression to be unprovable.

What about an expression !(n == 0 && n == 1)? Morally speaking, this expression should be provable. However, we will be making the following very important simplifying assumption:

provable is allowed to give up: when provable returns False, this should be interpreted as “failed to prove”, rather than “there exists a counterexample”.

From a compiler perspective, if something is not statically provable, that simply means that an optimization may not be applied even though it could: that is, we miss an opportunity to make the program go faster, but we will not break the code.

An evaluator

We don’t want (or need) to embed a full blown theorem prover into our compiler: ideally we write something simple that will still catch a lot of the common cases. Moreover, we would prefer to reuse as much of our existing compiler infrastructure as we can. Our optimizer is likely to contain an interpreter for the object language, so that we can attempt to statically evaluate expressions. We are going to adapt this interpreter so that we can also use it in our solver. In fact, as we shall see, the solver will be a one-liner.

But we’re getting ahead of ourselves. Let’s consider how to write the interpreter first. The interpreter will be running in an Eval monad; for our first attempt, let’s define this monad as a simple wrapper around the list monad:

newtype Eval a = Eval { unEval :: [a] }
  deriving ( Functor
           , Applicative
           , Alternative
           , Monad
           , MonadPlus
           )

runEval :: Eval a -> [a]
runEval act = unEval act

We will use the monad for failure:

throwError :: Eval a
throwError = Eval []

We can provide error recovery through

onError :: Eval a -> Eval a -> Eval a
onError (Eval act) (Eval handler) = Eval $
    case act of
      [] -> handler
      rs -> rs

We will see why we need the list monad when we discuss the evaluator for boolean expressions; but let’s consider the evaluator for integer expressions first:

evalInt :: Expr -> Eval Integer
evalInt = go
  where
    go (ExprInt i)    = return i
    go (ExprAdd a b)  = (+) <$> evalInt a <*> evalInt b
    go (ExprIf c a b) = do cond <- evalBool c
                           evalInt (if cond then a else b)
    go _              = throwError 

Hopefully this should be pretty self explanatory; our toy language only has a few integer-valued expressions, so there isn’t much to do. The interpreter for boolean expressions is more interesting:

evalBool :: Expr -> Eval Bool
evalBool = \e -> go e `onError` guess e
  where
    go (ExprBool a)   = return a
    go (ExprEq   a b) = (==) <$> evalInt a <*> evalInt b
    go (ExprNot  a)   = not  <$> evalBool a
    go (ExprAnd  a b) = (&&) <$> evalBool a <*> evalBool b
    go (ExprOr   a b) = (||) <$> evalBool a <*> evalBool b
    go (ExprIf c a b) = do cond <- evalBool c
                           evalBool (if cond then a else b)
    go _              = throwError 

    guess _e = return True <|> return False

The definition of go contains no surprises, and follows the definition of go in evalInt very closely. However, the top-level definition

evalBool = \e -> go e `onError` guess e

is more interesting. If for some reason we fail to evaluate a boolean expression (for example, because it contains a variable) then guess returns both True and False. Let’s consider some examples:

runEval $ evalBool [expr| True |]

evaluates to [True];

runEval $ evalBool [expr| a |]

evaluates to [True, False] because we don’t know what value a has, but

runEval $ evalBool [expr| a || True |]

evaluates to [True, True]: we still have two guesses for a, but no matter what we guess a || True always evaluates to True.

Satisfiability

We can now write our SMT solver; as promised, it will be a single line:

satisfiable :: Expr -> Bool
satisfiable = or . runEval . evalBool

Function satisfiable (the “S” in SMT) checks if the expression is true for some values of the variables in the expression. How do we check this? Well, we just run our evaluator which, when it encounters a variable, will return both values for the variable. Hence, if any of the values returned by the evaluator is True, then the expression is true for at least one assignment of values to its variables.
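
For example, using the evaluator results from the previous section, we would expect:

satisfiable [expr| False |]       -- False: evalBool returns [False]
satisfiable [expr| a |]           -- True:  evalBool returns [True,False]
satisfiable [expr| a || True |]   -- True:  evalBool returns [True,True]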

Once we have an implementation of satisfiability, we can implement provable very easily: an expression is provable if its negation is not satisfiable:

provable :: Expr -> Bool
provable = not . satisfiable . ExprNot

If we consider the three examples from the previous section, we will find that both True and a || True are provable, but a by itself is not, as expected.

Inconsistent guesses

The careful reader might at this point find his brow furrowed, because something is amiss:

runEval $ evalBool [expr| a || !a |]

evaluates to

[True, True, False, True]

This happens because the evaluator will make two separate independent guesses about the value of a. As a consequence, a || !a will be considered not provable.

Is this a bug? No, it’s not. Recall that we made the simplifying assumption that provable is allowed to give up: it’s allowed to say that an expression is not provable even when it morally is. The corresponding (dual) simplifying assumption for satisfiability, and hence the interpreter, is:

The interpreter, and hence satisfiability, is allowed to make inconsistent guesses.

Making an inconsistent guess is equivalent to assuming False: anything follows from False and hence any expression will be considered satisfiable once we make an inconsistent guess. As a consequence, this means that once we make inconsistent guesses, we will consider the expression as not provable.

More precision

We can however do better: whenever we make a guess that a particular expression evaluates to True or False, then if we see that same expression again we can safely make the same guess, rather than making an independent guess. To implement this, we extend our Eval monad with some state to remember the guesses we made so far:

newtype Eval a = Eval { unEval :: StateT [(Expr, Bool)] [] a }
  deriving ( Functor
           , Applicative
           , Alternative
           , Monad
           , MonadPlus
           , MonadState [(Expr, Bool)]
           )
           
runEval :: Eval a -> [a]
runEval act = evalStateT (unEval act) []

throwError :: Eval a
throwError = Eval $ StateT $ \_st -> []

onError :: Eval a -> Eval a -> Eval a
onError (Eval act) (Eval handler) = Eval $ StateT $ \st ->
    case runStateT act st of
      [] -> runStateT handler st
      rs -> rs

The implementation of evalInt does not change at all. The implementation of evalBool also stays almost the same; the only change is the definition of guess:

guess e = do
  st <- get
  case lookup e st of
    Just b  -> return b
    Nothing -> (do put ((e, True)  : st) ; return True)
           <|> (do put ((e, False) : st) ; return False)

This implements the logic we just described: when we guess the value for an expression e, we first look up if we already made a guess for this expression. If so, we return the previous guess; otherwise, we make a guess and record that guess.

Once we make this change

runEval $ evalBool [expr| a || !a |]

will evaluate to [True,True] and consequently a || !a will be considered provable.

Example: folding nested conditionals

As an example of how one might use this infrastructure, we will consider a simple pass that simplifies nested conditionals (if-statements). We can use provable to check if one expression implies the other:

(~>) :: Expr -> Expr -> Bool
(~>) a b = provable [expr| ! $a || $b |]
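
For example, with the quasi-quoter in scope we would expect (these are my own examples, not from the post):

[expr| a == 0 |] ~> [expr| a == 0 || b == 1 |]   -- True:  the implication is provable
[expr| a == 0 || b == 1 |] ~> [expr| a == 0 |]   -- False: not provable (b == 1 might hold while a == 0 does not)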

The simplifier is now very easy to write:

simplifyNestedIf :: Expr -> Expr
simplifyNestedIf = everywhere (mkT go)
  where
    go [expr| if $c1 then
                 if $c2 then
                   $e1
                 else
                   $e2
               else
                 $e3 |]
      | c1 ~> c2              = [expr| if $c1 then $e1 else $e3 |]
      | c1 ~> [expr| ! $c2 |] = [expr| if $c1 then $e2 else $e3 |]
    go e = e

The auxiliary function go pattern matches against any if-statement that has another if-statement as its “then” branch. If we can prove that the condition of the outer if-statement implies the condition of the inner if-statement, we can collapse the inner if-statement to just its “then” branch. Similarly, if we can prove that the condition of the outer if-statement implies the negation of the condition of the inner if-statement, we can collapse the inner if-statement to just its “else” branch. In all other cases, we return the expression unchanged. Finally, we use the Scrap Your Boilerplate operators everywhere and mkT to apply this transformation everywhere in an expression (rather than just applying it top-level). For example,

simplifyNestedIf [expr| 
    if a == 0 then 
      if !(a == 0) && b == 1 then 
        write 1 
      else 
        write 2 
    else 
      write 3 
  |]

evaluates to

if a == 0 then
  write 2
else
  write 3

as promised. Incidentally, perhaps you are wondering why such an optimization would be useful; after all, which self-respecting programmer writes such code? However, code like the above may result from other compiler optimizations such as inlining: imagine that the inner if-statement came from an inlined function. A lot of compiler optimizations are designed to clean up other compiler optimizations.

Conclusion

We can implement a very simple but useful SMT solver for use in an optimizing compiler by making a small change to the interpreter, which we need anyway.

Note that the increase in precision gained from recording previous guesses is only a partial one; satisfiability may still make some inconsistent guesses. For example

runEval $ evalBool [expr| !(n == 0 && n == 1) |]

will evaluate to

[False,True,True,True]

because it is making independent guesses about n == 0 and n == 1; consequently !(n == 0 && n == 1) will not be considered provable. We can increase the precision of our solver by making guess smarter (the “MT” or “modulo theories” part of SMT). For example, we could record some information about the guesses about integer valued variables.

At the extreme end of the scale we would be implementing a full decision procedure for first order arithmetic, which is probably optimization gone too far. However, the approach we took above where we start from the basis that we are allowed to make inconsistent guesses gives us a way to implement a simple solver that addresses the most important cases, even if it misses a few—without requiring a lot of complicated infrastructure in the compiler. And as long as we make sure that our evaluator never returns too few values (even if it may return too many) we don’t have to worry that we might generate invalid code: the worst that can happen is that we miss an optimization.


Compose conference and New York City Haskell courses

Friday, 05 December 2014, by Andres Löh.
Filed under community, training, well-typed.

Well-Typed is happy to announce that we are sponsoring

C◦mp◦se conference

Friday, January 30 – Sunday, February 1, 2015, New York City

This conference is focused on functional programming and features a keynote by Stephanie Weirich on dependent types as well as invited talks by Anthony Cowley, Maxime Ransan and Don Syme, plus a whole lot of additional contributed talks. There’s also an “unconference” with small workshops and tutorials as well as the opportunity to get your hands dirty and try things out yourself.

For several years now, we have been running successful Haskell courses in collaboration with Skills Matter. For the C◦mp◦se conference, we have decided to bring these courses to New York! You can participate in our Haskell courses directly before or directly after the conference (or both):

Fast Track to Haskell

Thursday, January 29 – Friday, January 30, 2015, New York City

(Don’t worry, there’s no overlap with Stephanie Weirich’s keynote on Friday evening that marks the start of C◦mp◦se.)

You can register here.

This course is for developers who want to learn about functional programming in general or Haskell in particular. It introduces important concepts such as algebraic datatypes, pattern matching, type inference, polymorphism, higher-order functions, explicit effects and, of course, monads, and it provides a compact tour with lots of hands-on exercises that give a solid foundation for further adventures into Haskell or functional programming.

Advanced Haskell

Monday, February 2 – Tuesday, February 3, 2015, New York City

You can register here.

This course is for developers who have some experience in Haskell and want to know how to work on larger projects and how things scale. The course covers important topics such as selecting the right data structures for the task at hand from a functional perspective, and takes a thorough look at Haskell’s “lazy evaluation” and how to reason about the time and space performance of Haskell programs. There’s also some focus on how to use Haskell’s powerful abstraction mechanisms and concepts such as Applicative Functors, Monads and Monad Transformers to help you organize larger code bases. Finally, depending on time and demand, there’s the opportunity to look at Parallelism and Concurrency, or at type-level programming. Once again, the course comes with several carefully designed hands-on exercises and provides room for specific questions that the participants might have.

Both courses will be taught by Duncan Coutts, co-founder and partner at Well-Typed. He is both an experienced teacher and an active developer on many commercial Haskell projects at Well-Typed. He has seen a lot of Haskell code, and knows from experience which techniques and approaches work and which do not.

Well-Typed training courses

In general, our courses are very practical, but don’t shy away from theory where necessary. Our teachers are all active Haskell developers with not just training experience, but active development experience as well. In addition to these two courses in New York City, we regularly offer courses in London, and plan to offer courses in other European locations, too.

We also provide on-site training on request nearly anywhere in the world. If you want to know more about our training or have any feedback or questions, have a look at our dedicated training page or just drop us a mail.


Haskell development job with Well-Typed

Thursday, 27 November 2014, by Duncan Coutts, Andres Löh.
Filed under well-typed.

tl;dr If you’d like a job with us, send your application before Christmas.

We are looking for a Haskell expert to join our team at Well-Typed. This is a great opportunity for someone who is passionate about Haskell and who is keen to improve and promote Haskell in a professional context.

About Well-Typed

We are a team of top notch Haskell experts. Founded in 2008, we were the first company dedicated to promoting the mainstream commercial use of Haskell. To achieve this aim, we help companies that are using or moving to Haskell by providing a range of services including consulting, development, training, and support and improvement of the Haskell development tools. We work with a wide range of clients, from tiny startups to well known multinationals. We have established a track record of technical excellence and satisfied customers.

Our company has a strong engineering culture. All our managers and decision makers are themselves Haskell developers. Most of us have an academic background and we are not afraid to apply proper computer science to customers’ problems, particularly the fruits of FP and PL research.

We are a self-funded company so we are not beholden to external investors and can concentrate on the interests of our clients, our staff and the Haskell community.

About the job

We are looking for someone to join as a general member of our team, not for a single specific project or task. The work could cover any of the projects and activities that we are involved in as a company. The work may involve:

We try wherever possible to arrange tasks within our team to suit people’s preferences, and to rotate tasks to provide variety and interest.

Well-Typed has a variety of clients. For some we do proprietary Haskell development and consulting. For others, much of the work involves open-source development and cooperating with the rest of the Haskell community: the commercial, open-source and academic users.

Our ideal candidate has excellent knowledge of Haskell, whether from industry, academia or personal interest. Familiarity with other languages, low-level programming and good software engineering practices are also useful. Good organisation and the ability to manage your own time and reliably meet deadlines are important. You should also have good communication skills. Being interested or having experience in teaching Haskell (or other technical topics) is a bonus. Experience of consulting or running a business is also a bonus. You are likely to have a bachelor’s degree or higher in computer science or a related field, although this isn’t a requirement.

Offer details

The offer is initially for a one-year full time contract, with the intention of a long term arrangement. We are also happy to consider applications for part-time work. We are a distributed company so everyone works remotely. Living in England is not required. We offer flexible hours. We may be able to offer either employment or sub-contracting, depending on the jurisdiction in which you live. We operate a fair profit share scheme so the salary is variable but with a minimum guarantee. For full time annual equivalent (with the statutory minimum holidays) the guaranteed minimum is GBP 34.8k and based on past performance the expected level is approximately 40% higher.

If you are interested, please apply via info@well-typed.com. Tell us why you are interested and why you would be a good fit for the job, and attach your CV. Please also indicate how soon you might be able to start. We are more than happy to answer informal enquiries. Contact Duncan Coutts (duncan@well-typed.com, dcoutts on IRC) or Andres Löh (andres@well-typed.com, kosmikus on IRC) for further information.

To ensure we can properly consider your application, please get it to us by December 22nd, 2014. We expect to do interviews at the beginning of January.


Previous entries