Come talk at the Haskell Implementers' Workshop!

Tuesday, 02 June 2009, by Duncan Coutts.
Filed under community.

Are you doing research, hacking or playing with any of the infrastructure behind the Haskell ecosystem? Come to the Haskell Implementers' Workshop and tell us about it!

The Haskell Implementers' Workshop is a new workshop that will be held alongside ICFP and the Haskell Symposium this summer in Edinburgh.

It's not just for compiler hackers! We also want to hear from people writing tools or libraries, people with cool ideas for directions in which we should take the platform, proposals for new features to be implemented, and half-baked crazy ideas. The aim is to get the people involved with building the whole infrastructure together to share ideas, experiences, and ask for feedback from fellow experts.

So think about what you've been working on recently: is there something about it that you would like to share or get feedback on? Perhaps you've got a great idea, or even something to complain about! You've now got two weeks to think of something and send us a title and a short abstract for your talk. See the call for talks for more details.

It's going to be a relatively informal affair: there will be no proceedings, though with the speakers' permission we hope to video the talks and make any slides available.

Building plugins as Haskell shared libs

Thursday, 21 May 2009, by Duncan Coutts.
Filed under coding, industrial-haskell-group.

This post is a sneak preview about building Haskell shared libraries on Linux. We'll look at how to use ghc to make a standalone Haskell shared library that exports C functions. We could use this shared library as part of a bigger project (without having to use ghc for the final linking) or we could load it dynamically, e.g. as a plugin in some other program.

This work is being supported by the IHG, and it builds on the hard work of several other people over the last few years (see the first post in this series for the history and credits).

Building GHC with shared libs support

For starters you need the latest development version of GHC. See these instructions on getting the sources and doing the configure, build and install steps.

The only non-standard thing you need to do is to use ./configure --enable-shared. Note that this has only been tested on Linux x86-64 and x86, though in the past, the shared lib support has also worked on Linux PPC and OSX PPC.

Currently what you get is a ghc that is itself statically linked, but which can build programs and shared libraries that dynamically link against the runtime system and the base libraries.

Building programs that use shared libs

For example, for a "hello world" program:

$ ghc --make -dynamic Hello.hs

It is interesting to look at the output of the ldd program:

$ ldd ./Hello

I'll not paste the whole output here, but the interesting thing it shows is that the program links against each Haskell package as a separate .so file. What is more, the dynamic linker is able to find the shared libs even though they are not in a standard location like /usr/local/lib. This is because by default ghc uses the -rpath mechanism to embed the library search path into the binary. It is also possible to build binaries in a mode that does not embed an rpath, which might be more suitable for deployment.

Building shared libs

Suppose we have a module Foo.hs that uses the FFI to export a C function called foo():

module Foo where

import Foreign.C

foreign export ccall foo :: CInt -> CInt

foo :: CInt -> CInt
foo = ...

we can build it into a shared library:

$ ghc --make -dynamic -shared -fPIC Foo.hs -o libfoo.so

We need to use -dynamic, -shared and -fPIC. The -dynamic flag tells ghc at the compile step to produce code so that it can link dynamically to dependent packages. At the link step it tells ghc to actually link dynamically to dependent packages. The -shared flag tells ghc to link a shared library rather than a program. The -fPIC flag tells ghc to make code that is suitable to include into a shared library. If we were to break it down into separate compile and link steps then we would use:

$ ghc -dynamic -fPIC -c Foo.hs
$ ghc -dynamic -shared Foo.o Foo_stub.o -o libfoo.so

In principle you can use -shared without -dynamic in the link step. That would mean statically linking the rts and all the base libraries into your new shared library, which would make a very big, but standalone, shared library. However, that would require all the static libraries to have been built with -fPIC so that the code is suitable for inclusion in a shared library, and we don't do that at the moment.

If we use ldd again to look at the libfoo.so that we've made, we will notice that it is missing a dependency on the rts library. This is a problem that we've yet to sort out, so for the moment we can just add the dependency ourselves:

$ ghc --make -dynamic -shared -fPIC Foo.hs -o libfoo.so \
  -lHSrts-ghc6.11 -optl-Wl,-rpath,/opt/ghc/lib/ghc-6.11/

The reason it's not linked in yet is that we need to be able to switch which version of the rts we're using without having to relink every library. For example, we want to be able to switch between the debug, threaded and normal rts versions. It's quite possible to do this, it just needs a bit more rearranging in the build system. Once that's done you'll even be able to switch rts at runtime, e.g.:

$ LD_PRELOAD=/opt/ghc/lib/ghc-6.11/libHSrts_debug-ghc6.11.so ./Hello

Going back to our libfoo.so: now that it is linked against the rts it is completely standalone. We can link it into a C program using just gcc, or we can use dlopen() to load it at runtime.

Assuming we've got libfoo.so in the current directory, we can link it into a C program:

$ gcc main.c -o main -lfoo -L.

If you use ldd now, it'll tell you that libfoo.so is not found. Remember that the runtime linker does not look in the same places as the static linker. We told the static linker to look in the current directory with the -L. flag. For the dynamic linker we can either move our libfoo.so to /usr/local/lib, or we can embed a path into the binary that tells the runtime linker where to look. One particularly neat way to do this is to tell it to look for the library not at an absolute path, but relative to the program itself:

$ gcc main.c -o main -lfoo -L. -Wl,-rpath,'$ORIGIN'

The Linux runtime linker understands the special variable $ORIGIN and interprets it as the location of the executable. This also works on Solaris. Windows and OS X have something similar. This makes it possible to distribute binaries along with shared libraries and have the whole lot fully relocatable.

If we want to load the library and call functions at runtime we would use C code like:

void *dl = dlopen("./libfoo.so", RTLD_LAZY);
int (*foo)(int a) = dlsym(dl, "foo");
printf("%d\n", foo(2500));

In this case we do not need to link our C program against libfoo.so at build time (we just need -ldl for the dynamic linking functions like dlopen):

$ gcc main.c -o main -ldl

Now one thing to watch out for is that before you call any exported Haskell function, you have to start up the runtime system. If you just call foo() directly then it'll emit a helpful error message to remind you. We have to use the C API of the Haskell FFI to initialise the runtime system. This is a little tiresome. In our case it'll look like:

hs_init(&argc, &argv);
hs_add_root(__stginit_Foo);

The first line is specified by the Haskell FFI. The second is a GHC'ism. It initialises the module containing the function we're going to call.

If you're exporting a plugin API then hopefully the API will support some kind of plugin initialisation. In that case you can include the above C code to initialise the rts before any of the Haskell functions get called. We can do that by adding the above initialisation code into a C function and exporting that from our shared lib:

void init (void);
void init (void) { ... }
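Filled in, the init function might look like this sketch against the GHC 6.11-era C API (hs_init and hs_add_root come from GHC's HsFFI.h; __stginit_Foo is the module initialiser GHC generates for module Foo):

```c
/* init.c: initialise the Haskell runtime system for the plugin. */
#include <HsFFI.h>

extern void __stginit_Foo(void);

void init(void);

void init(void)
{
    /* hs_init wants pointers to argc/argv; dummy values are fine here */
    static char *args[] = { "libfoo.so", NULL };
    static int   argc   = 1;
    static char **argv  = args;

    hs_init(&argc, &argv);
    hs_add_root(__stginit_Foo);
}
```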

Then we would add init into our shared lib:

$ ghc -fPIC -c init.c
$ ghc -dynamic -shared Foo.o Foo_stub.o init.o -o libfoo.so \
  -lHSrts-ghc6.11 -optl-Wl,-rpath,/opt/ghc/lib/ghc-6.11/

Of course the calling program has to call init() first.

If you have to support a C API where there is no initialiser, then you can use this trick:

static void init (void) __attribute__ ((constructor));
void init (void) { ... }

The constructor attribute means the function will be called on program startup or as soon as the library is loaded via dlopen.

Next steps for the Haskell Platform

Wednesday, 06 May 2009, by Duncan Coutts.
Filed under community, haskell-platform.

Don just announced the first release of the Haskell Platform.

The intention of this first major release series is to get up to speed and test out our systems for making releases. We want to have everything working smoothly in time for GHC 6.12, when we hope to take over from the GHC team the task of making end-user releases.

We would like to thank the people who have worked on the release. Mikhail Glushenkov and Gregory Collins have put a lot of effort into the Windows and OSX installers. (We hope the OSX installer will be available in time for the next minor release.) We also received a lot of helpful feedback on the release candidates and the general release process from Claus Reinke, Sven Panne and Bulat Ziganshin. Many other people tested out release candidates on a range of systems. Thanks to everyone for all that.


There will be follow-up minor releases 4 weeks and 10 weeks after this initial release. These will incorporate feedback on the installers and packaging. Your comments and feedback will be appreciated.

Upcoming policy decisions

We have said that major releases will be on a 6 month schedule. Major releases may include new and updated packages, while minor releases will contain bug fixes and fixes for packaging problems.

There are many policy details that we have to sort out however. For example, how do we decide which packages to add to new releases? What quality standards should we demand?

Importantly, these policy decisions are not ones that Don and I want to make ourselves, and indeed we should not be the ones to make them. These are questions for the community to decide. The plan is to discuss them on the libraries mailing list in the coming weeks and months. However, to make sure that necessary decisions do actually get made I'm going to propose a steering committee. The members would have the task of talking to the release team, thinking about what needs to be decided and guiding discussions on the mailing list. They would also have to make sure policy decisions are recorded in the wiki and are communicated to the release team.

So, if you are interested in the direction and success of the platform then now is a good time to get involved. Keep an eye out for the discussions on the libraries mailing list. If you want to do some hacking then we still need more help to organise and automate our release processes.

Well-Typed at CUFP

Sunday, 03 May 2009, by Ian Lynagh.
Filed under industrial-haskell-group, well-typed.

The Commercial Users of Functional Programming (CUFP) workshop is in Edinburgh this year, on the 4th September, along with the developer tracks on the 3rd and 5th. Both Duncan and I will be there, as well as at ICFP and the other co-located events. If you'll be there then and would like to talk to us, either about Well-Typed or about the Industrial Haskell Group (IHG), then drop us an e-mail or just find us during the week.

In the mean time, if you'd like to give a 25 minute talk about your experiences with functional programming at CUFP, then you have just two weeks to submit a proposal. These talks are a great way for everyone to benefit from each other's experiences. The call says:

Talks are typically 25 minutes long, but can be shorter. They aim to inform participants about how functional programming played out in real-world applications, focusing especially on the re-usable lessons learned, or insights gained. Your talk does not need to be highly technical; for this audience, reflections on the commercial, management, or software engineering aspects are, if anything, more important. You do not need to submit a paper! Talks on the practical application of functional programming with a primarily technical focus may also be appropriate for the adjacent DEFUN 2009 event.

If you are interested in offering a talk, or nominating someone to do so, send an e-mail to francesco(at)erlang-consulting(dot)com or jim(dot)d(dot)grundy(at)intel(dot)com by 15 May 2009 with a short description of what you'd like to talk about or what you think your nominee should give a talk about. Such descriptions should be about one page long.

"Hello world" now only 11k using GHC with shared libs

Tuesday, 28 April 2009, by Duncan Coutts.
Filed under coding, industrial-haskell-group.

$ ./Hello.dyn 
Hello World!

$ ls -ogh Hello Hello.dyn
411K 2009-04-28 21:59 Hello
 11K 2009-04-28 21:55 Hello.dyn

On Linux x86-64 with GHC using shared libraries a "Hello World" program is now only 11k compared to 411k previously. By comparison, JHC manages 6.4k and an equivalent C program is 6.3k. (All sizes after running strip on the binary.)

As I mentioned earlier, the IHG has asked us to work on improving GHC's support for shared libraries. I've been updating the new GHC build system to support --enable-shared and I've just now managed to get the build to go through. I'll clean up my patches and send them in tomorrow. There are still a number of things to do. I've got to run the testsuite with everything built for shared libs. Clemens had this working before so I'm not expecting too many test failures. We also need to set up a GHC buildbot to use --enable-shared so that we do not get regressions.

The next task will be to test that it works to make a Haskell library that exports a C API and to use it as a plugin for some other program. Anyone got any good suggestions for a simple demo plugin? What programs have nice simple plugin APIs?

First round of Industrial Haskell Group development work

Tuesday, 28 April 2009, by Duncan Coutts.
Filed under industrial-haskell-group.

The Industrial Haskell Group (IHG) have asked us to get cracking on a number of tasks. We'll talk in more detail about each one as we tackle them.

Shared libraries

We've started on the shared libraries task. This is quite a big area. Lots of people have put a lot of hard work into it already but there's a fair bit left to do before we have GHC releases using them by default.

A little history

Wolfgang Thaller did a lot of the original work on generating position independent code (PIC) in the native codegen. Clemens Fruhwirth pushed things further along as part of a SoC project: he got shared libs working on Linux and started to address some of the packaging and management issues. GHC version 6.10 actually shipped with the shared libs code as an experimental feature.

Why do we care about shared libs?

There are several reasons we care. The greatest advantage is that it enables us to make plugins for other programs. There are loads of examples of this, think of plugins for things like vim, gimp, postgres, apache. On Windows if you want to make a COM or .NET component then it usually has to be as a shared library (a .dll file).

There has been most demand for this feature from Windows users over the years, and for some time it has been possible to generate .dlls using GHC (though it was broken in version 6.10.1). It's not been an easy feature to use, however, and what's more, the current results are not exactly great. While you can currently take a bunch of Haskell modules that export a C API and make a .dll, the .dll file you get is huge. It statically links in the runtime system and all the other Haskell packages. So if you want to use more than one dll plugin then each one has its own copy of the GHC runtime system and all the libraries! Obviously this is not ideal. Having all these copies of the runtime system and base libs takes more memory, more disk space and slows things down. What everyone really wants is to be able to build the runtime system and each Haskell package as a separate .dll file. Then each plugin would be small and would share the runtime system and other dependencies that they have in common.

A somewhat superficial reason is that it makes your "Hello World" program much smaller because it doesn't have to include a complete copy of the runtime system and half of the base library. It's true that in most circumstances disk space is cheap, but if you've got some corporate shared storage that's replicated and meticulously backed-up and if each of your 100 "small" Haskell plugins is actually 10MB big, then the disk space does not look quite so cheap.

Using shared libraries also makes things a bit easier for Haskell applications that want to do dynamic code loading. For example, GHCi itself currently has to load two copies of the base package: the one that it is statically linked with and another copy that it loads dynamically. With shared libraries it would just end up with another reference to the same copy of the single shared base library.

Shared libs also completely eliminate the need for the "split objs" hack that GHC uses to reduce the size of statically linked programs. This should make our link times a bit quicker.

What we'll be doing

We're planning to get things to the stage where a GHC user can make a working plugin on Linux x86, Linux x86-64 and Windows.

As recently as a few days ago people have managed to get GHC HEAD working with shared libraries on Linux x86-64. Since then however we've had the new GHC build system land in the HEAD branch. So the first thing I've been working on is porting the shared library support to the new build system. So far so good. I'll report when I've got the build to go all the way through.

Platform progress and the Hackathon

Friday, 24 April 2009, by Duncan Coutts.
Filed under community, haskell-platform.

The Haskell Hackathon last weekend was a great success with more than 50 people attending over the three days. Thanks to the sponsors and local organisers!

If you've been to a few of these events you learn that it's best not to come with too many preconceived ideas for what to work on. Since the point of the hackathon is really collaboration, you end up spending half the time talking and the other half working on cool ideas that other people bring.

I arrived with the general plan to work on the Haskell Platform release, and along with Don Stewart and Lennart Kolmodin we did actually get a bit done. I'm slightly embarrassed to admit that I spent three days at the Haskell Hackathon and wrote no Haskell code, only POSIX shell script and M4 autoconf macros!

Don and I updated the list of packages that will be in the first platform release. There were a few that needed to be bumped after the ghc-6.10.2 release. Our thanks to Ross who had already uploaded all the core and "extra libs" packages to Hackage.

The three of us also worked on making a generic Unix tarball of the platform. The point is for users of distros which do not yet have native packages for the platform to be able to download this tarball and ./configure; make; make install. We even managed to get something working just enough for people to be able to test it (haskell-platform-2009.0.0.tar.gz).

Chris Eidhof and Eelco Lempsink of Tupil designed a cool "Get Haskell" download page. (The silly caption was Chris's joke in response to Ganesh's comment about an earlier design.) The idea is that we would put this page up to provide an easy start for new users. For OSX and Windows, the icons would link directly to a download and a page with install and post-install instructions. The Linux icon would link to another page with instructions for each supported distro, or the generic tarball for unsupported distros.

Outside of the Hackathon people have also been working hard on the platform release. If you're on the mailing list you'll know that Mikhail Glushenkov has been making great progress on preparing a Windows installer. He's got a beta version available (HaskellPlatform-2009.0.0-setup.exe). Report feedback in the platform trac ticket #6.

Gregory Collins has also been working hard on a cabal2macpkg tool to generate OSX packages from Cabal packages. He'll use this for each package in the platform and then bundle them all together (along with ghc) into one installer. He's been having difficulty with the fact that the package format for OSX Leopard is woefully under-documented.

If you're someone who prepares distro packages then now is an excellent time to get started making sure you've got the correct versions of all the platform packages and making a haskell-platform meta-package. See the platform trac for more details.

Regression testing with Hackage

Saturday, 21 March 2009, by Duncan Coutts.
Filed under haskell-platform.

Suppose you wanted to do something rash like release a new version of some important piece of infrastructure like Cabal, haddock or indeed ghc itself. Of course you worry that your sparkling new release might have hidden regressions. If only you could check that you're not breaking anyone's code. Well, you can!

We can use the cabal command line tool to do regression testing. Basically we build all of Hackage with the old and new releases and then we compare the build reports to find regressions. Simple!

Let's look at the details...


The Industrial Haskell Group

Monday, 02 March 2009, by Duncan Coutts.
Filed under well-typed.


We are pleased to announce the creation of the Industrial Haskell Group (IHG). The IHG is an organisation to support the needs of commercial users of the Haskell programming language.

For more information, please see the IHG website.

Currently, the main activity of the IHG is a collaborative development scheme, in which multiple companies fund work on the Haskell development platform to their mutual benefit. The scheme has started with three partners of the IHG, including Galois and Amgen.

More details of the scheme are available on the IHG website.

If your company is interested in joining, then please get in touch by e-mail.

Cabal ticket #500

Saturday, 14 February 2009, by Duncan Coutts.
Filed under community.

I just opened the 500th Cabal ticket! What, you mean there's no prize?

I'll ignore the possibility that this is a sign that Cabal is full of bugs and take the positive view that 500 tickets is a sign of an active, useful project. Nobody bothers complaining about useless projects.

As it happens, the 500th ticket is not a bug but an idea for a project. There are over 1,000 packages on Hackage now, and the question is: how many of them can be installed simultaneously? This is not just an idle statistic. If two packages cannot be installed with consistent dependencies then it is unlikely that you can use both together in your next project. That is of course a wasted opportunity for code re-use.

The idea is that if we can work out the set of packages on Hackage that can all be installed together consistently, then we can mark package pages with that information. Basically we would be handing out brownie points. Hopefully we can also influence maintainers of other packages to adjust their dependencies so that their packages can join the happy collection of packages that all agree on their dependencies.

Actually calculating the maximal set of consistent packages is a bit tricky. It is almost certainly NP-complete in general, but in practice it is probably doable, and we can probably live with approximations.

In fact it might make quite a good Google Summer of Code project. If you are interested, get in touch.
