More help with the Parallel GHC Project

Wednesday, 02 March 2011, by Duncan Coutts.
Filed under well-typed.

In addition to the full-time job we advertised recently, Eric Kow will be helping us out one day a week with the Parallel GHC Project.

Eric has become well known for the excellent work he's done organising the darcs project in recent years. He will be applying his talents to help us manage and promote the Parallel GHC Project. The Parallel GHC Project is not just about "getting useful stuff done" and helping the four partner organisations. We also aim to promote practical parallel Haskell tools and techniques to the wider industry.

So, welcome Eric! And if you start seeing Eric going on about parallelism then don't be surprised.


Well-Typed are hiring: Haskell consultant

Tuesday, 15 February 2011, by Ian Lynagh.
Filed under well-typed.

In order to keep up with customer demand, we are looking to hire a Haskell expert to work with us at Well-Typed as a Haskell consultant.

This is an exciting opportunity for someone who is passionate about Haskell and who is keen to improve and promote Haskell in a professional context.

The role is quite general and could cover any of the projects and activities that we are involved in as a company. The tasks may involve:

Well-Typed has a variety of clients. For some we do proprietary Haskell development and consulting. For others, much of the work involves open-source development and cooperating with the rest of the Haskell community: the commercial, open-source and academic users.

Our ideal candidate has excellent knowledge of Haskell, whether from industry, academia, or personal interest. Familiarity with other languages, low-level programming, and good software engineering practices is also useful. Good organisation and the ability to manage your own time and reliably meet deadlines are important. You are likely to have a bachelor's degree or higher in computer science or a related field, although this isn't a requirement. Experience of consulting, or of running a business, is also a bonus.

The position is initially a one-year contract, paying 150 GBP per day, plus a bonus if profits are high. We offer flexible hours and working from home. Living in England is not required.

In the longer term there is the opportunity to become a member of the partnership with a full stake in the business: being involved in business decisions, and fully sharing the risks and rewards.

If you are interested, please apply via info@well-typed.com. Tell us why you are interested and why you would be a good fit for the job, and attach your CV. We are more than happy to answer informal enquiries. Contact Duncan Coutts, Ian Lynagh or Andres Löh for further information, either by email or IRC.

The deadline for applications is Tuesday 1st March 2011.

About Well-Typed

Well-Typed LLP is a Haskell services company, providing consultancy services, writing bespoke applications, and offering commercial training in Haskell and related topics.

http://www.well-typed.com/


How much tea

Friday, 14 January 2011, by Duncan Coutts.
Filed under well-typed.

Have you ever wondered how much tea an Englishman needs to drink to write a PhD thesis? After expending considerable time and effort I have discovered that the answer is approximately this much...

(Read more …)

Parallel Haskell project underway

Monday, 15 November 2010, by Dmitry Astapov.
Filed under well-typed, parallel.

GHC HQ and Well-Typed are pleased to report that work has started on the MSR-funded project to push the real-world use of parallel Haskell.

We will be working with four industrial partners over the next two years, with the aim of demonstrating that parallel Haskell can be employed successfully in industrial projects.

The participating organizations are Dragonfly, IIJ Innovation Institute Inc., Los Alamos National Laboratory and Willow Garage; each is described in more detail below.

Each group is working on their own project, applying parallel Haskell and their domain-specific expertise. In addition to providing advice on Haskell tools and techniques, we will work with these partners to identify and resolve any issues that are hindering progress. We are prepared to handle issues covering anything from the compiler and runtime system, through to platform, tool and library problems.

All the participants are working on complex, real-world problems. Three projects involve scientific problems, and the fourth involves network servers. Three of the projects are targeting single-node SMP systems, while the fourth is targeting clusters. In two cases, Haskell will be directly pitted against existing code written in C or C++.

Project progress reports will be posted to the Well-Typed blog and to the Parallel Haskell mailing list.

Dragonfly

www.dragonfly.co.nz

Participants: Finlay Thompson, Edward Abraham

Cloudy Bayes: Hierarchical Bayesian modeling in Haskell

The Cloudy Bayes project aims to develop a fast Bayesian model fitter that takes advantage of modern multiprocessor machines. It will support model descriptions in the BUGS model description language (as used by WinBUGS, OpenBUGS, and JAGS), and will be implemented as an embedded domain-specific language (EDSL) within Haskell. A wide range of hierarchical Bayesian model structures will be possible, including many of the models used in the medical, ecological, and biological sciences.

Cloudy Bayes will provide an easy-to-use interface for describing models, running Markov chain Monte Carlo (MCMC) fitters, diagnosing performance and convergence as the fitter runs, and collecting output for post-processing. Haskell's strong type system will be used to ensure that model descriptions make sense, providing a fast, safe development cycle.
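
To give a flavour of what a typed model-description EDSL might look like, here is a purely illustrative sketch; it is not the Cloudy Bayes design, and every name in it is made up.

    {-# LANGUAGE GADTs #-}
    -- Purely illustrative sketch of a typed model-description EDSL.
    -- This is NOT the Cloudy Bayes design; every name here is hypothetical.
    module ModelSketch where

    -- Primitive distributions, indexed by the type of value they produce.
    data Dist a where
      Normal    :: Double -> Double -> Dist Double   -- mean, standard deviation
      Gamma     :: Double -> Double -> Dist Double   -- shape, rate
      Bernoulli :: Double -> Dist Bool               -- success probability

    -- A model draws named parameters and uses them to build further
    -- distributions; the type parameter tracks what the model describes,
    -- so ill-formed model descriptions are rejected at compile time.
    data Model a where
      Draw   :: String -> Dist a -> Model a
      Return :: a -> Model a
      Bind   :: Model a -> (a -> Model b) -> Model b

    -- A small hierarchical example: observations drawn around a
    -- group-level mean which is itself a latent parameter.
    example :: Model Double
    example =
      Draw "mu" (Normal 0 10) `Bind` \mu ->
      Draw "y"  (Normal mu 1)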

IIJ Innovation Institute Inc.

www.iij-ii.co.jp

Participants: Kazu Yamamoto

Haskell is suitable for many application domains, and GHC's support for lightweight threads makes it attractive for concurrent applications. One exception has been network server programming, because GHC 6.12 and earlier have an IO manager that is limited to 1024 network sockets. The upcoming GHC 7 has a new IO manager implementation that removes this limitation.

This project will implement several network servers to demonstrate that Haskell is suitable for network servers that handle a massive number of concurrent connections.
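
To illustrate why lightweight threads make this style attractive, here is a generic thread-per-connection sketch using the venerable Network module; it is not code from the IIJ project. Each connection gets its own cheap Haskell thread, and the IO manager multiplexes all of them onto a handful of OS threads.

    -- Minimal thread-per-connection echo server; a generic illustration,
    -- not code from the IIJ project.  Compile with -threaded.
    import Control.Concurrent (forkIO)
    import Control.Monad (forever, void)
    import Network (withSocketsDo, listenOn, accept, PortID(PortNumber))
    import System.IO (hGetLine, hPutStrLn, hClose,
                      hSetBuffering, BufferMode(LineBuffering))

    main :: IO ()
    main = withSocketsDo $ do
      sock <- listenOn (PortNumber 7000)
      forever $ do
        (h, _host, _port) <- accept sock
        -- One cheap Haskell thread per connection: with the new IO
        -- manager this scales well beyond the old 1024-socket limit.
        void $ forkIO $ do
          hSetBuffering h LineBuffering
          line <- hGetLine h
          hPutStrLn h ("echo: " ++ line)
          hClose h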

Los Alamos National Laboratory

www.lanl.gov

Participants: Michael Buksas, Timothy M. Kelley

This project will use parallel Haskell to implement high-performance Monte Carlo algorithms, a class of algorithms which use randomness to sample large or otherwise intractable solution spaces. The initial goal is a particle-based MC algorithm suitable for modeling the flow of radiation, with application to problems in astrophysics. From this, the project is expected to move to identification of suitable abstractions for expressing a wider variety of Monte Carlo algorithms, and using models for different physical phenomena.
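
As a generic illustration of the style (nothing to do with the LANL code itself), a toy Monte Carlo estimate can be parallelised with the Strategies combinators: each chunk of samples is evaluated in parallel and the counts are combined at the end.

    -- Toy Monte Carlo estimate of pi, parallelised with Strategies.
    -- A generic illustration only, not the LANL project code.
    -- Compile with -threaded and run with +RTS -N.
    import Control.Parallel.Strategies (parMap, rdeepseq)
    import System.Random (mkStdGen, randomRs)

    -- Count how many of n pseudo-random points fall inside the unit circle,
    -- using the seed to give each chunk an independent stream.
    hits :: Int -> Int -> Int
    hits seed n = length (filter inCircle (take n points))
      where
        coords = randomRs (0, 1) (mkStdGen seed) :: [Double]
        points = toPairs coords
        inCircle (x, y) = x * x + y * y <= 1
        toPairs (a:b:rest) = (a, b) : toPairs rest
        toPairs _          = []

    main :: IO ()
    main = do
      let chunks   = 8                   -- number of parallel chunks
          perChunk = 250000              -- samples per chunk
          counts   = parMap rdeepseq (\seed -> hits seed perChunk) [1 .. chunks]
          total    = fromIntegral (sum counts)
          samples  = fromIntegral (chunks * perChunk)
      print (4 * total / samples :: Double)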

Willow Garage

www.willowgarage.com

Participants: Ryan Grant

Distributed Rigid Body Dynamics in ROS

Willow Garage seeks a high-level representation for a distributed rigid body dynamics simulation, capable of excellent parallel speedup on current and foreseeable hardware, yet linking to existing optimized libraries for low-level message passing and matrix math.

This project will drive API, performance, and profiling tool requirements for Haskell's interface to the Message Passing Interface (MPI) specification, an industry-standard in High Performance Computing (HPC), as used on clusters of many nodes.

Competing internal initiatives use C++/MPI and CUDA directly.

Willow Garage aims to lay the groundwork for personal robotics applications in everyday life. ROS (Robot Operating System - ROS.org) is an open source, meta-operating system for your robot.


Post-ICFP summary

Friday, 15 October 2010, by Duncan Coutts.
Filed under community.

It's been a bit over a week since we all got back from ICFP in Baltimore. I thought I'd write up a little report with my perspective.

As usual it was great fun. I basically count it as a holiday, though not a very restful one, with a packed programme of talks and every other waking moment spent jabbering to friends and colleagues.

ICFP and its associated workshops

This year the conferences seemed to be arranged in a progression from most highly academic and mathematical towards the more practical and commercial. I don't know if it was deliberate, but it seemed to work fairly well. People could arrive or leave at the point suiting their interest. It was quite interesting to see how the mix of people changed through the week. I missed the metatheory, mathematically structured FP and generic programming, but arrived in time for the main 3-day ICFP conference and stayed through to the end.

Well-Typed was pretty well represented at the conferences this year. Andres was on the ICFP programme committee and had a paper accepted for the Haskell Symposium. I was on the Haskell Symposium programme committee (but didn't review Andres et al's paper of course!) and along with Simon Marlow, I co-organised the Haskell Implementors' Workshop.

In theory I co-authored a presentation with Don Stewart on Hackage, Cabal and the Haskell Platform, though in practice he did it all and I just reviewed the slides and made a few suggestions. I had a slight feeling beforehand that there was not really that much to talk about, partly because I'm feeling a little frustrated that I have not been able to spend more time on Cabal. On reflection, however, there was plenty to say: we have made quite a bit of progress during the year, especially in establishing the platform as the way most people get their Haskelly goodness.

Colin Runciman, Don and I spent a good couple hours plotting for a paper, perhaps for ICFP or the Haskell Symposium next year. I'm looking forward to working with Colin and Don on that. It's a nice bit of classic lazy functional programming I think.

Simon Marlow and I declared that we would hand over the organisation of the Haskell Implementors' Workshop to a new team, and we've already got a couple of volunteers from this year's programme committee. So I had thought that I would not be organising anything for next year's ICFP in Japan. That was until Michael Sperber asked if I would like to help him organise the CUFP tutorials next year. If you were at ICFP in the last couple of years you may remember DEFUN, the functional programming developer tracks. This year they were rebranded as being part of CUFP. The idea is to appeal more to programmers using (or wanting to use) FP at work, and to help persuade managers that such training is worthwhile.

Paper highlights

A couple papers from the Haskell Symposium that I particularly liked, or thought significant:

STG in Coq, or to give it its proper name, A Systematic Derivation of the STG Machine Verified in Coq, by Maciej Piróg and Dariusz Biernacki from the University of Wrocław in Poland. They presented a fragment of a bigger project to build a verified Haskell compiler, perhaps similar to Xavier Leroy's work on a verified C compiler. To verify that a compiler faithfully translates a program in a high-level language into a program with the same meaning in a low-level language, what you need is a proper formal connection between the high- and low-level languages. And of course it is not just one high-level and one low-level language but a whole series of intermediate languages: Xavier's CompCert uses about a dozen, and real compilers likewise use several. GHC goes from Haskell, to Core (System FC), to STG, to C--, and finally into either C, LLVM or assembly. This paper focuses on STG, which is the language on the boundary between the functional world and the imperative world. From the functional side it is just a stylised subset of Core, but the same language also has an imperative semantics, given by an abstract machine that explains how to execute it efficiently. The paper makes the formal connection between the functional and imperative semantics of the language. I'm looking forward to more work from this team.
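
If you want to look at these intermediate languages yourself, GHC will happily dump them. A minimal example (the module is just a placeholder):

    -- Double.hs: a tiny module for inspecting GHC's intermediate languages.
    -- Compile with, for example:
    --   ghc -O -ddump-simpl Double.hs    (Core, i.e. System FC)
    --   ghc -O -ddump-stg   Double.hs    (STG)
    module Double where

    double :: Int -> Int
    double x = x + x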

J. Garrett Morris presented an experience report on Using Hackage to Inform Language Design. I thought this was great, and not just because he cites one of my old blog posts as inspiration! The basic idea is really simple: take advantage of the fact that we have a large amount of publicly available code in a standardised form to get empirical data to inform questions about language design. Getting lots of real data has not generally been the tradition in the programming language community, partly because it is so hard to get (but also because we have ideas about what programmers ought to do). He gave an example to do with the design of the type class system, and did a survey to see how overlapping instances are used in practice. The tools at this stage are a bit hacky, but with a little work they could be improved and the process automated much more. I hope we will see more people taking this approach in future, especially to help the Haskell prime process.
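
The mechanics of this kind of survey need not be sophisticated. As a purely illustrative sketch (not the tool from the paper), one could walk an unpacked local copy of Hackage and count the modules mentioning a given extension:

    -- Illustrative sketch only, not the tool described in the paper:
    -- count the .hs files under a directory that mention a given string,
    -- e.g. a LANGUAGE extension such as OverlappingInstances.
    import Control.Monad (filterM, forM)
    import Data.List (isInfixOf, isSuffixOf)
    import System.Directory (doesDirectoryExist, getDirectoryContents)
    import System.Environment (getArgs)
    import System.FilePath ((</>))

    -- Recursively list all files under a directory.
    listFiles :: FilePath -> IO [FilePath]
    listFiles dir = do
      entries <- getDirectoryContents dir
      let names = [dir </> e | e <- entries, e /= ".", e /= ".."]
      paths <- forM names $ \p -> do
        isDir <- doesDirectoryExist p
        if isDir then listFiles p else return [p]
      return (concat paths)

    main :: IO ()
    main = do
      [root, needle] <- getArgs
      files <- listFiles root
      let sources = filter (".hs" `isSuffixOf`) files
      matches <- filterM (fmap (needle `isInfixOf`) . readFile) sources
      putStrLn (show (length matches) ++ " of " ++ show (length sources)
                ++ " modules mention " ++ needle)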

"The Future of Haskell" discussion

Traditionally, the Haskell Symposium ends with a long discussion entitled "The Future of Haskell". For the past few years this session has become less and less about the future direction of the language or about uncomfortable home truths and more about incremental changes and infrastructure (like Hackage and the Haskell Platform). Recognising this, the programme committee this year decided to scrap the future of Haskell discussion and just have short reports on the progress of the language standard i.e. Haskell 2010 and the future 2011/2012 revision.

For the implementors' workshop we decided to pick up the baton from the symposium and run a "Beyond Haskell" discussion. The idea was to be a bit less self-congratulatory, more forward looking and to pose uncomfortable questions. To kick things off we had Ben Lippmeier give a short intro. I didn't know beforehand what direction he would take, but we expected he'd do something interesting and we were not disappointed.

Ben talked about the problem of performance: we often know what ugly, fast, low-level program we want to write, but we have difficulty expressing it in a high-level way that can be reliably translated into that ugly fast version. So it's not that we cannot write fast programs; rather, we want to have our cake and eat it: to write nice programs and reliably get fast programs out. People who work on this often end up writing Haskell but constantly studying the generated Core to see why the transformation they wanted didn't quite work out. Reliability can be crucial: if it is a transformation that makes a 10x or 100x difference then you need to be sure that it is going to work. Perhaps not everyone worries about performance like this, but it struck a chord with me because it was more or less exactly the issue I had in mind when I started my PhD. I was working on partial evaluation, with the notion that the programmer would be able to control the compile-time transformations that generate the fast program from the nice program. I still think it's an approach worth investigating.
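
A standard toy instance of the tension (my own example, not one from Ben's talk) is the nice compositional pipeline versus the hand-written strict loop we hope the compiler will turn it into:

    {-# LANGUAGE BangPatterns #-}
    -- The "nice program" versus "ugly fast program" tension in miniature.
    -- My own toy example, not taken from Ben's talk.

    -- The nice, compositional version: build an intermediate list and sum it.
    -- We would like the compiler to fuse this into a tight loop, reliably.
    sumSquares :: Int -> Int
    sumSquares n = sum (map (\x -> x * x) [1 .. n])

    -- The ugly fast version we would rather not write by hand:
    -- a strict accumulating loop with no intermediate list.
    sumSquares' :: Int -> Int
    sumSquares' n = go 1 0
      where
        go !i !acc
          | i > n     = acc
          | otherwise = go (i + 1) (acc + i * i)

    main :: IO ()
    main = print (sumSquares 1000000, sumSquares' 1000000)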

All the HIW slides and videos are on the HIW wiki page. Thanks to all the presenters for making their slides available and to Malcolm Wallace for videoing the whole event.

The Haskell "BoF" session

One of the things that CUFP started doing this year is "birds of a feather" (BoF) sessions. The idea is to get groups together for a couple hours to discuss some topic of interest in the community. Bryan and Johan organised a session on "Haskell in the real world". It was a pretty interesting and useful discussion I thought, particularly in relation to how we make improvements in infrastructure and attract volunteers to do that. We also talked quite a bit about what needs doing to keep up Haskell adoption, like coherence of the web presence, IDEs etc. Don did a good job as secretary and posted his notes afterwards.

Google Summer of Code

I was very pleased this summer to be involved with two GSoC projects and two excellent students. I was not technically the mentor in either case but since they were both related to Cabal/Hackage then I could hardly not be involved!

What I was especially pleased about is that both of them came to the Haskell Implementors' Workshop to give presentations about their GSoC projects. The HIW programme committee were very supportive of their talk proposals. The talks were on topic (being about infrastructure), they were useful for disseminating news to the community, and having GSoC students attend is great for integrating them into the community.

The new hackage

Matt Gruen has been working on the new hackage server implementation, which will give us a decent extensible platform for adding the new hackage features that everyone has been clamouring for.

Matt has the new hackage server code running on sparky and has recently been working on the process of how we will transition from the old to the new server. If anyone wants to help him with that, I'm sure he would appreciate it. There are quite a few things to do. He's got a plan up on the wiki. You can find him by email or in the #hackage IRC channel on freenode.

Cabal test

Thomas Tuegel was working on "cabal test" which is a new Cabal feature to let packages define test suites and have other tools run them and collect results.

As anyone following the cabal-devel mailing list will have noticed from the deluge of patches, I finally finished reviewing and applying all of Thomas's cabal test patches. The plan is that this will be in Cabal-1.10.x which will come with GHC 7. If you watch Thomas's presentation you'll understand that one of the important features of the design is that we can have different protocols that test suites can support. So far we have two protocols, a basic one and a more detailed one. For the Cabal-1.10 release however we will enable just the basic "exitcode-stdio" test interface. We will continue to work on the more detailed interface in the development version of Cabal. In particular we are working with Max Bolingbroke, author of the popular test-framework package, to refine the interface for describing sets of tests.
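
To give a flavour of the basic interface: an exitcode-stdio test suite is simply an executable whose exit code tells the test runner whether the suite passed. A minimal sketch (the package details here are made up for illustration):

    -- Tests.hs: a minimal test suite in the exitcode-stdio style.
    -- The runner only inspects the exit code: zero means every test
    -- passed, non-zero means something failed.
    --
    -- The corresponding stanza in the .cabal file would look roughly like:
    --
    --   test-suite unit-tests
    --     type:          exitcode-stdio-1.0
    --     main-is:       Tests.hs
    --     build-depends: base
    --
    module Main (main) where

    import Control.Monad (unless)
    import System.Exit (exitFailure)

    tests :: [(String, Bool)]
    tests =
      [ ("reverse is an involution", reverse (reverse [1, 2, 3 :: Int]) == [1, 2, 3])
      , ("sum of empty list is 0",   sum ([] :: [Int]) == 0)
      ]

    main :: IO ()
    main = do
      let failures = [name | (name, ok) <- tests, not ok]
      mapM_ (putStrLn . ("FAIL: " ++)) failures
      unless (null failures) exitFailure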

Taken together, these two projects are an important step in our long term plan to make it easier to work out which are the high quality packages on hackage and to improve package quality overall.


Come to the Ghent Hackathon!

Tuesday, 07 September 2010, by Duncan Coutts.
Filed under community.

I'm pleased to announce that Well-Typed is sponsoring the next Haskell Hackathon, which is now just two months away. BelHac will be in Ghent, Belgium, November 5th-7th.

We would like to encourage people to come along. It's a great opportunity to learn about exciting projects and to meet interesting people. The entire Well-Typed team will be there!

If you're planning to come along then please help out the organisers by registering now.


Well-Typed Hires Two Additional Consultants

Monday, 06 September 2010, by Ian Lynagh.
Filed under well-typed.

We are pleased to announce that, following our recent hiring process, we have taken on two additional consultants in order to meet our upcoming commitments.

Andres Löh

Andres started using Haskell in 1997. Since then, he has used Haskell for most of his programming projects. He has participated in the ICFP programming contest multiple times, and won it in a team together with Duncan Coutts, Ian Lynagh and Ganesh Sittampalam in 2004, with an all-Haskell entry.

Andres is a well-known member of the Haskell community. He maintains several tools and libraries on Hackage. He has also packaged Haskell compilers and libraries for the Gentoo and NixOS Linux distributions.

Andres obtained a PhD in Computer Science from Utrecht University in 2004. Since 2004, he has worked as a lecturer and researcher at the Institute of Cybernetics in Tallinn, and at the universities of Freiburg, Bonn, and Utrecht. Andres' research is focused on improving abstraction and reuse in functional programs, by using or enhancing the underlying type system. His interests also include embedded domain-specific languages, version control and typesetting.

Andres was the program chair of the 2006 ACM SIGPLAN Workshop on Haskell. He has served as a program committee member for several other academic conferences and has published regularly at conferences and in journals. He also has extensive experience in teaching Haskell to undergraduate and graduate students, as well as to people with an industrial background.

Andres will begin working for Well-Typed in November.

Dmitry Astapov

Dmitry has ten years of experience using Haskell for solving practical problems. He has used Haskell as a day-to-day scripting language, for large and small projects, at full-time jobs and in consulting projects. He has created several hackage packages and contributed to various Haskell projects including darcs, xmonad, bytestring, happs, HaXml and lambdabot. He has written several tutorials, written content for the Haskell wiki, and is currently editor of a Russian-language magazine on functional programming for which he writes articles on a regular basis. He has supported Haskell newcomers and mentored students as part of the Google Summer of Code programme.

He holds a bachelor's degree in applied mathematics and a specialist's degree in computer science from the Taras Shevchenko National University of Kiev, Ukraine.

Dmitry has already started working for Well-Typed.


Visiting Utrecht and IFL

Friday, 03 September 2010, by Duncan Coutts.
Filed under community.

I've been in Utrecht all week. As per usual I've spent most of my time talking about Haskell and plotting world domination with friends and colleagues.

Andres Löh invited Don Stewart and me to give guest talks on the last day of the Utrecht summer school on applied functional programming. The idea was to give the summer school participants some perspective from people who make a living working with Haskell. Don talked about how Galois have been using Haskell for the last 10 years; what features, tools and techniques they have found to be important. I gave a semi-technical talk entitled "Monoids monoids everywhere!" about functional design patterns. The point was that identifying, capturing and abstracting over patterns is useful in real programs. I used monoids as the running example partly because they're nice and simple but also because it's somewhat surprising how far you can go with such a simple concept.
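
To give a flavour of the running example (this is my reconstruction of the spirit of the talk, not the actual slides): once you notice that lots of everyday types form monoids, many loops collapse into a single mconcat.

    -- Monoids everywhere, in miniature.  A reconstruction of the flavour
    -- of the talk, not the actual slides.
    import Data.Monoid (Monoid(..), Sum(..), Any(..))

    -- A monoid is an associative combining operation with an identity:
    --   mempty <> x == x,  x <> mempty == x,  (x <> y) <> z == x <> (y <> z)

    totalLength :: [String] -> Int
    totalLength = getSum . mconcat . map (Sum . length)

    anyEmpty :: [String] -> Bool
    anyEmpty = getAny . mconcat . map (Any . null)

    main :: IO ()
    main = do
      print (totalLength ["monoids", "monoids", "everywhere"])  -- 24
      print (anyEmpty    ["monoids", "", "everywhere"])         -- True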

Several people have asked for the slides from the talk and I've promised to post them soon. In the live talk I drew some diagrams on the whiteboard and I'd like to add the diagrams to the slides so that the slides will make some sense on their own.

Over the weekend we had a mini-hackathon. Ian caught the ferry over and Johan Tibell flew in to join us. Johan and Don worked on optimising the Data.Map implementation. I spent much of my time reviewing Cabal patches. In particular I spent quite a while reviewing a big pile of patches from Thomas Tuegel which implement the new "cabal test" feature that he's been working on for his GSoC project. He'll be presenting it at the Haskell Implementors' Workshop next month.

The latter half of the week Don, Andres and I have been at IFL. There have been lots of interesting talks on a fairly wide range of FP topics, including plenty on parallelism and concurrency. I've been slightly surprised by the number of talks on low-level, hardware and embedded topics.

Don and I had several interesting chats with Kevin Hammond and his colleagues from the University of St Andrews about their continuing work on parallel and distributed Haskell. I think I've finally sorted out the confusion in my mind about the relationship between "GpH", "GdH", "GUM" and "Eden". For reference:

Oh and I finally gave in and signed Well-Typed up to twitter.


On Hiring Haskell People

Tuesday, 24 August 2010, by Duncan Coutts.
Filed under well-typed.

A couple of months ago we announced that Well-Typed were hiring Haskell people. This is a brief report on how we went about the process of hiring and what we found. The purpose is partly to give some feedback to the many people who applied and hopefully also to provide information to other people who may be looking to hire Haskell experts in future.

The background to our decision to hire was simply that we found ourselves with more work to do than time available, and we expected this to continue. So, while last year we had two people help us on short-term projects, we decided that it made the most sense to expand the size of our permanent team. Both people who worked with us last year had moved on to other exciting Haskell jobs.

Applications

We posted the job notice on our company blog (which is syndicated to Planet Haskell) and also to the haskell and haskell-cafe mailing lists. We probably should have also posted it on the CUFP jobs page.

We were pleased to get a total of 42 applications, of which 19 merited serious consideration, and we eventually settled on a shortlist of 7 to interview. We also received a couple of expressions of interest from people looking for part-time work.

Advertising openly was certainly the right decision, though it does entail a fair amount of work. We received applications from well-known members of the community plus many excellent applications from people we did not know or who we were only peripherally aware of. In the end, we made two offers to people we would not have asked directly: one person we did not previously know and another person we do know but who we would not have thought to ask.

The main features of our job posting were that it was not geographically limited, that the work is (we think) interesting, and that the rate of pay we were able to guarantee was not especially high. All of these affected the kind of applications we received. We are in the lucky position that we do not need to sit in the same physical office as our co-workers, which gives us access to a big international pool of talent. The people who applied were quite dispersed geographically, covering 21 countries: 18 people from the EU, 8 from the US and 16 from elsewhere, including several from Australia, Russia and Ukraine.

World map highlighting the countries we got applications from

We know that we cannot compete with large companies in terms of pay, but on the other hand we are able to offer a great deal of flexibility and work that involves using Haskell and interacting with the Haskell community. Many people wrote about their love of Haskell and functional programming and how they would like to make more use of it professionally.

Decision process

When it came to the decision making process, a key issue was that we had decided that we would need two people rather than one. Hiring two people gave us the opportunity to increase the range of skills in our team by picking people with different skills and background experience. We decided that we should aim to select one person with a mainly academic background and one person with more business and consulting experience.

Ian and I both read all the cover letters and résumés. We didn't want to influence each other's initial assessments, so we read everything independently and compared notes at the end. There was a wide variety in style of résumés, from 1-page bullet lists of education and experience, through to 9 pages including descriptive paragraphs. I don't think one style is a particular advantage over another; those with short résumés tended to come with longer, more descriptive cover letters. A few résumés made it quite difficult to guess how much Haskell experience the applicant really had.

We decided to make an initial "longlist" by restricting our attention to people with three or more years of Haskell experience. That combined with a little more discussion and comparing notes between Ian and myself gave us a longlist of 19 people. This was quite a spectacularly talented group of people, everyone with some mixture of Haskell programming and other commercial experience. It included 5 people with 10 or more years professional programming experience, 7 people with masters degrees, 3 people with PhDs and 7 people who have used Haskell in a commercial context.

The next step of picking an interview shortlist was of course very difficult. Our aim was to interview only a handful of people. We reread and discussed letters and résumés.

It became obvious that we had a clear first choice on the academic side. In a sense we had a shortlist of one. We were thus in the slightly odd position of not shortlisting several people with excellent academic qualifications, but relatively little commercial experience.

On the commercial side, the deciding factor was (our perception of) the combination of Haskell programming skill and business experience, the latter especially in a client-facing role. Of course you cannot really assess programming skill on paper, but before the interview stage one has to go by what people say about themselves and their achievements.

We eventually picked an interview shortlist of six people on the commercial side, plus the one person on the academic side. We sent out notifications to everyone who had applied. Five of the six people had experience of either running their own company or otherwise acting in a client consulting role.

Interviews

For the interviews we decided to use IRC rather than phone or Skype. Ian devised a technical problem to use in the interviews. We had candidates log in to our server so that they could view, edit and run code during the interview. We used a shared console session so we could all see. We scheduled two hours for each interview and ended up using more or less the full time. The technical part ended up taking around 40-60 minutes and we spent the rest of each interview chatting about past experience and future plans.

The purpose of the technical problem was to assess more directly candidates' ability to write Haskell programs that can be used to solve real-world problems, where memory usage and performance are important. The problem was all about evaluation order and memory behaviour. We started by asking candidates to look at a short program and say what shape they would expect the heap profile to have. That would then lead on to a discussion of what things are evaluated at what stage and how much memory they are taking in the meantime. For the final step we asked candidates to rewrite the program to run in constant space. We felt overall that the technical problem was quite useful and we allowed it to become a significant factor in our final decision-making process.

The choice of problem is based on our belief that a good understanding of evaluation order is very important for writing practical Haskell programs. People learning Haskell often have the idea that evaluation order is not important because it does not affect the calculated result. It is no coincidence that beginners end up floundering around with space leaks that they do not understand.
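
The classic shape of such a problem (not the actual interview exercise) is a lazy left fold that quietly builds a long chain of thunks, versus its strict counterpart that runs in constant space:

    -- The classic shape of the problem; not the actual interview exercise.
    import Data.List (foldl')

    -- Lazy left fold: the accumulator becomes a chain of unevaluated
    -- thunks (((0+1)+2)+3)+..., so the heap grows with the input.
    leaky :: [Int] -> Int
    leaky = foldl (+) 0

    -- Strict left fold: the accumulator is forced at every step,
    -- so the fold runs in constant space.
    constantSpace :: [Int] -> Int
    constantSpace = foldl' (+) 0

    main :: IO ()
    main = do
      print (constantSpace [1 .. 1000000])
      print (leaky         [1 .. 1000000])  -- watch this one in a heap profile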

We had about a week of interviews following which we made two offers. With our shortlist of one on the academic side, the outcome was a foregone conclusion. On the commercial side we based our decision both on our prior reading of résumés etc and also on how things went in the interview. Again, the key factor was the combination of Haskell programming skill and consulting experience.

Reflection

I think, all in all, given that we had never run a hiring process before, we did OK. We could, however, have done better at keeping candidates informed about the process and the expected timeline. The period for applications was perhaps unnecessarily long. I would welcome any feedback from people who found the process painful.

Ian and I would again like to thank everyone who applied. We appreciate the thought and effort people put in and that so many people are interested in working with us.


World map originally by John Harvey and others, CC-BY-SA.


Haskell Platform download stats

Saturday, 07 August 2010, by Ian Lynagh.
Filed under community.

It's been just over two weeks since the Haskell Platform 2010.2.0.0 was released. The Windows and Mac installers were hosted on our new server, destined to replace haskell.org. As the server isn't yet doing anything else, this gives us some clear data for the traffic associated with the release.

Shortly after the release, the daily bandwidth graph peaked just short of 3MB/s:

Over 24 hours the rate dropped steadily, and then levelled off at 200-300kB/s:

and it isn't showing any signs of falling off yet:

So far, there have been 9613 downloads of the Windows installer, from 3947 unique IPs, and 1068 downloads of the Mac installer, from 959 unique IPs.

You can see how this compares to the previous series of releases here.

