We are hiring Functional Programmers.

The Modelling and Analytics Group originally introduced Functional Programming (especially Haskell) to the Financial Markets business of Standard Chartered Bank. Although several other teams now contribute to our large codebase and are also hiring, the Core team currently has some vacancies.  Please think about applying!

Job Description:
  • Join the Core team of Modelling and Analytics, part of the Financial Markets Front Office, in Singapore or London.
  • Apply Functional Programming concepts to the design and implementation of the bank's unified Analytics library.
  • Support infrastructure requests from the Quant group, flow trading desks, structured trading desks, and structuring teams worldwide.

Candidate qualifications:
  • Master's or PhD in Computer Science, with a focus on one of: functional programming, language and compiler design, or efficient data processing.
  • Excellent programming skills in one of the major statically typed functional languages (ideally Haskell), preferably demonstrated by academic output (research/teaching) or open-source development.
  • Proficiency with C/C++ and debugging / performance tuning tools is a strong advantage.
  • Good communication skills required for interactions with other team members and with trading desks.
  • Familiarity with financial markets is a plus but not required.

How to apply:
  • In the first instance, send your CV to Raphael.Montelatici@sc.com



Haskell Symposium 2009 - videos now online

Videos from the recent Haskell Symposium 2009, held in Edinburgh, are now edited and uploaded.


The original programme of talks is at http://www.haskell.org/haskell-symposium/2009/schedule.html



HUGE Data but small Programs

The slides for our PADL'09 talk, about language design (of DSLs) as a strategy for solving visualisation problems, are now available online. The "programs" for exploratory combinations of visualisations are small. So is the Haskell implementation behind it all.



codec implementations

Many years ago, Jeroen Fokker wrote a "Functional Specification of the JPEG algorithm, and an Implementation for Free". It was great on clarity - really helped me to understand the codec - but totally sucked on performance. With all the recent activity in making pure Haskell libraries with decent speed, utilising fusion and other optimisation techniques, I'm wondering whether it is time to revisit that paper, and see whether we can keep the high-level specificational style, but do rather better on all that number-crunching. In fact, after JPEG, why not take a look at something more challenging, like specifying/implementing the H.264 or Ogg Theora video codecs?
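To give a flavour of the specificational style in question, here is a hypothetical fragment in roughly the Fokker spirit (not taken from his paper): the JPEG zig-zag traversal of an 8x8 coefficient block, written as a pure specification rather than an optimised loop.

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

-- Coordinates of an 8x8 block in zig-zag order: walk the diagonals
-- of constant i+j, alternating direction on even/odd diagonals.
zigzag :: [(Int, Int)]
zigzag = sortBy (comparing key) [ (i, j) | i <- [0..7], j <- [0..7] ]
  where
    key (i, j) = (i + j, if even (i + j) then -i else i)

-- Reorder a row-major 64-element block into zig-zag scan order.
scan :: [a] -> [a]
scan block = [ block !! (i * 8 + j) | (i, j) <- zigzag ]
```

It reads almost like the standard's prose description, which is exactly the appeal; the performance question is whether fusion-friendly rewrites of definitions like these can compete with hand-tuned C.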



Haskell Workshop 2007 videos

Videos of all this year's Haskell Workshop talks, demos, and discussions are now available here. The first location we hosted these at chewed through 100GB of transfers in 24 hours, so go easy on the current host won't you? The videos are in QuickTime (H.264 + AAC audio, 320x240, about 8-12 fps), and are about 120-150MB each.



HsColour gets a new backend

This morning, people in #haskell were playing with colours and bold/underline style formatting in the IRC channel, having just discovered it was possible. Mauke took HsColour's output for ANSI terminal codes and wrote a Perl script to translate it into IRC colour codes instead. Now, how could I live with that - munging the output of a beautiful Haskell program using a Perl script that looks like pure line noise?

Hence, the emergence of a new pure Haskell backend for HsColour: the -mirc formatting flag. It's actually really easy to create new backends, hence the growth from the original 2 (-tty and -html) to 5 formats now. In fact, it was 70 lines of Haskell against 89 lines of Perl. Smaller, more beautiful, more readable, more maintainable.
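A toy sketch of why backends are so cheap to add (this is not HsColour's real API; the names are invented): each backend is just a renderer from classified tokens to text, so a new output format is one small function.

```haskell
-- Hypothetical token classification, standing in for whatever the
-- real highlighter produces.
data TokenClass = Keyword | Varid | Comment | String_ | Plain
  deriving (Eq, Show)

-- mIRC colour codes: a ^C character (\x03) followed by a colour
-- number; a bare ^C switches back to the default.
ircColour :: TokenClass -> Int
ircColour Keyword = 2   -- blue
ircColour Comment = 3   -- green
ircColour String_ = 4   -- red
ircColour _       = 1   -- black

-- The entire "backend": wrap each non-plain token in colour codes.
renderIRC :: [(TokenClass, String)] -> String
renderIRC = concatMap wrap
  where
    wrap (Plain, s) = s
    wrap (c, s)     = '\x03' : show (ircColour c) ++ s ++ "\x03"
```

Swap `renderIRC` for an HTML or ANSI renderer over the same token stream and you have a different backend; the tokeniser is shared.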

And only yesterday, Christophe Poucet contributed another new feature to HsColour: the -lit flag for literate code. Obviously, you don't really want syntax highlighting on the main parts of the document, just in the embedded code fragments. So with a little cannibalisation of the unlit module from the Haskell 1.2 Report, voilà, HsColour now colourises only the code, not the document.
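The unlit idea behind the -lit flag can be sketched in a few lines (a minimal sketch for bird-track style only; the names are invented, and the real unlit module also handles \begin{code} blocks): only lines beginning with '>' are code, so highlighting is restricted to exactly those lines.

```haskell
-- In bird-track literate Haskell, code lines start with '>'.
isCode :: String -> Bool
isCode ('>':_) = True
isCode _       = False

-- 'highlight' is a stand-in for whatever backend renders a code
-- line; prose lines pass through untouched.
colouriseLit :: (String -> String) -> String -> String
colouriseLit highlight = unlines . map perLine . lines
  where
    perLine l | isCode l  = highlight l
              | otherwise = l
```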



Register-based bytecode

I blogged a few months ago about a student who was looking at turning the standard stack-based bytecode into a register-based variant. Well, now the results are in, and they look pretty good. Just to recap, the plan was to take Yhc's backend bytecode (based on the spineless G-machine); to define a new bytecode that implements the same operations, using machine registers instead of a stack for temporary storage; to translate from one bytecode format to the other (including a register allocator); and to implement the corresponding runtime interpreter to execute the bytecodes.
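The translation step can be sketched schematically (this is an invented two-instruction language, not Yhc's actual bytecode): simulate the stack at translation time, mapping each stack slot to a register, so stack code becomes three-address register code.

```haskell
-- A toy stack bytecode: push a constant; pop two values, push sum.
data StackInstr = PushC Int | AddS

-- The corresponding register bytecode.
data RegInstr = LoadR Int Int       -- LoadR reg const
              | AddR  Int Int Int   -- AddR dst src1 src2
  deriving (Eq, Show)

-- The "register allocator" here is deliberately trivial: the stack
-- depth of a slot becomes its register number.
toReg :: [StackInstr] -> [RegInstr]
toReg = go 0
  where
    go _ []             = []
    go d (PushC n : is) = LoadR d n : go (d + 1) is
    go d (AddS    : is) = AddR (d - 2) (d - 2) (d - 1) : go (d - 1) is
```

For example, the stack program [PushC 1, PushC 2, AddS] becomes [LoadR 0 1, LoadR 1 2, AddR 0 0 1]. A real translation would need a proper allocator, but this shows why the naive version is simple and regular.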

The implementation proceeded in stages, at first using an array to represent a set of registers, then later using a cunning despatch mechanism to get hold of real registers. At this point, performance of the new system was about the same as that of the old system. But the cool thing is that register-based code allows more peephole-optimisation
opportunities. You can re-order instructions, provided they do not touch the same registers, and this code motion brings together pairs of instructions which can cancel each other out, or be fused into a different instruction that is cheaper. Alex also did some extensive profiling to discover common instruction/instruction pairings, and instruction/register pairings, and created specialised versions of those recurring operations. (This kind of profiling had already been done on the original nhc98/yhc stack-based bytecode, gaining about 25% in speed. The naive translation to register code actually threw away all that optimisation work, aiming initially for a simple, regular instruction set.)
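A schematic example of the peephole idea (again with an invented instruction set, not Yhc's): a load immediately followed by a move out of the same register can be fused into a single load into the final target, assuming the intermediate register is not live afterwards.

```haskell
type Reg = Int

data Instr
  = Load Reg Int        -- load a constant into a register
  | Move Reg Reg        -- Move dst src: copy src into dst
  | Add  Reg Reg Reg    -- dst := a + b
  deriving (Eq, Show)

-- One pattern from a larger catalogue: fuse Load/Move pairs.
-- (A real pass would check liveness of the intermediate register.)
peephole :: [Instr] -> [Instr]
peephole (Load r n : Move dst src : rest)
  | src == r  = peephole (Load dst n : rest)
peephole (i : rest) = i : peephole rest
peephole []         = []
```

Reordering instructions that touch disjoint registers is what brings such pairs next to each other in the first place, which is why the register form exposes more of these opportunities than the stack form.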

So with some optimisation phases added, the register-based bytecode now runs up to 2x faster than stack-based code. There are more potential optimisations waiting in the wings too, although Alex will not have time to implement these.

One of the interesting negative results is that the first translation scheme ended up making the bytecode size larger (about twice the size on average per program), but that this made the "hello world" program 7x slower! Since yhc loads the program code dynamically at runtime, what this means is that there is a non-linear relationship between file size and loading time. But this is only really a concern for very short-running programs: above 1 second, the gains in running time outweigh the losses at loading time.
