The (PLT) future is here

James Swaine has just announced the availability of futures for mzscheme. Still a work in progress, this is the first step towards native-thread support in PLT Scheme, in the form of a parallelisation library. More concretely, we learn from the documentation that the futures API “represents a best-effort attempt to execute an arbitrary segment of code in parallel”. One creates a future by passing it a thunk, which starts running immediately in a parallel thread. Non-parallelisable operations are detected and cause the future’s thread to block until the main thread touches it: one still needs to write some code to orchestrate the parallel gig. A bit rough, but this is just a pre-alpha release: i’m sure things will only get better and better!

As mentioned, this new functionality is only available for mzscheme, so you’ll need to check out the code from svn and configure the tree with an incantation along the lines of:

$ mkdir build
$ cd build
$ ../src/configure --enable-futures --disable-mred
$ make && make install

That worked for me. Afterwards, i wrote the CPU burner suggested by James:

#lang scheme

(require scheme/future)

;; an endless loop to keep a core busy
(define (loop) (loop))

;; spawn one future per core, then touch each one to wait for it
(define (run)
  (for-each
   touch
   (for/list ([i (in-range 0 (processor-count))])
     (future loop))))
(run)

and, lo and behold, mzscheme cpu-burner.ss is making my two little cores beat at top speed.

Of course, haskellers will hardly be moved: runtime support for multicores using sparks is already available in ghc, with a more robust implementation.

Happy parallel hacking!

Enjoying Haskell

I’ve been reading about Haskell quite a bit over the last few months, writing some actual code, and liking the language more and more. After many years favouring dynamically typed languages, i’m beginning to really appreciate Haskell’s type system and the benefits it brings to the table.

A common argument from the static typing camp is that the compiler catches a whole class of bugs for you, to which the dynamic typing camp answers that a good test suite (which you need anyway for any serious development) will catch those relatively trivial bugs for you. I tend to agree with the dynamic faction on this issue, but i think that the strength of static typing (coupled with good type inference) is not really about the compiler catching typing bugs but, rather, about enforcing useful constraints. When you write Haskell, you have to think hard about your data types and the functions using them; the compiler will keep complaining and, most importantly, the code will feel awkward and somehow ad hoc until you find a good set of types to solve your problem.
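To make this less abstract, here’s a toy sketch of my own (every name in it is made up) of the kind of constraint i mean: encode an invariant in the types, and the compiler simply won’t let you violate it:

-- a made-up example: the types guarantee that unverified addresses
-- can never reach sendMail
newtype Unverified = Unverified String
newtype Verified = Verified String

verify :: Unverified -> Maybe Verified
verify (Unverified addr)
  | '@' `elem` addr = Just (Verified addr)  -- toy validation rule
  | otherwise       = Nothing

sendMail :: Verified -> IO ()
sendMail (Verified addr) = putStrLn ("sending to " ++ addr)

main :: IO ()
main = maybe (putStrLn "refusing to send") sendMail
             (verify (Unverified "user@example.com"))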

The limits to your freedom imposed by the type system entail, in my experience, a boost in the amount of thought and imagination that i put into my design and implementation, in much the same way as the constraints imposed by metre and rhythm on poetic writing boost creativity and actually help to produce a beautiful text. Or, in fact, in the same way as any kind of constraint in any creative endeavour helps (paradoxically, at first sight) in attaining beauty, or, at least, in having fun during the process.

In my experience, the process of writing a program or library in any language is always a struggle to find the right way of expressing the solution to a problem, as the culmination of a series of approximations. The code feels better, more expressive and easy-flowing with each rewrite, until something just clicks and i get this feeling that i’ve finally captured the essence of the problem (a litmus test being that it then feels natural to extend the solution to cases i hadn’t thought of when writing it, as if, somehow, the new solutions were already there, waiting for you to discover them). And i’m finding that the powerful type system offered by Haskell (think not only of vanilla Hindley-Milner, but also of extensions such as GADTs or type families) is helping me reach the (local) optimum quicker, and that satisfying its constraints means i’m closer to the final solution when my code compiles for the first time. You often hear Haskell programmers saying something similar (“once my code compiles, it works”), and i think it’s mostly true, except that the real reason is not that the compiler is catching trivial typing bugs but, rather, that the constraints imposed by the type system are making you think harder and find the right solution. Same thing with monads, and the clean separation they provide for stateful computations: again, you must think carefully about the right combination of monads and pure code to solve the problem, and most of the time your code will simply not type check if you don’t get the solution right.
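As a tiny illustration of that separation (a sketch of my own, assuming the mtl package is around), note how the type State Int Int quarantines the stateful part, while step stays pure and trivially testable:

import Control.Monad.State  -- from the mtl package

-- pure logic: no state anywhere in sight
step :: Int -> Int
step n = n * 2 + 1

-- stateful orchestration, quarantined in the State monad: the type
-- tells you exactly where the mutation lives
bump :: State Int Int
bump = do
  n <- get
  let n' = step n
  put n'
  return n'

main :: IO ()
main = print (runState (bump >> bump) 1)  -- prints (7,7)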

There are two more ways in which Haskell’s type system is helping me write better programs. Two ways that are especially salient when the code becomes sizeable enough. The first one is self-documentation: seeing the type of my functions (or asking the interpreter for them) instantly tells me almost everything i need to know to use them; in fact, when writing in dynamic languages i keep annotating function signatures with this same information, only that there i’m all by myself when it comes to ensuring that the information is right. PLT’s contract system is but a recognition of the usefulness of typing in this regard, although i much prefer the terseness and notational elegance of Haskell’s type signatures over the much more verbose and, to my eyes, somewhat clunky notation used by PLT (which is not really PLT’s fault, being as it is a very schemish notation). Let me stress here that having a REPL such as ghci is a god-send (and, to me, a necessity for really enjoying the language): it will tell me the type of an expression in much the same way as decent Lisp or Scheme environments will report a function’s signature.
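For instance, a couple of quick queries (the list types shown are what ghci reports):

ghci> :t map
map :: (a -> b) -> [a] -> [b]
ghci> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b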

The second way Haskell lends a helping hand with non-trivial code bases is refactoring. As i mentioned above, i rewrite my programs several times as a rule, and rewrites almost always involve modifying data structures or adding new ones. As i grow older, i find it more and more difficult to keep in my head all the places and ways a given data structure is used in my programs, and with dynamic languages i often fall back to grepping the source code to find them. And again, their plasticity often works against me, in that they let me use those data structures in crooked ways, or forget to take into account new fields or constructors for a modified data type. Haskell’s compiler has proved an invaluable ally in my refactorings and, by comparison, modifying and maintaining my bigger dynamic programs is not as fun as it used to be.
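A minimal example of what i mean (Shape is made up, of course): extend the data type with a new constructor, and ghc run with -Wall (or -fwarn-incomplete-patterns) will point at every pattern match left to update:

{-# OPTIONS_GHC -Wall #-}
module Shapes where

data Shape = Circle Double | Square Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Square s) = s * s
area (Rect w h) = w * h
-- had Rect been added without updating area, -Wall would have
-- flagged the incomplete pattern match above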

As an aside, types are not the only thing i’m finding enjoyable about Haskell. Its astonishing capability to express very abstract problems with a remarkable economy of expression (due, in part, to its highly tuned syntax) is extremely useful. To my mind, it mimics the process by which in math we solve harder and harder problems by abstracting more and more, cramming more relevant information into less space (some cognitive science writers will tell you that thought, and even consciousness, consists in our ability to compress information). That means that i can express my solutions by capturing them in very high-level descriptions: initially, that makes them harder to understand, but once i feel comfortable with the basic concepts and operations, they scale up much, much better than more verbose, less sophisticated ones. Using these new hard-earned concepts, i can solve much harder problems without adding to the complexity of the code in a significant way (one could say, using a loose analogy, that the solutions grow logarithmically with the complexity of the problem instead of polynomially or exponentially). A direct consequence of this expressiveness is that some well-written Haskell programs are, hands down, the most beautiful pieces of code i’ve ever seen (just pick a random post at, say, A Neighborhood of Infinity and you’ll see what i mean; or read Richard Bird’s Sudoku solver and compare his solution with one written in your favourite programming language).
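A stock example of that economy (Haskell folklore, not mine): the powerset of a list in one line, letting the list monad make both choices for each element:

import Control.Monad (filterM)

-- every subset arises from deciding, element by element, whether to
-- keep it; filterM over the list monad explores both choices at once
powerset :: [a] -> [[a]]
powerset = filterM (const [True, False])

main :: IO ()
main = print (powerset [1, 2, 3 :: Int])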

Finally, let me say that i find programming in Haskell more difficult than programming in any other language i’ve used, with perhaps the only exception of Prolog. Sometimes, considerably so. And that’s perfectly fine with me. For one thing, it makes it more interesting and rewarding. In addition, i’m convinced that that’s the price to pay for being able to solve harder problems. I take issue with the frequent pleas to the effect that programming should be effortless or trivial: writing good programs is hard, and mastering the tools for doing it well takes, as with any other engineering or scientific discipline, hard work (why, i don’t hear anyone complaining that building bridges or computing the effects of gravitational lensing is too difficult). There’s no silver bullet.

All that said, please don’t read the above as an apostasy letter announcing the embrace of a new religion. There’s still much to be said in favour of dynamic languages, especially those in the Lisp family, whose malleability (fostered by their macro systems) is also a strength, in that they allow you to replicate some of the virtues i’ve been extolling in this post. Haskell lacks the power of homoiconicity, and its template mechanisms feel cranky by comparison, which is a serious drawback in some contexts (i have yet to decide how serious, as i have yet to decide how much i’m missing in reflection capabilities). As always, it is a matter of trade-offs and, fortunately, nobody will charge you with high treason for using the language best fitted to the problem at hand, or so i hope.

flib

Update: We’ve moved the date of our first meeting to June 17th, so you’re still in time to join us! If you want to follow our adventures, you can also ask for an invitation to our mailing list.

The other day, Andy and I met Jos, an experienced schemer who lives near Barcelona, with the idea of having lunch, talking about Scheme, and creating a Scheme Users Group. After a bit of discussion, we agreed to widen the group’s scope, and start what we’re calling Fringe Languages In Barcelona (FLIB). The plan is to hold periodic meetings with a main presentation followed by some lightning talks (the latter were a complete success at ILC, and we’d like to try and see how they work for us), with as much discussion interleaved as we see fit. We’ll have some refreshments available and, since we’re meeting in the very center of the old city, visits to pubs or a restaurant for dinner and further socializing are to be expected.

As i said, we’re expecting much discussion about Scheme and Lisp, but we’re by no means ruling out other fine languages. For instance, the talk for the inaugural session (scheduled for June 17th, 7:30 pm) is entitled The implementation of FUEL, Factor’s Ultimate Emacs Library, and it will include a short introduction to Factor (yes, i am the victim speaker). Jos will come next, the same day, with a lightning talk about PLT Redex. We have free slots for more lightning talks: you are invited not only to come, but to give one if you’re so inclined. This being our first meeting, there will also be some time for logistics and organisation.

So, if you’re near here by then, by all means, come in and join the fun:

Calle del Pi 3 Principal Interior (first floor)
Barcelona

Not really needed, but if you’re thinking about coming, sending me a mail beforehand will help us make sure that we’ve got enough food and drinks.

We’re looking forward to getting FLIB started, and we’re sure that at least a few more fringers are coming! Don’t miss it!

A Taste of Haskell

I finally found time to watch Simon Peyton Jones’ recent OSCON 2007 tutorial, A Taste of Haskell (split into Parts I and II, around 90 minutes each). From the many comments around and a quick look at the slides, I knew it was a basic intro for non-functional programmers and kept wondering if it would be worth the time. From this you can easily infer that either i am a moron or i hadn’t seen a talk of Simon’s before. Not that it totally rules out the first option, but actually i hadn’t! As it turns out, Simon is a delight to hear and see in action. He’s full of energy and enthusiasm and knows how to transmit his passion for what he does. In this regard, this talk reminds me of Abelson and Sussman’s SICP videos. These guys are not your ivory-tower academic type. They have such a great time doing what they do that they cannot help showing it off. Perhaps it’s because they got their hands dirty implementing the systems they theorize about. At any rate, Simon’s tutorial is absolutely worth watching. If you’re new to Haskell and functional programming, you belong to the perfect audience for this crash course. It is also remarkable that the attendees were quite participative and asked lots of questions that made the lecture quite lively. That, and Simon’s refreshing sense of humour, makes these videos great fun even for those of you who know everything about monads. Enjoy!

Quantum hype

One of the things i would really, really like to see some day is a working quantum computer. Quantum mechanics is deep magic that nobody really understands, but we have learnt a lot about how to use it during the last century, including its application to some kinds of computation. As you surely know, the most outstanding quantum algorithm is Shor’s prime factorization, which factors a number N with time complexity O((log N)^3). That means that we go from super-polynomial time (the best known classical algorithms are sub-exponential, but far from polynomial) to polynomial time when going from classical to quantum for this particular problem (and related ones: the Wikipedia article on quantum computing gives a pretty good survey; see also David Deutsch’s introductory lectures). I’m stressing the last point because there’s a widespread misconception that quantum computers will be able to solve NP-complete problems in polynomial time. Not so. On the contrary, experts are by now almost sure that this won’t be the case (note, by the way, that factoring is not known to be NP-complete).

The most recent examples of such bogus claims are the reports on D-Wave’s demos of their ‘quantum computer’, which are surrounded by piles of hype. So please, before taking them at face value, see Scott Aaronson’s The Orion Quantum Computer Anti-Hype FAQ (with follow-ups here, here and here). Scott Aaronson is an expert in the field and the author of a PhD thesis entitled Limits on Efficient Computation in the Physical World (for a less technical introduction to quantum computing, see his nice Quantum Computing Since Democritus lectures). For an executive summary, here’s the first entry in the FAQ:

  • Q: Thanks to D-Wave Systems — a startup company that’s been in the news lately for its soon-to-be-unveiled “Orion” quantum computer — is humanity now on the verge of being able to solve NP-complete problems in polynomial time?
  • A: No. We’re also not on the verge of being able to build perpetual-motion machines or travel faster than light.

The old rule applies: no silver bullet. But, of course, their limitations notwithstanding, quantum computers would (will?) be an interesting challenge for us programmers, and we don’t have to wait for the hardware to play with them: see this Brief survey of quantum programming languages, or a more in-depth description of what an imperative quantum programming language looks like, although, if you ask me, functional quantum languages like QML are nicer. Simon Gay has also put together a comprehensive Bibliography of Quantum Programming Languages.

Finally, if you’d rather write some code, there’s André van Tonder’s Scheme simulator (which will work with any R5RS scheme), and a QML simulator written in Haskell. Haskellers will also enjoy Jerzy Karczmarczuk’s Structure and Interpretation of Quantum Mechanics: a functional framework.
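And if you fancy a taste without installing anything, here’s a back-of-the-envelope sketch of my own (nothing to do with van Tonder’s simulator): a one-qubit state as a pair of amplitudes, with the Hadamard gate acting on it:

import Data.Complex

-- amplitudes for |0> and |1>
type Qubit = (Complex Double, Complex Double)

-- the Hadamard gate sends a basis state to an equal superposition
hadamard :: Qubit -> Qubit
hadamard (a, b) = ((a + b) / sqrt 2, (a - b) / sqrt 2)

-- probability of measuring |0>
prob0 :: Qubit -> Double
prob0 (a, _) = magnitude a ^ (2 :: Int)

main :: IO ()
main = print (prob0 (hadamard (1, 0)))  -- 0.5, modulo floating point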

Happy quantum hacking!

Writing A Lisp Interpreter In Haskell

Chances are you’ve already seen mentioned somewhere defmacro’s Writing A Lisp Interpreter In Haskell, but just in case:

A while ago, after what now seems like eternity of flirting with Haskell articles and papers, I finally crossed the boundary between theory and practice and downloaded a Haskell compiler. I decided to do a field evaluation of the language by two means. I was going to solve a problem in a domain that Haskell is known to excel at followed by a real world problem that hasn’t had much exploration in Haskell. Picking the problems was easy. There’s a lot of folklore that suggests Haskell is great for building compilers and interpreters so I didn’t have to think long to pick a problem that would be self contained, reasonably short, and fun – writing an interpreter of a Lisp dialect.
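To give you a flavour of why interpreters are such a sweet spot for Haskell, here’s a drastically reduced sketch of my own (nothing to do with the article’s actual code): a Lisp-ish expression type and a three-equation evaluator for arithmetic:

data Expr = Num Integer
          | Sym String
          | List [Expr]
  deriving Show

eval :: Expr -> Integer
eval (Num n) = n
eval (List (Sym "+" : args)) = sum (map eval args)
eval (List (Sym "*" : args)) = product (map eval args)
eval e = error ("cannot evaluate: " ++ show e)

main :: IO ()
main = print (eval (List [Sym "+", Num 1, List [Sym "*", Num 2, Num 3]]))
-- prints 7, i.e. (+ 1 (* 2 3))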

Also interesting in defmacro.org, these ramblings on The Nature of Lisp or on Functional Programming for the rest of us.

I’ve not read these articles in full, but a first skim left a pretty good impression.

Becoming a Haskell developer

If you like programming languages and compilers and have some free time on your hands, please take a look at this roadmap to become a YHC developer. The York Haskell Compiler is a relatively new effort to write a, well, Haskell compiler. Its development team looks decidedly newbie-friendly, and the job is one of the most interesting (if a bit underpaid) i’ve found lately. Right now, YHC seems to be under (heavy, one would say) development. It’s based on nhc98, but already outperforms it (as well as the Hugs interpreter). Another interesting trait: it compiles to cross-platform bytecode, runnable from a small runtime system (and there’s even a little surprise for all you pythonistas). This nice presentation (PDF) gives some further details on the system’s innards.

Definitely worth a closer look.
