Update: See also Patent Absurdity for a 30-minute video where Eben Moglen, Karen Sandler and others talk about the history and present state of software patents.
I’ve just stumbled upon (via a post in John Cook’s blog) an old kuro5hin article entitled The greatest program ever written. It describes 1K chess, a program by David Horne capable of playing chess on the Sinclair ZX81, a computer endowed with a total memory of 1024 bytes. I didn’t believe it either, until i saw it for myself. Kids these days!
Update: I’ve just learnt, thanks to a comment below, that there’s actually an earlier chess program of similar characteristics, written by Peter Jennings: Microchess, for the KIM-1 computer. The linked web pages contain lots of information about the program, including listings. Pretty fun stuff.
Today is the anniversary of programming musings: it’s been 4 years, 141 posts, 429 comments and almost 300,000 visits since i wrote my first post. A drop in the web ocean, i know, but enough to make waves in my local pond, and much more than i expected. These are the twelve most visited entries so far:
If you’re still thinking about reading SICP, here’s an article, written by Abelson and Sussman shortly after publishing the first edition of the book, that may dispel your doubts: Lisp: A language for stratified design. It’s a quick distillation of some of the central themes discussed at length in SICP, with accompanying code. Some of them may seem old hat these days (the article was published in 1987), but they’re as relevant as they were back in the day, and it’s difficult to find them presented in as good a way (let alone a better one). Its dozen pages are full of quotable pearls of wisdom. For instance, right from the start:
Just as everyday thoughts are expressed in natural language, and formal deductions are expressed in mathematical language, methodological thoughts are expressed in programming languages. A programming language is a method for communicating methods, not just a means for getting a computer to perform operations; programs are written for people to read as much as they are written for machines to execute.
and later on
[…] if a methodology is to be robust, it must have more generality than is needed for the particular application. The means for combining the parts must allow for after-the-fact changes in the design plan as bugs are discovered and as requirements change. It must be easy to substitute parts for one another and to vary the arrangement by which parts are combined.
The examples include Henderson’s picture language, which motivates a discussion of metalinguistic abstraction (or DSLs, as they’ve been rediscovered these days):
Part of the wonder of computation is that we have the freedom to change the framework by which the descriptions of processes are combined. If we can precisely describe a system in any well-defined notation, then we can build an interpreter to execute programs expressed in the new notation, or we can build a compiler to translate programs expressed in the new notation into any other programming language.
There’s also a synopsis of the implementation, in Scheme, of a rule language aimed at simplifying algebraic expressions, together with its accompanying interpreter. The conclusion is that you want to write languages and interpreters, in a variety of paradigms; and that you probably want to write them using Lisp while sitting in greenhouses:
The truth is that Lisp is not the right language for any particular problem. Rather, Lisp encourages one to attack a new problem by implementing new languages tailored to that problem. Such a language might embody an alternative computational paradigm […] A linguistic approach to design is an essential aspect not only of programming but of engineering design in general. Perhaps that is why Lisp […] still seems new and adaptable, and continues to accommodate current ideas about programming methodology.
As true today as it was twenty-odd years ago, if you ask me.
Here’s a list of (mostly) Scheme video lectures, based on the links posted in this thread of the PLT mailing list, with some extra bits taken from my own collection:
And i cannot help mentioning Gerry Sussman’s The role of programming in the formulation of ideas (and its accompanying article), as well as Oleg Kiselyov’s Normal-order syntax-rules and proving the fix-point of call/cc:
And don’t miss this gallery for some really beautiful ones.
James Swaine has just announced the availability of futures for mzscheme. Still a work in progress, this is the first step towards native-thread support in PLT Scheme, in the form of a parallelisation library. More concretely, we learn from the documentation that the futures API “represents a best-effort attempt to execute an arbitrary segment of code in parallel”. One creates a future by passing a thunk to the eponymous function, and the thunk starts running immediately as a parallel process. Non-parallelisable procedures are detected, and cause the independent thread to block until the main thread touches the future: one still needs to write some code to orchestrate the parallel gig. A bit rough, but this is just a pre-alpha release: i’m sure things will only get better and better!
As mentioned, this new functionality is only available for mzscheme, so you’ll need to check out the code from svn and configure the tree with an incantation along the lines of:
$ mkdir build
$ cd build
$ ../src/configure --enable-futures --disable-mred
$ make && make install
That worked for me. Afterwards, i wrote the CPU burner suggested by James:
#lang scheme

(require scheme/future)

(define (loop) (loop))

(define (run)
  (for-each touch
            (for/list ([i (in-range 0 (processor-count))])
              (future loop))))

(run)
and, lo and behold,
mzscheme cpu-burner.ss is making my two little cores beat at top speed.
Happy parallel hacking!
I’ve been reading about Haskell quite a bit during the last few months, writing some actual code, and liking the language more and more. After many years favouring dynamically typed languages, i’m beginning to really appreciate Haskell’s type system and the benefits it brings to the table.
A common argument from the static typing camp is that the compiler catches a whole class of bugs for you, to which the dynamic typing camp answers that a good test suite (which you need anyway for any serious development) will catch those relatively trivial bugs. I tend to agree with the dynamic faction on this issue, but i think that the strength of static typing (coupled with good type inference) lies not in the compiler catching typing bugs but, rather, in enforcing useful constraints. When you write Haskell, you have to think hard about your data types and the functions using them; the compiler will keep complaining and, most importantly, the code will feel awkward and somehow ad hoc until you find a good set of types to solve your problem.
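To give a flavour of the kind of constraint i mean, here’s a standard textbook example (not anything of mine): a tiny expression language where a GADT (one of the extensions i mention below) makes ill-typed expressions unrepresentable, so the compiler is thinking about the problem with you.

{-# LANGUAGE GADTs #-}

-- Each constructor's result type records what the expression evaluates
-- to, so an ill-typed tree such as (Add (BoolLit True) (IntLit 1)) is
-- rejected at compile time.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- The interpreter needs no runtime type checks: the GADT guarantees it
-- is only ever applied to well-typed expressions.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolLit True) (Add (IntLit 1) (IntLit 2)) (IntLit 0)))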
The limits to your freedom imposed by the type system entail, in my experience, a boost in the amount of thought and imagination that i put into my designs and implementations, in much the same way as the constraints imposed by metre and rhythm on poetry boost creativity and actually help produce a beautiful text. Or, in fact, in the same way as any kind of constraint in any creative endeavour helps (paradoxically, at first sight) in attaining beauty, or, at least, in having fun during the process.
In my experience, the process of writing a program or library, in any language, is always a struggle to find the right way of expressing the solution to a problem, as the culmination of a series of approximations. The code feels better, more expressive and easy-flowing with each rewrite, until something just clicks and i get the feeling that i’ve finally captured the essence of the problem (a litmus test being that it then becomes natural to extend the solution to cases i hadn’t thought of when writing it, as if, somehow, the new solutions were already there, waiting for you to discover them). And i’m finding that the powerful type system offered by Haskell (think not only of vanilla Hindley-Milner, but also of extensions such as GADTs or type families) is helping me reach the (local) optimum quicker: satisfying its constraints means i’m closer to the final solution by the time my code compiles for the first time. You often hear Haskell programmers saying something similar (“once my code compiles, it works”), and i think it’s mostly true, except that the real reason is not that the compiler is catching trivial typing bugs but, rather, that the constraints imposed by the type system make you think harder and find the right solution. The same goes for monads, and the clean separation they provide for stateful computations: again, you must think carefully about the right combination of monads and pure code to solve the problem, and most of the time your code will simply not type check if you don’t get the solution right.
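As a minimal sketch of what that separation looks like (a toy example of my own, assuming the mtl package), here’s a counter threaded through a tree traversal with the State monad: the plumbing lives in one place, the walk itself stays pure, and if you get the types wrong it simply won’t compile.

import Control.Monad.State (State, evalState, get, put)

data Tree a = Leaf a | Node (Tree a) (Tree a) deriving Show

-- Number the leaves from left to right. The counter lives inside the
-- State monad; the tree-walking logic itself remains pure.
number :: Tree a -> Tree (a, Int)
number t = evalState (go t) 0
  where
    go (Leaf x)   = do n <- get
                       put (n + 1)
                       return (Leaf (x, n))
    go (Node l r) = do l' <- go l
                       r' <- go r
                       return (Node l' r')

main :: IO ()
main = print (number (Node (Leaf 'a') (Node (Leaf 'b') (Leaf 'c'))))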
There are two more ways in which Haskell’s type system is helping me write better programs, both especially noticeable when the code base grows sizeable enough. The first one is self-documentation: seeing the type of my functions (or asking the interpreter for it) instantly tells me almost everything i need to know to use them; in fact, when writing in dynamic languages i keep annotating function signatures with this same information, only that there i’m on my own to ensure that the information stays accurate. PLT’s contract system is but a recognition of the usefulness of typing in this regard, although i much prefer the terseness and notational elegance of Haskell’s type signatures over the much more verbose and, to my eyes, somewhat clunky notation used by PLT (which is not really PLT’s fault, being as it is a very schemish notation). Let me stress here that having a REPL such as ghci is a godsend (and, to me, a necessity for really enjoying the language): it will tell me the type of an expression in much the same way as decent Lisp or Scheme environments will report a function’s signature.
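As a concrete (and entirely hypothetical) illustration, the one-line signature of this little function already tells a caller nearly everything: it works for any key type with equality, any value type, and returns every match; in ghci, :t lookupAll prints that same line back at you.

-- The signature is the documentation: for any key type k with equality
-- and any value type v, return the values of all pairs matching the key.
lookupAll :: Eq k => k -> [(k, v)] -> [v]
lookupAll k = map snd . filter ((== k) . fst)

main :: IO ()
main = print (lookupAll 'a' [('a', 1), ('b', 2), ('a', 3 :: Int)])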
The second way Haskell lends a helping hand with a non-trivial code base is refactoring. As i mentioned above, i rewrite my programs several times as a rule, and rewrites almost always involve modifying data structures or adding new ones. As i grow older, i find it more and more difficult to keep in my head all the places and ways a given data structure is used in my programs, and with dynamic languages i often fall back to grepping the source code to find them. And again, their plasticity often works against me, in that they let me use those data structures in crooked ways, or forget to take into account new fields or constructors for a modified data type. Haskell’s compiler has proved an invaluable ally in my refactorings and, by comparison, modifying and maintaining my bigger dynamic programs is not as fun as it used to be.
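Here’s a contrived sketch of that safety net: compile with warnings enabled and, if i later extend the type below with a new constructor, GHC will flag every pattern match the change has left incomplete, no grepping involved.

{-# OPTIONS_GHC -Wall #-}

-- Extending Shape with, say, a Triangle constructor later on makes GHC
-- warn that area's patterns are no longer exhaustive, pointing at each
-- site the refactoring left behind.
data Shape = Circle Double
           | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

main :: IO ()
main = print (map area [Circle 1, Rect 2 3])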
As an aside, types are not the only thing i’m finding enjoyable about Haskell. Its astonishing capability to express very abstract problems with a remarkable economy of expression (due, in part, to its highly tuned syntax) is extremely useful. To my mind, it mimics the process by which, in maths, we solve harder and harder problems by abstracting more and more, cramming more relevant information into less space (some cognitive science writers will tell you that thought, and even consciousness, consists in our ability to compress information). That means that i can express my solutions as very high-level descriptions: initially, that makes them harder to understand, but once i feel comfortable with the basic concepts and operations, they scale up much, much better than more verbose, less sophisticated ones. Using these new hard-earned concepts, i can solve much harder problems without adding to the complexity of the code in a significant way (one could say, using a loose analogy, that the size of the solutions grows logarithmically with the problem’s complexity instead of polynomially or exponentially). A direct consequence of this expressiveness is that some well-written Haskell programs are, hands down, the most beautiful pieces of code i’ve ever seen (just pick a random post at, say, A Neighborhood of Infinity and you’ll see what i mean; or read Richard Bird’s Sudoku solver and compare his solution with one written in your favourite programming language).
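A classic one-liner (Haskell folklore, not from any of the posts linked above) gives a taste of that compression: run a “predicate” in the list monad and filterM enumerates every subset of a list.

import Control.Monad (filterM)

-- For each element, the list-monad "predicate" answers both True and
-- False, so filterM explores every way of keeping or dropping it: the
-- result is the powerset.
powerset :: [a] -> [[a]]
powerset = filterM (const [True, False])

main :: IO ()
main = print (powerset [1, 2, 3 :: Int])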
Finally, let me say that i find programming in Haskell more difficult than programming in any other language i’ve used, with perhaps the only exception of Prolog. Sometimes, considerably so. And that’s perfectly fine with me. For one thing, it makes it more interesting and rewarding. In addition, i’m convinced that that’s the price to pay for being able to solve harder problems. I take issue with the frequent pleas to the effect that programming should be effortless or trivial: writing good programs is hard, and mastering the tools for doing it well takes, as in any other engineering or scientific discipline, hard work (why, i don’t hear anyone complaining that building bridges or computing the effects of gravitational lensing is too difficult). There’s no silver bullet.
All that said, please don’t read the above as an apostasy letter announcing the embrace of a new religion. There’s still much to be said in favour of dynamic languages, especially those in the Lisp family, whose malleability (fostered by their macro systems) is a strength of its own, in that it allows you to replicate some of the virtues i’ve been extolling in this post. Haskell lacks the power of homoiconicity, and its template mechanisms feel clunky in comparison, which is a serious drawback in some contexts (i have yet to decide how serious, as i have yet to decide how much i miss reflection capabilities). As always, it’s a matter of trade-offs and, fortunately, nobody will charge you with high treason for using the language best fitted to the problem at hand, or so i hope.