Not surprisingly, the multi-touch interface i mentioned a few days ago has also caught the attention of some of the folks at the Squeak-devel list. After all, the Squeak and Smalltalk worlds are about the only programming environments trying to go beyond the typical file-based coding style. Of course, there’s much to say in favour of ASCII files as the primary code substratum, as well as against. i’m not entering into a profound debate on these issues in this post. Instead, i will share a few pointers to systems and research that try to go beyond traditional input methods and that i find amusing, and let you decide on their relevance.
First, some pointers from the above-mentioned Squeak-devel thread. Besides a link to the Multi-Touch Interaction Research site (which gives some more information on the guys behind the fancy video), Kitty Tech (non-flash entry here) has been mentioned a couple of times. As explained in the site, the Kitty project provides a futuristic glove letting you type without a keyboard: just close a couple of contacts (by touching the thumb to any other finger) to produce a given character. The image on the right shows how it works. Besides being fun, this looks like a good keyboard replacement for portable devices, but other than that, i don’t see it as a revolution in the way we program. That said, it may help as a complement to things like multi-touch interfaces for my dreamed Self-like environment. The TactaPad would probably be more interesting for such an environment: it’s a system that shows an overlay of your hands (which move on a tactile pad) on the screen, and lets you manipulate objects in there and actually feel them. The application to drawing programs is obvious, and hence to my pet prototype-based system. Another nice thing about TactaPad is that it is also a multi-touch system: you can use all your fingers simultaneously to manipulate objects, going far beyond the functionality a mouse provides. Take a look at these videos, they’re really amusing. Add a split keyboard or a couple of Kitty gloves and there you go.
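The Kitty glove is, at heart, a chord keyboard: the thumb plus a combination of fingers selects a character. A minimal sketch of that decoding idea, in Python (the chord table below is invented for illustration; the real Kitty layout differs):

```python
# Hypothetical chord-to-character table: each chord is the set of
# fingers touching the thumb. This mapping is made up for the example.
CHORDS = {
    frozenset(["index"]): "e",
    frozenset(["middle"]): "t",
    frozenset(["ring"]): "a",
    frozenset(["pinky"]): "o",
    frozenset(["index", "middle"]): "s",
}

def decode(fingers):
    """Map the set of fingers in contact with the thumb to a character."""
    return CHORDS.get(frozenset(fingers), "?")

print(decode(["index"]))            # single-finger chord
print(decode(["index", "middle"]))  # two-finger chord
```

The point is simply that a handful of contacts yields enough distinct chords (2^4 - 1 = 15 per thumb, more with modifiers) to cover an alphabet, which is why such a glove can replace a keyboard at all.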
Of course, applying these innovative technologies to programming depends on developing more visual, less text-based languages and environments; and the question of whether such environments can be as powerful and convenient as the traditional ones is controversial, to say the least. Before discovering Squeak and, especially, Self, i would probably have joined the skeptical, conservative side. Now i’m not so sure.
Needless to say, you’re not limited to your fingers when it comes to alternative input technologies. Voice recognition systems have a long history, as shown in this excellent collection of videos (some of them from the seventies!) from MIT’s Speech Interfaces Group. Somehow, voice input has not caught on, either for general use or in programming environments. i still think it can be useful, though. An interesting project in this area is DivaScheme, which enhances DrScheme with voice-based program editing. It works by first defining a series of commands for structured Scheme code editing, and then providing voice access to those commands. (Unfortunately, DivaScheme works out of the box only with an expensive, Windows-based speech recognition system, and i’ll have to spend an afternoon configuring the free Sphinx system to try it out.) Not as fancy as the previous stuff, but maybe more useful. Let me say in passing that the idea of defining a set of editing commands that are s-expression based instead of character-based is an obvious step that should be taken by any decent editor (and, of course, used by any decent programmer). Emacs lispers, for instance, should never leave home without Taylor Campbell’s paredit (or without reading Marco’s highly opinionated guide on Lisp editing).
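To make the s-expression-versus-character point concrete, here is a toy sketch (in Python, with invented function names) of what such editing commands look like: the editor parses the buffer into a tree and the commands transform the tree, never raw characters, so they can’t produce unbalanced parentheses:

```python
# Toy illustration of s-expression-based editing: commands operate on
# the parse tree, not on character offsets. Parser, printer, and the
# "wrap" command are all invented for this sketch.
def parse(src):
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()
    def read(pos):
        if tokens[pos] == "(":
            lst, pos = [], pos + 1
            while tokens[pos] != ")":
                node, pos = read(pos)
                lst.append(node)
            return lst, pos + 1
        return tokens[pos], pos + 1
    return read(0)[0]

def unparse(node):
    if isinstance(node, list):
        return "(" + " ".join(unparse(n) for n in node) + ")"
    return node

def wrap(sexp, n):
    """Structured edit: wrap the nth subexpression in parentheses."""
    sexp = sexp[:]          # non-destructive, as an editor undo would want
    sexp[n] = [sexp[n]]
    return sexp

expr = parse("(define x 42)")
print(unparse(wrap(expr, 2)))   # wraps the third subexpression
```

Because every command is a tree transformation, the result is always a well-formed s-expression, which is exactly the guarantee paredit gives you and the reason such commands map so naturally onto a small voice vocabulary.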