A Java implementation of Ruby's gsub

In Ruby, gsub is like Java's String.replaceAll. Ruby has first-class blocks, though, and when you have those, your programming style changes. Kent Beck's exhortation to "do the simplest thing that could possibly work" sounds different when you have first-class blocks. The simplest thing has a habit of being the most general, and it's only when you get tired of some idiomatic usage that you come back and write yourself a convenience routine.

Because of this, Ruby's gsub is a lot more general than Java's String.replaceAll. As an alternative to a replacement string – which is often all you need – Ruby lets you pass a block of code whose result will be the replacement. It's similar to, but nicer than, Perl's /e modifier on the s/// operator. It's also similar to, but a lot more concise than, the sample code in the JavaDoc for Matcher. It also gives a much better implementation of example code in the Java Almanac, a cut-and-paster's compendium of terrible code (an amusing exercise, if it weren't for copyright law, would be to take every example on that site and demonstrate how it should really be done).

Here's my current best attempt to come close to this idiom in Java; spoiled mainly by Java's lack of regular expression literals:

package e.util;

import java.util.regex.*;

/**
 * A rewriter does a global substitution in the strings passed to its
 * 'rewrite' method. It uses the pattern supplied to its constructor,
 * and is like 'String.replaceAll' except for the fact that its
 * replacement strings are generated by invoking a method you write,
 * rather than from another string.
 *
 * This class is supposed to be equivalent to Ruby's 'gsub' when given
 * a block. This is the nicest syntax I've managed to come up with in
 * Java so far. It's not too bad, and might actually be preferable if
 * you want to do the same rewriting to a number of strings in the same
 * method or class.
 *
 * See the example 'main' for a sample of how to use this class.
 *
 * @author Elliott Hughes
 * @author Roger Millington
 */
public abstract class Rewriter {
    private Pattern pattern;
    private Matcher matcher;

    /**
     * Constructs a rewriter using the given regular expression;
     * the syntax is the same as for 'Pattern.compile'.
     */
    public Rewriter(String regularExpression) {
        this.pattern = Pattern.compile(regularExpression);
    }

    /**
     * Returns the input subsequence captured by the given group
     * during the previous match operation.
     */
    public String group(int i) {
        return matcher.group(i);
    }

    /**
     * Overridden to compute a replacement for each match. Use
     * the method 'group' to access the captured groups.
     */
    public abstract String replacement();

    /**
     * Returns the result of rewriting 'original' by invoking
     * the method 'replacement' for each match of the regular
     * expression supplied to the constructor.
     */
    public String rewrite(CharSequence original) {
        return rewrite(original, new StringBuffer(original.length())).toString();
    }

    /**
     * Returns the result of appending the rewritten 'original' to
     * 'destination'. We have to use StringBuffer rather than the
     * more obvious and general Appendable because of Matcher's
     * interface (Sun bug 5066679).
     * Most users will prefer the single-argument rewrite, which
     * supplies a temporary StringBuffer itself.
     */
    public StringBuffer rewrite(CharSequence original, StringBuffer destination) {
        this.matcher = pattern.matcher(original);
        while (matcher.find()) {
            // Append an empty replacement here, then append the computed
            // replacement directly, so '$' isn't treated specially.
            matcher.appendReplacement(destination, "");
            destination.append(replacement());
        }
        matcher.appendTail(destination);
        return destination;
    }

    public static void main(String[] arguments) {
        // Rewrite an ancient unit of length in SI units.
        String result = new Rewriter("([0-9]+(\\.[0-9]+)?)[- ]?(inch(es)?)") {
            public String replacement() {
                float inches = Float.parseFloat(group(1));
                return Float.toString(2.54f * inches) + " cm";
            }
        }.rewrite("a 17 inch display");
        System.out.println(result);

        // The "Searching and Replacing with Non-Constant Values Using a
        // Regular Expression" example from the Java Almanac.
        result = new Rewriter("([a-zA-Z]+[0-9]+)") {
            public String replacement() {
                return group(1).toUpperCase();
            }
        }.rewrite("ab12 cd efg34");
        System.out.println(result);

        result = new Rewriter("([0-9]+) US cents") {
            public String replacement() {
                long dollars = Long.parseLong(group(1)) / 100;
                return "$" + dollars;
            }
        }.rewrite("5000 US cents");
        System.out.println(result);

        // Rewrite durations in milliseconds in ISO 8601 format.
        Rewriter rewriter = new Rewriter("(\\d+)\\s*ms") {
            public String replacement() {
                long milliseconds = Long.parseLong(group(1));
                return TimeUtilities.msToIsoString(milliseconds);
            }
        };
        result = rewriter.rewrite("232341243 ms");
        System.out.println(result);

        for (String argument : arguments) {
            System.out.println(rewriter.rewrite(argument));
        }
    }
}

Java versus ISO 8601 and RFC 2822 date formats

You'd think a language like Java, born during the popularization of the Internet and originally positioned as a language for writing code to be run straight off the Internet, would have better support for Internet standard date and time formats. I know date and time handling is complicated, and I'm very glad someone other than me had to write all those classes, but I wish they'd given them a simple interface for people who just don't care.

The only time format that's really easy to work with in Java is raw milliseconds, which is just weird. It's not totally insane for a program to output something like "took 109ms", but it's habit-forming, and before you know it your programs are saying things like "uptime 823475ms" too. Which is silly.

I wish they'd focused on globalization as much as they did on localization. Who cares about locale-specific date formatting when we have ISO 8601?

The only mentions of ISO 8601 I could find in the JDK are in java.util.GregorianCalendar and java.util.logging.XMLFormatter; the former having a comment saying that it uses the same interpretation of week numbers, the latter a private method for appending the time and date (as a long) to a StringBuffer. I found no references to RFC 2822.

The ISO 8601 representation of 823475ms, by the way, is PT13M43.475S (the 'T' marks the start of the time components; without it, 'M' would mean months). It's weird the first time you see it (especially with the upper-case letters; according to section 5.1.2 of the standard, the 'P' stands for 'period') but you get used to it. And it's a lot easier to work out that that's 13 minutes 43 seconds than to do anything with 823475ms.
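If you're curious what such a conversion looks like, here's a sketch in the spirit of TimeUtilities.msToIsoString; it's illustrative only, not the actual implementation from my library:

```java
public class IsoDuration {
    // Formats a millisecond count as an ISO 8601 duration, e.g.
    // 823475 becomes "PT13M43.475S". The 'T' separates the (empty)
    // date portion from the time portion.
    public static String msToIsoString(long ms) {
        StringBuilder result = new StringBuilder("PT");
        long hours = ms / 3600000;
        ms %= 3600000;
        long minutes = ms / 60000;
        ms %= 60000;
        if (hours != 0) {
            result.append(hours).append('H');
        }
        if (minutes != 0) {
            result.append(minutes).append('M');
        }
        result.append(ms / 1000);
        if (ms % 1000 != 0) {
            // Milliseconds as a three-digit decimal fraction of a second.
            result.append('.').append(String.format("%03d", ms % 1000));
        }
        result.append('S');
        return result.toString();
    }
}
```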

Oh well. My library of useful Java stuff (available from my home page in conjunction with Edit or SCM) just gained a class.


g++ __attribute__ format

The format attribute lets you tell g++ that a function takes printf-like arguments that should be type-checked against a format const char*. (I won't say string, especially not since C++ has strings, even if it doesn't have string literals.)

You say which argument is the format const char*, and which argument begins the list of arguments to be checked against. Arguments are numbered from 1.

Unless you're using the attribute on a member function.

Function Attributes - Using the GNU Compiler Collection (GCC) says: "Since non-static C++ methods have an implicit this argument, the arguments of such methods should be counted from two, not one".

Why? What's that got to do with anyone but the compiler writer? It's not like this can be the format const char*, so why do we need to pollute our code with this implementation detail of someone else's program?

I love the way it's presented, though. As if there's nothing they could have done about it. Why aren't all compilers as good as Jikes? Why don't more compiler writers take pride in the user interface? Surely as developers themselves they feel this on a daily basis?


Ruby scratches

I have a love/hate relationship with Ruby. It's not congenitally evil like Perl, but it does have some rather nasty disfigurements. Here's how it annoyed me today:

mercury:/tmp$ cat -n ruby-is-stupid.rb
1 #!/usr/bin/ruby -w
2 a = 4
3 if a == 1
4 puts("one")
5 else if a == 2
6 puts("two")
7 else
8 puts("many")
9 end
mercury:/tmp$ ./ruby-is-stupid.rb
./ruby-is-stupid.rb:9: parse error

The error is actually on line 5, where I have else if instead of the Perlism elsif. But hey, if "parse error at end of input" is good enough for gcc, why shouldn't it be good enough for Ruby?

This is a lot harder to spot in a real program.

An ugly design choice (elsif) combined with a poor implementation (parse error). Distractions like this really make my day.

Ruby's puts has a nasty wart, too. It doesn't do what you might think given its partnership with print, and the existence of seemingly similar choices in other languages. Java's print and println, say. Or C's printf and puts.

What Ruby's puts does depends on the string it's given. It doesn't output the string it's given followed by '\n'. It outputs the string it's given and, if that string didn't already end with '\n', it'll append a '\n'. Which is a great way to have a program that works only on most input.

The idea is that you can be sloppy about whether your string ends in a newline or not. And that puts means "print this, ensuring there's exactly one newline on the end, except when the string already had multiple newlines on the end". Or something. This is why C and Java don't bite here: it's trivial to say what their equivalent routines do. You can see the implementation as you read the description, and you can fathom all the consequences without a thought. And they're so trivial that you're unlikely to forget.
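To see how trivial the trivial versions are, here's the difference written out as Java methods (rubyPuts is a hypothetical name, purely for illustration):

```java
public class PutsSemantics {
    // Java's println (and C's puts): unconditionally append '\n'.
    public static String println(String s) {
        return s + "\n";
    }

    // Ruby's puts: append '\n' only if the string doesn't already end
    // with one -- so the output depends on the input's trailing newlines.
    public static String rubyPuts(String s) {
        return s.endsWith("\n") ? s : s + "\n";
    }
}
```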

I've no idea why you'd want to be sloppy about whether a string ended in a newline or not. And if there were some special meaning to a final newline, wouldn't you want it explicit in your program?

The lesson? Always use print in a real Ruby program.

Why doesn't someone name a language after something on the low end of the Mohs scale? Talc, anyone? It even sounds like a Unix program.

Evolution Alarm!

Every time Evolution pops up an alarm (stealing the input focus from what I was doing, just like a Windows program) I laugh. "Evolution Alarm," it shouts, in type large enough to be read across the room. Beneath that, much smaller, it deigns to show me my description of the reminder-worthy event.

I'm tempted to create an event with a description along the lines of "Ximian Evolution has been found on this computer. Would you like to remove it?"

"Evolution Alarm my ass", as Ripley would have said. It's my reminder for my event, and those were my key presses you just threw away.

The "snooze" functionality is good too. You give it the number of minutes you want to snooze for. Cunningly placed about as far from the actual snooze button as possible without going outside the dialog. But when's that measured from? When the dialog appeared, or when I hit the snooze button? And why would I want to measure forwards from either of those times, unless the event was in the past ("yeah, hang on, I'll be there in a second; just need to check this code in")? As long as the event's in the future, I'm more likely to want to say "yeah, tell me again 5 minutes before it starts".

I couldn't find a screenshot of the Evolution Alarm for you to laugh at, but I did find this sneak preview of Evolution 2.0; a screenshot entitled Quickly Accept Meeting Invitations, something I've complained about in the past. Strangely, it seems every bit as broken as at present. Note how they've had to scroll down past the header so you can even see – let alone interact with – the control that would let you accept a meeting invitation (quickly or not). Maybe version 2.0 scrolls past the header automatically?

If they keep this up, I'll be so impressed I'll be laughing my way back to MS Outlook.


Constantly updating search results

Computers are fast these days. Something that it seems is all too easy to forget if you grew up in the 1980s, when home computers were slow, you spent most of the decade programming in BASIC, and you weren't even particularly good at it.

In the best Unix terminal emulator, I made the regular expression searching automatically highlight new output. In Edit, I didn't. Even though it's obviously the right thing to do.

In my defense, it was a couple of years ago now, and Java wasn't as fast as it is now, but I'd guess it was fast enough back then. Here's how I know: the matches are updated in real time as you type your search expression. Have been for years. So what did I think would be so expensive about updating the matches as you edit the document?

Dunno. Search me.

[Update: weak puns aside, while actually making the change I came across commented-out code that suggests my original intention was that further editing should cancel the find. Having since used applications that behave in both ways, I prefer the ones that keep find results up to date unless I explicitly tell them to stop.]


Humans pay for context switches too

Although I hadn't come up with anything more concrete than a feeling, Martin pointed out a specific way in which determinate progress is better than indeterminate: you can decide whether or not to pay the price of a mental context switch, do something else, and come back.

What's interesting when you're using it like that isn't the percentage complete: it's the rate of change. "It'll be finished in two seconds; I'll wait" versus "I've got time to get a drink and come back".

Odd that I hadn't noticed myself using it for that, but I do. At least with other applications; I'm quite patient with this particular application: I've got some half-formed thoughts in my head that want to be translated into a check-in comment, and I don't want to get distracted and forget anything.

I think I mostly use determinate progress to work out how long I'm going to have to continue to wait. This belief seems to be borne out by the fact that I sit reading the text in web browser download windows rather than watching the progress bar. (Though I'm sure that subconsciously the continual motion towards the goal helps me endure the wait.)

I make my own determinate progress when reading a book. I've always thought time was the most useful unit. "How far is it?" "About ten minutes' walk." "How much more have you got to do?" "I'll be done in twenty minutes." So when I'm reading a book, I don't care how many pages are left; I care about how long it's going to take to read the rest. In particular, should I finish this book now, or should I put it aside, get some rest, and finish it tomorrow? Mostly I use the relative sizes of the "already read" and "still to read" page groups to judge the time. Other times, I'll find out how many pages are left and assume a constant reading rate. 30 pages/hour seems to work quite well for English.

Page numbering is an unusual reason I loved Chuck Palahniuk's "Survivor", and might have done even if it hadn't been probably his best book: it's the only book I own where the page numbers count down. Why aren't more books like this? Page numbers only increase because there was a time when this was the only convenient way to do it. If you were setting a book, you'd have to go forwards or you'd risk arriving at the end of your work (the beginning of the book) in an awkward place on the page.

You could argue that humans have a natural tendency to count up, but I'd argue that even that is the same kind of convenience. To count down, you need two fixed points; to count up, only one. This is why, unlike mathematicians, programmers tend to have the y-axis increase downwards, because it's easier to implement documents that grow if you don't have to recalculate everything's position every time you append something. Gap-buffer implementations in text editors also reflect the same fact: it's easier to expand something if there's space waiting to expand into; if only one point is fixed.

Anyway, there's no reason why a computer couldn't number pages backwards, in a separate pass if need be (TeX numbers citations in a separate pass). It's not as if one direction is better than the other for referring to material, which is the only other use I can think of for page numbers. Page numbers are pretty weak compared to HTML anchors anyway, and even anchors are weak in the face of modification to the document since the point at which the reference was made.

In the case of "Survivor", the page numbers are backwards because it's relevant to the story, which I won't explain here. Read it yourself.

Electronic texts are particularly poor at providing this kind of feedback, almost to the extent that I wonder if their developers think that it's not important to provide feedback for active activities such as reading; only passive ones such as waiting. There's certainly nothing to match the feel of a book in your hand, the different thickness touching your left and right thumbs. Continual subconscious feedback like that isn't easy to do on-screen, though an "all on one page" format seems like a good work-around. The vertical scroll bar divides "already read" and "still to read", and you at least have visual feedback about how far through you are that you otherwise lose.

And now it's time to put this aside, get some rest, and write some more tomorrow.


WiFi location transparency

I may have had to bring the infrastructure here in my rucksack, but it's cool to wake up with the same computing environment I'm used to at home. I can read and reply to mail without using the heinous BlackBerry, I can read RSS feeds and web pages, I can listen to the Digitally Imported streaming vocal trance station...

I could even write some code and check it in if I cared to.

One day, son, decent wireless connections will be everywhere. It won't seem like a novelty, and I'll expect it every bit as much as I expect to find water, power and air. One day, it won't just be the places we go; it'll be the bits in between, too. Like the BlackBerry, only good. Maybe I should see about getting a GPRS card for my PowerBook?

But then I'd have no excuse to stay indoors on sunny days.


Adding SIGQUIT handlers in Java

I wanted to make my code for diagnosing VMs hung on exit more integrated with the VM. A shutdown hook is obviously useless, because the whole problem is that we're not shutting down. What's the first thing you do when your Java program is hung? You send it SIGQUIT, and see what the various threads are doing.

Ideally, you'd be able to addSigQuitHook (or addDiagnosticHook or something a bit more focused on the intention than the implementation). But you can't. And it turns out you can't use sun.misc.Signal and sun.misc.SignalHandler to add a SIGQUIT handler, because the VM doesn't allow more than one handler per signal. And it's already using SIGQUIT, as we know. SIGUSR1 is in use, too. You can have SIGUSR2, but I want SIGQUIT, damn it!

Hopefully they'll fix that at some point. One of the great leaps from the C mentality to the Java mentality was from the global variables and zero-or-one "instances" to the zero-or-many instances of Java. Where it's as cheap (in terms of programming effort) to have a vector of listeners as it is to have a single call-back function pointer. And where the standard idiom is to add listeners to a list and fire events by traversing the list.
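Until then, you can at least apply the listener-list idiom to a signal the VM does leave free. Here's a sketch, not a tested recipe: SignalHooks and addDiagnosticHook are hypothetical names, sun.misc is a package we're not supposed to use, and whether SIGUSR2 is really free depends on your VM.

```java
import sun.misc.Signal;
import sun.misc.SignalHandler;

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class SignalHooks {
    private static final List<Runnable> hooks = new CopyOnWriteArrayList<Runnable>();

    static {
        // The VM owns SIGQUIT and SIGUSR1; SIGUSR2 is (usually) free.
        // One VM-level handler dispatches to however many hooks we like,
        // working around the one-handler-per-signal limitation.
        Signal.handle(new Signal("USR2"), new SignalHandler() {
            public void handle(Signal signal) {
                for (Runnable hook : hooks) {
                    hook.run();
                }
            }
        });
    }

    public static void addDiagnosticHook(Runnable hook) {
        hooks.add(hook);
    }

    public static void main(String[] args) throws Exception {
        final CountDownLatch ran = new CountDownLatch(1);
        addDiagnosticHook(new Runnable() {
            public void run() {
                ran.countDown();
            }
        });
        // Deliver SIGUSR2 to ourselves; the hook should run shortly after.
        Signal.raise(new Signal("USR2"));
        if (!ran.await(2, TimeUnit.SECONDS)) {
            throw new AssertionError("diagnostic hook didn't run");
        }
        System.out.println("diagnostic hook ran");
    }
}
```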

It's a shame that potentially useful signal-handling functionality is spoiled by being in a package we're not supposed to use, and by being so limited by design. Maybe we'll get proper signal handling in 1.6; after all, we got proper environment variable support in 1.5. I'd almost given up hoping for that.


You don't notice until you're using it, but the Netgear DG834G 54Mbps Wireless ADSL Firewall Router's front-panel lights don't line up. Luckily, it's in my parents' house, and they don't notice that kind of thing, so they're happy. (And I have a deeply ugly ASUS piece of junk under my porch, so I can't really talk.)

Cool to be accessing the Internet at a decent speed from my lap in the house where I grew up, even if it is over ten years since I last lived here. Reminds me of what Jobs said at the last WWDC keynote: "we used to dream of this stuff; now we're building it".


Automatic hyperlinking in a terminal emulator

You know how sometimes you implement something and then every time you use it – until you become blasé – you ask yourself why you didn't implement it years ago, it's that useful?

I just added something like that to the best Unix terminal emulator.

It's not finished yet, and it's only in the nightly builds (available from my home page), but already it's great. It's the best thing to happen to terminal emulators since we added really awesome find functionality.

What I did was this: we now assume that the window title (set via the usual escape sequence in your bash prompt) is the name of your current directory. If we see text output that looks like it might be a filename, we check whether appending that to the window title gives us the name of an existent file. If it does, we turn the text into a link that puts the file in your editor.

You have no idea how cool this is.

I implemented it with only grep -n in mind; I wanted to be able to click on grep's output and go straight there in my editor, just like I can already do if I run grep within the editor. Now I can. But it's better than that. I didn't realize until I ran it, but it works for ls too. All of a sudden you're looking at a clickable list of files. Choose one and boom! You're in your editor, looking at the file. Do a bk pull to get the latest changes, and see a file you're interested in has changed? Click on it. Boom! It's in the editor.
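The core check is tiny. Here's a sketch, assuming the window title really does name the current directory (FilenameLinker and isLinkable are hypothetical names, and real tokenizing of terminal output is fuzzier than this):

```java
import java.io.File;

public class FilenameLinker {
    // Decides whether a run of output text should become a link, given
    // the window title (assumed to be the current directory's name).
    // A trailing ":42" or ":42:" is stripped first, so grep -n output
    // like "Rewriter.java:17:" still resolves to the file.
    public static boolean isLinkable(String windowTitle, String candidate) {
        String name = candidate.replaceAll(":\\d+:?$", "");
        File file = new File(name);
        if (!file.isAbsolute()) {
            file = new File(windowTitle, name);
        }
        return file.exists();
    }
}
```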

I broke the URL linking for this, but it's so worth it. I did see a few http: links, but never wanted to follow one. I never saw a mailto: link. A "Connect to Server" menu item would be a better replacement for the ssh: and telnet: links I never saw either.

Filenames, though: they're a shell's bread and butter.

You don't get this kind of adaptation to your way of working with a typical IDE, and that's why I don't like them. You don't have the freedom to work how you choose, with the tools you choose (or have available, or are forced by circumstance or policy to use); at least not if you want the full power to be available to you. And what they give you in return is rarely worth the price you pay.

(I'm happy to call my editor an Integrating Development Environment, a term probably coined by Rob Pike or Charles Forsyth in reference to the acme editor, though. "Integrated" sounds like the job's done. "Integrating" admits that it's barely started, will never finish, and – for the time being – should be the IDE's job, not the job of each and every tool.)

Times are changing, and in many cases bad APIs and bad systems are forcing us into IDE straitjackets because otherwise life's too hard to bear. But these API-softening tools only tend to exacerbate the problem. Have you ever had to work with IDE-written GUI code? Then again, have you ever had to work with GUI code written by the kind of less-skilled developer that lets an IDE write their code for them?

Which would you like? The rock, or the hard place?

Using alpha blending for underlines

I love alpha blending. I use it all over the place, in part because I'm still just trying stuff at random to see if it's useful. I don't think alpha has been mainstream for long enough that we've really explored it properly. My editor makes effective use of alpha in highlighters, for example, and in the best Unix terminal emulator, I came up with a new use for it. (Note that if you follow the link, the screenshots are too old to show this.)

A problem we had was that the underlining was too heavy, and it kept colliding with the monospaced fonts' descenders. What I really wanted was a way to stop the underline just short of any descenders, but I couldn't think of a cheap way to do it. (We might not have performance as good as rxvt, but we were competitive with konsole and gnome-terminal last time we checked.)

The solution I came up with was simple and surprisingly effective: use an alpha color. Rather than insist on a separate compositing step, as Apple's Objective-C API seems to, Java lets you have colors with alpha, and any drawing using a Graphics that's using such a Color will be alpha-blended. The effect is almost the opposite of the problem we're trying to solve: instead of disappearing, the descenders stand out more, because there the underline is combined with the text.
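Here's a minimal, self-contained demonstration of the trick (not the terminal emulator's actual code): draw a half-transparent black underline over a white background, and the line comes out blended, with no separate compositing step.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class AlphaUnderline {
    // Fills a white background, then draws an underline using a Color
    // with alpha 128; any drawing with such a Color is alpha-blended.
    // Returns one channel of an underline pixel so the blend is visible.
    public static int blendedUnderlinePixel() {
        BufferedImage image = new BufferedImage(100, 20, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = image.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, 100, 20);
        g.setColor(new Color(0, 0, 0, 128)); // half-transparent black
        g.drawLine(0, 15, 99, 15);
        g.dispose();
        // Mid-gray rather than solid black: the background shows through.
        return image.getRGB(50, 15) & 0xff;
    }

    public static void main(String[] args) {
        System.out.println("underline pixel channel: " + blendedUnderlinePixel());
    }
}
```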

Download it, and have a go.


Diagnosing AWT thread issues

There are two ways a Java GUI program can exit. There's the right way, and there's the wrong way. Sadly, the wrong way is easy, and the right way often fails to work.

The wrong way is to call System.exit. This always works, but the first time your code gets embedded inside some larger application, you'll lose someone their work. "Crash only" programming is all well and good, but you don't know the outer application is written like that, even if yours is. So don't do that.

The right way is to have no displayable components, no native events in the native event queue, and no AWT events in Java event queues. See AWT Thread Issues. The trouble with this is that it requires that you write a nice clean program. And that everything you use is equally nice and clean. Which is often not the case.

When things go wrong, if you're lucky, it's your mistake, and it's in code you've just added. Insert the missing call to dispose, and you're laughing.

This one was a bit harder though, and in the end I wrote a class to automate the work.

Jumping to the money shot, here's the output from my new debugging aid when I close the last Frame in the program in question:

mercury:~/Projects/scm$ revisiontool src/e/scm/RevisionTool.java
*** Examining Frames...
Extant frames: 1
Problem (displayable) frames: 0
*** Examining Swing TimerQueue...
listener #0 apple.laf.AquaProgressBarUI$Animator@18f51f
Problem (extant) timers: 1

That seems a pretty clear indication of the source of the trouble to me.

I know the output makes it look like an Apple problem, but it was originally in Sun code. The bug turns out to have been fixed in Java 1.5.0-beta2, which explains why other people were seeing it, but I only saw it on Mac OS (with 1.4.2) and not on Linux (with 1.5.0-beta2). Indeterminate progress bars are the cause of the problem. (Bug 4995929, if you're interested. There's a work-around there, too, if you can't move to 1.5.0-beta2.)

All I had to do was add the following line to the constructor of the application's JFrame subclass:

e.debug.HungAwtExit.explain(this);
The implementation is pretty straightforward. Frame.getFrames takes care of the Frames, and a little bit of reflection lets us wander down the TimerQueue's list. The latter is, of course, pretty brittle, but what can you do when the class is default access?

Anyway, for those who don't want to follow the link to my home page, and check out my library of useful stuff, here's the class:

package e.debug;

import java.awt.*;
import java.awt.event.*;
import java.lang.reflect.*;

/**
 * Diagnoses a GUI application that doesn't exit when you close what
 * you think is the last Frame.
 *
 * The two most common problems in my experience involve Frames that
 * haven't had Window.dispose invoked on them, and timers. This class
 * offers a static method "explain" that you should use to register
 * the Frame that you'll be closing last. (I'd guess it's easiest to
 * register them all, but there's no need to.)
 *
 * In a constructor, something like e.debug.HungAwtExit.explain(this);
 * would do it.
 */
public class HungAwtExit {
    public static void showDisplayableFrames() {
        System.err.println("*** Examining Frames...");
        Frame[] frames = Frame.getFrames();
        System.err.println("Extant frames: " + frames.length);
        int displayableFrameCount = 0;
        for (int i = 0; i < frames.length; ++i) {
            if (frames[i].isDisplayable()) {
                System.err.println("Displayable frame: " + frames[i]);
                ++displayableFrameCount;
            }
        }
        System.err.println("Problem (displayable) frames: " +
            displayableFrameCount);
    }

    public static void showSwingTimerQueue() {
        System.err.println("*** Examining Swing TimerQueue...");
        try {
            Class timerQueueClass = Class.forName("javax.swing.TimerQueue");
            Method sharedInstanceMethod =
                timerQueueClass.getDeclaredMethod("sharedInstance", (Class[]) null);
            sharedInstanceMethod.setAccessible(true);
            Object sharedInstance = sharedInstanceMethod.invoke(null, (Object[]) null);

            // The TimerQueue is a linked list of Timers, threaded through
            // package-private fields, so we have to use reflection.
            Field firstTimerField =
                timerQueueClass.getDeclaredField("firstTimer");
            firstTimerField.setAccessible(true);
            Field nextTimerField =
                javax.swing.Timer.class.getDeclaredField("nextTimer");
            nextTimerField.setAccessible(true);

            int extantTimers = 0;
            javax.swing.Timer nextTimer =
                (javax.swing.Timer) firstTimerField.get(sharedInstance);
            while (nextTimer != null) {
                ++extantTimers;
                showActionListeners(nextTimer.getActionListeners());
                nextTimer = (javax.swing.Timer) nextTimerField.get(nextTimer);
            }
            System.err.println("Problem (extant) timers: " + extantTimers);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    private static void showActionListeners(ActionListener[] listeners) {
        for (int i = 0; i < listeners.length; ++i) {
            System.err.println("  listener #" + i + " " + listeners[i]);
        }
    }

    public static void explain(Frame f) {
        f.addWindowListener(new WindowAdapter() {
            public void windowClosed(WindowEvent e) {
                showDisplayableFrames();
                showSwingTimerQueue();
            }
        });
    }

    private HungAwtExit() {
        // Prevents instantiation.
    }
}



I've never really appreciated just how green England is. Perhaps because the other places I've lived in or visited have been pretty green themselves. The one previous time I went anywhere very un-green (Arizona, USA) it was winter. I returned to crisp snow and clear blue skies, and apart from the temperature it didn't seem so different.

But then you spend some time in California, USA during summer, and all of a sudden it's a real shock to come back to what you've known all your life.

You walk around like you've survived some great natural disaster. You're delighted by the unruly greenness. No careful little strips of a springy variety of grass you've never seen before, fed by "reclaimed water — do not drink". Oh no. Here there are wild uneven expanses of grass. There are undergrowth areas made up of any number of different grasses, ferns, and mosses. There are wild flowers. There are thorn bushes and trees striving to take over and envelop whatever man-made junk has been planted amongst them. Paths you can't easily walk down because the trees, bushes, ferns and brambles want to close the path for ever.

And you're amazed that no-one seems to be paying any attention to any of this.

(I saw a sign in San Francisco, CA, USA. It said something like "it's a good job we arrived on the east coast; if we'd arrived on the west we'd never have gone further than San Francisco". I liked that. Delightfully ambiguous, like all the best aphorisms.)

Error returns are always worth checking

You can find that phrase in Sun's The JVMPI Transition to JVMTI, which talks about the new interface to Java VM internals in Java 1.5.0 that replaces older interfaces.

It sounds quite different if you read it in context, though. The full quote is:

Note that all of the JVMTI error return handling and checking code has been left out of the above example. Do not do this in real life; those error returns are always worth checking.

I'm sure you've seen that said so many times before it almost seems natural. But I think it's helpful to have it translated, so here are the subtitles:

Note that this exact code will be copied and pasted into software all over the world. This is real life. Those error returns will never be checked.

The usual excuse for not showing error-handling code in examples is that it obscures what you're trying to show. People actually say that. As if they don't realize that what they're saying is "we made such a mess of the interface that it's really nasty to use; it's not too bad if you remove all this fundamentally important stuff that makes the difference between a proof of concept and a shippable application and pretend it's not needed".

To be clear: I haven't looked at JVMTI yet. For all I know, it's a great API and their only mistake was missing out the trivial, clean error-handling code on this web page. It's also the case that what they're doing is intricate enough that you'll probably have to take one of their full examples to get anything working, and the web page implies that the proper error handling is present there. They just happened to say the wrong thing at the wrong time, and it set me off!

The attitude that "it's only example code" really annoys me. Example code is exactly where you have to be on your best behavior. Example code is what a lot of people learn programming by fiddling with, and example code gets pasted in to a lot of shipping applications. And yet you look at it, and you see missing error-handling and heinous levels of duplication.

The main thing I learned from O'Reilly's "Building Cocoa Applications" is that the authors can't program. (Scott Anguish and friends can, though, so buy their "Cocoa Programming", published by Sams, instead.)


Crash-only software

I read "Is your software crash-only?" today, and the paper it links to. As a file system developer by day, the philosophy is a familiar one. A lot of the code I write deliberately forces crashes. Crashes aren't inherently evil. They can protect data, they can increase availability, and they can make it easier to fix root causes.

I remember, back in the days before Java, that one of the things I most hated was when a C program would get SIGSEGV and just disappear, taking my work with it. Java, as I experienced it, was wholly different: an exception would propagate up the call stack, but the program wouldn't terminate. It would just print a stack trace and carry on. Often, you'd just be able to ignore what had happened. Other times, you could save what you were doing and restart. In the worst cases, you could often save much of what you'd been doing and limit your losses to some specific failed part.

Not crashing, I felt, was a major step forward.

Then I left application development in Java and went back to file system development in C++, where the sooner I can crash the server when things start to go wrong, the sooner it will recover.

In my work, in C++, assertions are a fundamental part of what I do. Whenever I write a comment, I ask "can I rewrite this as an assertion?", and if I can, out goes the comment and in goes an assertion. In my spare time, in Java, I hardly ever write assertions. I still don't. (I've written exactly one assertion in Java. I can't remember where it is, or what it asserts, but I remember it was in the next method I wrote after the last time I noticed this discrepancy.)

The main reason I don't write assertions in Java is because I have this idea that Java programs don't give up; they struggle on.
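For what it's worth, here's what that comment-to-assertion rewrite looks like in Java. The class is invented for illustration, and you need to run with -ea for the check to fire:

```java
// A made-up example of the comment-to-assertion rewrite: the constraint stops
// being something the reader has to trust and starts being checked (with -ea).
class Angle {
    private final double radians;

    Angle(double radians) {
        // Before: a comment saying "radians must be in [0, 2*pi)".
        // After: the same fact as an assertion that fails fast when violated.
        assert radians >= 0.0 && radians < 2 * Math.PI : "radians out of range: " + radians;
        this.radians = radians;
    }

    double radians() {
        return radians;
    }

    public static void main(String[] args) {
        System.out.println(new Angle(1.0).radians());
    }
}
```

The assertion documents the same invariant the comment did, but it can't silently rot.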

These two philosophies aren't as different as they might seem; they both aim to avoid losing the user's work. The difference is just an implementation detail in how they go about ensuring that they lose nothing, with one choice ("crash-only") being harder to program than the other ("struggle on"). The funny thing is that although we've long recognized the idea of recovery in file systems and databases, to the extent that we now expect them to feature recovery, we don't have the same expectation of applications.

Why not?

Apple's iCal, iTunes and Address Book don't have any explicit "save". (This was particularly useful in the case of Address Book because in the early days it crashed all the time.) Using these "save-less" applications convinced me to make a similar effort in my own applications. I don't just mean taking care of things the user has explicitly input (i.e. typed); I mean things such as what files were open, and window locations and sizes too. This becomes a minor feature in itself, even if the program in question doesn't crash. It means I can log out or reboot quicker; it means I can recover from a power cut better; it means I can upgrade to a new version in the middle of working on something.
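A minimal sketch of the machinery behind remembering window locations and sizes, assuming Swing: stash each window's bounds in java.util.prefs.Preferences. The class and key names here are invented; a real application would hook save() up to a ComponentListener on each window.

```java
import java.awt.Rectangle;
import java.util.prefs.Preferences;

// A sketch of remembering window bounds across runs, so the application can
// come back up looking exactly as it did when it went down.
class WindowStateSaver {
    private final Preferences prefs = Preferences.userNodeForPackage(WindowStateSaver.class);
    private final String key;

    WindowStateSaver(String key) {
        this.key = key;
    }

    // Record the bounds; call whenever the window moves or resizes.
    void save(Rectangle bounds) {
        prefs.put(key, bounds.x + "," + bounds.y + "," + bounds.width + "," + bounds.height);
    }

    // Return the remembered bounds, or the fallback for a first run.
    Rectangle restore(Rectangle fallback) {
        String value = prefs.get(key, null);
        if (value == null) {
            return fallback;
        }
        String[] fields = value.split(",");
        return new Rectangle(Integer.parseInt(fields[0]), Integer.parseInt(fields[1]),
                             Integer.parseInt(fields[2]), Integer.parseInt(fields[3]));
    }

    public static void main(String[] args) {
        WindowStateSaver saver = new WindowStateSaver("demo-window");
        saver.save(new Rectangle(10, 20, 300, 400));
        System.out.println(saver.restore(new Rectangle(0, 0, 100, 100)));
    }
}
```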

The more I use programs with that kind of behavior, the more I'm annoyed by programs that make me manually save and restore state. Worse still are those programs where you can't even do the job yourself. Safari and Camino (the two main Mac web browsers) don't crash often, but when they do, they take a bunch of windows with them. And if those windows were open, it was for a reason. That was my input, and the program I gave it to didn't take sufficient care of it, and now it's lost. (I believe, from an Ars Technica review, that the Opera browser does remember window URLs, locations, and scroll positions. But why don't all browsers?)

Another example is the BlackBerry: a reasonable bit of hardware ruined by terrible software. It's lost so many mails I was typing that I now only use it as a read-only device. I don't trust it, and it's unlikely ever to regain that trust. Even though I'm told that more recent software crashes less, I won't even give it a try unless it starts automatically saving what I'm doing, and automatically restoring it after a watchdog-invoked reboot. (Booting in less than 2 minutes would be important, too.)

Trust is a very difficult thing to win back. The best way I've seen for applications to do this is by being able to return to the exact state they were in before they crashed or were quit. Apple's Mac OS X Software Design Guidelines don't mention this, not even under "Reliability", but they should.

Java GUI portability; scripting languages

It's often quite difficult to write UI that works well on the Mac in addition to Linux, especially if no-one had the Mac in mind at the beginning. But there's something really exciting and fresh about writing some piece of UI on Linux, and finding when you run the same bytecode on a Mac that it looks great. Sometimes better than the original (because, for example, the Mac's JProgressBar is beautiful, and Linux's one is functional but plain), but both being good approximations to the UI that was in your head.

All this talk of scripting languages overtaking languages like Java is nonsense until, at the very least, some scripting language has a GUI library anything like as good as Swing. It needs to be at least as full-featured, and it needs an equivalent ability to fit in on various platforms. Both of which rule out Tk (despite its rather powerful text widget), the latter of which rules out such things as RubyCocoa. Anything that isn't part of the default installation is also a non-runner; I can use Swing out of the box if I have a Java VM, and the same must be true of any Python/Ruby solution. (I'd also suggest as a requirement that it must not require any more Perl to be written. There's enough of that effluent polluting our environment already.)

Java, in turn, could really do with even faster start-up times. Java 1.5.0 is visibly better than 1.4.2, but it's still not in the same ballpark as Ruby. And regular expression literals would make an enormous difference. I never get a regular expression string literal right first time.

To be honest, I think Groovy probably has the best chance of becoming as useful as Java, because it can simply use Swing. And Groovy is hardly going to hurt Java's presence. In the meantime, Java and scripting languages are useful for very different things, with relatively little overlap.

Dodging the Mac's grow box in Java

I stopped using rxvt on Linux a couple of months ago. A friend who was between jobs got sick of hearing me complain that nothing compared to Apple's Terminal, and started to write something that did compare. The result is a new Unix terminal emulator. I won't try to convert you here; read about it and try it yourself.

If you're a Mac user, you probably won't be giving up on Terminal just yet. I've helped out quite a bit, but have little reason to work on making it fit Mac OS better, because it was really Linux that needed a new terminal emulator.

One thing I did do, though, was stop the grow box covering the scroll bar arrows. (Unlike most X11 window managers, which let you resize a window by dragging the frame, Mac OS uses a square area called the "grow box" which lives inside the frame, over your content; it's your application's job to keep out of its way.) Although the solution seems obvious now, it wasn't immediately obvious to me how to do this, or even that it could be done so well.

The situation we had was a JScrollPane filling the entire window. The grow box covered one of the vertical scroll bar arrows. The solution involved making a JPanel that has the same dimensions as the grow box. What are they? Well, it's square, and it fits between the scroll bars, so it must have the same side length as the short side of either scroll bar:

final int size = scrollPane.getVerticalScrollBar().getPreferredSize().width;
JPanel growBoxPanel = new JPanel();
growBoxPanel.setPreferredSize(new Dimension(size, size));

We also pull the JScrollPane's scroll bar out (JScrollPane doesn't mind, as long as the scroll bar continues to exist somewhere). We put it in the CENTER of a new BorderLayout panel, with the grow box-sized spacer in SOUTH:

JPanel sidePanel = new JPanel(new BorderLayout());
sidePanel.add(scrollPane.getVerticalScrollBar(), BorderLayout.CENTER);
sidePanel.add(growBoxPanel, BorderLayout.SOUTH);

Finally, we put that new panel in EAST of the same BorderLayout that the original JScrollPane is CENTER of:

add(sidePanel, BorderLayout.EAST);

Now the only thing that gets covered up is our fake grow box. Perfect!
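Putting the pieces together, a complete sketch might look like this (the JTextArea stands in for the terminal's content; any scrollable component would do):

```java
import java.awt.BorderLayout;
import java.awt.Dimension;
import javax.swing.JPanel;
import javax.swing.JScrollBar;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;

// Assembles the pieces described above: a square spacer under the relocated
// vertical scroll bar keeps the Mac's grow box off the scroll bar arrows.
class GrowBoxDodge extends JPanel {
    GrowBoxDodge() {
        super(new BorderLayout());
        JScrollPane scrollPane = new JScrollPane(new JTextArea(24, 80));

        // The spacer is square, with the same side length as the scroll bar's short side.
        JScrollBar scrollBar = scrollPane.getVerticalScrollBar();
        int size = scrollBar.getPreferredSize().width;
        JPanel growBoxPanel = new JPanel();
        growBoxPanel.setPreferredSize(new Dimension(size, size));

        // Pull the scroll bar out of the JScrollPane and stack the spacer beneath it.
        JPanel sidePanel = new JPanel(new BorderLayout());
        sidePanel.add(scrollBar, BorderLayout.CENTER);
        sidePanel.add(growBoxPanel, BorderLayout.SOUTH);

        add(scrollPane, BorderLayout.CENTER);
        add(sidePanel, BorderLayout.EAST);
    }

    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true");
        GrowBoxDodge dodge = new GrowBoxDodge();
        System.out.println("components: " + dodge.getComponentCount());
    }
}
```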

You can see a screenshot if you follow the link above. Interestingly, if you compare that with an actual Terminal window, you'll see they're very slightly different. I wonder if Apple used magic numbers rather than measuring? Or is the Java scroll bar not exactly the same as the Cocoa one? (This wouldn't be surprising; if you look carefully, it seems that every component is slightly wrong in some way. Swing pop-up menus don't even feel like Cocoa pop-up menus, let alone look like them.)


Evolution versus the 21st century

If I had to choose one word for Evolution, it would have to be "clunky".

You know how sometimes you're using Outlook and it does something stupid? And what it'll have done will be a typical "Microsoft mistake"? Their particular speciality is bad error dialogs. Not only do they seem to put little effort into steering you away from troublesome behavior; when something does go wrong, they behave utterly idiotically.

They come up with a dialog saying "I've just thrown away this mail." with nothing but an "OK" button, and you shout "no, damn you, it's not okay!"

They come up with a dialog saying "Something went wrong. To fix it, you'll have to do A, B, and then C. And I'm just going to tell you this. It's not my job to do it."

They come up with a dialog saying "Something went wrong. It was either this, or that, or something else, or something else yet again. And I'm not going to make any effort to find out. Nor was there any real point me enumerating the possibilities, because I'm not going to even explain how to recover, let alone do anything for you."

You almost get the feeling there's a competition amongst Microsoft's programmers to come up with the worst, most offensive dialogs they can.

Evolution doesn't have that tendency. But there's something really, well, "clunky" about it. It reeks of C like a wino reeks of alcohol and urine.

It keeps blocking, for no obvious reason. Except maybe threads are "expensive" in some way. This is really irritating, and I can imagine they don't even notice. They're probably expecting it, and think it's fine. I've been guilty of that myself, but it doesn't take your users long to point out that there's something wrong. The authors should go away and use Apple's Mail or Microsoft's Outlook (or better still, Outlook Express, which was a great little mailer; the only Microsoft program I ever thought liked me). And when they come back, they'll be driven insane by the way that Evolution keeps stopping for no obvious reason. (And perhaps surprised to find that Outlook's constant slowness is less of a problem than Evolution's jerkiness. Predictability is important.)

Some of the interactions are far too awkward, and for no obvious reason. I've already talked about meeting requests, but name completion is another example. Apple and Microsoft both do this perfectly well. Their systems are different, but I work with both and have no problems. But Evolution keeps catching me out. It keeps turning underlined names back into non-underlined text. And it's far too stupid when it comes to guessing names. Outlook comes a respectable second here to Apple's Mail, which does a perfect job.

There's not enough reuse, presumably because C doesn't make it easy enough. Think the mail composer's name completion is bad? Check out the meeting request creator; it doesn't have any. You have to know and type out the full email address of the people you want to attend. Or you can open a dialog from which to choose names, which is a great showcase for how stupid the name-guessing is. It's laughably bad. You have to understand how lower-case letters are ordered after upper-case letters in ASCII to fully understand why it makes some of its more ridiculous guesses. (If it can't find a match, it chooses the ASCII-sorted 'next' entry.)

I'll give Evolution some credit for making an effort to interoperate with Outlook and Exchange (I wish Apple would do more in that area), but it really feels like a very early beta. I couldn't inflict it on my parents, for example.

Evolution versus meeting requests

Evolution understands the kind of meeting request Outlook users mail round. This is good. What Evolution does with them – specifically, how it presents them – needs work.

You're presented with a scrollable area within a scrollable area. The outer scrollable area is the mail itself. It contains the header, and the inner scrollable area. For some reason, the inner scrollable area doesn't fit in the outer scrollable area. Presumably this depends on the size of your screen, but the fact that it's possible at all is stupid. The inner scrollable area serves no purpose and its content should be in the outer scrollable area.

It gets worse. The only thing you need to interact with is hidden away in the most nested part. So first you need to scroll the outer area to the bottom. If you don't, you'll only have to come back and do it, increasing the overall amount of work you have to do. So do that first. Then you can scroll the inner area to its bottom. And then, finally, you can accept or decline.

The components you interact with take almost no vertical space; about as much as one line of text. Why aren't they near the top? Why isn't it easier to see how this meeting request fits on your calendar? What made the authors think they should write such a system without having paid proper attention to existing products? How come this didn't get fixed right away, just after the first time they tried to use it?

It's hard to imagine how they could have done a worse job of this. It actively discourages people from including meaningful information about the meeting in the mail, because that just makes the only bit you interact with all the more inaccessible.

Blogger's new interface

Something strange happened last night. Blogger's really useful "Preview" function disappeared, and a seemingly useless (but impressively large) "Change Time & Date" bar appeared instead. This is in Safari. If you remember the keyboard equivalent for "Preview", you can use it, and the text area disappears, but you don't get to see the preview.

It turns out that there's a new "rich" editor. Only it doesn't work on the Mac. If you use Safari, you lose your old "Preview" function (which was really, really, useful) but everything else still works. If you use Camino, you can't type in the text area. You can title a new entry, but you can't add any content. So I don't know whether "Preview" works or not.

An interesting way to implement a WYSIWYG editor, that. You see nothing, you get nothing. (But then, you pay nothing.)

As far as I can see, there's no way to get the old interface back. So I have to wait for a new Safari (H1 2005, with Mac OS 10.4?), or – presumably – wait for a new Camino. I'd like to think that Blogger will fix itself, but if the Blogger programmers cared about the Mac, they wouldn't have released the update like this, would they? [See update at bottom.]

So, what do I have to look forward to when it does work on the Mac? A tool bar of icons that don't mean anything to me (but it's okay, I can wait for tool tips to appear, one by one), offering functionality I won't use. Multiple fonts? Colors? Links to my cat pictures? A clumsier implementation of spelling checking than Cocoa gives me for nothing?


I wouldn't mind so much if it had always been like that, but I was seriously impressed last week when I first saw Blogger. Here, I thought, were people who really got it. If you'd asked me what to expect in future, I'd have mentioned things such as automatically translating -- to &ndash;, automatic quote-sexing, and stuff like that. Invisible clever stuff that just does the right thing. LaTeX did it, and to be honest, I was quite surprised Blogger didn't.

Oh well. I'd better go and mail them a complaint. Remind them that the 7% of web users who aren't using MS Internet Explorer do exist, and know how to whine...

[Update: "Preview" has returned in Safari, and works properly. There are only two tool bar buttons ("ABC cuneiform" and "cat pictures"). Camino now has a text area you can actually type in, and has less on its tool bar, but more than Safari (in addition to Safari's icons, Camino gets "b", "i", "big-eyed frog" and "flip-top bins").]



I just saw Alien on TV. Not the best experience (particularly with ad breaks), but I had to accept that I wasn't going to see it on the big screen. (The re-release wasn't shown round here.)

The story didn't really grab me, so I was in my usual sci-fi grazing mode, looking at the stuff.

What struck me the most was that it was a very '70s future. The tech wasn't very high even by today's standards (with the exception of the android), and the fashions suggest that there's been another '70s revival whenever this is set.

I found it particularly interesting that there were no tattoos or piercings on any of the crew.

Now, in 1979, I was 4 years old, so I don't think I'd have had any kind of opinion, but I'm pretty sure that even if I'd been old enough to have an opinion, I wouldn't have guessed how mainstream tattoos and – to a lesser extent – piercings would be, just 20 years into the future. And now I'm in that future, I'm left with two questions:

First, are tattoos and piercings here to stay? They seem fundamentally different to items of clothing; clothing changes, but we're all still wearing clothes of some description. Will tattoos and piercings change in a similar manner to clothes, not going away, but changing in design and location? (Please, no more dolphins!) Or will kids in 40-50 years look at my generation and find all that body decoration as quaint as we find old people who still wear hats or driving gloves?

Second, what's the next big thing? And how would you predict it? I'd guess it won't be solely decorative, but then I'm a geek. So if I try to imagine the future, I think of computing power integrated into my body. Not so much a piercing as an insertion. But I'd never have predicted tattoos or piercings. I'd never have either on my body, so why would I imagine anyone else would?

If I try to think of something without function, I end up at variations on themes we already recognize. Stick a light under the skin, and it's still something you could class as a tattoo. But then it's mostly pure chance whether any "new" thing gets a new name, or gets classed as a variation on an old theme. And it'll depend on its success and ubiquity whether any particular variation actually comes to stand for the original. Contrast the move from "electronic mail" to "email" to "mail" with the way the term "voice mail" isn't going anywhere because the technology sucks.

But don't get me started on how bad telephones are... I have to get some sleep.


JDIC for Mac OS X

My implementation of org.jdesktop.jdic.desktop for Mac OS X was committed today.

When I get a chance, I'll have a look at a WebKit-based org.jdesktop.jdic.browser.


A Tale In Two Headlines

Great juxtaposition of Slashdot headlines today:

* Alan Kay Decries the State of Computing

* Microsoft Expects 1 Billion Windows Users by 2010


Evolution and grep(1), and Xcode

I wanted to tell Evolution that anything in the big5 charset is junk. (Even if that weren't strictly true, I don't speak any non-European languages, so it may as well be treated as junk for all the good I'll get out of it.) Ever since posting to the ruby-core mailing list, I've been getting a lot of spam from the far east.

Anyway, most of the big5 spam I get is multi-part with no charset in the message header, but charsets in the individual parts. In most mailers, if you say "in headers", they only include the message headers. If you say "in body", they mean the decoded parts, not their headers. Running out of options on Evolution's extensive list (I don't know what the promising-sounding "Regex Match" works on; I certainly couldn't get it to work), I came to "Pipe to Program". Awesome. A quick grep -q later, and I had my crude but effective filter.

Apple are pretty good at this kind of "or there's the whole Unix tool-set if you prefer" approach too; many programs have a directory they'll look in for scripts. Each script gets an entry on a scripts menu that's represented by a little image of a scroll. (You can find the image in /Applications/Mail.app/Contents/Resources/scriptMenu.tiff, though I'm not sure you're supposed to just copy it into your own application. I wish Apple would make it easier to use the same icons. Even better if they'd make them available for cross-platform use, like Sun did with their Java LAF icons, but Mac-only would be a useful step in the right direction.)

To use a script, just select it from the menu. When you do, it's run with data passed in on standard input. What happens to the output is up to the application. Xcode lets you choose between discarding it, replacing the selection, replacing all the text, inserting after the selection, inserting after all the text, putting it in an alert panel, or sticking it on the clipboard. You choose which by putting a special comment (such as "%%%{PBXOutput=InsertAfterSelection}%%%") in your script. I don't really like this solution, but luckily my applications haven't needed this kind of control yet.

Additionally, Xcode also lets you set the selection by inserting "%%%{PBXSelection}%%%" markers in your output, in case you want to control caret positioning yourself, or automatically make a selection.

Xcode also rewrites the script before executing it, replacing various special variables such as "%%%{PBXFilePath}%%%". In my own applications, I've found it more convenient to pass such supplementary information in via environment variables.
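Here's a sketch of that environment-variable approach. All the names are invented, and the "script" is just an inline shell command: the text goes in on standard input, the supplementary context goes in via the environment, and whatever comes out on standard output is the result.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;

// Runs a user script: input on stdin, context via environment variables,
// result read back from stdout. All names are invented for illustration.
class ScriptRunner {
    static String run(String[] command, String input, String filePath) throws Exception {
        ProcessBuilder builder = new ProcessBuilder(command);
        builder.environment().put("EDITOR_FILE_PATH", filePath); // rather than %%%{PBXFilePath}%%%-style rewriting
        builder.redirectErrorStream(true);
        Process process = builder.start();

        // Feed the text to the script...
        Writer stdin = new OutputStreamWriter(process.getOutputStream());
        stdin.write(input);
        stdin.close();

        // ...and collect whatever it produces.
        StringBuilder output = new StringBuilder();
        BufferedReader stdout = new BufferedReader(new InputStreamReader(process.getInputStream()));
        String line;
        while ((line = stdout.readLine()) != null) {
            output.append(line).append('\n');
        }
        process.waitFor();
        return output.toString();
    }

    public static void main(String[] args) throws Exception {
        // A stand-in "script" that upper-cases its input and appends the file path.
        String[] script = { "sh", "-c", "tr a-z A-Z; echo \"$EDITOR_FILE_PATH\"" };
        System.out.print(run(script, "hello\n", "/tmp/example.txt"));
    }
}
```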

I like programs like these. I like to see that the Unix baby isn't being thrown out with the Bourne shell and glass tty bath water.


Praise for Blogger's interface

Blogger doesn't have the fanciest web interface I've ever seen, but it might just have the best.

Obvious, but complete. Nice little touches (like changing the title on the preview page) but nothing that messes you about (unlike Microsoft's Outlook Web Access, which inhabits the opposite end of the usability spectrum, where you find those things built by people who don't use them themselves). Fast. Uncluttered, and yet I can't think of anywhere where I can't directly do what I want. It even looks good.

One notable omission is the lack of any spelling checking; I guess they're all Mac users too, enjoying the fact that Cocoa checks everything they type, as they type, all for free. Even if it does mean that "Blogger" is marked as a misspelling. [Update: it turns out there is spelling checking, and a very pretty implementation too, but they still lose a point for using an icon that doesn't stand out anything like as well as the other functionality, and which relies on US cultural knowledge for its meaning. And to think I almost gave them an extra point in the previous paragraph for not using any icons!]

I wonder how long it took to get to be this good?

Making the easy things easy

One of the things I find most frustrating about programming is when it's not my program I want to alter: when I have some desire that's easily specified, and that would be easy to satisfy if I had control over the source, but that's made difficult because we don't make software that works that way. Right now, for example, I want my browser to pass all incoming HTML to some code of mine, and use the HTML it returns instead of the original. Easily said, and you can imagine that if I were the browser author this would be easy to add.

As it is, software doesn't give us the right level of transparency (so we can see what's going on, and where we'd like a hook) or the right level of hookability (so we can have our desired effect). We can do things like this, but it's made far more difficult than it ought to be.

Getting a sampled profile of an already-running program is a similar example, but one where Apple at least have made a significant step forward. Mac OS' "Activity Monitor" lets you simply select a process and click "Sample Process". A window appears asking you to wait while the sampling takes place. When it's done, the window fills with a call graph. No need to use the keyboard. No process ids. No temporary files. No pipelines. No doing it again, only this time with c++filt. Not even any need to write a script to at least reduce typing.

Activity Monitor's sampling isn't perfect; it doesn't cooperate with the Java VM to get names for JIT-compiled methods, for example, so the profile of a Java program is pretty meaningless unless your problem is in native code. But it goes a long way towards making an easy thing – profiling a C++ program – easy.

Maybe there's a healthy trend coming our way: Sun's dtrace also focuses on run-time instrumentation of production systems, because they realize that that is what's most useful and interesting. We need to give up this idea that profiling (of whatever kind) needs to be specially arranged beforehand, and there are two systems that are finally moving us away from the special-arrangements way of working.

I hope that programmers a few generations from now will have trouble believing that we had to make special arrangements (in advance!) to see what our programs were doing. Just as we find it hard to believe that people used to write programs by making holes in pieces of card and handing them to another person. And I hope that they also find it incredible that our software was not only largely unobservable; it was unadaptable too.

Any color you like, as long as it's black.

Assume the position; the computer will see you now.



ProcessBuilder is a great relief. Or will be, when I have 1.5.0 on Mac OS, and can thus afford to lose 1.4.2 compatibility there. At the moment, it's mostly annoying that I can't fix my code on Linux without losing Mac OS. But the prospect of finally having decent control over process creation (and after only a decade of waiting!) is quite exciting. As is the fact that System.getenv has been unbroken too, even if they haven't fixed the name.

(Now I guess I can start holding my breath for decent signal-sending capabilities that don't require me to get naughty with reflection. Sometimes you know you're only writing something for Unix, and don't care that what you've written doesn't necessarily translate to Windows. Bummer; stick it on the pile with all those applications that look terrible on Mac OS because their authors haven't even considered other platforms. Portability is rarely free, and engineering appropriate abstractions to make it free is usually difficult. Deal with it. Until Sun accepts this, we'll have to keep living in denial, and keep writing sub-standard software.)
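To make "decent control" concrete, here's a sketch of the sort of thing ProcessBuilder finally makes straightforward (the shell command is only for demonstration): choose a working directory, add to the inherited environment, and fold stderr into stdout.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;

// The control Runtime.exec made awkward: a working directory, an edited copy
// of the inherited environment, and stderr merged into stdout.
class ProcessDemo {
    public static void main(String[] args) throws Exception {
        ProcessBuilder builder = new ProcessBuilder("sh", "-c", "pwd; echo \"$DEMO_FLAVOR\"");
        builder.directory(new File("/tmp"));               // run in a chosen directory...
        builder.environment().put("DEMO_FLAVOR", "gsub");  // ...with one extra variable...
        builder.redirectErrorStream(true);                 // ...and stderr folded into stdout.

        Process process = builder.start();
        BufferedReader in = new BufferedReader(new InputStreamReader(process.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        System.out.println("exit status " + process.waitFor());
    }
}
```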

One thing I find interesting (in a bad way) about ProcessBuilder, though, is the Smalltalk/Objective-C style of naming. So instead of:

File getDirectory();
void setDirectory(File directory);

we have:

File directory();
ProcessBuilder directory(File directory);

which is fine – if I'm completely honest, I think I probably prefer it – but not very Java-like. This worries me a little. One of the great things about Java is the completeness of its libraries. And one of the things that makes that size of library manageable is its uniformity; the way you can just guess what something will be called and how it works — and be right.

No matter how funky other languages' traditions and idioms may be, I don't think you can graft them on to Java without paying a price. Particularly if you're not going to revisit the entire library, which I'm sure the Binary Compatibility Boys wouldn't dream of, for better or worse.

There's a difference between overlapping idioms like these two ways of naming accessors, and idioms that have no parallel in one of the languages. For example: other languages make a distinction between asType and toType (wrapping the original object versus creating a new object reflecting the original object's current state), and – if a few existing JDK methods hadn't spoiled things by using the two synonymously – that could usefully be "grafted on". But where you already have a strong well-known idiom for something, as we do with getters and setters in Java, and your new idiom doesn't introduce a useful distinction, things can only get worse.


Take two languages into the shower?

Some days, C++ just isn't awkward enough. And Perl isn't unreadable enough. So what are you to do, except write Perl that reads VHDL and writes C++?

Despite myself, I have to admit to having really enjoyed myself today. No individual part of what I did was hard in itself, but the combination made things just complicated enough to require full concentration. A little more, even: I had to stop part way through and do something about the way I kept accidentally fixing errors in the generated source rather than the generator. The big "WRONG WAY" comment at the top doesn't work if you've automatically clicked on your editor's link from the compiler diagnostic (and you've not been using #line directives, which is something I hadn't even thought about until now).

Step 1 was to make the generated files read-only. Obvious, but something missing from the make rules all these years. Step 2 was to modify my editor to include a watermark repeating the text "(read-only)" behind read-only files. It works pretty well. It's subtle enough to not get in your way if you really do want to edit a read-only file (and chmod, or "Save As", say, when you've got down whatever it is you're thinking and don't want to lose), but unmissable enough that there's no way you could accidentally start editing without realizing there was something special about this file.

And indeed, that was the last time I made that mistake all day.

Non-type template parameters

I've never liked generics much, and C++ templates even less. Despite being a C++ programmer by profession – or perhaps because of my daily exposure to C++ templates – I'm glad that Java 1.5 generics won't allow non-type parameters.

It's not that I never paint myself into corners – or get painted into them by others while my back's turned – where the only ways out are duplication or a template, and I'm often glad of templates then. But I've yet to see a non-type parameter I didn't think was a mistake. I hate having my hands tied, and having to make a decision at compile-time... that's just offensive. I can feel the metal biting into my wrists.

I came across an interesting example today, where a collection class used an int template parameter as its fixed size rather than using a constructor parameter for this purpose (I wanted a fixed-size collection; those handcuffs were for security... and fun). Anyway, it turned out that being unable to make a small improvement to some code by using a size that was a run-time rather than compile-time constant was a good thing, because it's encouraged me to make a bigger improvement and go for a self-sizing container instead.

So you see, sometimes two wrongs (a magic number and an int template parameter) do make a right (a self-sizing collection).

Hello, World!

I give in. If blogging has reached the point that it's mentioned in a Stevenote and seen as important enough to warrant support in the next version of Safari – a mainstream browser for real people – I guess I may as well give in and have a go myself. I wasn't impressed when it was mentioned in the technology section of the UK "Independent" newspaper, because I know my parents would just skip over it. But maybe a little blue "RSS" icon will have them asking "what's RSS?"

Or maybe not. Maybe I just wanted an excuse to vanity publish. Or maybe Jobs is just such a great salesman that I wanted some of what he had to show, even if Tiger's not available until next year.

Hopefully this blog will give me an outlet for things that don't make sense as mail (expose people to whatever's in your head, and they're likely to think you're speeding) and don't seem worth the hassle of writing an article for a journal or website.