2006-01-06

Network graphics can be expensive

Raymond Chen recently wrote "Taxes: Remote Desktop Connection and painting", with example code for a situation where you want double-buffering locally but not remotely, because remotely you'd have to pay to ship a big bitmap across the network rather than just a line-drawing request.

I had a similar situation recently with the Terminator terminal emulator, which uses an alpha-blended rectangle to implement a really pleasing visual bell (rather than the traditional ugly reverse-video solution). A user complained that Terminator kept hanging when he ran vi(1). It turned out that each time he hit escape when he didn't need to, vi(1) would sound the bell, and he'd have to wait a second or two while the original image was shipped across the network, the rectangle was composited over it, and the new image was shipped back. Ouch.

I added an option so the user can ask not to have the "fancy" bell, in which case we fall back to XOR, which takes place on the remote display, so no pixels have to cross the network. It looks okay, but not nice enough that we'd want to drop the alpha-blended solution.
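To make the difference concrete, here's a rough sketch of the two kinds of bell. This isn't Terminator's actual code, just an illustration: the alpha-blended fill needs the existing pixels, which on a remote display means a round trip of image data, while an XOR fill is something the X server can carry out by itself.

    import java.awt.*;

    public class VisualBell {
        // "Fancy" bell: composite a translucent rectangle over the existing
        // pixels. Java 2D has to read those pixels back to do the blend.
        public static void flashFancy(Graphics2D g, int width, int height) {
            g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.4f));
            g.setColor(Color.WHITE);
            g.fillRect(0, 0, width, height);
        }

        // Fallback bell: an XOR fill, which needs no knowledge of the
        // existing pixels on our side of the connection.
        public static void flashCheap(Graphics2D g, int width, int height) {
            g.setXORMode(Color.WHITE);
            g.fillRect(0, 0, width, height);
            g.setPaintMode(); // Back to normal painting.
        }
    }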

The trouble is, I don't want the end-user to have to know about this stuff, or have to make the decision, or have to suffer XOR even when running locally just because they sometimes run remotely.

In Raymond's case, he was writing Win32 code, and he can use GetSystemMetrics(SM_REMOTESESSION) to find out which situation he's in. I'd been wondering what Win32 offers here since reading about Microsoft's plans to support different levels of graphical fluff in Vista, because it seems to me that you may as well treat the "crappy Intel on-board graphics" case (say) the same as the "remote X11/RDC display" case. Or at least, it would be nice to have some bogomips-like measure of how good the graphics hardware is.

I scouted around GraphicsConfiguration and GraphicsEnvironment but didn't find any way to tell the difference.
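For what it's worth, the kind of thing you can find out looks like this. Everything here describes the hardware Java 2D thinks it's rendering with, but nothing says whether the pixels end up on this machine or on one at the other end of a network connection:

    import java.awt.*;

    public class GraphicsProbe {
        public static void main(String[] args) {
            GraphicsEnvironment env = GraphicsEnvironment.getLocalGraphicsEnvironment();
            for (GraphicsDevice device : env.getScreenDevices()) {
                GraphicsConfiguration config = device.getDefaultConfiguration();
                // Acceleration hints, but no hint about remote versus local.
                System.out.println(device.getIDstring());
                System.out.println("  accelerated memory: " + device.getAvailableAcceleratedMemory());
                System.out.println("  accelerated images: " + config.getImageCapabilities().isAccelerated());
                System.out.println("  page flipping:      " + config.getBufferCapabilities().isPageFlipping());
            }
        }
    }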

One possibility is to time how long it takes to render something, and decide whether that's acceptable. There are two problems with this. The first is that it's rather like walking up to someone in the street, kicking them in the plums, and then apologizing if it turns out that they're not a masochist: it would be distinctly preferable to work out beforehand whether your behavior was going to be appropriate. The other problem is that the obvious time (from the user's point of view) to test and measure is at start-up, but that's the worst time from the JVM's point of view because it'll still be getting into its stride. If you've tried our EventDispatchThreadHangMonitor, you'll know that Java can easily go away for a second or more to load a class or a font, and assuming your program goes to some effort to keep hard work off the EDT, it's most prone to this behavior at start-up.
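If you did want to try it anyway, a sketch might look something like this. The method name, the 100 ms threshold, and the decision to paint straight onto the on-screen component (the kick in the plums) are all made up for illustration:

    import java.awt.*;
    import javax.swing.*;

    public class BellBenchmark {
        // Times one alpha-blended fill over the component and guesses whether
        // it's cheap enough to use for the visual bell.
        public static boolean fancyBellSeemsCheapEnough(JComponent terminal) {
            Graphics2D g = (Graphics2D) terminal.getGraphics();
            if (g == null) {
                return false; // Not displayable yet, so assume the worst.
            }
            try {
                long start = System.currentTimeMillis();
                g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
                g.setColor(Color.WHITE);
                g.fillRect(0, 0, terminal.getWidth(), terminal.getHeight());
                Toolkit.getDefaultToolkit().sync(); // Flush so the display's work is included in the timing.
                long elapsed = System.currentTimeMillis() - start;
                terminal.repaint(); // Undo the damage we just did to the on-screen pixels.
                return elapsed < 100; // Arbitrary threshold.
            } finally {
                g.dispose();
            }
        }
    }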

I raised Sun bug 6362233, and the evaluating engineer didn't come up with anything better than looking at $DISPLAY, which I'd already rejected because in the (not uncommon) VNC case the X11 display is still on localhost (usually on a display number other than 0) but the pixels are actually being displayed on a different machine.
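For completeness, a minimal sketch of that heuristic (assuming Java 5's System.getenv, and assuming that a DISPLAY with no hostname, or one naming localhost, counts as "local" — which is exactly the assumption VNC breaks):

    public class DisplayHeuristic {
        public static boolean displayLooksLocal() {
            String display = System.getenv("DISPLAY");
            if (display == null) {
                return true; // Not X11 (Windows, say), so assume local.
            }
            return display.startsWith(":") || display.startsWith("unix:") || display.startsWith("localhost:");
        }
    }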

So while looking at $DISPLAY would be better than nothing, it wouldn't have solved the problem our particular user had. Nor would it have solved Raymond's Win32 problem. (I've never used it, but I assume Apple's Remote Desktop has an analogous problem.)