2006-01-30

Synergy versus x2vnc

I've long used x2vnc, but before I start, I should make an admission. Around 2000 I woke up in a hotel room in Milan with roughly the x2vnc idea. When I got home, I implemented it. Pleased with it, and having persuaded a colleague or two of its usefulness, I looked about for somewhere to contribute it, and found x2vnc had beaten me to it. It was almost exactly the same, except that my clipboard handling was more modern, so I ditched my program and became an x2vnc user. My time's divided finely enough between the applications I do support, without me adding new ones willy-nilly.

Although I don't know my usernames and passwords, I seem to have two sourceforge accounts because I get two identical mails a month showing me, amongst other things, the most popular sourceforge projects. I always have a look, if only to snigger at the number of IRC clients, IM clients, and BitTorrent clients. Recently, though, two new entries caught my eye. One was a Win32 RSS reader that's so bad it's almost funny, and which manages to make SharpReader look good. I've given up reading RSS on Win32 thanks to those two.

The other interesting new entry, though, was Synergy, a modern alternative to x2vnc that's better in many ways. I'll assume in what follows that you've used x2vnc. If you haven't, I recommend you skip the comparison and go straight to Synergy.

Synergy pretty much fixes all the things I dislike about x2vnc. Synergy's improvements over x2vnc include:

  • Synergy automatically restarts itself if a connection is lost.
  • Synergy automatically skips over a non-responsive computer if you have another display logically on the "other side" of the non-responsive computer's display.
  • Synergy supports Mac OS. (There's a client and server for each of Mac OS, Win32, and X11. You can connect up any combination of these.)
  • Synergy's server/client distinction gives you one single convenient point of configuration, making it much easier to set up connections to more than two machines (with arbitrary topology between their displays).
  • Synergy understands the difference between the X11 selection and the X11 clipboard, and sets the X11 clipboard. That is, it works much better with modern X11 applications.
  • Synergy seems to suffer much less than x2vnc from the problem of stuck-down modifier keys.
  • Synergy correctly handles starting when Windows starts, and continuing to work when I log in. (WinVNC and x2vnc used to manage this, but it hasn't been working for a while, and the morning I finally switched, I just couldn't get them to work after I logged in at all. Synergy "just works".)
  • Synergy's client and server have none of the overhead of VNC. They just care about key and mouse events. If you need the VNC functionality, you might see this as a negative.

There are just a few areas where it's still lacking from my point of view:

  • The documentation doesn't make the usefulness of aliases obvious. The "Three Stooges" example may have seemed funny at the time, but it disguises the fact that you will need this feature if, say, one of your machines thinks of itself as "machine.my.domain.net" rather than just "machine". (My Mac OS machine suffers from this; there's a configuration sketch after this list.)
  • There's no easy way to automatically start up on Mac OS. I think this is by design on Apple's part, for security reasons, but it's a shame. (GNOME correctly saves the server as part of your session, or you can add the line "synergys" to your .Xsession, and the Win32 client has a button you can click on to make it start as a system service.)
  • There's no easy support for authenticated and encrypted communication, so all your keystrokes go out in plaintext. (No competitor I'm aware of offers secure communication either, though Synergy's to-do list suggests it's on the plan.)
  • Cygwin's rxvt doesn't work for me. Everything I type seems to be interpreted as a control character. Luckily, Terminator for Cygwin is in pretty good shape now, and is a superior replacement.
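
To make the aliases (and arbitrary topology) points concrete, here's a rough sketch of the kind of synergy.conf I mean. I've invented the hostnames and I'm going from memory rather than copying a working configuration, so check Synergy's documentation for the exact syntax:

section: screens
    desk:
    powerbook:
end

section: aliases
    # The Mac thinks of itself by its fully-qualified name, so tell
    # Synergy that both names refer to the same screen.
    powerbook:
        powerbook.my.domain.net
end

section: links
    desk:
        right = powerbook
    powerbook:
        left = desk
end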

Despite these niggles, I'm converted. I wonder how many more years before sourceforge's newsletter next recommends something worthwhile?

2006-01-25

Are we ready to replace /bin/true?

I've been talking about re-implementing someone else's C program in Java recently. A friend reminded me that I wouldn't be able to use the Java replacement in the same way as I use the C original, which is by invoking it as a subprocess via ProcessBuilder.

Now that's okay, and not the point, because part of the reason for re-implementing the C program would be so we could use it as a library rather than as a separate program, and avoid all the I/O associated with the C program, and all the code on both sides associated with the interchange file format.

But I was still curious. I know that putting a window on the screen is the kiss of death as far as start-up time goes for a Java program, but how does "Hello, World!" compare these days?

Here are the three programs I used:

hydrogen:~$ cat x.c
#include <stdio.h>
#include <stdlib.h>
int main(void) {
    printf("Hello, World!\n");
    exit(0);
}
hydrogen:~$ cat x2.cpp
#include <iostream>
#include <cstdlib>
int main(void) {
    std::cout << "Hello, World!\n";
    exit(0);
}
hydrogen:~$ cat x3.java
public class x3 {
    public static void main(String[] args) {
        System.out.print("Hello, World!\n");
        System.exit(0);
    }
}
hydrogen:~$


First Mac OS 10.4.4 on the dual G5, running Java 1.5.0_06 (this is the last of several runs):

hydrogen:~$ time ./x ; time ./x2 ; time java x3
Hello, World!

real 0m0.006s
user 0m0.001s
sys 0m0.005s
Hello, World!

real 0m0.006s
user 0m0.001s
sys 0m0.005s
Hello, World!

real 0m0.211s
user 0m0.123s
sys 0m0.077s
hydrogen:~$

The interesting thing there is that the C program is no faster than the C++ program. This doesn't tend to be true of trivial programs on Linux, because the C++ program will involve more shared libraries, and that leads to a longer start-up time.

Now Ubuntu (Linux 2.6.12-10) on the Opteron, running Java 1.5.0_06 for x86 and for amd64:

helium:~$ time ./x
Hello, World!

real 0m0.001s
user 0m0.000s
sys 0m0.002s
helium:~$ time ./x2
Hello, World!

real 0m0.003s
user 0m0.001s
sys 0m0.002s
helium:~$ time /usr/local/jdk/jdk1.5.0_06/bin/java -cp ~ x3
Hello, World!

real 0m0.154s
user 0m0.065s
sys 0m0.009s
helium:~$ time /usr/local/jdk/jdk1.5.0_06-amd64/bin/java -cp ~ x3
Hello, World!

real 0m0.211s
user 0m0.134s
sys 0m0.011s

There we see that C++'s extra shared libraries cause it to start very slightly slower, and that the x86 JVM's client compiler lets it start significantly quicker than the amd64 JVM's server compiler. (The amd64 JVM doesn't have a client compiler.) Both JVMs are well behind the C++ program, though.

So although these are respectable times compared to the early days, and they're not important when you're talking about interactive programs or servers, you still wouldn't want the Ruby interpreter rewritten in Java, let alone something like /bin/true.

2006-01-22

How does Terminator know what processes might die on Linux and Cygwin?

If you enjoyed How does Terminal.app know what processes might die? you'll love this. If you didn't, you may as well stop reading right now.

If you remember, I finished my tour of Mac OS options for finding all processes using a particular tty with the following:
The only bummer is that Linux's sysctl.h doesn't include KERN_PROC_TTY, so I guess we'll have to grub around in /proc or call lsof(1) there.
The easier option for Linux, it seemed, was to use lsof(1). It was pretty slow at around 240ms on my Opteron compared to Mac OS' 7ms on my dual G5, but that seemed just about bearable. 300ms seems to be about the point where users at my level of impatience notice a delay.

What I didn't think about, though, is that most machines aren't very lightly loaded Opterons. A Pentium 4 with a lot more processes was regularly taking about 0.5s, which was noticeable. And then the complaints started coming in of times over a second on another Pentium 4.

lsof(1) doesn't scale well.

I knocked up a little Ruby script proof of concept to see how well grubbing around in /proc would work:
#!/usr/bin/ruby -w

if ARGV.length() != 1
  $stderr.puts("usage: lsof.rb <absolute-filename>")
  exit(1)
end

filename = ARGV[0]

def has_file_open(pid, filename)
  Dir["/proc/#{pid}/fd/*"].each() {
    |fd_file|
    begin
      linked_to_file = File.readlink(fd_file)
      if filename == linked_to_file
        return true
      end
    rescue
      # Ignore errors.
    end
  }
  return false
end

pids = []
Dir.chdir("/proc")
Dir["[0-9]*"].each() {
  |pid|
  if File.stat("/proc/#{pid}/fd").readable?()
    if has_file_open(pid, filename)
      pids << pid
    end
  end
}

names = []
pids.sort().uniq().each() {
  |pid|
  # Extract the "(name) " field from /proc/<pid>/stat.
  name = IO.readlines("/proc/#{pid}/stat", " ")[1]
  # Rewrite it as "name(pid)".
  names << name.sub(/^\((.*)\) $/) { |s| "#$1(#{pid})" }
}
puts(names.join(", "))
exit(0)

This was significantly faster, and had the advantage of working on Cygwin, which doesn't ship with lsof(1) but does have a sufficiently compatible /proc. Even on Cygwin it was only taking about 70ms. On Linux (on the Pentium 4) it was down around 40ms.

The killer, though, the thing that gave me sufficient impetus to actually make the change, is that lsof(1) can hang if your Linux box has a hung mount. It was bad enough that it was taking over a second (on the event dispatch thread!), but worse that it could sometimes just go away and never come back...

lsof(1) doesn't play nice with network file systems.

A quick rewrite of my Ruby in C++ later, and there's no danger of this part of Terminator hanging over a hung mount. We get our result in exactly the form we want in 20ms (on the Pentium 4 Linux machine). The Opteron is down to around 10ms: the same as the dual G5's Mac OS sysctl(3).

Why did I rewrite the script in C++ rather than just call out? No particularly good reason. I didn't really want to start a new process when the user's probably trying to kill something for the same reason that shells tend to have kill(1) built in. But really it came about because I initially thought "there's no reason not to do this in Java", and then realized that, actually, file system access is one of Java's worst foibles.

Maybe I'm overly sensitive about that, given how much of my life is taken up with file system performance, but Java also has huge functionality gaps when it comes to the file system. Don't even talk to me about symbolic links! The C++ was easy, less verbose than the equivalent Java, and roughly as verbose as the equivalent Ruby. A good POSIX C++ binding would have made things even better.
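
For the curious, the C++ is nothing fancy; it's pretty much a transliteration of the Ruby. Here's a sketch from memory of the approach rather than the exact code that's in Terminator, and I've left out the /proc/<pid>/stat name lookup to keep it short:

// A sketch of the /proc-grubbing approach: walk /proc, and for each process
// see whether any of its file descriptors is a symbolic link to the given
// filename (our tty).
#include <dirent.h>
#include <unistd.h>

#include <iostream>
#include <string>
#include <vector>

static bool has_file_open(const std::string& pid, const std::string& filename) {
    std::string fd_directory = "/proc/" + pid + "/fd";
    DIR* dir = opendir(fd_directory.c_str());
    if (dir == 0) {
        // Not our process, or it went away; ignore it, like the Ruby did.
        return false;
    }
    bool result = false;
    for (dirent* entry = readdir(dir); entry != 0; entry = readdir(dir)) {
        std::string fd_file = fd_directory + "/" + entry->d_name;
        char linked_to[1024];
        ssize_t length = readlink(fd_file.c_str(), linked_to, sizeof(linked_to) - 1);
        if (length > 0 && filename == std::string(linked_to, length)) {
            result = true;
            break;
        }
    }
    closedir(dir);
    return result;
}

int main(int argc, char* argv[]) {
    if (argc != 2) {
        std::cerr << "usage: lsof <absolute-filename>" << std::endl;
        return 1;
    }
    const std::string filename(argv[1]);
    std::vector<std::string> pids;
    DIR* proc = opendir("/proc");
    if (proc == 0) {
        std::cerr << "couldn't open /proc" << std::endl;
        return 1;
    }
    for (dirent* entry = readdir(proc); entry != 0; entry = readdir(proc)) {
        const std::string name(entry->d_name);
        // Only the all-digit names correspond to processes.
        if (name.find_first_not_of("0123456789") == std::string::npos && has_file_open(name, filename)) {
            pids.push_back(name);
        }
    }
    closedir(proc);
    for (size_t i = 0; i < pids.size(); ++i) {
        std::cout << pids[i] << std::endl;
    }
    return 0;
}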

Anyway, the users are quiet again, so I can go back in my box.

2006-01-08

How does Terminal.app know what processes might die?

With Mac OS' Terminal, if you close a window that has processes still running in it, you'll be warned and asked if you really want to close the window. The warning will include all the processes' names. So if you're running top(1) from vim(1) from bash(1), you'll be told that "Closing this window will terminate the following processes inside it: bash, vim, top".

There are two obvious things these processes share. Firstly, their process ids (pids) and parent process ids (ppids) would let you work out their relationship, and Terminal presumably remembers the pid of the process it started. Alternatively, the processes will all have access to the same tty, so if you could find all those processes, you'd also have your list.

The advantage of finding the processes via the tty is that you automatically include processes whose ppid chain back to Terminal's direct descendant is broken, and you automatically exclude processes that have given up their connection to the terminal.

So, if we want similar functionality in our Terminator terminal emulator, how are we going to get at the list? One option would be to look at the output of lsof(1). It even has a fairly convenient mode for being called by other programs. That's fine for Linux, but it's way too slow on Mac OS. It's 10 times slower, in fact. Here's Mac OS:

hydrogen:~$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.4.3
BuildVersion: 8F46
hydrogen:~$ tty
/dev/ttyp4
hydrogen:~$ time lsof -w -Fc /dev/ttyp4
p14244
cbash
p14526
clsof

real 0m1.453s
user 0m0.076s
sys 0m1.367s
hydrogen:~$ time lsof -w -Fc /dev/ttyp4
p14244
cbash
p14528
clsof

real 0m1.431s
user 0m0.076s
sys 0m1.351s
hydrogen:~$

And here's Linux:

Linux helium 2.6.12-10-amd64-generic #1 Fri Nov 18 11:51:07 UTC 2005 x86_64 GNU/Linux

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
helium:~$ tty
/dev/pts/2
helium:~$ time lsof -w -Fc /dev/pts/2
p920
cbash
p928
clsof

real 0m0.182s
user 0m0.062s
sys 0m0.119s
helium:~$ time lsof -w -Fc /dev/pts/2
p920
cbash
p930
clsof

real 0m0.182s
user 0m0.063s
sys 0m0.118s
helium:~$

Even if the performance were acceptable, though, lsof(1) on Mac OS can't recognize when top(1) is running. The reason for this is that top(1) is setuid root so it can get the information it needs. Ugh. lsof(1), on the other hand, is setgid kmem, so that it can open /dev/mem. So although lsof(1) can find out about open file descriptors, it can only find out about ones for processes running as the same user, which won't include setuid programs like Mac OS' top(1).

hydrogen:~$ ls -l `which top`
-rwsr-xr-x 1 root wheel 83088 Mar 20 2005 /usr/bin/top
hydrogen:~$ ls -l `which lsof`
-rwxr-sr-x 1 root kmem 111356 Mar 26 2005 /usr/sbin/lsof
hydrogen:~$

So lsof(1) doesn't look like it's going to be so convenient after all.

If you try a few plausible Google searches or play about with apropos(1), you'll likely come across kvm(3). In particular, kvm_getprocs(3), which lets you pass KERN_PROC_TTY to get kinfo_proc structures for all matching processes.

It's badly documented, but it's not too hard to work out. The main trick is realizing that the value you pass alongside KERN_PROC_TTY is the st_rdev field from stat(2).

#include <iostream>

#include <fcntl.h>
#include <kvm.h>
#include <sys/sysctl.h>
#include <sys/stat.h>

int main(int argc, char* argv[]) {
    const char* tty_name = argv[1];

    struct stat sb;
    if (stat(tty_name, &sb) != 0) {
        perror("stat");
        return 1;
    }

    kvm_t* kvm = kvm_open(0, 0, 0, O_RDONLY, "kvm_open");
    if (kvm == 0) {
        return 1;
    }

    int count = 0;
    kinfo_proc* procs = kvm_getprocs(kvm, KERN_PROC_TTY, sb.st_rdev, &count);
    for (int i = 0; i < count; ++i) {
        std::cout << procs->kp_proc.p_pid << " "
                  << procs->kp_proc.p_comm << std::endl;
        ++procs;
    }

    int result = kvm_close(kvm);
    return result;
}

This solution is nice and fast, and it gives us the right answer in the "bash(1) running vim(1) running top(1)" case, but because it needs access to /dev/mem it needs to have privileges at least equivalent to setgid kmem or you'll get "kvm_open: /dev/mem: Permission denied". Here it is running as root:

hydrogen:~$ time sudo ./kvm /dev/ttypd
17762 top
17724 vim
6603 bash

real 0m0.032s
user 0m0.004s
sys 0m0.015s
hydrogen:~$

So this gives us the right answer, in a sensible amount of time, but its requirements are unacceptable.

So, what would Brian Boitano do? Let's ask the kernel using ktrace(1) and kdump(1), Mac OS' much less convenient equivalent of strace(1). Here's the relevant excerpt, with a bit of context:

198 Terminal RET read 24/0x18
198 Terminal CALL read(0x21,0xf2243bd0,0x200)
198 Terminal CALL __sysctl(0xbfffe5d0,0x4,0,0xbfffe5f0,0,0)
198 Terminal RET __sysctl 0
198 Terminal CALL __sysctl(0xbfffe5d0,0x4,0x1996800,0xbfffe5f0,0,0)
198 Terminal RET __sysctl 0
198 Terminal CALL __sysctl(0xbfffe910,0x4,0,0xbfffe930,0,0)
198 Terminal RET __sysctl 0
198 Terminal CALL __sysctl(0xbfffe910,0x4,0x1996800,0xbfffe930,0,0)
198 Terminal RET __sysctl 0
198 Terminal CALL __sysctl(0xbfffe910,0x4,0,0xbfffe930,0,0)
198 Terminal RET __sysctl 0
198 Terminal CALL __sysctl(0xbfffe910,0x4,0x1997400,0xbfffe930,0,0)
198 Terminal RET __sysctl 0
198 Terminal CALL __sysctl(0xbfffe420,0x4,0,0xbfffe440,0,0)
198 Terminal RET __sysctl 0
198 Terminal CALL __sysctl(0xbfffe420,0x4,0x1996800,0xbfffe440,0,0)
198 Terminal RET __sysctl 0
198 Terminal CALL __sysctl(0xbfffd6c0,0x4,0,0xbfffd6e0,0,0)
198 Terminal RET __sysctl 0
198 Terminal CALL __sysctl(0xbfffd6c0,0x4,0x1996800,0xbfffd6e0,0,0)
198 Terminal RET __sysctl 0
198 Terminal CALL ppc_gettimeofday(0xf0e9c8c0,0)
198 Terminal RET ppc_gettimeofday 1136687072/0x43c077e0

So it looks like it's calling sysctl(3). Looking at the man page, there's an example that fetches process information for processes with pids less than 100 that suggests we're on the right track, and further down there's this:

KERN_PROC
     Return the entire process table, or a subset of it. An array of
     pairs of struct proc followed by corresponding struct eproc
     structures is returned, whose size depends on the current number
     of such objects in the system. The third and fourth level names
     are as follows:

     Third level name          Fourth level is:
     KERN_PROC_ALL             None
     KERN_PROC_PID             A process ID
     KERN_PROC_PGRP            A process group
     KERN_PROC_TTY             A tty device
     KERN_PROC_UID             A user ID
     KERN_PROC_RUID            A real user ID

The system header files suggest that (as the example earlier in the man page implies) we can ignore that stuff about pairs of structs, and just talk in terms of kinfo_proc. As with the kvm functionality, the tty device is the st_rdev field from stat(2).

There's a slight complication in that with sysctl(3) we have to allocate memory for the table ourselves, and we need to do the usual Unix dry-run trick so we know how much space to allocate. Here's the code:


#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/stat.h>

#include <iostream>
#include <vector>

int main(int argc, char* argv[]) {
    const char* tty_name = argv[1];

    struct stat sb;
    if (stat(tty_name, &sb) != 0) {
        perror("stat");
        return 1;
    }

    // Fill out our MIB.
    int mib[4];
    mib[0] = CTL_KERN;
    mib[1] = KERN_PROC;
    mib[2] = KERN_PROC_TTY;
    mib[3] = sb.st_rdev;

    // How much space will we need?
    size_t len = 0;
    if (sysctl(mib, sizeof(mib)/sizeof(int), NULL, &len, NULL, 0) == -1) {
        perror("sysctl test");
        return 1;
    }

    // Actually get the information.
    std::vector<char> buffer;
    buffer.resize(len);
    if (sysctl(mib, sizeof(mib)/sizeof(int), &buffer[0], &len, NULL, 0) == -1) {
        perror("sysctl real");
        return 1;
    }

    // Dump it.
    int count = len / sizeof(kinfo_proc);
    kinfo_proc* kp = (kinfo_proc*) &buffer[0];
    for (int i = 0; i < count; ++i) {
        std::cout << kp->kp_proc.p_pid << " "
                  << kp->kp_proc.p_comm << std::endl;
        ++kp;
    }
    return 0;
}

And see how it runs:

hydrogen:~$ time ./sysctl /dev/ttypd
17762 top
17724 vim
6603 bash

real 0m0.007s
user 0m0.001s
sys 0m0.006s
hydrogen:~$

We get the right answer, really quickly, and we don't need any special privileges. Awesome!

The only bummer is that Linux's sysctl.h doesn't include KERN_PROC_TTY, so I guess we'll have to grub around in /proc or call lsof(1) there.

2006-01-06

Network graphics can be expensive

Raymond Chen recently wrote Taxes: Remote Desktop Connection and painting, with example code for a situation where you want double-buffering locally but not remotely, because remotely you have to pay to ship a big bitmap rather than just a line-drawing request.

I had a similar situation recently with the Terminator terminal emulator, which uses an alpha-blended rectangle to implement a really pleasing visual bell (rather than the traditional, ugly reverse-video solution). A user complained that Terminator kept hanging when he ran vi(1), and it turned out that each time he hit escape when he didn't need to, vi(1) would sound the bell, and he'd have to wait a second or two while the original image was shipped across the network, the rectangle was composited over it, and the new image was shipped back. Ouch.

I added an option so the user can ask not to have the "fancy" bell, in which case we fall back to XOR, which takes place on the remote machine. It looks okay, but not nice enough that we'd want to drop the alpha-blended solution.

The trouble is, I don't want the end-user to have to know about this stuff, or have to make the decision, or have to suffer XOR even when running locally just because they sometimes run remotely.

In Raymond's case, he was writing Win32 code, and he can use GetSystemMetrics(SM_REMOTESESSION) to find out which situation he's in. I'd been wondering what Win32 offers ever since reading about Microsoft's plans to support different levels of graphical fluff in Vista, because it seems to me that you may as well treat the "crappy Intel on-board graphics" case (say) the same as the "remote X11/RDC display" case. Or at least, it would be nice to have some bogomips-like measure of how good the graphics hardware is.
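
For reference, the Win32 check really is that simple. This is just a sketch to show the call I mean, not code from Raymond's article:

// SM_REMOTESESSION needs a Windows 2000 or later SDK target.
#define _WIN32_WINNT 0x0500
#include <windows.h>

#include <iostream>

int main() {
    // GetSystemMetrics(SM_REMOTESESSION) returns non-zero when we're running
    // inside a remote desktop session, and zero when we're running locally.
    if (GetSystemMetrics(SM_REMOTESESSION) != 0) {
        std::cout << "remote session: go easy on the bitmaps" << std::endl;
    } else {
        std::cout << "local session: fancy graphics are cheap" << std::endl;
    }
    return 0;
}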

I scouted around GraphicsConfiguration and GraphicsEnvironment but didn't find any way to tell the difference.

One possibility is to time how long it takes to render something, and decide whether it's acceptable. There are two problems with this. The first is that it's rather like walking up to someone in the street, kicking them in the plums, and then apologizing if it turns out that they're not a masochist: it would be distinctly preferable if you'd work out beforehand whether your behavior was going to be appropriate. The other problem is that the obvious time (from the user's point of view) to test and measure is when you start, but that's the worst time from the JVM's point of view because it'll still be getting in to its stride. If you've tried our EventDispatchThreadHangMonitor you'll know that Java can easily go away for a second or more to load a class or load a font, and assuming your program goes to some effort to keep hard work off the EDT, it's most prone to this behavior at start up.

I raised Sun 6362233 and the evaluating engineer didn't come up with anything better than looking at $DISPLAY, which I'd already rejected because in the (not uncommon) VNC case, the X11 display is still on localhost (on a screen other than 0, usually) but the pixels are actually being displayed on a different machine.

So while looking at $DISPLAY would be better than nothing, it wouldn't solve the problem our particular user had. Nor would it have solved Raymond's Win32 problem. (I've never used it, but I assume Apple's Remote Desktop has the analogous problem.)

2006-01-01

Requiring a minimum version of OS X for your Java application

There was a recent blog post Apple Bug Friday: Minimum Application Version where Dan Wood complained about Apple's code to check that an application was being run on a suitable version of Mac OS. Chris Campbell responded in Requiring a minimum version of OS X for your application by supplying an executable that would perform the check, present a better dialog on failure than Apple's one, and call your actual executable on success.

I write Java applications, mostly, so I don't really care all that much about the OS version. I'm usually more interested in the Java VM version. Plus I don't want the hassle of more native code. I'd rather have a script.

At the moment, my requirement is Java 5 which in turn means Mac OS 10.4, so I wrote a little Ruby script to make those checks and prompt the user to upgrade appropriately. At the time of writing, Apple hasn't released a Java 5 VM that makes itself the default, so the calling script is expected to work around that by putting /System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Commands/ at the head of $PATH.

I had to learn a little AppleScript to present the error because Mac OS doesn't have an equivalent of zenity(1) yet. In particular, John Gruber's BBEdit and TextWrangler CSS Syntax Checker 1.0.1 showed how to display the best error dialog possible on Mac OS 10.4 (where it looks exactly how you'd expect from a Cocoa program) but fall back to a crude approximation on earlier versions. Someone who knew enough AppleScript (if it weren't for John Gruber, I wouldn't believe anyone actually still used it) could probably knock up a decent imitation of zenity(1) for Mac OS. (I'll get round to it myself if I ever come across a useful enough script that uses it.)

Anyway, the idea with this script is that it displays an error dialog if necessary, and reports via its exit status whether you should carry on and try to launch. This is quite convenient in conjunction with && in your launch script. (And probably not much use to you if you're a JarBundler user.)
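
By way of illustration, the launch script ends up looking something like this. The jar name is invented, and the details will obviously differ for your application:

#!/bin/bash
# Put Apple's Java 5 at the head of $PATH so that both the version check and
# the application itself find it if it's installed.
export PATH=/System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Commands:$PATH
# Only try to start the application if the check passes.
./ensure-suitable-mac-os-version.rb && java -jar MyApplication.jar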

Anyway, no screenshots on my blog. Just code:

#!/usr/bin/ruby -w

# Reports (via its status code on exit) whether we're running on a suitable
# Mac OS installation.

# Usage: ensure-suitable-mac-os-version.rb && program-requiring-10.4-with-Java-5.0

# FIXME: we should take the required Mac OS and Java versions as parameters.
# FIXME: we should be clever enough to offer to open Software Update if you
# only need to go up a minor revision.

def informational_alert(title, message)
  # The whole point of this script is to cope with old software, and
  # "display alert" is only available on 10.4.
  # Before then we need "display dialog".
  has_display_alert = `osascript -e 'property AS_VERSION_1_10 : 17826208' -e '((system attribute \"ascv\") >= AS_VERSION_1_10)'`.chomp() == "true"
  if has_display_alert
    display_command = "display alert \"#{title}\" message \"#{message}\" as informational"
  else
    display_command = "display dialog \"#{title}\" & return & return & \"#{message}\" buttons { \"OK\" } default button 1 with icon note"
  end
  system("osascript -e 'tell application \"Finder\"' -e 'activate' -e '#{display_command}' -e 'end tell' > /dev/null")
end

# Do we have a good enough version of Mac OS?
actual_mac_os_version = `sw_vers -productVersion`.chomp()
if actual_mac_os_version.match(/^10\.[4-9]/) == nil
  informational_alert("This application requires a newer version of Mac OS X.", "This application requires Mac OS 10.4, but you have Mac OS #{actual_mac_os_version}.\n\nPlease upgrade.")
  exit(1)
end

# Do we have a good enough version of Java?
actual_java_version = `java -fullversion 2>&1`.chomp()
actual_java_version.match(/java full version "(.*)"/)
actual_java_version = $1
if actual_java_version.match(/^1\.[5-9]\.0/) == nil
  informational_alert("This application requires a newer version of Java.", "This application requires Java 5, but you have Java #{actual_java_version} for Mac OS #{actual_mac_os_version}.\n\nPlease upgrade.")
  exit(1)
end

# Everything's cool.
exit(0)

You'll have to excuse the long lines because Blogger's been broken for a year now when it comes to handling backslash. I'd like to use backslashes for line continuation in pre-formatted text because I've no desire to manually break lines. That's a sure-fire way to introduce errors. No matter how you write the backslashes, Blogger still thinks they're its to mangle. I reported the bug, but they don't care.

"Do no evil"? "Do nothing", more like.

As usual, the latest version of the script will be in the salma-hayek library linked to somewhere on this page.