Review: Camino

Okay, so this isn't really a review. I'm not going to list all the features, measure performance, and make up a score out of ten or anything like that. Basically, all I have to say is "if you're using Mac OS, Camino should be your web browser".

I'll elaborate a little though, because you might well ask why I'm wasting my time like this. This post, which takes you but a minute or two to read (even assuming you don't skip over it), will take me at least an hour to write. But I've been surprised recently to find that other Mac users I know, despite being men and women of sophistication, aren't using Camino. They're generally using Safari or Firefox.

Safari is okay. It's not the worst browser ever; IE for Mac OS X probably holds that title, and I for one am grateful that it's dead. Safari's interface is Mac-like, it has a few nice features, and little touches like the combined Stop/Reload button are classy. I'll tell you the truth and admit that although I was a staunch Camino user until Mac OS 10.4 (and whatever version of Safari shipped with it), 10.4's Safari was good enough, and Camino at the time was stagnant enough, that I didn't bother installing Camino on my new Mac OS 10.4 system. Safari was no longer a buggy, crash-prone piece of junk that didn't render the few sites I care about very well. It was "an okay browser".

The trouble with Safari is that it's quite hard to say anything positive about it. Whereas Firefox on Linux (or even Windows) is a pretty sweet browser. I've suffered enough unfinished Apple software to have avoided Safari 3 so far, so for my money Firefox has the best in-page search. It's annoying that it can't highlight by default (like Evergreen or Terminator), but it's still way better than crappy dialog-based search. Plus Firefox really has no competition. I'm not going to run a KDE app, I'm not going to pay for a browser, and I've yet to understand the purpose of Epiphany. I don't know what its "gimmick" is, and the fact that even after years of reading Slashdot I can't sum up the point of Epiphany in one sentence suggests to me that there is no point.

If you try Firefox on Mac OS, though, you're in for a disappointment. For one thing, it looks like shite. I know Firefox looks a little bit odd everywhere, but it doesn't really stand out anywhere but Mac OS. But on Mac OS, you may as well be running it on Linux and displaying it via X11.app, so wrong does it look and so strange does it feel. Plus it crashes. I'd almost forgotten that browsers used to crash, but Firefox on Mac OS would crash regularly for me. I'd know it was coming because I'd start to see empty tooltips being left around, and Firefox's ability to restore the pre-crash session eased the pain, but still... I started to wonder why I kept punching myself in the nuts. There were various lesser irritations too. I actually think the final straw for me was the way Firefox on Mac OS would open new windows so that the status bar would be off-screen.

That, and the release of Camino 1.5, which happily coincided with my decision to give up on Firefox on Mac OS.

I'll tell you the bad news up front: Camino doesn't have decent in-page searching. It's the crappy dialog again. And the bookmarks icon (the need for which escapes me completely) still looks as bad as it did in 2002 when we were refugees from Mac IE. But everything else looks and feels beautiful. Camino also feels very fast, though I've done no objective measurements.

The best part, though, is the "Web Features" section of the preferences dialog. This is what all web browsers would be like if they weren't funded by kick-backs from advertisers. (Ironic, then, that as I understand it, Camino's main developer works on it in his "20% time" at the biggest web advertiser of them all.) I'm told that the intertubes are festooned with adverts these days, somewhat like the countryside in the film "Brazil". But all I see is countryside, thanks to Camino. "Block web advertising". Yes, please. "Block pop-up windows". Well, of course! "Prevent sites from changing, moving or resizing windows". Thank you. "Block Flash animations". An excellent idea. "Play animated images only once". You took the words right out of my mouth!

I know that I can install Firefox's FlashBlock extension, and I can set "image.animation_mode" to "once" (or even "none"), but what kind of user experience is that? A web browser that respects its user more than advertisers should ship with this stuff, and make it trivially easy for me to opt in or out (and, perhaps most importantly, make it obvious that I have a choice).
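For the record, the about:config route amounts to a line like this in a user.js file in your Firefox profile directory (the pref name is real; normally you'd flip it in about:config rather than edit the file):

```javascript
// "once" plays each animated image a single time; "none" freezes them entirely.
user_pref("image.animation_mode", "once");
```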

If Camino had better in-page find, I wouldn't have to equivocate. Even as it is, it's still my recommendation for Mac web browsing.


Still no free lunch: the surprising cost of %

Some performance problems are easy to find. Most aren't. This week I came across one I'd never have found if I hadn't introduced the performance regression myself while trying to improve performance.

The code in question was part of Terminator, breaking up lines into runs of the same style for rendering. I was adding a condition that basically said "don't make any run too long, because it's expensive to render a long string, and that's hugely wasteful if it's just going to be clipped anyway". I keep meaning to look into why that is, but that's not the problem here.
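To make that concrete, here's a hypothetical C++ sketch of the kind of splitting I mean. (The real code is Java, and these names are mine, not Terminator's actual API.)

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Split 'line' into runs of consecutive characters sharing the same style,
// capping each run at 'maxRunLength' so no single run is too expensive to
// render. styles[i] is the style id of line[i]. Illustrative names only.
std::vector<std::string> splitIntoRuns(const std::string& line,
                                       const std::vector<int>& styles,
                                       std::size_t maxRunLength) {
    std::vector<std::string> runs;
    std::size_t start = 0;
    for (std::size_t i = 1; i <= line.size(); ++i) {
        bool styleChanged = (i < line.size() && styles[i] != styles[start]);
        bool tooLong = (i - start == maxRunLength);
        if (i == line.size() || styleChanged || tooLong) {
            runs.push_back(line.substr(start, i - start));
            start = i;
        }
    }
    return runs;
}
```

With a generous cap the runs follow the style boundaries; with a small cap, long same-style stretches get chopped into affordable pieces.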

My problem was that when I switched from a magic number to a computed guess for the "sensible maximum run length", my benchmark's run time went up by 30%. I wasn't expecting that the better guess would improve performance, but I didn't think it would hurt it.

I was actually working in Java, but that's irrelevant, so to protect Java from random drooling Slashdot monkeys who might come across this post, I'll give an equivalent C++ example. Here's the program with a constant:

int f(int n) {
    int j = 0;
    for (int i = 0; i < 200 * 1024 * 1024; ++i) {
        if ((i % 82) == 0) {
            ++j;
        }
    }
    return j;
}

int main() {
    return f(82);
}
The non-constant version differs only in that "i % 82" is replaced with "i % n". Here are the run times on an Athlon X2 3800+ running Linux, built by g++ 4.1.2 with -O2:

x86, i % n:  4.5s
x86, i % 82: 0.7s

For comparison, here are the run times on a PowerPC G5 running Mac OS, built by g++ 4.0.1 with -O2:

ppc, i % n:  3.7s
ppc, i % 82: 0.8s

So those clever RISC chaps aren't getting their free lunch either, and any fix I come up with to help the x86-using 99% is likely to benefit the RISC-using 1% too.

One advantage of running a C++ test is you can easily ask to see the assembler. On x86, in the slow case the problem is that we're doing a divide with "idivl %esi". (There is no "mod" instruction; you do a divide and get the remainder "for free".)

If you're anything like me, you long ago stopped counting the cost of integer operations. I don't think I've paid any real attention since I was last writing assembler, and that was a long time ago now. Even then, it was mostly out of curiosity. Back in those days, processors were simple, and instruction costs were given as cycles per instruction. These days with out-of-order execution, deep pipelines, and multiple heterogeneous back-end units, what you're told instead is the instruction's latency. The relevant entry from AMD's "Software Optimization Guide for AMD64 Processors" says that this instruction's latency is 26/42/74 cycles, depending on whether it's operating on a 16, 32, or 64-bit register. So that's 42 cycles for us.

Funnily enough, division is about as scary as it gets on x86, unless you start messing with the weird vector instructions. Most branches (bogey-men of old) are cheaper than division in latency terms, and I/O instructions are about the only "normal" instructions that have higher latencies. "Replace Division with Multiplication" is considered such good advice it has three separate sections in the aforementioned AMD manual.
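To see what that advice means in practice, here's the trick sketched by hand for our divisor: multiply by a fixed-point reciprocal and shift. The magic number is one I derived myself (M = ceil(2^39/82)), and the math is only valid for non-negative 32-bit inputs, which covers the loop above. Compilers derive these constants automatically for constant divisors, which is exactly why the "i % 82" version never touches idivl.

```cpp
#include <cstdint>

// x % 82 without a divide: q = floor(x * M / 2^39) equals x / 82 for all
// x < 2^31, given M = ceil(2^39 / 82). Then the remainder is x - q * 82.
uint32_t mod82(uint32_t x) {
    const uint64_t M = 6704339194ULL;        // ceil(2^39 / 82)
    uint32_t q = (uint32_t)((x * M) >> 39);  // q == x / 82
    return x - q * 82;                       // remainder, no idivl
}
```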

In my case, I switched to decrementing a counter (which probably ends up something like "decl %eax", 1 cycle), and got my 30% back. Hard to imagine that we're still doing this kind of thing in 2007, in languages several layers removed from the microcode. At least the continued usefulness of some understanding of the actual hardware you're running on means SQL-wielding managers aren't likely to make us all redundant in the near future!
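In case it's not obvious, the transformation looks something like this (a C++ sketch of what I did in the Java; the function name is mine):

```cpp
// Instead of testing (i % n) == 0 every iteration, count down from n and
// reset: one cheap decrement-and-compare per iteration, no division at all.
int f_countdown(int n) {
    int j = 0;
    int countdown = 1;  // fires on i == 0, just as (0 % n) == 0 does
    for (int i = 0; i < 200 * 1024 * 1024; ++i) {
        if (--countdown == 0) {
            ++j;
            countdown = n;
        }
    }
    return j;
}
```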