I tend to ignore articles on security because I don't have a lot of respect for the security companies. As far as I can tell, most security stories are credulous regurgitations of these companies' misleading press releases. Their vested interest in FUD, their conflict of interests with their own customers, their alarmist and uninformative tendencies: all these things make it hard to take them seriously.
Just this last week, one or other of this motley crew was claiming "Windows more secure than Linux". The numbers were blatant nonsense — counting each Linux vulnerability once per distribution, for example — and I'm not interested in that non-story.
In amongst the usual stream of commercial effluent, I found myself reading a couple of interesting papers on phishing.
If you're anything like me (and I hope you're not) you receive several hundred spam messages a day. For my home account, one of the mod3 Solaris zone hosting dudes set up a greylisting system that pretty much squashed the problem. Work uses a commercial filtering system that doesn't work nearly as well, and doesn't even let me say "drop anything in any non-European language", which would be a very effective work-around for me. I'll admit to having been nervous about the greylisting idea ("but won't it delay genuine mail?"), but I've only been inconvenienced once so far, and that wasn't for long. I waste far more time wading through the obvious spam at work every day than I did on the one occasion I had to wait for a web site to retry its confirmation mail.
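For anyone unfamiliar with greylisting, here's a minimal sketch of the idea — illustrative only, not the actual system my host set up: temporarily reject mail from an unknown (client IP, sender, recipient) triplet, and accept it only when the sending server retries after a delay. Legitimate mail servers always retry; most spamware never does.

```python
import time

GREYLIST_DELAY = 300  # seconds a new triplet must wait before acceptance

class Greylist:
    """Toy greylist: defer unknown triplets, accept patient retries."""

    def __init__(self, delay=GREYLIST_DELAY, now=time.time):
        self.delay = delay
        self.now = now          # injectable clock, handy for testing
        self.first_seen = {}    # triplet -> timestamp of first attempt

    def check(self, client_ip, sender, recipient):
        """Return 'accept', or 'defer' (an SMTP 450 'try again later')."""
        triplet = (client_ip, sender, recipient)
        t = self.now()
        if triplet not in self.first_seen:
            # First time we've seen this combination: remember it and defer.
            self.first_seen[triplet] = t
            return 'defer'
        if t - self.first_seen[triplet] >= self.delay:
            # A retry after the delay looks like a real mail server.
            return 'accept'
        return 'defer'
```

A real implementation would also expire old triplets and whitelist known-good servers, but even this crude version captures why the only cost to a genuine sender is one delayed retry.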
Anyway, given the amount of spam that gets through at work, I see quite a lot of phishing attempts. Some would be worryingly convincing if I had any connection with the alleged institutions, many are fairly obviously bogus if you give them more than a second's glance, and some are laughably bad. That last class has always interested me the most. My assumption was always that such mails wouldn't fool anybody, leaving me wondering why the prospective phisher didn't try a bit harder.
Now I'm starting to wonder if the criminals aren't just being clever, expending no more effort than necessary to fool the foolable.
Reading Why Phishing Works, I was shocked by the lack of acumen displayed by the experiment's subjects. The sample size was, I felt, small: only 22 people. I'm also not sure how representative of the general public university staff and students are. All the same...
Even if you don't care about security, if you're a programmer it's worth reading the paper just to see how far out of touch with technology many users are. In particular, they have no idea what's easy to fake and what's hard to fake.
The fact that text and graphics inside the page are trusted more than text and graphics in the browser's own UI shows you just how much the disconnect between the user's model and the system's model can cost.
It's also interesting to see how much of the browser people just ignore. I was thanked for adding a "new" feature to Terminator the other week when all I'd done was add a tool tip to draw attention to a feature that had been there much longer. That was understandable because the feature was otherwise invisible and only enjoyed by people who had just assumed it would be there. This paper, though, suggests that browser features that you and I probably consider highly visible just aren't seen. Or they're seen and misunderstood, which is potentially worse when they're security features.
Not all of the problems identified in the paper have anything to do with technology, though. Except insofar as they suggest that people are bad at transferring real-world common sense to the "virtual" world, or bad at realizing that they're the same world.
I wonder if the woman who "will click on any type of link at work where she has virus protection and system administrators to fix the machine, but never at home" would agree to be beaten by said system administrators with baseball bats in the grounds of a local hospital. Presumably that would be fine, because the hospital can fix things up afterwards? So no harm done, right?
And there's the woman who types in her username and password to see if a site's genuine. Presumably she'd be happy to give me her life savings to see whether I can be trusted to return them?
I do hope those two are now sorted out. But I know they aren't, and I know there are millions like them, sharing LANs (or even machines) with us.
I showed the paper to my girlfriend. She didn't know about https: versus http:, didn't know there was a padlock icon anywhere (and I'll admit that I had to look for it in Safari; I'll be switching to Firefox completely as soon as it has spell checking), or what the padlock means, and definitely didn't know anything about certificates. It had never really occurred to me before that there are millions of people out there typing their financial details into HTML forms without the vaguest idea of which end of the firestick the boom comes out.
We've accidentally created a whole race of virtual autists, devoid of their usual ability to infer trustworthiness.
If you think that's an over-statement, read the paper and look at the cues the participants were using. In ignorance of the high-tech stuff the browser was offering, they were falling back to tried-and-tested visual cues, despite the fact that it's trivial to copy any image, text, or video on-line.
The authors have a suggestion, if you're not too depressed to keep reading. The Battle Against Phishing: Dynamic Security Skins describes a way of improving the browser's security indicators, but I didn't really get how it's supposed to address what seems to be the more fundamental problem: people just don't know what they're looking for. If Firefox's yellow location bar is as invisible as it appears to be, is that battle not already lost?