If crooks can communicate securely, why can't we?
If we learned one thing from the drama surrounding Edward Snowden and his revelations about the National Security Agency, it's that Russia is still the best place to go if you have US secrets to peddle.
But if we learned another, it's that governments are able to access far more of our personal information than we previously thought.
Even before we heard about PRISM and the rest, we were on notice that what we keep on our phones can be vulnerable. Way back in 2005, Paris Hilton (remember her?) had the contents of her T-Mobile Sidekick (bet you don't remember them) uploaded all over the internet.
It's happened to a litany of celebrities in the years since - Paris always was at the cutting edge of fashion.
Smartphones are portable grenades whose victims are likely to be ourselves. One ironically named US politician has destroyed his career on multiple occasions using nothing more than the phone in his hand.
To the extent that any of us weigh up the risks at all, we seem to have decided that the fun and convenience of smartphones outweigh them. And so, we keep our lives on our phones. They're full of our photos, emails, documents, financial details and passwords. Accessing our smartphones is as close as it's possible to get to accessing our souls. And we trust that our gadgets are as secure as our own inner thoughts.
This is precisely why the FBI is so eager to get Apple's help with accessing an iPhone used in the recent San Bernardino terrorist attacks. They believe it contains information that will help law enforcement officials uncover the rest of the attackers' network - an admirable goal, surely.
The right balance between privacy and law enforcement is difficult to strike. I suspect most of us would like the police to be able to protect us against "bad guys", but expect innocent citizens to be protected from rampant, speculative snooping.
Of course in practice, it isn't that simple - or so Apple says. CEO Tim Cook has released a public letter arguing that if they create a backdoor of this nature, it will be like opening Pandora's box - by which I mean the mythical figure, not the music streaming service.
It appears that the FBI wants Apple to disable two of the most potent security restrictions that the iPhone and similar devices possess. Firstly, they want to be able to make unlimited attempts to guess the phone's PIN without the phone slowing down between wrong attempts, as it's currently programmed to do in order to prevent "brute force" attacks. Secondly, they want to deactivate the feature whereby the phone gets wiped after 10 wrong attempts.
These changes would be made, it's important to note, only for this one phone, which would have a specially vulnerable version of the operating system uploaded to it. But Apple argues that if this is rendered possible even in one special instance, it will be possible for other hackers too.
On reading Cook's letter, I was sceptical. He suggests that the knowledge of how to do what the FBI wants does not currently exist, and that once it's invented, it can't be uninvented, so to speak: all phones will be vulnerable henceforth.
That's an assertion that seems worth querying. I'm no programmer, but surely the restrictions that currently exist in iOS amount to a few lines of code that could easily be modified? Instead of wiping the phone after 10 wrong attempts, couldn't that number be changed to 1,000,000? And couldn't the slowdown between incorrect guesses be shortened to, say, 10 milliseconds instead of 10 minutes?
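To make the intuition concrete: a 4-digit PIN has only 10,000 possible values, so the delay and the wipe rule are doing almost all of the security work. Below is a toy sketch in Swift of the kind of logic I imagine is involved - the names, numbers and structure are my own illustrative guesses, not Apple's actual code, which lives in far more protected territory than an app-level snippet.

```swift
import Foundation

// A hypothetical passcode-attempt policy. The wipe threshold and the
// delay schedule here are illustrative guesses, not Apple's real values.
struct PasscodeGuard {
    static let wipeThreshold = 10   // the "10 wrong attempts" rule

    private(set) var failedAttempts = 0

    // Escalating delay between guesses - the "slowing down" that
    // frustrates brute-force attacks.
    func delayBeforeNextAttempt() -> TimeInterval {
        switch failedAttempts {
        case 0..<5: return 0            // no penalty for the first few
        case 5:     return 60           // 1 minute
        case 6:     return 5 * 60       // 5 minutes
        case 7, 8:  return 15 * 60      // 15 minutes
        default:    return 60 * 60      // 1 hour
        }
    }

    // Returns true when the device should be wiped.
    mutating func recordFailedAttempt() -> Bool {
        failedAttempts += 1
        return failedAttempts >= Self.wipeThreshold
    }
}
```

If it really were just a couple of constants like these, changing 10 to 1,000,000 would indeed be trivial. The catch, as I understand Cook's argument, is that Apple would have to build and sign a whole modified version of iOS to deliver even that tiny change, and it's that modified build which, once created, can't be uncreated.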
There are probably complexities I'm missing, but it doesn't at first glance seem to be a request that involves radically re-engineering the phone or inventing some breakthrough. If anything, the FBI's request seems a lot less complicated than what the jail-breaking community routinely does with Apple's operating system. (Perhaps they'll help if Apple won't?)
But while I suspect it might be possible to hack into this one phone without compromising everything, Cook's effort to protect his users' privacy seems admirable in general terms.
Nobody wants to use a hackable phone. I'm sure everyone who buys one of his products would urge him to make them more, not less, secure. And I'm sure that the many members of the law enforcement community who use them feel the same way.
Cook is more convincing about the impossibility of creating a device that can be hacked only by one trusted set of users. If the FBI can get into everyone's smartphones, then others will be able to as well. And indeed, if Apple can theoretically hack the San Bernardino phone today, others probably can too.
More broadly, Apple seems to want to make devices that nobody can get into - not them, and not law enforcement. (Their Touch ID devices are apparently much harder to break into than the San Bernardino one, which is an older model.) If they're secure enough, no subpoena could compel Apple to breach its users' privacy, just as someone who installs an alarm for you shouldn't retain a code of their own.
As a user, this is extremely reassuring. As a member of the public, let's face it, it's somewhat scary. So where should the balance be struck?
It seems likely that the average terrorist already uses smartphone apps that are far more secure than the hardware itself. Apps like Wickr (used by Malcolm Turnbull) and Tor (funded in part by the US State Department) already permit what are believed to be completely secure communications, while Osama bin Laden cleverly evaded the NSA's dragnet for many years by sending couriers with physical USB drives.
So while human error is already a factor and the odd criminal might stuff up and use an insecure smartphone on occasion, the really canny crooks are already using unhackable systems. Otherwise the US would be able to send in their drones about five minutes after the average ISIS lackey sent a text.
And while we know that there are terrorists out there exchanging messages that are a threat to our wellbeing, the reality is that the average person is far more likely to come into contact with hackers who want to steal their personal information. Our email inboxes are full of messages from scammers, and they fool millions of people each year.
So if the average terrorist is already using extremely secure communications platforms that are far less hackable than the devices they run on, and the average non-terrorist is at risk from hackers and identity thieves, I believe it's reasonable for Apple and other tech companies to continue to make devices that are as secure as possible. In terms of threats to the average person, it's probably the lesser of two evils.
There's also the question of whether we can always trust law enforcement agencies to do the right thing. History shows that while the vast majority are of course dedicated, admirable people, any organisation is subject to corruption. Should the dodgiest members of the law enforcement and intelligence communities really be entrusted with a back door to everything sitting on our telephones? And if our spies can get in, can't all spies?
It's tempting to voluntarily yield our civil liberties whenever we hear the word 'terrorism'. But while I'm a little sceptical of Cook's argument that this one exception would irrevocably open the floodgates, our cybersecurity is an extremely precious thing, and it's something that our law enforcement agencies are also supposed to be protecting.
Since criminals already seem to use state-of-the-art encryption that goes far beyond anything built into our day-to-day hardware, it seems reasonable that the rest of us are allowed to use secure devices as well.