The Apple spyPhone (contd.)

It’s fascinating to see what happened overnight on this story. Firstly, lots of people began posting maps of where their iPhones had been, which is a clear demonstration of the First Law of Technology — which says that if something can be done then it will be done, irrespective of whether it makes sense or not. Personally I’ve always been baffled by how untroubled geeks are about revealing location data. I remember one dinner party of ours which was completely ruined when one guest, a friend who had been GPS-tracking his location for three years, was asked by another guest, the late, lamented Karen Spärck Jones, if he wasn’t bothered by the way this compromised his privacy. He replied in the negative because he had “nothing to hide”. There then followed two hours of vigorous argument which touched on, among other things, the naivete of geeks, the way the punctiliousness of Dutch bureaucracy made it easy to round up Dutch Jews after the Germans invaded Holland in the Second World War, the uses to which location data might be put by unsavoury characters and governments, Karl Popper and the Open Society, etc. etc.

Michael Dales has a couple of interesting blog posts (here and here) about the iPhone data-gathering facility. And, like all geeks, he’s totally unsurprised by the whole affair.

It seems rather than worry geeks, most of us find the data amazing. I suspect that’s because most of us know that this data could be got otherhow anyway – all it really shows is where your phone has been, and the phone operators know that anyway – and I typically trust them a lot less than I trust Apple (not that I think Apple is angelic, it’s a shareholder owned company, but I generally have a more antagonistic relationship with phone companies than I do Apple). So the fact the data resides on my phone is handy – if I was worried about people tracking where my phone goes then I’d never turn it on.

Michael also sees positive angles to this.

If you have a Mac and want to see where your iPhone has been (and then, like most people, post it to the Internet :) then you can get the tool to do so here. What I think is potentially really exciting is what you can do with the data now that you have access to it, not just your phone company. Quentin has already had the idea that you could use it to geotag your photos, which would be awesome, but how about things like carbon calculators, trip reports, and so on?
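
For what it’s worth, Quentin’s geotagging idea is easy to sketch: match each photo’s capture time against the nearest fix in the location history and borrow that fix’s coordinates. The Python below is a minimal illustration with fabricated data — actually writing the coordinates into a photo’s EXIF is left to whichever EXIF library you prefer.

```python
import bisect
from datetime import datetime

def nearest_fix(fixes, photo_time):
    """Return the (timestamp, lat, lon) fix closest in time to photo_time.

    `fixes` is a list of (datetime, latitude, longitude) tuples sorted by
    time, e.g. extracted from the phone's location history.
    """
    times = [t for t, _, _ in fixes]
    i = bisect.bisect_left(times, photo_time)
    candidates = fixes[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda f: abs((f[0] - photo_time).total_seconds()))

# Fabricated example data: two fixes and one photo timestamp.
fixes = [
    (datetime(2011, 4, 20, 9, 15), 52.2053, 0.1218),    # Cambridge
    (datetime(2011, 4, 20, 13, 40), 51.7520, -1.2577),  # Oxford
]
photo_taken = datetime(2011, 4, 20, 13, 10)
print(nearest_fix(fixes, photo_taken))  # -> the Oxford fix
```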

This post attracted a useful comment from ScaredyCat which gets to the heart of the problem:

The brouhaha isn’t just about the data being stored, it’s about the data being stored unencrypted. I love data like any geek but you do have to wonder why the data is being collected in the first place.

Precisely. What the data-logging and storage facility means is that your iPhone is potentially a source of useful confidential information for people who would have no hope of obtaining that information legally from a mobile phone network.

This point is neatly encapsulated by Rory Cellan-Jones in his blog post:

This obviously has intriguing implications for anyone who possesses one of these devices. What, for instance, if you had told your wife that you were off on a business trip – when in fact you had slipped off to the slopes with some mates – and she then managed to track down your iPhone location file? (I should stress that this is an imaginary scenario).

For divorce lawyers, particularly in the United States, the first question when taking on a new client could be “does your spouse own an iPhone?” And law enforcement agencies will also be taking a great interest in the iPhones – or iPads – of anyone they are tracking.

The other interesting thing about the spyPhone story is that, according to Alex Levinson, it’s an old story. He says that

Back in 2010 when the iPad first came out, I did a research project at the Rochester Institute of Technology on Apple forensics. Professor Bill Stackpole of the Networking, Security, & Systems Administration Department was teaching a computer forensics course and pitched the idea of doing forensic analysis on my recently acquired iPad. We purchased a few utilities and began studying the various components of apple mobile devices. We discovered three things:

* Third Party Application data can contain usernames, passwords, and interpersonal communication data, usually in plain text.
* Apple configurations and logs contain lots of network and communication related data.
* Geolocational Artifacts were one of the single most important forensic vectors found on these devices.

After presenting that project to Professor Stackpole’s forensic class, I began work last summer with Sean Morrissey, managing director of Katana Forensics, on its iOS Forensic Software utility, Lantern. While developing with Sean, I continued to work with Professor Stackpole on an academic paper outlining our findings in the Apple Forensic field. This paper was accepted for publication into the Hawaii International Conference for System Sciences 44 and is now an IEEE Publication. I presented on it in January in Hawaii and during my presentation discussed consolidated.db and its contents with my audience – my paper was written prior to iOS 4 coming out, but my presentation was updated to include iOS 4 artifacts.

Thanks to David Smith for passing on the link to the Levinson post.

The Apple spyPhone

Oxford to Cambridge and then London from Alasdair Allan on Vimeo.

Fascinating video of location data routinely and covertly gathered by an iPhone belonging to researcher Alasdair Allan. I came on it via an intriguing Guardian story which reported that

Security researchers have discovered that Apple’s iPhone keeps track of where you go – and saves every detail of it to a secret file on the device which is then copied to the owner’s computer when the two are synchronised.

The file contains the latitude and longitude of the phone’s recorded coordinates along with a timestamp, meaning that anyone who stole the phone or the computer could discover details about the owner’s movements using a simple program.

For some phones, there could be almost a year’s worth of data stored, as the recording of data seems to have started with Apple’s iOS 4 update to the phone’s operating system, released in June 2010.

“Apple has made it possible for almost anybody – a jealous spouse, a private detective – with access to your phone or computer to get detailed information about where you’ve been,” said Pete Warden, one of the researchers.

Only the iPhone records the user’s location in this way, say Warden and Alasdair Allan, the data scientists who discovered the file and are presenting their findings at the Where 2.0 conference in San Francisco on Wednesday. “Alasdair has looked for similar tracking code in [Google’s] Android phones and couldn’t find any,” said Warden. “We haven’t come across any instances of other phone manufacturers doing this.”

Lots more information (plus a downloadable open source application that enables you to locate the file containing your location data history) on Pete Warden’s site. He’s got some helpful FAQs, including these:

What can I do to remove this data?

This database of your locations is stored on your iPhone as well as in any of the automatic backups that are made when you sync it with iTunes. One thing that will help is choosing encrypted backups, since that will prevent other users or programs on your machine from viewing the data, but there will still be a copy on your device.

Why is Apple collecting this information?

It’s unclear. One guess might be that they have new features in mind that require a history of your location, but that’s pure speculation. The fact that it’s transferred across devices when you restore or migrate is evidence the data-gathering isn’t accidental.

Is Apple storing this information elsewhere?

There’s no evidence that it’s being transmitted beyond your device and any machines you sync it with.

What’s so bad about this?

The most immediate problem is that this data is stored in an easily-readable form on your machine. Any other program you run or user with access to your machine can look through it.
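
To see what “easily-readable” means in practice, here’s a minimal Python sketch of the sort of query any program or user on the machine could run against a copy of the file. The file name (consolidated.db) and a CellLocation table holding Timestamp, Latitude and Longitude columns follow Warden and Allan’s description; treat the exact schema, and the use of Apple’s 2001-based timestamp epoch, as assumptions that may differ between iOS versions.

```python
import sqlite3
from datetime import datetime, timedelta

# Path to a copy of the file pulled from an (unencrypted) iTunes backup.
# The name and schema below follow Warden and Allan's description and may
# vary between iOS versions.
DB_PATH = "consolidated.db"

# iOS stores timestamps as seconds since 2001-01-01 ("Mac absolute time").
APPLE_EPOCH = datetime(2001, 1, 1)

conn = sqlite3.connect(DB_PATH)
rows = conn.execute(
    "SELECT Timestamp, Latitude, Longitude FROM CellLocation "
    "ORDER BY Timestamp DESC LIMIT 10"
)
for ts, lat, lon in rows:
    when = APPLE_EPOCH + timedelta(seconds=ts)
    print(f"{when:%Y-%m-%d %H:%M}  {lat:.4f}, {lon:.4f}")
conn.close()
```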

It’s interesting that the mobile operators also keep this data, but the cops have to get a special order to access it. (Which they often do, as we find out in evidence to murder trials, for example.) But anyone who gets access to an iPhone (or, it turns out, a 3G-enabled iPad) can get it without going through any legal palaver.

Interesting, n’est-ce pas?

(Thanks to Duncan Thomas for correcting my French.)

Federated social networking

There’s a useful piece on the Electronic Frontier Foundation’s site about federated networking, seen as a way of counteracting the centralising power of outfits like Facebook.

To understand how federated social networking would be an improvement, we should understand how online social networking essentially works today. Right now, when you sign up for Facebook, you get a Facebook profile, which is a collection of data about you that lives on Facebook's servers. You can add words and pictures to your Facebook profile, and your Facebook profile can have a variety of relationships — it can be friends with other Facebook profiles, it can be a ‘fan’ of another Facebook page, or ‘like’ a web page containing a Facebook widget. Crucially, if you want to interact meaningfully with anyone else’s Facebook profile or any application offered on the Facebook platform, you have to sign up with Facebook and conduct your online social networking on Facebook’s servers, and according to Facebook’s rules and preferences. (You can replace “Facebook” with “Orkut,” “LinkedIn,” “Twitter,” and essentially tell the same story.)

We’ve all watched the dark side of this arrangement unfold, building a sad catalog of the consequences of turning over data to a social networking company. The social networking company might cause you to overshare information that you don’t want shared, or might disclose your information to advertisers or the government, harming your privacy. And conversely, the company may force you to undershare by deleting your profile, or censoring information that you want to see make it out into the world, ultimately curbing your freedom of expression online. And because the company may do this, governments might attempt to require them to do it, sometimes even without asking or informing the end-user.

How does it work?

To join a federated social network, you’ll be able to choose from an array of “profile providers,” just like you can choose an email provider. You will even be able to set up your own server and provide your social networking profile yourself. And in a federated social network, any profile can talk to another profile — even if it’s on a different server.

Imagine the Web as an open sea. To use Facebook, you have to immigrate to Facebook Island and get a Facebook House, in a land with a single ruler. But the distributed social networks being developed now will allow you to choose from many islands, connected to one another by bridges, and you can even have the option of building your own island and your own bridges.

Why is this important?

The beauty of the Internet so far is that its greatest ideas tend to put as much control as possible in the hands of individual users. And online social networking is a powerful tool for the many who want a service that compiles all the digital stuff shared by family, friends, and colleagues. But so far, social networking has grown in a way that concentrates control over that information — status posts, photos, and even your relationships themselves — with individual companies.

Distributed social networks represent a model that can plausibly return control and choice to the hands of the Internet user. If this seems mundane, consider that informed citizens worldwide are using online social networking tools to share vital information about how to improve their communities and their governments — and that in some places, the consequences if you’re discovered to be doing so are arrest, torture, or imprisonment. With more user control, diversity, and innovation, individuals speaking out under oppressive governments could conduct activism on social networking sites while also having a choice of services and providers that may be better equipped to protect their security and anonymity.
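
The mechanics the EFF describes — profiles hosted by different providers exchanging messages the way mail servers do — can be sketched in a few lines. The toy Python model below is purely illustrative and implements no real federation protocol (Diaspora, OStatus and the like are not shown here): each “provider” keeps inboxes for its users, and delivery simply routes on the domain part of an email-style address.

```python
# A toy model of federation: two independent "profile providers" whose users
# can message each other because delivery is routed by the domain part of an
# email-style address. Purely illustrative; no real protocol is implemented.

class Provider:
    def __init__(self, domain):
        self.domain = domain
        self.inboxes = {}  # username -> list of received messages

    def register(self, username):
        self.inboxes[username] = []

    def receive(self, recipient, message):
        self.inboxes[recipient].append(message)

# A directory mapping domains to providers stands in for DNS/discovery.
directory = {}

def send(sender_addr, recipient_addr, text):
    user, domain = recipient_addr.split("@")
    directory[domain].receive(user, {"from": sender_addr, "text": text})

home = Provider("myserver.example")    # your own island
big = Provider("bignetwork.example")   # somebody else's island
directory[home.domain] = home
directory[big.domain] = big

home.register("alice")
big.register("bob")

# Alice's self-hosted profile talks to Bob's profile on a different server.
send("alice@myserver.example", "bob@bignetwork.example", "Hello across servers!")
print(big.inboxes["bob"])
```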

Porn, cash and the slippery slope to the National Security State

One of the most unsettling experiences of the last decade has been watching Western democracies sleepwalking into a national security nightmare. Each incremental step towards total surveillance follows the same script. It goes like this: first, a new security ‘threat’ is uncovered, revealed or hypothesised; then a technical ‘solution’ to the new threat is proposed, trialled (sometimes) and then implemented — usually at formidable cost to the public; finally, the new ‘solution’ proves inadequate. But instead of investigating whether it might have been misguided in the first place, a new, even more intrusive, ‘solution’ is proposed and implemented.

In this way we went from verbal questioning to pat-down searches at airports, and thence to x-ray scanning of cabin-baggage, to having to submit laptops to separate scanning (including, I gather, examination of hard-disk files in some cases), to having to take off our shoes, to having all cosmetic fluids (including toothpaste) inspected, and — most recently — to back-scatter x-ray scanning which reveals the shape of passengers’ breasts and genitals. It may be that we will get to the point where only passengers willing to strip naked are allowed to board a plane. The result: a mode of travel that was sometimes pleasant and usually convenient has been transformed into a deeply time-consuming, stressful and unpleasant ordeal.

The rationale in all cases is the same: these measures are necessary to thwart a threat that is self-evidently awful, and in that sense the measures are for the public good. We are all agreed, are we not, that suicidal terrorism is a bad thing and so any measure deemed necessary to prevent it must be good? Likewise, we all agree that street crime and disorder is an evil, so CCTV cameras must be a good thing, mustn’t they? So we now have countries like Britain where no resident of an urban area is ever out of sight of a camera. And of course we all abhor child pornography and paedophilia, so we couldn’t possibly object to the Web filtering and packet-sniffing needed to detect and block it, right? A similar argument is used in relation to file-sharing and copyright infringement: this is asserted to be ‘theft’, and since we’re all against theft then any legislative measures forced on ISPs to ‘stop theft’ must be justified. And so on.

So each security initiative has a local justification which is held to be self-evidently obvious. But the aggregate of all these localised ‘solutions’ has a terrifying direction of travel — towards a total surveillance society, a real national security state. And anyone who expresses reservations or objections is invariably rebuffed with the trope that people who have nothing to hide have nothing to fear from these measures.

In the UK, a novel variation on this philosophy has just surfaced in Conservative (capital C) political circles. A right-wing Tory MP who is obsessed with the threat of Internet pornography has been touting the idea that broadband customers who do not want their ISPs to block access to pornography sites should have to register that fact with the ISP. (This is to ‘protect’ children, of course, in case the poor dears should mistype a search term and see images of unspeakable acts in progress.) Last Sunday a British newspaper reported that the communications minister, Ed Vaizey, is concerned about the availability of pornography and says he would quite like ISPs to do something about it for him. According to The Register, “he plans to call the major players to a meeting next month to discuss measures, including the potential for filters that would require those who do want XXX material to opt their connection out”.

The Register doesn’t take this terribly seriously, because it’s convinced that Vaizey is too shrewd to get dragged into the filtering mess that afflicted the Australian government. Maybe he is, but suppose he finds himself unable to hold back the tide of backbench wrath towards the evil Internet, with its WikiLeaks and porn and all. The implicit logic of the approach would fit neatly with everything we’ve seen so far. First of all, the objective is self-evidently ‘good’ — to protect children from pornography. Secondly, we’re not being illiberal — if you want to allow porn all you have to do is to register that fact with your ISP. What could be fairer than that?

But then consider the direction of travel. What if some future government decides that children should not be exposed to, say, the political propaganda of the British National Party? After all, they’re a nasty pack of xenophobic racists. And then there are the animal rights activists — nasty fanatics who put superglue in butchers’ shop doors on Christmas Eve. Why should they enjoy “the oxygen of publicity”? And then there are… Well, you get the point.

Stowe Boyd has an interesting post about another bright security wheeze which has really sinister long-term implications. Since terrorists and drug barons use cash, why not do away with the stuff and switch over to electronic money instead?

In a cashless economy, insurgents’ and terrorists’ electronic payments would generate audit trails that could be screened by data mining software; every payment and transfer would yield a treasure trove of information about their agents, their locations and their intentions. This would pose similar challenges for criminals.

Who would such a system benefit, asks Boyd?

Not the part-time sex worker, trying to make ends meet in a down economy. Not the bellman at the airport, whose tips might disappear after the transition to cards. Not the homeless guy I gave $2 to the other day, or the busker playing guitar in the train station. Or the Green Peace folks collecting coins at the park.

The ones that benefit are those selling the cards and the readers. And the policy-makers who want to see the flow of cash to find — supposedly — drug lords and terrorists, but secretly want to know everything about everybody.

But this is the argument for pervasive surveillance again. In the name of security and safety, they say we should all accept the intrusion of the government into our private lives so that the state can be protected from its enemies. After all, they say, if we aren’t doing anything illegal, why should we care? What have we got to hide?

But we have the right to privacy in our doings. We don’t have to say why we want privacy: it is our right.

And the shadowy doings at the margins of people’s lives are exactly the point of privacy. The man funneling money to a child born to his mistress without his wife’s knowledge, or a woman loaning money to her brother without her husband knowing: they want anonymous cash.

Boyd thinks that cash is a prerequisite of a free society, and he’s right.

“Cash”, he says,

is not a metaphor for freedom, it is a requirement of freedom. A strong society that accepts human nature without moralizing will always have anonymous cash. Only totalitarian governments — where everything not expressly required is illegal — would want to monitor the flow of every cent.


Facebook’s über-communications platform

Dan Gillmor takes a pretty sceptical view of Facebook’s new messaging system.

In a feature that Facebook thinks is great — and will thrill law enforcement and divorce lawyers — every conversation will be captured for posterity, unless users delete specific messages or entire conversations. Do you assume that the people with whom you communicate are saving every text message and IM? You’d better.

That’s only one of the things that makes me cautious about the service. Facebook’s privacy record is spotty enough already; trusting the company to archive and protect my communications? Not so likely.

Om Malik is much more complimentary:

Facebook has not only reinvented the idea of the inbox, but it has gone one better: it has done so by moving away from the traditional idea of email. One of the reasons why Yahoo and Google Mail have struggled to become entirely social is because it is hard to graft a social hierarchy on top of tools of communication. If you look at Gmail – it has most of the elements that are available in the new social inbox, but they are all discrete elements and give the appearance of many different silos, being cobbled together.

Facebook did the exact opposite – it imagined email only as a subset of what is in reality communication. SMS, Chat, Facebook messages, status updates and email is how Zuckerberg sees the world. With the address book under its control, Facebook is now looking to become the “interaction hub” of our post-broadband, always-on lives. Having trained nearly 350 million people to use its stream-based, simple inbox, Facebook has reinvented the “communication” experience.

The deadpan NYT report on the new initiative is here.

We’re all hamsters now

Image used under Creative Commons licence from Flickr user: www.flickr.com/photos/cryztalvisions/2422753682/

Dave Winer has a lovely blog post in which he explains why Facebook (and Google and all those other ‘free’ services) are effectively hamster cages for humans.

They make a wide variety of colorful and fun cages for hamsters that are designed to keep the hamster, and their human owners, entertained for hours. When you get tired of one, you can buy another. It looks great until you realize one day that you can’t get out! That’s the whole point of a cage.

Remember how they used to say: “If it sounds too good to be true then it probably is?” They still say it. :-)

Another one: “There’s no such thing as a free lunch.” Exactly.

When they say you get to use their social network for free, look for the hidden price. It’s there. They’re listening and watching. It’s pretty and colorful and endlessly fun for you and your human owner.

Or, as one of the commenters on Dave’s post put it: “If you’re not paying for it, you’re not the customer; you are the product being sold.”

The £500m question

This morning’s Observer column.

The news that, according to the national security review at least, cyber attack comes second only to terrorism as the gravest security threat facing the nation will have come as a great surprise to most citizens. We are conscious of the annoyances of malware, viruses, worms, spam and phishing, but for most these are just minor irritations, not threats to the nation's survival.

Yet the other day we had the foreign secretary gravely intoning why, in the midst of the most savage spending cuts in living memory, it is suddenly necessary to give an extra £500m to GCHQ to protect us against nemesis in cyberspace. At the same time, in America, we see the Pentagon setting up a whole new cyber command, USCybercom, with all the usual paraphernalia and awash with funding.

What, you might ask, is going on?

There seem to be two broad answers to the question…

A taxonomy of social networking data

Bruce Schneier has come up with what seems to me to be a really useful taxonomy — first presented at the Internet Governance Forum meeting last November, and again — revised — at an OECD workshop on the role of Internet intermediaries in June.

1. Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.

2. Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.

3. Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it — another user does.

4. Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.

5. Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.

6. Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.
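
One way to make the taxonomy concrete is to tag every item a service holds with its Schneier category, since who controls an item — and who can delete it — differs by category. The Python names below are invented purely for illustration:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Category(Enum):
    SERVICE = auto()     # data handed over in order to use the site
    DISCLOSED = auto()   # what you post on your own pages
    ENTRUSTED = auto()   # what you post on other people's pages
    INCIDENTAL = auto()  # what other people post about you
    BEHAVIORAL = auto()  # what the site records about your habits
    DERIVED = auto()     # what the site infers from all the rest

@dataclass
class Item:
    subject: str      # whom the data is about
    controller: str   # who can edit or delete it
    category: Category
    payload: str

# Incidental data: about Alice, but controlled by Bob — she never created it
# and cannot delete it.
photo_of_alice = Item(subject="alice", controller="bob",
                      category=Category.INCIDENTAL, payload="party_photo.jpg")
print(photo_of_alice.subject != photo_of_alice.controller)  # True
```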

How the Grid can ruin your alibi

This is both creepy and fascinating — from The Register.

ENF [electrical network frequency] analysis relies on frequency variations in the electricity supplied by the National Grid. Digital devices such as CCTV recorders, telephone recorders and camcorders that are plugged in to or located near the mains pick up these deviations in the power supply, which are caused by peaks and troughs in demand. Battery-powered devices are not immune to ENF analysis, as grid frequency variations can be induced in their recordings from a distance.

At the Metropolitan Police's digital forensics lab in Penge, south London, scientists have created a database that has recorded these deviations once every one and a half seconds for the last five years. Over a short period they form a unique signature of the electrical frequency at that time, which research has shown is the same in London as it is in Glasgow.

On receipt of recordings made by the police or public, the scientists are able to detect the variations in mains electricity occurring at the time the recording was made. This signature is extracted and automatically matched against their ENF database, which indicates when it was made.
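
The matching step is essentially a correlation search: slide the frequency signature extracted from the recording along the reference log and find the offset where the two agree best. The Met’s actual method isn’t described in detail, so the NumPy sketch below is only schematic, assuming both signals are sampled at the same interval (one value every one and a half seconds, as described above) and using synthetic data.

```python
import numpy as np

def best_match_offset(reference, sample):
    """Return the index into `reference` where `sample` fits best.

    Both arrays hold mains-frequency deviations (Hz) sampled at the same
    interval. We compare the de-meaned sample against every window of the
    reference and pick the offset with the highest normalised correlation.
    """
    sample = sample - sample.mean()
    n = len(sample)
    scores = []
    for i in range(len(reference) - n + 1):
        window = reference[i:i + n] - reference[i:i + n].mean()
        denom = np.linalg.norm(window) * np.linalg.norm(sample)
        scores.append(window @ sample / denom if denom else 0.0)
    return int(np.argmax(scores))

# Synthetic demonstration: a long "grid log" and a noisy clip taken from it.
rng = np.random.default_rng(0)
reference = 50 + 0.02 * rng.standard_normal(4000)                    # nominal 50 Hz grid
clip = reference[1234:1434] + 0.001 * rng.standard_normal(200)       # a recording made "at" offset 1234
print(best_match_offset(reference, clip))  # ~1234, i.e. when the clip was recorded
```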

The technique can also uncover covert editing – or rule it out, as in the recent murder trial – because a spliced recording will register more than one ENF match.

The Met emphasised that ENF analysis is in its infancy as a practical tool, having been used in only around five cases to date. Proponents are optimistic about its uses in counter-terrorism investigations, for example to establish when suspects made reconnaissance videos of their targets, or to uncover editing in propaganda videos.

Dr Alan Cooper, the leader of the Met’s ENF project, said the technique is proving invaluable in serious cases, where audio and video evidence and its authenticity is often questioned.