Why surveillance technology is usually better than we realise

This morning’s Observer column:

The images of the moon’s surface coming down from the orbiters were of astonishingly high resolution, good enough to blow up to 40ft x 54ft pictures. When Nasa engineers initially stitched the images together they had to hang them in a church to view them. Eventually, they found a hangar where they could be laid on the ground for astronauts to walk on them in stockinged feet in order to search for suitable landing sites.

For decades, nobody outside of Nasa and the US military knew how good these images were. The few that were released for public consumption were heavily degraded and fuzzy. Why? Because the cameras used in the lunar orbiters were derivatives of the cameras used in high-altitude US aerial reconnaissance planes and satellites and the Pentagon didn’t want the Soviets to know the level of detail that could be derived from them.

In a way, we shouldn’t be surprised by this revelation. It’s an old story: powerful states have often possessed more sophisticated surveillance technology than their adversaries – or their citizens – knew or suspected…

Read on

Tech-driven wealth is the new aphrodisiac

This morning’s Observer column:

It’s a quintessential Silicon Valley story. A smart, attractive 19-year-old American woman who taught herself Mandarin in high school is studying chemical engineering at Stanford, where she is a president’s scholar. Her name is Elizabeth Holmes. In her first year as an undergraduate she persuades her professor to allow her to attend the seminars he runs with his PhD students. Then one day she drops into his office to tell him that she’s dropping out of college because she has a “big idea” and wants to found a company that will revolutionise a huge part of the healthcare system – the market for blood testing services. Her company will be called Theranos.

Holmes’s big idea was for a way to perform multiple tests at once on a tiny drop of blood, and to deliver the results wirelessly to doctors. So she set about pitching to investors…

Read on

‘Social credit’ in China

This morning’s Observer column:

In the old days, western snobbery led to the complacent view that the Chinese could not originate, only copy. One hears this less now, as visitors to China return goggle-eyed at the extent to which its people have integrated digital technology into daily life. One colleague of mine recently returned exasperated because he had been expected to pay for everything there with his phone. Since he possesses only an ancient Nokia handset, he was unable to comply and had been reduced to mendicant status, having to ask his Chinese hosts to pay for everything.

If the future is digital, therefore, a significant minority of China’s 1.4 billion citizens are already there. More significantly, the country’s technocratic rulers have sussed that digital technology is not just good for making economic transactions frictionless, but also for implementing sophisticated systems of social control.

Read on

The Bitcoin/blockchain story: a mixture of greed and idealism

This morning’s Observer column:

Because I write about technology I am regularly assailed by people who are exercised about so-called “cryptocurrencies” like bitcoin, which most of them regard as a scam. But when I reply that while bitcoin might be newsworthy, the really important story concerns the blockchain technology that underpins it, their eyes glaze over and they start looking for the nearest exit as they conclude that they are in the grip of Coleridge’s Ancient Mariner.

And, in a sense, they are. Blockchain technology is indeed important, but it seems largely incomprehensible to ordinary mortals, even though the web teems with attempts to explain it…

Read on

How about an Angry Founders Club?

Lovely rant by Dave Winer:

We should start an “Angry Founders of the Internet” social club to discuss what the fuck happened and how can we tell people about the magic that underlies the crapware that the bigco’s are shoveling at us. It really is beautiful and amazing in there. Think of it this way. It’s easier to take the Interstate highway everywhere, but if you do that, you miss the charming B&Bs, the dramatic beaches, restaurants, jazz clubs. The thrill of riding a bike, hiking the Appalachian Trail, skiing. All that intellectually underpins this.

I’m not a ‘founder’ — though I count some of them among my friends. But I sympathise with Dave. The technology remains as magical as ever. It’s the corporate capture of it that rankles — plus the passivity and gullibility of so many of our fellow-humans.

Automation isn’t just about technology

This morning’s Observer column:

Ideology is what determines how you think when you don’t know you’re thinking. Neoliberalism is a prime example. Less well-known but equally insidious is technological determinism, which is a theory about how technology affects development. It comes in two flavours. One says that there is an inexorable internal logic in how technologies evolve. So, for example, when we got to the point where massive processing power and large quantities of data became easily available, machine-learning was an inevitable next step.

The second flavour of determinism – the most influential one – takes the form of an unshakable conviction that technology is what really drives history. And it turns out that most of us are infected with this version.

It manifests itself in many ways…

Read on

“The business model of the Internet is surveillance” contd.

This useful graphic comes from a wonderful post by the redoubtable Doc Searls about the ultimate unsustainability of the business model currently dominating the Web. He starts with a quote from “Facebook’s Surveillance Machine” — a NYT op-ed column by the equally redoubtable Zeynep Tufekci:

“Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.”

Doc then points out the irony of his Privacy Badger software detecting 13 hidden trackers on the NYT page on which Zeynep’s column appears. (I’ve just checked and Ghostery currently detects 19 trackers on it.)

The point, Doc goes on to say, is that the Times is just doing what every other publication that lives off adtech does: tracking-based advertising. “These publications”,

don’t just open the kimonos of their readers. They bring people’s bare digital necks to vampires ravenous for the blood of personal data, all for the purpose of returning “interest-based” advertising to those same people.

With no control by readers (beyond tracking protection which relatively few know how to use, and for which there is no one approach or experience), and damn little care or control by the publishers who bare those readers’ necks, who knows what the hell actually happens to the data? No one entity, that’s for sure.

Doc points out that at reputable outfits like the New York Times, writers like Zeynep have nothing to do with this endemic tracking. In such publications there probably is a functioning “Chinese Wall” between editorial and advertising. Just to drive the point home, he looks at Sue Halpern’s piece in the sainted New Yorker on “Cambridge Analytica, Facebook and the Revelations of Open Secrets”, where his RedMorph software finds 16 third-party trackers. (On my browser, Ghostery found 18.) The moral is, in a way, obvious: it’s a confirmation of Bruce Schneier’s original observation that “surveillance is the business model of the Internet”. Being a pedant, I would have said “of the Web”, but since many people can’t distinguish between the two, we’ll let Bruce’s formulation stand.
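The differing tracker counts hint at how tools like Ghostery, Privacy Badger and RedMorph work: they watch the resources a page pulls in from domains other than the one you actually visited. (The real extensions use curated blocklists and behavioural heuristics, not mere domain counting.) A minimal sketch of the underlying idea, using entirely hypothetical domain names:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ResourceScan(HTMLParser):
    """Collect the host names of resources a page loads."""
    def __init__(self):
        super().__init__()
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        # Scripts, images and iframes are the usual tracker vehicles.
        if tag in ("script", "img", "iframe"):
            host = urlparse(dict(attrs).get("src", "")).netloc
            if host:
                self.domains.add(host)

# A toy page: one first-party script plus two third-party resources.
page_host = "example-news.com"
html = """
<script src="https://example-news.com/app.js"></script>
<script src="https://ads.tracker-net.com/pixel.js"></script>
<img src="https://analytics.example-cdn.net/beacon.gif">
"""

scanner = ResourceScan()
scanner.feed(html)
third_party = {d for d in scanner.domains if not d.endswith(page_host)}
print(sorted(third_party))  # the two hypothetical third-party hosts
```

Every third-party host is a potential recipient of your reading habits, which is exactly the point Doc is making.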

Ethics 101 for Facebook’s geeks

“Ask yourself whether your technology persuades users to do something you wouldn’t want to be persuaded to do yourself.”

“Toward an Ethics of Persuasive Technology” by Daniel Berdichevsky and Erik Neuenschwander, Communications of the ACM, Vol. 42, No. 5 (May 1999), pp. 51–58. DOI: 10.1145/301353.301410

Macron on AI: he gets it

Very interesting interview given by President Macron to Wired Editor Nicholas Thompson. Here’s a key excerpt:

AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences. For instance, if you take healthcare: you can totally transform medical care making it much more predictive and personalized if you get access to a lot of data. We will open our data in France. I made this decision and announced it this afternoon. But the day you start dealing with privacy issues, the day you open this data and unveil personal information, you open a Pandora’s Box, with potential use cases that will not be increasing the common good and improving the way to treat you. In particular, it’s creating a potential for all the players to select you. This can be a very profitable business model: this data can be used to better treat people, it can be used to monitor patients, but it can also be sold to an insurer that will have intelligence on you and your medical risks, and could get a lot of money out of this information. The day we start to make such business out of this data is when a huge opportunity becomes a huge risk. It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution.

When you look at artificial intelligence today, the two leaders are the US and China. In the US, it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. That’s exactly the problem you have with Facebook and Cambridge Analytica or autonomous driving. On the other side, Chinese players collect a lot of data driven by a government whose principles and values are not ours. And Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale.

Even after discounting the presidential hubris, this is an interesting and revealing interview. Macron is probably the only major democratic leader who seems to have a grasp of this stuff. And a civilising view of it. As here:

The key driver should not only be technological progress, but human progress. This is a huge issue. I do believe that Europe is a place where we are able to assert collective preferences and articulate them with universal values. I mean, Europe is the place where the DNA of democracy was shaped, and therefore I think Europe has to get to grips with what could become a big challenge for democracies.

And this:

At a point of time–but I think it will be a US problem, not a European problem–at a point of time, your [American – ed] government, your people, may say, “Wake up. They are too big.” Not just too big to fail, but too big to be governed. Which is brand new. So at this point, you may choose to dismantle. That’s what happened at the very beginning of the oil sector when you had these big giants. That’s a competition issue.

But second, I have a territorial issue due to the fact that they are totally digital players. They disrupt traditional economic sectors. In some ways, this might be fine because they can also provide new solutions. But we have to retrain our people. These companies will not pay for that; the government will. Today the GAFA [an acronym for Google, Apple, Facebook, and Amazon] don’t pay all the taxes they should in Europe. So they don’t contribute to dealing with negative externalities they create. And they ask the sectors they disrupt to pay, because these guys, the old sectors pay VAT, corporate taxes and so on. That’s not sustainable.

Third, people should remain sovereign when it comes to privacy rules. France and Europe have their preferences in this regard. I want to protect privacy in this way or in that way. You don’t have the same rule in the US. And speaking about US players, how can I guarantee French people that US players will respect our regulation? So at a point of time, they will have to create actual legal bodies and incorporate it in Europe, being submitted to these rules. Which means in terms of processing information, organizing themselves, and so on, they will need, indeed, a much more European or national organization. Which in turn means that they will have to redesign themselves for a much more fragmented world. And that’s for sure because accountability and democracy happen at national or regional level but not at a global scale. If I don’t walk down this path, I cannot protect French citizens and guarantee their rights. If I don’t do that, I cannot guarantee French companies they are fairly treated. Because today, when I speak about GAFA, they are very much welcome – I want them to be part of my ecosystem – but they don’t play on the same level-playing field as the other players in the digital or traditional economy. And I cannot in the long run guarantee my citizens that their collective preferences or my rules can be totally implemented by these players because you don’t have the same regulation on the US side. All I know is that if I don’t, at a point of time, have this discussion and regulate them, I put myself in a situation not to be sovereign anymore.

Lots more in that vein. Well worth reading in full.

Will the GDPR make blockchains illegal in Europe?

Well, well. This is something I hadn’t anticipated:

Under the European Union’s General Data Protection Regulation, companies will be required to completely erase the personal data of any citizen who requests that they do so. For businesses that use blockchain, specifically applications with publicly available data trails such as Bitcoin and Ethereum, truly purging that information could be impossible. “Some blockchains, as currently designed, are incompatible with the GDPR,” says Michèle Finck, a lecturer in EU law at the University of Oxford. EU regulators, she says, will need to decide whether the technology must be barred from the region or reconfigure the new rules to permit an uneasy coexistence.
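The incompatibility follows from how blockchains are constructed: each block’s hash commits to the previous block’s hash, so altering or erasing anything in an old block invalidates every block that follows. A minimal sketch of that property (not any real chain’s block format) in Python:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, which include its predecessor's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Chain records together: each block commits to the previous hash."""
    chain, prev = [], "0" * 64
    for data in records:
        block = {"data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain) -> bool:
    """Valid only if every block's prev_hash matches its predecessor."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["alice's record", "bob's record", "carol's record"])
assert verify(chain)

# "Erasing" Bob's personal data changes that block's hash, so every
# later block's prev_hash no longer matches: the chain fails to verify.
chain[1]["data"] = "[erased]"
assert not verify(chain)
```

That is exactly the tension Finck describes: the tamper-evidence that makes a public blockchain trustworthy is the same property that makes GDPR-style erasure impossible without breaking the chain.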