Common sense about hacking

From the Economist blog:

For companies, there are two strategies for dealing with people who uncover flaws in their IT security: a right way and a wrong way. Our leader on hacking this week tells of the approach that Volkswagen took when a group of academics informed it that they had uncovered a vulnerability in a remote-car-key system: the firm slapped a court injunction on them. It is difficult to conceive of an approach more likely to be counter-productive.

United Airlines, it seems, has a far more enlightened attitude. It has just awarded two hackers 1m air miles each after they managed to spot security weak spots in its website. The move is part of a scheme called “bug bounty”, in which hackers are incentivised to contact the company with security flaws, rather than post them online. This approach is common at Silicon Valley firms, and makes just as much sense for old-fashioned industries too. Pound to a penny, there are nefarious types out there trying to break into most big companies’ IT systems. Encouraging “white-hat” hackers to uncover flaws, and then rewarding them for not revealing them to the wider world, may sit uncomfortably with people’s sense of fairness. However, if it gives firms time to fix the problem, in pragmatic terms the benefit is obvious.


The big heist

OK. If you want a really big story, then this is it:

WASHINGTON — The Obama administration on Thursday revealed that 21.5 million people were swept up in a colossal breach of government computer systems that was far more damaging than initially thought, resulting in the theft of a vast trove of personal information, including Social Security numbers and some fingerprints.

Every person given a government background check for the last 15 years was probably affected, the Office of Personnel Management said in announcing the results of a forensic investigation of the episode, whose existence was known but not its sweeping toll.

The agency said hackers stole “sensitive information,” including addresses, health and financial history, and other private details, from 19.7 million people who had been subjected to a government background check, as well as 1.8 million others, including their spouses and friends. The theft was separate from, but related to, a breach revealed last month that compromised the personnel data of 4.2 million federal employees, officials said.

Both attacks are believed to have originated in China, although senior administration officials on Thursday declined to pinpoint a perpetrator, except to say that they had indications that the same actor carried out the two hacks.

The breaches constitute what is apparently the largest cyberattack into the systems of the United States government, providing a frightening glimpse of the technological vulnerabilities of federal agencies that handle sensitive information. They also seemed certain to intensify debate in Washington over what the government must do to address its substantial weaknesses in cybersecurity, long the subject of dire warnings but seldom acted upon by agencies, Congress or the White House.

Note the phrase “other private details, from 19.7 million people who had been subjected to a government background check”.

Humans are the weakest link

This morning’s Observer column:

PGP (now in its fifth incarnation) does indeed enable one to protect one’s communications from spying eyes. It meets Snowden’s requirement for “strong crypto”. But it hasn’t realised its revolutionary potential because it turns out that powerful software is a necessary but not sufficient condition for effective security. And the reason is that, to be effective, PGP has to be implemented by humans and they turn out to be the weak link in the chain.

This was brought forcibly home to me last week at a symposium on encryption, anonymity and human rights jointly organised by Amnesty International and academics from Cambridge University…

Read on

Learning to read

Today’s Observer column:

I never thought I’d find myself writing this, but the Daily Mail has finally done something useful for society. Mind you, it’s done it unintentionally: it didn’t know it was doing good. But still… It would be churlish not to acknowledge its achievement…

Sounds improbable? I know. But read on

Why Bitcoin is interesting

This morning’s Observer column:

When the banking system went into meltdown in 2008, an intriguing glimpse of an alternative future appeared. On 31 October, an unknown cryptographer who went by the name of Satoshi Nakamoto launched what he described as “a new electronic cash system that’s fully peer to peer, with no trusted third party”. The name he assigned to this new currency was bitcoin.

Since then, the world has been divided into three camps: those who think that bitcoin must be a scam; those who think it’s one of the most interesting technological developments in decades; and (the vast majority) those who have no idea what the fuss is about.

I belong in the second camp, but I can see why others see it differently…

Read on
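What makes bitcoin technically interesting, and why I'm in the second camp, is how Nakamoto dispensed with the "trusted third party": transactions are batched into blocks, and a block only counts once someone has found a hash for it below a difficulty threshold, so-called proof of work. Here is a toy sketch in Python (illustrative only; real bitcoin double-hashes a structured block header against a numeric difficulty target, not a string prefix):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` hex zeros -- a toy version of bitcoin's proof-of-work."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 1 BTC")
print(nonce, digest)
```

The asymmetry is the point: producing the proof takes many thousands of hash attempts, but anyone can verify it with a single hash, which is what lets strangers agree on a shared ledger without trusting each other or any bank.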

Robotic reporting


This follows on from our seminar on the implications of advanced robotics for employment. Here are two stories about a company's results, both based on the same wire-service copy. One was written by a bot, the other by a human.

It raises two questions:

  1. Which was which? (Easy, I think).
  2. More difficult: which is better? More accurate? Good enough?

HT to Andrea Vance for the link.

Implications of a new machine age

This morning’s Observer column:

As a species, we don’t seem to be very good at dealing with nonlinearity. We cope moderately well with situations and environments that are changing gradually. But sudden, major discontinuities – what some people call “tipping points” – leave us spooked. That’s why we are so perversely relaxed about climate change, for example: things are changing slowly, imperceptibly almost, but so far there hasn’t been the kind of sharp, catastrophic change that would lead us seriously to recalibrate our behaviour and attitudes.

So it is with information technology…

Read on

Advice from right field

Sometimes, interesting ideas come from the least-expected sources.

Here, for example, is Tim Montgomerie in The Times offering some to the Labour Party:

“Left-wing parties need to find a new identity for a movement that has been defined by redistribution for as long as Marxism elbowed Methodism aside as socialism’s main inspiration. What is the left’s new purpose? Intergenerational equality? Using new technology for progressive ends? Housebuilding to spread ownership of assets? Or even some renewed recognition of the value of Methodism’s voluntary mutuality?”

Answer: All of the above, but underpinned by an overarching analysis of the world as it is, not as it used to be or as we’d like it to be.

Interesting also that Montgomerie mentions the one thing that Labour under Miliband resolutely ignored: the potential of the Net to revitalise political action.

Technology and the future of work

Our Technology and Democracy research project had a terrific talk this afternoon by Mike Osborne of the Oxford Martin School about the research that he and Carl Frey published in “The future of employment: how susceptible are jobs to computerisation?”.

That paper is impressive in lots of ways. Unlike many academic research reports, for example, it’s written in pellucid prose. And it’s historically informed — which is unusual in technology publications: the authors know that the issue of the impact of machinery on jobs goes back a long, long way — at least to Elizabethan times with William Lee and his request for a patent on his stocking frame loom.

But most importantly, the Frey-Osborne study is the best analysis to date of what we in our project regard as one of the most significant puzzles of our time: namely what does the combination of infinite computational power, big data, machine learning and advanced robotics mean for our future? Or, to quote the title of Norbert Wiener’s book, what will constitute “the human use of human beings” in a digital future?

What preoccupies us is the question of whether we now stand on a hinge of history. Are there things about digital technologies which make our situation and prospects different from the disruptions that our ancestors faced when confronted with the seminal general-purpose technologies of the past? Can we say with any confidence that this time it’s different?

Mike’s presentation provoked lots of thoughts…

The first is the objection often made by historians and economists who argue that apocalyptic concerns about digital technology are just outbreaks of ahistorical hysteria. Historically, they say, technological progress has always had two conflicting impacts on employment. One is the overtly destructive impact — the leading edge of the Schumpeterian wave, if you like. The other is the capitalisation effect, as companies start to enter industries where productivity is relatively high, leading to the expansion of employment in these new or revitalised industries. So, according to the sceptics, although automation definitely taketh away, it also giveth.

But if I’ve understood Mike and Carl’s work correctly, this time it might be different, for two reasons.

  • One is that whereas automation historically served to eliminate manual and/or highly routinised tasks, the new digital technologies mean that automation is remorselessly moving into work domains that have traditionally been seen as cognitive and non-routine.

  • The second is that what is happening now is what Brian Arthur called “combinatorial innovation”, which is basically the network effect applied to technological innovation. This means that the pace of innovation is increasing exponentially, which in turn means that our traditional capacity to transition into employment in new areas is going to be outrun by the rate of change. In which case, the life-chances of a lot of human beings could be undermined or destroyed.
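Arthur's point can be made with simple arithmetic (an illustrative sketch of the combinatorial logic, not a calculation from the Frey-Osborne paper): if new innovations arise from combining existing component technologies, then the space of possible combinations grows exponentially even while the stock of components grows only linearly.

```python
def combination_space(n: int) -> int:
    """Number of distinct multi-component combinations (subsets of two
    or more) that can be formed from n component technologies:
    2**n minus the empty set and the n singletons."""
    return 2**n - n - 1

# Linear growth in components, exponential growth in possible combinations:
for n in (10, 20, 30):
    print(n, combination_space(n))
```

Ten components yield about a thousand possible combinations; thirty yield over a billion. On this view, the raw material for innovation compounds on itself, which is why the pace keeps accelerating.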

Which leads to a final thought, namely that in the end this will have to come down to politics. Mike and Carl’s analysis is not a deterministic one — they don’t imply that the job-destruction that they think could happen will happen. Decisions about whether to deploy these technologies will, in the end, be made by people — the owners of capital — not by machines. And if there’s no element of societal control in all this, then the clear implication is that Piketty’s rule about the returns from capital generally outrunning the returns from employment will be turbocharged, with predictable consequences for inequality.

But of course, it doesn’t have to be like that. The economic and productivity gains that result from these technologies could be put to purposes other than giving even more to those who already have. And that brings to mind Keynes’s famous essay on “The Economic Possibilities for our Grandchildren” in which he saw the possibility that, through technology-driven productivity gains, man “could for the first time since his creation … be faced with his real, his permanent problem — how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well”.
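Keynes's phrase “compound interest” was doing real work in that essay: writing in 1930, he guessed that living standards would be roughly four to eight times higher within a century, which corresponds to annual growth of only about 1.4 to 2.1 per cent. The arithmetic is easy to check (the growth rates here are my back-calculation, not figures from the essay):

```python
def multiple_after(rate: float, years: int = 100) -> float:
    """Cumulative growth multiple after compounding `rate` per year
    for `years` years."""
    return (1 + rate) ** years

# Keynes's guessed four- to eight-fold rise over a century implies
# modest-looking annual growth rates:
print(multiple_after(0.014))   # roughly 4x
print(multiple_after(0.021))   # roughly 8x
```

Small steady gains compound into transformations, which is exactly why the question of who captures those gains matters so much.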

Only politics can ensure that that agreeable prospect comes to pass, however. This isn’t just about technology, in other words.

And now here’s the really strange thing: in all the Sturm und Drang of our recent election campaign, the implications of computerisation for employment weren’t mentioned once. Not once.