But iOS 14.5 apparently changes all that; now, iPhone users are asked if they want to opt in to tracking. A pop-up dialogue box appears saying: “Allow [app name] to track your activity across other companies’ apps and websites?” and providing two options: “ask app not to track” and “allow”. En passant, note that it says “ask” rather than “tell”, another subtle indicator of how much tech companies actually care about their users’ agency.
When Apple announced months ago that it was planning to make this change, the big shots in the data-tracking racket went apeshit, rightly inferring that many iPhone users would decline to be tracked when offered such an obvious escape route. Suddenly, the lucrative $350bn business of collecting user data to sell to data brokers, or linking a user’s app data with third-party data that was collected in order to target ads, was under threat. The new rules, Apple said, would also affect other app processes, including sharing location data with data brokers and implementing hidden trackers for the purpose of conducting ad analytics. Momentarily taken aback by the ferocity of the storm, Apple decided to postpone the introduction in order to give the industry “time to adapt” to the forthcoming reality, thereby breaking the golden rule that one should never give gangsters an even break…
Where those Florida beach celebrants went after partying
You know the story: lots and lots of people congregated on a beach in Fort Lauderdale, Florida during the Spring break weekend. A couple of geo-tracking companies, one of which was Tectonix (slogan: “Reshape Your Data Experience”), posted a fascinating video showing where those celebrants went afterwards.
Donie O’Sullivan of CNN has a good report on this. Apparently there are at least two companies — Tectonix and X-Mode — doing this.
The map generated from X-Mode’s data by Tectonix, a data visualization firm, is indeed powerful and underlines why the US government might be considering using location data from Americans’ cell phones to try to track and possibly curtail the spread of the coronavirus.
It also may point to a potential sea change in how some in the tech industry talk about the data they possess. Silicon Valley has endured years of high-profile data privacy scandals. But now smaller companies like X-Mode, unknown to the majority of Americans, are publicly touting demonstrations of their technology — suggesting businesses like theirs see the potential of helping track the spread of the coronavirus as an opportunity to show how their often maligned data can be used for good. Cuebiq, another location tracking company, has been similarly public about its abilities.
For people who don’t understand the tracking capabilities of smartphones this will probably be startling info. For those who follow the industry it is, sadly, old hat.
IT support staff are critical workers too
One overlooked group of critical workers is the folks doing IT support to enable thousands and thousands of people who have never worked outside an office to do it at home. I see it in my college and in the wider university. But a colleague who works for a large corporation tells me that their IT staff are stretched to breaking point — having to prep 300-500 new laptops a day with the security and other specialised software needed for secure home working. IT support is one of the most stressful occupations there is (partly because clients are often angry and/or frustrated by the time they call). And getting beginners onto Zoom, Teams, VPNs etc. often isn’t easy.
How the telephone failed to make the Spanish flu bearable
This isn’t the first time technology was supposed to make isolation easier.
This is from the St Louis Post-Dispatch of November 17, 1910, years before the Spanish flu outbreak. Harry McCracken, the Tech Editor of Fast Company, has a lovely essay on the role the Bell system played in that crisis.
Cities and entire states imposed emergency measures similar to those in place today, aiming to flatten the flu’s curve by keeping people apart from each other. Places of business, education, and worship were temporarily closed, and masks were required in some areas.
For a time, it looked like the telephone might help people carry on their lives with minimal disruption. In Holton, Kansas, the local Red Cross distributed placards that local merchants could place in their windows, encouraging customers—especially those who might be ill—to call rather than enter the premises. (Even before the epidemic, telephone ordering was becoming a popular form of commerce—grocery stores, for instance, offered Instacart-like delivery services.)
Guess what? There was a category of ‘critical’ workers that nobody had thought of: switchboard operators. Overwhelmingly female. And they were as vulnerable to the flu as our critical workers. So you can guess the rest of the story — encapsulated in these ads:
It’s a lovely essay — worth reading in full.
YouTube and 5G coronavirus conspiracy theories
You may not have noticed, but seriously crackpot conspiracy theories about 5G mobile telephony causing COVID-19 have been circulating on social media, including YouTube. And nutters have apparently been so moved by them that they have started to set fire to mobile phone masts. This has led to demands that YouTube shape up and stop this nonsense circulating.
According to a Guardian report by Alex Hern, YouTube has
“reduced the amount of content spreading conspiracy theories about links between 5G technology and coronavirus that it recommends to users, it has said, as four more attacks were recorded on phone masts within 24 hours.
The online video company will actively remove videos that breach its policies, it said. But content that is simply conspiratorial about 5G mobile communications networks, without mentioning coronavirus, is still allowed on the site.”
This is standard-issue First Amendment cant. YouTube (like Facebook and Twitter) believes it has a responsibility to let nutters broadcast so long as they do not violate those sacred Terms and Conditions. But the First Amendment applies only to the government. YouTube is a private platform, owned and controlled by Google (well, Alphabet, Google’s parent company). It can do what it likes. It has no obligation to give a platform to anyone.
But 5G doesn’t give you cancer. It won’t make you sick. And…god, I am getting stupider just thinking about typing this, coronavirus is not a false-flag op to disguise the illnesses that 5G is secretly creating.
The reason I have to mention that is that the conspiracyverse is full of that specific theory, and it’s inspiring people to COMMIT ARSON and torch 5G towers.
In the wake of multiple attacks on 5G towers, Youtube has announced changes to its moderation guidelines. It will allow 5G conspiracy theories, just not ones that (oh god my fingers are seizing up from the stupid) link 5G with coronavirus.
Corona conspiracy theories are new, but conspiracy theories have been around forever. “Even a cursory perusal of the arguments for these conspiracies”, says Cory Doctorow, “reveals that they have not gotten better, even as they’ve gained traction”. If the same arguments are attracting more adherents, he argues, then one of two things is going on. Either: YouTube is a mind-control ray that can turn rational people into believers in absurd ideas; or the number of people to whom these ideas seem plausible has grown and/or YouTube has made it more efficient to reach those people.
Cory thinks it’s the latter, and I agree. YouTube is not a powerful hypnotising machine so much as a machine for finding people who are susceptible to nonsense.
This blog is now also available as a once-a-day email. If you think this might work better for you why not subscribe here? (It’s free and there’s a 1-click unsubscribe if you subsequently decide you need to prune your inbox!) One email a day, in your inbox at 07:00 every morning.
One of the things that makes this epidemic different from predecessors is the dominance of social media in today’s world. One of the most perceptive analyses of what’s going on has come from Kate Starbird of the University of Washington, who’s a leading expert on “crisis informatics” — the study of how information flows in crisis situations, especially over social media. Crises always generate high levels of uncertainty, she argues, which in turn breeds anxiety. This leads people to try to resolve the uncertainty and reduce the anxiety by seeking out information about the threat. They’re doing what humans always do — trying to make sense of a confusing situation.
In the pre-internet era, information was curated by editorial gatekeepers and official government sources. But now anything goes, and sense-making involves trying to find out stuff on the internet, through search engines and social media. Some of the information gathered may be reliable, but a lot of it won’t be. There are bad actors manipulating those platforms for economic gain (need a few face-masks, guv?) or ideological purposes. People retweet links without having looked at the site. And even innocently conceived jokes (a photograph of empty shelves in a local supermarket, for example) can trigger panic-buying…
The other day — partly out of curiosity, having noticed that our local Aldi store had apparently been cleaned out of hand-sanitisers — I went on to Amazon.co.uk to see what was happening there. Lots of sanitisers on offer, though only a small percentage seemed to have the 60%+ alcohol content needed to see off the coronavirus. So I chose one — priced at £6.99 (which seemed steep for a tiny bottle) but it advertised free delivery, so I pushed it into the basket and continued. Turned out that the free delivery meant delivery between March 30 and April 7. But if I wanted it sooner than that I could have it by paying for delivery. How much? £48. Having thus confirmed my low opinion of human nature, I deleted the item and logged off. (I have plenty of soap and have never hitherto used a hand-sanitiser.)
I guess this always happens when there’s a panic and people over-react. And of course there are smart people who know how to exploit that. The NYT has an interesting story today about two brothers who set about buying every hand-sanitiser and wipe they could find — in the process clearing the shelves of every store they visited on March 1 — with the intention of selling them at a heavy markup on Amazon. Initially, it went swimmingly — until Amazon decided to take action against merchants the company judged to be engaged in price-gouging. Now, as the headline puts it over a photograph of one of the brothers in his lock-up garage, “He has 17,700 bottles of Hand Sanitizer and Nowhere to Sell Them”.
“If the government were to demand pictures of citizens in a variety of poses, against different backdrops, indoors and outdoors, how many Americans would readily comply? But we are already building databases of ourselves, one selfie at a time. Online images of us, our children, and our friends, often helpfully labelled with first names, which we’ve posted to photo-sharing sites like Flickr, have ended up in data sets used to train face-recognition systems.”
Yeah, but if you’re an AI geek, you can make a T-shirt with a pattern that renders you invisible to facial-recognition systems. This from a fascinating New Yorker essay by John Seabrook.
The economic impact of the pandemic (and related thoughts)
Greg Mankiw is the Robert M. Beren Professor of Economics at Harvard. People keep ringing him up asking for his views on the impact of the virus. Here’s his blogged reply:
A recession is likely and perhaps optimal (not in the sense of desirable but in the sense of the best we can do under the circumstances).
Mitigating the health crisis is the first priority. Give Dr. Fauci anything he asks for.
Fiscal policymakers should focus not on aggregate demand but on social insurance. Financial planners tell people to have six months of living expenses in an emergency fund. Sadly, many people do not.
Considering the difficulty of identifying the truly needy and the problems inherent in trying to do so, sending every American a $1000 check asap would be a good start. A payroll tax cut makes little sense in this circumstance, because it does nothing for those who can’t work.
There are times to worry about the growing government debt. This is not one of them.
Externalities abound. Helping people over their current economic difficulties may keep more people at home, reducing the spread of the virus. In other words, there are efficiency as well as equity arguments for social insurance.
Monetary policy should focus on maintaining liquidity. The Fed’s role in setting interest rates is less important than its role as the lender of last resort. If the Fed thinks that its hands are excessively tied in this regard by Dodd-Frank rules, Congress should untie them quickly.
President Trump should shut-the-hell-up. He should defer to those who know what they are talking about. Sadly, this is unlikely to occur.
Ian Donald’s tweetstream about UK government policy on COVID-19
Ultimately, the lesson of Clearview is that when a digital technology is developed, it rapidly becomes commodified. Once upon a time, this stuff was the province of big corporations. Now it can be exploited by small fry. And on a shoestring budget. One of the co-founders paid for server costs and basic expenses. Mr Ton-That lived on credit-card debt. And everyone worked from home. “Democracy dies in darkness” goes the motto of the Washington Post. “Privacy dies in a hacker’s bedroom” might now be more appropriate.
UPDATE A lawsuit — seeking class-action status — was filed this week in Illinois against Clearview AI, a New York-based startup that has scraped social media networks for people’s photos and created one of the biggest facial recognition databases in the world.
”The belief that privacy is private has left us careening toward a future that we did not choose, because it failed to reckon with the profound distinction between a society that insists upon sovereign individual rights and one that lives by the social relations of the one-way mirror. The lesson is that privacy is public — it is a collective good that is logically and morally inseparable from the values of human autonomy and self-determination upon which privacy depends and without which a democratic society is unimaginable.”
Great OpEd piece.
The winding path
Why the media shouldn’t underestimate Joe Biden
Simple: Trump’s crowd don’t. They think he’s the real threat. (Which explains the behaviour that led to Trump’s impeachment.) David Brooks has some sharp insights into why the chattering classes are off target about this.
It’s the 947th consecutive sign that we in the coastal chattering classes have not cured our insularity problem. It’s the 947th case in which we see that every second you spend on Twitter detracts from your knowledge of American politics, and that the only cure to this insularity disease is constant travel and interviewing, close attention to state and local data and raw abject humility about the fact that the attitudes and academic degrees that you think make you clever are actually the attitudes and academic degrees that separate you from the real texture of American life.
Also, the long and wide-ranging [NYT interview](https://www.nytimes.com/interactive/2020/01/17/opinion/joe-biden-nytimes-interview.html) with him is full of interesting stuff — like that he thinks that Section 230 of the Communications Decency Act (that’s the get-out-of-gaol card for the tech companies) should be revoked. I particularly enjoyed this observation by Brooks: “Jeremy Corbyn in Britain and Bernie Sanders here are a doctoral student’s idea of a working-class candidate, not an actual working person’s idea of one.”
I’ve been reading “The Anatomy of a Large-Scale Hypertextual Web Search Engine”, the original academic paper in which the co-founders of Google, Sergey Brin and Larry Page, outlined their search engine and its properties. It’s a fascinating read for various reasons, not least the evidence it presents of the pair’s originality. And at the end there are two Appendices, the first of which suggests an eerie prescience about the extent to which advertising would be a malignant business model for any enterprise aiming at objective search. Here it is:
Appendix A: Advertising and Mixed Motives
Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users. For example, in our prototype search engine one of the top results for cellular phone is “The Effect of Cellular Phone Use Upon Driver Attention”, a study which explains in great detail the distractions and risk associated with conversing on a cell phone while driving. This search result came up first because of its high importance as judged by the PageRank algorithm, an approximation of citation importance on the web [Page, 98]. It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media [Bagdikian 83], we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.
Since it is very difficult even for experts to evaluate search engines, search engine bias is particularly insidious. A good example was OpenText, which was reported to be selling companies the right to be listed at the top of the search results for particular queries [Marchiori 97]. This type of bias is much more insidious than advertising, because it is not clear who “deserves” to be there, and who is willing to pay money to be listed. This business model resulted in an uproar, and OpenText has ceased to be a viable search engine. But less blatant bias are likely to be tolerated by the market. For example, a search engine could add a small factor to search results from “friendly” companies, and subtract a factor from results from competitors. This type of bias is very difficult to detect but could still have a significant effect on the market. Furthermore, advertising income often provides an incentive to provide poor quality search results. For example, we noticed a major search engine would not return a large airline’s homepage when the airline’s name was given as a query. It so happened that the airline had placed an expensive ad, linked to the query that was its name. A better search engine would not have required this ad, and possibly resulted in the loss of the revenue from the airline to the search engine. In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. However, there will always be money from advertisers who want a customer to switch products, or have something that is genuinely new. But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.
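The “citation importance” that the appendix refers to is the PageRank idea at the heart of the paper: a page matters if important pages link to it. As a rough illustration (not the paper’s own code, and with an invented toy graph), the classic power-iteration version can be sketched like this:

```python
# Minimal PageRank sketch (power iteration). The graph and damping
# factor below are illustrative choices, not taken from the paper.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # A page shares its current score equally among its outlinks.
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += damping * share
            else:
                # Dangling page: spread its score evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
# "C" ends up with the highest score: three of the four pages link to it.
```

The point of the appendix is that a score like this, computed purely from link structure, can surface a result (the driver-distraction study) that no advertiser would ever pay to promote.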
In today’s edition of his regular newsletter ‘Big’, Matt Stoller reports something Rana Foroohar, author of Don’t Be Evil: The Case Against Big Tech (my review of which is here), said when he asked her what was the most surprising or weird thing she learned when working on her book. “I don’t know if it’s weird”, she replied,
but the most surprising thing I learned while researching the book was that the founders of Google, Sergei and Larry, had basically predicted the key problems with surveillance capitalism and where they would lead us back in their original paper on search, written while they were Stanford grad students. At the very end, in the appendix, there’s a paragraph where they admit that the targeted advertising business model could be misused by companies or other entities in ways that would hurt users. This is kind of a bombshell revelation given that search engines say everything they do is for users. The fact that this paper hasn’t gotten more attention makes me think people aren’t reading….which is itself part of the problem of attention capture I describe in the book.
“Don’t be evil” was the mantra of the co-founders of Google, Sergey Brin and Larry Page, the graduate students who, in the late 1990s, had invented a groundbreaking way of searching the web. At the time, one of the things the duo believed to be evil was advertising. There’s no reason to doubt their initial sincerity on this matter, but when the slogan was included in the prospectus for their company’s flotation in 2004 one began to wonder what they were smoking. Were they really naive enough to believe that one could run a public company on a policy of ethical purity?
The problem was that purity requires a business model to support it and in 2000 the venture capitalists who had invested in Google pointed out to the boys that they didn’t have one. So they invented a model that involved harvesting users’ data to enable targeted advertising. And in the four years between that capitulation to reality and the flotation, Google’s revenues increased by nearly 3,590%. That kind of money talks.
Rana Foroohar has adopted the Google mantra as the title for her masterful critique of the tech giants that now dominate our world…
Organisations that deploy Facebook’s ubiquitous “Like” button on their websites risk falling foul of the General Data Protection Regulation following a landmark ruling by the European Court of Justice.
The EU’s highest court has decided that website owners can be held liable for data collection when using the so-called “social sharing” widgets.
The ruling (PDF) states that employing such widgets would make the organisation a joint data controller, along with Facebook – and judging by its recent record, you don’t want to be anywhere near Zuckerberg’s antisocial network when privacy regulators come a-calling.
You also go to the Philippines in this book and you talk to people from other countries, in Mexico, for example. What are the consequences of outsourcing these jobs in terms of the quality of the work being done? And I don’t ask that to imply that people abroad can’t do a job as well.
I think there is a precedent for outsourcing this type of service work, and we see that in the call-center industry. The same kinds of problems that are present in that work are present in this particular context. So that would be things like the dissonance and distance culturally and linguistically, contextually, and politically, for a group of people that are being asked to adjudicate and make decisions about material that emanates from one place in the world and is destined for another, that may have absolutely nothing to do with their day-to-day life.
I think a second thing is that the marketplace has chased a globalization solution for the same reasons it has in other industries, which are the issues of: Where can we get the cheapest labor? What countries are lax in terms of labor protections? Where is organizing low? Where is there a huge pool of people for whom this job might be appealing because it’s better than the other jobs on offer? It’s not a simple case of everyone in the Philippines who does this work is exploited, and I was really trying hard not to make that claim in the book. But, at the same time, the United States sends the work to the Philippines for a reason. It sends the work there because Filipino people have a long-standing relationship, so to speak, with the United States, that means that they have a better facility to understand the American context. That’s actually been in the favor of most people in the Philippines.
It’s worrisome to see those kinds of colonial traditions and practices picked up again, especially in this digital marketplace, this marketplace of the mind that was supposed to be deliverance from so many of the difficult working conditions of the twentieth century. So I think that’s the big thing about the way that this plays out on the global stage. The companies have a problem that they don’t have enough people to do the work. And so they are pulling out all the stops in a way to find people to do the work, but it’s still not nearly enough.
What could be done to make the lives of these workers better, given that this is a job that needs to be done? And it needs to be done by smart people doing it well, who need to be very well-trained.
This is a question that I’ve often posed to the workers themselves because I certainly am not possessed of the answers on my own. They want better pay. And I think we can read that in a lot of ways: they want better pay, they want to be respected. The nature of the way the work has been designed has been for the work to be secret. In many cases, their N.D.A. precludes them from even talking about the work. And the industry itself formulated the job as a source of shame in that sense, an industry source of shame. They were not eager to tout the efforts of these people, and so instead they hid them in the shadows. And, if nothing else, that was a business decision and a value judgment that could have gone another way. I think there’s still a chance that we could understand the work of these people in a different way and value it differently, collectively. And we could ask that the companies do that as well.
Q: We’re now more than two years out from that experience, and obviously the controversies have not gone away — they’ve actually multiplied. Do you think Zuckerberg and Sandberg have made any progress on the stuff you warned about?
A: I want to avoid absolutes, but I think it’s safe to say that the business model is the source of the problem, and that it’s the same business model as before. And to the extent that they made progress, it’s in going after different moles in the Whack-a-Mole game. From the point of view of the audience, Facebook is as threatening as ever.
My eye was caught by a headline in Wired magazine: “When algorithms think you want to die”. Below it was an article by two academic researchers, Ysabel Gerrard and Tarleton Gillespie, about the “recommendation engines” that are a central feature of social media and e-commerce sites.
Everyone who uses the web is familiar with these engines. A recommendation algorithm is what prompts Amazon to tell me that since I’ve bought Custodians of the Internet, Gillespie’s excellent book on the moderation of online content, I might also be interested in Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and a host of other books about algorithmic power and bias. In that particular case, the algorithm’s guess is accurate and helpful: it informs me about stuff that I should have known about but hadn’t.
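At their simplest, such engines boil down to “customers who bought X also bought Y”: count which items turn up in the same baskets and recommend the most frequent companions. Here is a toy sketch of that co-occurrence approach (the purchase data is invented for illustration; real engines are far more sophisticated):

```python
# Toy "also bought" recommender based on item co-occurrence counts.
# The baskets below are made up; book titles are from the text above.
from collections import Counter
from itertools import combinations

purchases = [
    {"Custodians of the Internet", "Algorithms of Oppression"},
    {"Custodians of the Internet", "Algorithms of Oppression",
     "The Age of Surveillance Capitalism"},
    {"Custodians of the Internet", "The Age of Surveillance Capitalism"},
    {"Algorithms of Oppression"},
]

# Count how often each ordered pair of items shares a basket.
co_occurrence = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(item, k=2):
    """Return up to k items most often bought alongside `item`."""
    scored = [(count, other)
              for (i, other), count in co_occurrence.items() if i == item]
    return [other for count, other in sorted(scored, reverse=True)[:k]]

print(recommend("Custodians of the Internet"))
```

Even this crude counting reproduces the behaviour described above: buy the Gillespie book and the engine surfaces the other titles it has seen in the same baskets.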
Recommendation engines are central to the “personalisation” of online content and were once seen as largely benign…