Tuesday, November 15, 2011

Ammo: Privacy 2.0

For an explanation of the 'Ammo' prefix, please see here.

My research work is now reaching the stage where I'm able to usefully share parts of it (updates to censoring.me with actual working parts are coming soon too...). This is the first of those contributions and a necessary prelude to some upcoming posts that will build on it.

I was prompted to write this blog entry today by the very sad news of the death of one of the co-founders of Diaspora, 22-year-old Ilya Zhitomirskiy. I'll be writing on Diaspora soon and making important contrasts with Facebook, Google+ and Twitter. I think Diaspora is incredibly useful and will argue why in a later post - including recommending why you should switch from the latter three technologies to Diaspora.

First though, it's important to understand Jonathan Zittrain's concept of 'Privacy 2.0'. And why is this included in the 'Ammo' series? Simply because our conceptions of privacy are outdated - scholars and technologists such as Zittrain and his peers (Morozov, Benkler etc.) are beginning to provide immensely useful analyses and concepts for understanding the brave new digital world we find ourselves in. The powers that be, including the mass media, have been caught flat-footed by the new social media technologies. And it's more than vested interests trying (and failing) to protect their turf: they simply don't have the conceptual know-how to even begin to grasp this new domain and the promise (and pitfalls) it offers. Staying ahead of their (albeit very slow) curve will arm you (and help to protect you against the mendacious individuals who have already grasped it...).

Below is a summary of Zittrain's 'Privacy 2.0' concept. It is written in a very academic style and I make no apologies for that - it is one component amongst many in my current academic toolbox and so is expressed that way. I hope you find it useful:


'Privacy 2.0'


Instead of simply ‘privacy’, this concept appends the ‘2.0’ to reflect a new era of digital and internet privacy problems that are still being addressed using concepts, practices and legal precedents belonging to an ‘earlier’ conception of privacy – ‘privacy 1.0’.

The “generative” technologies that form the basis of digital, networking and internet devices and behaviours put old problems of privacy into new and often unexpected configurations. In both the digital and internet landscapes, broadly understood, enormous numbers of uncoordinated actions by individual actors can be combined in new and often unpredictable ways thanks to these same technologies.

Efforts to limit actors in order to preserve freedom have, until very recently, focused on constraining institutional actors (governments, large corporations etc.). The new privacy problems, however, go beyond this traditional paradigm, which centres on the collation of data in centralised databases logging potentially sensitive information on individuals. Whilst this is still an issue within the purview of ‘Privacy 2.0’, it is only a small part of a much wider breed of new problems. More modern legislation in the UK, such as the Data Protection Act, recognises this to a limited degree, yet still largely targets the same institutional actors as previous ‘Privacy 1.0’ legislation. The precedent-setting Privacy Act of 1974 in the U.S. remained limited to public institutions. The 1998 UK Data Protection Act recognises part of the new privacy problem by casting the net wider to “data controllers” generally and investing them with legal responsibilities. The fears motivating both pieces of legislation, however, originate from the idea of “mass dataveillance” – i.e. de facto surveillance via centralised data collection. Solutions such as restraint, disclosure and encryption are appropriate for these ‘Privacy 1.0’ concerns but extremely limited against the new generative technologies.

The generative mosaic

'Generative mosaic' is a term coined by Jonathan Zittrain in ‘The Future of the Internet’ that I think elegantly expresses the data mining privacy issues now coming to the fore.

Certain datasets collected on individuals, even if focused on only one aspect of their behaviour, allow patterns to be mined that the individuals themselves may have no awareness of (and thus may never have cause to complain if such information – potentially advantageous to whoever holds it – is used against them).

Such data can be immensely powerful even when gathered for only a very narrow range of behaviour. For example, Amazon were able to roll out differential pricing of their products according to past customer behaviour. They were caught out when some individuals deleted their browser cookies and discovered that the advertised price changed (the site no longer having a reference point for the individual’s previous behaviour).
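To make the mechanism concrete, here is a minimal sketch in Python of how cookie-based differential pricing can work. This is purely illustrative - it is not Amazon's actual system, and every name and number in it is hypothetical:

    BASE_PRICE = 24.99

    def quote_price(cookies):
        """Price a product using whatever past behaviour the cookie encodes."""
        if "customer_id" not in cookies:
            # No cookie: the visitor looks brand new and sees the default price.
            return BASE_PRICE
        # Cookie present: a purchase history (e.g. buying at full price before)
        # suggests a higher willingness to pay, so quote a marked-up price.
        return round(BASE_PRICE * 1.10, 2)

    print(quote_price({"customer_id": "abc123"}))  # 27.49
    print(quote_price({}))                         # 24.99 - delete your cookies, watch the price drop

The discrepancy between those two calls is exactly what the cookie-deleting shoppers stumbled upon.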

This example brings home the fact that data mining can very quickly produce tangible results given a comprehensive enough data set. Wonga.com currently claim to have a “market leading” bespoke algorithm for predicting whether a customer is likely to default on their loan. The standard assumed default rate in the retail loan sector is 10%. Wonga claim that their default rate remains in single figures, which is particularly astonishing considering their risky lending sector (short-term loans of hundreds of pounds at an astronomical interest rate). Their two primary sources of information are a set of approximately thirty questions asked on the initial application, followed by “thousands” of online data points. The fact that Wonga have monetized such information so efficiently demonstrates that there are hidden behavioural cues in people’s online data that most are not aware of.

(see my earlier blog 'The Rights of Wonga' for more information on this).
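To illustrate the shape of such a model - and only the shape, since Wonga's actual algorithm and data points are proprietary and unknown - here is a sketch of a simple logistic scorer that folds many weak behavioural signals into one default estimate. Every feature and weight below is invented for illustration:

    import math

    # All features and weights are hypothetical; Wonga's real signals
    # and model are proprietary.
    WEIGHTS = {
        "bias": -2.0,
        "filled_form_in_under_60s": 0.9,       # assumed cue: rushed applications
        "email_domain_is_disposable": 1.4,     # assumed cue
        "maxed_out_loan_slider": 0.7,          # assumed cue
        "arrived_via_price_comparison": -0.3,  # assumed protective cue
    }

    def default_probability(signals):
        """Combine weak behavioural cues into a probability of default."""
        score = WEIGHTS["bias"] + sum(
            weight for name, weight in WEIGHTS.items() if signals.get(name)
        )
        return 1.0 / (1.0 + math.exp(-score))  # logistic link: score -> probability

    applicant = {
        "filled_form_in_under_60s": True,
        "maxed_out_loan_slider": True,
        "arrived_via_price_comparison": True,
    }
    print(f"estimated default probability: {default_probability(applicant):.1%}")

No single signal here proves anything; it is the aggregation of thousands of such cues that makes the prediction - and the privacy problem - so powerful.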

The new threats to privacy

Compounding the ‘generative mosaic’ problem is the fact that government and corporate databases are increasingly less of a privacy threat than the one created by our ‘generative’ digital technologies and those who use them (virtually everyone in the first world, and ever increasing numbers in the second and third worlds).

Ever cheaper processors, networks and sensor technology have created billions of constant data gatherers worldwide. Further, the flow of data from (and to and between) these data gatherers is generally unimpeded by gatekeepers – unlike in the relatively restrained government and corporate sectors.

A key feature of “Web 2.0” is peer production, and the rise of the ‘prosumer’ – people who constantly consume and produce new content, often ‘remixing’ the content they have consumed. The process is chaotic, ever changing and usually without gatekeepers. As a result, the surveillers (and ‘sousveillers’, to borrow Steve Mann’s lexicon) are us. Government and corporate actors and their intermediaries represent an ever shrinking portion of the ‘Privacy 2.0’ landscape.

From Intellectual Property and Copyright to ‘Privacy 2.0’

“The intellectual property conflicts raised by the generative Internet, where people can still copy large amounts of copyrighted music without fear of repercussion, are rehearsals for the problems of Privacy 2.0” – Jonathan Zittrain, ‘The Future of the Internet’, p.210.

Whilst the intellectual property and copyright issues generated by modern digital technologies and environments (‘Web 2.0’) are at the forefront, with legal scholars such as Yochai Benkler and Lawrence Lessig at the coal face philosophising the quintessential issues and concepts, the impact of ‘Privacy 2.0’ is yet to be truly felt or understood. We are – as Zittrain puts it – effectively ‘all on notice’, as anyone can become a YouTube superstar in minutes.

Daniel Solove, in ‘The Future of Reputation’, considers the impact this can have, highlighting examples such as the ‘bus uncle’ of Hong Kong and ‘dog poo girl’ of South Korea. Incidents which, whilst public, would have remained relatively ephemeral in the past can now be recorded and spread virally across the globe, often with undesirable results due to a mass public reaction that would never before have been possible. ‘Bus uncle’ was the victim of a targeted attack at his workplace and ‘dog poo girl’ left her job, both as a result of the firestorms the videos generated. Lives are easily ruined in these cases because the total outrage generated is completely disproportionate to the social norm (or possibly, law) violated at the time. And as Zittrain puts it, “…ridicule or mere celebrity can be as chilling as outright disapprobation”.

A debate that regularly resurfaces as a result of this scrutiny concerns the idea of the ‘participatory panopticon’ – popularised by science fiction authors such as David Brin (in non-fiction works such as ‘The Transparent Society’) – the proposal being that total surveillance would not be a problem if it were comprehensive and equal: any and all surveillers could themselves be surveilled. Steve Mann has frequently carried out ‘sousveillance’ interventions to test social norms in situations of surveillance and data sharing, often finding that a Brin-style participatory panopticon may not be remotely as welcome as Brin and others suppose.

A strong counter-argument to the participatory panopticon – an idea which itself rests on an assumption of inevitability and technological determinism (cf. the quote from McNealy; similar sentiments have been expressed by other prominent figures in the IT and data mining industries, such as ex-Google CEO Eric Schmidt) – is the charge that such extensive scrutiny renders us all into automatons: hence the “chilling” effect referred to by Zittrain above. Indeed, Zittrain compares the situation to that of politicians whenever they are in the public eye. They have, he implies, been the first to understand and adapt to “Privacy 2.0” in their public behaviour (even if they are disastrous at articulating and applying these concepts), and as such we should regard their behaviour in modern times as the canary in the coal mine:

“Ubiquitous sensors threaten to push everyone toward treating each public encounter as if it were a press conference, creating fewer spaces in which citizens can express their private selves.”
(Zittrain, ‘The Future of the Internet’, p.212).

Public statements by politicians cleave to an uncontroversial and bland centre ground. This isn’t just a result of realpolitik; it is also a direct result of ubiquitous media coverage (increasingly now an activity of citizens – especially those from opposing camps) and the ease with which a sentence can be taken out of context. This has a chilling effect that stifles behavioural outliers, of which politicians are only the most prominent example. Speech and behaviour in the past were subject to the disapprobation of only a relatively small group. Now the exposure group can potentially be society-wide in seconds.


New conceptions of ‘Public’ and ‘Private’

It isn’t just legal conceptions of the ‘Privacy 1.0’ type that lag behind. So does a whole raft of concepts – one of the most important pairings here being our notions of ‘public’ and ‘private’, which still inform debates on privacy using ‘Privacy 1.0’ understandings. The most ubiquitous uses of these terms are not subtle enough to capture what we as individuals may want, in privacy terms, in the ‘Privacy 2.0’ world. Whilst behaviour ‘in public’ is technically open to the public eye, it has usually been observed by only a small number of eyewitnesses, often strangers, and has remained ephemeral. Generative technologies change this: what were previously private public spaces become public public spaces.

The principle of freedom of speech framed in the U.S. Constitution assumed that private conversations in public spaces would not become public broadcasts; there were simply no means at the time to effect this. Now that those means do exist, we are effectively naked in the eyes of the law, with no defence and a potentially chilling effect on our behaviour – one that may leave new or radical behaviours and speech only to those who are completely disconnected from existing norms and so do not fear them.

Combine this ‘conceptual slippage’ (generally ignored, since commercial interests are rarely threatened by ‘Privacy 2.0’ in the way they are by ‘Web 2.0’ and its attendant dilution of Intellectual Property and Copyright) with the mass generation of fresh data and ever more convenient and accurate means of tagging and identifying (including the kind of facial recognition being pioneered by Facebook, Google and others), and you have the perfect ‘Privacy 2.0’ storm.

Generative ‘mash-ups’ of data and the vast array of tools and APIs available for processing them mean it is increasingly trivial to find answers to questions such as ‘where was person x on date y’. And the answers will increasingly be coming from the general public, not from government or corporate surveillance.

For a practical application of this, see for example Tom Owad’s proof-of-concept for identifying ‘subversives’ on Amazon via their wishlists, using a bot that queries the site. A mashup combining this with photo recognition technologies would be relatively straightforward, and could itself be combined and recombined endlessly with other mashups to provide a much sharper slice of someone’s life than is now visible through even the most invasive state database systems in the world.
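As a rough sketch of the kind of mashup this paragraph describes - with stub data standing in for the scraped wishlists and photo tags, all names hypothetical, and no real service being queried - consider:

    FLAGGED_TITLES = {"1984", "Slaughterhouse-Five"}  # the sort of list Owad's bot searched for

    # Hypothetical records of the kind a wishlist-scraping bot might return.
    wishlists = [
        {"name": "J. Smith", "city": "Leeds", "titles": ["1984", "A Cookbook"]},
        {"name": "A. Jones", "city": "Bath", "titles": ["Gardening Basics"]},
    ]

    # Hypothetical output of a photo recognition pass over public photo sites.
    photo_tags = [
        {"name": "J. Smith", "place": "Leeds station", "date": "2011-11-12"},
    ]

    # Step 1: filter the wishlists for flagged reading habits.
    suspects = [w for w in wishlists if FLAGGED_TITLES & set(w["titles"])]

    # Step 2: join against the photo data - 'where was person x on date y'.
    for s in suspects:
        for p in (t for t in photo_tags if t["name"] == s["name"]):
            print(f"{s['name']} ({s['city']}): seen at {p['place']} on {p['date']}")

Each join like this is itself just another dataset, ready to be joined again - which is precisely what makes the mosaic generative.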

The ‘peer production’ technologies have, in effect, been disruptive for Intellectual Property and Copyright, whilst corrosive for privacy. Even were it possible to circumscribe a database describing the total picture of an individual that is accessible digitally, this would be of little use, for what the database contains changes rapidly, often from one moment to the next. The emergent and inscrutable nature of the outputs of these technologies means that the current ‘Privacy 1.0’ concepts and legal structures we operate within – from the government to the corporate to the academic worlds – are urgently in need of revision.
