Privacy and data use — a gap in the theory
This is the first of a two-part series on the theory of information privacy. In the first post, I review the theory to date, and outline what I regard as a huge and crucial gap. In the second post, I try to fill that chasm.
Discussion of information privacy has exploded, spurred by increasing awareness of data’s collection and use. Confusion reigns, however, for reasons such as:
- Data is often collected behind a veil of secrecy. That’s top-of-mind these days, in light of the Snowden/Greenwald revelations.
- Nobody understands all of the various technologies involved. Telecom experts don’t know a lot about data management and analysis, and vice-versa, while political reporters don’t understand much about technology at all. I think numerous reporting errors have resulted.
- There’s no successful theory explaining when privacy should and shouldn’t be preserved. To put it quite colloquially:
- Big Brother is watching you …
- … and he’s scary.
- Privacy theory focuses on the “watching” part …
- … but the “scary” part is what really needs to be addressed.
Let’s address the last point.
Privacy theory before computers
Modern privacy theory is usually dated to an 1890 article by Samuel Warren and Louis Brandeis, which is said to have been a reaction to issues raised by new technology, specifically cameras. The tradition that article launched identified four different kinds of privacy violation, which may be described as:
- Identity theft.
- Creating public false impressions about the victim.
- Publicly disclosing true, but properly private, facts about the victim.
- Unreasonably intruding upon the seclusion or solitude of the victim.
But the “right to privacy” was soon widened. In 1928, Brandeis, by then on the Supreme Court, famously summarized privacy in his Olmstead v. United States dissent as “the right to be let alone”, a right so expansive that it later served as a basis for the Roe v. Wade decision on abortion rights.
I actually agree with a Brandeis-style right to privacy or liberty. I just don’t think it helps much when we’re discussing tough IT-related tradeoffs.
Privacy theory in the computer age
Privacy theory as applied to computers and databases was perhaps first organized in the 1960s, most famously by Alan Westin. In his 1967 book Privacy and Freedom, Westin defined privacy quite narrowly, one of his formulations being:
the claim of an individual to determine what information about himself or herself should be known to others.
A history of social and political views about privacy published by Westin in 2003 gives more insight into how this concept evolved. As for his historical views themselves, those may perhaps be summarized as:
- People grew more concerned about privacy in line with the increase in technology’s power to intrude on it …
- … until 9/11/2001, when surveillance suddenly began to appear more appealing.
Recent privacy theory
The second most famous book in privacy theory is probably Helen Nissenbaum’s 2009 Privacy in Context. Nissenbaum observed, correctly in my opinion, that:
- The issue isn’t exactly or just privacy, at least not in a narrow Westin-style definition. Rather, it is all of:
- Information gathering and monitoring.
- Information analysis and use.
- Information publication and dissemination.
- Societal, political and individual views on these matters vary, as they should, according to the purpose and “context” of the information’s gathering, use, or dissemination.
Unfortunately, Nissenbaum’s focus was descriptive rather than prescriptive. Even so, her work was the basis for, among other things, the Obama Administration’s Consumer Privacy Bill of Rights, which didn’t work out very well.
What’s wrong with privacy theory to date
Discussions of IT privacy and related issues seem stuck, and I have an idea why. Many laws and regulations are designed to avert measurable harms — death, injury, financial loss, etc. There are complications, of course, starting with:
- Usually what’s averted are risks or probabilities of loss, rather than certainties.
- The measures to avert these dangers carry costs, e.g. in money or in time and inconvenience.
Even so, the rules are rooted in some kind of measurable effect, and at least in principle they can be evaluated on a cost/benefit basis. Other laws focus on benefits — for example, they fund education; but again, in principle a cost/benefit analysis can be done.
When it comes to privacy and information flow, however, the cost/benefit analysis is distressingly one-sided. Reasons for government to impinge on privacy start with anti-terrorism and other law enforcement. Reasons for corporations to impinge on privacy start with profits and customer service. But reasons to preserve privacy — well, those are discussed in terms of “creepiness” and other synonyms for “vague emotional discomfort”. And what’s more important — vague emotional discomfort, or not being blown up by evil Moslem terrorists? When that’s the trade-off, the terrorists win.
Comments
I think the key issue is the premise that information=power, and that the governments of many countries, such as North Korea and the USA, have enough power to assert totalitarian control over individuals.
I guess we have assumed in the past (e.g., Hitchcock movie scenarios where a single person is barraged by a cabal) that intense pressure could be asserted against an individual.
But something happened. Deep data, not odds and ends, is being collected (assume at least 10% of the entire internet’s nonstreaming data, or enough with lossy compression to store all calls and all web traffic).
The difference between the red scare scenarios and now is both scope and breadth. The Hitchcock scenarios required senior people driving dozens of agents against the lone outsider. The current data is more intrusive, and automation can allow targeting of any individual and any grouping of people.
The protection in the US has been the incompetence of government institutions, the siloing of information and roles, the press, and whistleblowers. This all seems eroded. You no longer need huge teams to target people; perhaps one person is enough to target a large group. Data barriers are broken since some agencies have critical mass. Much of the press in the US and North Korea has been co-opted or changed roles. Whistleblowers seem somehow disrespected.
A fundamental question here is whether democracy could exist with this type of government capability. It seems unlikely. I think schools should teach more Foucault and less Jefferson to compensate for the new reality.
Aaron,
you write:
[…]A fundamental question here is whether democracy could exist with this type of government capability. It seems unlikely.[…]
I believe that what will die are pluralism and freedom.
Democracy will survive and will be even stronger than today, because every idea that deviates from the standards will be ostracized (Facebook’s filters are already effective at this) and peer group pressure will reach unprecedented levels of effectiveness.
Marco,
You make a good and scary point that democracy can be opposed to freedom, especially democracy that’s toward the “true democracy” end of the democracy-republic spectrum.