Privacy and data use — the problem of chilling effects
This is the second of a two-part series on the theory of information privacy. In the first post, I reviewed the theory to date and outlined what I regard as a huge and crucial gap. In this post, I try to fill that gap.
The first post in this two-part series:
- Reviewed the privacy theory of the past 123 years.
- Declared it inadequate to address today’s surveillance and information privacy issues.
- Suggested a reason for its failure — the harms of privacy violation are too rarely spelled out in concrete terms, making it impractical to do even implicit cost-benefit analyses.
Actually, it’s easy to name specific harms from privacy loss. A list might start:
- Being investigated (rightly or wrongly) for a crime, with all the hassle and legal risk that ensues.
- Being discriminated against for employment, credit, or insurance.
- Being embarrassed publicly, or discriminated against socially.
- Being bullied or stalked by deplorable private-citizen acquaintances.
- Being put on the no-fly list.
I expect that few people in, say, the United States will suffer these harms, at least the more severe ones, in the near future. However, the story gets worse, because we don’t know which disclosures will have which adverse effects. For example:
- Algorithms for identifying potential terrorists are secret and ever-changing …
- … and the same goes for algorithms designed to identify potential civilian criminals, fraudsters, mortgage deadbeats or lazy employees.
- Simple-minded discrimination is often illegal … but the more subtle kinds are hard to identify or prove.
- Analytic software and computing power improve over time. We don’t know what kinds of analysis will become possible in the future …
- … nor who will want to carry those analyses out.
Why is this uncertainty bad? Well, prudence might suggest:
- Not posting anything to social media that might be interpreted as fitting the profile of a terrorist sympathizer …
- … and not saying anything of that kind in email either …
- … and not surfing to websites that might be of interest to terrorists.
- (And that all applies to any kind of real or alleged terrorism, be it Islamic, rightwing/militia, or Occupy Wall Street.)
- Not doing anything online that might be positively correlated with tax evasion …
- … or pedophilia …
- … or slacking off at work.
And that’s hardly all. Car license plates are now heavily photographed, so you might not want to drive to a druggie part of town, or otherwise deviate too far from a boring routine. You might not want to buy anything that speaks to a risk-taking nature in the years before you apply for a mortgage. Indeed, almost anything you do in your life could, if observed, harm you sometime in the future. And by the way — almost everything you do is, one way or another, electronically observed.
Chilling effects
In law, a “chilling effect” arises when you don’t exercise a freedom (e.g. free speech) out of fear of (usually legal) consequences (e.g. a libel suit that, irrespective of its merits, would be expensive to defend). But with the new data collection and analytic technologies, pretty much ANY action could have legal or financial consequences. And so, unless something is done, “big data” privacy-invading technologies can have a chilling effect on almost anything you want to do in life.
This problem will not be averted solely through controls on data collection, retention, or analysis. My reasons for that opinion boil down to:
- Anti-terrorism efforts aren’t going to stop.
- Neither will the business initiatives that depend on recording and analyzing detailed consumer behavior.
- Free speech isn’t free unless you can express yourself publicly, for example in social media.
- The previous three points account for more than enough data and analysis to fuel the chilling effects.
But what else is there? Well, the full chain is: collection + retention → analysis → use + consequences. So for information privacy theory to be useful, it must address the use and consequences of surveillance’s fruits.
Until something better comes along, I propose a principle like this:
The societal benefits of using citizens’ private information should exceed the societal cost of the chilling effects such use could produce.
I hope to suggest more detail in future posts.
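The proposed principle is, at bottom, a screening test: a use of citizens’ private information passes only if its societal benefit exceeds the societal cost of the chilling effects it could produce. A minimal sketch of that test, with entirely hypothetical function names, example uses, and made-up numbers:

```python
# Toy sketch of the proposed principle, treated as a necessary (not
# sufficient) condition. All names and numbers below are hypothetical.

def passes_principle(societal_benefit, chilling_cost):
    """True if a proposed data use clears the benefit-vs-chilling-cost screen."""
    return societal_benefit > chilling_cost

# Made-up (benefit, chilling cost) pairs, in arbitrary units.
proposed_uses = {
    "fraud detection": (10.0, 3.0),
    "blanket employee web monitoring": (2.0, 6.0),
}

for use, (benefit, cost) in proposed_uses.items():
    verdict = "clears the screen" if passes_principle(benefit, cost) else "fails the screen"
    print(f"{use}: {verdict}")
```

Of course, the hard part is not the comparison but estimating the chilling-effect cost in the first place, which is exactly the gap this series argues privacy theory has left open.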
Related links
- The discussion above of privacy-related harms is based in part on a 2011 post.
- This post may help explain what I wrote three Independence Day weekends ago about the essential questions of fair data use.
- One of the best precedents I’ve found for the connection between chilling effects and privacy is the 2004 TAPAC report, sponsored by — believe it or not — the US Department of Defense.
Comments
Your post is right on.
A friend of mine at a large social media company who works on storage talks about enabling “the web that endures.” It’s a beautiful and chilling phrase. Unfortunately, most citizens don’t seem to be very interested in exploring the consequences of having every detail of their lives stored somewhere.
Benefits less costs from the privacy path we choose should sum to more than benefits less costs from the privacy path we choose not to take. Choosing increased privacy could be a ‘net gain’ within a fixed period of time, but choosing decreased privacy could be a ‘greater net gain’ in that same period. In that case, although both choices may result in a positive benefits-less-costs outcome (however you measure it), the first may result in slower progress; for that reason I would argue it is the less attractive option. Hence I don’t think your principle quite works. Additionally, both options may lead to a net loss to society; in that circumstance your principle would suggest that neither more nor less (nor the same amount of) privacy should be adopted, which is clearly unworkable.
smarty,
I’m not sure of everything you meant, but benefits-exceeds-costs should at least be viewed as a necessary condition for a public policy choice.
And please note that a “choice” is the replacement of one policy with an alternative. There’s ALWAYS a policy, even when the policy is “do nothing”.
[…] the societal cost of the chilling effects such use could produce[…]
I’m a bit worried that not everyone considers the chilling effects a cost; some regard them as a benefit.
And by “not everyone” I mean enough powerful people to make decisions in favor of big-data-led privacy invasions.
Political correctness already puts a practical limit on freedom of speech in a number of areas, for individuals who are worried about their future.
Having everything stored and retrievable forever will make this self-imposed limitation even more widespread.
The panopticon, a (bad) dream of the past, is coming true for society as a whole.