Popular Concern About Online Privacy Is Still Growing

A popular refrain of privacy naysayers is that the public has effectively “thrown up its hands” in response to the sweeping digitization of social life and personal information: the extensive and growing mining of that data by private companies and governments, and its exposure in proliferating hacks and data breaches that have resulted in identity theft for hundreds of millions of consumers.  The argument (which, unsurprisingly, often comes from government and major-corporation officials) seems to be: “there’s nothing we can do about the gradual loss of privacy, people don’t really care anyway, and we get all kinds of wonderful benefits from not fixing it, so we shouldn’t do anything about it.”

Indeed, some of this author’s unpublished privacy research over the past couple of years seemed to validate at least part of this argument: web search data generally showed a gradual decline in searches for privacy-related terms over the last decade or so.  But based on observed public sentiment, growing privacy-related legal restrictions around the world, and “activist” and other “flashpoint” events, I suspected that such a conclusion simply wasn’t true.

A trio of events all (coincidentally) happening this past week underscored to me that the public does care, and that we should continue to do more about online privacy.

An Unintentional Study of Social Outrage

In the first case, Danish social science researchers released a research study and its underlying data to an online, open-science collaboration forum.  Shockingly, the released data came from the online dating site OKCupid.com and consisted of nearly 70,000 users’ answers to profile and love-life survey questions from the site (the site uses an extensive set of questions to help match users along literally thousands of aspects of compatibility).   These questions cover everything from sexual practice preferences to religious beliefs — in other words, exceedingly personal information.  And the researchers released it with no added anonymization (users’ dating-site screen names were used directly, which is not much protection at all).

Even worse, the researchers’ method of gleaning the data was to access the site through a normal user account and use a crawler to perform “screen-scraping” of the data (i.e., capturing the human-readable site pages, then automatically parsing them and “back-porting” the results into their own computer database), in violation of the site’s TOS (Terms of Service).
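To make the mechanics concrete, here is a minimal, purely illustrative sketch of what “screen-scraping” behind a login wall generally looks like; the site URL, login fields, and page selectors below are hypothetical placeholders, and this is emphatically not the researchers’ actual code.

```python
# Illustrative only: a generic screen-scraping sketch, NOT the researchers' actual code.
# The URL, login flow, and CSS selectors are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

session = requests.Session()

# Log in with an ordinary user account (hypothetical endpoint and form fields).
session.post("https://www.example-dating-site.com/login",
             data={"username": "ordinary_user", "password": "secret"})

# Fetch a profile page that is only visible behind the login wall.
page = session.get("https://www.example-dating-site.com/profile/some_user")

# Parse the human-readable HTML and pull out question/answer pairs
# (the ".question" / ".answer" selectors are made up for illustration).
soup = BeautifulSoup(page.text, "html.parser")
answers = [(q.get_text(strip=True), a.get_text(strip=True))
           for q, a in zip(soup.select(".question"), soup.select(".answer"))]

# "Back-porting" to a local database would simply mean storing these rows
# (e.g., via sqlite3) -- which is precisely the kind of automated collection
# a site's Terms of Service typically forbids.
```

Nothing in this sketch is technically exotic, which is part of the point: the barrier the researchers crossed was contractual and ethical, not technical.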

There was considerable public backlash in response to these actions, and the researchers were forced to pull the data from the scientific collaboration forum.   OKCupid was none too pleased, complaining in a statement about the violation of its TOS and arguing that the incident constituted a violation of the Computer Fraud and Abuse Act (CFAA).  This is probably correct, because the CFAA criminalizes conduct that “exceeds authorized access” to a system, which includes using a system in a manner merely not intended, let alone one expressly forbidden; the US v. Morris case established this way back in 1991.  It was also argued that social scientists really should know better, because well-established social science ethical practices (and institutional requirements) strictly require the consent of research subjects to a study, making the transgression all the more overt.  (Indeed, your author once had to get IRB approval for an innocuous focus-group study of a search engine interface back in the mid-2000s, even though it didn’t deal with the participants’ personal information, simply because it involved human subjects.)

Yet, amazingly, the researchers maintain that the data they trawled from OKCupid was “public” merely because it was online (even though it sat behind a login wall), and so none of the usual ethical restrictions apply, to say nothing of common decency (the researchers don’t seem to have any response to the putative illegality of their actions under the CFAA).   As mentioned, these arguments mirror what certain authority figures (who have an interest in the unbridled mining of the public’s personal data) have been saying for years.  They also echo the Third Party Doctrine (the proffered legal basis for, e.g., the NSA’s now-defunct warrantless domestic surveillance program), which holds that any information disclosed to a commercial third party, even in confidence, is not “private” and can thus be viewed and used by the government.  The difference is that, this time, no one seems to be “buying it.”  That is very good to see, and perhaps a sign of a pro-privacy sea change in cultural awareness and public sentiment.

Online Opt-Outs

In the second piece of related news, the National Telecommunications and Information Administration (NTIA) released a study revealing that 45% of households said security and privacy concerns (identity theft, fraud, data collection, and loss of control of personal data) discouraged them from engaging in activities that would require disclosing personal information online.  Those activities include online banking, shopping, and discussing controversial or political matters on social networks.

This is a shockingly high percentage, and it may explain why some metrics (such as my own above-mentioned research) seem to show a blasé attitude about privacy online: those who have such concerns may simply be “opting out” in large numbers.  Of course, if that is the case, then the opinions gleaned from those who remain online would tend to be self-selected toward the less concerned.

At any rate, governments and corporations banking their futures on ever more digitized personal information and public participation in online sites and platforms should sit up and take notice.  This is especially true for online consumer-goods retailers and for advertising-dependent internet companies and networks, given online retail’s supposed status as the “savior” of the ailing brick-and-mortar retail sector.

My takeaway is that results like the NTIA’s argue strongly that more systematic policy efforts to guard and uphold the privacy of digital personal information will be needed to really “win” most of the public over anytime soon.  Some efforts are already underway, of course: more breach-notification laws and breach penalties, general privacy laws around the world (though largely excluding the US), and incremental industry moves such as the payment card industry’s transition to EMV “chip cards.”  But more coherent, systematic, and foundational initiatives would be welcome.  The U.S. could, for example, do any or all of the following: explicitly end the Third Party Doctrine with a national privacy law; narrow the instances, and create clearer standards, under which third parties can be compelled to disclose private information to the government; or create a private right of action for apparent privacy violations (including breaches enabled by negligence).

Not Dismissing Privacy Out-of-Hand

Finally, on May 16, 2016, the Supreme Court issued its decision in Spokeo v. Robins, a suit that originated in the Ninth Circuit.  The case concerns whether consumers have standing to sue when a company violates a federal law that punishes such companies for a privacy violation (that is, a law containing a statutory damages provision); the consumer “right” at stake could be something other than privacy, but in this case and for our purposes, privacy is the right at issue.  More precisely, the question is whether the privacy violation itself (within the bounds of an applicable law, here the Fair Credit Reporting Act and its rules on credit reporting) is enough to constitute actual damages and thereby confer standing.  Without actual damages, plaintiffs do not have standing to sue, unless they are expressly granted a private right of action by law (per above).

In its decision, the Supreme Court said that merely alleging a statutory violation does not by itself establish actual damages; a concrete injury is required.  That would seem to rule out standing founded on a per se violation of privacy, but the Court actually left the question open, remanding the case to the lower court to determine whether the privacy breach is itself a “harm” under the statute at issue.   If it is, then evidence of privacy breaches covered by the law would constitute “actual harm,” and there would be standing to sue.

As we pointed out in our coverage of the Anthem breach case (last paragraph), there is a growing wave of post-breach consumer class actions surviving dismissal despite no showing of specific pecuniary damages traceable to the corresponding breaches.  Until just the last few years, such suits had simply been dismissed without much of a second thought.  But in the recent wave of suits, and again this past week with the Supreme Court’s ruling, we appear to be seeing a shift in the case law toward treating a privacy violation as a per se “suable” harm.

Yes, the Supreme Court’s ruling could have been much stronger for the pro-privacy side (per Justice Ginsburg’s dissent, joined by Justice Sotomayor), but the majority appeared to prefer the “safer” route of letting the lower courts take a first crack at interpreting the statute on the privacy-violation-as-per-se-harm question and applying it to the present case.   Based on Anthem (also in the Ninth Circuit, and which has already survived dismissal), we have a hint at what the outcome is going to be.  Still, there are enough differences to keep us guessing.

 

 
