The Friday-the-13th ruling in FTC v. LabMD was a Pyrrhic victory for LabMD, and bad luck for the agency lawyers who brought the unfair practices action based on evidence the judge considered too flimsy. Here’s a brief recap of the 92-page decision:
LabMD was a cancer detection laboratory whose security practices were designed to comply with HIPAA’s standards. The FTC opened an investigation into its data security practices after an employee violated company policy and downloaded P2P software that wound up exposing some patient information on the file-sharing network [in the form of a 1,718-page “insurance aging” file containing personal information about 9,300 patients]…
… experts… were told to assume that the breach had occurred [without proof of third-party downloads of private patient data]. As it turned out, the data had not been downloaded by anyone other than Tiversa [a third party whose claims were relied upon by the FTC].
[The FTC] argued that LabMD’s “unreasonable” data security had put consumers at risk of substantial injury—even though there was no evidence that the data had ever been shared or that even one consumer had been harmed.
At issue was what constitutes “unreasonable” data security practices that put consumers at risk of “substantial injury,” in light of the lack of proof that any third party accessed the private information.
The judge came down on LabMD’s side, explaining that “Complaint Counsel has failed to prove the first prong of the three-part [Section 5 “unfairness”] test—that this alleged unreasonable conduct caused or is likely to cause substantial injury to consumers.” As the linked post says, the loss may make future data security-based enforcement actions more difficult for the FTC, as the standard for demonstrating likelihood of substantial injury has now been addressed in this ruling. In other words, the agency bit off more than it could chew.
We’ll make just a couple of points about this case:
- There’s also a lesson here on the information security side: had LabMD actively monitored its network, it might have noticed this vulnerability and intervened sooner, avoiding an enforcement action entirely. Indeed, it might have been able to make a prima facie showing of non-access. Network monitoring, automated enforcement of a data-loss-prevention policy (e.g., by disallowing P2P installation absent authorization, or blocking P2P traffic with an endpoint firewall), keeping the data encrypted at rest, or simply segregating sensitive data to keep it outside the software’s reach, could have spared LabMD a great deal of reputational harm and legal cost. Indeed, even though LabMD won, it’s virtually insolvent now.
- The FTC hoped to build on its progress in Wyndham, which affirmed the FTC’s authority to pursue privacy and data security lapses under Section 5. This is a classic example of an agency gathering a “head of steam” as it moves toward greater enforcement power. Regardless of your position on the appropriate scope of the FTC’s enforcement authority, the drawing of reasonable limits should be reassuring. FTC enforcement actions and leadership in the privacy and data security field over the past few years have yielded many valuable lessons. Hopefully this swing-and-miss won’t derail that trend or otherwise take much wind out of its sails.
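To make the data-loss-prevention point in the first bullet concrete, here is a minimal, purely illustrative sketch (not LabMD’s actual setup, and far cruder than a real DLP product) of one such control: periodically scanning any folder exposed by file-sharing software for records that look like protected data, so they can be flagged before a peer ever downloads them. The directory name and detection patterns are hypothetical.

```python
import re
from pathlib import Path

# Crude, illustrative patterns for data that should never sit in a shared
# folder; commercial DLP tools use far richer detectors (checksums,
# dictionaries, document fingerprinting, etc.).
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like number
    re.compile(r"\b\d{16}\b"),             # payment-card-like number
]

def find_exposed_files(shared_dir):
    """Return files under shared_dir that contain sensitive-looking data."""
    flagged = []
    for path in Path(shared_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash the scan
        if any(p.search(text) for p in SENSITIVE_PATTERNS):
            flagged.append(path)
    return flagged
```

A scheduled job running something like this against the P2P client’s shared directory, alerting or quarantining on any hit, is one cheap approximation of the monitoring-plus-segregation posture described above.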
UPDATE: The FTC has filed a notice of appeal of the ALJ decision. And LabMD has sued the FTC attorneys who brought the case, claiming they improperly pursued the case despite knowing that the file at issue was illegally obtained by Tiversa. Pointing out that LabMD spent $500,000 dealing with the first 20 months of the FTC investigation alone—before the FTC’s complaint was even filed—LabMD contends that the FTC lawyers “fought so aggressively, abusively, unethically and illegally . . . that they put a small cancer-detection firm in Atlanta, Georgia, out of business.” (Case No. 15-2034 D.D.C.)
The key thing to understand here is that the FTC was alerted to LabMD’s breach by Tiversa, a data breach remediation service with a sketchy history of blackmailing prospective clients into purchasing its services. A Tiversa employee admitted that the company scours P2P networks for sensitive documents, contacts the affected business, and sometimes makes it seem as if the data was found at IP addresses of known identity thieves, even if it wasn’t. If the business doesn’t buy Tiversa’s services, Tiversa reports it to the FTC. From LabMD’s point of view, the FTC countenanced this unscrupulous behavior—and allowed itself to be co-opted—by bringing an enforcement action against LabMD. Indeed, the ALJ pointed out that the FTC’s risk expert relied on discredited statements about source IP addresses, and that no actual victim had come forward, which is unusual in the data breach context. (In re LabMD Inc., Initial Decision, FTC Dkt. No. 9357 (Nov. 13, 2015).)
From the FTC’s point of view, that shouldn’t matter because the fact remains that LabMD left the private data where anyone could get it. The FTC wants the fact that data required to be protected under HIPAA was left within reach of the P2P client to be sufficient proof of a likelihood of substantial injury.
Here’s an analogy: Let’s say you check into Hotel LabMD and leave your car with the valet. The valet leaves your car running in the street with the window open. Then, someone named Tiversa sees it, gets in, and drives it back to the hotel. Tiversa tells the hotel manager the car was found near a chop-shop, and the hotel can quietly take the car back in exchange for $$. The hotel manager refuses and Tiversa calls the police. Hotel LabMD says it didn’t treat its guest unfairly because the car was actually in a relatively safe location, so there was no likelihood of harm. The government says the likelihood of harm is obvious because it’s too easy to steal and the threat of theft is ubiquitous. (Either way, you’ll never stay in that hotel again.)
It would be simpler to just have a bright-line rule and strict liability, so a HIPAA violation would constitute a per se violation of the FTC Act. But the law isn’t written quite that way, and ruling against LabMD could have been a boon to Tiversa’s business practices, which probably made it a tough decision for the ALJ. On one hand, finding liability based on a single exposure and consequent “breach” by Tiversa would arguably turn Section 5 into a strict liability statute and reward Tiversa’s extortion tactics. On the other hand, by finding no liability, the ALJ effectively requires the FTC to wait until there’s an actual breach by a more insidious actor. That arguably reads the “likelihood” standard out of the statute, kneecapping the agency’s ability to be proactive. At a minimum, it means the FTC will have to be more thorough or get more creative in demonstrating the threat of a breach in future cases.
ANOTHER UPDATE: The FTC and LabMD have filed their initial briefs in the appeal. I’m still on the fence as to the best interpretation of the law (the FTC’s brief makes some good points on that front), but LabMD’s brief persuasively argues that the FTC failed to meet its burden in this case. LabMD emphasizes, inter alia, that at the time there was no helpful HIPAA compliance guidance, that HIPAA did not then require notifying potentially affected patients, and that Section 5(n) of the FTC Act did not clearly apply to medical data. (The HIPAA/HITECH rules were updated in 2013.) Arguing that the sanctions sought by the FTC were unreasonably severe, the brief also points out that, like most breaches, this one was traceable to a simple act of employee negligence.