Although many organizations fear a diabolical hacker who can break
into anything, what they should fear most are the small, careless errors that
are continually being made. But the diffuse nature of the risks makes them
difficult for individual organizations to address.
Over the past 15 years of reported data breaches, there have been many new
threats and technological developments, but what is quite remarkable is
how the same common mistakes keep getting made again and again.
3
The Failure of Data Security Law
It began with a seemingly small theft. A laptop computer went missing
from the Dorn Veterans Affairs Medical Center (Dorn VAMC). This theft, however, turned out to
be anything but small. An employee had put the personal information of
about 7,400 patients on the computer—their names, birthdates, physical
descriptions, ages, weights, races, and the last four digits of their Social
Security Numbers, among other things. To make matters worse, the
information was unencrypted, which meant that the thief could readily
access it.1 Putting unencrypted personal information on a laptop—
especially sensitive patient data at a hospital—is a big security no-no. But it
happens all the time because people often act carelessly.
As is typical procedure following many data breaches, the medical
center notified all the patients whose information was on the missing
laptop. In every state in the U.S., the law requires that individuals be
notified if their personal data is lost, leaked, or improperly accessed. The
medical center offered free credit monitoring to all the patients for one year.
The problem, though, is that credit monitoring for just one year doesn’t
really address all the harm to the patients. Credit monitoring is a service
that consumer reporting agencies provide that alerts people if there is
unusual activity in their credit report. Credit monitoring doesn’t immunize
against fraud; it only provides an alert if there is suspicious activity. The
one-year period is also quite limited: criminals can use stolen personal
information to commit fraud at any time, possibly many years in the future.
And because credit monitoring sounds protective, it risks lulling people into a
false sense of security. You can have your identity stolen, your tax refund
intercepted, your sensitive information leaked, and your personal hard drive
with all your documents and data locked up behind ransomware without
credit monitoring triggering an alert.
Later, Dorn VAMC officials lost four boxes of pathology reports with
information on about 2,000 patients. These boxes contained names, Social
Security Numbers, and medical diagnoses. As with the laptop breach, Dorn
VAMC notified the individuals and provided one year of free credit
monitoring. The credit monitoring here did nothing to rectify the fact that
patients’ intimate health information was compromised.
A group of affected people sued Dorn VAMC. They argued that the
breach caused them “embarrassment, inconvenience, unfairness, mental
distress and threat of current and future substantial harm from identity theft
and other misuse of their personal information.” The plaintiffs argued that
they had to spend time monitoring their accounts and purchasing various
services to protect themselves from potential fraud.
Addressing both incidents together, the court noted that the data
breaches were “disconcerting,” but held that the plaintiffs had not suffered harm. The
plaintiffs claimed that the lost data put them at greater risk for future fraud
and identity theft. The court concluded, however, that this claim was “too
speculative” because the plaintiffs had not yet been victimized by fraud.
The fact that plaintiffs spent money for services to protect themselves was
“self-imposed”—an attempt to “manufacture” a harm. The plaintiffs were
thus out of luck.
This case demonstrates how badly the law of data security fails. The law
failed to prevent the data breaches, which were readily preventable. The law
failed to hold the medical center accountable for its inability to keep the
data secure. The law required patients to be notified of the breach, but it
failed to protect them from the harm the breach caused. The law also failed
to compensate the patients for their lost time, anxiety, the increased risk of
fraud they faced, their lost privacy over their personal and health
information, and their expenditure of money to protect themselves. In
almost every way, the law failed.
In this chapter, we examine personal data security law. This relatively
new body of law has developed quickly, mostly in the last few decades. It is
a sprawling framework, involving numerous types of laws at the federal and
state levels, as well as internationally. Broadly speaking, there are three
types of data security laws:
Breach Notification Laws
Laws that require organizations to notify various government authorities and affected
individuals in the event of a data breach.
Security Safeguards Laws
Laws that require substantive administrative, physical, and technical measures to secure
personal data.
Private Litigation
Lawsuits brought by affected individuals who are harmed by a data breach.
Our goal isn’t to explore the law in intricate detail; treatises are written for
this purpose. Instead, we aim to show some of the key themes of this law
and draw some big picture conclusions. With data security law, the forest is
often ignored for the trees, and laws keep sprouting up based on trendiness
rather than good policy.
Each type of security law accomplishes some good things, but each type
has many weaknesses. We are not arguing that existing security law should
be abandoned. Nor are we arguing for completely different types of security
law. Instead, the shortcomings of these types of law are actually due to a
more overarching problem: Data security law has an unhealthy obsession
with data breaches. This obsession has, ironically, been the primary reason
why the law has failed to stop the deluge of data breaches. The more
obsessed with breaches the law has become, the more the law has failed to
deal with them.
BREACH NOTIFICATION LAWS
In February 2005, people in California started to receive letters from
ChoicePoint, a company most had never heard of. ChoicePoint’s business
involved collecting personal data from numerous sources to compile
extensive dossiers on millions of people. Companies and government
agencies could sign up to access this data.