security practices are unreasonable, it is possible to develop a list of the
practices that the FTC has deemed to be reasonable and unreasonable. This
list begins to resemble a standards approach.45
The FTC also enforces the Safeguards Rule of the Gramm-Leach-Bliley
Act (GLBA), which requires financial institutions to adopt “administrative,
technical, and physical safeguards that are appropriate to [the institution’s]
size and complexity, the nature and scope of [institutional] activities, and
the sensitivity of any customer information at issue.”46 Although the rule
has a few specific requirements, it mostly takes a reasonableness approach.
A more recent enforcer of data security to join the party is the Securities
and Exchange Commission (SEC), which started to become active in
2014.47 Instead of issuing a set of security requirements, the SEC has used a
reasonableness approach. In 2015, the SEC faulted an investment advisor
for failing to adopt written cybersecurity policies and procedures reasonably
designed to protect customer records and information.48
Problems with Safeguards Laws
Regardless of their approach, safeguards laws have struggled to contain the
data security problem.
UNHELPFUL VAGUENESS VS. MECHANICAL RIGIDITY
A big debate about safeguards laws involves choosing between the
reasonableness approach and the standards approach. The problem with the
reasonableness approach is that many companies find it too vague and
lacking in sufficient guidance about what they ought to do. They beg for a
checklist so that they can check the boxes and feel assured that they are
complying.
On the other hand, standards approaches are critiqued for being too
rigid. Security threats are evolving, and best practices for security have
changed over time, so a rigid list might not keep up with current technology
and practice. Lawmakers and policymakers are not always nimble enough
and lack the expertise to come up with an up-to-date and comprehensive set
of standards. There might be items on a list that don’t quite fit specific
organizations or contexts.
Not all standards approaches fall into this trap. The HIPAA Security
Rule’s standards are quite broad, allowing for a lot of flexibility in how they
are applied by specific organizations. With this balance, a standards
approach can avoid the pitfalls of being too vague or too rigid.
Unfortunately, standards approaches can fail if organizations undertake a
check-the-box compliance strategy. Organizations can check off everything
in a list of standards yet have poor measures to address each standard.
Compliance efforts often falter by focusing on quantity rather than quality.
As we will discuss later, security isn’t just a game of box checking; it’s
about establishing a careful balance between tradeoffs. While industry
standards for data security often recognize those tradeoffs, there are far too
many incentives for companies to implement these standards in a minimal
check-the-box manner.
IGNORING SAFEGUARDS
Even when the law requires certain security practices and these practices
are highly effective, an alarming number of organizations still don't follow
them. For example, in 2014, the U.S. Department of Health and
Human Services began conducting random audits under the HIPAA
Security Rule. The results were awful: 58 out of 59 audited organizations
were found to have one or more failures to comply with the Security Rule.49
The requirements of many laws—having a comprehensive security
program, doing routine security assessments, training the workforce, and so
on—are not controversial. They are near universally recognized as
worthwhile measures. Yet, they are often just ignored.
ENFORCEMENT IS TOO LATE
A bigger shortcoming of safeguards laws stems from the way they are
enforced. The enforcement of safeguards laws is generally triggered by a
data breach. The result is that enforcement of these laws mainly adds to the
pain of a breach. Breaches are already very costly and painful, so when
regulators come along and add a little more to the pain, it often is not a
game changer. This is especially true because the penalties are often far
smaller than the overall costs of the breach.
The fines imposed on organizations for poor security leading to a data
breach are often a slap on the wrist. One article colorfully described the
penalty that Australian regulators imposed on Adobe for its 2013 breach of
user passwords: “The commissioner has flogged Adobe with wet lettuce,
telling it to straighten up and fly right to make sure this kind of thing
doesn’t happen again.”50 Adobe’s fine in Australia was $1.3 million, pocket
change for a huge company like Adobe.51 In most cases, fines are a small
fraction of the total costs of a breach. Regulatory penalties ultimately raise
the costs of a breach by a small percentage, but not enough to make a
material difference.
Perhaps if costs and pain were ratcheted up even more, then these laws
would work better. But costs and pain for breaches have continually risen
throughout the years, and the situation isn’t improving. As we will discuss
later, breaches are the product of many actors and not 100 percent the fault
of the breached organization. There are limits on what organizations can do
to prevent breaches.
The aftermath of a breach is often the worst time to bring an enforcement
action. Certainly, there should be vigorous enforcement for covering up a
breach or lying about a breach. But post-breach enforcement is often an
exercise in redundancy. Organizations that suffer breaches are often already
engaging in soul-searching and exploring how to improve in the future.
Instead, enforcement could be much more effective before breaches occur,
prompting organizations to think rigorously about their security practices
while there is still time to prevent a breach.
Additionally, the enforcement of safeguards laws does little to help
compensate victims. In 2019, for example, the FTC reached a settlement
with Equifax over its breach, under which victims could choose between a
$125 payment and 10 years of free credit monitoring.
People rushed to claim their $125. There was an unfortunate catch,
however. The fund for these payments contained only $31 million, and
people's payments would be reduced if too many filed claims. The FTC
tried to put lipstick on this pig by trying to convince people of the value of
the free credit monitoring.52
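To see why payouts would shrink, a rough back-of-the-envelope calculation (assuming, for illustration, that the entire $31 million fund went to the $125 cash option) shows the maximum number of claimants who could receive the full payment:

$$\frac{\$31{,}000{,}000}{\$125 \text{ per claimant}} \approx 248{,}000 \text{ claimants}$$

Any claims beyond roughly that number would dilute each person's payment, which helps explain why the FTC pivoted to touting the credit monitoring instead.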
Equifax also agreed to pay up to $425 million (and possibly more) to
people harmed by the data breach. But proving harm has long been a
challenge, as we will discuss later on.