Krebs dug deeper into what had happened. He discovered that one reason the hackers could so readily access the devices was that they used the devices’ default factory passwords. Many devices come with a default starter password. Users are asked to change these passwords, but many devices don’t force users to do so, and quite predictably, many users never bother.7 This makes it easy for hackers to break into devices on a massive scale.
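To see how simple the missing safeguard is, consider the following sketch in Python. It is purely illustrative — the DeviceAuth class, the DEFAULT_PASSWORD value, and the password policy are all hypothetical, not drawn from any real firmware — but it shows the basic idea: a device built this way refuses to do anything until its factory credential has been replaced.

    import hashlib
    import os
    import secrets

    # Hypothetical factory default -- the kind of credential
    # attackers scan the Internet for.
    DEFAULT_PASSWORD = "admin"

    def _hash(password: str, salt: bytes) -> bytes:
        # PBKDF2 with a per-device salt; a real device would tune iterations.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    class DeviceAuth:
        """Illustrative first-boot logic: the device refuses service
        until the factory default password has been replaced."""

        def __init__(self):
            self.salt = os.urandom(16)
            self.password_hash = _hash(DEFAULT_PASSWORD, self.salt)
            self.default_in_use = True  # would persist across reboots

        def login(self, password: str) -> bool:
            ok = secrets.compare_digest(
                self.password_hash, _hash(password, self.salt))
            if ok and self.default_in_use:
                # Authenticated with the factory default: allow nothing
                # except setting a new password.
                raise PermissionError(
                    "Default password must be changed before use.")
            return ok

        def change_password(self, old: str, new: str) -> None:
            if not secrets.compare_digest(
                    self.password_hash, _hash(old, self.salt)):
                raise PermissionError("Current password incorrect.")
            if new == DEFAULT_PASSWORD or len(new) < 12:
                raise ValueError("New password is too weak or is the default.")
            self.salt = os.urandom(16)
            self.password_hash = _hash(new, self.salt)
            self.default_in_use = False

Had the devices hijacked for attacks like the one on Krebs shipped with even this much logic, scanning the Internet for factory passwords would have turned up far fewer victims.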
It is quite alarming how quickly KrebsOnSecurity could be shut down, and it is chilling to think this could happen to journalists or anyone voicing an idea or opinion that hackers want to silence. The attacks can disrupt businesses, take down government sites, and interrupt nearly anything. The motive might be to silence critics, punish or bully people, or wreak havoc on important services.
Krebs expressed grave concern that the Internet would be flooded with device-based DDoS attacks.8 He was right. Just a few weeks later, a group of unknown perpetrators used the Mirai botnet to pull off a gigantic DDoS attack against Dyn.9 Dyn is the company that controls much of the Internet’s DNS infrastructure. In October 2016, it was attacked three times over the course of 12 hours.10 The attack caused outages at dozens of popular websites, including Reddit, Spotify, CNN, Amazon, and Twitter.11 The perpetrators of the attack remain unknown.
There are several lessons to be learned from this story. First, many devices are poorly designed for security. The term “Internet of Things” (sometimes abbreviated IoT) became popular in the first decade of the 21st century to describe the growing trend of devices connected to the Internet. The trend continues to this day, with countless devices now online: fire alarms, thermostats, doorbells, light switches, and appliances, among other things. All these devices pose a security risk, and they often are not designed with security in mind.12 As Bruce Schneier notes, “Engineering teams assemble quickly to design the products, then disband and go build something else. Parts of the code might be old and out-of-date, reused again and again.”13 He concludes: “The companies involved simply don’t have the budget to make their products secure, and there’s no business case for them to do so.”14
The market is failing. Consumers are not choosing devices based on security; they select them based on price, functionality, or how appealing they look. Even if consumers cared more about the security of these devices, they don’t know enough to assess how secure the devices are. The manufacturer of a baby camera might declare: “Our device has reasonable security measures.” But what does that mean? Manufacturers might tout their “great” or “strong” security, but these are just bald claims. How can a person assess them? Most consumers lack the expertise to evaluate the security of a device, and even for experts, manufacturers often don’t provide enough specific information to make a meaningful security assessment.
Moreover, the market fails to account for the fact that poor security doesn’t affect only the buyers of the devices; it can also harm many others. The hackers attacked Krebs by using other people’s insecure devices. Krebs had no say in whether a person decided to buy one device over another. Buyers often make their purchase decisions by focusing on the costs and benefits to themselves and rarely consider the costs to everyone else.
The makers of these insecure products aren’t spending enough on security even though they are foreseeably jeopardizing people’s data. The market isn’t providing enough of an incentive to improve security. And the law, unfortunately, isn’t stepping in to correct for this market failure by forcing these manufacturers to internalize their costs.
SOFTWARE DESIGNERS |
Software is often designed with gaping security vulnerabilities.15 We have almost come to expect software to be insecure. We are barraged with stories about software bugs. We are constantly being nagged to download and apply patches to our software.
When a security flaw in software results in a breach, the software manufacturer is rarely held responsible for the harm caused. Several legal frameworks collectively fail to hold software designers accountable. First and foremost, contract and tort law doctrines are intertwined in ways that routinely relieve companies from liability for insecure software.16 Companies use exculpatory clauses to explicitly disclaim liability, and they argue that they are offering a service rather than a good in order to avoid liability under express and implied warranties. Courts have generally been quite reluctant to hold developers liable for insecure software because of such deference to contractual provisions, the limited ability for plaintiffs to recover for economic loss, the unclear duties companies have under negligence law, and the difficulty of establishing causal links between conduct and harm.17
Additionally, federal legislation does little to hold companies liable for insecure software. For example, the U.S. anti-hacking law, the Computer Fraud and Abuse Act (CFAA), specifically provides that “No action may be brought under this subsection for the negligent design or manufacture of computer hardware, computer software, or firmware.”18
Imagine if software were treated like a regular product. Suppose your shampoo were accidentally made too acidic and could dissolve your hair and scalp. Imagine if your car could suddenly blow up, or if your television were defective and could readily burst into flames. In all these situations, the makers of these products would be liable for the harms their faulty products caused.
But software usually gets a pass. Of course, judges are more likely to hold liable companies whose software physically injures people. But the law is remarkably porous and ineffective, even under such extreme circumstances.19 Perhaps we want some small amount of leeway for software, because it is hard to make software that isn’t riddled with security bugs. But perhaps a lot of software is so poor on security because there’s not enough responsibility.
Much of the burden is placed on consumers, who are constantly asked to install software patches. According to Bruce Schneier, the “industry rule of thumb” is that only 25 percent of users install patches on the day of release, another 25 percent within the month, another 25 percent within the year, and the remaining 25 percent don’t patch at all.20 Patching can frequently be a clunky and cumbersome process, and “many embedded devices don’t have any way to be patched.”21
To be fair, it’s practically impossible to create software without any bugs.