constantly doubting its safety, constantly regretting that he didn’t build it
with even more defenses. He tries to make it totally secure, but one problem
remains: An enemy might invade through the entrance hole.
The animal says: “At a distance of some thousand paces from this hole
lies, covered by a movable layer of moss, the real entrance to the burrow; it
is secured as safely as anything in this world can be secured; yet someone
could step on the moss or break through it, and then my burrow would lie
open, and anybody who liked . . . could make his way in and destroy
everything for good.”
So, the animal winds up sleeping outside the burrow to stand guard over
the entrance. “My burrow takes up too much of my thoughts,” the animal
confesses. “I fled from the entrance fast enough, but soon I am back at it
again. I seek out a good hiding place and keep watch on the entrance of my
house—this time from outside—for whole days and nights. Call it foolish if
you like; it gives me infinite pleasure and reassures me.”
The irony (and absurdity) at the heart of the story is that the animal
becomes so obsessed with his project of building the most secure burrow
that he sacrifices his own security in the process.
Backdoors are a huge security risk and they undermine the effectiveness
of encryption for everyone. A report by a group of leading security experts
concluded that installing backdoors would undermine security by creating
an enormous vulnerability: “If law enforcement’s keys guaranteed access to
everything, an attacker who gained access to these keys would enjoy the
same privilege.”50
About 60 of the leading technology companies, including Microsoft,
Alphabet, Inc. (Google’s parent company), Apple, Facebook, and Twitter
have vigorously critiqued backdoor proposals because of the significant
security risks that backdoors present.51
When such a chorus of technology experts and companies points out
problems, it is wise to listen. The security of all our communications is of
tremendous importance—and it has national security implications. If the
keys got into the hands of bad guys, our financial system could be
compromised. People who have access to critical systems could be
blackmailed. Key research and intellectual property could fall into the
wrong clutches. Private communication is not antithetical to security—it is
essential to security.
Encryption is a tool that can certainly be used by the bad guys, but it is
also a tool that is primarily used to keep the bad guys out. Creating a major
vulnerability will not make us more secure.
In 2016, the FBI obtained an order from a magistrate judge to force
Apple to develop software to help the FBI break
into an encrypted iPhone. The case arose out of the mass shooting in San
Bernardino in December 2015. Two shooters killed 14 people and injured
22 others before eventually being killed in a shootout with the police. One
of the shooter’s phones was recovered, but the FBI couldn’t unlock it.
The deceased shooter’s iPhone was secured by a feature of Apple’s iOS
that prevented brute force attacks on the phone. Brute force attacks use
software to make repeated guesses at passwords until the password is
cracked. To thwart such attacks, Apple included a feature that delayed how
frequently password guesses could be made. After 10 wrong guesses, the
contents of the phone would become permanently inaccessible.
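The logic behind these defenses can be sketched in a few lines. The code below is an illustration of the general technique, not Apple’s actual implementation: an attempt cap plus escalating delays turns an otherwise trivial search of a small PIN space into a near-hopeless one.

```python
# Illustrative sketch (NOT Apple's actual implementation): how an attempt
# cap and escalating delays defeat brute-force password guessing.

MAX_ATTEMPTS = 10  # after this many failures, the contents are lost for good

def make_lock(correct_pin):
    """Return a guess-checking function that enforces the cap and delays."""
    attempts = 0

    def try_pin(guess):
        nonlocal attempts
        if attempts >= MAX_ATTEMPTS:
            return "wiped"  # contents permanently inaccessible
        attempts += 1
        if guess == correct_pin:
            return "unlocked"
        # Each failure doubles the wait before the next guess is allowed
        # (capped at an hour); a real device would actually sleep here.
        delay = min(2 ** attempts, 3600)
        return f"wrong (next guess delayed {delay}s)"

    return try_pin

try_pin = make_lock("4951")
print(try_pin("0000"))
print(try_pin("0001"))
# A 4-digit PIN has 10,000 possibilities; with only 10 tries allowed,
# an attacker's chance of guessing correctly is at most 10/10,000 = 0.1%.
```

Without the cap, a program could race through all 10,000 four-digit PINs in moments; with it, the brute-force strategy simply stops working.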
The FBI requested that a judge force Apple to write a new iOS to install
onto the phone to get around these features. The FBI argued that a statute
from 1789, the All Writs Act, gave it the authority to compel Apple to write the software.52
The FBI convinced a magistrate judge to issue an order to compel Apple to
provide “reasonable technical assistance” to the FBI. Apple vigorously
opposed being forced to assist. In a letter to customers, Apple wrote:
“Specifically, the FBI wants us to make a new version of the iPhone
operating system, circumventing several important security features, and
install it on an iPhone recovered during the investigation. In the wrong
hands, this software—which does not exist today—would have the potential
to unlock any iPhone in someone’s physical possession.” The letter further
stated: “The same engineers who built strong encryption into the iPhone to
protect our users would, ironically, be ordered to weaken those protections
and make our users less safe.”53
The FBI eventually hired a firm to engineer a hack to break into the
iPhone, and the case was dropped. But there is nothing to stop the FBI or
other law enforcement agencies from trying to compel companies to break
their own security features. Moreover, by hiring a firm to break the iPhone’s
security, the government funded the development of technology to weaken
security.
Pouring money, time, and resources into weakening security might seem
useful for an immediate case at hand. But in the long term, efforts like these
undermine security. Government officials, if not checked, will open
vulnerable backdoors to protect the front door. These moves make us less
secure—often ironically in the name of security.54
Exploiters
Exploiters are actors who learn about vulnerabilities and use them to their
advantage rather than reporting and fixing them. Exploiters are akin to
those people who see something harmful but don’t do anything to stop it or
even warn people about it.
In 2017, a strain of ransomware known as WannaCry attacked countless
computers through outdated Microsoft Windows operating systems. A look
into this incident reveals that there were several actors at fault, not just the
hackers and the people and organizations that failed to update their
operating systems.
WannaCry was targeted at older versions of Microsoft Windows. As we
discussed earlier, software companies have a common practice of
“deprecating” their software—they stop supporting older versions. Many
users keep using the old software because they can’t afford a new version,
have grown attached to their current version, have a device that can’t install
the necessary operating system, or have particular apps that only work on
the old system. Given the severity of the attack and the number of affected
computers, Microsoft rushed out a patch.
Long before the attack, the National Security Agency (NSA) had
discovered the vulnerability. Unfortunately, the NSA didn’t inform
Microsoft because it wanted to exploit the vulnerability. Had the NSA
reached out to Microsoft, the problem could have been fixed. Instead of
acting to keep us safe, the NSA allowed this ticking time bomb to exist
because it wanted to hack into systems itself. The NSA was believed to