because they are no longer updated. Yet they remain connected to networks, making it easier for criminals to take advantage of unpatched vulnerabilities. These unserviced software programs and devices are like radioactive waste; they can’t just be left behind or thrown away. When technology companies deprecate software, they must have a meaningful plan to transition people away from using it. Merely saying “stop” isn’t good enough. Companies need to ensure that people don’t lose their investments in the software, such as source files or custom software. The law could better facilitate such reform of the deprecation process if it held technology companies more accountable.
Sending Sensible Signals
Design doesn’t just facilitate or hinder tasks. It also delivers information. It communicates by sending a signal to the user of a technology.65 Through signals, design helps define our relationships with other people and shapes our risk calculus when interacting online. These signals affect our expectations about how things work and the context within which we are acting.
Data security law focuses a lot on safeguards like encryption but hardly at all on the signals that technologies send to the people using them, such as whom a communication is coming from or how a feature works or should be used. Yet these signals are what encourage people to click certain links, share personal data, or enable certain features. Designers must engineer their systems so that signals are understandable and encourage secure rather than insecure behavior.
The wrong design can send misleading signals. People will ignore signals if too many or too few are sent. Poor signals can be a major vulnerability. Fraudsters exploit confusing, inconsistent, and ambiguous signals to trick people. In essence, poor signals make people more gullible.
MISLEADING SIGNALS
When people use devices, there is a ton of information thrown their way about the security risks of various activities. Unfortunately, this information is often vague or misleading, making it difficult for people to get a clear sense of what they should do or how vulnerable they really are.
One example is the ubiquitous padlock icon people see when they use their browser, adjust their privacy settings on social media, and enter authentication credentials.66 The padlock is an icon of the physical manifestation of security—only those with the key get access to whatever it is protecting. But what does it mean in specific contexts online? It could mean almost anything, from the deployment of specific encryption and authentication protocols to a general warm and fuzzy sense of “security” similar to the comically vague statements in privacy policies that a “company takes your privacy and security seriously.” If nothing else, padlock icons are invitations to garner consumer trust, enticing people not to worry about providing more personal information.67 The padlock icon isn’t necessarily good or bad, but it is vague and companies often use it to promise more than they deliver. Because this icon isn’t regulated, the trust that it conveys is often false and prone to abuse.
Policymakers can play an important role in encouraging more useful signals and discouraging vague and misleading ones. At a minimum, regulators should provide guidance and facilitate industry coherence in security signals for users. Regulators can also do a lot more, such as applying federal and state laws against deceptive trade practices to companies that use false or misleading signals. In several cases, the FTC has alleged that technology companies using phrases like “easy to secure” and “advanced network security” were being deceptive because their products and services were insecure.68 Icons that invoke the concept of security convey similar messages even though no words are used.
The term “security” itself also functions as a signal. Many promises made by companies about security are as vague and empty as the padlock icon. In other areas, the FTC has created rules to limit when companies can use certain terms that might mislead consumers. For example, the FTC has limited the extent to which companies can use the word “free” to describe certain products and services.69 The use of the word “security” and of icons to represent security should be similarly scrutinized.
POORLY TIMED, TOO MANY, OR NOT ENOUGH SIGNALS
Signals fail not only when they are inconsistent, vague, or misleading; they also fail based on the timing and frequency of their use. For example, researchers at University College London conducted a study to ascertain the effectiveness of security warnings. They found that too many warnings desensitized people to the risks. The researchers concluded that “security warnings in their current forms are largely ineffective, and will remain so, unless the number of false positives can be reduced.”70 In another study, a different group of researchers concluded that “the status quo of warning messages appearing haphazardly—while people are typing, watching a video, uploading files, etc.—results in up to 90 percent of users disregarding them.”71 Researchers at Carnegie Mellon’s CyLab developed Warning Design Guidelines, which recommended that warnings be clear, concise, and accurate: “If too long, overly technical, inaccurate, or ambiguous, a warning will simply be discarded and its purpose will be lost.”72 These studies and others have repeatedly shown that security warnings are like the porridge for Goldilocks—they have to be just right. Too many warnings work poorly, as do too few, and so do warnings delivered at the wrong place and time.
Warnings should reflect the gravity of the risk. Using similar types of warnings for low security risks and for high security risks fosters confusion; people might begin to assume that all the warnings are for low risks and can be ignored. Warnings must be implemented in ways that avoid their being treated like the boy who cried wolf. Moreover, the CyLab Guidelines suggest that warnings “follow a common visual layout” because it “can be recognized faster.”
Instead of better calibrating warnings to the risk, the opposite trend is occurring. Some companies are now slapping warnings on all external emails, with labels like: EXTERNAL EMAIL—SPAM RISK. As these warnings adorn so many harmless messages, people will become desensitized to them.
Another problem is with spam filters. When going through the junk mail folder to look for any legitimate emails that have been snagged, there is
often nothing to help distinguish the danger of the emails in the folder. All |