source_id (int64, 1 to 4.64M) | question (string, lengths 0 to 28.4k) | response (string, lengths 0 to 28.8k) | metadata (dict) |
---|---|---|---|
205,208 | I was going through a web application from my account and I was able to view some confidential information about my other account. I want to know if this is a case of missing function level access control or privilege escalation. Say my URL is: www.example.com/user1/info/1 This URL contains my profile information. Now, I change this URL to www.example.com/user1/info/2 This gives info about the profile of another user, which should not be accessible to me. | You are trying to use a technical tool to solve a social problem, and the two do not fit. Techniques can provide great security when correctly used, but only user education can ensure proper use. I often like the "who is responsible for what" question. That means that users should know that they will be accountable for anything that could be done with their credentials. It is not enough for them to prove that they did not do it; they must show that they correctly protected their credentials. A physical analogy can also help: they would not leave the key to a physical safe unattended. They should understand that when they are given reasonably secured credentials, they should treat them like a physical key and use them the same way. But as they are used to their own home computers with no security at all, education is hard and things have to be repeated. Unfortunately, I have never found a better way... | {
"source": [
"https://security.stackexchange.com/questions/205208",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/200199/"
]
} |
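Editor's note: the question above describes an insecure direct object reference (broken object-level authorization): the server trusts the ID in the URL instead of checking ownership. As a rough illustration only, and not part of the original answer, here is a minimal sketch of the kind of server-side check that prevents it, assuming a hypothetical Flask handler and a load_profile() helper:

```python
# Hypothetical sketch: object-level authorization for /user1/info/<id>.
# `load_profile` and the session handling are assumed, not real project code.
from flask import Flask, abort, session

app = Flask(__name__)

@app.route("/user1/info/<int:profile_id>")
def profile_info(profile_id):
    profile = load_profile(profile_id)        # assumed data-access helper
    if profile is None:
        abort(404)
    # The key check: the record must belong to the logged-in user,
    # no matter which ID the client typed into the URL.
    if profile.owner_id != session.get("user_id"):
        abort(403)
    return profile.to_dict()
```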
205,271 | We are a vendor providing a product that is being used in enterprises. We know that those companies run periodic CVE scans on the products they use as part of their vulnerability management process. My question is: do we have to raise a CVE if our own security researcher finds a vulnerability in our product, or can we just announce the vulnerability in the weekly security updates we publish on our official website? | You can do either, but I recommend applying for a CVE so that customers who get threat intelligence feeds are more likely to notice the issue and expedite a patch. Assigning a CVE also makes it easier to reference a specific vulnerability in general communications if you need to later. It's also a signal to your customers that you take security transparency seriously. CVEs are assigned and managed by MITRE, and you can use the CVE application form to make a request. | {
"source": [
"https://security.stackexchange.com/questions/205271",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/188315/"
]
} |
205,283 | Some sites utilize a GUID for a file name when it goes in storage. For example, when you load up a receipt, instead of having the receipt named something like 1200 (an incremental number), it will have a GUID instead. What is the purpose of using this GUID if the underlying back-end code permissions checks are in place (User must be admin with receipt reading permissions or must be the receipt owner) instead of leaving in the incremental numbers in place? | You can do either, but I recommend applying for a CVE so that customers who get threat intelligence feeds are more likely to notice the issue and expedite a patch. Assigning a CVE also makes it easier to reference a specific vulnerability in general communications if you need to later. It's also a signal to your customers that you take security transparency seriously. CVEs are assigned and managed by MITRE, and you can use the CVE application form to make a request. | {
"source": [
"https://security.stackexchange.com/questions/205283",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/201818/"
]
} |
205,340 | I received an email with the subject " Your invoice from Apple #xxxxx ".
It then continues by: "[...] your payment from "Pokemon Go was accepted [...]". That line made me sceptical. I just downloaded the app recently. How could the scammer know this? Was it just a good guess? I assume it to be a scam since: The sender is suppressed Typos No Username / data Generic text A suspicious little pdf Not the signature / style from your friendly, expensive fruit seller tech company Some online warning sites already caught up on it What I could think of: Another free app reports my other apps to the vendor A site I often visit has cookies from me looking up stuff about Let's Go Pikachu My account could actually be compromised and someone has access to my records Many people have the app installed If only a fraction of the people who have the app open the attached pdf, the scammer wins. Anyway, how could this be and what countermeasures can I apply? | It's a game of probability and chances are high that you might have one of the most popular apps in history installed on your device. My guess is that the scammer does not know anything about you. The app in question is widely popular and one of the most successful apps on both iOS and Android. An attacker may just send out large amounts of mail containing such "most probable apps"/"best guesses". It would have been the same if the scammer sent an invoice for WhatsApp, which you most probably have installed on your device. This tactic can also be observed in other recent spam waves like the notorious sextortion scam where the attacker sends a rather ominous remark about your porn preferences: i installed a software on the adult videos (pornographic material) web-site [...] 1st part displays the video you were viewing ( you’ve got a nice taste haha ) So to sum it up, this is most likely just a wild, but very probable guess, and you are not compromised. Countermeasures in this case: delete the email, go catch some Pokémon and have fun. | {
"source": [
"https://security.stackexchange.com/questions/205340",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106901/"
]
} |
205,479 | Here’s an example request we can make to the GitHub API: curl 'https://api.github.com/authorizations' --user "USERNAME" This will prompt for the account password, to continue: Enter host password for user 'USERNAME': If we don’t want to get the prompt, we can provide the password at the same time as the username: curl 'https://api.github.com/authorizations' --user "USERNAME:PASSWORD" But is this method less secure? Does curl send all the data at once, or does it first setup a secure connection, and only then send the USERNAME and PASSWORD ? | Regarding the connection there's no difference: the TLS is negotiated first and the HTTP request is secured by the TLS. Locally this might be less secure, because: The password gets saved to the command history ( ~/.bash_history ) as a part of the command. Note: This can be avoided by adding a space in front of the command before running it (provided you have the setting ignorespace in variable HISTCONTROL ). On a shared system, it will usually be visible to others in ps , top and such, or by reading /proc/$pid/cmdline , for as long as the command is running. Storing the password unsecured in a script might pose a security risk, depending on where the script itself is stored. | {
"source": [
"https://security.stackexchange.com/questions/205479",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/109257/"
]
} |
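Editor's note: as a sketch of the answer's point about keeping the password off the command line (and thus out of shell history and ps), the same request can be made with the credentials read at runtime. This assumes the third-party requests package; the endpoint is the one quoted in the question:

```python
# Sketch: Basic auth without putting the password in argv.
import getpass

import requests

username = input("GitHub username: ")
password = getpass.getpass("Password: ")   # prompted, never part of the command line

resp = requests.get(
    "https://api.github.com/authorizations",
    auth=(username, password),             # sent only after the TLS handshake completes
)
print(resp.status_code)
```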
205,519 | Everywhere I look it says servers store passwords in hashed form, but then you have those breaking news about hackers stealing passwords from large companies. What am I missing? | There are two common failings, over and above letting the databases or files get stolen in the first place. Unfortunately, and against all security recommendations, many systems still store plain text passwords. Hashed passwords are technically not reversible, but as has been pointed out by others, it's possible to hash millions of password guesses then simply look for matches. In fact, what usually happens is that tables of pre-computed passwords and hashes (Rainbow Tables) are available and used to look for matches. A good rainbow table can support a high percentage match in fractions of a second per password hash. Using a salt ( an extra non-secret extension of the password ) in the hash prevents the use of pre-computed rainbow tables. Most compromisers depend upon rainbow tables. Computing their own hash set is certainly possible, but it's extremely time consuming (as in months or longer), so it's generally the vanilla hash that's vulnerable. Using a salt stops rainbow tables, and a high round count of hashed hashes of hashes can make brute force transition from months to years or longer. Most institutions simply don't implement this level of security. | {
"source": [
"https://security.stackexchange.com/questions/205519",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/202079/"
]
} |
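Editor's note: a minimal sketch of the salted, deliberately slow hashing the answer above recommends, using only the Python standard library (the parameter choices here are illustrative assumptions, not a benchmark-backed recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    salt = os.urandom(16)                                  # per-user, non-secret salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest                        # store all three

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)          # constant-time comparison
```

The random salt defeats precomputed rainbow tables, and the high iteration count supplies the "hashes of hashes of hashes" slowdown mentioned in the answer.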
205,680 | Is it secure to hash a password before using it in an application to increase password entropy? Does this practice increase entropy when a PBKDF is used in the application itself, or does the PBKDF itself increase the password entropy? If a random password is hashed with md5, will the output provide 128 bits of entropy? EDIT: The intent is to use the result of the hash function as a password for cryptographic functions and applications like AES-256, email and access to computer systems. The procedure used will be password -> hash of password -> application EDIT 2: E.g. if an email application requests a password during registration, the intended password will be hashed locally before being provided to it. | No, you don't increase entropy by hashing it once, or twice, or ten times. Consider entropy as it is seen from the input, not the output. You cannot add entropy using a deterministic process, as the entropy of the result does not count. Even if you have some code like this: $password = "123456";
$result = md5($password) . sha1($password) . hash('gost', $password);
echo $result; // e10adc3949ba59abbe56e057f20f
// 8941b84cdecc9c273927ff6d9cca1ae75945990a2cb1f
// 81e5daab52a987f6d788c372 And you end up with a scary looking 136-byte string, the password is still 123456 , and any attacker bruteforcing your hashed password will have to try, on average, only once, as 123456 is the top worst password on almost every single list. If a random password is hashed with md5 will the output provide a 128 bit entropy? No, MD5 is deterministic, so if the attacker knows the string is a MD5 hash, the entropy of it is the entropy of the random password you supplied. To make the password more secure, use a proper key derivation ( PBKDF2 is a good one), ask the user for a longer password, and check if the user is following basic password rules (no chars repeated in a row, proper length, mixed digits and chars, mixed case, things like that). | {
"source": [
"https://security.stackexchange.com/questions/205680",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/127671/"
]
} |
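Editor's note: the answer's point can be demonstrated in a few lines (a quick illustration, not part of the original post): the hash is deterministic, so an attacker simply runs candidate passwords through the same function, and the weak input stays just as guessable.

```python
import hashlib

stored = hashlib.md5("123456".encode()).hexdigest()   # always e10adc3949ba59abbe56e057f20f883e

# The attacker's loop is unchanged by the extra hashing step:
for guess in ["password", "qwerty", "123456"]:
    if hashlib.md5(guess.encode()).hexdigest() == stored:
        print("cracked:", guess)                       # found on the third guess
```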
205,894 | For convenience purposes I manage my passwords with the password manager Bitwarden on my personal computer and smartphone with autofill function (but with asking for the master password or fingerprint first every time). I was just thinking about also adding my credit card information (which is used to log into the online banking stuff) to my vault, but since that seems like such important data, I'm not sure if it would be safe or if this even is a good idea. Any opinions? I also saw this question on here, but it rather deals with whether that is reasonable from a law standpoint. | The question might come down to: which piece of data has a higher level of risk, your passwords or your credit card info? Your passwords can be used without you ever knowing about it. Passwords let someone into every aspect of your life with, potentially, every secret bit of information about you that you hold. So, it is possible for someone with your password to completely take over your life without you being aware until it is too late. Credit card use will be noticed on your next statement, or as soon as your card company posts its use. You also have several types of recourse to dispute charges and have them reversed. One might suggest that credit cards can be used to set up new cards or other lines of credit, but the same could be said with the information provided by passwords. Passwords are the higher risk. Credit card info has numerous mitigations in place to protect you. So, if you trust your password manager with your passwords, there is no increased risk with trusting it with your credit cards. There is always the inherent risk of recording any of this sensitive information, but if you have already accepted that risk for your passwords, then your credit card info does not materially increase your risks. | {
"source": [
"https://security.stackexchange.com/questions/205894",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/198939/"
]
} |
206,000 | I recently found the GitHub repository https://github.com/userEn1gm4/HLuna , but after I cloned it I noted that the comparison between the file compiled (using g++) from source, HLuna.cxx , and the binary included in the repository ( HLuna ) is different: differ: byte 25, line 1 . Is the provided binary file secure? I've already analyzed that in VirusTotal without any issues, but I don't have the expertise to decompile and read the output, and I've previously executed the binary provided without thinking about the risks. | Compilation is not a directly verifiable deterministic process across compiler versions, library versions, operating systems, or a number of other different variables. The only way to verify is to perform a diff at the assembly level. There are lots of tools that can do this but you still need to put the manual work in. | {
"source": [
"https://security.stackexchange.com/questions/206000",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/202690/"
]
} |
206,003 | How can I unmask the e-mail addresses in a Bcc field when I am just a recipient? I need very simple, step-by-step instructions for someone who doesn't code. I have received a group e-mail and would really like to see the others who got it. | You can't. You simply won't have any information about the Bcc header when you receive the mail, so there's nothing to "unmask". The way Bcc is designed is specified in RFC 2822 , under section 3.6.3. To quote the specification: The "Bcc:" field (where the "Bcc" means "Blind Carbon Copy") contains
addresses of recipients of the message whose addresses are not to be
revealed to other recipients of the message. There are three ways in
which the "Bcc:" field is used. In the first case, when a message
containing a "Bcc:" field is prepared to be sent, the "Bcc:" line is
removed even though all of the recipients (including those specified
in the "Bcc:" field) are sent a copy of the message. In the second
case, recipients specified in the "To:" and "Cc:" lines each are sent
a copy of the message with the "Bcc:" line removed as above, but the
recipients on the "Bcc:" line get a separate copy of the message
containing a "Bcc:" line. (When there are multiple recipient
addresses in the "Bcc:" field, some implementations actually send a
separate copy of the message to each recipient with a "Bcc:"
containing only the address of that particular recipient.) Finally,
since a "Bcc:" field may contain no addresses, a "Bcc:" field can be
sent without any addresses indicating to the recipients that blind
copies were sent to someone. Which method to use with "Bcc:" fields
is implementation dependent, but refer to the "Security
Considerations" section of this document for a discussion of each. When a message is a reply to another message, the mailboxes of the
authors of the original message (the mailboxes in the "From:" field)
or mailboxes specified in the "Reply-To:" field (if it exists) MAY
appear in the "To:" field of the reply since these would normally be
the primary recipients of the reply. If a reply is sent to a message
that has destination fields, it is often desirable to send a copy of
the reply to all of the recipients of the message, in addition to the
author. When such a reply is formed, addresses in the "To:" and "Cc:"
fields of the original message MAY appear in the "Cc:" field of the
reply, since these are normally secondary recipients of the reply. If
a "Bcc:" field is present in the original message, addresses in that
field MAY appear in the "Bcc:" field of the reply, but SHOULD NOT
appear in the "To:" or "Cc:" fields. Note: Some mail applications have automatic reply commands that
include the destination addresses of the original message in the
destination addresses of the reply. How those reply commands behave
is implementation dependent and is beyond the scope of this document.
In particular, whether or not to include the original destination
addresses when the original message had a "Reply-To:" field is not
addressed here. In practice the case where To and Cc recipients receive no Bcc line, but each Bcc'ed address receives a Bcc line containing only their email address, is most common. This provides no indication of a Bcc to the To and Cc recipients, and indicates to the Bcc'ed recipients that they were sent the email via the use of Bcc without revealing other Bcc recipients. | {
"source": [
"https://security.stackexchange.com/questions/206003",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/202694/"
]
} |
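Editor's note: a small sketch of why there is nothing to unmask. With a typical mail library, Bcc recipients are supplied only in the SMTP envelope, not in any header of the delivered message. The addresses and SMTP host below are placeholders:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "visible@example.com"
msg["Subject"] = "Group mail"
msg.set_content("Hello")
# Note: no "Bcc:" header is ever added to the message itself.

bcc = ["hidden1@example.com", "hidden2@example.com"]
with smtplib.SMTP("mail.example.com") as smtp:
    smtp.send_message(msg, to_addrs=["visible@example.com"] + bcc)
```

The hidden recipients get the mail because they appear in the envelope's RCPT TO list, but the headers the visible recipient sees never mention them.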
206,061 | How does Chrome/Firefox make sure add-ons are safe? Do they have any protection against a malicious add-on? How much access can add-ons have? Can they access internet history or maybe even cookies and such and send them to a server? Do I need to worry about this? I do have Kaspersky and Kaspersky add-ons but I still wonder should I still worry about add-ons? Considering there is nothing I can do to make sure some add-ons are malicious or not even if they still have an OK reputation. EDIT: bonus question, if an addon says it can read the data on websites you visit, does it mean it can know which websites I visit and technically can send them to a server and basically record my history this way ? (considering many adblockers and such addons have this permission) | Modern browser extensions use the WebExtensions API , which enforces a permission model; basically, addons can only have the access that you grant them (you can't reject individual permissions though; if you are uncomfortable with some, you can't install the addon). Regarding your specific questions: The browser history can only be requested if the history permission is granted. The cookies permission only works along with a host permission which will define which cookies can be accessed. Host permissions are required for all of the sensitive actions (such as injecting JavaScript into a page, reading the contents of a page, etc). Malicious extensions can of course execute arbitrary JavaScript in an isolated context, so something like a malicious cryptominer is certainly feasible. For access which doesn't require explicit permissions, see my related question: Danger of browser extension without any permissions? . | {
"source": [
"https://security.stackexchange.com/questions/206061",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/202757/"
]
} |
206,090 | A lot of the users in my company are using their agendas to write down their passwords and usernames, or Excel sheets protected with a password. I'm hesitant to install software for password management after reading recommendations/feedback on them. Is there any other secure and user-friendly solution to store passwords? | Install a password manager. A good password manager is much, much better than anything you can do by yourself. They are software created by security professionals, follow strict development rules, and are tested by a lot of people, and attacked by a lot of people. They have a better chance of protecting your passwords than anything invented by the average, or even above-average, user. | {
"source": [
"https://security.stackexchange.com/questions/206090",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/202784/"
]
} |
206,186 | I want to bring this up to HostGator, but want to verify my suspicions before making a big fuss. I asked a customer care representative to help me add an SSL certificate to a site I host with them. When he was done, I received this e-mail with all my login information, and my entire password in plain text (I left the first letter visible as evidence). I set up this password over a year ago, and it was a big surprise to find out they sent it back to me, unprompted, in plaintext: I immediately brought this up to the representative, who repeatedly tried to convince me that it was OK. I decided to drop it after a few minutes, because I think I should bring it up to someone higher up. Before I do so, is it safe to assume that my password is stored in their database as plain text? If so, do you have any suggestions on how to address this issue with the provider? | Yep, that's a big problem, especially if that was your old password (i.e. not a newly assigned one). Technically, the password might be stored under reversible encryption rather than plain text, but that's nearly as bad. The absolute minimum standard should be a salted hash - anything less and anybody with access to the auth database who wants to can use an online rainbow table to get back the plaintext passwords in moments - but single-iteration secure hash algorithm (SHA) functions are still easy to brute force with a GPU (they're designed to be fast; a high-end GPU can compute billions per second) so they really ought to be using a proper password hashing function such as scrypt or argon2, or in a pinch bcrypt or PBKDF2. Also, there is absolutely no way to guarantee that the email was encrypted along the entire path between their mail server and your email client. Email was designed in a day when people didn't really consider such things to be critical, and short of an end-to-end encryption scheme like OpenPGP or S/MIME, email is at best encrypted opportunistically , and may be passed through an unencrypted relay. | {
"source": [
"https://security.stackexchange.com/questions/206186",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/202911/"
]
} |
206,218 | One of my old email addresses was involved in the recent Whitepages breach disclosure ( source: Have I Been Pwned ). I don't remember on which websites I used that email address for registration, but I would like to reset my password everywhere possible . Websites could include: Facebook, Google, Amazon, eBay, Paypal, etc. - basically the top N commonly-used or sensitive web applications/platforms. This is particularly important as I was not using a password manager at the time and may have reused passwords. Is there an existing way to automate initiating password resets , mainly by requesting password reset emails, on common platforms given a single email address that I have access to ? | No, not really - they all have different processes for verifying your identity for password reset requests, and there isn't any standard for bulk password resets. For example, Apple may use a device which is registered to the account as a confirmation that it's you sending the request, while Facebook uses different schemes depending on whether you're changing your password from a device where you've previously logged in, or from a completely unrelated one. Easiest way is probably to go through common websites (e.g. work through a list like https://en.wikipedia.org/wiki/List_of_most_popular_websites , ignoring any which you are sure don't apply) providing the email address you want to reset, and watching for reset emails. It's not perfect, but if you're changing the ones you know are sensitive (e.g. ones which have credit card details associated, or email accounts, or government systems), that's ok - you know that those accounts will have unique passwords, even if an attacker may be able to log into your abandoned MySpace (or other defunct social network) account with an old password. | {
"source": [
"https://security.stackexchange.com/questions/206218",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/59842/"
]
} |
206,520 | I am a software engineer and have been watching a lot of videos about XSS. But I fail to understand how it is dangerous if it runs on the client side and does not execute on the server side, which contains the databases and many important files. | Below are the things an attacker can do if there is an XSS vulnerability.
Ad-Jacking - If you manage to get stored XSS on a website, just inject your ads into it to make money ;)
Click-Jacking - You can create a hidden overlay on a page to hijack clicks of the victim to perform malicious actions.
Session Hijacking - HTTP cookies can be accessed by JavaScript if the HTTP ONLY flag is not present in the cookies.
Content Spoofing - JavaScript has full access to the client side code of a web app and hence you can use it to show/modify desired content.
Credential Harvesting - The most fun part. You can use a fancy popup to harvest credentials: "WiFi firmware has been updated, re-enter your credentials to authenticate."
Forced Downloads - So the victim isn't downloading your malicious flash player from absolutely-safe.com? Don't worry, you will have more luck trying to force a download from the trusted website your victim is visiting.
Crypto Mining - Yes, you can use the victim's CPU to mine some bitcoin for you!
Bypassing CSRF protection - You can make POST requests with JavaScript, you can collect and submit a CSRF token with JavaScript, what else do you need?
Keylogging - You know what this is.
Recording Audio - It requires authorization from the user, but you can access the victim's microphone. Thanks to HTML5 and JavaScript.
Taking pictures - It requires authorization from the user, but you can access the victim's webcam. Thanks to HTML5 and JavaScript.
Geo-location - It requires authorization from the user, but you can access the victim's geo-location. Thanks to HTML5 and JavaScript. Works better with devices with GPS.
Stealing HTML5 web storage data - HTML5 introduced a new feature, web storage. Now a website can store data in the browser for later use and, of course, JavaScript can access that storage via window.localStorage() and window.webStorage().
Browser & System Fingerprinting - JavaScript makes it a piece of cake to find your browser name, version, installed plugins and their versions, your operating system, architecture, system time, language and screen resolution.
Network Scanning - The victim's browser can be abused to scan ports and hosts with JavaScript.
Crashing Browsers - Yes! You can crash browsers by flooding them with... stuff.
Stealing Information - Grab information from the webpage and send it to your server. Simple!
Redirecting - You can use JavaScript to redirect users to a webpage of your choice.
Tabnapping - Just a fancy version of redirection. For example, if no keyboard or mouse events have been received for more than a minute, it could mean that the user is AFK and you can sneakily replace the current webpage with a fake one.
Capturing Screenshots - Thanks to HTML5 again, now you can take a screenshot of a webpage. Blind XSS detection tools have been doing this before it was cool.
Perform Actions - You are controlling the browser, | {
"source": [
"https://security.stackexchange.com/questions/206520",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/203373/"
]
} |
206,579 | Let's say a user can store some data in a web app. I'm now only talking about that sort of data the user can THEMSELVES view, not that is intended to be viewed by other users of the webapp. (Or if other users may view this data then it is handled to them in a more secure way.) How horrible would it be to allow some XSS vulnerability in this data? Of course, a purist's answer would clearly be: "No vulnerabilities are allowed". But honestly - why? Everything that is allowed is the user XSSing THEMSELVES. What's the harm here? Other users are protected. And I can't see a reason why would someone mount an attack against themselves (except if it is a harmless one, in which case - again - no harm is done). My gut feelings are that the above reasoning will raise some eyebrows... OK, then what am I failing to see? | This is actually a real concept, "Self XSS" which is sufficiently common that if you open https://facebook.com and then open the developer tools, they warn you about it Obviously Facebook is a specific type of target and whether this issue matters to you or not, would depend on the exact nature of your site, but you may not be able to discount the idea of one user using social engineering techniques to get another user to attack themselves. | {
"source": [
"https://security.stackexchange.com/questions/206579",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/108649/"
]
} |
206,606 | In the late 1990s, a computer virus known as CIH began infecting some computers. Its payload, when triggered, overwrote system information and destroyed the computer's BIOS, essentially bricking whatever computer it infected. Could a virus that affects modern operating systems (like Windows 10) destroy the BIOS of a modern computer and essentially brick it the same way, or is it now impossible for a virus to gain access to a modern computer's BIOS? | Modern computers don't have a BIOS; they have a UEFI . Updating the UEFI firmware from the running operating system is a standard procedure, so any malware which manages to get executed on the operating system with sufficient privileges could attempt to do the same. However, most UEFIs will not accept an update which isn't digitally signed by the manufacturer. That means it should not be possible to overwrite it with arbitrary code. This, however, assumes that: 1) the mainboard manufacturers manage to keep their private keys secret, and 2) the UEFI doesn't have any unintended security vulnerabilities which allow overwriting it with arbitrary code or which can otherwise be exploited to cause damage. And those two assumptions do not necessarily hold. Regarding leaked keys: if a UEFI signing key were to become known to the general public, then you can assume that there would be quite a lot of media reporting and hysterical patching going on. If you follow some IT news, you would likely see a lot of alarmist "If you have a [brand] mainboard UPDATE YOUR UEFI NOW!!!1111oneone" headlines. But another possibility is signing keys secretly leaked to state actors. So if your work might be interesting for industrial espionage, then this might also be a credible threat for you. Regarding bugs: UEFIs gain more and more functionality, which brings more and more possibilities for hidden bugs. They also lack most of the internal security features you have after you have booted a "real" operating system. | {
"source": [
"https://security.stackexchange.com/questions/206606",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/203496/"
]
} |
206,649 | I just looked at my user agent tracking page on my site (archived on Yandex ) and I noticed these user agents. I believe they are an attempt to exploit my server (Nginx with PHP). The 1 in front of it is just how many times the user agent was seen in the Nginx log. These are also shortened user agents and not long ones like Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36 . I no longer have access to the logs as I presume this occurred sometime in January or February (my oldest logs are in March and I created the site in January). 1 Mozilla/5.9}print(238947899389478923-34567343546345);{
1 Mozilla/5.9{${print(238947899389478923-34567343546345)}}
1 Mozilla/5.9\x22{${print(238947899389478923-34567343546345)}}\x22
1 Mozilla/5.9\x22];print(238947899389478923-34567343546345);//
1 Mozilla/5.9\x22 What exploit was attempted and how can I test to ensure these exploits are not usable? | It looks to be trying to exploit some form of command injection. As DarkMatter mentioned in his answer, this was likely a broad attempt to find any vulnerable servers, rather than targeting you specifically. The payload itself just appears to just be testing to see if the server is vulnerable to command injection. It does not appear to have any additional purpose. In order to test if you would be affected by these specific payloads, the easiest way would be to send them to your own server, and see how they respond. Note, that I only say this because the payloads themselves are benign; I do not recommend doing this with all payloads. My bet is that your server is not vulnerable, because I would have expected to see follow up request to actually exploit your server. | {
"source": [
"https://security.stackexchange.com/questions/206649",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/123902/"
]
} |
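Editor's note: following the answer's suggestion to replay the (benign) payloads against a server you own, a rough sketch. The payloads only print the result of a subtraction, so if that number shows up in the response, the injection was evaluated. The target URL is a placeholder and the requests package is assumed; only run this against your own server:

```python
import requests

target = "https://your-own-server.example/"
marker = str(238947899389478923 - 34567343546345)   # the value the payloads would print

payloads = [
    "Mozilla/5.9}print(238947899389478923-34567343546345);{",
    "Mozilla/5.9{${print(238947899389478923-34567343546345)}}",
    "Mozilla/5.9\\x22];print(238947899389478923-34567343546345);//",
]

for ua in payloads:
    resp = requests.get(target, headers={"User-Agent": ua}, timeout=10)
    if marker in resp.text:
        print("possible injection with User-Agent:", ua)
```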
206,690 | Recently I've started working as a contractor for a company, which requires me to often log in to different b2b services. The way I receive the login data is usually over email in plain text. My gut feeling tells me sending sensitive data in plain text is not a good idea however I'm not sure if there is an alternative, or perhaps it is already protected by the mailing service (in this case gmail)? I'm aware of probably the most obvious danger which would be leaving my email account logged in and unattended, however I'm more interested in some kind of a man in the middle attacks or other dangers. Core of my question is: Is there an alternative to sending passwords over email, and what would be biggest dangers of using email for this? | A common practice is to send the user an initial password via email which is only valid for a very short time and needs to be changed immediately during the first login. This is not perfect either. An attacker with read access to the user's email could intercept that initial password before the user and use it on their behalf. The user would notice as soon as they try to use their initial password. They would notify the admin that the password is wrong, the admin would investigate and notice the illegitimate access. But the attacker already had some time to access the account, so there might already be damage. But it's still better than sending a permanently valid password. It also requires that the system supports this. So it's not an universally applicable practice. When you don't trust your email provider to keep your emails secret (you are using gmail , a mail service financed by data-mining the content of your email and monetizing the results), then email encryption is an option. There is the good old PGP , the more modern PEP , the IETF standard S/MIME as well as some non-standardized proprietary solutions. That's the nice thing about standards: There are so many to choose from! But they all have one thing in common: They just don't catch on. Getting your business partners to encrypt their email in a scheme you understand can be an annoying uphill battle. | {
"source": [
"https://security.stackexchange.com/questions/206690",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/175806/"
]
} |
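Editor's note: a rough sketch of the "initial password that expires quickly and must be changed at first login" pattern from the answer above; the storage and delivery layers are assumed, not shown:

```python
import secrets
from datetime import datetime, timedelta, timezone

def issue_initial_password(validity_minutes: int = 30):
    password = secrets.token_urlsafe(12)                       # random one-time value
    expires_at = datetime.now(timezone.utc) + timedelta(minutes=validity_minutes)
    # A real system would store only a hash of `password`, plus expires_at and
    # a must_change_on_first_login flag, then deliver `password` out of band.
    return password, expires_at
```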
206,923 | Over the last week, there has been a constant barrage of authentication failures to my email account from a variety of IP addresses - usually in blocks of exactly 575 attempts. My password is as strong as a password can be, so the chance of brute force winning is infinitesimal. However, as a result of the authentication failures, my hosting provider keeps locking the email account. Is there anything I can do (or that I can ask my hosting provider to do), or am I just screwed until the botnet moves on? Anyone with similar experience who can comment on whether I can expect this to ever end? EDIT: After about 9 days, I suddenly stopped getting locked out and the ticket got closed. I guess they finished "testing" the new policies/systems and hit the rollback button? I'm not happy that support insisted on so much troubleshooting at my end when the whole thing seems to have started after a security overhaul at their end, but that's how it always goes... | A few thoughts: Usually my first recommendation would be to pick an extremely strong password. But you already got that covered. If there is two-factor authentication available, turn it on. If you are lucky, it might make you an unattractive target and cause the attacker to move on. If the account lockout doesn't affect other methods of reading your mail, like via IMAP, you could switch to that to maintain access. (To be honest, I don't know much about the security of IMAP, so you might want to consider that before turning it on.) Forwarding the mail somewhere else will also ensure that you can read it even if your account is locked. Finally, you can try contacting your email provider. I think your best bet here is to just describe the problem to them, and ask what they can do to help you. | {
"source": [
"https://security.stackexchange.com/questions/206923",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/203955/"
]
} |
206,979 | There are built-in functionalities to encrypt storage on OS X ( FileVault ) and Android. On OS X: to enable encryption, the current user must have a password-protected account. After enabling the encryption, a recovery key is generated (something like HHWj-Y8DK-ODO4-BQEN-FQ4V-M4O8 ). After the encryption is finished (and in all probability before that as well) the user is able to change his password, without the need to re-encrypt the storage. On Android: the user is required to set lockscreen protection to either a PIN or a password. After storage encryption is done (again, probably before that as well), the user is able to change the password, and even switch from password to PIN and vice versa. Now here is what puzzles me: my understanding is that when storage is encrypted, it is done with the current user password (sort of like encrypting an archive), and if the password is changed, the whole storage must be re-encrypted. This (apparently incorrect) understanding brings me to the following questions: What "key" (since it is not the password itself) is the encryption based on, then? For OS X, I am guessing, it's the recovery key, but how is it connected to the user's password then? If the password is not the basis for encryption, why is it required to set one before encrypting your storage? How is the ability to decrypt storage maintained (without re-encrypting) after the password is changed? | At a high level, disk encryption is implemented using a data encryption key (DEK) and a key encryption key (KEK). The DEK is generated randomly and used to encrypt the drive; the KEK is derived from the user's password using a KDF like PBKDF2 or Argon2 and then used to encrypt the DEK. When changing the password, the DEK is simply encrypted with a new KEK derived from the new password. Encrypting without a password is likely prohibited to avoid a false sense of security. It'd be a bit like locking your door but leaving the key in the lock. Of course, if you're changing your password because you believe someone figured it out, and that person also had access to the encrypted device, it's possible they stored a copy of the DEK. In this case it may be necessary to re-encrypt the entire drive, though doing so will likely take some time. | {
"source": [
"https://security.stackexchange.com/questions/206979",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
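Editor's note: a condensed sketch of the DEK/KEK scheme the answer describes, assuming the third-party cryptography package (Fernet). Note how a password change only re-wraps the DEK; the bulk data stays encrypted under the same DEK:

```python
import base64
import hashlib
import os

from cryptography.fernet import Fernet

def kek_from_password(password: str, salt: bytes) -> bytes:
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)            # Fernet expects a base64-encoded 32-byte key

salt = os.urandom(16)
dek = Fernet.generate_key()                         # random data-encryption key
wrapped = Fernet(kek_from_password("old password", salt)).encrypt(dek)

# Password change: unwrap the DEK with the old KEK, re-wrap with the new one.
dek = Fernet(kek_from_password("old password", salt)).decrypt(wrapped)
wrapped = Fernet(kek_from_password("new password", salt)).encrypt(dek)
# (A real implementation would also rotate the salt and KDF parameters here.)
```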
206,984 | For someone who values anonymity and privacy, what is the recommended way to pay on the Internet? Example: To buy a domain or a VPN or another service I know that we can use cryptocurrencies, but at some point, you need to buy cryptocurrency using a traditional currency. | To protect your privacy and avoid tracking, nothing beats cash. There are various services that let you purchase credits in cash at a brick and mortar store, which you can then later use to purchase goods and services online. One example is paysafecard (I haven't tried it, but you should also be able to buy bitcoin with cash ). There are a number of VPN providers which accept these payments. Alternatively, you could simply purchase your VPN access directly offline at a store. There are also domain registrars which accept these payment methods, but most will ask for identifying information (name, address, etc) when registering a domain. So if you want to conform with registrars TOS, registering a domain anonymously wouldn't be possible. You can hide your information from third parties by requesting that your registrar doesn't disclose the information, violate the TOS by providing false information (not recommended), or find a registrar or third party service that does not request this information. | {
"source": [
"https://security.stackexchange.com/questions/206984",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/176981/"
]
} |
207,100 | I searched Google about this term, but the definitions that I found were related to the medical world, and nothing related to IT. I think it is some kind of procedure for documenting something, maybe? Note that I heard this word for the first time in the SOC (Security Operations Center) where I am currently working. | We just got reports that 4000 of our systems are infected with ransomware. 3000 are end users, 800 are non-critical servers, 200 are critical servers. Triage is looking at this mess and deciding which order to start restoring systems in. We can't tackle them all at once, so we have to look at some and say 'Sorry, little Inspiron that couldn't, you get to sit there and be useless for a while.' It comes from the medical world, as you've stated. It's the same reasoning as an ER doctor looking at two patients and deciding to work on the one that they're more certain they can save. You let one go, as hard as it may be, so that the other might live. If you'd worked on the worse-injured person, it's possible they both would have died. The difference in the security world is that often it's dollars lost due to users being unable to work, rather than literal life and death. You work on the systems that you are most likely to be able to restore, and that will return the largest amount of productivity to the environment. You leave the individual laptops that only affect a single user to the side, for now. | {
"source": [
"https://security.stackexchange.com/questions/207100",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/173189/"
]
} |
207,105 | The number of request strings like this one seems to have increased over the last weeks. They all have in common, that they contain a combination of google adwords / google adsense parameters and sql injection. The decoded value of the request string is /?gclsrc=aw.ds&gclsrc=aw.ds& and 1>1 I can see a duplicate gclsrc parameter an sql injection attempt ... but why this combination? Is it a technique to grab adsense revenues while sqli probing? | We just got reports that 4000 of our systems are infected with ransomeware. 3000 are end users, 800 are non-critical servers, 200 are critical servers. Triage is looking at this mess and deciding which order to start restoring systems in. We can't tackle them all at once, so we have to look at some and say 'Sorry, little Inspiron that couldn't, you get to sit there and be useless for a while.' It comes from the medical world, as you've stated. It's the same reasoning as an ER doctor looking at two patients and deciding to work on the one that they're more certain they can save. You let one go, as hard as it may be, so that the other might live. If you'd worked on the worse injured person, it's possible they both would have died. The difference in the security world is that often it's dollars lost due to users being unable to work, rather than literal life and death. You work on the systems that you are most likely to be able to restore, and that will return the largest amount of productivity to the environment. You leave the individual laptops that only affect a single user to the side, for now. | {
"source": [
"https://security.stackexchange.com/questions/207105",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/204206/"
]
} |
207,122 | Windows 7 support will end on January 14, 2020 . Assuming that after that day I still use an updated browser, is it true that I'm still safe? Can it "patch" the OS-based security holes? Minor question: typically, how long would the browsers stop supporting abandoned OS? Is there any number on this? Related: Why should browser security be prioritized? FYI: Attack surface - Wikipedia | Do not use an outdated OS, even with a modern browser. Assuming that after that day I still use an updated browser, is it true that I'm still safe? No, you cannot avoid browser-based security holes only by updating the browser. There are a few reasons for this. Primarily, the browser is not entirely self-contained. It makes use of operating system libraries, for example the system memory allocator. This allocator is designed to mitigate various memory corruption-related security issues. If the allocator is not kept up to date, memory exploitation bugs may be easier to perform against the browser, no matter how up to date the browser is. Another reason is that browser security often relies on OS sandboxing features. A powerful browser exploit must be combined with a so-called sandbox escape . How easy that escape is depends on how secure the operating system is as well as how secure the browser is. By using an outdated operating system, your browser is being protected by out of date and potentially vulnerable security features. Can it "patch" the OS-based security holes? No. Patching operating system vulnerabilities requires elevated privileges, which a browser does not have. Even if it did, browsers are not designed to modify system settings or system files. There is no extension or web page you can go to that is able to patch security vulnerabilities in your OS. Minor question: typically, how long would the browsers stop supporting abandoned OS? Browser vendors typically publish when they will stop officially supporting a particular operating system. After that point, changes made to the browser that break on older systems will no longer be considered bugs and may not be fixed. Programs typically continue running on older systems for a very long time, however. They only stop working when they begin to rely on newer system APIs that aren't present in older versions. This is relatively rare. A browser should be able to run on an outdated operating system for many years, albeit not very securely, and without official support from the vendor. Most likely, as it begins to rely on newer and newer APIs, features in the browser will just start breaking one by one (especially security-related features) until it eventually does not start up at all. | {
"source": [
"https://security.stackexchange.com/questions/207122",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/94500/"
]
} |
207,241 | Lets say I buy something at some (physical) store and pay using a credit card on one of these electronic terminals. What information do the owners of this store get about me (or my credit card) from this transaction? Can they find out whether multiple purchases were paid using the same credit card? | From the card itself, the Merchant gets the track data , which includes card number, expiration date, and cardholder name. If the Merchant requires zip code verification, they'll get your zip code, obviously. (Card-Not-Present Merchants often get address data for billing/shipping purposes, but you asked about physical stores... and they get that from the Customer, not the card itself.) The Merchant can track purchases made with that card within their store(s), but not those made at other, unconnected stores. Be aware that sometimes multiple stores (e.g. HomeGoods, TJ Maxx) are actually the same "Merchant" (TJX Companies). The Processor, on the other hand, can correlate a single card's activity across multiple Merchants. They don't generally have transaction details ("what you bought") but they do have amounts, categories, Merchants, times, all of which may be provided to the Card Brands (Visa, Mastercard, ...) upon request, or law enforcement upon a subpoena. Each processor will have a different view. If Processor A handles Merchants A, B, and C, and Processor B handles merchants D, E, and F, then the Processors will have completely disjoint sets of data to work with. In general most Merchants use a single Processor; some load-balance across multiple Processors for redundancy and availability, but most transactions will only be seen by one Processor. Processors do a lot of data analysis to provide value-add, but not to the extent of providing individual cardholder details across Merchants. Most such data analysis is done on large, anonymous buckets, but others, like householding, require identifying factors be used in the analysis. Processors, Card Brands, and Banks can also make loose inferences about what you're buying based on the Merchant Category Code (MCC). These aren't very exact - those salted peanuts from the Exxon station might get classified as "Gas" - but they provide some guidance. These are the codes that Corporate-issued credit cards will use to block non-work transactions. Finally, cards themselves are informative. Merchants can tell the difference between a prepaid card and a Black Card, and they can treat the cardholder differently in accordance with their status, for example extending discounts to higher-value-card holders. This is true not only in a physical store, where the Merchant sees your card; Processors can provide this sort of metadata to Card-Not-Present Merchants as well. (The ability to determine the type of card is not unique to Processors; it's based on the BIN (the first 6 digits of the card) and you can look it up with freely available tools like binlist.net . However, since the list changes over time, and since it's only a portion of guidance, this is a service most usefully provided by a Processor. For example, anyone can tell if a card is a Black Card - but as a Merchant you might treat a Black Card with a high chargeback rate differently than the rest. Only the Processor can integrate that guidance.) | {
"source": [
"https://security.stackexchange.com/questions/207241",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/199943/"
]
} |
207,389 | I currently work on the IT security team at my workplace in a senior role. Recently, I assisted management in designing the phishing / social engineering training campaigns, by which IT security will send out phishing "test" emails to see how good the company employees are at spotting such emails. We have adopted a highly targeted strategy based not only on the user's job role but also on the content such employees are likely to see. The content has been varied, ranging from emails asking for sensitive actions (e.g. updating a password) to fake social media posts to targeted advertising. We have been getting pushback from end users that they have no way of distinguishing a legitimate email that they would receive day to day from truly malicious phishing emails. There have been requests to scale back the difficulty of these tests from our team. Edit to address some comments that say spear phishing simulations are too extreme / badly designed: In analyzing the past results of phishing simulations, the users who clicked tended to show certain patterns. Also, one particular successful phish that resulted in financial loss (an unnecessary online purchase) was pretending to be a member of senior management. To respond to comments on depth of targeting / GDPR, methods of customization are based on public company data (i.e. job function), rather than private user data known to that person only. The "content that users are likely to see" is based on "typical scenarios", not what content users at our workplace see specifically. Questions: When is phishing education going too far? Is pushback from the end users demonstrative that their awareness is still lacking and that they need further training, specifically the inability to recognize legitimate from malicious emails? | I think there is an underlying problem that you will need to address. Why do the users care that they are failing? Phishing simulations should, first and foremost, be an education tool, not a testing tool. If there are negative consequences to failing, then yes, your users are going to complain if the tests are more difficult than you have prepared them for. You would complain, too. So, your response should be to: educate them more (or differently) so that they can pass the tests (or rather, the comprehension tests, which is what they should be), and remove negative consequences for failing. This might not require any content changes to your education material, but might only require a re-framing of the phishing simulations for users, management, and your security team. Another tactic to try is to graduate the phishing simulations so that they get harder as the users are successful in responding to phishing. I have done this with my custom programmes. It's more complex on the back end, but the payoffs are huge if you can do it. Your focus needs to be the evolving maturity of your organisation's ability to resist phishing attacks, not getting everyone to be perfect on tests. Once you take this perspective, the culture around these tests and the complaints will change. Do it right, and your users will ask for the phishing simulations to be harder, not easier. If you aim for that end result, you will have a much more resilient organisation. | {
"source": [
"https://security.stackexchange.com/questions/207389",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106510/"
]
} |
207,492 | I often recognise that people blur their license plates on pictures on the internet in Germany. I can't figure out what's the fuss. The information is public nevertheless (I mean it's on your vehicle), nobody but appropriated authorities can get any data out of it, and people also do it on platforms where they are identifiable anyways (Facebook, car selling platforms etc.). So what's the problem of having your license plate visible on the internet (in Germany/EU)? | It's a matter of privacy. The thing you definitely determine from the license plate in some countries is the county of the registered car. In small countries some counties have a small number of registered vehicles and that eases tracking one. Other things you may be able to determine in quite a lot of EU countries: year of birth of the owner (since some use 2 number character in the plate number) name of the owner (since many use the 2 or specially 3-letter abbreviation of their name) Example: a car in Romania with the number CT-90-GEO will give you quite good information: the location of the registration is in the Constanta county (abbreviated CT for vehicles) and the owner is named George and was born in 1990. So many people prefer not to release something like this in the wild. | {
"source": [
"https://security.stackexchange.com/questions/207492",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81995/"
]
} |
207,640 | Many living politicians' signatures are found online or can handily be acquired (You can write them, and their letters will contain their signature). Thus do they use a different signature for personal private documents? If not, how can they guarantee that their signatures aren't replicated without their consent? I deliberately quote signatures of lesser-known politicians. The Rt Hon. George Osborne MP The Honourable Rona Ambrose PC Some members of the Minnesota House & Senate | They don't. Outside the lowest security scenarios, signatures aren't intended to be security feature to prevent forgery. In most cases, putting a signature on paper is just ceremonial. The actual details/decision may have been recorded through other means. For example, depending on what exactly is being signed and how sensitive the matters is, by witnesses (whether by sworn individuals or even just casually), through publications in mass media, or filed into a register on a trusted computer system/filing cabinet. In the latter case, the paper you receive, even when physically signed with ink and even if it's the paper signed when you shake hands in agreement, might just be considered a copy of the true records in the registry you don't really see. This is similar to how a birth/death/university degree certificate is really just a copy of the registration detail in your civil/university records, the only really important detail in that piece of paper is the record number rather than the signature. A person who has reasonable suspicion that the letters might be a forgery should confirm with the office of the relevant signer to check for its authenticity. For important, formal letters, there may be a filing number for the file that you can cite to confirm the content of the letter. | {
"source": [
"https://security.stackexchange.com/questions/207640",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
208,926 | I am testing a website which accepts invalid credit card numbers for reservations. The interesting thing is they do CC validation if the currency is USD, but not for any other currencies. Should I report this as a security issue or will it come under fraud management? | Should I report this as a security issue or will it come under fraud management? There may be a business risk issue, which you can document under security, but how significant it is depends on the business. You say the web site accepts ... credit card numbers for reservations. What are those reservations for? If it's a hotel room, then there is limited potential for fraud, since the actual card would be required at check-in time. An attacker could attempt to impact service by blocking off rooms with bogus cards, thus reducing the pool legitimate visitors could reserve, but I see that as a minor concern based on scalability and sustainability issues. If it's a game store that is purchasing stock based on pre-order reservations, then the store is extending actual capital to stock games they might then be stuck without buyers for. This sort of business is more threatened by invalid card reservations, because they're sinking real dollars into the expected sales those reservations indicate. There are other businesses where 'reservations' are largely meaningless, a way to encourage engagement by the customer at no real cost. In these cases, the business impact is negligible. The interesting thing is they do CC validation if the currency is USD,
but not for any other currencies. And that may be reflective of business risk acceptance. If 99% of their reservations are in USD, then the risk of accepting invalid non-USD card reservations may be negligible. If implementing non-USD validation has any specific cost to it (fees from processor? coding time to handle if-then-else branches?) then it's a legitimate option to leave it out at 1% coverage. | {
"source": [
"https://security.stackexchange.com/questions/208926",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/202368/"
]
} |
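For context on the cost side of that trade-off: the usual first-pass check for "invalid credit card numbers" is the Luhn checksum, which is currency-independent and essentially free to run on every reservation. A minimal sketch in Python, as an illustration only - this is not the site's actual validation logic, and it is no substitute for an authorization check with the card processor:
import_note = None  # no imports needed

def luhn_valid(card_number: str) -> bool:
    # Standard Luhn checksum: the usual first-pass test for mistyped or made-up numbers.
    digits = [int(c) for c in card_number if c.isdigit()]
    if len(digits) < 12:                     # too short to be a plausible card number
        return False
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:                       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4111111111111111"))        # True  (well-known test number)
print(luhn_valid("1234567812345678"))        # False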
208,937 | I recently investigated best-practices in regards to passwords, and the overwhelming majority of sources recommended using a password manager. This is great advice, but not usable in every situation. Certain situations, such as OS login, Disk Decryption or Password Manager unlocks do not allow me to let a password manager "type my password in for me". As such, I have looked at the second-best alternative, which seems to be Diceware and Passphrases. What had me stumped was this answer of a related question, which hinted that Diceware was superior. An excerpt from the answer: Passphrases are great ( Diceware is better) for locking password managers, [...] Emphasis mine What confuses me is why this claim that Diceware is supposedly superior? I used zxcvbn to compare the strength of the two example passwords below and it seemed as if the passphrase was more secure than the Diceware password. Further, the Passphrase generates a visual image, although nonsensical, which is easy to remember. The only disadvantage I can imagine is that the passphrase takes longer to type, which is a marginal disadvantage considering it would only need to be typed once before a password manager can be used again. Examples Diceware Diceware is the process of rolling a set of dice, which would indicate a random word from a pre-defined list. Depending on the desired security, more words are chosen. An example outcome of a Diceware process might be the password: cleft cam synod lacy yr wok Passphrases A passphrase is in essence a sentence, which make sense to the user and hopefully nobody else. It might make grammatical sense, but is very unlikely to make semantic sense. An example of a passphrase would be: Blue Light shines from the small Bunny onto the Lake. | Most people that use passphrases, use passphrases wrong. The remark that Diceware is better probably comes from the fact that, when people use passphrases, they usually take a well-known or otherwise logically structured sentence and use that. "Mary had a little lamb" is a terrible passphrase because it is one of a few billion well-known phrases that a computer can run through in a short amount of time. I know this works pretty well because I tried it . Diceware is just random words. It's as good as any other randomly generated set of words, assuming you use a good source of randomness: for Diceware, you should use dice, which is a reasonably good source. Digital password generators are usually also good, though homebrew implementations might use an insecure random generator by mistake. We know that any random passphrase is good because it's basic math. There are two properties to a passphrase: Dictionary size Number of words in the phrase The 'randomness' of a passphrase is simple to calculate: dictionary_size ^ words_in_phrase , where ^ is exponentiation. A passphrase of 3 words with a dictionary of 8000 words is 8000^3= 512 billion possible phrases. So an attacker, when guessing the phrase, would have to try 256 billion phrases (on average) before s/he gets it right. To compare with a password of similar strength: a random password using 7 characters, consisting of a-z and A-Z, has a "dictionary size" of 52 (26 + 26) and a "number of words" of 7, making 52^7= ~1028 billion possible passwords. It is well-known that 7 characters is pretty insecure, even when randomly generated. For randomness, it's the more the better up until about 128 bits of entropy. 
A little more than that helps buffer against cryptographic weakenings of algorithms, but really, you don't want to memorize 128 bits of entropy anyway. Let's say we want to go for 80 bits of entropy, which is a good compromise for almost anything. To convert "number of possible values" to "bits of entropy", we need to use this formula: log(n)/log(2) , where n is the number of possible values. So if you have 26 possible values (1 letter), that would be log(26)/log(2)= ~4.7 bits of entropy. That makes sense because you need 5 bits to store a letter: the number 26 is 11010 in binary. A dictionary of 8000 words needs about 7 words to get above the desired 80 bits: log(8000^7)/log(2)= ~90.8 bits of entropy. Six words would be: log(8000^6)/log(2)= ~77.8 bits of entropy. A large dictionary helps a lot, compared to the relatively small Diceware dictionary of 7776 words. The Oxford English Dictionary has 600k words . With that many words, a phrase of four randomly chosen words is almost enough: log(600 000^4)/log(2)= ~76.8 bits of entropy. But at 600 thousand words, that includes very obscure and long words. A dictionary with words that you can reasonably remember might have a hundred thousand or so. Instead of the seven words that we need with Diceware, we need five words in our phrase when selecting randomly from a dictionary of 100k words: log(100 000^5)/log(2)= ~83.0 bits of entropy. Adding one more word to your phrase helps more than adding ten thousand words to your dictionary, so length beats complexity, but a good solution balances the two. Diceware seems a little small to me, but perhaps they tested with different sizes and found this to be a good balance. I am not a linguist :). Just for comparison, a password (consisting of a-z, A-Z, and 0-9) needs 14 characters to reach the same strength: log(62^14)/log(2)= ~83.4 bits of entropy. | {
"source": [
"https://security.stackexchange.com/questions/208937",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
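A small sketch that reproduces the entropy arithmetic from the answer above, so the dictionary-size versus phrase-length trade-off can be recomputed for other parameters; the dictionary sizes are the same illustrative figures the answer uses (Diceware rounded to 8000 words, a 100k-word dictionary, and a 62-character password alphabet):
import math

def passphrase_entropy_bits(dictionary_size: int, words: int) -> float:
    # Entropy of a phrase of `words` words drawn uniformly at random from the dictionary.
    return words * math.log2(dictionary_size)

def password_entropy_bits(alphabet_size: int, length: int) -> float:
    # Entropy of a random character password.
    return length * math.log2(alphabet_size)

print(passphrase_entropy_bits(8000, 6))      # ~77.8 bits
print(passphrase_entropy_bits(8000, 7))      # ~90.8 bits
print(passphrase_entropy_bits(100_000, 5))   # ~83.0 bits
print(password_entropy_bits(62, 14))         # ~83.4 bits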
209,126 | I have started receiving unexpected emails from Yorkshire Bank. I have never been a customer. I don't recall applying for any of their products either, although maybe I did many years ago. The first of the strange emails reads: Your partial postcode is 8NX We've included your postcode at the top so you can be sure this email
is from Yorkshire Bank. To see how you can stay safe online, visit the
Security Centre We've sent your Authentication Code Letter Hi Mr Stewart, You should have received our letter that contains an authentication
code by now. Once you've got our letter, you can confirm the code by clicking on
the button below to get back to your application, and then follow the
instructions. This will allow us to progress with your application. The sender address appears to be the legitimate email.yorkshirebank.co.uk, but "8NX" is NOT part of my postcode in any way. There have been three follow-up emails, of an advertising nature. I feel that ignoring them would be the wrong thing to do, but I'm not sure what to do. My main concern is that my identity has been stolen for the purpose of procuring Yorkshire Bank products, such as a loan, which I may be chased for when the identity thief defaults. Is this likely or even possible? What other explanation might there be? | I feel that ignoring would be the wrong thing to do, but I'm not sure what to do. If you feel that ignoring this is wrong, look up the bank's phone number from a reputable source, e.g. the yellow pages or the bank's actual website. Call them, and ask. Or submit a contact form on their website, or similar - in short, contact them through a channel not related to the e-mail and ask them to verify the content. Do not use the links in the e-mail. If it's identity fraud, they will be very interested in clearing it up. If it's phishing, banks like to be made aware of ongoing phishing attempts, so they will not be angry with you for calling. | {
"source": [
"https://security.stackexchange.com/questions/209126",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/159619/"
]
} |
209,266 | Wikipedia describes credential stuffing as a type of cyberattack where stolen account credentials typically
consisting of lists of usernames and/or email addresses and the
corresponding passwords (often from a data breach) are used to gain
unauthorized access to user accounts through large-scale automated
login. Credential Stuffing attacks are made possible because many users will reuse the same password across many sites Interestingly there doesn't appear to be Wikipedia article on password spraying. Double Octopus describes it as Password spraying is an attack that that attempts to access a large
number of accounts (usernames) with a few commonly used passwords. Password spraying is an attack that that attempts to access a large number of accounts (usernames) with a few commonly used passwords . It seems that password spraying and credential stuffing are similar in the objectives and approach. It isn't clear as to the discrete difference between the terms. Are there any and if yes, what would these be? | Credential stuffing - use a bunch of usernames and passwords which are known to be associated with them to try and access multiple sites Password spraying - use a list of usernames and some common passwords (which aren't known to have been used by someone with the usernames being sent) to try and gain access to a single site The key difference is whether the password is known to be associated with the account or not, and whether the attack aims to get access to a single site, or to multiple sites. | {
"source": [
"https://security.stackexchange.com/questions/209266",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61038/"
]
} |
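The distinction above is easiest to see as code. A schematic sketch only - try_login is a hypothetical stub standing in for an HTTP login attempt, not a real client:
def try_login(site: str, username: str, password: str) -> None:
    # Stub standing in for an HTTP login attempt against `site`.
    print(f"trying {username!r}:{password!r} against {site}")

def credential_stuffing(breached_pairs, target_site):
    # Each (username, password) pair is already known to belong together,
    # e.g. leaked together from a breach of some other site.
    for username, password in breached_pairs:
        try_login(target_site, username, password)

def password_spraying(usernames, common_passwords, target_site):
    # No per-user knowledge: a few common passwords are tried across many accounts,
    # typically slowly, to stay under per-account lockout thresholds.
    for password in common_passwords:
        for username in usernames:
            try_login(target_site, username, password)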
209,303 | I've had my USB for quite a while now. It contains quite a lot of personal information on it. (I know, I'm stupid for not encrypting it before hand).
Is it too late to encrypt my drive even if files are on there already? I heard somewhere that its best to encrypt the drive as soon as you purchase it and plug it in. | Credential stuffing - use a bunch of usernames and passwords which are known to be associated with them to try and access multiple sites Password spraying - use a list of usernames and some common passwords (which aren't known to have been used by someone with the usernames being sent) to try and gain access to a single site The key difference is whether the password is known to be associated with the account or not, and whether the attack aims to get access to a single site, or to multiple sites. | {
"source": [
"https://security.stackexchange.com/questions/209303",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/192935/"
]
} |
209,355 | I want to wipe all residual data left behind even after a format on a regular 64GB fash drive, the ones someone can scan and recover data. What's the most efficient but quickest way to do this? Any test software I can scan for those residual files before and after the wipe? | Next time you're about to put sensitive data on a flash drive, consider encrypting it first ! Strongly encrypted data is useless without the key, and if you securely erase the drive first, all that will be left is an occasional sector of such encrypted data surviving due to wear leveling. If you're still unsatisfied by this technique because there's a small probability that (a) a meaningful chunk of data survives and (b) the adversary will be able to read it out and (c) decrypt it, consider that physical destruction may not destroy the data definitely: there will be a chance that one night you will sleepwalk to a potential adversary and sleeptalk the data to them. Edit addressing some of the comments : consumer-grade flash storage does have over-provisioning, e.g. SanDisk microSD Product Manual tells it's an intrinsic function in their products. And this over-provisioning is much more significant that the difference between 1GB and 1GiB, in fact, the ability to use low-grade flash wafers is why the flash storage is so cheap. On such wafers, 5% to 10% of the cells are stillborn, and a few others will only last a few write cycles, while a decent flash card or thumb drive is typically specced to survive 100-500 complete overwrites. Furthermore, the chance of a random sector to survive N full overwrites (assuming 15% over-provisioning) is not 0.15^N . Wear leveling is nowhere near uniform write distribution, in fact, if a file stays on the flash drive for a long time while other content is written/removed/overwritten, sectors allocated to that file will have significantly less writes done to them, so they may be overwritten every single time during subsequent full-disk overwrites. Additionally, wear leveling is not based exclusively on write count, but also on the number of correctable errors in a sector. If a sector containing sensitive data exceeds such correctable error threshold, it will never be written to again, so the data in it will be there no matter how many times you overwrite the disk. | {
"source": [
"https://security.stackexchange.com/questions/209355",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/192935/"
]
} |
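A minimal sketch of the "encrypt it before it ever touches the flash drive" advice above, using the Python cryptography package's Fernet recipe (authenticated symmetric encryption). The sample data and the idea of keeping the key off the drive are illustrative assumptions, not a prescribed workflow:
from cryptography.fernet import Fernet        # third-party "cryptography" package

key = Fernet.generate_key()                   # keep the key somewhere that is NOT the flash drive
f = Fernet(key)

secret = b"scan of passport, tax records, ..."   # placeholder for the sensitive files
ciphertext = f.encrypt(secret)                   # only this ciphertext gets written to the drive

# Later, on a trusted machine that still holds the key:
assert f.decrypt(ciphertext) == secret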
209,448 | I was working on a project, a private repo , and suddenly all the commits disappeared and were replaced with a single text file saying To recover your lost code and avoid leaking it: Send us 0.1 Bitcoin
(BTC) to our Bitcoin address 1ES14c7qLb5CYhLMUekctxLgc1FV2Ti9DA and
contact us by Email at [email protected] with your Git login and a
Proof of Payment. If you are unsure if we have your data, contact us
and we will send you a proof. Your code is downloaded and backed up on
our servers. If we dont receive your payment in the next 10 Days, we
will make your code public or use them otherwise. At the time of this happening, Google search didn't show up anything, but in an hour or so this started coming up. I am using SourceTree (always up-to-date) but somehow I doubt that SourceTree is the issue, or that my system (Windows 10) was compromised. I'm not saying it's not that, it's just that I doubt it. This happened only to one of my repositories (all of them private) and all the others were left untouched. I changed my password, enabled 2 factor authentication, removed one access token that I wasn't using for years and wrote an email to GitLab in the hopes that they could tell me something about where/who the attacker got in. My password was a weak one that could've been relatively easily cracked via brute-force (it's not a common one but starts with "a" and has only a-z characters in it) and it could be that they just automatically checked if they can access the account and then ran some git commands. It is also possible that my email address and that particular password are on a list of leaked accounts. One might argue that if this is how they got in, they would've simply changed the account credentials but searching the Internet revealed that in these cases GitLab/GitHub will simply restore the credentials for you, and so I assume this is why they didn't do it this way. Could've also been that old access token, I can't remember what and where I used it for in the past - most likely generated for use on a computer I previously owned, so I doubt that that was the issue. There are also 4 developers working on it, all having full access to the repository, so their accounts being compromised is also a possibility. I've scanned my computer with BitDefender and couldn't find anything but I am not doing shady things on the internet so I don't think that me being infected with a malware/trojan is what caused this. I am waiting for an answer from GitLab and maybe they can shed some light on this. I have the code base on my local Git, so that is not an issue, but I am not pushing the code back to the repository just yet. Also, just in case the code gets published somewhere, I will change any passwords that are to be found in the source (databases, IMAP accounts) UPDATE I found out that the code isn't gone. I tried accessing a commit's hash and it worked. So the code is there but there's something wrong with the HEAD. My knowledge on this is very limited but git reflog shows all my commits. What this means to me is that the attackers most likely didn't clone the repositories (would be a logistical nightmare to do this for all the victims, anyway) and that the chances for them going over the source code looking for sensitive data, or of making the code public are low. It also means to me that is not a targeted attack but a random, bulk attack, carried out by a script. I really hope this is the case for our own sake! UPDATE 2 So, if you do git checkout origin/master you will see the attacker's commit git checkout master you will see all your files git checkout origin/master
git reflog # take the SHA of the last commit of yours
git reset [SHA] will fix your origin/master...but git status now will say HEAD detached from origin/master still searching for a fix on this UPDATE 3 If you have the files locally, running git push origin HEAD:master --force will fix everything. See Peter 's comment So, the question is what commands will get my repository back to the previously working state assuming you don't have the repo locally, as for how the attacked got in, I am hoping that the answer from GitLab (if any) will help us more. There is a discussion going on here The attack targets GitHub, BitBucket and GitLab accounts. Here 's the magnitude on GitHub's public repos | You can use git reflog in a clone and checkout the last commit before this happened. It happened because .git/config on your webserver (in the directory of the cloned repo) includes the remote URLs and people added username:password in it which should never be the case - people should use SSH, deploy keys or authenticate on each pull. Never store your credentials in a config file. Use the credential helper(s). Source: https://www.reddit.com/r/git/comments/bk1eco/comment/emg3cxg hello, it is me , the guy with your backups .. i will reveal your sins Here is an article from 2015, its more detailed, https://en.internetwache.org/dont-publicly-expose-git-or-how-we-downloaded-your-websites-sourcecode-an-analysis-of-alexas-1m-28-07-2015/ Article by Internetwache about this: https://en.internetwache.org/dont-publicly-expose-git-or-how-we-downloaded-your-websites-sourcecode-an-analysis-of-alexas-1m-28-07-2015/ To prevent this either block access to directories starting with a dot, see https://github.com/h5bp/html5-boilerplate/blob/master/dist/.htaccess#L528-L551 # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Block access to all hidden files and directories with the exception of
# the visible content from within the `/.well-known/` hidden directory.
#
# These types of files usually contain user preferences or the preserved
# state of an utility, and can include rather private places like, for
# example, the `.git` or `.svn` directories.
#
# The `/.well-known/` directory represents the standard (RFC 5785) path
# prefix for "well-known locations" (e.g.: `/.well-known/manifest.json`,
# `/.well-known/keybase.txt`), and therefore, access to its visible
# content should not be blocked.
#
# https://www.mnot.net/blog/2010/04/07/well-known
# https://tools.ietf.org/html/rfc5785
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_URI} "!(^|/)\.well-known/([^./]+./?)+$" [NC]
RewriteCond %{SCRIPT_FILENAME} -d [OR]
RewriteCond %{SCRIPT_FILENAME} -f
RewriteRule "(^|/)\." - [F]
</IfModule> Or separate the .git directory and the data using --separate-git-dir . --separate-git-dir=<git dir> Instead of initializing the repository as a directory to either $GIT_DIR or ./.git/, create a text file there containing the path to the actual repository. This file acts as filesystem-agnostic Git symbolic link to the repository. If this is reinitialization, the repository will be moved to the specified path. But the best is to rm -rf .git after a deployment - which should just copy a build artefact to the destination using rsync . https://git-scm.com/docs/git-init#Documentation/git-init.txt---separate-git-dirltgitdirgt --separate-git-dir=<git dir> Instead of placing the cloned repository where it is supposed to be, place the cloned repository at the specified directory, then make a filesystem-agnostic Git symbolic link to there. The result is Git repository can be separated from working tree. https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---separate-git-dirltgitdirgt https://stackoverflow.com/a/8603156/753676 Information about deploy keys and the credential helpers: https://developer.github.com/v3/guides/managing-deploy-keys/ Deploy keys are read-only by default, but you can give them write access when adding them to a repository. https://gist.github.com/zhujunsan/a0becf82ade50ed06115 https://help.github.com/en/articles/caching-your-github-password-in-git Use git push -u origin master -f && git push --tags -f from your local clone to push all references for master, tags and so on to the remote and then enable 2FA in your account. If more branches are affected use git push -u --all -f Also please enable 2FA to decrease the possibility of such attacks. Please do not forget to change all compromised logins / passwords and revoke any unknown sessions. | {
"source": [
"https://security.stackexchange.com/questions/209448",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/207009/"
]
} |
209,529 | This message was sent to my websocket: echo kernel.unprivileged_userns_clone = 1 | sudo tee /etc/sysctl.d/00-local-userns.conf Is it dangerous, and what would it do? Thanks for your feedback everyone, chances are it was someone trying to install the Brave Browser who accidentally pasted into the wrong box and I'm overreacting. | Enabling unprivileged user namespaces can make severe vulnerabilities in the Linux kernel much more easily exploitable. If you did not intend to enable it, you should ensure it is disabled. Numerous vulnerabilities that are found regularly are often only exploitable by unprivileged users if unprivileged user namespaces are supported and enabled by the kernel. Unless you truly need it, just disable it. The reason for this is that much of the kernel that is only intended to be reachable by UID 0 is not audited particularly well, given that the code is typically considered to be trusted. That is, a bug that requires a UID of 0 is rarely considered a serious bug. Unfortunately, unprivileged user namespaces make it possible for unprivileged users to access this very same code and exploit security bugs. From Brad Spengler in 10 Years of Linux Security , describing exploitation trends: Attack surface exposed by unprivileged user namespaces isn’t decreasing anytime soon Even more functionality being exposed: 2abe05234f2e l2tp: Allow management of tunnels and session in user namespace 4a92602aa1cd openvswitch: allow management from inside user namespaces 5617c6cd6f844 nl80211: Allow privileged operations from user namespaces "Does this newly-allowed code pass existing fuzzing tests?" doesn’t appear to be a consideration for enabling such functionality A few examples of vulnerabilities only exploitable on systems with unprivileged user namespaces: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-16120 https://brauner.github.io/2019/02/12/privileged-containers.html https://www.halfdog.net/Security/2016/UserNamespaceOverlayfsXattrSetgidPrivilegeEscalation/ https://www.halfdog.net/Security/2015/UserNamespaceOverlayfsSetuidWriteExec/ https://googleprojectzero.blogspot.com/2017/05/exploiting-linux-kernel-via-packet.html https://seclists.org/fulldisclosure/2016/Feb/123 https://seclists.org/oss-sec/2016/q4/607 https://seclists.org/oss-sec/2022/q1/205 https://seclists.org/oss-sec/2022/q1/54 https://www.openwall.com/lists/oss-security/2015/12/31/5 https://www.rapid7.com/db/modules/exploit/linux/local/nested_namespace_idmap_limit_priv_esc/ https://lwn.net/Articles/543539/ https://lwn.net/Articles/543442/ https://securityonline.info/cve-2022-32250-linux-kernel-privilege-escalation-vulnerability/ https://www.randorisec.fr/crack-linux-firewall/ Note that this particular sysctl may get deprecated and possibly removed soon. This is because a little more attention is being spent on finding unprivileged user namespace bugs and the feature is not quite as horribly insecure as it once was. Unfortunately, it is still quite insecure (in part due to the lack of CVE reporting for security bugs in the Linux kernel). If that is the case for your particular distro, you can disable user namespaces directly by setting user.max_user_namespaces = 0 . | {
"source": [
"https://security.stackexchange.com/questions/209529",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/201519/"
]
} |
209,531 | The scenario is: somebody without concern about security is navigating through the web. This person will access doubtful websites, like adult content or media sharing, for example. Between a pc with Windows and a smartphone with android, which one is a less bad option for this person? If the answer change depending on windows or android versions, please specify this versions. | First, here I compare an up-to-date Android phone which receives regular updates with a Windows PC which receives regular updates. While this might be the normal case if you buy a PC with Windows 10 it is not guaranteed if you just buy a cheap Android phone. Thus, I assume that you use a vendor and product known for its good product support, like phones from Google or the Android One phones. Even then the phones will only get updates for a few years, which is usually not as long as a PC would get updates. Thus, you might need to replace the phone after a few years with another one. With this in mind ... The security features of the underlying OS in terms of protecting the applications itself are basically the same, i.e. both provide hardening of the kernel, offer layered security with sandboxes inside the browser etc. One major disadvantage of Windows compared to Android is that in Windows all applications started by a user essentially run as the same user and can thus affect each other. This means that a compromised word document could lead to the installation of malware which could read the password store of the web browser. In Android instead the different apps are more isolated between each other since they are running as different users and data have to be explicitly shared between the applications except for data stored on some common storage where all apps have access. Another advantage of Android is that applications are usually installed from the Google Play Store and the user needs to be explicitly go into the settings and allow apps from third-party places to be installed. And while Windows has some kind of app store too it is currently common to install apps just downloaded from the internet, from some CD-ROM or an USB drive. This attack vector is actively used to trick users into installing some apps, because they are allegedly needed to view a video on some (usually illegal) video sharing site, allegedly are the security update for Adobe Flash which is needed or similar. While an app store like the Google Play Store might contain bad apps too (and often did in the past) it is still much less likely to get a bad app from the app store than one would get from just downloading something from the internet. And, as explained in the previous point, the harm a malicious application can do in Windows is significantly higher than what it can do in Android. Additionally entire classes of attack vectors which affect PC's are not relevant on Android phones: there is no Flash, no Java applets, no macros in Office documents, no EXE, SCR, ..., which means many of the typical malicious payloads in mails will simply not work. Credential phishing done through mail or by tricking users when browsing the web is relevant on both platforms though. One main disadvantage of a phone vs. a PC is the smaller screen size and therefore reduced information and the ways information can be displayed by interacting with the device. For example there is no such thing as hover over a link or click right for a context menu in order to receive more information about the real link vs the claimed link. 
Often the URL of the visited site is also not shown to save crucial screen space for the actual content. But, given your intended non-technical audience this loss of information might not be that much of a problem since this kind of audience can probably not deal with this detail of information anyway. But in summary I think that an Android phone which is currently up-to-date and will be kept-up-to-date (which means buying a new one after some years) is the better choice for a non-technical person with only a few needs in terms of communication, i.e. basically web browsing, mail and messaging. | {
"source": [
"https://security.stackexchange.com/questions/209531",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/199846/"
]
} |
209,652 | I know the general advice that we should never design¹ a cryptographic algorithm. It has been talked about very extensively on this site and on the websites of professionals of such caliber as Bruce Schneier. However, the general advice goes further than that: It says that we should not even implement algorithms designed by wiser than us, but rather, stick to well known, well tested implementations made by professionals. And this is the part that I couldn't find discussed at length. I also made a brief search of Schneier's website, and I couldn't find this assertion there either. Therefore, why are we categorically advised against implementing crypto algorithms as well? I'd most appreciate an answer with a reference to an acclaimed security expert talking about this. ¹ More precisely, design to our hearts' content; it might be a good learning experience; but please please please please , never use what we designed. | The reason why you want to avoid implementing cryptographic algorithms yourself is because of side-channel attacks . What is a side-channel? When you communicate with a server, the content of the messages is the "main" channel of communication. However, there are several other ways for you to get information from your communication partner that doesn't directly involve them telling you something. These include, but are not limited to: The time it takes to answer you The energy the server consumes to process your request How often the server accesses the cache to respond to you. What is a side-channel attack? Simply put, a side-channel attack is any attack on a system involving one of these side-channels. Take the following code as an example: public bool IsCorrectPasswordForUser(string currentPassword, string inputPassword)
{
// If both strings don't have the same length, they're not equal.
if (currentPassword.length != inputPassword.length)
return false;
// If the content of the strings differs at any point, stop and return they're not equal.
for(int i = 0; i < currentPassword.length; i++)
{
if (currentPassword[i] != inputPassword[i])
return false;
}
// If the strings were of equal length and never had any differences, they must be equal.
return true;
} This code seems functionally correct, and if I didn't make any typos, then it probably does what it's supposed to do. Can you still spot the side-channel attack vector? Here's an example to demonstrate it: Assume that a user's current password is Bdd3hHzj (8 characters) and an attacker is attempting to crack it. If the attacker inputs a password that is the same length, both the if check and at least one iteration of the for loop will be executed; but should the input password be either shorter or longer than 8 characters, only the if will be executed. The former case is doing more work and thus will take more time to complete than the latter; it is simple to compare the times it takes to check a 1-char, 2-char, 3-char etc. password and note that 8 characters is the only one that is notably different, and hence likely to be the correct length of the password. With that knowledge, the attacker can refine their inputs. First they try aaaaaaaa through aaaaaaaZ , each of which executes only one iteration of the for loop. But when they come to Baaaaaaa , two iterations of the loop occur, which again takes more time to run than an input starting with any other character. This tells the attacker that the first character of the user's password is the letter B , and they can now repeat this step to determine the remaining characters. How does this relate to my Crypto code? Cryptographic code looks very different from "regular" code. When looking at the above example, it doesn't seem wrong in any significant way. As such, when implementing things on your own, it might not be obvious that code which does what it's supposed to do just introduced a serious flaw. Another problem I can think of is that programmers are not cryptographers. They tend to see the world differently and often make assumptions that can be dangerous. For example, look at the following unit test: public void TestEncryptDecryptSuccess()
{
string message = "This is a test";
KeyPair keys = MyNeatCryptoClass.GenerateKeyPair();
byte[] cipher = MyNeatCryptoClass.Encrypt(message, keys.Public);
string decryptedMessage = MyNeatCryptoClass.Decrypt(cipher, keys.Private);
Assert.Equals(message, decryptedMessage);
} Can you guess what's wrong? I have to admit, that wasn't a fair question. MyNeatCryptoClass implements RSA and is internally set to use a default exponent of 1 if no exponent is explicitly given. And yes, RSA will work just fine if you use a public exponent of 1. It just won't really "encrypt" anything, since "x^1" is still "x". You might ask yourself who in their right mind would do that, but there are cases of this actually happening. Implementation Errors Another reason why you might go wrong implementing your own code is implementation errors. As user Bakuridu points out in a comment, bugs in crypto code are fatal in comparison to other bugs. Here are a few examples: Heartbleed Heartbleed is probably one of the most well-known implementation bugs when it comes to cryptography. While not directly involving the implementation of cryptographic code, it nonetheless illustrates how monstrously wrong things can go with a comparatively "small" bug. While the linked Wikipedia article goes much more in-depth on the issue, I would like to let Randall Munroe explain the issue much more concisely than I ever could: https://xkcd.com/1354/ - Image Licensed under CC 2.5 BY-NC Debian Weak PRNG Bug Back in 2008, there was a bug in Debian which affected the randomness of all further key material used. Bruce Schneier explains the change that the Debian team made and why it was problematic. The basic gist is that tools checking for possible problems in C code complained about the use of uninitialized variables. While usually this is a problem, seeding a PRNG with essentially random data is not bad. However, since nobody likes staring at warnings and being trained to ignore warnings can lead to its own problems down the line, the "offending" code was removed at some point, thus leading to less entropy for OpenSSL to work with. Summary In summary, don't implement your own crypto unless it's designed to be a learning experience! Use a vetted cryptographic library designed to make it easy to do it right and hard to do it wrong. Because crypto is very easy to do wrong. | {
"source": [
"https://security.stackexchange.com/questions/209652",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/108649/"
]
} |
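The string-comparison side channel described in the answer above is normally closed by comparing in constant time instead of returning at the first mismatch. A short sketch of that fix, shown here in Python rather than the answer's C#-style example, using the standard library's hmac.compare_digest plus a hand-rolled version of the same idea; in practice you would compare password hashes, never raw passwords:
import hmac

def hashes_match(stored_hash: bytes, computed_hash: bytes) -> bool:
    # Library routine: examines every byte regardless of where mismatches occur.
    return hmac.compare_digest(stored_hash, computed_hash)

def constant_time_equals(a: bytes, b: bytes) -> bool:
    # Hand-rolled illustration of the same idea: accumulate differences instead of
    # returning early, so the running time does not depend on the secret data.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0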
209,681 | Every so often I will find myself on a mailing list that I know I didn't sign up for. This always gets me very cautious and I treat the junk mail as hostile spam - don't click, just delete and ignore. However, after regularly receiving mail from the same address (usually newsletters or training advertisements), I'll usually investigate to determine if I can safely unsubscribe rather than have to put up with junk in my inbox. When I examine the Unsubscribe links in the mail, they'll often go to well-known email marketing sites like Constant Contact or Mail Chimp. I take this as an indication that it's generally safe to follow that link, and I will usually unsubscribe that way, especially if there are other indications that the source of the junk is legitimate. This way I only ever go to the website of the marketing company, not the originator. Is it reasonable to assume that spam coming through legitimate email marketing companies can be treated as safe, or at least safe enough to unsubscribe? By unsubscribing, do spammers learn that my email address is active? If it is safe, is there a good way to determine which marketing companies are trustworthy enough that I can feel safe unsubscribing? | The reason why you want to avoid implementing cryptographic algorithms yourself is because of side-channel attacks . What is a side-channel? When you communicate with a server, the content of the messages is the "main" channel of communication. However, there are several other ways for you to get information from your communication partner that doesn't directly involve them telling you something. These include, but are not limited to: The time it takes to answer you The energy the server consumes to process your request How often the server accesses the cache to respond to you. What is a side-channel attack? Simply put, a side-channel attack is any attack on a system involving one of these side-channels. Take the following code as an example: public bool IsCorrectPasswordForUser(string currentPassword, string inputPassword)
{
// If both strings don't have the same length, they're not equal.
if (currentPassword.length != inputPassword.length)
return false;
// If the content of the strings differs at any point, stop and return they're not equal.
for(int i = 0; i < currentPassword.length; i++)
{
if (currentPassword[i] != inputPassword[i])
return false;
}
// If the strings were of equal length and never had any differences, they must be equal.
return true;
} This code seems functionally correct, and if I didn't make any typos, then it probably does what it's supposed to do. Can you still spot the side-channel attack vector? Here's an example to demonstrate it: Assume that a user's current password is Bdd3hHzj (8 characters) and an attacker is attempting to crack it. If the attacker inputs a password that is the same length, both the if check and at least one iteration of the for loop will be executed; but should the input password be either shorter or longer than 8 characters, only the if will be executed. The former case is doing more work and thus will take more time to complete than the latter; it is simple to compare the times it takes to check a 1-char, 2-char, 3-char etc. password and note that 8 characters is the only one that is notably different, and hence likely to be the correct length of the password. With that knowledge, the attacker can refine their inputs. First they try aaaaaaaa through aaaaaaaZ , each of which executes only one iteration of the for loop. But when they come to Baaaaaaa , two iterations of the loop occur, which again takes more time to run than an input starting with any other character. This tells the attacker that the first character of the user's password is the letter B , and they can now repeat this step to determine the remaining characters. How does this relate to my Crypto code? Cryptographic code looks very different from "regular" code. When looking at the above example, it doesn't seem wrong in any significant way. As such, when implementing things on your own, it might not be obvious that code which does what it's supposed to do just introduced a serious flaw. Another problem I can think of is that programmers are not cryptographers. They tend to see the world differently and often make assumptions that can be dangerous. For example, look at the following unit test: public void TestEncryptDecryptSuccess()
{
string message = "This is a test";
KeyPair keys = MyNeatCryptoClass.GenerateKeyPair();
byte[] cipher = MyNeatCryptoClass.Encrypt(message, keys.Public);
string decryptedMessage = MyNeatCryptoClass.Decrypt(cipher, keys.Private);
Assert.Equals(message, decryptedMessage);
} Can you guess what's wrong? I have to admit, that wasn't a fair question. MyNeatCryptoClass implements RSA and is internally set to use a default exponent of 1 if no exponent is explicitly given. And yes, RSA will work just fine if you use a public exponent of 1. It just won't really "encrypt" anything, since "x 1 " is still "x". You might ask yourself who in their right mind would do that, but there are cases of this actually happening . Implementation Errors Another reason why you might go wrong implementing your own Code is implementation errors. As user Bakuridu points out in a comment, bugs in Crypto code are fatal in comparison to other bugs. Here are a few examples: Heartbleed Heartbleed is probably one of the most well-known implementation bugs when it comes to cryptography. While not directly involving the implementation of cryptographic code, it nonetheless illustrates how monstrously wrong things can go with a comparatively "small" bug. While the linked Wikipedia article goes much more in-depth on the issue, I would like to let Randall Munroe explain the issue much more concisely than I ever could: https://xkcd.com/1354/ - Image Licensed under CC 2.5 BY-NC Debian Weak PRNG Bug Back in 2008, there was a bug in Debian which affected the randomness of all further key material used. Bruce Schneier explains the change that the Debian team made and why it was problematic. The basic gist is that tools checking for possible problems in C code complained about the use of uninitialized variables. While ususally this is a problem, seeding a PRNG with essentially random data is not bad. However, since nobody likes staring at warnings and being trained to ignore warnings can lead to its own problems down the line, the "offending" code was removed at some point, thus leading to less entropy for OpenSSL to work with. Summary In summary, don't implement your own Crypto unless it's designed to be a learning experience! Use a vetted cryptographic library designed to make it easy to do it right and hard to do it wrong. Because Crypto is very easy to do wrong. | {
"source": [
"https://security.stackexchange.com/questions/209681",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/140214/"
]
} |
209,812 | Google Authenticator uses the TOTP algorithm to generate your One-Time Password (OTP). TOTP works like this : The server generates a secret key and shares with the client (you) when the client registers with the server. Using the shared key and the current timestamp, a new password is generated every 30 seconds. If anyone has the shared key, then they can generate the OTP themselves using the TOTP algorithm. Isn't this similar to a password? Doesn't it get reduced to having two passwords - One is the password that you use to login and the other is the shared key between you and the server? | Passwords are revealed every time you use them: if you have two passwords and you type them into a fraudulent web form, they are both stolen. The shared secret can't be calculated from a single OTP (or even from a set of them**), so a stolen OTP is only valid for limited time. The shared secret is never transferred during the authentication, so stealing it requires a different attack vector: access to the device where it is kept or copying it (e.g. its QR code) during the initialization. ** Calculating shared secrets backwards would be very impractical, as it's a one-way algorithm. Also, the minimum key length is 128 bits and the algorithm produces only 6 numbers i.e. ~20 bit OTP. This means for every OTP there would be oceans of potential shared secrets, and finding even a single match would only be possible with brute force i.e. calculating 2^128 hashes for every 30 seconds and ruling out every OTP that didn't match. | {
"source": [
"https://security.stackexchange.com/questions/209812",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8201/"
]
} |
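For reference, a compact sketch of the TOTP computation discussed above (RFC 6238 style: HMAC-SHA1, 30-second steps, 6 digits). It shows that the one-time code is derived from the shared secret plus the current time and that the secret itself is never transmitted; this is an illustration, not Google Authenticator's exact code, and the secret below is a made-up example:
import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, timestep: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // timestep                    # changes every 30 seconds
    msg = struct.pack(">Q", counter)                          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Client and server run the same function with the same shared secret; an eavesdropper
# who captures one 6-digit code cannot feasibly work back to the key from it.
print(totp("JBSWY3DPEHPK3PXP"))                               # made-up example secret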
209,908 | A common saying among people in the field of cryptography and security is that when providing a back door to law enforcement, you also provide a back door for hackers. I was trying to examine the implementation of Lawful Interception from 4G and the proposed implementation in 5G and to me it looks secure. The only way for a hacker to gain information that they shouldn't would be if they knew the private key of the base station. If we assume that the private key of the base station is secure, what could a hacker do that they could not have done without Lawful Interception being implemented? | Without access to the key, then the problem for attackers is the same as if there was no backdoor key: the attackers would have to break the encryption itself. But ... If we assume that the private key of the base station is secure Your base assumption is the one that requires challenge. That there is a key is the problem. key handling key misuse key leakage key strength key protection Each one of these elements needs to be secured for the key to be secure. And there are a lot of moving parts there and a lot of ways for people to cause weaknesses and ways for malicious actors to manipulate controls to their advantage. Even if we perfectly trusted all law enforcement not to be malicious, ever (a sensitive topic on its own, but of course, impossible) then there are still lots of ways for weaknesses to creep in or for trusted people to be manipulated. Once the door is there, it will become the intense focus of those with time, resources, and strong desire wanting access. How resilient will those with legitimate access be against such an onslaught? How perfect will those people be in engaging in the established procedures even without external pressures? Once you cut a hole in a wall, it becomes a point of weakness. The strongest lock will not compensate for hinges that can be broken. | {
"source": [
"https://security.stackexchange.com/questions/209908",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/206997/"
]
} |
210,045 | This is something that happened to me a few months ago. I don't know if it is a hack attempt, although I can't think of any way that there could be any danger or any personal information gained. I don't have a Netflix account and never have done. I have a Gmail address which I have never used for public communication. Suddenly I started getting email to this Gmail address from Netflix - not a "Welcome to Netflix" email or one requesting address verification, but what looked like a monthly promo for an existing account. This was addressed to someone with a different real name, with that name not similar in any way to the Gmail name. After a few of these messages I decided to investigate by going to Netflix and trying to log in with that email address. Using the "forgotten password" option I was able to get a password reset email, change the password and log in. The account appeared to be from Brazil, with some watch history but no other personal details stored and no payment information. Soon the emails from Netflix started to ask me to update payment information. I didn't, of course, and then they changed to "your account will be suspended" and then "your account has been suspended". The "come back to Netflix" emails are still coming in occasionally. I don't see how this could possibly be a phishing attempt - I carefully checked that I was on the real Netflix site, used a throwaway password not used on any other sites, and did not enter any of my personal information. I also checked the headers of the emails carefully and they were sent by Netflix. So is this just a mistake on somebody's part, mistyping an email address (although it's surprising that Netflix accepted it with no verification), or something more sinister? | I think it's likely that someone is trying to trick you into paying for Netflix for them. From: https://jameshfisher.com/2018/04/07/the-dots-do-matter-how-to-scam-a-gmail-user/ : More generally, the phishing scam here is: Hammer the Netflix signup form until you find a gmail.com address which is “already registered”. Let’s say you find the victim jameshfisher . Create a Netflix account with address james.hfisher . Sign up for free trial with a throwaway card number. After Netflix applies the “active card check”, cancel the card. Wait for Netflix to bill the cancelled card. Then Netflix emails james.hfisher asking for a valid card. Hope Jim reads the email to james.hfisher , assumes it’s for his Netflix account backed by jameshfisher , then enters his card **** 1234 . Change the email for the Netflix account to [email protected] , kicking Jim’s access to this account. Use Netflix free forever with Jim’s card **** 1234 ! (Note that the above steps don't include any "password reset" step for Jim to access the account; that's because the email from Netflix includes authenticated links that won't ask for it. The attacker wants the victim to click on the email links instead of visiting Netflix manually, this is what enables "Eve" to log back in to the account in step 7. Or, since Netflix emails authenticated links, possibly "Eve" already has one.) The above situation is partially caused by Netflix (understandably) not recognizing Gmail's "dots don't matter" feature where email sent to [email protected] and to [email protected] end up in the same account. That doesn't really matter in your case (given that if this is how you're trying to be scammed, step 1 was skipped entirely), however. 
A bigger problem is that Netflix apparently still allows people to register email addresses to accounts without verification. | {
"source": [
"https://security.stackexchange.com/questions/210045",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/207781/"
]
} |
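Service operators can blunt the specific trick described above by treating dotted Gmail variants as the same mailbox at signup time. A rough sketch of that normalisation; the rules (ignoring dots and "+tags") are specific to Gmail and should be verified against current provider behaviour, and the addresses are made-up examples:
def canonical_gmail(address: str) -> str:
    local, _, domain = address.strip().lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0]        # drop "+tag" suffixes
        local = local.replace(".", "")        # Gmail ignores dots in the local part
    return f"{local}@{domain}"

# Both spellings map to one mailbox, so a duplicate-account check can catch the trick:
print(canonical_gmail("jane.doe@gmail.com") == canonical_gmail("janedoe@gmail.com"))   # True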
210,072 | Why does SSL labs now mark CBC 256 suites as weak, although equivalent GCM and ChaCha20 are considered strong? Until a few months ago, it was unmarked in reports (neither explicitly as weak or strong), and it is still unmarked in their client lists . The suites in question are: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_DHE_RSA_WITH_AES_256_CBC_SHA384 TLS_DHE_RSA_WITH_AES_256_CBC_SHA The SHA1s are a requirement to support Android 5 and 6 with 4x100% score. It still gets 4x100% score, but it marks it as weak, which from an OCD perspective doesn’t look “professional”. | While CBC is fine in theory, there is always the risk that an improper implementation will subject the connection to padding oracle attacks . Time and time again, CBC implementations in TLS have shown themselves to be vulnerable, and each time an implementation is fixed, it seems yet another bug making padding oracle attacks feasible appears. Lucky Thirteen was published in 2013, and variants of this attack based on side channels keep popping up. SSL Labs is just observing history and learning from it. | {
"source": [
"https://security.stackexchange.com/questions/210072",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/207820/"
]
} |
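For intuition, the "improper implementation" problem above usually comes down to the receiver revealing, through distinct errors or measurable timing differences, whether CBC padding was valid. A deliberately flawed sketch of a PKCS#7 padding check that acts as such an oracle; real attacks such as Lucky Thirteen exploit far subtler signals, and this is an illustration, not any TLS library's actual code (the MAC key is a placeholder):
import hashlib, hmac

def unpad_pkcs7(data: bytes, block_size: int = 16) -> bytes:
    n = data[-1]
    if n < 1 or n > block_size or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")            # distinct, attacker-visible error #1
    return data[:-n]

def receive_record(decrypted: bytes, mac: bytes, mac_key: bytes = b"placeholder-key") -> bytes:
    body = unpad_pkcs7(decrypted)                  # padding is checked first...
    expected = hmac.new(mac_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, mac):     # ...and the MAC only afterwards
        raise PermissionError("bad MAC")           # distinct, attacker-visible error #2
    return body

# An attacker who can submit modified ciphertext blocks and tell the two failures apart
# (directly, or via timing) can use the receiver as a "padding oracle" and recover
# plaintext byte by byte; real TLS stacks must make both failure paths indistinguishable.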
210,114 | I noticed in some cases when I get a verification code from Google it may say something along the line of: "You should not share this code with anyone else and no one from Google will ever ask for this code." OK, this seems like it's for security reasons, but the code is only a one use code so if you give someone it after it was used then it will not work. (May not apply to giving someone the code before use, however even if someone knows your username and password and was able to get an unused code and that person were to login a new code should be generated for the new session. Right?) Am I missing something, is there any reason not to share the one time code, especially the part about some random stranger calling and asking for it? Also it should have long self expired before someone had the chance to call you and ask for it in the future. | They're not being precise because they don't have to, and precise language might confuse some users. They could say, for example, "You should not share unused codes that are less than an hour old with anyone else and no one from Google will ever ask for this code." You and I would know what they mean. My father in law and grandpa won't know why, though. My father in law is a specific example of a person who would see that there are times when he can share codes, and someone scamming him out of his social security check will get access to his email as well. (Yes, most of his inbox is about mind control chemicals added to contrails and how solar flares cause earthquakes, but it might also give someone access to his bank account.) As has been pointed out in comments, there are other examples of situations that are conditionally dangerous, but people just don't include the exceptions. For example: "Never look down the barrel of a firearm [unless you have cleared the chamber]," or "never stick your finger in a light socket [unless you have turned off the power to that socket]." Since a used or expired token is useless to everyone, there's no point in keeping it, sharing it, protecting it, deleting it, or adding exceptions to general security advice. I can tell from personal experience that there are users who will do stupid things when you let them know that there are edge cases and nuances to security. Knowing that, if I were to write such a warning to my users, I'd make my statement as broad and general as possible. | {
"source": [
"https://security.stackexchange.com/questions/210114",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/94833/"
]
} |
210,243 | The site: https://www.nomoreransom.org/ offers many decrypter tools for ransomware. But why? It shouldn't be so hard to use the Windows Crypto API (e.g. just google "create AES Key in Windows") to create AES keys, encrypt them with a locally generated public RSA key, and encrypt the corresponding private RSA key with a public RSA key that the attacker controls. (This is the method WannaCry used.) If the victim pays the ransom, they have to send the encrypted private RSA key to the attackers, and hopefully get the decrypted private RSA key back. Why do these people try to reinvent the wheel and in the process make mistakes that allow the development of decrypter tools? | Disclosure: I work for one of the vendors participating in NoMoreRansom. Most modern ransomware indeed implements proper cryptography. Earlier versions were using rand() for key generation, seeding the random generators with variants of time() - this is why it was important for successful decryption to know when exactly the infection happened, ideally down to minutes. Those could be decrypted with brute force. But most modern ransomware indeed uses either the Windows Crypto API or bundled crypto libraries. However, no matter how correctly the ransomware is implemented, there is always a weak point - to facilitate decryption, the key(s) have to be stored somewhere. This location can be traced by security companies, who would work together with law enforcement to take it over. Access to the server gives the security company the ability to decrypt the ransomware victims' files. This is for example the case with GandCrab ransomware. | {
"source": [
"https://security.stackexchange.com/questions/210243",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/200179/"
]
} |
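To illustrate the weakness described in the answer above (210,243): early ransomware derived its keys from a PRNG seeded with the infection time, so knowing roughly when the infection happened collapses the search space. A minimal Python sketch of that idea — the scheme is hypothetical and not modelled on any specific family:

import random
import time

def weak_key(seed: int) -> bytes:
    # Derive a 16-byte "key" from a PRNG seeded with a timestamp --
    # the mistake the early families made.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16))

# Malware side: the key depends only on the (roughly known) infection time.
infection_time = int(time.time())
key = weak_key(infection_time)

# Recovery side: try every second in a window around the suspected infection
# time; a real decrypter would test-decrypt a known file instead of comparing keys.
window = 3600  # +/- one hour of uncertainty
for guess in range(infection_time - window, infection_time + window + 1):
    if weak_key(guess) == key:
        print(f"Recovered seed after {guess - (infection_time - window) + 1} attempts")
        break

A few thousand guesses like this is trivial, which is why knowing the infection time "ideally down to minutes" mattered for those early decrypters.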
210,356 | Today I was exploring a website used for keeping track of student grades and everything related to school. Basically like a school progress tracker for your child which is used by 90% of schools in my country. I fired up Charles proxy and connected my phone to it and installed Charles's root certificate so I can use https (the site uses it). Anyway, I logged into the site and checked what Charles captured. It captured a simple ajax call with 4 fields containing all the login credentials. Here's a screenshot: Everything is even labeled - uporabnik means "user" and geslo means "password"
So if I am understanding this correctly (I am really really just a beginner), everyone that manages to capture this can look at it? Is this only possible with a proxy or can wireshark for example also do this and just capture packets over wifi? Are my assumptions true and if they are, what should I do about it? | You seem to fundamentally misunderstand what TLS does. TLS takes the regular plain HTTP traffic and encrypts it and adds integrity checks. Together with the certificate of the server, this ensures Confidentiality : An attacker who captures the network traffic can not read the content of the communication. Integrity : If an attacker modifies the network traffic, this would result in errors. Authenticity : You can be sure that your communication partner is the server you think you communicate with. (We get to this in a second.) If you were to look at the underlying HTTP communication, you would see your username and password in plain text, because this is what you have sent to the server. What does the proxy do now? If you use a TLS Proxy such as Charles, you essentially communicate with the proxy and the proxy communicates with the web server. So what stops an attacker from just using a TLS proxy? The certificate! When you installed the TLS Proxy, the proxy generated a new CA-certificate, which you then imported. This means you gave the proxy the authority to create a certificate for any domain. For the purpose of being a proxy, this is fine. An attacker however would have to make you import their certificate (or steal the private key of yours!) so you would trust certificates by their proxy. So, is this an issue now? No, it's not. Everything is working as it's supposed to.
At the end of the day, when you send your username and password to a website, it somehow has to actually reach that website. | {
"source": [
"https://security.stackexchange.com/questions/210356",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
210,522 | I work for a small company, developing an ASP.NET web-application. Recently, we've had the requirement of exposing an API endpoint come up, such that an automated script running in the cloud can periodically pull back some specific JSON-formatted data from the app via a web request. The implementation of this I can handle, but I wasn't sure regarding security concerns. I was reading about HMAC this morning & liked the look of it, as it seemed quite similar to the security protocols of other APIs I've used previously. However, it made me wonder what the value of some of the steps were. If a client and server have securely agreed on a passphrase / key via prior communication, what risk is there in sending a POST request with the passphrase as part of the body of the HTTPS request, such that the passphrase identifies the user? Trying to look this up I came across Replay Attacks and similar, but can these work over SSL & given the client-side & server-side environments can both be trusted? Edit: Adding a bit of clarification based on a user's comment below. Our intended use case is to have a script run periodically (once an hour, day, etc) either on one of our servers or in the cloud. It will pull back specific information from our app, as well as third-party APIs, & update a cloud-based spreadsheet for our business development team. It's something we ideally want to leave running & not require any user intervention. Our app normally requires login with a username/password to generate a temporary session, but we were hoping to simplify the process a bit & just provide an API for the script to securely retrieve specific data. | Assuming you mean TLS instead of SSL itself (which is far older and broken), it is absolutely fine to transmit the password in a POST request. This is standard and how virtually every major and secure web authentication service works. You just have to make sure that the TLS is configured properly. TLS mitigates all the attacks you are worried about. It uses HMAC (or a similar authentication) for integrity, and mitigates replay attacks, reflection attacks, and similar issues that affect authentication systems. Note that new vulnerabilities may be found that require either upgrading the version of TLS or manual mitigations. SSL Labs can test for known vulnerabilities in a TLS implementation. | {
"source": [
"https://security.stackexchange.com/questions/210522",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/208461/"
]
} |
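A minimal sketch of the approach the answer above (210,522) endorses: the shared secret simply travels in the body of an HTTPS POST, and TLS provides the confidentiality, integrity, and server authentication. The endpoint URL and field names are invented for the example:

import json
import urllib.request

API_URL = "https://example.com/api/report"   # hypothetical endpoint
API_KEY = "s3cr3t-shared-passphrase"          # agreed out of band; keep out of source control in practice

payload = json.dumps({"api_key": API_KEY, "action": "fetch_report"}).encode()
req = urllib.request.Request(
    API_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# The https:// scheme means the request only goes out over TLS; an on-path
# attacker sees the destination host, not the key or the body.
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())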
210,629 | If there are places on a laptop malicious programs can leave elements, hooks, backdoors etc, in locations such as BIOS, device controllers, firmware etc - what confidence can one have in wiping the disk and installing a fresh OS image. If I were to first use data destruction software to overwrite every individually addressable location on the hard disk, before secondly installing a freshly downloaded Windows image, this presumably isn’t much of a solution. Surely, binning and buying a replacement is the only option? ( Which would be dire, since the machine is new. ) | You must do risk management. How likely it is that you and your laptop have been personally targeted? The vast majority of persistent malware operates entirely in software, and formatting the disk is more than enough to remove all traces of it. Sophisticated, firmware-resident malware is extremely rare and unlikely to be a threat unless you have particular reason to think that you are at risk. It is possible to check for firmware-level malware, but it requires a good understanding of common x86 architecture, and access to hardware to read from the flash chips. At a minimum, you'd need SPI readers for the BIOS/UEFI, and JTAG probes for the hard drive firmware and related. If you don't have any reason to think you're being targeted, just format and re-install. | {
"source": [
"https://security.stackexchange.com/questions/210629",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/208494/"
]
} |
210,656 | I have recently logged into a website. When I clicked on the "Update Profile" page, you are shown a list of text boxes for all the user fields, e.g. name, email, phone number etc. There is also a box for password and confirm password (for if you wish to update these values), however, when you go into this page, those boxes are already populated, which made me think, why are they putting placeholders in? When going into inspect element, they actually have the values of your password, transformed into upper case like this: <input type="password" name="txtPassword2" size="45" value="MYPASSAPPEARSHERE"> I have also recently noticed that the case of your password or username is irrelevant when logging in - e.g. I can put it in all caps, all lower, or a mixture of both and it will still accept the password. Is this a security hole and does this indicate they are storing passwords as plain text? This is not a duplicate of ( What to do about websites that store plain text passwords ) as I’m asking here for clarification of whether this indicates the site is storing plaintext passwords, rather than what to do about it. Response from the company: After pushing hard, the company confessed that they are, in fact, storing passwords in plain text. | Quite obviously, if they can display your password, then they are storing your password somehow. They might cache your password on the client-side when you log in (for unjustifiable reasons, like session management), but more likely their password database is in clear text. Either way, it's stored and it should not be. And it looks like they are running an upper() function on the password, which wipes out 26 characters from the potential character set that would have otherwise added some entropy. This is very, very poor security on their part that has had no place for 2 decades. | {
"source": [
"https://security.stackexchange.com/questions/210656",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/208677/"
]
} |
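For contrast with the site described above (210,656), here is a sketch of storage that cannot echo the password back to anyone: only a random salt and a slow, salted hash are kept, and the comparison is case-sensitive because nothing like upper() ever touches the password. The iteration count is illustrative:

import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune to your hardware

def hash_password(password: str):
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key                      # this is all that gets stored

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("MyPassAppearsHere")
print(verify_password("MyPassAppearsHere", salt, key))    # True
print(verify_password("MYPASSAPPEARSHERE", salt, key))    # False -- case matters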
210,897 | I have been learning about rootkits recently and have noticed this hooking techniques that kernel-land rootkits use in order perform malicious actions. Where a typical hooking operation would be to hook on to a legitimate system call, and then replace the legitimate action with the malicious action first, before actually calling the legitimate action. But if that is the case, why not make the system call table to be unmodifiable from the start? | You can check if they are read-only by looking up the kernel symbols. The "R" means read-only. * $ grep sys_call_table /proc/kallsyms
0000000000000000 R sys_call_table
0000000000000000 R ia32_sys_call_table
0000000000000000 R x32_sys_call_table So they are read-only, and have been since kernel 2.6.16 . However, a kernel rootkit has the ability to make them writable again. All it needs to do is execute a function like this † in kernel mode (directly or via sufficiently-flexible ROP gadgets, of which there are plenty) with each address as the argument: static void set_addr_rw(const unsigned long addr)
{
    unsigned int level;
    pte_t *pte = lookup_address(addr, &level);
    if (pte->pte &~ _PAGE_RW)
        pte->pte |= _PAGE_RW;
} This changes the permissions of the syscall table and makes it possible to edit it. If this doesn't work for whatever reason, write protection in the kernel can be globally disabled with the following ASM: cli
mov %cr0, %eax
and $~0x10000, %eax
mov %eax, %cr0
sti This disables interrupts , disables the WP (Write-Protect) bit in CR0 , and re-enables interrupts. Using assembly lets this work despite write_cr0(read_cr0() & ~0x10000) failing due to the predefined function for writing to CR0 now pinning sensitive bits . Make sure you re-enable WP after, though! So why is it marked as read-only if it's so easy to disable? One reason is that vulnerabilities exist which allow modifying kernel memory but not necessarily directly executing code. By marking critical areas of the kernel as read-only, it becomes more difficult to exploit them without finding an additional vulnerability to mark the pages as writable (or disable write-protection altogether). Now, this doesn't provide very strong security, so the main reason that it is marked as read-only is to make it easier to stop accidental overwrites from causing a catastrophic and unrecoverable system crash. * The particular example given is for an x86_64 processor. The first table is for syscalls in native 64-bit mode (x64). The second is for syscalls in 32-bit mode (IA32). The third is for the rarely used x32 syscall ABI that allow programs to use all the features of 64-bit mode (e.g. SSE instead of x87 for floating point operations) while using 32-bit pointers and values. † The kernel's internal API changes all the time, so this exact function may not work on older kernels or newer kernels. Globally disabling CR0.WP in ASM however is guaranteed to work on all x86 systems regardless of the kernel version. | {
"source": [
"https://security.stackexchange.com/questions/210897",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/208541/"
]
} |
210,923 | On Debian 9, installing default-jre creates a hidden directory /etc/.java . This is flagged as a warning while I run rkhunter. Looking up online, I found an old bug report against Debian. The bug was closed stating the sysadmin could configure rkhunter to ignore the directory. Speaking simplistically from the point of view of operating system security, is it a good idea to have a hidden directory under /etc ? Does it make security sense for rkhunter to look for and flag hidden files and directories under /etc ? What's the recommended best practice here? Edit 2019-05-29T02:42+00:00: What I mean to ask in the last question is if a hidden directory under /etc is a good idea from the point of view of "security usability". As in, it might be disconcerting for a sysadmin to find a hidden file under /etc and therefore could be bad security practice, especially from the point of view of a package maintainer. | Yes, that's safe. There's nothing inherently insecure about having a hidden directory under /etc. The only reason rkhunter flags it is that it's uncommon for legitimate programs to do it, and when malware does it, it makes it less likely that you'd otherwise notice it. | {
"source": [
"https://security.stackexchange.com/questions/210923",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82371/"
]
} |
210,949 | Is there an easy way to distinguish 4771 events from a real attack perspective vs. someone having a stale session with an old password? If you don't get logs from all endpoints and rely on Domain Controllers, you have to key off of 4771 and 4625 for failures, where 4771 is the Kerberos events from the domain joined computers to the DCs. It's nice having visibility across the endpoints without getting logs from everything but for these 4771 events, most of the alerts I see are just stale sessions and non-security events. I don't see any sub code or item to key off of for stale/old password vs. real attack. | | {
"source": [
"https://security.stackexchange.com/questions/210949",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4402/"
]
} |
211,112 | Even though the current recommendation for storing passwords is the usage of a slow key derivation function such as Argon2, scrypt, PBKDF2 or bcrypt 1 , many websites still use the traditional hash(password + salt) method, with MD5, SHA-1 and SHA-256 being the most commonly used hash functions. The SHA-1 hash of mySuperSecretPassword123 with the salt !8(L-_20hs is E5D0BEE0300BF17508CABA842084753685781907 . Assume an attacker would steal the salt and the first half of the hash, so E5D0BEE0300BF17508CA . We also assume that the attacker is aware that SHA-1 is being used and how the salt and the password are concatenated. How difficult would it be for an attacker to recover the original password? 1 bcrypt technically isn't a key derivation function, but for the purposes of this question it functions identically. | Actually, it's as bad as a full hash leak . Hash-cracking is done by: Generating password candidates Hashing them Comparing the resulting hashes to the hash you want to crack None of those steps will be slower in case of a partial hash leak, so this is very similar to a full hash leak speed-wise. Please note that if the partial hash output is not long enough, a lot of password candidates will match. In that scenario, you can't know which candidate was the real password. | {
"source": [
"https://security.stackexchange.com/questions/211112",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
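A sketch of the point made in the answer above (211,112): the attacker hashes candidates exactly as usual and only compares the 20 leaked hex characters, so a half-leaked digest is no harder to attack than a full one. The salt and prefix are taken from the question; the tiny wordlist stands in for a real dictionary:

import hashlib

salt = "!8(L-_20hs"
leaked_prefix = "E5D0BEE0300BF17508CA"   # first half of the leaked SHA-1 digest

wordlist = ["123456", "password1", "letmein", "mySuperSecretPassword123"]

for candidate in wordlist:
    digest = hashlib.sha1((candidate + salt).encode()).hexdigest().upper()
    # Step 3 of the attack: compare only the leaked prefix -- same cost per guess.
    if digest.startswith(leaked_prefix):
        print("Candidate match:", candidate)

An 80-bit prefix is still long enough that accidental matches are unlikely for realistic dictionaries; the answer's caveat about many candidates matching applies to much shorter leaked fragments.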
211,137 | A typical web authentication workflow looks like this: User provides their credentials. Server validates credentials. If credentials are valid Server generates a token. Server keeps this token. Server responds to the login with this token. Browser stores token. Browser makes requests with token. Server validates token and responds accordingly. Normally, this token is stored in a cookie. The presence and validity of the token in a request lets the server know if the client making the request is authenticated. No token, no entry, which is effectively the same as not being logged in. So... Can I just log out by wiping cookies instead of hitting logout? What are the issues of just wiping cookies versus clicking the logout button? | Can I just log out by wiping cookies instead of hitting logout? Frequently yes, for the reasons you supplied in your question: Without the session token in your cookies, a typical web application won't know who you are. What are the issues of just wiping cookies versus clicking the logout button? Web applications that manage authentication following the OWASP session management guidelines will invalidate the session on the server side when you log out explicitly. If you simply discard the cookie with the session token, the session may be subject to session hijacking. Using door locks as an analogy for those who are not familiar with best practices of developing web applications (thanks to discussion in the comments): Your account can be seen as a room in a building. When you log in, the building's owner creates a door and puts an automatic lock on it, so that only you can enter. Your session token is your key, and is typically stored in your browser's cookies, but can be stored in other places. Discarding your token by deleting your cookies, clearing cache, etc., is simply destroying your copy of the key. Explicitly logging off is asking the building owner to brick up the doorway. There's nothing guaranteeing that they'll secure your account, but as the user, you're explicitly making your wishes known. There are various ways that an attacker can get a copy of your key, known as session hijacking, that are the responsibility of the site owner to mitigate, not the users. First, the attacker can just guess. If the site generates session keys sequentially, or uses a low entropy pseudorandom generation method, this makes guessing much easier. Sites mitigate this through using high entropy tokens and periodic session recycling. Recycling sessions doesn't prevent access, but it makes it obvious when unauthorized access has been granted. Second, the attacker can use session fixation: They give you a key before you log in, which you continue to use after you've logged in. Sites mitigate this by explicitly recycling the session when you log in. Third, a Man-in-the-Middle attack. The attacker can see your key directly. TLS mitigates this. It is possible to decrypt TLS traffic through downgrade attacks, insecure implementations, and zero-day attacks, but these are far outside a user's domain, rare, and zero-day attacks against TLS tend to raise a LOT of noise when they're discovered (Heartbleed, et al). As a user, your responsibilities are to log off and to hold the site accountable when they take shortcuts with security, much as your responsibility with your car in a public parking lot is to lock your doors. If the door locks are trivially bypassed, then its the manufacturer's fault, not yours. | {
"source": [
"https://security.stackexchange.com/questions/211137",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36190/"
]
} |
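A toy model of the distinction the answer above (211,137) draws between discarding your copy of the key and asking the server to brick up the doorway. The in-memory store and token format are illustrative only:

import secrets

sessions = {}   # server-side store: token -> username

def login(username: str) -> str:
    token = secrets.token_urlsafe(32)   # high-entropy, unguessable session key
    sessions[token] = username
    return token                        # handed to the browser, usually as a cookie

def is_authenticated(token: str) -> bool:
    return token in sessions

def logout(token: str) -> None:
    sessions.pop(token, None)           # invalidate the session server-side

token = login("alice")

# "Logging out" by deleting the cookie: the browser forgets the token, but the
# server still honours it, so a hijacked copy keeps working.
print(is_authenticated(token))          # True

# Explicit logout: the session is destroyed everywhere.
logout(token)
print(is_authenticated(token))          # False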
211,177 | I'm being stalked by a hacker. I've relocated and am trying to take all possible precautions. How can I prevent my WIFI from being hacked? Is there a way to prevent a hacker from finding my WIFI account? If the hacker finds my WIFI account, can't he just hack into my ISP to get my password? Please use plain English, I'm not too familiar with computer jargon. | | {
"source": [
"https://security.stackexchange.com/questions/211177",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/209382/"
]
} |
211,189 | How to find out what programming language a website is built in? How much of a Django application could be reverse-engineered if the owner forgot to turn debug mode off? And other Qs like these ^ . Shortly: It would seem that at least in terms of web app development, we want to disclose as little information to the attacker as possible. Attackers want to determine the platform our web app is running on, but we want to trick them into believing it's a different platform than it actually is; We are advised to switch debug mode off because detailed exception info might leak portions of the source code of our app (not the platform's); If we open-source our web app's server code, we willingly hand everyone these very pieces of information that the questions I linked to discuss how to hide ; and even more information than that. It would seem, therefore, that open-sourcing the app is one of the last thing one would want to do. This is surprising to me because: Some think open-sourced apps are safer because more friendly eyes may look at the code, looking for exploitable bugs and submitting patches; Not open-sourcing the app because of aforementioned issues is security through obscurity, which is bad. However, according to @Mason Wheeler's comment on this site : I think that, if even a security tester can't figure out what language the site is built in, that makes it more secure because then no one will know which exploits to try. (Yes, there are occasionally valid use cases for security through obscurity.) Therefore, is it agreed upon that open-sourcing the server-side code of a web app is a horrible idea? | It's a complex matter because there are several aspects to consider, with pros and cons, and there might not be a definite answer. The security advantage of open source software is supposed to come from a "law" that Wikipedia calls "Linus's law", which says that "given enough eyeballs, all bugs are shallow". To start with, you'd have to ask yourself how many eyeballs you are going to have. For example, is your project going to be shared, forked, used extensively by lots of users and reviewed by a large community? Or is your software only going to be used on your website and no one else will care about it? Or maybe no one else will be able to reuse it because it doesn't come with a free-software license? In the end there are going to be white-hat eyeballs and black-hat eyeballs, so you need to be willing to accept that on one hand you will get some security improvement from ethical hackers, but on the other hand you will also be attacked by black hats. Will attackers be especially interested in targeting your project, or is it just going to be subject to non-targeted attacks? These are all things you should consider, and it might not be easy to draw a conclusion. It's also worth remembering that in several open source projects there were security vulnerabilities that had been there for a long time despite all the eyeballs of the community (see Linus's law on Wikipedia). Security by obscurity is another concept that is often misunderstood, because its name makes it sound like it's all about keeping something secret. It's not so. Security through obscurity is when a significant part of your security comes from the secrecy of the methods (the implementation). Here's an example of security by obscurity: // Login without password if URL has parameter dev=debug
// I'm a stupid dev, so I believe this is secure because nobody knows about it!
// But this source code can't be published or I'll be hacked at once
if ($login_password === $password || $_POST['dev'] === 'debug') {
login_ok();
} Anyway, even if your code is correct and you rely on security by design, nothing stops you from using a layer of obfuscation on top of it. That is, keeping the source code private can help you because it will slow down a potential attacker. The important thing to remember is that obscurity is ok and can be considered a great asset only if it's just a layer on top of good design. In conclusion, I'd say you'd better not publish the source code unless you have a reason to do so (for example because you want your software to be free/libre and you want to create a community around your project). If your only goal is to improve the security of the application, you won't gain anything from just publishing it on GitHub, for example. If you are really worried that your software might contain mistakes and you'd like someone else to help you by providing more "eyeballs", you might consider paying for a professional security audit. | {
"source": [
"https://security.stackexchange.com/questions/211189",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/108649/"
]
} |
211,221 | I have been hearing more and more that the haveibeenpwned password list is a good way to check if a password is strong enough to use or not. I am confused by this. My understanding is that the haveibeenpwned list comes from accounts which have been compromised, whether because they were stored in plain text, using a weak cipher, or some other reason. This seems to have little to do with password strength to me. There could be very strong passwords that were stored in plain text, and thus compromised, and would really be pretty fine to use as long as they weren't used in combination with the original email/username. The fact that their hashes are known (duh, any particular password's hash is known!) doesn't matter if the place you are storing them is salted. Although it really doesn't hurt to rule out these passwords, as perhaps a hacker would start with this list when brute forcing, and it is easy to choose another one. But the inverse is where I am concerned - there will always be very easy to crack passwords that aren't on the list. "longishpassword" at this time has not had an account using this password that was hit by a leak. This does not mean however that were a leak of hashes to happen, this password would be safe. It would be very easy to break. What is the rationale behind checking a password (without an email/username) against the haveibeenpwned list to see if it is worthy to be used? Is this a good use of the list or is it misguided? edit: It is way too late to change the scope of the question now, but I just wanted to be clear, this question came from a perspective of checking other people's passwords (for instance when users register on your website, or people in your organisation are given AD accounts) not for validating the strength of a personal password. So any comments saying "just use a password manager" have not been helpful to me. | "Strong" has always had the intention of meaning "not guessable". Length and complexity help to make a password more "not guessable", but a long, complex, but commonly used password is just as weak as Pa$$w0rd . If a password is in the HIBP list, then attackers know that the password has a higher likelihood of being chosen by people, hence, might be used again. So those lists will be hit first. So, if your password is on the list, then it is "guessable". If your password is not on the list, then from a dictionary attack approach, it is less guessable and not what others have chosen, and by implication (for as much as that's worth), is "less guessable". Many other factors, of course, can make your password "more guessable", even if it is not on the HIBP list. As always, a randomly generated password is the most "unguessable" and a maximum length and randomly generated password is extremely difficult to bruteforce. And if you are randomly generating it, then why not go max length? | {
"source": [
"https://security.stackexchange.com/questions/211221",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45567/"
]
} |
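A sketch of the check being discussed above (211,221), using the public Pwned Passwords range endpoint (k-anonymity: only the first five characters of the SHA-1 digest leave your machine). The URL and response format reflect the API as publicly documented; check the current documentation before depending on it:

import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    # How many times the password appears in the Pwned Passwords corpus (0 = not found).
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate_suffix, _, count = line.partition(":")
        if candidate_suffix == suffix:
            return int(count)
    return 0

print(pwned_count("Pa$$w0rd"))          # large count: demonstrably "guessable"
print(pwned_count("longishpassword"))   # may be 0 -- absence from the list is not proof of strength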
211,419 | Imagine the following code: ATTACKERDATA="$(cat attackerControlledFile.txt)"
echo "${ATTACKERDATA}" An attacker can, through whatever arbitrary process, modify the contents of attackerControlledFile.txt to anything they desire. The content can be ASCII, UTF-8, Binary, etc. Anything is fine. The machine also assumes that it is infinitely fast, so even an extremely large file of multiple terrabytes would be read and printed immediately. Is it possible for an attacker, regardless of how unlikely it would be, to exploit this somehow by modifying the content of attackerControlledFile.txt ? "Somehow" refers to things like: This code only works in bash This code requires the output to be printed onto a specific terminal emulator Etc. Everything else assumes a reasonably sane system. This means that answers such as "If echo is an attacker-controlled binary that's actually malware" does not count, as the existence of malware is not exactly "reasonably sane". Answers that would require a specific software or version of that software to be present do count, as long as that software was not made for the purpose of exploitation. A similar question asks Is it possible to use the Linux “echo” command maliciously? , but the accepted answer is actually about a flaw in the design of a web application. Furthermore, it requires the attacker to be able to do redirects, which as far as I know, this construct cannot do. | Is it possible for an attacker, regardless of how unlikely it would be, to exploit this somehow by modifying the content of attackerControlledFile.txt? "Somehow" refers to things like: This code requires the output to be printed onto a specific terminal emulator In fact, yes. Old terminals like vt100 have the ability to use ANSI escape sequences to do special things, like execute commands. The following site below documents this ability using a simple echo, like you describe. https://www.proteansec.com/linux/blast-past-executing-code-terminal-emulators-via-escape-sequences/ The article is in depth with specific exploit instructions, but the general idea can be summarized from this excerpt from the site: Dangerous Escape Sequences
Terminal emulators support multiple features as described below [8]: Screen Dumping: a screen dump escape sequence will open arbitrary file and write the current content of the terminal into the file. Some terminal emulators will not write to existing files, but only to new files, while others will simply overwrite the file with the new contents. An attacker might use this feature to create a new backdoor PHP file in the DocumentRoot of the web server, which can later be used to execute arbitrary commands. Window Title: an escape sequence exists for setting the window title, which will change the window title string. This feature can be used together with another escape sequence, which reads the current window title and prints it to the current command line. Since a carriage return character is prohibited in the window title, an attacker can store the command in a window title and print it to the current command line, but it would still require a user to press enter in order to execute it. There are techniques for making the command invisible, like setting the text color to the same color as the background, which increases the changes of user pressing the enter key. Command Execution: some terminal emulators could even allow execution of the command directly by using an escape sequence. As pointed out in the comments, this particular exploit was fixed decades ago on modern terminal emulators. This just happened to be a simple example that a 30 second Google search revealed that nicely demonstrates the concept that there's still software at work that could be exploitable even in something as simple as displaying a file. Theoretically, there could be other problems with modern terminal emulators (buffer overflows?) that might be exploitable. | {
"source": [
"https://security.stackexchange.com/questions/211419",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
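A harmless way to see the class of behaviour described in the answer above (211,419): the file written below contains ordinary-looking text plus an ANSI OSC sequence, and merely displaying its contents makes many terminal emulators change their window title. Whether anything happens depends entirely on the emulator; older ones honoured far more dangerous sequences, as the linked article shows:

# Build an "attacker-controlled" file whose content is interpreted by the
# terminal as a control sequence rather than as plain text.
payload = "perfectly ordinary line\x1b]2;title set by file content\x07\n"

with open("attackerControlledFile.txt", "w") as f:
    f.write(payload)

# cat/echo in a shell, or print() here, sends the bytes straight to the
# terminal; an OSC 2 sequence asks the emulator to change its window title.
with open("attackerControlledFile.txt") as f:
    print(f.read(), end="")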
211,427 | I am studying for my RHCSA exam, and one of the topics is the ability to "change a forgotten root user password." This is an official exam objective and even has official Redhat documentation . How is this not a glaring security vulnerability? Shouldn't the ability to change a root user password not exist? I get that you need to have physical access to a machine (which makes it harder to implement) but why even have a password at all if it can just be changed like this? Is there a way to disable this 'feature' so that it cannot be changed from GRUB like this? Can you do this in all other Linux distros as well? Or is this a Redhat exclusive ability? | You pretty much hit the nail on the head when you said that you need physical access to the machine. If you have physical access, you don't need to go through the official steps to reset the root password, as you can flips bits on the hard drive directly, if you know what you're doing. I.e., you can boot up a recovery OS from a DVD or flash drive, and mount the drive that way to gain complete read/write access to the entire disk. Disk encryption will mitigate the risk, but doesn't remove it entirely* but makes attacks much more complicated. It is best to assume that an attacker with physical access will be able to influence every aspect of the device in time. Since it's assumed that attackers with physical access will always gain privileged account access eventually, there's little point in putting the legitimate administrators through extra trouble if they lost their password. Every Linux distro that I have used has had this feature, though it's possible that some of the distros aimed at a more paranoid audience could disable this. In addition, it's a standard feature in BSD Unixes, was tested for on the CCNA exam at least 15 years ago when I took it for Cisco devices, and it's fairly trivial to reset passwords on a Windows machine if it isn't explicitly secured. * The attacker could for example add a backdoored kernel or initrd in the /boot directory, that needs to be unencrypted because the bootloader must be able to read the kernel and initrd files. | {
"source": [
"https://security.stackexchange.com/questions/211427",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/157682/"
]
} |
211,433 | Using DNSSEC you can be sure that you have the right IP for a domain and using a certificate for the IP signed by someone you trust you know you have the right IP. Shouldn't this be enough to know the connection is correct? Why would the domain name be needed in the certificate used by the server? | | {
"source": [
"https://security.stackexchange.com/questions/211433",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45914/"
]
} |
211,637 | I have a 10mb encrypted volume which I made with Veracrypt and was planning to upload it to the internet. I have a pretty strong password but was worried about its protection against brute force attacks. After searching this SE it seems like a Veracrypt encrypted volume can be easily brute forced ref1 , ref2 . If that's the case then given enough resources we can crack any encrypted volume with any length of password (>100) in mere days with parallel brute-forcing instead of years. Do Veracrypt encrypted volumes have any kind of brute force protection built into it? Didn't Veracrypt creators know about this issue? Is there any other strong encrypting system which has brute force protection? | Any encryption is vulnerable to brute force attack, for example AES-256 has 2^256 keys, and given enough hardware we can “easily” brute force it. The problem is that there’s not enough silicon on Earth to construct enough processors to do it before the heat death of the universe. The fact that encryption can be bruteforced doesn’t mean that this will happen in a reasonable amount of time, and we can thank probability theory for that;) The weakest link is almost always not the key of the encryption algorithm, but the password from which the key is derived. Some passwords are more likely than others, and this allows dictionary attacks on passwords. In this respect the VeraCrypt project does everything by the books: they are using PBKDF2 with strong hash algorithms and high iteration counts (this is somewhat controlled by the user). Using PBKDF2 with random salt prevents the attacker from using pre-made hash tables and forces them to calculate every key attempt specifically for your container. Having high iteration count makes every attempt take a significant time (milliseconds to seconds). Repeated hashing is inherently unparallelizable (single PBKDF2 operation is unparallelizable, the attacker can perform multiple simultaneous guesses of course), so custom hardware wouldn’t help much. In these conditions the only feasible attack is dictionary-based, bruteforce would take too long. If your password is secure against dictionary attack - you can have high degree of certainty in the security of your data. Relevant documentation: https://www.veracrypt.fr/en/Header%20Key%20Derivation.html | {
"source": [
"https://security.stackexchange.com/questions/211637",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/89608/"
]
} |
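A rough, back-of-the-envelope sketch of why the header key derivation discussed above (211,637) blunts brute force: every single password guess costs a full PBKDF2 run. The parameters below are illustrative, not VeraCrypt's exact ones:

import hashlib
import os
import time

salt = os.urandom(64)
iterations = 500_000   # VeraCrypt uses large, user-tunable counts; this is just an example

start = time.perf_counter()
hashlib.pbkdf2_hmac("sha512", b"some password guess", salt, iterations)
per_guess = time.perf_counter() - start
print(f"{per_guess:.3f} s per guess on this machine")

# Even a modest 10-character lowercase-only password space becomes impractical
# at this rate (parallelism across guesses only scales this linearly):
keyspace = 26 ** 10
print(f"~{keyspace * per_guess / (3600 * 24 * 365):.0f} CPU-years for an exhaustive search")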
211,693 | A young tech company which operates on sensitive data has employees that fall victim to phishing/porting scams despite its best efforts to instill security fobs, vpn, password managers, non-sms 2FA, limited email access and so on. Is it a good practice to force employees to hide their employment status from the public to avoid being targeted for hacking (e.g. remove the employer from LinkedIn)? | Hiding your employer would not appear to be of any use at all when you want to hide the employee's email address from the public. If you hide your employer info but spread your contact details far and wide, the employer info is not interesting. The assumption being made is that once you know the company name and the employee name, then one can freely email the employee. Trying to address the threat of incoming emails by trying to hide the company name, so that the email address domain can't be guessed, so that emails cannot be addressed is trying to push on the wrong end of the lever of control . And you are trying to do it with a wildly difficult policy to enforce. The trivially effective control is to break the direct tie between company name, employee name, and email address. I know of companies that stand up a separate domain to send emails from. So example.com stands up example-email.com . This immediately wipes out a lot of automated emails. Other companies salt the email address with 2-4 numbers, so [email protected] becomes [email protected] . Others use only the employeeID number: [email protected] . While each one of these can be overcome through analysis of other disclosed email addresses from the company, it is more effective and much, much easier to control and enforce through technical means than forcing people not to disclose where they work. The company name is simply not the primary data to control in this threat scenario. It's the email addresses . You can control those. Managing digital footprint is always a good consideration but you have an awareness problem and a trust problem with your employees that such a policy is not going to address. | {
"source": [
"https://security.stackexchange.com/questions/211693",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83281/"
]
} |
211,778 | My late brother was contacted by someone on landline number operated by a carrier in Australia and which displayed on caller ID. I traced the number to a company and though they did call him on a number of occasions from this number over a couple of days, they did not make the particular call in question which occurred in the same time frame. This has left me asking, is it possible someone could hack in and use their telephone number to phone my brother? The company is a financial services company and they were set up to make outbound calls using various landline numbers programmed into an auto dialler machine or possibly cloud-based phone system. They have been very cooperative and I am confident they did not make the call in question. I have also established the identity of the person who made the call to my brother, but how on earth did he get one of the company landline numbers to show in caller ID? This has me stumped. | Ars Technica did a superb piece on this a couple of years ago. A woman who is a real estate agent and publishes her cell phone, was inundated with junk calls. What was odd about these was They were fully automated calls They never played a message They used a different number every time They detailed her nightmare On the first night, France went to bed, slept for 7.5 hours, and woke up to 225 missed calls, she said. The calls continued at roughly the same pace for the rest of the five-day stretch, putting the number of calls at somewhere around 700 a day. France installed robocall blocking tools on her phone, but they didn't stop the flood. Unfortunately, anti-robocall services that rely primarily on blacklists of known scam numbers generally don't block calls when the Caller ID has been spoofed to hide the caller's true number. They included this quote from a security researcher (emphasis mine) Because it's an old, circuit-switched network, none of the switches along the way need to know who actually is placing the call. I was shocked to find out that the Caller ID is just an optional part of the original address message that gets sent along. You don't need it, and nobody is checking it along the way for authenticity , and, really this means you can put that to be whatever you want. To top it off, there are a lot of online services that allow you to send out phone calls and specify exactly what Caller ID you want them to come from . I've had to explain this to numerous family and friends. The pinnacle there was my father-in-law, who called me up one day to ask how he got robo-dialed from his own number . I even get random calls sometimes from people saying "I'm returning your call" when I have no idea who they even are, let alone know how to call them. Caller ID is never verified . That is hard to explain to most people, because their cell phone sends a proper ID and they can't easily spoof it. But the rise of VOIP, combined with the plummeting cost of phone calls in general and turnkey software that makes spoofing a breeze, has made this an incredibly cheap way to spam and scam people, especially from abroad . The FCC is proposing some changes to address this , but those changes are likely years off. | {
"source": [
"https://security.stackexchange.com/questions/211778",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/210071/"
]
} |
211,794 | We have been working on an OAuth 2.0 IDP implementation, and during the implementation of the authorize endpoint, I couldn't find in RFC 6749 what should happen if the client_id is not passed in the request or is invalid, and there is no redirect_uri in the request either. Should the server return a 400 with no body, a 400 with JSON? Or is there a better approach? | | {
"source": [
"https://security.stackexchange.com/questions/211794",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/204628/"
]
} |
211,891 | I have a Linux machine that contains sensitive files. Users should be able to access (read) them when they are using the computer, but should not be able to copy them to another hard drive (USB stick or another hard drive that might have been added on the same machine). The main hard drive has been encrypted, in order to prevent someone from extracting it and stealing the files. I am free to use SELinux or other approaches in order to achieve the goal. UPDATE: After reading the answers, I would like to clarify: I am not very concerned about users who may take a picture of the screen. Protecting the actual file is my main goal. Even though protecting each and every file would have been optimal, I am mostly concerned about protecting the dataset as a whole (it is very large). Even if a few files get leaked, the damage is manageable. Moreover, due to the large amount of files, extracting them one-by-one in an inefficient way would not be practical. The users of the computer will not be given administrative privileges. | You can disable USB storage on Linux by blacklisting the module. modprobe -r usb-storage
echo blacklist usb-storage >> /etc/modprobe.d/10-usbstorage-blacklist.conf
echo blacklist uas >> /etc/modprobe.d/10-usbstorage-blacklist.conf If your users have physical access to the machine, and know the encryption keys, the game is up no matter what you do software-wise. My suggestion would be to limit the access to physical interfaces of the machine. Lock it inside a box, and only let users interact via a keyboard, mouse and screen. You should also note that you can't stop a user from copying something. Worst case? Take out the phone, and take pictures of the screen as they sift through the files. Data loss prevention should in my opinion be targeted at stopping accidental copying to untrusted devices. | {
"source": [
"https://security.stackexchange.com/questions/211891",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/138155/"
]
} |
212,207 | How can we validate login credentials at the client-side itself without involving the server of a website? | You can't. The reason is that you can't trust the client at all. An attacker can modify the client as they wish, and circumvent any and all security measures you may have put in place. But what if we digitally sign our code? The attacker can't modify it then, right? Yes, they can. If you sign your code, the machine of the attacker needs to validate the signature and refuse to run it if the signature of the client does not match. Nothing stops the client from disabling this signature check and simply run code with a wrong signature or no signature at all. Furthermore, if you don't want to involve the server at all after sending the website, then all the potentially confidential content needs to be sent to the client first (before knowing if they are authorized to see it), and later revealed to them. Nothing stops an attacker from simply looking at the raw content being sent to them over the network, without any client-side code being run. But can't you encrypt the data with the user credentials? Yes, you could. But your goal is to authenticate the user, which means you confirm if the user is actually who they claim to be. The scheme suggested by user9123 would work as follows: User claims to be user "foo". Website encrypts payload for "foo" with credentials for that user, e.g. "foo:bar". User enters their credentials, which locally decrypt the payload. This scheme does not authenticate the user to the server in any way. The server does not know if the user is really "foo" or not. Furthermore, if the user has a weak password, the attacker can attempt to crack it. Yes, a key-derivation function can make this process slow, but it is still essentially a credential leak. What I am curious is why you would want to attempt this scheme, instead of the traditional tested method? | {
"source": [
"https://security.stackexchange.com/questions/212207",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/210608/"
]
} |
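A sketch of why the "encrypt the payload with the credentials" scheme from the question above (212,207) amounts to a credential leak: anyone who loads the page gets everything needed to guess the password offline, with no rate limiting and no server ever noticing. The verifier construction here is a stand-in for whatever key derivation such a page would embed; the "foo"/"bar" credentials come from the answer:

import hashlib
import os

# What the serverless page would have to ship for user "foo" (real password "bar"):
salt = os.urandom(16)
verifier = hashlib.pbkdf2_hmac("sha256", b"bar", salt, 100_000)

# Attacker side: offline dictionary attack against the embedded material.
for guess in ["123456", "qwerty", "letmein", "bar"]:
    if hashlib.pbkdf2_hmac("sha256", guess.encode(), salt, 100_000) == verifier:
        print("Cracked offline:", guess)
        break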
212,287 | Googling I saw several lectures saying that it is possible to sign using digital certificates. I am confused about that, because according to my knowledge it is possible to sign only using the private key, where digital certificates are used during the signature process? | Just a matter of imprecise language that should be understood by everyone involved. You sign using the private key that only you have. The public key is not used, and the certificate is not used. In fact, maybe there is no certificate using the public key. The verifier verifies the signature using the public key, the certificate is not used. But the public key is just a number (or two numbers). It is inconvenient to handle the bare numbers. So often the public key is carried around with some metadata that says what the public key is for. That is a certificate. Maybe it just says "juaninf's public key" and is self-signed (has a signature attached made using the private key matching the public key of itself). Maybe at work it has a note signed by your boss that says "This is the public key of juaninf, I hired him to work for me on 2019-06-23, I am the manager of department D at corporation C, here is a signature from the CEO of corporation C attesting to this fact". And the automatic doors at the building can test this. And your private key is in an unclonable smart card. If you click on the lock next to the URL in your browser, you can see the TLS certificate for the site. In the certificate, you can see the public key. You can see the certificate, therefore you have the certificate. But you do not have the matching private key. So you can't create a new signature for the public key in the certificate. So you can't do MitM attacks on other visitors of the site. To do MitM attacks, you would need to have the private key matching the public key in the certificate to sign DH key exchange messages (or decrypt encrypted secrets in obsolete configurations of TLS). Sometimes it is important to be precise when saying "I have the certificate" or "I have the certificate and private key". But often it is obvious from context what is needed. | {
"source": [
"https://security.stackexchange.com/questions/212287",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41180/"
]
} |
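A small sketch of the division of labour described above (212,287), using the third-party cryptography package (Ed25519 chosen for brevity): the private key signs, the public key verifies, and a certificate would merely be metadata wrapped around that public key — it plays no part in either operation:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # only the signer has this
public_key = private_key.public_key()        # published, e.g. inside a certificate

message = b"document to be signed"
signature = private_key.sign(message)        # signing uses the private key only

try:
    public_key.verify(signature, message)    # verification uses the public key only
    print("signature valid")
except InvalidSignature:
    print("signature invalid")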
212,318 | I'm developing a web site where people will have accounts. However, unlike most web sites, user do not register, rather they are invited by the site admins. The site admins will create a new user profile, based on their email address, and then want the site to email them telling them that their profile is ready for use. However, I'm not sure of the safest way to let people know of their password. In a normal registration, the user enters their password of choice, which is hashed and stored. All that remains is to send them a link to verify their email address. In our case, they don't register, so don't supply a password. Whats the safest way to proceed? This answer suggests sending them a link to a page where they can see their password, but I'm not sure if that has any benefits over sending them to a page where they can enter their own password. Actually, I think the latter suggestion is better, as if the password has already been set, the web page can inform them that the password has been set, and if this wasn't them, to contact the admins immediately. What would be the best way to inform a new user of their password, security-wise? | The best practice in this instance is to send them a link to a page where they can set their own password. You should ensure that after they have used this link to register, that the link cannot be used for account takeover. One way of achieving this is including a time limited, single use token in the URL. | {
"source": [
"https://security.stackexchange.com/questions/212318",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/117404/"
]
} |
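A minimal sketch of the single-use, time-limited token suggested in the answer above. The names and the in-memory dict are illustrative stand-ins for a real database table; the properties that matter are that the token is unguessable, expires, and cannot be redeemed twice.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 48 * 3600      # how long the invite link stays valid
pending_invites = {}               # token -> (email, expiry); stand-in for a DB table

def set_user_password(email: str, new_password: str) -> None:
    # Stand-in: hash new_password (e.g. with scrypt or bcrypt) and store it for this email.
    pass

def create_invite(email: str) -> str:
    token = secrets.token_urlsafe(32)   # unguessable random token
    pending_invites[token] = (email, time.time() + TOKEN_TTL_SECONDS)
    return f"https://example.com/set-password?token={token}"

def redeem_invite(token: str, new_password: str) -> bool:
    entry = pending_invites.pop(token, None)    # pop means the token can never be reused
    if entry is None:
        return False                            # unknown or already-used token
    email, expiry = entry
    if time.time() > expiry:
        return False                            # expired invite
    set_user_password(email, new_password)
    return True
```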
212,388 | How to dispose of a smartphone (it's an iPhone 5) at home? I was reading through this SE site's questions and found this one, which hardly applies here. Besides, I'd want to do it so that: It's not damaging the ecology (at least not too much) I don't have to do it with work tools (I don't have a hammer and surely don't want to set anything on fire in my apartment) The device is currently in use for all sorts of things - 2-factor auth, custom authenticators, personal data, bank eTANs, etc. After the disposal, the data from the device must be impossible to restore and the device should be impossible to use as anything but decoration. | Unless you have secrets on that phone that someone would pay a lot of money to uncover, you don't need to go overboard. A factory reset would work just fine. To decrease the chances someone would still recover something, point the camera out of the window and let it record until it fills up all memory. Repeat if you want. Doing that will overwrite almost everything, and anything that could be recovered will mean too little to be of any use. But if you have secrets that would make someone use an electron microscope on it, rent a clean room, or spend hundreds of times the price of your phone to recover its data, physical destruction is the only safe option. | {
"source": [
"https://security.stackexchange.com/questions/212388",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/210876/"
]
} |
212,589 | If the scanner tray is considered an interface, and it accepts input (basically, that is its main functionality), could it be hacked using malicious code written on a piece of paper? | The answer entirely depends on the implementation of the scanning process in the printer. Modern printers are in essence computers, and they are much more powerful than their predecessors from earlier days. So, the question boils down to "is it possible to hack a computer by using an image"? The answer is yes , because creating exploit-free software is almost impossible as described in the answers here: Is exploit-free software possible? Image handling libraries happen to have vulnerabilities. An attacker could entice a user to open a specifically crafted image which would exploit a vulnerability on the victim's computer, and therefore affect it in some way. So, if the printer's scanning process involves some sort of processing of scanned images, and its software contains bugs, then we can assume this vulnerability can be exploited by a knowledgeable attacker. could it be hacked using malicious code written on a piece of paper? The printer won't execute code written on a piece of paper. However, there is a chance that the printer's software used for processing scanned images contains bugs that make the printer misbehave when it encounters a certain image. The attack surface depends on how much processing the printer does with the documents that are being scanned. The result is hard to predict; it depends a lot on the printer, its software, and its capabilities. | {
"source": [
"https://security.stackexchange.com/questions/212589",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/211161/"
]
} |
212,696 | I've read "Is my developer's home-brew password security right or wrong, and why?" , but there's still a question in my mind: Even if someone uses a bad, home-brewed security algorithm just like Dave, an attacker can't get the real password if the attacker only compromises the database, not the server. Mark Burnett's answer to Dave's question seems to prove my guess. Again, take Dave's code as an example. Dave uses insecure/fast hash functions, md5 and sha1, so yes you can crack his password (get the plain text) from the database easily. But you still can't get the real password if his server is not compromised. | Yes, But.. To make it nice and clear... We're talking about a database-only compromise when an attacker has access to the database but not the application source code. In that case the attacker will get the password hashes but will be unable to crack them and get the original passwords because of Dave's custom algorithm. So in the case of a database-only breach, yes, Dave's password algorithm will protect passwords more than if he had used MD5 or SHA1. However That's only one possible avenue for system leaks. There is one key fact that trashes the "math" that makes Dave's homebrew algorithm seem reasonable. Half of all breaches start internally. (sources 1 2 3 ) Which is a very sobering fact, once you let it sink in. Of the half of breaches caused by employees, half of them are accidental and half are intentional . Dave's algorithm can be helpful if all you are worried about is a database-only leak. If that is all you are worried about though, then the threat model you are protecting against in your head is wrong. To pick just one example, developers by definition have access to the application source code. Therefore if a developer gains read-only access to the production database they now have everything they need to easily crack the passwords. Dave's custom algorithm is now useless, because it relies on old and easy-to-crack hashes. However, if Dave had used a modern password hashing algorithm and used both a salt and pepper, the developer who gained access to a database-only dump would have absolutely nothing useful at all. That is just one random example but the overall point is simple: there are plenty of data leaks that happen in the real world where proper hashing would have stopped actual damage when Dave's algorithm could not. In Summary It's all about defense in depth. It's easy to create a security measure that can protect against one particular kind of attack (Dave's algorithm is a slight improvement over MD5 for protecting against database-only leaks). However, that doesn't make a system secure. Many real-world breaches are quite complicated, taking advantage of weaknesses at multiple points in a system in order to finally do some real damage. Any security measure that starts with the assumption "This is the only attack vector I have to worry about" (which is what Dave did) is going to get things dangerously wrong. | {
"source": [
"https://security.stackexchange.com/questions/212696",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/159875/"
]
} |
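A minimal sketch of the "modern password hash plus salt and pepper" setup the answer above contrasts with Dave's scheme, using only the Python standard library (scrypt as the slow hash). The pepper value and cost parameters are illustrative; in practice the pepper lives in application config or a KMS, never in the database next to the hashes.

```python
import hashlib
import hmac
import secrets
from typing import Optional

# Pepper: an application-wide secret stored OUTSIDE the database (config, env var, KMS).
PEPPER = b"example-pepper-not-a-real-secret"

def hash_password(password: str, salt: Optional[bytes] = None) -> tuple:
    salt = salt or secrets.token_bytes(16)   # per-user random salt, stored next to the hash
    peppered = hmac.new(PEPPER, password.encode(), hashlib.sha256).digest()
    digest = hashlib.scrypt(peppered, salt=salt, n=2**14, r=8, p=1)   # deliberately slow
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))   # True
```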
212,702 | This happens with Visa/MasterCard/American Express, etc. I checked many payment apps and payment gateways: if I enter the correct debit card number, name and expiry date but a wrong CVV number, I still receive an OTP; however, the transaction is unsuccessful because the wrong CVV fails validation at the end. But shouldn't the CVV be verified before I get the OTP? What's the reason? Isn't it a security issue? | But shouldn't the CVV be verified before I get the OTP? What's the reason? Isn't it a security issue? This is absolutely NOT a security issue! Quite the opposite: it's a protection. Let's go through the steps. You put in the card details. You put in the CVV. You put in the OTP. The payment is processed if and only if the combination of all of them is correct. Now assume a scenario where it tells you the CVV is wrong before the 2FA step: that simply gives the attacker a chance to mount a better attack. Now the attacker knows the CVV is wrong and can simply change that, while in the correct scenario the attacker will have to break the two-factor authentication to gain that information. | {
"source": [
"https://security.stackexchange.com/questions/212702",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/156973/"
]
} |
212,705 | The title says it all really. Say my IP address was 1.2.3.4 and I wanted to change or 'spoof' it so that it's exactly 2.3.4.5, would this be possible or are there too many varying factors that need to be taken into account before getting a definitive answer? Why, you might ask? Well, I was in a store the other day and they had iPads around the room set up so that they were showing the store's online website. I went over and looked at one and noticed that what was showing on their in-store iPads was different to what I would see by simply connecting to their site via my phone (and yes, they were both the exact same link using the same exact browser, Safari). This led me to think that the only way they're able to do this is by either having the site detect the device's IP address and show specific (or exclusive) content on their homepage based on that, or by having the site detect that the device is using the store's WiFi (although I doubt this is possible, hence why I thought the IP route was more plausible). So I was curious whether it'd be possible to spoof my device's IP to the store's exact IP so that my device showed exactly what theirs did in regards to their website. Feel free to discuss this, I know this is very very specific and with minimal details known, so I doubt there's a definitive solution... | You can change your IP to whatever you want; that's trivial. But that will not work the way you want it to. Let's say the store's ISP is Apple Networks, and their IP range is 1.2.3.0 to 1.2.3.255. You note that and get home. Your home network is from Avocado Networks, and their IP range is 2.3.4.0 to 2.3.4.255. You change your IP to 1.2.3.123 and wait. Nothing happens. You cannot access any site. You are offline. But why? Routing . Avocado Networks tells the entire world they own the network 2.3.4.0, so when people want to reach anyone on that range, they send the packet to Avocado routers. They don't send any 1.2.3.0 packets to them, they send those to Apple Networks routers, as they are the ones advertising that IP range to the entire world. So your computer sits there, waiting for anything to come, and nothing happens. If Avocado Networks employs Egress Filtering , your packets don't even leave their network. Their routers will say this is a packet coming from my network, but it says it's from Apple Networks' address space; it must be an error, so I will drop the packet . If neither they nor anybody along the path uses Egress Filtering, your request for a connection will reach pineapple.com , the site will respond as usual, but the response will be sent to Apple Networks routers, not Avocado Networks. And either there will be nobody with the 1.2.3.123 IP address to answer and the packet gets forgotten, or there will be a 1.2.3.123 there, and they will say sorry, I never heard from this connection before. Forget it. And that's that. To achieve what you want, you must connect a system to the store network, make it work as a proxy, and forward packets from your home to that system; that system will then access the pineapple.com site on your behalf and send you the response. | {
"source": [
"https://security.stackexchange.com/questions/212705",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/211339/"
]
} |
212,739 | I'm doing an integration with a system that has a self-signed certificate. For initial development, we chose to ignore certificate checking to bypass some errors: Exception in thread "main" javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target because we first need to understand how to add the certificate using the Java keytool in a Docker environment. But, my question is: what is the advantage in that case of importing a self-signed certificate as a "Trusted Certificate" when I could just ignore it? | If it's an official service you are integrating with, the provider should really have a valid, publicly signed certificate installed for the sake of security. Assuming that you need to continue on with your provider using a self-signed certificate, the big difference between ignoring the certificate and adding it as trusted is the following scenario. This uses DNS poisoning as an example of a man in the middle attack. Take the following example: api.example.com has a self-signed cert with thumbprint XXX listening on the IP 5.5.5.5 . Adding it to your trust store makes your system expect the thumbprint to be XXX when connecting. If someone was able to poison your DNS - make api.example.com resolve to 6.6.6.6 - the thumbprint would be YYY. By adding the cert to your store, your product would refuse to connect to the malicious site. By disabling the check entirely, your product would happily connect to the malicious api.example.com at 6.6.6.6 . | {
"source": [
"https://security.stackexchange.com/questions/212739",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/79397/"
]
} |
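A small sketch of the thumbprint comparison described in the answer above, using only the Python standard library: fetch the server's certificate and compare its SHA-256 fingerprint with the one recorded when the self-signed certificate was first trusted. The host, port, and expected fingerprint are placeholders.

```python
import hashlib
import ssl

HOST, PORT = "api.example.com", 443
# Fingerprint recorded out-of-band when the self-signed cert was added to the trust store (placeholder).
EXPECTED_SHA256 = "0" * 64

def current_fingerprint(host: str, port: int) -> str:
    pem = ssl.get_server_certificate((host, port))   # fetches the cert without validating it
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

if current_fingerprint(HOST, PORT) != EXPECTED_SHA256:
    raise SystemExit("certificate fingerprint changed: possible man-in-the-middle, refusing to connect")
print("fingerprint matches the pinned value")
```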
212,746 | Do router's normally use the ping command to discover local hosts? I ask because while I was at work my firewall on my personal laptop blocked two pings (among other things) and it said they came from the router. Is this normal or evidence of a curious/malicious user (because there are no network admins that might use it)? Timeframe of Blocked Activity from Router's IP: Ping @ 12:54 PM UDP 49538 @ 4:09:17 PM UDP 60655 @ 4:09:26 PM UDP 1900 @ 4:09:39 PM UDP 63266 @ 4:09:40 PM UDP 51081 @ 4:11:15 PM Ping @ 6:36:48 PM UDP 1900 @ 6:39:25 PM TCP 2869 @ 6:41:02 PM UDP 49609 @ 6:45:24 PM If I recall correctly, both pings occurred at about the same time that I brought my laptop out of sleep mode because I only used my laptop for a little bit before I closed it and I only used it a few times. | If it's an official service you are integrating with the provider should really have a valid, publicly signed certificate installed for the sake of security. Assuming that you need to continue on with your provider using a self signed certificate, the big difference between ignoring the certificate and adding it as trusted is the following scenario. This ues DNS poisoning as an example of a man in the middle attack. Take the following example: api.example.com has a self signed cert with thumbprint XXX listening on the IP 5.5.5.5 . Adding it to your trust store makes your system expect the thumbprint to be XXX when connecting If someone was able to poison your DNS - make api.example.com resolve to 6.6.6.6 - the thumbprint would be YYY. By adding the cert to your store your product would refuse to connect to the malicious site. By disabling the check entirely, your product would happily connect to the malicious api.example.com at 6.6.6.6 . | {
"source": [
"https://security.stackexchange.com/questions/212746",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/211044/"
]
} |
212,825 | I have a password policy which states that users must not write and store their passwords down in plaintext. How can I ensure that they haven't done so by writing their password in emails, scripts, documents or files? | There is no way that you can be sure that a user hasn't written down their password. Even if you have complete access to their computer, what if they noted it down in their phone? Or on paper? And even if you did have access to all their devices, you can only check that they haven't written down the password if you, as a sysadmin, yourself know the password. Which you shouldn't! Passwords should always be hashed, and never stored in plaintext or in a form which allows you to retrieve the original password. What about password managers? They're known to significantly increase security since now the user only has to remember one passphrase and is less likely to use an easy-to-guess password for your system. This is a social issue, which can only be solved by educating your users/employees about the dangers of leaving passwords written in plaintext around. | {
"source": [
"https://security.stackexchange.com/questions/212825",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/200393/"
]
} |
212,933 | Our system uses passwordless login: we send users a login code+link by e-mail with which they can log in. We found one of our customers has a mail scanner that actually follows those links. We invalidate the login codes on use to reduce the attack surface for an attacker. But now such emails no longer have a valid login link. There seem to be two solutions to this: Do not provide a login link but only a code in the message. Let the login code stay valid for X period after first use (say it invalidates after 15 minutes). Option 1 seems to be unacceptable to our UX team. Option two seems to be much less secure: It provides a bigger attack surface because more codes will be active in our database. Social engineering will be easier: a user could take a picture of someone's mail and type in the code within the valid period (even though the user already used the code) and log in with this code. Would there be any other or better solution that would be both UX friendly and (at least as) secure? | Instead of invalidating the login code when the link is clicked, have a script on the page that runs and makes a request to your server to invalidate the code. This will only be effective if the "mail scanner" just visits the page and doesn't run any scripts on it. You should also invalidate the code after a period of time (your second option), because if the user is running a browser with scripts disabled the code will not be invalidated by this method. The timeout serves as a backup if this is the case. If you find that whatever "mail scanner" your users have is running scripts on the page, this won't work. Instead, you could require a user action on the linked page to complete the login process. Your users would click the link in the e-mail and be taken to a simple page with a login button, and perhaps some text like Log in as <username>? Only log them in (and invalidate the login code) when they click the button. This would require two clicks from your users instead of one, but I don't think that's too inconvenient. Since the "mail scanner" will only visit the page and not click the button, it won't use up the code. The more I think about it, the more I like requiring a click. It doesn't rely on the mail scanner not running scripts, so it should be more robust. You could even combine it with Steffen Ullrich's cookie idea. If the user clicks the login link in the e-mail and has the cookie, you know they're not the mail scanner so you can log them in right away. One click and they're logged in. If the cookie is NOT present, the click might actually be coming from the mail scanner, so you show a page with a login button. If it's actually just the user on a different browser (or the same browser with cookies disabled), they'll click the button and log in. In this case it took them two clicks, but that's only a slight inconvenience and it's only necessary if the cookie isn't there. See this similar question. | {
"source": [
"https://security.stackexchange.com/questions/212933",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/162080/"
]
} |
212,953 | I've been doing a security audit and found out you can easily identify host roles and running services just by their computer name (using nslookup). I would like to report this so that they use less obvious computer names and it becomes harder for an attacker to identify machine roles on the network. I would like to give some weight to this proposal by linking to some security naming convention from a trusted organisation. After some search, I've been unable to find some. Is there any existing ? | You have identified only one risk, that of an attacker identifying machine roles on the network by using predictable host names. I think you missed the competing risk, that of increased operator error by not using predictable and descriptive host names. This is how I would assess those conflicting measures: Use unpredictable host names Benefit(s) An attacker will need to spend (significant) more effort in determining the layout of your network and to identify the most profitable targets for a penetration attempt. Risks Operator error. Users and administrators may have difficulty identifying systems and their correct roles e.g. confusing test and production systems. Probability: high Impact: high Rationale : Most humans have terrible memories where "random" data is concerned --> high probability. Also there are usually very few barriers that prevent trusted users and administrators from making high impact mistakes --> high impact. Use predictable host names Benefit(s) Reduced operator error rates, ease of management and automation. Risks Attackers will also have an easier time determining the layout of your network and to identify the most profitable targets for a penetration attempt. Probability: medium Impact: low Rationale : Not every naming convention is immediately intuitive to a black-hat attacker --> medium probability. Also using hostnames to predict a network layout is only a shortcut, but doesn't provide information that an attacker wouldn't be able to learn through other means. And knowledge of the role of a server as disclosed by a hostname does not automatically make it more vulnerable (only more or less valuable). --> low impact. | {
"source": [
"https://security.stackexchange.com/questions/212953",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/110133/"
]
} |
212,994 | I was putting in my cc number and as soon as I had finished inputting 16 digits it highlighted it in red. I looked it over and realized one digit was wrong. Corrected it, it became black. How did the website know it was wrong? | Credit card numbers can be verified by calculating a checksum. Every credit card number is created following an algorithm. Ross Millikan:
The checksum specifies the last digit, so there are 15 digits left.
That should mean there are 10^15 numbers available, but there are
other restrictions. The first digit is the card type (4=Visa,
5=MasterCard, etc.) and the next several have to do with the issuer. Following that, if a credit card number does not comply with the algorithm, the checksum is incorrect so the number must be invalid. The algorithm is called the “Luhn algorithm”; check the Wikipedia page for more info. TripeHound: Note: if a given number fails the check, it is definitely
not a real CC number. However, if it passes the check, it only proves
that it is a potential CC number: it does not prove that it has
actually been issued. The next stage of verification (talking to a
card-processor) should verify that. Similar checks can be made on UK
sort-code/account-number pairs (and probably something similar in
other countries) and on International Bank Account Numbers
(IBANs) Edit: Added some of the information given in the Comments to the Answer.
Please go and give them an upvote too; it's great information. | {
"source": [
"https://security.stackexchange.com/questions/212994",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/173701/"
]
} |
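A minimal sketch of the Luhn check named in the answer above; this is all the page needs to colour the field red or black before the number is ever sent anywhere (shown in Python for brevity, with a well-known test number).

```python
def luhn_valid(card_number: str) -> bool:
    digits = [int(c) for c in card_number if c.isdigit()]
    total = 0
    # Walk the digits right to left, doubling every second one (subtract 9 if it exceeds 9).
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))   # True  (well-known Visa test number)
print(luhn_valid("4111 1111 1111 1112"))   # False (one digit changed)
```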
213,272 | I generated a password recently for a new account and the first three characters were "aa1". After exhausting all other attacks, a cracker would start brute forcing. On the assumption they'd start from "a", my password of "aa1" would be cracked faster than, say, "ba1", and that faster than, say, "za1". This password was very long so this question is more theoretical than practical. (Unless password lengths are limited, of course...) Are my assumptions right about brute-forcing and passwords? | It would seem that it depends on how exactly the attacker is going to bruteforce your password. However, my opinion is that in the end it doesn't matter. A serious attacker will never start from the beginning in alphanumeric order, from aaaaaaaa to 99999999 , unless they know they can do that in a reasonable time. If that's going to take them a thousand years, why should they use that method, knowing they will necessarily have to stop at, say, cccccccc ? But if the attacker knows that they can try all the possibilities in a reasonable time, then it doesn't matter whether your password is among the first combinations or among the last, because in the end they will find it anyway (in a reasonable time). Most passwords are still weak (say, your dog's name, plus maybe your date of birth, etc.) and the attackers don't like wasting too much time, let alone years to crack passwords. So what attackers normally do is use dictionaries and patterns. They will first try passwords like: pass123 , 123pass , john90 , john91 , John92 , JOHN93 , 123456 , l1nux4dm1n , etc. If every attempt with dictionaries and patterns fails, they might move on and assume that the password looks truly random. How long will it take to try all the possible passwords? If that can be done in a reasonable time, they might try them all (for example from aaaaaaaa to 99999999 ). Otherwise if the attacker assumes that they will never be able to try them all, they might try to bruteforce the password with some random guesses (random strings, not ordered): 12hrisn589sjlf , 9f2jcvew85hdye , otnwc739vhe82b , etc. If the attacker is lucky they might find the password, sooner or later. However if the password is too strong, such that it would take them too many years to guess it, they had better give up or think of an alternative attack (phishing, shoulder surfing, keyloggers, etc.) | {
"source": [
"https://security.stackexchange.com/questions/213272",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/122006/"
]
} |
213,602 | If I download archived data from a possibly untrusted source, at which point am I at risk of harming my system: Initially downloading and saving the archived data (still packed) Unpacking the archived data Executing any file from the unpacked archive At point 3 I will obviously be at risk, but what about 1-2? | 1 should not present any danger as long as the file is just saved somewhere and no attempts to open it with anything are made. If you view it even with a text editor, there's already a small danger of exploits. In the case of 2 there are vulnerabilities and exploits, so there are dangers. Some examples of such possible scenarios: Arbitrary file writes caused by .tar.gz archive symbolic link (symlink) vulnerabilities that are exploited because of how Bower (a popular web package manager) extracts such archives. CVE-2018-20250 is an absolute path traversal vulnerability in unacev2.dll, the DLL file used by WinRAR to parse ACE archives that has not been updated since 2005. A specially crafted ACE archive can exploit this vulnerability to extract a file to an arbitrary path and bypass the actual destination folder. In its example, CPR (Check Point Research) is able to extract a malicious file into the Windows Startup folder. CVE-2018-20252 and CVE-2018-20253 are out-of-bounds write vulnerabilities during the parsing of crafted archive formats. Successful exploitation of these CVEs could lead to arbitrary code execution. Zip Slip, which attackers might use to target files they can execute remotely, such as parts of a website, or files that a computer or user are likely to run anyway, like popular applications or system files. Helm Chart Archive File Unpacking Path Traversal Vulnerability. CVE-2015-5663 - the file-execution functionality in WinRAR before 5.30 beta 5 allows local users to gain privileges via a Trojan horse file with a name similar to an extension-less filename that was selected by the user. CVE-2005-3262 allows remote attackers to execute arbitrary code via format string specifiers in a UUE/XXE file, which are not properly handled when WinRAR displays diagnostic errors related to an invalid filename. There are plenty more examples and databases with such vulnerabilities, and even though most of them got fixed in later versions of the software, a risk still exists. Therefore, [2] is risky and should be handled with care. | {
"source": [
"https://security.stackexchange.com/questions/213602",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/212437/"
]
} |
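A small sketch of the defence against the path traversal / Zip Slip class of issues listed above: resolve every member's destination and refuse anything that would land outside the target directory. Python's tarfile module is used as the example; symlink members and decompression bombs need additional checks beyond this.

```python
import os
import tarfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    dest_dir = os.path.realpath(dest_dir)
    with tarfile.open(archive_path) as archive:
        for member in archive.getmembers():
            target = os.path.realpath(os.path.join(dest_dir, member.name))
            # Reject entries like "../../etc/cron.d/evil" that escape the destination.
            if not target.startswith(dest_dir + os.sep):
                raise ValueError(f"blocked path traversal entry: {member.name!r}")
        archive.extractall(dest_dir)

# Illustrative call: raises instead of writing outside ./extracted
safe_extract("untrusted.tar.gz", "./extracted")
```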
213,715 | Suppose we send out email verification to new subscribers, where they have to click on a link to verify their account. Suppose they forget to verify it, and later try to log in. Should the error message say "Your user name or password is incorrect", instead of letting them know that they have forgotten to verify the account? I assume this is the most secure way of handling it, because if we tell them that they have to verify the account, we are letting them know that an account with that userid exists ... Thoughts? Perhaps the best way to handle it is to allow them to access the account, but don't let them do anything in it until they are verified? | What I see most commonly is allowing the authentication and signing the user in, but locking meaningful features away until the email is verified. You should bubble up an error reminding the user to re-send an activation email if they try to access one of the restricted features. It is poor design to ever lie to a user - if they submit the correct username and password, you should never show an error claiming that either is incorrect. | {
"source": [
"https://security.stackexchange.com/questions/213715",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/160653/"
]
} |
213,716 | I added a new phone line and someone called claiming to be the previous owner of the phone number. He requested that I forward information from a text message (he wanted me to forward two-factor authentication information that would be sent to my new phone number via SMS). Naturally, I refused the request. I do not think that they are too happy with the refusal. Are there any risks I should be aware of or precautions that I should take, given that there is some 'funny business' afoot? CLARIFICATION: The caller does not know my name or any of my accounts. If the caller is a bad actor, then he is compromising someone else's account, because the phone number he called was recently issued to me and I do not give it out to anyone, because I use a call forwarding service. Said phone number has not been given out to anyone. | It's a known scam attempt. The caller probably compromised one of your accounts, and got stopped by the 2FA token sent to your phone. If you send them the token, your account is fully compromised. Or, as Nic pointed out very well, it may be the account of someone else. What should you do? First: don't send them any code or token. That will prevent them from compromising your account. Second: If your provider offers any alternatives, replace SMS as 2FA on every account you have with a more secure solution, like a hardware or software TOTP token. SMS is too insecure for that. 1 2 3 4 Third: change your passwords. If you don't have a password manager keeping separate credentials for each service, install and set up one now. It will take time, but it takes way less time than recovering from any mischief an attacker can do with your online services. While you are changing passwords and storing them in your password manager, switch the 2FA from SMS to TOTP to have a safer 2FA. Don't trust your brain to pick passwords. They are guessable, and a computer can try billions of combinations per second. Any password manager, no matter how primitive, is better than us at creating passwords. | {
"source": [
"https://security.stackexchange.com/questions/213716",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/115653/"
]
} |
214,048 | I have received one of those typical sextortion scams ("drive-by exploit", filmed by webcam (mine has tape on it), pay bitcoin etc.). The thing is that an old password of mine is included (I don't even remember where I used it), but searching the password on HaveIBeenPwned returns nothing (I have previously been notified of two leaks, Last.FM and MyFitnessPal, but those accounts use different passwords). That got me wondering: since this seems to be a rather old password, how complete are databases like HaveIBeenPwned, and where could I report such a new exploit, other than the authorities? | While services like HaveIBeenPwned are fairly extensive, there are still many stolen user / password lists that have not been revealed to the public eye. Maybe a company didn't actually disclose what happened, never realized anything happened, and/or no researcher has yet found the list. Unless you somehow find the list that included that password somewhere, there isn't a good option to try and report this incident. | {
"source": [
"https://security.stackexchange.com/questions/214048",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/207359/"
]
} |
214,116 | There's lots of talk about the HaveIBeenPwned password checker which can securely tell users if their password appears in one of their known data dumps of passwords. This tool has a publically available API behind it which websites/apps/etc are free to use to allow their users to check their passwords, but from what I can see all the listed applications are specifically email/password checker tools. Never have I seen or heard of a user entering a password into a website while creating an account and it then gives them an error message detailing that their chosen password can be found in a well-known data breach. If I were to create a website, would it be a bad idea to automatically check my user's passwords against HaveIBeenPwned's tool as an additional safety precaution and to require them to pick a password which the site doesn't know about? | Latest recommendations from the NIST (SP 800-63b Section 5.1.1.2; see here or here for a summary ) actually suggest checking user passwords against lists of known compromised passwords, so doing just that is actually in line with current best practices. It's also much better than requiring passwords to meet certain "rules" (which the NIST now recommends against). HIBP is just one way (and probably the simplest way) of doing this in practice. It only requires sending off the first 5 letters of the hash of the password, so actual risk to users is practically zero. So yes, please feel free to do it if desired. As for why a particular organization might not do this, I'm sure that varies wildly from site-to-site, but I think it's a safe bet that it boils down to the usual suspects: Security is an area where many like to skimp, and implementing such a system takes additional effort. It takes time for new best practices to become common knowledge for institutions It takes even more time for institutions to get caught up with best practices Every developed feature has costs: money in terms of engineering time to develop and maintain, lost users who don't understand or wish to follow the rule ( h/t @Woohoojin ), etc. Organizations may not consider the added benefits to be worth the costs. To be fair, none of my systems do this yet, so you can add me to #3 or #4. Item #4 is worth a bit more mention. The costs of implementing this are obvious—it takes developer time to build and maintain any feature. The benefits are much harder to quantify. Of course when it comes to security issues, many companies make the mistake of assuming benefits are zero and therefore skimp on security (see point #1). However, this is one feature in which the benefits are likely small. There are often real costs to a business related to the compromise of user accounts (more customer support, perhaps rolling back transactions, etc...), but as long as the compromise was due to the user's own mistakes (in this case, by choosing compromised passwords), a business is unlikely to see any direct liability and therefore will probably avoid any larger costs. As a result, features like this may not be worthwhile for all businesses to implement—it's always up to each business to weigh for themselves potential costs and benefits. | {
"source": [
"https://security.stackexchange.com/questions/214116",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/165948/"
]
} |
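A minimal sketch of the k-anonymity lookup described in the answer above, using only the Python standard library: hash the candidate password with SHA-1, send just the first five hex characters to the Pwned Passwords range endpoint, and match the remaining suffix locally, so neither the password nor its full hash ever leaves your server.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character prefix is sent; the response lists every suffix sharing that prefix.
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if pwned_count("password123") > 0:
    print("This password appears in known breaches; please choose another one.")
```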
214,243 | I have a credit card saved to my primary Paypal account. To make a long story short, I needed to make another Paypal account that would not be connected to my original one. I used a different computer, which had never been logged in to my original one. When I tried to register my credit card to the account, Paypal told me I was unable to register it, as they said this card is already linked to a different Paypal account. How can they possibly know that? Shouldn't my credit card be stored with a hash under my account only? How and why is Paypal keeping all their registered cards in one big file for cross-reference? | There are a couple reasons why Paypal (or more generally, any payment service) can know if you've used your card in more than one place. Your credit card is absolutely tracked everywhere possible Shouldn't my credit card be stored with a hash under my account only? If your card is kept hashed then it can be easily compared across accounts. Hashes are deterministic, so for a fixed hashing algorithm a given credit card will always give the same hash. Therefore if they were storing hashes, they could easily compare across accounts and determine if the card was already stored elsewhere. Doing so can be advantageous, as in this case it is being used to prevent fraud (the implication is that if the same card is added to multiple accounts, it is likely due to fraud). Once you have a "secure" hash of a credit card, there's no reason not to check it across different accounts. Paypal certainly can and does. However, this ability isn't limited to Paypal, and can easily be available to much smaller merchants. For instance with Stripe (a common PCI-compliant payment method) the merchant will be given a unique identifier for each credit card number stored on Stripe. The merchant doesn't keep (or even see) the card number, but they can still compare the given hash against other card hashes that have been used in their systems. This can (and is) easily used for the less-altruistic purpose of tracking a user's buying history across multiple accounts and anonymous transactions, while still maintaining PCI compliance. So to be clear, your credit card is tracked absolutely everywhere by as many people as can keep their hands on it, even if they don't know your credit card number themselves. Paypal keeps your actual credit card number on file - not just a hash Smaller merchants can and should make sure and never store, transmit, or even look at actual card details. However, there is no requirement that forbids any merchant from keeping the actual card number if they so desire. In general though any merchant that wants to keep card numbers on file and remain PCI compliant will (theoretically) have to go through stricter validation, security auditing, and effectively have to pay a ton of money in fees. The increased costs and liability of keeping credit card numbers on file while remaining PCI compliant are so large that any moderately well run small-medium business will never try. However, large businesses can and do choose to do otherwise. The reality is that someone has to store card numbers somewhere so that your card can be billed. The larger credit card processors (which Paypal definitely is) certainly store the full card number. They should store the numbers using strong encryption and secure keys/access control procedures. As for the details of how they actually determine that a credit card number is used twice, ultimately only Paypal can answer that. 
They may have a method for comparing encrypted card numbers directly, but more likely they also store a hash of the card numbers and compare those directly ( h/t Jory Geerts ). Either way though, they do keep your card number on file, and they can compare card numbers against accounts. Note that this doesn't mean that they are "Keeping all registered cards in one big file for cross-reference". Their infrastructure for secure card storage is certainly far more complicated than that. However, they obviously have a compelling business need to be able to compare cards across accounts, and have setup their infrastructure so that they can both store your cards securely and also check for duplicates across accounts. I agree with the linked comment: I would guess that they are also calculating a secure hash of the credit card number and using that for easy comparisons. | {
"source": [
"https://security.stackexchange.com/questions/214243",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/164366/"
]
} |
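A small sketch of the deterministic "card fingerprint" idea behind the cross-account match described above: the same card number always yields the same keyed hash, so a uniqueness check on that value is enough to spot a duplicate without comparing raw numbers at lookup time. The key and card numbers are placeholders; a real processor would also keep the PAN encrypted separately for billing.

```python
import hashlib
import hmac

# Server-side secret used only for fingerprinting (placeholder value).
FINGERPRINT_KEY = b"example-fingerprint-key"

def card_fingerprint(pan: str) -> str:
    digits = "".join(c for c in pan if c.isdigit())
    return hmac.new(FINGERPRINT_KEY, digits.encode(), hashlib.sha256).hexdigest()

seen = {card_fingerprint("4111 1111 1111 1111"): "account-A"}   # card already linked to account A

attempt = card_fingerprint("4111111111111111")                  # same card, different formatting
if attempt in seen:
    print(f"card already linked to {seen[attempt]}")            # prints account-A
```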
214,278 | How a ticket system works A ticket system - one you see at festivals - works like this: when a user pays for their ticket, a row is added to the database with a column named is_scanned , whose default value is set to false. As soon as a guard at the festival scans the barcode (containing an ID, and unique hash) with their device, a request is sent to the database to check if: the user matching the ID and hash has paid, and if the value of column is_scanned is still set to false . If both conditions are satisfied, it sets the value is_scanned to true , to prevent someone else copying the ticket/barcode from getting in. The vulnerability problem The problem here is the time between the request being sent by the scanning device, and the value is_scanned being toggled from false to true . Consider this scenario: Alice has a valid ticket which she paid for, but then she lets Eve copy her barcode and changes the visible name on the false ticket from Alice to Eve. So now we have two tickets. One valid and one fraudulent, but both have the same barcode, the only difference is the name. What if the ticket from Alice and Eve gets scanned at exactly the same time when they enter the festival? The ticket system wouldn't toggle is_scanned to true in time to make sure Eve couldn't enter with the same barcode as Alice. This results in both tickets (the valid and fraudulent) being shown as "valid" to the guards. Possible solutions Of course, this kind of exploit really depends on a lot of luck, and while it's possible in theory...in a real scenario, this would probably fail. However, how can we defeat this kind of exploit also in theory? Identification I've already taken this exploit into account using the following method: When a barcode is scanned, I display not only if the ticket is valid (satisfies the conditions stated earlier), but also the name in the database. If the name doesn't match the one on the ticket, we know the ticket is manipulated in some way. Also, if the name which comes up on the scanning devic, doesn't match the name on the ID (which everyone needs to show anyways to prove age), entry is also disallowed. The only way to bypass this solution is identity fraud, and that of course is beyond the responsibility of the ticket system to check. Delay Another way to solve this, in theory, is to add a random time of delay between each request made to the database/validation API. This way, no one would be able to scan their ticket at the same time...because the time of validation is delayed each time with a random amount of milliseconds. I'm not a fan of this, because it: makes everything slower at the entrance isn't effective if it's not delayed hard enough. Because if it takes 50ms for the database to update is_scanned from false to true , the only solution would be to delay it with an interval of minimum 50ms each time. Other solutions? What other solutions do you think of to solve this exploit? | The vulnerability you're describing is a race condition . There are several ways to deal with it, but I would go with a SELECT ... FOR UPDATE SQL query, which puts a lock on the selected rows to prevent new writes until the current transaction is committed. Be sure to check your RDBMS documentation to check how to implement it correctly: PostgreSQL MySQL MariaDB | {
"source": [
"https://security.stackexchange.com/questions/214278",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/99202/"
]
} |
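A minimal sketch of the SELECT ... FOR UPDATE approach from the answer above, written against a PostgreSQL connection with psycopg2 (connection details are placeholders, and the table and column names follow the question's description). Two guards scanning the same barcode at the same instant serialise on the row lock, so only one of them ever sees is_scanned = false.

```python
import psycopg2

conn = psycopg2.connect("dbname=festival user=scanner password=placeholder")

def scan_ticket(ticket_id: int, ticket_hash: str) -> bool:
    with conn:                              # commits on success, rolls back on exception
        with conn.cursor() as cur:
            # FOR UPDATE locks the row; a concurrent scan blocks here until this transaction ends.
            cur.execute(
                "SELECT is_scanned FROM tickets "
                "WHERE id = %s AND hash = %s AND paid = true FOR UPDATE",
                (ticket_id, ticket_hash),
            )
            row = cur.fetchone()
            if row is None or row[0]:       # unknown/unpaid ticket, or already scanned
                return False
            cur.execute("UPDATE tickets SET is_scanned = true WHERE id = %s", (ticket_id,))
            return True                     # this scanner wins; the concurrent duplicate gets False
```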
214,301 | Does anybody have hands-on experience with stateless password generators (managers) like Getpass? It seems like it does most of the work of cloud password managers, but leans more to the security side as there are no servers with passwords to penetrate. | I have used a stateless password generator for years, and I think there are a lot of drawbacks: If your master password is compromised, all of your passwords are. In comparison, standard password managers require that the attacker both compromise the master key and gain access to the password store. If a website has a password policy, you might not be able to generate a password that respects it. If one of the passwords needs to be updated for some reason, you need to keep that state somewhere. For example, you need to remember to generate a password for "StackExchange2" instead of "StackExchange". If you already have some passwords that you can't change (for various reasons), a stateless password generator won't help you. For all those reasons, I think you should definitely use standard password managers. | {
"source": [
"https://security.stackexchange.com/questions/214301",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/213292/"
]
} |
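For readers unfamiliar with the tools being discussed, a minimal sketch of what a stateless generator does: every password is re-derived from the master password and the site name, so nothing has to be stored. It also shows why drawback 3 in the answer above appears: rotating one password means remembering a counter, which is state after all. Parameters and names are illustrative.

```python
import base64
import hashlib

def derive_password(master: str, site: str, counter: int = 1, length: int = 16) -> str:
    salt = f"{site}:{counter}".encode()
    raw = hashlib.scrypt(master.encode(), salt=salt, n=2**15, r=8, p=1, dklen=32)
    return base64.b85encode(raw).decode()[:length]

print(derive_password("correct horse battery staple", "stackexchange.com"))
# If that one password ever has to change, you must remember to use counter=2 from now on:
print(derive_password("correct horse battery staple", "stackexchange.com", counter=2))
```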
214,304 | I'm working on something where using windows authentication to SQL server is difficult if not impossible. My team member is adamant that using SQL auth is much more difficult to manage and is a major red flag from a security and compliance perspective. I can definitely see how it could be more complicated than just using the domain credentials. I don't have a lot of experience in the world of security and compliance. I'm wondering if this is universally accepted as a non-starter or if this is maybe just his preference? Thanks! | I have used a stateless password generator for years, and I think there are a lot of drawbacks: If your master password is compromised, all of your passwords are. In comparison, standard password managers requires that the attacker both compromise the master key and gain access to the password store. If a website has a password policy, you might not be able to generate a password that respects it. If one of the passwords needs to be updated for some reason, you need to keep that state somewhere. For example, you need to remember to generate a password for "StackExchange2" instead of "StackExchange". If you already have some passwords that you can't change (for various reasons), a static password generator won't help you. For all those reasons, I think you should definitively use standard password managers. | {
"source": [
"https://security.stackexchange.com/questions/214304",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/213295/"
]
} |
214,312 | the AJAX login script var xhr1 = new XMLHttpRequest();
xhr1.open("post", 'https://192.168.1.1/index/login.cgi', false);
xhr1.send("Username=admin&Password=6836394be82df057e085fc344c6179d1b50b30224ad0SJ0GQrNWmpsXCSk5so7o73f93282&challange=SJ0GQrNWmpsXCSk5so7o"); the problem is it gives me this error Login Failure: Browser did not support Cookie. Please enable Cookie and i can't send a cookie header cause it's a Forbidden header name ..... the page after i press login it sets the required cookies to perform a login . is there any way around this ? some lines from the page source code that i think are important var strCookie = document.cookie;
document.cookie = cookie;
var cookie = "Language=en" + expires + ";
var results = document.cookie.match ( '(^|;) ?' + cookie_name + '=([^;]*)(;|$)' ); | I have used a stateless password generator for years, and I think there are a lot of drawbacks: If your master password is compromised, all of your passwords are. In comparison, standard password managers requires that the attacker both compromise the master key and gain access to the password store. If a website has a password policy, you might not be able to generate a password that respects it. If one of the passwords needs to be updated for some reason, you need to keep that state somewhere. For example, you need to remember to generate a password for "StackExchange2" instead of "StackExchange". If you already have some passwords that you can't change (for various reasons), a static password generator won't help you. For all those reasons, I think you should definitively use standard password managers. | {
"source": [
"https://security.stackexchange.com/questions/214312",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/212560/"
]
} |
214,430 | See the title. I'm involved in a security audit right now, and am wondering whether 2FA should be enabled not just on human login accounts but also on service accounts (non-human accounts)? If so, how is this normally managed? Someone must still be at the other end to confirm the 2FA, right? And would this be mainly a one-time thing at setup or would they need to reconfirm the 2FA request periodically? | The trouble with requiring MFA on service accounts is that it would have to be fully automated. For instance, a time based OTP . But as this OTP is based on a secret seed, it is effectively just another password stored in a config available to the service account. And it therefore gives no real additional security above that of just a single factor such as a password. | {
"source": [
"https://security.stackexchange.com/questions/214430",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/213462/"
]
} |
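A short sketch of why automating TOTP for a service account adds little, as the answer above argues: the whole RFC 6238 computation needs nothing but the shared seed and the clock, so whoever can read the seed from the service's config can mint valid codes at will. Standard-library Python; the seed is a made-up example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(base32_seed: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(base32_seed)
    counter = int(time.time() // period)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                   # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

SEED = "JBSWY3DPEHPK3PXP"   # example seed; in practice it sits in the service's config file
print(totp(SEED))           # anyone who can read SEED can produce the same code
```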
214,492 | Consider a young (primary-school age) child who is starting to collect passwords for online services. How can a parent (or equivalent) help them manage their passwords? An example to make things clearer: My daughter might want to log on to http://scratch.mit.edu from several locations/devices to show her projects to the family. She also has a couple of email addresses, one of which she's likely to be using herself soon (under supervision). While her own device will be logged in, she may need access from others. So far I take care of it for her: I know her password and (pseudonymous) user ID, and store them in my KeePass. That's appropriate at this stage, but it's not much help if she needs them without me (short of sending login details in plaintext to her grandparents, for example). There should also be a solution that doesn't require me to possess these details, from the point of view of sticking to the general rule of keeping your login details secret. Memorising a really strong master password is probably a bit much to ask, and she's likely to mislay any physical storage. I like to plan ahead, so moving forwards: What's the best approach to take for a young, fairly bright child, to keep logins safe and train good practice in advance of more important accounts? | Maybe the lesson for children should be less about how to use tools to manage a password, and more about understanding why managing passwords is important? Let them write their passwords in a notebook. Have fun with devising a method for obfuscation in case the notebook is lost. Teach them about backups- keeping a copy someplace safe. In my experience, kids and old people are a lot alike when it comes to password (mis)management Until they were skilled enough to manage their own password database, I also kept the kids logins in a "family KeePass". This is the same one where the aged family members stuff is- because people die and sometimes you need to recover things for otherwise unable people. The trust/risk calculus is different in a family group than in a work or social circle. There is also a difference between sharing access to a password and sharing a password. It is awesome that you are thinking about this early. Good luck! | {
"source": [
"https://security.stackexchange.com/questions/214492",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29280/"
]
} |
214,513 | We are a brick and mortar company... literally. We are brick masons. At our office we connect to the internet through our cable modem provided to us by Spectrum Business. Our Treasurer uses a Verifone vx520 card reader to process credit card payments. It connects via ethernet. We don't store credit card data. We got a vulnerability report stating that we were not PCI compliant. They scanned our cable modem. Part 2b-1. 38173 - SSL Certificate - Signature Verification Failed Vulnerability
Part 2b-6. 38628 - SSL/TLS Server supports TLSv1.0
Part 2b-7. 38601 - SSL/TLS use of weak RC4(Arcfour) cipher (CVE-2013-2566, CVE-2015-2808) I don't understand what they mean. We don't have a server or an online store or anything. I was told they scanned our network and this is what they found. They told us to get with our ISP to fix the issue. How is the ISP supposed to get an SSL certificate installed on a cable modem? I did call our ISP and he didn't know what to tell me. We just use the card reader and it connects to our payment processor. Am I supposed to do something here? Is any of this applicable to us? | At our office we connect to the internet through our cable modem
provided to us by Spectrum Business. Our Treasurer uses a verifone vx520 card reader to process credit card
payments. It connects via ethernet. We don't store credit card data. It sounds like you fall under SAQ B-IP (and you will be amused that the mnemonic is that "SAQ B-for-Brick-and-Mortar"): SAQ B-IP has been developed to address requirements applicable to
merchants who process cardholder data only via standalone,
PTS-approved point-of-interaction (POI) devices with an IP connection
to the payment processor It sounds like someone did an external ASV ("Approved Scanning Vendor") scan on your known IP address and found the cable modem was, unsurprisingly, not up to snuff. Am I suppose to do something here? Is any of this applicable to us? Yes, this is applicable to you, and many other things besides, all of which are outlined in the Self Assessment Questionnaire linked above. And if your other office systems - desktops, printers, whatever - are also sitting on the same network behind that cable modem, then the requirements of the SAQ apply to those as well. Things like patching and access controls. For now, you will need to continue to work with your ISP. They either need to update the modem, upgrade it, or get it to stop accepting connections from the Internet at large. To break down those error messages for you: Part 2b-1. 38173 - SSL Certificate - Signature Verification Failed Vulnerability There's likely a self-signed certificate on that device, common for things like cable modems that need TLS but don't care about being trusted by random users. (PCI cares, though, even when users don't). Part 2b-6. 38628 - SSL/TLS Server supports TLSv1.0 When you go to a secure web site today, the newest you'll see is TLSv1.3, however most websites only support up to TLSv1.2 or TLSv1.1. TLSv1.0 is old, relatively insecure, and PCI declared it unacceptable to use a few years ago. Part 2b-7. 38601 - SSL/TLS use of weak RC4(Arcfour) cipher (CVE-2013-2566, CVE-2015-2808) TLS gets to pick from multiple algorithms; over time, weaknesses are found and individual algorithms get retired because of it. RC4 got retired a few years ago. | {
"source": [
"https://security.stackexchange.com/questions/214513",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/213570/"
]
} |
214,689 | The communication is not encrypted during the SSL handshake. If an attacker conducts a man in the middle attack between server and client to capture the certificate, changes the public key in the certificate and sends it to the client, then the digital signature is the same and all the properties except the public key are the same. So how can a browser tell the difference? If the browser validates it, the attacker can use his/her own key pair and doesn't need the private key of the server. | ... change the public key in the certificate and send it to the client. The digital signature is the same, all the properties except the public key are the same. So how can the browser tell the difference? The browser checks that the certificate's signature fits the certificate. Since the public key is part of the signed portion of the certificate and the public key has been changed, the signature no longer fits the certificate. Therefore the validation will fail. | {
"source": [
"https://security.stackexchange.com/questions/214689",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/197918/"
]
} |
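A sketch of the check the answer above describes, using the third-party Python cryptography package and assuming an RSA-signed certificate (file names are placeholders): the signature covers the certificate's entire to-be-signed body, which includes the subject public key, so swapping that key makes this verification fail.

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Placeholders: the server certificate and the CA certificate that supposedly signed it.
cert = x509.load_pem_x509_certificate(open("server.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("ca.pem", "rb").read())

issuer.public_key().verify(
    cert.signature,                # the signature the CA produced
    cert.tbs_certificate_bytes,    # everything that was signed, including the subject public key
    padding.PKCS1v15(),            # assumes an RSA signature
    cert.signature_hash_algorithm,
)
print("signature matches the certificate contents")   # tampering raises InvalidSignature instead
```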
214,717 | I work at a place that gives Wi-Fi to all the customers, with a password that is 19 characters long. A customer came in and claimed that because the password is long, it slows down the internet speed. Is there any truth to this claim? | No. This is because your password is converted to a cryptographic key which is of fixed length (128 bits). For any length of password, the corresponding crypto generated key (CMAC) would be of fixed size. Many other parameters such as the client and server id, large random values provided by client and server are used to calculate this CMAC. Encryption and decryption uses this fixed length CMAC. | {
"source": [
"https://security.stackexchange.com/questions/214717",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/213838/"
]
} |
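The fixed-length derivation mentioned in that answer can be seen directly with Python's standard library, assuming a WPA2-PSK network: the passphrase and SSID go through PBKDF2-HMAC-SHA1 (4096 iterations) to produce a 32-byte Pairwise Master Key, whatever the passphrase length. The SSID and passphrases below are made up.

import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA2-PSK key derivation: PBKDF2-HMAC-SHA1, 4096 iterations, 256-bit output.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

short_key = wpa2_pmk("shortpass", "CafeGuest")
long_key = wpa2_pmk("a-much-longer-nineteen-char-passphrase", "CafeGuest")
print(len(short_key), len(long_key))   # 32 32 -- same size either way, so no speed difference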
214,784 | Disclaimer: I have minimal web-dev/security knowledge so please answer as if talking to a "layman." I've heard that web advertisements need to be able to run their own JavaScript so that they can verify they're being viewed by "real users." As this incident on StackOverflow shows, they're basically given free rein. I also know that JavaScript can be used to capture keystrokes on a webpage. So in a case like goodreads, where they have ads on the page and user/pass textboxes in the header, is there something in place to prevent the ad from reading keystrokes to record my credentials? Is reading keystrokes simply not possible from an ad? If I see ads on a login page should I assume that the page is not safe to enter my credentials? | Nothing prevents ads from reading your passwords. Ads (or any other third-party script, like analytics or JavaScript libraries) have access to the main JavaScript scope, and are able to read a lot of sensitive stuff: financial information, passwords, CSRF tokens, etc. Well, unless they're being loaded in a sandboxed iframe. Loading an ad in a sandboxed iframe adds security restrictions to the JavaScript scope it has access to, so it won't be able to do nasty stuff. Unfortunately, most third-party scripts are not sandboxed: many of them require access to the main scope to work properly, so in practice they almost never are. As a developer, what can I do? Since any third-party script could compromise the security of all your personal data, all sensitive pages (like login forms or checkout pages) should be loaded on their own origin (a subdomain is fine). Using another origin lets you benefit from the Same-Origin Policy: scripts running on the main origin can't access anything on the protected origin. Note: Content Security Policy and Subresource Integrity could also be used if the third-party scripts can be easily reviewed, but most ad networks couldn't work anymore if you used them. | {
"source": [
"https://security.stackexchange.com/questions/214784",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167733/"
]
} |
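As one concrete way to apply the "separate origin" and Content-Security-Policy advice above, the sketch below (assuming a Flask application; the host names are placeholders) serves the login page with a CSP header so that only scripts from origins you explicitly allow can run on it; an ad network's script would simply be blocked there.

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(response):
    # Only first-party scripts (plus one reviewed static host) may execute on this origin.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self' https://static.login.example.com; "
        "frame-ancestors 'none'"
    )
    return response

@app.route("/login")
def login():
    return app.send_static_file("login.html")

if __name__ == "__main__":
    app.run()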
214,814 | I've been playing around with different login forms online lately to see how they work. One of them was the Facebook login form. When I logged out of my account my email and password were autocompleted by my browser. Then I decided to misspell my email and see what would happen if I tried to log in. To my surprise I logged in with no problem after changing my email from [email protected] to [email protected] . I then started experimenting with different spelling errors and I had no problem logging in as long as it was not too far off my real email. I tried changing the domain name as well [email protected] , my email prefix [email protected] etc. Then I also tried misspelling my password and as long as it was not too far off my real password I could log in no problem (with the password it worked when adding one random letter before or after the real password, but not when adding a letter in the middle of it). I also checked the actual data sent in the request by looking at it in Chrome DevTools, and the wrong data was in fact being sent. How can this be? Should I be worried about my account's security? | Facebook is allowing you to make a handful of mistakes to ease the login process. A Facebook engineer explained the process at a conference . The gist of it is that Facebook will try various permutations of the input you submitted and see if they match the hash they have in their database. For example, suppose your password is "myRealPassword!" but you submit "MYrEALpASSWORD!" (caps lock on, shift inverting the case). The submitted password obviously doesn't match what they have stored in their database. Rather than reject you flat out, Facebook improves the user experience by trying to "correct" a few common mistakes, such as inserting a random character before or after, capitalizing (or not) the first character, or mistakenly using caps lock. Facebook applies these filters one by one and checks the newly "corrected" password against what they have hashed in their database. If one of the permutations matches, Facebook assumes you simply made a small mistake and authorizes your session. While worrying at first glance, this is actually still perfectly secure for a few reasons. First and foremost, Facebook is able to do this without storing the password in plaintext because they are transforming your provided (and untrusted) input from the form field and checking if it matches. Secondly, this isn't very helpful for someone trying to brute force the password because online attacks are nigh impossible thanks to rate limiting and captchas. Finally, the odds of an attacker/evil spouse knowing the text of your password and not the capitalization are abysmally small and so the risk created as a result of this feature is equally small. Should you be worried? No, probably not. Further reading: https://www.howtogeek.com/402761/facebook-fudges-your-password-for-your-convenience/ | {
"source": [
"https://security.stackexchange.com/questions/214814",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/175806/"
]
} |
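Facebook's actual implementation is not public, but the idea in the answer above can be sketched with the Python standard library: hash the stored password once, and at login time hash a handful of "corrected" permutations of what the user typed and compare each against the stored hash. The permutation list and the choice of PBKDF2 here are illustrative assumptions, not Facebook's real pipeline.

import hashlib, hmac, os

def hash_pw(password: str, salt: bytes) -> bytes:
    # A slow password hash stands in here; real systems use bcrypt/scrypt/argon2.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def candidates(submitted: str):
    # The submitted password itself, plus a few common-mistake corrections.
    yield submitted
    yield submitted.swapcase()                       # caps lock was on
    yield submitted[:1].swapcase() + submitted[1:]   # keyboard auto-capitalised the first letter
    yield submitted[:-1]                             # stray character appended
    yield submitted[1:]                              # stray character prepended

def login_ok(submitted: str, salt: bytes, stored: bytes) -> bool:
    return any(hmac.compare_digest(hash_pw(c, salt), stored) for c in candidates(submitted))

salt = os.urandom(16)
stored = hash_pw("myRealPassword!", salt)
print(login_ok("MYrEALpASSWORD!", salt, stored))   # True: the caps-lock permutation matches
print(login_ok("totallyWrong", salt, stored))      # False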
214,828 | I am new to security work and I am implementing someone else's design. The design calls for a TCP server with TLS in an environment where there is no DNS - only IPs. I am working with a typical certificate chain (Self-signed Root cert -> Intermediate cert -> Endpoint cert). The TCP server presents the Endpoint cert to a client which has the public portion of the Intermediate cert pinned in its code. When the client connects, it will check that the Intermediate cert was used to sign the Endpoint cert. So as I understand it, when the client connects, a key exchange occurs to secure the communication and the client then uses the certificate chain verification to verify that the server really is who it says it is. However, in this scheme, couldn't an impostor just present the certificate after getting it from the real server? Am I misunderstanding this as a flaw? From my understanding, without DNS names to tie the certificate to, checking the chain (with pinned signed parent certificate or not) is not sufficient. Can anything else be done here? | However, in this scheme, couldn't an impostor just present the
certificate after getting it from the real server? An impostor cannot present, and take advantage of, the real server's certificate unless it also has the matching private key. This is true whether a SAN DNS entry or an IP entry is used to identify the certificate being presented. | {
"source": [
"https://security.stackexchange.com/questions/214828",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31934/"
]
} |
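On the client side, the design described above can be approximated with Python's ssl module: load only the pinned intermediate as a trust anchor and let the library match the server's iPAddress SAN against the address you dialled. The IP, port, and file name are placeholders, and the partial-chain flag (which lets an intermediate act as the trust anchor) assumes a reasonably recent Python/OpenSSL.

import socket, ssl

PINNED_INTERMEDIATE = "intermediate.pem"   # public part of the intermediate shipped with the client
SERVER_IP = "203.0.113.10"
PORT = 8443

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(cafile=PINNED_INTERMEDIATE)      # trust nothing but the pinned issuer
ctx.verify_flags |= ssl.VERIFY_X509_PARTIAL_CHAIN          # allow a non-root trust anchor
ctx.check_hostname = True                                  # match the certificate's iPAddress SAN

with socket.create_connection((SERVER_IP, PORT), timeout=5) as sock:
    # Passing the IP as server_hostname makes the library check it against the SAN;
    # the server must still prove possession of the private key during the handshake.
    with ctx.wrap_socket(sock, server_hostname=SERVER_IP) as tls:
        print("connected with", tls.version())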
214,845 | In my work with online orders, I started noticing an extreme abnormality in a few orders. In one field that wasn't restricted there appeared a string of over 3 million characters that were totally gibberish consisting mostly of Cyrillic characters. On closer examination using Python, it turned out it was actually a list of over a thousand of such gibberish strings. I dug deeper and found more instances of that, the worst with a string of over 58 million characters consisting of over 18000 list elements. So we have a string that consists of several lists of strings, those strings again consist of several gibberish words separated by non-breaking spaces. An example (I added linebreaks for readability): 'Р В Р’ВР
’ Р В РІР‚в„ўР вР
‚™Р’В Р В Р’В Р Р
 вЂ Р В РІР‚љРІвЂћСћР Р
’ РІР‚™Р’ВР
’ Р В Р’ Р’РВ
’ Р Р†Р РР
†Р вЂљРЎв„ўР В Р вЂ Р Р†Р вЂљРЎвЂєР Р
ЋРЎвЂєР В Р’ Р’ РІРР
ІР‚љРІвЂћСћР В РІРВ
‚™Р’В РІThe following is a count of the 10 most common words in the 58 million character string: Р2453256
В 1926812
Р’В 895699
’В 822674
ІР399677
РІР‚в„ўР 382349
†235180
‚Р185503
‚в„ўР177792
†109266
ІвЂћСћР101490 Now take e.g. the string "РІР‚в„ўР" and put it into Google. I'm getting over a million seemingly random sites where those strings are inserted into the source code of the sites. I have absolutely no idea what to make of this; does anyone know what this is? | This looks like mojibake rather than an attack payload: text that has been run through the wrong character encoding several times. The pattern ("Р В", "Р’", "РІР‚в„ў" and friends, separated by non-breaking spaces) is what you get when UTF-8-encoded Cyrillic text is repeatedly decoded as Windows-1251 and re-encoded as UTF-8; every round roughly doubles the length, which is how a short string balloons into millions of characters, and each round also injects the non-breaking spaces you noticed. The huge number of Google hits are most likely other sites with the same encoding bug somewhere in their content pipelines. In practice this kind of submission usually comes from spam bots or broken client software stuffing an unrestricted form field, not from a targeted exploit. Sensible responses are to put a length limit and validation on that field, log and discard such submissions, and make sure every layer of your stack (forms, application, database) consistently uses UTF-8. | {
"source": [
"https://security.stackexchange.com/questions/214845",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/213993/"
]
} |
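The gibberish in that record can be reproduced in a couple of lines of Python: take any Cyrillic text and repeatedly decode its UTF-8 bytes as Windows-1251. Each pass roughly doubles the length and introduces the non-breaking spaces and "РІР‚в„ў"-style fragments seen in the order fields; the starting letter below is arbitrary.

text = "В"   # any Cyrillic character will do
for i in range(1, 4):
    text = text.encode("utf-8").decode("cp1251")   # the classic UTF-8 / Windows-1251 mix-up
    print(i, repr(text))

# 1 'Р’'
# 2 'Р\xa0вЂ™'
# 3 'Р\xa0В\xa0РІР‚в„ў'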