source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
51,039 | If an attacker knows my database username and password, what can he possibly do if he doesn't have access to my server? Let's say my database is using Microsoft SQL Server. Is there a way he can use those credentials to manipulate my database? | Assuming that: you're 100% sure that your database server only accepts local connections; you're 100% sure that the attacker doesn't have access to the local environment from which connections are allowed; you're 100% sure that the application that uses the database is otherwise secure; and you're 100% sure that those credentials aren't used for anything else directly or indirectly related to the system. Then, there's no problem in them being exposed. Of course, as you can see, those are very big assumptions. If you're willing to take the bet, then go ahead and allow the credentials to be exposed. If not (I wouldn't), then work as best as you can on your security, and keep those credentials secret. | {
"source": [
"https://security.stackexchange.com/questions/51039",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/39592/"
]
} |
51,060 | UPDATE: My questions and concerns below boil down to: "Should I reject obviously poor passwords like 'hellomydarling' or 'password'? My guess is yes and I want to know to what extent. I'm using a password strength estimator to assist with that. I have been doing research regarding usability verses security when it comes to password selection. I came across this very interesting quote I found on this Stack Exchange. "Security at the expense of usability comes at the expense of security" https://security.stackexchange.com/a/6116/39548 My thoughts and question stem from this. I am working on a prototype for a password strength meter that appears beside new password selection input fields (JsFiddle at the bottom). The policy I have in place is to make my validator fail if the password is deemed "unusable" by me and to let "better" passwords through. However, I am concerned about the effect of banning " teenagemutantninjaturtles " but letting " Password must be at least 8 characters " through. Both are bad passwords, but my prototype really only handles the first very well. This password strength meter is based on Dan Wheeler's zxcvbn "realistic password estimation library" available from https://github.com/lowe/zxcvbn and discussed in detail here: https://tech.dropbox.com/2012/04/zxcvbn-realistic-password-strength-estimation/ I like zxcvbn because it actually uses a dictionary to help identify weak or poor passwords. However, I believe it is too forgiving of plain text pass phrases. For example, it's stock implementation will award the highest possible score (4/4) for 'teenagemutantninjaturtles'. However, Dan Goodwin at Ars Technica has pointed out that without true randomness, pass phrases aren't turning out to be very effective to thwart motivated attackers ( http://arstechnica.com/security/2013/10/how-the-bible-and-youtube-are-fueling-the-next-frontier-of-password-cracking/2/ ). So that is why I'm attempting to bolster the behavior of the password strength meter by finding passwords that easily match the zxcvbn's dictionary and simply mark them as "unusable". For your reference, we disclose that the minimum password length is 6 (even though they suck) and the maximum is 50 chars. There are no other restrictions or requirements. We do not disclose the precise reason why the password is unusable (e.g. it matches a dictionary word, or is to simple, or matches a pattern, etc) although we do provide hints such as those examples. Here are my actual questions: Considering that it's very hard for password strength meters to properly evaluate 'good' passwords from 'poor' passwords, when thinking about human behavior and security... Is it worth it to even include a password strength estimator? Is it worth trying to bolster the existing library with stricter
rules? Am I just going to frustrate my users when I ban a password they
think is a good idea? (I'm worried this will potentially cause them to either give up or copy there password to their monitor, etc). If it is not a good idea, what's the best way to alert users and stop the worst possible passwords? Note: I am not trying to invent my own security protocol. Once we get the password over https, we store it using via bcrypt with a high cost/iteration. For the sake of argument, let's pretend like this is the only security problem I have left to solve (which it is not). I am simply trying to encourage better password use for our users. JsFiddle: I've copied over my work in progress to a jsFiddle, where my own changes are in the script part (and zxcvbn is an included resource). http://jsfiddle.net/wsKEy/1/ There's also a lot of commentary and work done to make the password strength meter more usable, which is why there are timers and stuff going on, too. Thank you for your thoughts. | Is it worth it to even include a password strength estimator? A strength meter? No, because it is too difficult to inform normal users that the "strength" listed is an absolute maximum possible strength, and that their password may be trivially crackable by a skilled opponent no matter what the meter says, while still asking they pay attention to the meter. A known-weak-password warning? Absolutely! If you can detect that a password is weak, then it's weak. You merely cannot possibly detect whether or not a password is strong, since your software won't have realtime access to the latest in cracking dictionaries, rulesets, and other advances (Markov chains, etc.) Is it worth trying to bolster the existing library with stricter rules? Yes! Go take a look at Hashcat for the types of rules it supports, and note that when you have the plaintext user password, it's easy to apply many rules. You can handle all the uppercase/lowercase rules with a simple UPPER() equivalent and an all-uppercase dictionary - if you find it, it's weak. (JacQueLinE) Appending/prepending numbers purely to meet length minimums is a simple pattern match - if the last/first N characters are numbers, and the remaining length isn't enough, it's weak. (Riddick123) Remove N numbers from the beginning/end, uppercase it, and check the dictionary for the remainder (JacQueLine12) The above, but N-1 numbers and/or symbols (#1JacQueLine) The above, but date formats. (JacQueLine02121995) If the last/first N-1 characters are numbers and the last/first is a symbol, and the remaining length isn't enough, it's weak. (!JacQueLine1) Take out one character at a time, see if it matches the dictionary. (jacqu$eline) Combine some of these. Reverse all of these. Do a pattern-match for dictionary words as subsets, i.e. correcthorsebatterystaple correct: 1813th most common English word, row 16828 on phpbb, row
9871 on Ubuntu american english small. horse: 1291st most common English word, row 14820 on phpbb (horses is
at row 1723!), row 21607 on Ubuntu american english small. battery: 3226th most common English word, row 7775 on phpbb, row 3644
on Ubuntu american english small. staple: 6 characters, all lower case, not in the top 5000 most common
words. row 40524 on phpbb (staples is at row 3852!), row 42634 on
Ubuntu american english small. Note that all of these are length 7 or less words; there are less than 21,000 length 7 or less words in Ubuntu's american english small dictionary, 21000^4 ~= 1.9E17, which is more or less 2^58 for a very simple "combinator attack: 4 words, no separators, length 1-7, from this one small dictionary". Certainly correcthorsebattery would be a much, much weaker password against a combinator attack - 3226^3 ~= 3.3E10 ~= 2^35, using the top 3226 most common English words. Get some better dictionaries; don't try to send them to the client, host them and the more complex rules serverside. Sure, send the client a tiny one for a first pass, but you need more. Phpbb is the best common small wordlist I know of, then add in rockyou. Many crackers start with brute force for tiny passwords, then small wordlists and large rulesets, then large wordlists - the largest I'm aware of is over 30GB, and includes almost every password found to have been cracked by anyone on a given popular forum, plus many, many other large wordlists. Find yourself a happy medium - fast enough to be performant, large enough and with enough "rules" to cut out the first few fast passes of cracking software - if you really are using enough bcrypt iterations, then only small dictionaries + large rulesets and large dictionaries + small rulesets will be practical attacks for a few years. Am I just going frustrate my users when I ban a password they think is a good idea? (I'm worried this will potentially cause them to either give up or copy there password to their monitor, etc). Yes. When you say "password", "Password", "P@$$w0rd", "P@$$w0rd1", "P@$$w0rd123", and even "P@$$w0rd123!" are bad passwords, you're going to annoy them. When you say "Jennifer2007" is a bad password, they're going to be frustrated (and perhaps Jennifer will be upset, too!). Manage their frustration as best you can, and simply accept some. Personally, I would recommend actually being explicit - tell them their password is a word in known cracking dictionaries plus two numbers, which is a normal cracking rule! Your purpose is twofold. First, you don't want weak passwords in your system. Second, you want to educate users on what a weak password is, so they have some understanding to mitigate their frustration. As part of educating, perhaps show them some alternatives you generate that pass your own tests, if you flunk their password. 1) Fully random passwords 2) Fully random passwords translated into bubblebabble or another pronounceable subset 3) correcthorsebatterystaple type passwords, but with longer and uncommon words. For instance, take the Ubuntu american english insane dictionary, subtract out all the words in the american english small dictionary, and select N words of at least 7 characters in length. This leaves you without any really short words, and without the most common words. 4) a mix of 1, 2, and/or 3. Then your users can, if they choose, simply pick something you showed them (over HTTPS with the best cipher suites you can get away with, of course). Personally, I would also strongly suggest raising your length limit; about 14 is what I would recommend, but for most userbases that's just too long. Try a minimum of 12 or even 10, enough so a fully random password might have a slight amount of value at the minimum length and character set. | {
"source": [
"https://security.stackexchange.com/questions/51060",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/39548/"
]
} |
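The answer above lists its dictionary-plus-mangling rules in prose; here is a minimal Python sketch of a few of them (my own illustration, not the asker's zxcvbn setup). `wordlist` is assumed to be a set of lowercased words loaded from something like the phpbb or rockyou lists the answer mentions, and the symbol set and length thresholds are arbitrary choices.

```python
import re

def is_obviously_weak(password, wordlist):
    """Mirror a few of the cracking rules described above.
    wordlist: a set of lowercased dictionary words (assumed preloaded)."""
    p = password.lower()
    if p in wordlist:                      # bare dictionary word
        return True
    # strip up to 4 digits/symbols tacked onto either end (e.g. "Riddick123", "#1JacQueLine")
    stripped = re.sub(r'^[\d!@#$%^&*]{1,4}|[\d!@#$%^&*]{1,4}$', '', p)
    if stripped in wordlist:
        return True
    # crude combinator check: two dictionary words glued together
    for i in range(3, len(p) - 2):
        if p[:i] in wordlist and p[i:] in wordlist:
            return True
    return False

# is_obviously_weak("Riddick123", {"riddick"}) -> True
```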
51,116 | Is it bad security practice to have a password that will allow you to access any user account on a website for support purposes? The password would be stored in a secure manner and would be super complex. Huge mistake? | This sounds very much like an "Administrator" account, which typically otherwise has unlimited access to the things that it's the administrator of. The security implications of an admin account are pretty well-understood, as are the best practices. I won't go into all the details, but your implementation breaks with best practice on one key feature: traceability. You want to be able to tell who did what, especially when it comes to administrators. If an admin can log in to my account using his password, then there's no way for an auditor after-the-fact to determine what was done by me versus what was done by the admin. But if instead the admin logs in to his OWN account with his super-secure password, and then, through the access he has via his own account, performs some action I could have done as well -- well, now we can have a log telling who did what. This is pretty key when the manure hits the fan. And even more so when one of the admin accounts gets compromised (which it will, despite your best efforts). Also, say the business grows and you need 2 admins. Do they share the superawesome password? NO NO NO. They both get their own admin accounts, with separate tracing and logging and all that. Now you can tell WHICH admin did what. And, most importantly, you can close one of the accounts when you fire one of the admins for stealing the donuts. | {
"source": [
"https://security.stackexchange.com/questions/51116",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5169/"
]
} |
51,290 | Here’s a quote from a reddit discussion : … for poker [a cryptographically secure RNG] is completely unnecessary.
If you have an appropriate unpredictable seed, and you are throwing away a lot of the randomness, MT is perfectly safe. I'd normally write this off as ignorance, but this goes on: I've actually implemented a real-money poker backend, and the company even had it certified. And since nobody lies on the internet, this makes me wonder. Said person expands on this in another part of the discussion: [To prevent predictability] you would not use 624 consecutive values produced from [the MT generator]. If you want a real-money app. certified, one of the criteria is to throw away a large and unpredictable amount of the output of the PRNG. You also don't know the specifics of how the PRNG was used to produce a shuffle. So, you have no way to map the cards you see on the table to a sequence from the PRNG. You also don't know when the PRNG is reseeded. One last thing. You need to store (2^19937 − 1) * 4 bytes of data for lookup, in order to find the pattern you need to predict. Except for the last paragraph, these arguments sound suspiciously like security by obscurity. The last paragraph sounds less so, but somebody else in the discussion has claimed that the statement is untrue, and you don't need such a big lookup table. Just to clarify, I'm aware that MT19937 isn't cryptographically secure (and so is the person I'm quoting). However, my assumption so far was that gambling (and poker) would require a cryptographically secure random source – and not just a secure seed – (a) to be tamper proof, and (b) for certification. Is this wrong? | Here is the cryptographer's point of view. The person you quote says: "you don't need a cryptographically secure PRNG", but what he actually claims is "when I use MT 19937 and do some mumbo-jumbo such as throwing away a large part of the output, it somehow becomes a cryptographically secure PRNG". His comment about storing "(2^19937 − 1) * 4 bytes for lookup" is enough to demonstrate that he is not very clear in his head about what security, cryptography, randomness and unpredictability actually are. This figure is the "period"; the period was used in (very) older times as a measure of security, because known PRNGs from that time had very small periods, to the point that repetition did occur in practice. This engendered a whole family of non-crypto PRNGs whose designers were trying to overawe their competitors by flourishing the longest possible period. This makes no real sense from a cryptographic point of view. Security of a PRNG is about unpredictability, and a very short period is an issue only because it allows future output to be predictable. AES used in CTR mode is a PRNG with a period of 2^135 bits, a figure much lower than 2^19937 − 1, and yet not a problem at all. The "throwing away of a large and unpredictable amount of output" also illustrates the confusion. Removing bits from the output may hide some state leakage from a weak PRNG; this can even turn a weak PRNG into a strong one, as studied with the shrinking generator. However, it does nothing about seed predictability; indeed, if some of the output is "thrown away", then there is a mechanism which decides what to keep and what to throw away, and that mechanism is also part of the PRNG. If all of this is seeded with the current time, then exhaustive search on the possible "current time" values will be efficient (current time is not a secret) and will unravel the whole thing. However we may argue that though MT is not cryptographically secure, this does not mean that making an effective attack is easy.
There are three types of PRNG: The awfully weak algorithms, either through very poor processing (leaking the internal state), or predictable seed, or both. These are broken in practice. The "cryptographically strong" PRNGs, which resist attacks even in the ludicrous conditions that academics assume (an academic will consider an algorithm as "broken" if it claims 128-bit security but offers only 2^123.4 resistance). The grey zone in between: broken as per academics, but a practical attack is not immediate. With his Mersenne Twister, he is in the grey zone, and he believes that his voodoo manipulations will keep it that way. It is entirely plausible that he could also convince an auditor that voodoo works. This in no way implies that the algorithm is secure; only that an auditor was ready to sign a paper claiming that the algorithm fulfils some legal requirements. At that point, it is a good thing to remember that some other auditors found it fit to sign papers claiming that Enron was a financially sound and clean venture: this helps to put things into the right perspective. From what he describes, chances are that the initial seed is time-based, and the actual security relies entirely on the non-publication of the algorithm details. That's security through obscurity at its best (or worst, depending on the point of view). | {
"source": [
"https://security.stackexchange.com/questions/51290",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1530/"
]
} |
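To make the answer's "seeded with the current time" point concrete, here is a hypothetical sketch (not the certified backend's code, and a real attacker would only see a few cards rather than the whole deck). CPython's `random.Random` is MT19937 internally, so a shuffle seeded with the deal time can be recovered by trying every plausible second.

```python
import random
import time

def shuffle_deck(seed):
    rng = random.Random(seed)      # Mersenne Twister under the hood
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

dealt_at = int(time.time())
observed = shuffle_deck(dealt_at)  # what a naive, time-seeded server would produce

# Attacker: brute-force every second in a two-hour window around the deal.
for guess in range(dealt_at - 3600, dealt_at + 3600):
    if shuffle_deck(guess) == observed:
        print("seed recovered:", guess)
        break
```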
51,294 | I'm implementing a REST service that requires authentication. I cannot store any per-user state (such as a randomly-generated token) because my service does not have direct access to a database, only to another backend service. The solution I came up with is creating a JSON Web Token ( JWT ) when the user authenticates. The JWT claims set contains the user ID in the Subject ("sub") field. The server then encrypts the claims set directly ("alg":"dir") using AES GCM with 256 bit key ("enc":"A256GCM") creating a JWE . The key is generated once when the service starts and stored in memory. To authenticate, the client submits the username/password, and the server responds with the token described above. The client then sends that token with each subsequent request. When the client submits the token with subsequent requests, the server decrypts it using the key, and assumes the user ID in the "sub" field to be the ID of the current user, without any further authentication checks. Token expiration is handled by the "exp" field in the JWT claims set. The connection between the client and the server will use SSL/TLS, so the token will not leak. I'm using this library to create and read JWTs as I don't trust myself to write correct cryptography code. My questions: Is the above approach secure? Can an attacker impersonate another user by manipulating the token? Is the approach over-complicated? Would using MAC (in other words: JWS ) instead of encryption have the same security? (or possibly more, since it's simpler and there's less chance of making a mistake). There's nothing particularly secret in the JWT claims set, and the user knowing their own ID doesn't matter. Is my choice of JWE algorithm and encryption appropriate? For the JWE "alg", the library I'm using supports direct encryption (using the key directly to encrypt the claims set) and RSA (generating a new key to encrypt the claims set for each token, and encrypting the generated key with an RSA public key). I chose the former because it's easier to generate a symmetric key than an RSA key. For the JWE "enc", the library supports AES GCM and AES CBC HMAC SHA2 (with various bit lengths). I chose GCM arbitrarily. | Your basic approach is valid: generate the JWT when the user logs in, expect subsequent messages to carry the JWT, trust the subject field in the JWT in those subsequent messages if the JWT is valid. But there are several things you should be aware of: As Daisetsu say, you could use a MAC ("alg":"HS256") as MACs are specifically designed to prevent alteration of the payload, while encryption algorithms typically (counter-intuitively) are not. However since you're specifically using AES in GCM mode, you already get tamper-resistant encryption ("authenticated encryption"), so that's not really a concern. When validating an incoming JWT, be careful what you consider valid. For example, I could call your service with {"sub":"me","alg":"none"} and while that JWT is valid in some sense, it isn't something you want to accept. Since JWT is a draft, not a standard yet, it might change. If it changes enough, the library you're using might have to change in ways that break compatibility with your code. If you can't store any server-side state, you have no way to invalidate the JWT when the user logs out. In effect your service has no logout function, which may be a security problem especially if you set the expiration time too far in the future. 
If you set the expiration time too soon, you may have a problem with users still being logged in but not having a valid JWT. This may lead to awkward error-handling and user workflow issues. Since you said your server has no access to a database, I assume the actual login is handled somewhere else, perhaps the backend server you mentioned. You didn't say how your server knows that the user just logged in. Depending on user perception of the relationship between your service and the thing they know they logged into, the last two points above might be moot. | {
"source": [
"https://security.stackexchange.com/questions/51294",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26899/"
]
} |
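A minimal sketch of the HS256 (JWS) alternative discussed in the question and answer, using the PyJWT library (my choice for illustration; the asker used a different library). The detail the answer warns about is pinning the accepted algorithms on verification, so a token claiming "alg":"none" is rejected.

```python
import time
import jwt  # PyJWT

SECRET = b"server-side-key-generated-at-startup"   # kept in memory, as in the question

def issue_token(user_id):
    claims = {"sub": user_id, "exp": int(time.time()) + 3600}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token):
    # algorithms=["HS256"] is the important part: never let the token pick its own algorithm.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # also enforces "exp"
    return claims["sub"]
```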
51,552 | I'm making a few assumptions about basic email security, and I want to confirm or clarify some of these points to make sure I understand the big picture. Please correct me where I'm mistaken: The answer to this question gives some insight, but doesn't cover all I'm looking for. This is all assuming a traditional email service, accessed using a desktop or mobile client, over POP or IMAP, and SMTP (ignoring webmail). Suppose I'm retrieving messages - my client app passes my username and password to the POP server, which authenticates me, and sends back the messages. If I'm not using SSL/TLS, then the entire conversation, including the message and credentials, is in plaintext. And anyone watching the network traffic can intercept the entire thing. And if I am using SSL, then the entire conversation is safe, even over a public network. Do I have that right? My understanding is that traditional messages are insecure when my server talks to someone else's server - so the message itself is likely vulnerable while in transit between servers, but at least with SSL my email password would be safe. If I understand, PGP or similar would mean that the message itself is encrypted, so that as long as my and the recipient's private keys are safe, nobody else could read the message. But that's just the message, right? Not the IMAP/SMTP/POP connection? Meaning if I used PGP for the message, but a non-SSL connection to SMTP, I'd still be sending plaintext username and password to authenticate. Basically, I'm trying to understand why an email provider would refuse to offer SSL/TLS for POP/IMAP/SMTP connections - one particular provider says they don't do it because email is inherently insecure anyway, so SSL doesn't actually do anything to protect you, and they suggest PGP for truly secure email. I'd like to argue that while SSL may not be end-to-end message protection, it would at least protect my credentials and protect my message for a significant portion of its journey (me to SMTP server, and POP server to recipient assuming they're connecting with SSL). Do I have everything straight with that? | To answer your question: If you're using SSL/TLS to access your e-mails, regardless of whether it's POP or IMAP then it would be very difficult for anyone to decipher the text of the e-mails from analysing the traffic alone. That said some large companies e.g a law firm I used to work for have a server which sat between us and the internet, stripping out the SSL so the answer is a qualified yes in saying you're safe. Also if the message is still sitting on the e-mail server after you access it, then it is possible for your e-mail provider to be bribed/coerced into handing it over. Of course GPG does solve this problem. You can also ask your e-mail provider to delete messages you download through POP though you have to trust that they're both able and willing to delete the messages securely. With SSL, when logging in using your e-mail password, your password cannot easily be read by intercepting the traffic between your computer and e-mail server subject to the proviso I mentioned in point one. Your understanding of PGP is largely correct. If a message is sent to you encoded with your public key then only you our someone who has broken your private key can read it. The sender's private key is irrelevant as he can't use it to decode a message meant for you. 
If your e-mail password were sent in plain text though you're absolutely right in thinking it could be intercepted and someone could access your e-mails or send them on your behalf pretending to be you. If you used PGP at all times to encrypt and sign all messages and others did the same for you, this would protect the content of your messages though. I can't imagine why an e-mail provider wouldn't want to offer SSL beyond sheer laziness. You can even get free certificates these days! Most of the free webmail providers offer SSL e.g Gmail, have you thought about using them? | {
"source": [
"https://security.stackexchange.com/questions/51552",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12429/"
]
} |
51,567 | When I generate a DSA key with ssh-keygen -t dsa , the resulting public key will begin with ssh-dss . How come? Why not ssh-dsa ? | DSS is simply a document that describes the signing procedure and specifies certain standards. The original document is FIPS 186 and latest revision in 2013 is FIPS 186-4 . DSS is a standard for digital signing. DSA is a cryptographic algorithm that generates keys, signs data, and verifies signatures. DSA, in itself, can use any hash function for its internal "cryptomagic", and it can also use any (L, N) for its parameters' length. DSS, as a standard, defines DSA's optional specifications. DSS says that DSA should use SHA-1 as its hash function (recently, SHA-2). DSS says that DSA should use specific length pairs such as (2048,224), (3072,256), etc. When SSH says DSS, they mean that they're implementing DSA in compliance with the DSS. | {
"source": [
"https://security.stackexchange.com/questions/51567",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/37814/"
]
} |
51,648 | I have recently heard that most of the PHP code is confidential, because if attackers know your database structure or the hash function used to encrypt the passwords, there is higher chances of a breach. I was wondering, if it's so, then what about open source projects where everything is wide out in the open? | I have recently heard that most of the PHP code is confidential, because if attackers know your database structure or the hash function used to encrypt the passwords, there is higher chances of a breach. That's only when designers don't use the correct hash function or protect the webapp from SQL injection. But these things can be easily detected, there eve are automated tools these days that can look for SQL injection vulnerabilities via the source code (instead of pointing sqlmap at the webapp) If you use something like scrypt, you're perfectly safe in telling everyone that you use it. MediaWiki used to use a salted MD5 hash. The salt is installation specific and not part of the open source code (it gets generated on install). This isn't as secure as it can be (per-user salts and using a better hashing algorithm would be better), but it's still secure as long as the LocalSettings file isn't exposed. (Admittedly, that's not a great level of security). I think Drupal does this too , but with sha512. I was wondering, if it's so, then what about open source projects where everything is wide out in the open? Security by obscurity is not security. If putting the code out in the open impacts security, then you don't have any security in the first place. Besides, if there are security holes in the code, these get caught by contributors too. I recently caught and patched a security-related bug in the Bugzilla software, for example. | {
"source": [
"https://security.stackexchange.com/questions/51648",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
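As a small illustration of the answer's point that safety should not depend on hiding the source: the sketch below (table and column names are invented) could be published verbatim and still be fine, because it relies on parameterized SQL and a per-user random salt fed to scrypt rather than on secrecy.

```python
import hashlib
import os
import sqlite3

def store_user(db, username, password):
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1,
                            maxmem=64 * 1024 * 1024, dklen=32)
    # Placeholders keep user input out of the SQL text, so injection is a non-issue.
    db.execute("INSERT INTO users (name, salt, hash) VALUES (?, ?, ?)",
               (username, salt, digest))
    db.commit()
```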
51,680 | Over the last couple of years there have been a number of changes in what would be considered an optimal SSL cipher suite configuration (e.g. the BEAST and CRIME attacks, the weaknesses in RC4) My question is, what would currently be considered an optimal set of SSL cipher suites to have enabled with the following goals. Provide the most secure connection possible, including where possible Perfect Forward Security and avoiding known weaknesses. Provide compatibility with a wide range of commonly deployed clients including mobile devices (e.g. Android, iOS, Windows Phone) and desktop OS (including Windows XP/IE6) | The most secure setup doesn't depend only on ciphers, but also on the tls-version used. For openssl, tls 1.1/1.2 is preferred. BEAST and CRIME are attacks on the client and are usually mitigated client-side, but there are server-side mitigations too: CRIME: just disable ssl-compression; that's it BEAST/Lucky13: just use TLS 1.1, no SSLv3 and no RC4, see Is BEAST Still a Threat? (Ivan Ristic) BREACH: works only, if some conditions are met, see breachattack.com ; easy and always-working mitigation would be to disbale http-compression (gzip) For a perfect setup: SSL always impacts performance on a high level, RC4 and other fast cipher-suites might still be ok for static content, esp. when served from your own cdn. A nice guide to understanding OpenSSL is OpenSSL Cookbook with detailed explanations also on PFS , cipher-suites, tls-version etc. pp. there are 2 blogposts that explains PFS and practical setup: SSL Labs: Deploying Forward Secrecy Configuring Apache, Nginx, and OpenSSL for Forward Secrecy cipher-suites-suggestions to enable PFS also on older clients: # apache
SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder on
SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 \
EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 \
EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"
# nginx
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 \
EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 \
EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"; For a detailed nginx/ssl-manual I'd like to direct you to this Guide to Nginx + SSL + SPDY . | {
"source": [
"https://security.stackexchange.com/questions/51680",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/37/"
]
} |
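One way to sanity-check a deployment like the configurations above from the client side (this check is my addition, not part of the answer): ask Python's `ssl` module which protocol version and cipher suite actually get negotiated. The hostname is a placeholder.

```python
import socket
import ssl

def negotiated(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()

# e.g. ('TLSv1.2', ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128))
print(negotiated("www.example.com"))
```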
51,771 | So, I want to start using pass , but I need a GPG key for this. This application will store all of my passwords, which means it's very important that I don't lose my private key, once generated. Hard disks break, cloud providers are generally not trusted. Not that I don't trust them to not mess with my key, but their security can be compromised, and all my passwords could be found. So, where can I safely store my GPG private key? | I like to store mine on paper. Using a JavaScript (read: offline) QR code generator, I create an image of my private key in ASCII armoured form, then print this off. Note alongside it the key ID and store it in a physically secure location. Here's some that should work for you no matter what operating system you use, as long as you have a browser that supports JavaScript. For Windows users: Click here to download the JavaScript QR code generator: https://github.com/davidshimjs/qrcodejs/archive/04f46c6a0708418cb7b96fc563eacae0fbf77674.zip Extract the files somewhere, then proceed edit index.html per the instructions below. For MacOS or Unix users: $ # This specific version is to avoid the risk that if someone hijacks `davidshimjs`'s
$ # repository (or he goes rogue), you will still be using the version that I vetted.
$ # For the truly paranoid you don't trust GitHub either, and you will want to verify the code you download yourself.
$ wget https://github.com/davidshimjs/qrcodejs/archive/04f46c6a0708418cb7b96fc563eacae0fbf77674.zip
$ unzip qrcodejs-04f46c6a0708418cb7b96fc563eacae0fbf77674.zip
$ cd qrcodejs-04f46c6a0708418cb7b96fc563eacae0fbf77674/
$ # We need to edit index.html so that it supports pasting your PGP key
$ # Open the file in a text editor like Notepad, vi, or nano
$ vi index.html Change line 11 from: <input id="text" type="text" value="http://jindo.dev.naver.com/collie" style="width:80%" /><br /> to: <textarea id="text" type="text" value="http://jindo.dev.naver.com/collie" style="width:80%" /></textarea><br /> Now navigate to the directory you get here with Explorer, Finder, or Nautilus, etc. For example: $ pwd
/Users/george/Documents/Code/qrcodejs/qrcodejs-04f46c6a0708418cb7b96fc563eacae0fbf77674
$ open . Now, double click on the index.html file you just edited and saved. You will most likely need to break up your PGP key into quarters or even smaller to create nice big QR codes that you can easily scan later. After pasting in the text area, click away from the text box and your QR code should appear. Save each one as you go and name them appropriately so that you know their order! After you've created all the codes, scan them with, for example, a mobile phone QR code scanner app. For the paranoid, keep this device offline once you've installed a barcode reader and then perform a full wipe and factory reset of the device before putting it back online. This will prevent the QR scanner app from leaking your PGP key. If you have a large key or lots of keys I recommend paperbak , although be sure to write down instructions on how to recover the data later. Just as important as how you back it up is how you restore it from a backup. I'd probably try this with dummy data just to be sure you know exactly how it works. Worth noting you can protect your private key with a passphrase, so even if it's hosted with a cloud provider they can't see your private key, but then all your password security is reduced to that passphrase rather than the full private key, not to mention cloud providers can disappear overnight. | {
"source": [
"https://security.stackexchange.com/questions/51771",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12941/"
]
} |
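The same paper-backup idea can be scripted instead of done by hand in a browser. A rough sketch using the Python `qrcode` package (an assumption on my part; chunk size and filenames are arbitrary, and the gpg export command is the standard way to get the armoured key). As the answer stresses, run this offline and treat the printouts as secrets.

```python
import qrcode

def key_to_qr_images(armored_key, chunk_size=800):
    """Split an ASCII-armoured private key into chunks and emit one QR PNG per chunk."""
    chunks = [armored_key[i:i + chunk_size]
              for i in range(0, len(armored_key), chunk_size)]
    for n, chunk in enumerate(chunks, start=1):
        img = qrcode.make(chunk)
        img.save(f"privkey-part-{n:02d}-of-{len(chunks):02d}.png")

# armored_key would come from: gpg --export-secret-keys --armor <KEYID>
```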
51,959 | I know there are many discussions on salted hashes, and I understand that the purpose is to make it impossible to build a rainbow table of all possible hashes (generally up to 7 characters). My understanding is that the random salted values are simply concatenated to the password hash. Why can a rainbow table not be used against the password hash and ignore the first X bits that are known to be the random salt hash? Update Thanks for the replies. I am guessing for this to work, the directory (LDAP, etc) has to store a salt specific to each user, or it seems like the salt would be "lost" and authentication could never occur. | It typically works like this: Say your password is "baseball". I could simply store it raw, but anyone who gets my database gets the password. So instead I do an SHA1 hash on it, and get this: $ echo -n baseball | sha1sum
a2c901c8c6dea98958c219f6f2d038c44dc5d362 Theoretically it's impossible to reverse a SHA1 hash. But go do a google search on that exact string , and you will have no trouble recovering the original password. Plus, if two users in the database have the same password, then they'll have the same SHA1 hash. And if one of them has a password hint that says try "baseball" -- well now I know what both users' passwords are. So before we hash it, we prepend a unique string. Not a secret , just something unique. How about WquZ012C . So now we're hashing the string WquZ012Cbaseball . That hashes to this: c5e635ec235a51e89f6ed7d4857afe58663d54f5 Googling that string turns up nothing (except perhaps this page), so now we're on to something. And if person2 also uses "baseball" as his password, we use a different salt and get a different hash. Of course, in order to test out your password, you have to know what the salt is. So we have to store that somewhere. Most implementations just tack it right on there with the hash, usually with some delimiter. Try this if you have openssl installed: [tylerl ~]$ openssl passwd -1
Password: baseball
Verifying - Password: baseball
$1$oaagVya9$NMvf1IyubxEYvrZTRSLgk0 This gives us a hash using the standard crypt library. So our hash is $1$oaagVya9$NMvf1IyubxEYvrZTRSLgk0 : it's actually 3 sections separated by $ . I'll replace the delimiter with a space to make it more visually clear: $1$oaagVya9$NMvf1IyubxEYvrZTRSLgk0
1 oaagVya9 NMvf1IyubxEYvrZTRSLgk0 1 means "algorithm number 1" which is a little complicated, but uses MD5. There are plenty of others which are much better, but this is our example. oaagVya9 is our salt. Plunked down right there in with our hash. NMvf1IyubxEYvrZTRSLgk0 is the actual MD5 sum, base64-encoded. If I run the process again, I get a completely different hash with a different salt. In this example, there are about 10^14 ways to store this one password. All of these are for the password "baseball": $1$9XsNo9.P$kTPuyvrHqsJJuCci3zLwL.
$1$nLEOCtx6$uSnz6PF8q3YuUhB3rLTC3/
$1$/jZJXTF3$OqDuk8T/cEIGpeKWfsamf.
$1$2lC.Cb/U$KR0jkhpeb1sz.UIqvfYOR. But, if I deliberately specify the salt I want to check, I'll get back my expected result: [tylerl ~]$ openssl passwd -1 -salt oaagVya9
Password: baseball
Verifying - Password: baseball
$1$oaagVya9$NMvf1IyubxEYvrZTRSLgk0 And that's the test I run to check to see if the password is correct. Find the stored hash for the user, find the saved salt, re-run that same hash using saved salt, check to see if the result matches the original hash. Implementing This Yourself To be clear, this post is not an implementation guide. Don't simply salt your MD5 and call it good. That's not enough in today's risk climate. You'll instead want to run an iterative process which runs the hash function thousands of times. This has been explained elsewhere many times over, so I won't go over the "why" here. There are several well-established and trusted options for doing this: crypt : The function I used above is an older variation on the unix crypt password hashing mechanism built-in to all Unix/Linux operating systems. The original (DES-based) version is horribly insecure; don't even consider it. The one I showed (MD5-based) is better, but still shouldn't be used today. Later variations, including the SHA-256 and SHA-512 variations should be reasonable. All recent variants implement multiple rounds of hashes. bcrypt : The blowfish version of the crypt functional call mentioned above. Capitalizes on the fact that blowfish has a very expensive key setup process, and takes a "cost" parameter which increases the key setup time accordingly. PBKDF2 : ("Password-based Key Derivation Function version 2") Created to produce strong cryptographic keys from simple passwords, this is the only function listed here that actually has an RFC . Runs a configurable number of rounds, with each round it hashes the password plus the previous round's result. The first round uses a salt. It's worth noting that its original intended purpose is creating strong keys , not storing passwords , but the overlap in goals makes this a well-trusted solution here as well. If you had no libraries available and were forced to implement something from scratch, this is the easiest and best-documented option. Though, obviously, using a well-vetted library is always best. scrypt : A recently-introduced system designed specifically to be difficult to implement on dedicated hardware. In addition to requiring multiple rounds of a hashing function, scrypt also has a very large working memory state, so as to increase the RAM requirement for implementations. While very new and mostly unproven, it looks at least as secure as the others, and possibly the most secure of them all. | {
"source": [
"https://security.stackexchange.com/questions/51959",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40448/"
]
} |
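A compact sketch of the same "store the salt next to the hash" scheme the answer walks through, but using PBKDF2 from Python's standard library instead of MD5-crypt (the iteration count and the iterations$salt$hash storage format are my own choices, not a standard):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000

def hash_password(password):
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return f"{ITERATIONS}${salt.hex()}${dk.hex()}"

def check_password(password, stored):
    iters, salt_hex, dk_hex = stored.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             bytes.fromhex(salt_hex), int(iters))
    return hmac.compare_digest(dk.hex(), dk_hex)
```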
51,974 | I was wondering if two factor authentication is really necessary if you are using high entropy, long, unique passwords for each site. From my experience 2fa just beefs up security a little more, by adding another key necessary to get into a lock. But if the first key is sufficient(20 plus characters), does it really matter? I only log into sites on machines I trust so I am not super worried about keyloggers or malware that was previously installed on the machine. It seems to me the main purpose of 2fa is to either pad bad passwords, or protect users against replay attacks. | A better comparison is like having a guard inside a locked door. If you lose your key, the guard can still keep someone that isn't you from walking in. It isn't possible to guarantee that even the most secure password won't be compromised due to some attack. Having multiple factors, particularly one that is never directly shared, adds an entire additional tier of security. You still gain a lot from two factor authentication and you still need it for the same reason you don't use a weak password when using two factor authentication. The best security is using a strong version of both. We just don't bother with three factors because while one would be easy to compromise, two would be much more difficult to compromise at the same time and if they can get two at the same time, three isn't much harder. | {
"source": [
"https://security.stackexchange.com/questions/51974",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40459/"
]
} |
52,041 | I know that the best options to use for storing passwords are bcrypt/PBKDF2/scrypt. However, suppose you have to audit a system and it uses SHA-512 with salt. Is that "fine"? Or is it a vulnerability that must be addressed, even though your site is a discussion forum? Of course, if it is weak, it must be addressed, because users' passwords may be used on other sites as well. The question is - is it so weak that it can't be tolerated as a possibility? | No, you're solving the wrong problem. Moving from SHA-1 or SHA-256 to SHA-512 doesn't make cracking the hash significantly harder. Hashes generally aren't reversed by means of some mathematical property of the algorithm, so advancing the algorithm doesn't change the security very much. Instead, hashes are brute-forced using a "guess and check" technique. You guess a password, compute the hash, and then check to see if you guessed correctly. And repeat, and repeat, and repeat until you finally get it. So the way to slow down the attacker is to slow down the hashing process. If it takes longer to check each single password, then the attacker can't guess as many of them. And in this case, SHA-512 isn't appreciably slower than SHA-256 or SHA-1 or MD5. So you're not really adding any security. Instead, the common thread between techniques like bcrypt, PBKDF2, and scrypt, is that they all run the hashing function over and over and over, thousands of times for just one single password guess. So say a site uses PBKDF2 at 10,000 iterations: this means that the attacker has to expend as much time and resources on a single guess as he otherwise would have to on an entire dictionary of 10,000 passwords. By slowing the attacker, you limit the number of passwords he ultimately can guess, and therefore decrease the likelihood of him eventually guessing correctly. Many installations tailor their configuration to fit existing hardware. So for example, LUKS uses a minimum of 1000 iterations, or however many it takes to consume 1/8th of a second, whichever is longer. | {
"source": [
"https://security.stackexchange.com/questions/52041",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10859/"
]
} |
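A rough timing sketch of the answer's core argument (numbers are illustrative and machine-dependent): a single SHA-512 is essentially free, so the only lever the defender has is to make every guess cost thousands of chained hash computations.

```python
import hashlib
import os
import time

password, salt = b"correct horse", os.urandom(16)

t0 = time.perf_counter()
hashlib.sha512(salt + password).digest()                  # one fast hash
t1 = time.perf_counter()
hashlib.pbkdf2_hmac("sha512", password, salt, 200_000)    # 200,000 chained hashes
t2 = time.perf_counter()

print(f"single SHA-512: {t1 - t0:.6f}s   PBKDF2 x200k: {t2 - t1:.3f}s")
```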
52,115 | My mum (on Gmail, using Chrome) received an email from a friend's Hotmail address. She opened the email (very obviously a phishing email) and clicked a link in it. This opened a webpage with loads of medical ads on. She closed the page and deleted the email. She did not notice anything else happen when she clicked the link. For example, she did not see a download start and did not click anything on the page that opened. The URI of the link she clicked was hxxp://23.88.82.34/d/?sururopo=duti&bugenugamaxo=aGViZTFzaGViZUBob3RtYWlsLmNvLnVr&id=anVuYWx4QGdvb2dsZW1haWwuY29t&dokofeyo=anVuYWx4 [DON'T visit that address!] Immediately (although she didn't know at the time) about 75 emails were sent from her Gmail address to a selection of her contacts. They are visible in the Sent Mail list in her Gmail account. This happened between 17:08 and 17:10 GMT. Here the source of one: Return-Path: <[email protected]>
Received: from localhost (host86-152-149-189.range86-152.btcentralplus.com. [86.152.149.189])
by mx.google.com with ESMTPSA id r1sm16019263wia.5.2014.02.23.09.10.15
for <[email protected]>
(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
Sun, 23 Feb 2014 09:10:16 -0800 (PST)
Message-ID: <[email protected]>
Date: Sun, 23 Feb 2014 09:10:16 -0800 (PST)
MIME-Version: 1.0
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
From: [email protected]
Return-Path: [email protected]
Subject: Bar gain
<span style=3D"VISIBILITY:hidden;display:none">Mount your brooms said Madam=
Hooch Three two one =20
</span><br /><u>[email protected] has sent you 3 offline broadcast</u><=
br /><a href=3D"hxxp://23.88.82.8/d/?ba=3Djurofaxovu&maremiditigehavuve=3Da=
nVuYWx4QGdvb2dsZW1haWwuY29t&id=3DaGVsZW5fY19odWdoZXNAaG90bWFpbC5jb20=3D&guv=
iwafaloco=3DaGVsZW5fY19odWdoZXM=3D" >Locate Full Email Content</a> Here's the Gmail "Activity information" window: Note that the IP address in that list, 86.152.149.189, is the same as in the header of that email. One of my mum's friends reports that she received one of the emails and clicked on the link in it. She says that her email account then sent out a load of emails too. I don't know what my mum's IP address was at the time this happened. So maybe it was 86.152.149.189. I don't understand how this happened. She had an impressively strong password (which I've now changed) that she doesn't use for anything else and she didn't type this password into the page that opened. How on earth could clicking a link in an email allow an attacker authenticate themselves with the Gmail SMTP server as my mum and then to send a load of emails as her to her contacts? And how could it have got the addresses of her contacts? Update subsequent to Iserni's answer : My mum confirms that she did indeed enter her Gmail password when "Gmail" asked for it after the page of medical ads closed. Her aunt received one of the emails and was also asked to enter her Gmail login details. She says she did because the original email came from my mum. Clever attack | IMPORTANT : this is based on data I got from your link, but the server might implement some protection. For example, once it has sent its "silver bullet" against a victim, it might answer with a faked "silver bullet" to the same request, so that anyone investigating is led astray. I have tried sending a fake parameter of cHVwcGFtZWxv to see whether it triggered any different behaviour, and it did not. Still, that's no great guarantee. UPDATE - the above still holds, but I've been making tests from random IPs not traceable to my main session - the attacking server does not discriminate, and will blithely answer to a query regardless of browser, referer, and JS/Flash/Java support. The link you received contained, already embedded in the URL, the following parameters - I have slightly changed them so the correct form won't appear in Google searches of Stack Exchange (I swapped the first letters). [email protected]
[email protected] The link injects a Javascript that first of all retrieves your location through a Geotrack API call, then loads another script. ( I had initially mistaken this for a GMail command; my bad ). The second script loads a web page, but also presents several replicas of Login pages of popular accounts (Hotmail, GMail and so on) depending on the incoming email: GMail accounts get a fake GMail page, and so on, all of these pages saying what amounts to "Oooh, session expired! Would you mind logging in again? ". For example, clicking here ( do not do so while logged in GMail, just in case) hxxp://23.88.82.8/d/[email protected]&jq=SVQ7RmxvcmVuY2U= will display a fake Google account login (for a nonexisting user 'puppa'). The real login pages come from http://ww168.scvctlogin.com/login.srf?w...
http://ww837.https2-fb757a431bea02d1bef1fd12d814248dsessiongo4182-en.msgsecure128.com which are fire-and-forget domains. The server that receives the stolen usernames and passwords is apparently always the same, a ShineServers machine on 31.204.154.125, a busy little beaver . Most of these URLs have been submitted to various services and were seen as far back as January. Phishing and Two-Factor authentication I'm of two minds about the usefulness of TFA in this scenario. As I see it, and I may well be mistaken or overlooking something, the victim clicks on the link gets "disconnected" and prompted to "reconnect" by a phishing screen enters [username and] password attacker attempts login and gets redirected to "Enter Secure Code" a secure code is sent to the victim attacker sends to victim an "Enter Secure Code" screen (most?) victims enter secure code too victim account is compromised What could one do Check out the URL appearing on the address bar. Verify SSL certificates. Never login to anything unless it comes from a bookmark or a manually typed link, paying attention to common misspellings. If a login screen appears during navigation, just close the browser and reopen it. Enter into the paranoid habit of always inserting a wrong password first, one you would never use, then the correct one on the "Login failed" screen. If the wrong password gets accepted... (of course, the attacker might always reply WRONG! to the first attempt. He has to balance the cost of scaring some victims against the benefit potential of capturing some others. As long as the number of two-attempters is negligible, two-attempting is a winning strategy for them. If everybody does it, it won't work). There are services, such as OpenDNS as pointed out by @Subin, or embedded in the browser itself, that verify the incoming site against a distributed list and refuse to connect to a known phishing site. What could a developer do Maybe, just maybe, it would be possible to develop a "This page looks like this other page" application. Probably it would be terribly heavy on the system. In its most basic and thwartable form, if the HTML code contains 'Enter Google password' and the URL is not gmail, then a large blood-red banner appears saying JUST DON'T . Another (thwartable again) possibility is to employ a honey-token approach and deny form submissions that contain a password. What could Google do This is a bit of a pet peeve of mine. The phishing screen uses data on Google servers, for Pete's sake, so that those servers clearly see a login logo being requested by your mom with a referer of phishers'r'us dot com . What do those servers do? They blithely serve the logo as is! If I were to manage such a server, a request for your avatar image (or any image) from any page not on my site would, yes, indeed get an image. I would probably get in no end of trouble for the image I'd choose. But it would be very unlikely that someone would willingly enter his/her password on such a screen. Of course, the attackers would just mirror the images on their websites. But I can think of many other tricks. For example, if a browser on 1.2.3.4 asked me for a login avatar, I might be wary of a password confirmation coming from address 9.8.7.6 a few seconds later, especially if other passwords for other accounts had come in similar circumstances from the same address in the last few minutes. 
A twist : as suggested by a commenter (which I still have to thank for the insight), Google has actually oversight on the incoming requests as well as GMail displayed messages . With a bit of data analysis, it can then know with good certainty phishing sites almost in real time, and phishers mirroring sites doesn't thwart this kind of analysis very much (it is mostly based on data garnered from the victim). Then Google can supply the addresses of known sites to a browser extension (e.g. Chrome site protection). I still think that they could do both - defend the login screen and use data mining to find out who the phishers are - but I'll accept that I am not justified in saying that Google is actually doing nothing . More complicated tricks Also, I might complicate the login screen with challenge/responses invisible to the user that the attacker would have to match, and based on browser fingerprinting. You want to log in, you send the password from the same login screen that prompted you. This too can be thwarted, quite easily. But having to do twenty easy things to compromise an account is difficult . Also because if you do seventeen right, I (the server) mark your address, and maybe redirect you to a fake sandbox account if you do succeed to log in in the next hours. And then I just look at what you do. You do little, I replicate on the real account and if you're honest, you'll never even know. More than X too-similar emails, or sent too fast, and I'll know. Of course the account will remain open and blithely accept all your spam. Why not. Send it? Well... that's another matter, now, isn't it? | {
"source": [
"https://security.stackexchange.com/questions/52115",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27326/"
]
} |
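The query-string parameters the answer refers to are plain base64. A tiny illustration of that encoding with a made-up address (deliberately not one of the real parameters from the phishing link):

```python
import base64

param = base64.b64encode(b"victim@example.com").decode()
print(param)                    # dmljdGltQGV4YW1wbGUuY29t
print(base64.b64decode(param))  # b'victim@example.com'
```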
52,361 | My email-provider's website ( http://www.gmx.de ) recently started linking to the (German) site http://www.browsersicherheit.info/ which basically claims that due to its capabilities to modify a site's appearance, Adblock Plus (and others) might actually be abused for phishing. Here's a quote from that site plus its translation: Solche Add-ons haben Zugriff auf alle Ihre Eingaben im Browser und können diese auch an Dritte weitergeben – auch Ihr Bank-Passwort. Dies kann auf allen Web-Seiten passieren. Sicherheitsmechanismen wie SSL können das nicht verhindern. translated: Such addons can access all your browser's input and can also forward it to third parties - even your banking password. This can happen on all websites. Security mechanisms such as SSL cannot avoid that. Ok, they mention other (pretty obviously crapware) addons, but is Adblock Plus really a security threat or do that site's operators simply use the opportunity to try and scare inexperienced users into viewing their ads again? | It is not. This is a FUD ( fear, uncertainty, and doubt ) campaign by GMX because they want to display their ads. There is absolutely no security risk from the mentioned ad blockers. They added some crapware to the list to make it look more legitimate. Of course such campaigns are very unusual, especially from such a big and well-known company as GMX. Unfortunately, I have no English source at hand (because it's a German-only campaign) but since you speak German you may want to read this article at heise.de . Update #1: United Internet, the company behind GMX, received a lot of criticism for misleading customers by falsely claiming that there is a security risk on their PC. The Wall Street Journal (German edition) named the warnings displayed on GMX and the site they link to a "scare campaign". Update #2: GMX now says that they will no longer display the link when you use ad blockers but will still display it if you use crapware that injects adverts; the list at the site http://www.browsersicherheit.info/ has been updated accordingly and now lists only a small collection of crapware. This list is by no means complete so it is not a reliable source when you want to know if your browser has crapware installed. However, United Internet still maintains its position that they do not want users who visit their sites to use ad blockers and said they will develop other anti-blocking methods in the future ( German source ). | {
"source": [
"https://security.stackexchange.com/questions/52361",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3272/"
]
} |
52,461 | A professor told us today, that MD5 is weak. I understand his chain of thought but pointed out, that IMHO MD5 is a good way to go if you would use a long (even really long) dynamic salts and static pepper. He stared at me and said NO! IMHO the possibility to "brute-force" a md5 hash with a any dictionary is even simple. If you would use a dynamic/various salt it would be hardened to get a match with a complexity of O(2^n) and if I use a pepper before and after my salted password hash it would be not 100% safe but could take a long while to compute it.. | There are lots of known cryptographic weaknesses in MD5 which make it unusable as a message digest algorithm, but not all of these also apply in the context of password hashing. But even when we assume that these do not exist, MD5 is still a bad password hashing algorithm for one simple reason: It's too fast . In any scenario where an attacker obtained the hashed passwords, you have to assume that they also obtained the salt of each password and the pepper. The only reason to use a pepper is so you can't use a rainbow table precomputed before the attack, because you need a different one for each database. The only reason to use a salt is so you can't use the same rainbow table for the whole password database, because the same password for two different accounts will have a different hash. The length of pepper and salt don't matter that much. Their only purpose is to make sure that each value is unique. More length doesn't make the attack notably harder (there is more data to hash, but that's a linear increase at most). Bottom line is, a short salt is all that is needed to make sure that the attacker has to brute-force all possible passwords to find the correct hash for every single account. And that's where MD5's weakness comes into play: It's a fast and memory-conserving algorithm. That means an attacker can compute the hash of a large number of passwords per second. Using specialized hardware (like FPGA arrays or ASICs) worth a few thousand dollar you can compute the hashes of all possible 8-character passwords for a given salt in mere hours. For better security, use a slow algorithm like bcrypt. It means that your system needs some more CPU cycles to authenticate users, but the payoff is usually worth it because an attacker will also need a whole lot more processing power to brute-force your password database should they obtain it. | {
"source": [
"https://security.stackexchange.com/questions/52461",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/34358/"
]
} |
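To make the bcrypt recommendation above concrete, here is a minimal sketch in Python, assuming the third-party bcrypt package (any bcrypt binding exposes the same two operations). The cost factor is what keeps an attacker with a stolen database from testing billions of guesses per second:

import bcrypt

password = b"correct horse battery staple"

# Hash with a per-password random salt; the cost factor (rounds) makes every
# single guess expensive for whoever steals the database.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verification repeats the same slow computation using the salt embedded in the hash.
assert bcrypt.checkpw(password, hashed)
assert not bcrypt.checkpw(b"wrong guess", hashed)

Raising the cost factor by one roughly doubles the work for both your login server and the attacker, so pick the largest value your latency budget tolerates.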
52,464 | For a user to defend against something like Don't understand how my mum's Gmail account was hacked , they should pay attention to URL bar before typing their credentials. But how can you make sure it's really the browsers URL bar that you are looking at? Phishing site could look non-suspicious by: Opening a popup without URL bar and emulating the URL bar functionality; Going into fullscreen mode and emulating URL bar (think mobile device browsers); Possibly many other ways, depending on device where the browser is running. Is there a common (to different browsers, operating systems) guideline for users to detect such an attack? | There are lots of known cryptographic weaknesses in MD5 which make it unusable as a message digest algorithm, but not all of these also apply in the context of password hashing. But even when we assume that these do not exist, MD5 is still a bad password hashing algorithm for one simple reason: It's too fast . In any scenario where an attacker obtained the hashed passwords, you have to assume that they also obtained the salt of each password and the pepper. The only reason to use a pepper is so you can't use a rainbow table precomputed before the attack, because you need a different one for each database. The only reason to use a salt is so you can't use the same rainbow table for the whole password database, because the same password for two different accounts will have a different hash. The length of pepper and salt don't matter that much. Their only purpose is to make sure that each value is unique. More length doesn't make the attack notably harder (there is more data to hash, but that's a linear increase at most). Bottom line is, a short salt is all that is needed to make sure that the attacker has to brute-force all possible passwords to find the correct hash for every single account. And that's where MD5's weakness comes into play: It's a fast and memory-conserving algorithm. That means an attacker can compute the hash of a large number of passwords per second. Using specialized hardware (like FPGA arrays or ASICs) worth a few thousand dollar you can compute the hashes of all possible 8-character passwords for a given salt in mere hours. For better security, use a slow algorithm like bcrypt. It means that your system needs some more CPU cycles to authenticate users, but the payoff is usually worth it because an attacker will also need a whole lot more processing power to brute-force your password database should they obtain it. | {
"source": [
"https://security.stackexchange.com/questions/52464",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26331/"
]
} |
52,584 | Just came across this bit of ruby that can be used to decrypt Snapchat photos taken out of the cache on a phone, apparently adapted from here . To my surprise, it worked without a problem, considering the problems around Snapchat's security which have been well publicized lately (Mostly the stuff around the whole phone number/username leak as far as I recall). require 'openssl'
ARGV.each do|a, index|
data = File.open(a, 'r:ASCII-8BIT').read
c = OpenSSL::Cipher.new('AES-128-ECB')
c.decrypt
c.key = 'M02cnQ51Ji97vwT4'
o = ''.force_encoding('ASCII-8BIT')
data.bytes.each_slice(16) { |s| o += c.update(s.map(&:chr).join) }
o += c.final
File.open('decyphered_' + a , 'w') { |f| f.write(o) }
end So, my question is, what exactly are they doing wrong here, and what could they be doing better in order to improve the security of their application in this regard rather than what they're doing now, considering that people often send intimate things that were never meant to be shared for longer than 10 seconds only to one person, and also considering the popularity of this app? tldr/for all those who don't really care to know how computers work but still want to know what is up: Basically, let's say you have 40 million people who use Snapchat, with 16.5 million users sending each other pictures, and each picture in its own tiny locked safe every day. Now, what if you gave those 16.5 million people all the same flimsy, plastic key to open each and every one of these lockboxes to capture the Snapchat media? | This is a serious problem in password-management. The first problem here is the way they managed his key in their source code. SnapChat states that they send the photos encrypted over internet, and it is true after all, but they are using a "pre-shared" key to encrypt this data ( badly using also AES in ECB mode ) so, every user around the planet has the key to decipher each photo. The problem here is, how did internet get the key? Piece of cake, they just included it in every app, and somebody just searched for it . What is this magic encryption key used by any and all Snapchat app? M02cnQ51Ji97vwT4 You can find this (in the Android app) in a constant string located in com.snapchat.android.util.AESEncrypt; no digging required, it is quite literally sitting around waiting to be found by anyone. On a more positive note (perhaps), in the 3.0.4 (18/08/2013) build of the Android app, there is - oddly enough - a second key! 1234567891123456 It is a very bad practice to hardcode a password in your source (no matter if it is in your headers or in your binaries), the main problem being anyone could find it with a simple "strings" command into your binary ( or by looking in someplace you used to share your code with your friends ): strings binaryFile Then the malicious user can have a look to each string and check if that is the password he is looking for. So, if your really need to hardcode a password in your code you better hide it, but this will just be " security through obscurity " and the malicious user will end up finding the key (so you better think in a different approach). What can they do to improve their security? Well they could have generated a key for each photo, or they can pre-share a key between the clients that are going to share a picture, public/private keys; there are plenty of options. | {
"source": [
"https://security.stackexchange.com/questions/52584",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28506/"
]
} |
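As a purely hypothetical sketch of the "generate a key for each photo" fix suggested at the end of the answer above (this is not how Snapchat works), here is what it could look like in Python with the cryptography package; the hard part - delivering each per-photo key to the recipient over a protected channel instead of baking one key into every copy of the app - is deliberately left out:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_photo(photo_bytes):
    # A fresh random key and nonce for every photo, never shipped inside the binary.
    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, photo_bytes, None)
    return key, nonce, ciphertext

key, nonce, blob = encrypt_photo(b"...jpeg data...")

AES-GCM also authenticates the ciphertext, unlike the ECB mode used in the Ruby snippet from the question, which leaks patterns and offers no integrity at all.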
52,643 | A century old adage: The more the merrier . In general, does this adage hold true in regards to the number of anti-virus software you should have on your PC? Are there any limits before it actually has the opposite effect? | Most anti-virus vendors advise not to use their products together with those from others. That's not (just) because they fear competition. Live virus-scanners scan files on access. When they notice that a process accesses a file, they try to access it before the process to scan it. They even try to do that when that process is another virus-scanner. When you have two live-scanners on a system, both will try to be the first to open a file. When virus scanner A detects that scanner B opens a file, A will try to access it first to protect B from any viruses in it. B will register this attempt to read the file, and in turn will try to scan it before A does. The result is that both virus scanners are caught in an infinite loop. This problem, however, only applies to live-scanners. When you use on-demand scanners which don't monitor file access and only scan a filesystem when they are prompted to do so, you can use multiple of them one after another. | {
"source": [
"https://security.stackexchange.com/questions/52643",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36540/"
]
} |
52,656 | Debian (stable) is a well respected server Linux distro. I was surprised to see that in their hardening walkthrough ( https://wiki.debian.org/HardeningWalkthrough ) they do not support position independent executables (and ASLR and a few other useful security flags) in the latest stable build (Wheezy), while most other distro's do support these things. Since Debian stable has stood the test of time, I am thinking I must have assumed these security features are a lot more important than they actually are in practice. Can someone explain why Debian is able to get away without having these security features and yet not be hacked to the stone age every day? | Most anti-virus vendors advise not to use their products together with those from others. That's not (just) because they fear competition. Live virus-scanners scan files on access. When they notice that a process accesses a file, they try to access it before the process to scan it. They even try to do that when that process is another virus-scanner. When you have two live-scanners on a system, both will try to be the first to open a file. When virus scanner A detects that scanner B opens a file, A will try to access it first to protect B from any viruses in it. B will register this attempt to read the file, and in turn will try to scan it before A does. The result is that both virus scanners are caught in an infinite loop. This problem, however, only applies to live-scanners. When you use on-demand scanners which don't monitor file access and only scan a filesystem when they are prompted to do so, you can use multiple of them one after another. | {
"source": [
"https://security.stackexchange.com/questions/52656",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41296/"
]
} |
52,693 | If there is a need for source code to have a password in it, how
should this be secured? This is purely an example, but say there is an
app that is using an API, and you don't want to expose your API key,
yet you need to have it in the source code, because it is used. How do
you effectively have a string that the user cannot retrieve yet can be
used. Does not seem possible without asking the server for the string. RE: Why can we still crack snapchat photos in 12 lines of Ruby? | Short answer: you can't . You can never protect a password that you are distributing. You might hide it between some strings and use other operations to "cover" the password, but in the end you will have to put it all together to make your function operate, and that is where the cracker is going to take it. There is no easy way to solve this problem; usually it means that you have not chosen the best security scheme or, if you feel it is enough, maybe it means that you just don't need this kind of security. And if you really, really, really need to do it that way, you will have to go with "security by obscurity" after all - the longer it takes to be cracked, the better - and you had better have some detection system for when this happens. As an example, consider the gaming industry all these years with its copy protections and so on: if there were a way to achieve security within the code itself, that would mean the end of "piracy". | {
"source": [
"https://security.stackexchange.com/questions/52693",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41335/"
]
} |
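The strings attack mentioned in the answer above takes only a few lines to reproduce. Here is a rough Python stand-in for the Unix strings utility (the file name argument is a placeholder):

import re
import sys

# Dump every run of 8 or more printable ASCII bytes from a binary -- roughly
# what `strings binaryFile` does, and enough to expose a hard-coded key.
data = open(sys.argv[1], "rb").read()
for match in re.finditer(rb"[\x20-\x7e]{8,}", data):
    print(match.group().decode("ascii"))

Anything your program must be able to read at runtime, an attacker holding the same binary can read too; obfuscation only changes how long it takes.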
52,834 | When visiting Gmail in Chrome, if I click on the lock icon in the address bar and go to the connection tab, I receive a message 'no certificate transparency information was supplied by the server' (before Chrome 45, the message was displayed as 'the identity of this website has been verified by Google Internet Authority G2 but does not have public audit records'). What exactly does it mean that the certificate does not have public audit records? Are their certain threats a site using a certificate without public audit records has that a site using a certificate with public audit records does not? | Note : If you're here because your certificate isn't trusted by Chrome, this is not the reason. Chrome will still trust certificates without CT information. If your certificate isn't trusted, there is an additional factor that you may have missed. This has to do with the concept of Certificate Transparency . The Problem Browsers currently trust certificates if four conditions are met: (a) the certificate is signed by a trusted CA, (b) the current time is within the valid period of the certificate and signing certs (between the notBefore and notAfter times), (c) neither the certificate nor any signing certificate has been revoked, and finally, (d) the certificate matches the domain name of the desired URL. But these rules leave the door open to abuse. A trusted CA can still issue certificates to people who shouldn't have them. This includes compromised CAs (like DigiNotar ) and also CAs like Trustwave who issued at least one intermediate signing certificate for use in performing man-in-the-middle interception of SSL traffic. A curated history of CA failures can be found at CAcert's History of Risks & Threat Events to CAs and PKI . A key problem here is that CAs issue these certificates in secret. You won't know that Trustwave or DigiNotar has issued a fraudulent certificate until you actually see the certificate, in which case you're probably the perpetrator's target, not someone who can actually do any real auditing. In order prevent abuse or mistakes, we need CAs to make the history of certificates they sign public . The Solution The way we deal with this is to create a log of issued certificates. This can be maintained by the issuer or it can be maintained by someone else. But the important point is that (a) the log can't be edited, you can only append new entries, and (b) the time that a certificate is added to the log is verified through proper timestamping. Everything is, of course, cryptographically assured to prevent tampering, and the public can watch the contents of the log looking to see if a certificate is issued for a domain they know it shouldn't have. If your browser then sees a certificate that should be in the log but isn't, or that is in the log but something doesn't match (e.g. the wrong timestamp, etc), then the browser can take appropriate action. What you're looking at in Chrome, then, is an indication as to whether a publicly audible log exists for the certificate you're looking at. If it does, Chrome can also check to see whether the appropriate log entry has been made and when. How widely is it used? Google maintains a list of "known logs" on their site . As of this writing, there are logs maintained by Google, Digicert, Izenpe, and Certly, each of which can maintain the audit trail for any number of CAs. The Chrome team has indicated that EV certificates issued after 1 Jan 2015 must all have a public audit trail to be considered EV. 
And after the experience gained dealing with EV certificate audit logs has been applied, they'll continue the rollout to all certificate issuers. How to check the logs Google added a Certificate Transparency lookup form to their standard Transparency Report, which means you can now query for the domains you care about to see which certificates for those domains show up in the transparency logs. This allows you to see, for example, which certificates out there are currently valid for your domain, assuming the CAs cooperate. Look for it here: https://www.google.com/transparencyreport/https/ct/ Remember that if you want to track a given domain name to be alerted when a certificate is updated, then you should follow the logs directly. This form is useful for doing point-in-time queries, not for generating alerts. | {
"source": [
"https://security.stackexchange.com/questions/52834",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3689/"
]
} |
52,877 | On our views in a Java web application, currently I am using hashCode as Id's for objects so that at server end I can get the same object back. However, I am wondering how secure Java's hashCode really is so that someone cannot hack it to retrieve other person's objects. I am not very inclined towards Encryption-Decryption mechanism as it causes much CPU. What other fast yet secure mechanisms I can use ? | You're breaking one of the hashCode() taboos; you're using its output as a key identifier. That's wrong. You see, the output of hashCode() (in its default implementation) is a 32-bit unsigned integer, that's roughly 4 billion unique hashCodes. Sounds quite a lot? Well, not so much. Applying the birthday problem in this case shows as that with about 77000 objects, you have about 50% chance of collision. 50% chance of two objects having the the same hashCode. Another issue is that the implementation of hashCode() can change from one Java version to the other. It's not meant to be permanent identifier of an object, so there's nothing forcing it to be consistent across versions. If you insist on using hashes as object identifiers, then it's much better to have your own method instead of hashCode() to use for your key identifiers (for example, getMySpecialHashKey() . You can uses something like MessageDigest.getInstance("SHA-256") to digest the object into a nice 256-bit key. My recommendation: Ditch the whole idea of hashing the object and rather generate a random identifier for your object when you construct it. Something along the lines of (in Java) SecureRandom secRand = new SecureRandom();
byte[] objIdBytes = new byte[16]; //128-bit
secRand.nextBytes(objIdBytes);
String objId = Base64.encodeBase64String(objIdBytes); //Here's your ID You also seem to bind access to an object only to knowledge of its key. That's also wrong. A proper permission-based model with proper authentication and authorization is needed. | {
"source": [
"https://security.stackexchange.com/questions/52877",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6862/"
]
} |
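The 77,000-object figure quoted in the answer above falls straight out of the birthday-problem approximation; a quick Python check (used here only for the arithmetic) reproduces it:

import math

def collision_probability(n_objects, bits=32):
    # Birthday approximation: P(collision) ~ 1 - exp(-n*(n-1) / (2 * 2^bits))
    space = 2 ** bits
    return 1 - math.exp(-n_objects * (n_objects - 1) / (2 * space))

print(collision_probability(77_000))        # ~0.5 for a 32-bit hashCode
print(collision_probability(77_000, 256))   # effectively 0 for a 256-bit digest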
52,980 | Could somebody please explain to me the differences between the following attacks? sniffing snooping spoofing My professors used them all in his documents, but I'm not sure, if those are 3 different attacks or just synonyms. | Sniffing and snooping should be synonyms. They refer to listening to a conversation. For example, if you login to a website that uses no encryption, your username and password can be sniffed off the network by someone who can capture the network traffic between you and the web site. Spoofing refers to actively introducing network traffic pretending to be someone else. For example, spoofing is sending a command to computer A pretending to be computer B. It is typically used in a scenario where you generate network packets that say they originated by computer B while they really originated by computer C. Spoofing in an email context means sending an email pretending to be someone else. | {
"source": [
"https://security.stackexchange.com/questions/52980",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41664/"
]
} |
53,020 | How come I'm allowed to reboot a computer that I don't own, put in a USB, boot ubuntu from it and then access all files stored on the drives available (even critical files such as system files on C drive in Windows)? Isn't there a way to prevent people from doing this, without putting up a password on the BIOS? | The file and folder/directory permissions on an operating system are managed and enforced by... you guessed it right, that operating system (OS). When the operating system is taken out of the picture (booting a different operating system), then those permissions become meaningless. One way to think of it: You hire a big bodyguard (OS) to protect your house. You give him a list (permissions) of the allowed guests (users) and which areas (files and folders) they're allowed to visit. How useful are those lists when the bodyguard is asleep (not booting from that OS)? Generally, it is assumed that once an attacker has physical access to your system, they own your system. Even a BIOS password won't help you in this case. One way to solve this problem is using full-disk encryption using software such as TrueCrypt , Bitlocker , and others. The problem, in your case, is that you'll have to setup a password (or key) to be inputted whenever you reboot the system. Weigh in your options and decide. | {
"source": [
"https://security.stackexchange.com/questions/53020",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41706/"
]
} |
53,031 | What's wrong with you, you crazy fool , you're not supposed to be able to retrieve a password in plain text! I know. Hear me out. Assume I've got a service that's similar to Mint.com. For those that aren't familiar with Mint, it works like this: User signs up to Mint.com much like they would any other online service - by providing an email address & password. Here's where things get interesting. The user then selects their bank name from a dropdown, and provides their online banking username and password to Mint. This is where things get really interesting. Mint stores these credentials and uses them to retrieve the list of credit/debit card transactions made by the user automatically - at different times of the day. They do this by automatically logging into the online banking site - using the users credentials (presumably through a browser emulator of sorts). Now I need a secure way of protecting the online banking credentials of my customers. Here's the big question: How would YOU do it? Yes, I've already read this question and this one . The answers all suggest using a more secure method of communication with the 3rd party (either authenticating using API keys, or passing hashed parameters). In my opinion, and it's just that - an opinion - it seems to me like a genuine use-case where I have incredibly sensitive information that NEEDS to be available in plain text - at least periodically throughout the day. Interesting fact: Mint.com has over 10 million users | In the case you've described you're storing information on behalf of the user, and you're not using it to authenticate the user. Hence, while the contents of what you're storing includes passwords (almost exclusively), you're not really "storing passwords" in the traditional sense. You're storing secrets . Adjust your strategy accordingly. Both of these problems are solved; you just need to apply the correct solution. When "storing passwords" (that is, authenticating the user), you go through the hashing, the the salt, the key stretching, etc., that you're familiar with. But what about storing secrets . First and foremost, you avoid the problem if possible . This is why concepts like API access tokens exist. You don't need my Facebook password because you can't use it. You need an access token which Facebook is willing to give you with my permission. The next best solution is to tie access to user login . The information is encrypted using a key derived from the password you use to log in - which I don't know and don't store. So I (the server owner) can't access your data unless you type in your password. This is popular because it's powerful. In fact, Windows has been doing this for a long time, which is why changing your password can cause encrypted files to become inaccessible. It's also the reason why your Windows password is stored in plain text in memory while you're logged in. An implementation snafu which I would advise you to avoid. Next on our list, you can separate your processes such that unencrypted data is never available on outward-facing machines. Bonus points if an HSM is involved. The gist here is that the user provides his secrets to the web server, which are quickly encrypted using the public key to some secret crypto device which is totally inaccessible because it's not connected. The plain secrets get immediately forgotten, and the encrypted data then gets shipped off to some cold storage somewhere. 
Eventually the secrets get decrypted somewhere else with the assistance of that crypto thingy and get used. Only, the point where this happens has NO INTERNET ACCESS. Or at least no path from the outside in. Finally, you can try the solution above, but fail miserably. I only mention this because really all other solutions are just crappy variations on the ones above: encrypting in the database, using an application password, storing the password on another server, storing the data on another server, salting your encryption key with [ silly idea here ], and so on. And finally, my standard warning for questions like this applies. I'll write it really big: The fact that you're asking this question means that you shouldn't do it Seriously. Storing peoples banking credentials? If you don't understand what sort of trouble you're getting yourself in to, if you're asking the Internet for suggestions on how to do this, if all the solutions I mentioned weren't ALREADY top-of-mind to you as the only viable options, then you shouldn't be implementing this . People are trusting you to do this right. And you're not going to do this right. Not because you didn't ask the right questions, but because you haven't solved this problem often enough to understand what hidden pitfalls you'll have overlooked. This is difficult stuff: not difficult to do , but difficult to not make mistakes. Don't betray the trust of your customers by getting yourself in over your head. | {
"source": [
"https://security.stackexchange.com/questions/53031",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41712/"
]
} |
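Here is a hedged sketch of the "tie access to user login" option from the answer above, in Python with the cryptography package. The wrapping key is derived from the user's login password and thrown away, so the stored blob is useless until the user types that password again. The KDF and parameter choices are illustrative assumptions, not a vetted design:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_secret(login_password, secret):
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    key = kdf.derive(login_password)          # exists only while the user is logging in
    nonce = os.urandom(12)
    blob = AESGCM(key).encrypt(nonce, secret, None)
    return salt, nonce, blob                  # safe to store; the key itself is never written down

salt, nonce, blob = wrap_secret(b"users-login-password", b"third-party banking credentials")

The trade-off noted in the answer still applies: forget the login password and the wrapped secrets are gone, and the plaintext necessarily exists in memory whenever the service is actually using it.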
53,290 | When trying to encrypt files, I get the following error in KGpg editor window: The encryption failed with error code 2 On the command line I get: $ gpg --list-keys
/home/user/.gnupg/pubring.gpg
---------------------------------
pub 2048D/5E04B919 2012-02-02 [expires: 2016-02-01]
uid Firstname Lastname <[email protected]>
uid [jpeg image of size 4005]
$
$ gpg --encrypt file-to-encrypt
You did not specify a user ID. (you may use "-r")
Current recipients:
Enter the user ID. End with an empty line: [email protected]
No such user ID. This used to work both with editor and on the command line with the same key. The Current recipients: is empty. Why is that? UPDATE: When trying to specify the user ID on the command line using the -r option, I get the following: $ gpg -r [email protected] --encrypt file-to-encrypt
gpg: [email protected]: skipped: unusable public key
gpg: file-to-encrypt: encryption failed: unusable public key Info: $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.10
Release: 12.10
Codename: quantal
$ dpkg -s gnupg
Package: gnupg
Status: install ok installed
Priority: important
Section: utils
Installed-Size: 1936
Maintainer: Ubuntu Developers <[email protected]>
Architecture: amd64
Multi-Arch: foreign
Version: 1.4.11-3ubuntu4.4 | I figured out what the problem and the solution were, so I'm giving a detailed answer in case anyone runs into the same problem; it may be helpful. The symptom is somewhat ambiguous, since no really informative error message is given. It turned out that the encryption sub-key had expired. Strangely, gpg --list-keys did NOT show the expired sub-key!! Once the sub-key expiry was extended, it was included in the output of gpg --list-keys . Also, KGpg does not indicate in any way that the sub-key is expired, nor does it allow extending the expiry of the sub-key (only the main key's expiry can be changed). The output of gpg --list-keys before the solution (I changed personal details): $ gpg --list-keys
/home/user/.gnupg/pubring.gpg
---------------------------------
pub 2048D/5E04B919 2012-02-02 [expires: 2016-02-01]
uid Firstname Lastname <[email protected]>
uid [jpeg image of size 4005] Nothing more. However, gpg --edit 5E04B919 showed that the sub-key is expired $ gpg --edit 16AE78C5
gpg (GnuPG) 1.4.11; Copyright (C) 2010 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Secret key is available.
pub 2048D/5E04B919 created: 2012-02-02 expires: 2016-02-01 usage: SCA
trust: ultimate validity: ultimate
sub 1024g/16AE78C5 created: 2012-02-02 expired: 2014-02-01 usage: E
[ultimate] (1). Firstname Lastname <[email protected]>
[ultimate] (2) [jpeg image of size 4005]
gpg> After some Google search, I found this mailing list archive which pointed me to the right direction to extend the expiry of the sub-key using gpg command line: http://lists.gnupg.org/pipermail/gnupg-users/2005-June/026063.html For completeness, here's the relevant segment from the above linked mailing list archive: gpg --edit-key [key ID] then
Command> key N where N is the subkey's index.
e.g. if the subkey whose validity you want to extend is the first listed
subkey, or if it is the only listed subkey, then the command would be Command> key 1 this will put a * after the word sub, indicating that this particular
subkey has been selected. then Command> expire and follow the prompts.
Hope this works for you, it works for me (Macintosh OS X 10.4.1)
Charly I followed the instructions and extended the sub-key expiry. After this gpg --list-keys gave a different output: $ gpg --list-keys
/home/user/.gnupg/pubring.gpg
---------------------------------
pub 2048D/5E04B919 2012-02-02 [expires: 2016-03-12]
uid Firstname Lastname <[email protected]>
uid [jpeg image of size 4005]
sub 1024g/16AE78C5 2012-02-02 [expires: 2016-03-12] After this, everything was back to normal, I could encrypt files, etc. | {
"source": [
"https://security.stackexchange.com/questions/53290",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41924/"
]
} |
53,474 | I notice that Chrome does not execute scrips that are part of the web request, http://vulnerable_site?pageTitle=<script>alert('xss')</script> With such links since the script is as part of the web request, Chrome does not execute the script. Does that prevent all kinds of Reflected XSS attacks with Chrome browser ? Edit: I've already read this thread; although the title seems similar, the contents are different. | No, because the XSS filter only looks whether it sees XSS code in the input back in the HTML outputted by your server. For example, if Chrome sees your web page is accessed with an URL that contains the following: ?q=<script>alert("XSS!")</script> and if the HTML returned by the server contains this: <p>You have searched for <b><script>alert("XSS!")</script> it knows that this code is most likely the result of it being included in the request, and neutralizes it. However, if the code is not found in the request, for example if the application accepts input that is encoded in some way, the filter may not be able to figure out that the code is the result of some XSS code embedded in the request. As seen in the commit log for WebKit, until it was forked recently the rendering engine in Chrome, they are trying to address the most common bypasses, where the XSS-code in the URL and the resulting XSS-code in the HTML look slightly different. If none of these rules for special cases apply, the XSS will be left through. For example, if they aren't decoding Base64-encoded data in the URL, if the web application were to accept input encoded with Base64, it may be possible to XSS a web application. An example: ?q=PHNjcmlwdD5hbGVydCgnWFNTIScpPC9zY3JpcHQ+ ( PHNjcmlwdD5hbGVydCgnWFNTIScpPC9zY3JpcHQ+ is <script>alert("XSS!")</script> encoded in Base-64), if not filtered, would result in response HTML like this: <p>You have searched for <b><script>alert("XSS!")</script> Additionally, it won't stop XSS occurring with data that is not embedded in the HTML unencoded, but is treated in an unsafe way by JavaScript. For example, consider a page which contains the following JavaScript: eval(location.hash.substring(1)) This will execute any code trailing the # in the URL, but it is not filtered out by Chrome. ↪ You can see this example in action here ↪ Another example, which uses Base64 | {
"source": [
"https://security.stackexchange.com/questions/53474",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/34963/"
]
} |
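For completeness, the Base64 round trip used in the bypass example above is trivial to reproduce (Python here just for the demonstration); the point is that the request carries only the encoded form, so a filter that compares the raw request against the reflected HTML never sees a match:

import base64

payload = "<script>alert('XSS!')</script>"
encoded = base64.b64encode(payload.encode()).decode()
print(encoded)                              # what travels in the ?q= parameter
print(base64.b64decode(encoded).decode())   # what a vulnerable app would echo into the page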
53,481 | I was about to reset my Facebook password and got this error: Your new password is too similar to your current password. Please try another password. I assumed that Facebook stores only password hashes, but if so, how can they measure password similarity? This should be impossible with a good hashing function, right? The question is - how is this possible and what are the implications? Thanks in advance. UPDATE I didn't make it clear - I was not asked to provide the old and new password. It was the "reset password" procedure, where I only provide a new password, so most of the answers to the suggested duplicate are not applicable. UPDATE2 mystery solved - see comment (from a Facebook engineer) | Let's hope and assume that Facebook stores only hashes of the current password (and potentially previous passwords). Here is what they can do: the user sets the first password to "first" and fb stores hash("first"). Later on, the user resets the password and is asked to provide a new password, "First2". Facebook can generate a bunch of passwords similar to the new one: ["First2", "fIrst2", "firSt2", ... "first2", ... "first", ... ] and then compare the hash of each with the stored hash. This is the only solution that comes to my mind. Any other? | {
"source": [
"https://security.stackexchange.com/questions/53481",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3897/"
]
} |
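A rough sketch of the candidate-generation idea in the answer above; the variant list is purely illustrative (whatever Facebook actually does is not public), and bcrypt stands in for whatever password hash is stored:

import bcrypt

def similar_candidates(new_password):
    # A few cheap transformations of the password the user just submitted in plaintext.
    yield new_password
    yield new_password.lower()
    yield new_password.capitalize()
    yield new_password.swapcase()
    if new_password and new_password[-1].isdigit():
        yield new_password[:-1]              # "First2" -> "First"

def too_similar(new_password, stored_hash):
    # Hash each variant with the salt embedded in the stored hash and compare.
    return any(bcrypt.checkpw(c.encode(), stored_hash)
               for c in similar_candidates(new_password))

Note that this only works at the moment the user submits the new password in plaintext; nothing similar can be computed from the stored hashes alone.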
53,594 | There are very few websites that hash the users password before submitting it to the server. Javascript doesn't even have support for SHA or other algorithms. But I can think of quite a few advantages, like protection against cross-site leaks or malicious admins, which SSL does not provide. So why is this practise so uncommon among websites? | Inventor of JavaScript password hashing here Way back in 1998 I was building a Wiki, the first web site I'd built with a login system. There was no way I could afford hosting with SSL, but I was concerned about plaintext passwords going over the Internet. I'd read about CHAP (challenge hash authentication protocol) and realised I could implement it in JavaScript. I ended up publishing JavaScript MD5 as a standalone project and it has become the most popular open source I've developed. The wiki never got beyond alpha. Compared to SSL it has a number of weaknesses: Only protects against passive eavesdropping. An active MITM can tamper with the JavaScript and disable hashing. Server-side hashes become password equivalents. At least in the common implementation; there are variations that avoid this. Captured hashes can be brute forced. It is theoretically possible to avoid this using JavaScript RSA. I've always stated these limitations up front. I used to periodically get flamed for them. But I maintain the original principle to this day: If you've not got SSL for whatever reason, this is better than plaintext passwords. In the early 2000s a number of large providers (most notably Yahoo!) used this for logins. They believed that SSL even just for logins would have too much overhead. I think they switched to SSL just for logins in 2006, and around 2011 when Firesheep was released, most providers switched to full SSL. So the short answer is: Client-side hashing is rare because people use SSL instead. There are still some potential benefits of client-side hashing: Some software doesn't know if it will be deployed with SSL or not, so it makes some sense to include hashing. vBulletin was a common example of this. Server relief - with computationally expensive hashes, it makes sense for the client to do some of the work. See this question . Malicious admins or compromised server - client-side hashing can prevent them from seeing plaintext passwords. This is usually dismissed because they could modify the JavaScript and disable hashing. But in fairness, that action increases their chances of being detected, so there is some merit to this. Ultimately though these benefits are minor, and add a lot of complexity - there's a real risk that you'll introduce a more serious vulnerability in your attempt to improve security. And for people who want more security than password, multi-factor authentication is a better solution. So the second short answer is: because multi-factor authentication provides more security than client-side password hashing. | {
"source": [
"https://security.stackexchange.com/questions/53594",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/24542/"
]
} |
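The "server relief" idea mentioned in passing above can be sketched as follows (a simplified assumption of how such a scheme might be split, not a complete protocol): the client performs the expensive stretching and the server stores only a fast hash of the client's result, so a stolen database still costs the attacker one bcrypt run per guess while the server itself does almost no work per login:

import hashlib
import hmac
import bcrypt

def client_side(password, salt):
    # The per-user salt is issued at registration and handed to the client at each login.
    return bcrypt.hashpw(password, salt)      # the slow part runs on the client

def server_store(client_hash):
    return hashlib.sha256(client_hash).digest()

def server_verify(client_hash, stored):
    return hmac.compare_digest(hashlib.sha256(client_hash).digest(), stored)

salt = bcrypt.gensalt(rounds=12)
stored = server_store(client_side(b"hunter2", salt))
assert server_verify(client_side(b"hunter2", salt), stored)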
53,596 | If I got an SSL certificate for my website and use an SSL secured connection (HTTPS), is this safe enough to send my login and password data or should I add some encryption or hashing? And how safe is SSL against Man In The Middle attacks? Can they grab or even modify the data sent and received over HTTPS? And what about GET and POST, are both of them encrypted or is just the answer of the server encrypted or even nothing? I read Wikipedia and a lot of Google results about SSL and HTTPS but I don't really get it. I really hope that you are able to answer my questions in a simple way so I can finally understand how safe SSL and HTTPS really are. | Principle of HTTPS operation HTTP protocol is built on top of TCP. TCP guarantees that the data will be delivered, or it is impossible to deliver (target not reachable, etc.). You open a TCP connection and send HTTP messages through it. But TCP does not guarantee any level of security. Therefore an intermediate layer named SSL is put between TCP and HTTP and you get the so called HTTPS. This way of working is called tunneling – you dump data into one end of (SSL) tunnel and collect it at the other one. SSL gets HTTP messages, encrypts them, sends them over TCP and decrypts them again at the other end. Encryption protects you from eavesdropping and transparent MITM attack (altering the messages). But SSL does not only provide encryption, it also provides authentication. Server must have a certificate signed by a well known certification authority (CA) that proves its identity. Without authentication, encryption is useless as MITM attack is still possible. The attacker could trick you into thinking that he is the server you want to connect to. Private chat with the devil is not what you want, you want to verify that the server you are connecting to really is the one you want to connect to. Authentication protects you from MITM. Weak points So where are the weak points? Endpoints of secure connection. The transfer could be secure, but what about the server itself? Or the client? They may not. Not using HTTPS. Users can be tricked into not using the scheme in various ways. Untrustworthy CAs. They break the authentication part, allowing for MITM attack. Weak encryption mechanism. Crypto technologies age in two ways: Serious flaws might be found in their design, leading to attacks much more efficient than brute force, or their parameters and processing power increase due to Moore's law might allow for a feasible brute-force attack. Implementation of the scheme. Well, if you specify A and implement B, properties of A may not hold for B. Direct answers You seem to say that you secured the transfer (using SSL). This is not enough, the security of your server can be compromised – you should not store passwords there in plain text, use their hashed form, with salt added, … SSL encrypts data both when sending and receiving. MITM attacks are possible virtually only when the attacker has certificate signed by an authority the client trusts. Unless the client is tricked into not using HTTPS, nobody can read nor modify the messages being sent. GET and POST are just two methods of making HTTP request. There are several other, too. Method is just a property of HTTP request. All messages are secured, both requests and responses, regardless of HTTP method being used. | {
"source": [
"https://security.stackexchange.com/questions/53596",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41925/"
]
} |
53,658 | An answer to this question says Facebook generates a bunch of password guesses to see if they hash the same as a previous version of the password. Why bother? If a service forces every password to have sufficient length and complexity, why should it care if the changed password is similar to the previous password, since in theory each password is already sufficiently long and complex to meet security requirements? Does Facebook's policy really prevent some kind of attack where hackers start with long complex password guesses and then try minor variations, or is it just an irritant for users, preventing them from using what are actually sufficiently good new passwords? | Because if Facebook can algorithmically produce similar passwords, then so can a password cracker. The sequence could go like this:
Password compromised -> user changes it to something similar -> new password compromised algorithmically by trying similar passwords to known previous one. Also, imagine a scenario where an account is being specifically targeted by an actual human being. The attacker may know previous passwords or have an idea what they roughly could have been (e.g. the account owner's ex romantic partner or something). In this case, a password that was similar to a previous one would be more likely to be guessed. | {
"source": [
"https://security.stackexchange.com/questions/53658",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40404/"
]
} |
53,673 | The UK is getting a new £1 coin . Its designers, the Royal Mint, claim that unlike current coins, it includes built in technology for high speed authentication and verification everywhere from ATMs to vending machines and point-of-sale. How does this work? Something like RFID? Does this mean the coins (and therefore their users) can be tracked? | They are using a system called ISIS which also has some more details available here . It appears to be a form of micro-tagging embedded within the currency itself (based on the comment the same technology has been used in fuels and perfumes). Basically, a specially manufactured particle is constructed and then mixed as an additive with the coin. This additive can later be detected automatically. Since the composition of the additive is unknown, it can't be reproduced easily but yet machines can read the signature of it. The tagging technology can be used to tag specific coins (this is done for some high end unique items that are tagged), but it would generally be very expensive to do so. It is far more likely that a standard formulation is used for all coins of a given value. It would also still very likely require that the coin be directly handled to read as it is not a radio based technology. | {
"source": [
"https://security.stackexchange.com/questions/53673",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1163/"
]
} |
53,773 | Recently I've been reading about Web application firewalls and the fact that they protect against most frequent attacks like injections, XSS or CSRF. However, a good application should already be immune against these exploits, so why do companies prefer buying those expensive devices to try to protect (WAFs aren't perfect either) apps with security flaws instead of fixing those security flaws in the first place ? Thanks for your detailed answers, I never thought such a newbie question would get so much attention. | When deploying security, it is often a good idea to apply multiple layers. Just because you have a lock on your bedroom door doesn't mean you don't put one on the front door to your house. You may also apply a generic set of WAF rules in front of multiple applications. A WAF may be part of a larger suite for IDS/IPS, it could also help with the performance of the application if the WAF is inline so that the application doesn't waste resource on the blocking, logging, db queries, etc. You also make an assumption that the organization has the resources and skill to gain reasonable assurance about their application's security. If it's a third party application or has third party modules, those components may not be easily upgraded or it may be closed source or against the license to modify the program. | {
"source": [
"https://security.stackexchange.com/questions/53773",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
53,810 | When registering for an SSL cert, I was able to validate that I "owned" the domain I was creating the cert for by having a valid @domain.com email address. If I worked for a large company, say Microsoft or something, and have a valid [email protected] email address, how am I prevented from being able to create a valid SSL cert for microsoft.com? Maybe Microsoft has something in place to handle this, but what if the company is a bit smaller and doesn't have anything in place? | It's not just any email address at that domain. I have a valid gmail address, but that's not enough to convince Verisign that I own gmail.com. Instead, at the very least, you need to control one of a specific set of addresses, including the email address listed in the whois record for the domain, and also often some of the following: [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]. In addition to that, depending on the domain in question and often triggered by an automatic flagging system, they may require additional manual validation by an employee of the CA. If you were to try to get a certificate for microsoft.com, for example, it probably wouldn't work even if you did control one of the email addresses listed above. | {
"source": [
"https://security.stackexchange.com/questions/53810",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41296/"
]
} |
53,878 | Every once in a while (when I think out loud and people overhear me) I am forced to explain what a buffer overflow is. Because I can't really think of a good metaphor, I end up spending about 10 minutes explaining how (vulnerable) programs work and memory allocation, and then have about 2 sentences on the actual exploit ("so a buffer overflow fills the buffer up with nonsense and overwrites the pointer to point to whatever I want it to point to"). By this time, most people have become suicidal... What is a good way to explain a buffer overflow to laymen? If possible, please include an "overflow" component, but also at least a lead-in to why this means the attacker can get what he wants. Remember, people of average (and below average) intelligence should be able to get an idea of what I am talking about, so while you should absolutely feel free (encouraged, actually) to explain what each part of your metaphor (analogy?) represents, don't rely on any super-technical descriptions... PS, a related question explaining in technical terms what the buffer overflow does: What is a buffer overflow? | Imagine you have a list of people you owe money to. Also, when you write over something, it replaces what was there before instead of writing over the top of it. (The analogy breaks down a bit here, because you pens don't work that way in real life, but computer memory does) You pay someone a $500 deposit on a $5000 car, so you now owe them $4500. They tell you their name is John Smith. You write the amount (4500) and the name (John Smith) in the table. Your table now looks like this: Later your table reminds you to pay them back. You pay the $4500 (plus interest) and erase it from the table, so now your table is blank again. Then you get a $1000 loan from someone else. They tell you their name is "John Smithxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx9999999999".
You write the amount (1000) and the name (John Smithxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx9999999999) in your table. Your table now looks like this: (the last 0 from 1000 was not written over. This is unimportant.) When writing the name, you didn't stop when you got to the end of the "name" column, and kept writing into the "amount owing" column! This is a buffer overflow. Later, your table reminds you that you owe $99999999990 to John Smithxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx. You find him again and pay him almost 100 billion dollars. | {
"source": [
"https://security.stackexchange.com/questions/53878",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/34536/"
]
} |
53,980 | Lots of sites these days, that don't deal with sensitive data, enable encryption. I think it's mostly to make (paranoid?) users feel safer. In cases where there is a user's account being logged in, their personal data accessed, I see how it can be useful. But what if I'm just reading a news site? Everyone has access to that, it's all over newspapers even. What's the point of encrypting such easily accessible information? Many users have public social network profiles with their interests listed in them, political and religious beliefs, or that information can be easily found from their personal blogs and websites. They don't see that as a threat to their personal life, so they choose not to hide it. How can viewing non-encrypted popular public content harm them? And how does encrypting such pages protect their privacy? | The issue you're dealing with, here, is that if you decide not to encrypt a connection, you're making assumptions regarding the sensitivity of the data that goes over that connection. Unfortunately, it is impossible to properly make that assumption because: You might not have fully understood all the implication of the data (for instance, if Twitter didn't encrypt data, it could be used by government agencies to spot dissidents and opponents). Data can become sensitive after they are being transmitted (for instance, answering "who was your first grade teacher" in a web chat with old school buddies could lead to a compromise of your iTunes account later on). In the end, it is the same as dealing with sensitive paper documents: you can decide to shred what's sensitive only but all it takes for that model to crumble is a single mistake. It's much easier to simply destroy ALL documents securely and not worry about sorting. Given the relatively low cost and of using connection security and the fact that it wards you against all the above problems (mostly), it makes a lot more sense to encrypt everything that to try to cherry-pick the "right" content to be encrypted. | {
"source": [
"https://security.stackexchange.com/questions/53980",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29563/"
]
} |
53,981 | Sync , a new product from BitTorrent, Inc., has been cited as a viable alternative to other cloud-storage platforms. The Sync FAQ indicates that an encryption scheme is being used, but does not comment on specifics. Does there exist any information about the type of encryption that BitTorrent Sync is using, and whether or not the implementation is secure? | The issue you're dealing with, here, is that if you decide not to encrypt a connection, you're making assumptions regarding the sensitivity of the data that goes over that connection. Unfortunately, it is impossible to properly make that assumption because: You might not have fully understood all the implication of the data (for instance, if Twitter didn't encrypt data, it could be used by government agencies to spot dissidents and opponents). Data can become sensitive after they are being transmitted (for instance, answering "who was your first grade teacher" in a web chat with old school buddies could lead to a compromise of your iTunes account later on). In the end, it is the same as dealing with sensitive paper documents: you can decide to shred what's sensitive only but all it takes for that model to crumble is a single mistake. It's much easier to simply destroy ALL documents securely and not worry about sorting. Given the relatively low cost and of using connection security and the fact that it wards you against all the above problems (mostly), it makes a lot more sense to encrypt everything that to try to cherry-pick the "right" content to be encrypted. | {
"source": [
"https://security.stackexchange.com/questions/53981",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/21882/"
]
} |
54,038 | I see a lot of posts all over the web asking whether or not they should be using SSL to secure their website, or if it's really necessary to do so when the content of their site does not contain or request sensitive data. Let's make the assumption here that the cost of the certificate is not the issue, considering it's not extremely expensive to get an SSL certificate. Why would you even want to host a page that isn't secured with SSL? Most all web servers support SSL out of the box, and it's usually quite simple to get it setup (especially with IIS.) | There are a number of reasons not to use SSL, none of which being a good reason in itself, but cumulatively they can explain a lot of things. The main reason not to use SSL is an effect of the strongest force in the Universe, i.e. laziness. However easy setting up SSL is, not setting it up will still be easier. This alone explains why so many sites still use HTTP only, not HTTPS. A great many sites can get away with it, and not being attacked, because there just are not enough attackers around to attack every site, by a long shot (and attackers are no less lazy than everybody else). Among other reasons, one can cite the following: Hosting several HTTPS Web sites with distinct names on the same IP address has long been difficult, especially when the various sites don't know each other (either the server uses a certificate with all the names, but this can result in apparent and unfortunate associations, or the server relies on SNI , which does not work with Internet Explorer on WinXP). SSL prevents some types of caching, in particular the transparent proxying that some ISP are quite fond of. This implies extra bandwidth requirements for the server (hard data on the increase is difficult to come by, and depends on the site type; for instance, a Web-mail interface like Gmail would be unlikely to benefit from heavy caching anyway, contrary to a picture-heavy site). In (much) older days, HTTPS Web sites were not indexed as thoroughly as non-SSL sites, resulting in a widespread idea that you get better indexing by shunning SSL (that one has been wrong for quite some time now, but old ideas are hard to eradicate). Some people still have the feeling that SSL implies a heavy computational cost (that one isn't correct either, but still common). As an ironic twist, some people fear that using SSL would project the impression that they do care about security, thus increasing the reputation backlash if (when) they get hacked. The idea being that if you never claim or let it believe that you ever gave any attention to the concept of security, then maybe people will be more indulgent when they discover how much indeed you disregard it. | {
"source": [
"https://security.stackexchange.com/questions/54038",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/42634/"
]
} |
54,120 | Gmail was recently changed to require HTTPS for everyone , whether they want to use it or not. While I realize that HTTPS is more secure, what if one doesn't care about security for certain accounts? Some have criticized Google for being Evil by forcing them into a secure connection even if they don't want to be secure. They argue that if it's just their own account, shouldn't they be the only one to decide whether or not to secure themselves? Note: This question was posted in reference to the article linked above in order to provide a canonical answer to the question being asked off-site (which is why it was answered by the same person who asked it). | It's not just about you . By forcing users to use TLS , they're creating a more secure environment for everyone . Without TLS being strictly enforced, users are susceptible to attacks such as sslstrip . Essentially, making unencrypted connections an option leads to the possibility of attackers forcing users into unencrypted connections . But that's not all. Requiring TLS is the first step in moving toward HSTS enforcement on the google.com domain. Google already does opportunistic HSTS enforcement -- which is to say that they don't require TLS, but they do restrict which certificates are allowed to be used on Google.com (nb: this technique is now called HPKP) . That's an improvement, but it's not ultimately a solution. For full HSTS enforcement, they need to ensure that requiring TLS on all Google services within the domain won't break any necessarily third-party solutions. Once enforcement is turned on, it can't easily be turned off. So by moving services one-by-one to strict TLS enforcement, they are paving the way toward making HSTS enforcement across the domain a reality. Once this enforcement is in place, browsers will simply refuse to connect to Google over an insecure or compromised connection. By shipping this setting in the browser itself, circumvention will become effectively impossible. Disclaimer: I work for Google now but I didn't work for Google when I wrote this. This is my opinion, not Google's (as should be immediately clear to anyone with a basic understanding of chronology). | {
"source": [
"https://security.stackexchange.com/questions/54120",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2264/"
]
} |
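To make the HSTS mechanism discussed in the answer above a little more concrete, here is a minimal, purely illustrative Python sketch. It assumes the third-party Flask package and an app that is actually served over HTTPS; it is not how Google deploys HSTS, just what emitting the header looks like for a small site:
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts(response):
    # Tells browsers to refuse plain-HTTP connections to this host for one year.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "served over TLS"
The header only takes effect once a browser has seen it over a valid HTTPS connection, which is why enforcing TLS first, as the answer describes, is the prerequisite step.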
54,285 | I'm still new to information security, but I have read a bit about the one-time pad . The point that sticks out to me the most is that it is supposedly unbreakable. Has this method of encryption ever been incorporated in any internet web applications on a larger scale? How would they pass the key? | The problem with a one-time pad is that it must be equal in length to (or longer than) the data being encrypted, and must never, ever, be reused. Just as you indicate ("how would they pass the key?"), the OTP must itself be sent in a secure way... however, that is the problem that is usually left to the user and is generally why OTP is useless. If you have the ability to send a large OTP securely, then you might think you could simply send the stuff you want to encrypt directly to the recipient over that secure channel. However, the benefit of an OTP is to move secrecy through time . If you have a secure channel now (e.g. an in-person meeting) you can exchange an OTP. You have banked some secrecy for later. Later, when you do not have a secure channel, you can use up a part of your pad to make your message secret. The improvement over OTP is where they make a small seed (a key) and use an expander called a PRNG to expand that key into what is essentially a stream cipher. This way you're only sending a small amount of data securely, and can encrypt lots of data with that expanded key. This is called "key stretching". (A small sketch of the basic XOR pad operation follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/54285",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/42515/"
]
} |
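A minimal Python sketch of the XOR operation behind a one-time pad, to make the answer above concrete. It is illustrative only: it deliberately skips the hard part, which is distributing the pad securely and never reusing it:
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # the pad must be at least as long as the message

ciphertext = xor_bytes(message, pad)      # encrypt
recovered = xor_bytes(ciphertext, pad)    # decrypt with the same pad, used exactly once
assert recovered == message
Replacing the truly random pad with the output of a PRNG seeded by a short key is exactly the "key stretching" / stream-cipher idea the answer ends on.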
54,353 | Maybe this question sounds obvious, but I wonder how dangerous might be publishing a public key for an asymmetric encryption system? I know public keys are meant for encrypting messages by anyone who's meant to do so, that's why we can even download a public cert of the most common CAs from web browsers. But is it secure if I publish my public key on a webserver so anyone can download it? What risks am I facing doing this? Thanks. | None, that's why it is called a public key. It can not be used to access anything encrypted for you without solving math problems that are currently prohibitively difficult to solve. It is possible that in the future it may be possible to solve these problems and that would cause the public key to allow messages to be decoded, but there is no current known threat. The flip side is that if you do not share your public key, then your private key doesn't do you any good. The only reason to use asymmetric cryptography over symmetric is if you need to let someone have your public key. Otherwise, if you are just doing stuff for yourself, symmetric is far faster and more secure. | {
"source": [
"https://security.stackexchange.com/questions/54353",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41650/"
]
} |
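As an illustration of the answer above, here is a hedged sketch using the third-party cryptography package (assumed available, recent version): the private key is generated and kept locally, and only the public half is serialized for publication:
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())  # this PEM block is what you can safely put on a web server
# private_key is never serialized or sent anywhere in this sketch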
54,639 | I'm currently using nginx with the following ciphers: ssl_ciphers HIGH:!aNULL:!eNULL:!LOW:!ADH:!RC4:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS; I would like to maintain compatibility to older browsers, especially also older mobile browsers and therefore not completely disallow SHA1. How can I achieve that SHA256 is preferred over SHA1 for MAC (Message Authentication Code) and always used when possible. I can i.e. force SHA256 to be applied by adding SHA256:!SHA: to my ssl_ciphers string but this would also disallow SHA1 completely. With the ssl_cipher at the beginning it tends however to just use SHA1. Any recommendations? Update 29.12.2014 Thanks everybody for the constructive inputs and discussion. Even though I still think that the Mozilla page on Server side TLS overall covers the topic quite good - I would only recommend the Modern compatibility with the limitation that the DSS ciphers should be removed from it and explicitly disallowed (!DSS) as recommended in the comment by Anti-weakpasswords - thanks for spotting it. ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DSS:!DES:!RC4:!3DES:!MD5:!PSK Interestingly ssllabs did not alert or down rate for this... Further I prefer to use custom generated Diffie-Hellman parameters. Even though the standard ones are obviously considered safe. What are the OpenSSL standard Diffie-Hellman parameters (primes)? openssl dhparam -check -out /etc/ssl/private/dhparams.pem 2048 increase that to 4096 for paranoia and fun if you like. | First, let's go over how cipher suite negotiation works, very briefly. For example, we can use the TLS 1.2 document RFC 5246 starting at section 7.4.1.2 to see, in the short short form: ClientHello: The client tells the server which cipher suites the client supports Now the server picks one I'll discuss how to control which one it picks next! ServerHello: The server tells the client which cipher suite it has chosen, or gives the client a failure message. Now, as to the actual selection. I've used the nginx ssl module documentation , the Qualys 2013 article on Configuring Apache, Nginx, and OpenSSL for Forward Secrecy , and the Hynek Hardening Your Web Server’s SSL Ciphers article for reference. The latter two cover both Apache and Nginx (as both use OpenSSL as a base). Essentially, you need to tell Nginx to use the order you select, and you need to select an order. To see what the results of that order would be, you can use the OpenSSL command line, e.g. openssl ciphers -v 'EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA256:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EDH+aRSA+AESGCM:EDH+aRSA+SHA256:EDH+aRSA:EECDH:!aNULL:!eNULL:!MEDIUM:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED' NOTE: You may want to remove :!3DES from that string; 3-key triple-DES isn't efficient, but it is still secure in and of itself to more or less 112 bits of security, and is very, very common. Use the above command to determine which cipher suites will be most preferred and least preferred in your configuration, and change it until you like the results. 
The references I've given have their own strings; I amended them slightly to get the above example (removing RC4 and SEED, and putting every TLS 1.2 cipher suite above any 'SSLv3' cipher suite, for example). Then, for Nginx in particular, you would alter your configuration file to include something like: ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA256:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EDH+aRSA+AESGCM:EDH+aRSA+SHA256:EDH+aRSA:EECDH:!aNULL:!eNULL:!MEDIUM:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED"; Add in SSLv3 to ssl_protocols if you really insist on it. The ssl_prefer_server_ciphers will inform nginx to use the order we specify, and ignore the order the client presents their cipher list in. Now, if the only shared cipher suite between the ClientHello and the list OpenSSL ciphers -v ... gives is our least preferred cipher, that's of course what nginx will use. If nothing matches, then we send the client a failure notice. The ssl_ciphers command is the meat of the choice, here, as nginx will inform OpenSSL of our preferred cipher suite list. Please, please use the openssl ciphers -v command to see the results you get on your platform. Ideally, check it again after changing OpenSSL versions. Also, please read Scott Helme's article on Setting up HSTS (HTTP Strict Transport Security) in nginx , which will allows a host to enforce the use of HTTPS on the client side. Be sure to include the HSTS header inside the http block with the ssl listen statement. Edited to add: At least after this (if not before also), go to Qualys SSL Labs to see HTTPS security information and to Test Your Server that's been kept pretty well up to date for the last few years. Recommendations change regularly, and sometimes even frequently reverse themselves (RC4, for example, what nearly whiplash inducing). You can also even Test Your Browser ! | {
"source": [
"https://security.stackexchange.com/questions/54639",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43005/"
]
} |
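As a follow-up to the answer above: after reloading nginx you can sanity-check what an actual client negotiates. A small Python sketch, where the hostname is a placeholder for your own server:
import socket
import ssl

HOST, PORT = "example.com", 443  # placeholder: point this at your own server

ctx = ssl.create_default_context()
with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("protocol:", tls.version())  # e.g. 'TLSv1.2'
        print("cipher:", tls.cipher())     # (cipher name, protocol, secret bits)
This only shows the suite chosen for this one client's offer; the Qualys SSL Labs test mentioned in the answer remains the more thorough check.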
54,783 | Following Turkey's recent social site blocks, I am wondering how can you efficiently accomplish that as a country. Similar for a big company. Blocking IPs → easy to circumvent, (proxys, tunnels, etc)
Blocking/Redirecting DNS → type the address or similar as above Deep Packet Inspection → very resource intensive, can it be done in a scale of a whole country? And again, encrypting traffic, HTTPS, SSH etc… Terminating all connections at country gateways, inspecting traffic, and then encrypting traffic again? Seems very very time consuming. Is there any (obvious?) or other way I missed? I am talking generally and not for Turkey's example. And dropping all encrypted traffic does not seem to be an option for a country. | You have covered the main ones. In short: it's very hard, if not impossible, to effectively block a site you want. You can make it hard by using the techniques you've mentioned: blocking IPs, redirecting DNS, blocking HTTP requests to certain sites / containing certain keywords. These methods are thwartable by proxies (in the case of deep packet inspection, encrypted proxies would be required) so you end up with a chase situation: as you block a site, proxies will spring up, and as you block those proxies even more will start. The closest anyone has come is North Korea, and they manage this by controlling all sites in their country's intranet. So the most effective methods: Whitelist (North Korea's method) - Only allowing the sites you control. Blocking all encrypted traffic + deep packet inspect (China's method) - this solution allows the communication that pass your criteria for what is acceptable and blocks communications that you are unable to determine the content of. Both of these methods require complete control / authority over all the internet infrastructure in the area you want to censor. As you've said, blocking all encrypted traffic doesn't really work - China has found this too: although they tried to, people are getting round this by using Steganography which is the practice of concealing messages, for example: I am illustrating a hidden message because I
hate to see unanswered questions. I will help
you to understand. Reading the first word from each line reveals the hidden message "I hate you" (there are, of course, more intelligent ways to do steganography, but that's just an illustration). | {
"source": [
"https://security.stackexchange.com/questions/54783",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31942/"
]
} |
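A tiny Python sketch of the acrostic example from the answer above, just to show how trivially such a hidden message is recovered, and why blocking encrypted traffic alone cannot stop covert channels:
cover_text = """I am illustrating a hidden message because I
hate to see unanswered questions. I will help
you to understand."""

# Take the first word of every line of the cover text.
hidden = " ".join(line.split()[0] for line in cover_text.splitlines())
print(hidden)  # -> "I hate you"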
54,846 | I mainly use 2 passwords: 1 is a 4 word full lowercase passphrase of 18 letters long which I use wherever possible. The other is basically 3 words and a digit with the first word in full uppercase and only 14 digits long. I use this whenever the first passphrase is not valid due to length or character set constraints. The words are fairly common (top 5000 words on popular TV shows). I've calculated the entropy for both passwords using http://rumkin.com/tools/password/passchk.php . The entropy for both passwords is about equal at 68 bits, give or take half a bit. I mainly use these passwords for gaming related matters (MMO accounts, desktop clients, forums). I do not use these passwords for financial data, apart from 2 empty paypal accounts and a read-only prepaid credit card statement. I don't know how valid these entropy numbers are, but judging from these parameters, are these passwords safe enough for their intended purpose? And what is the general guideline for password entropy for different purposes? | Password meters are no good. Well, that's a bit simplistic, so let me say it in more detail: a "password meter" application like the one you used is mindless and generic; what it measures is the effort of breaking your password, using the mindless and generic strategy that the password meter author thought of. In particular, that password meter system has no idea that your passwords have been generated by assembling words taken randomly from a short list. However, an attacker who is intent on breaking your password will know that, and adapt: you just wrote it on a public forum, so it has become public information. A correct entropy computation does not work over the actual password value, but over the process by which the password was generated. We simply assume that the attacker is aware of the process, and knows all of it except the actual random choices. With 4 words from a list of 5000, you get one password in a set of 5000^4 with uniform selection probability (that's an important assumption), so the entropy here is 49.15 bits (because 2^49.15 is approximately equal to 5000^4). With 3 words, you get 36.86 bits. For more on entropy calculation, see this answer . (The wisdom of entering your password in a Web-based "password meter" is questionable, too. The one you link to claims that it does all the computations in Javascript and your password does not leave your browser, but did you really check the Javascript source to make sure of it?) As far as passwords go, 36.86 bits of entropy are rather good. Entropy from passwords selected by average users is much lower than that. Such a password will be broken by an attacker who got the corresponding hash IF the hash was not done properly (e.g. some homemade construction with a couple of SHA-1 invocations), but even then chances are that other users will fall first. However , you are doing something really wrong. It is right there, in your first sentence: I mainly use 2 passwords: 1 is a 4 word full lowercase passphrase of 18 letters long which I use wherever possible . Emphasis is mine; it shows the problem. You are reusing passwords. That is Bad. You shall not reuse passwords. When you use the same password on N sites, you lower the security of your account on all sites to the level provided by the worst of the N . Moreover, when that site gets hacked and your password stolen, and your password shows up on lists of login+password exchanged over P2P networks, you will not know which site did it wrong.
A lot of sites still store plaintext passwords (a shooting offence) so any widely reused password MUST be assumed to have already leaked. If you use site-specific passwords, then any damage will be contained to the specific culprit. Of course, this implies some storage system on your side, e.g. KeePass , or a low-tech "passwords.txt" file (you have to take care of where you store it and use it, but that can be managed with decent physical security), or even a printed list that you keep in your wallet. In practice , separation of passwords for damage containment will be a lot more important to your security than password entropy. | {
"source": [
"https://security.stackexchange.com/questions/54846",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/34161/"
]
} |
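A quick check of the entropy arithmetic in the answer above, as a minimal Python sketch:
import math

wordlist_size = 5000
for words in (3, 4):
    bits = words * math.log2(wordlist_size)
    print(f"{words} words chosen uniformly from {wordlist_size}: {bits:.2f} bits")
# 3 words -> about 36.86 bits, 4 words -> about 49.15 bits, matching the answer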
54,889 | I always assumed that the process for getting a (trusted, not self-signed) certificate was more or less like this: You generate a public and private key pair From this key pair you generate a certificate You submit your public key, certificate and other (company) information to a CA The CA checks that the information you provided is correct The CA signs your certificate However, lately I am doubting this. People told me that in fact the CA itself generates the public and private key pair and certificate and signs it and sends all of that to you... This would seem to me to be very insecure in the sense that all private keys of all certificates would be in the hands of just a few CA's. I have been reading this question with a lot of interest: How do certification authorities store their private root keys? . However it only discusses the CA's own private keys. So do the CA's have a copy of the private keys of the certificates they sign or not? | Depending on how the CA does things, it may or may not have a copy of your private key. Usually it doesn't. The normal method is that you generate your private/public key pair on your own machine, then send the public key to the CA as part of a certificate request . The CA assembles and signs the certificate, and sends it back to you. Your private key never left your machine, and the CA never saw it. However, in some cases, it is a good idea to let the CA generate the key pair, and send it to you. One situation where this is desirable is for asymmetric encryption keys: if you lose a private key, then you lose all the data which has been encrypted with the corresponding public key, since you can no longer decrypt it. Therefore, encryption private keys should be backupped somewhere, and having the CA generate the private key makes it easy for the CA to enforce a comprehensive, inescapable backup system. To know what happens with a specific CA, have a look at what they return to you: if the CA sends to you a PFX (PKCS#12) archive, then CA has the private key, or at least had the private key (whether it saved it is another matter). On the other hand, if the CA sends to you a raw certificate only, then it does not have the private key. In any case, the whole process should be documented by the CA with which you are doing business; if the CA does not document what it does, and in particular where private keys are generated and whether they are stored or not, then find another CA. CA which do things opaquely cannot be trusted. Indeed, it is a basic requirements of all "CA best practice guides" (e.g. Web Trust ) that a CA must document everything. | {
"source": [
"https://security.stackexchange.com/questions/54889",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43457/"
]
} |
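To illustrate the "normal method" from the answer above, here is a hedged sketch using the third-party cryptography package (assumed available, recent version): the key pair is generated locally, and only a certificate signing request, which contains just the public key and identifying information, is sent to the CA:
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # stays with you

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .sign(key, hashes.SHA256())
)

print(csr.public_bytes(serialization.Encoding.PEM).decode())  # only this goes to the CA
The CA signs a certificate from the request and returns it; at no point in this flow does the private key leave your machine.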
55,015 | I've taken a graph of the amount of CVE reports concerning the JRE per Year. Now as you can see this spiked in 2012-2013, which could have been guessed easily, if you look at the amount of news items concerning java in the past years. However, I'm having trouble finding an explanation why: Did Java get more popular? Did Java just become more popular for
hackers? Is it because of the acquisition by Oracle? | I think this is a "trend effect" which is also the drive under everything about fashion (in the "clothing" sense). Please allow the local Frenchman to talk about fashion. Fashion is a deeply self-contradictory social behaviour. People who follow fashions seek both: to gain acceptance in a given local group by displaying adherence to perceived agreed upon codes (e.g. the arbitrary choices of cloth shapes and textures and colours); to gain visibility within the same group by displaying a bold (implicitly: bolder than other group members) will to embody the most up-to-date or even future social codes for that group. In effect, the fashion-victim must be both a leader and a follower. If the context were electronics, we would say that we observe a circuit with positive feedback, which must necessarily exhibit sharp transitions between locally stable configurations. An extra effect is that, in clothing fashion, the only universal effect is fast depreciation: no fashion may ever remain active for more than a few months. In short words, fashion is fast-pace, and when it tilts one way ever so slightly, everybody rushes in that direction. This explains the way fashions come and go with violent abruptness. Hackers are the geek version of fashion victims. Their interest and efforts are always driven by what seems to be "hot subjects". People who spend their days and nights on keyboards are often very sensitive to social exclusion (since they get little society on average) so they abhor the idea of concentrating on an "has been" technology which would deprive them of the last shreds of peer recognition that they may hope for. Therefore, when a topic seems to promise glory, they all run towards it. "Glory" can here be equated with "slashdottable". In the specific case of security and Java, the trigger may well have been, indeed, the acquisition of Sun by Oracle. Oracle is a known "bad guy" so there always is some fame in finding security holes in Oracle's products (computer people have always had a soft spot for nihilism). Moreover, the security model of Java (the applet model) looks ripe with potential vulnerabilities: in the Java applet model, the "security perimeter", which is the boundary between the hostile world (the applet code itself) and the protected world (the host system) goes through the standard library API: hundreds of system classes must check and enforce the complex system of permissions. The attack surface is huge . There MUST be holes now and then. Sun's people were quite good at what they did, but making the applet model safe would take divine development powers. As soon as a few bugs were found and publicized, the idea of unclaimed reputation riches went through the collective hackers' minds like a fire in the savannah, and they all rushed. Such is the power of Bonanza . Once brains are ablaze with the promise of wealth (in this case, Twitter followers or Slashdot scores), there is no stopping them. It will end soon, though. "Java bugs are soooo 2013 !" | {
"source": [
"https://security.stackexchange.com/questions/55015",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32737/"
]
} |
55,023 | I was reading up on the history of the PGP encryption software when I realised its creator was under criminal charges for munitions export without a license for releasing the source code of PGP. What was so dangerous about PGP at that point in time that it was an offence under the law? I mean, PGP is just an encryption and decryption algorithm; what am I missing here? | PGP was considered dangerous because it could have allowed Soviet spies and military officers to plan the nuclear annihilation of the western world without the CIA realizing what's happening before it's too late. Time for some history. During World War II, the importance of cryptography for military use became apparent. Being able to crack enemy cryptography while also having cryptography systems for oneself which can not be cracked, proved to be an important military factor which could result in victory or defeat. During the subsequent universal arms-race during the cold war, all sides were aware of this. Having the upper hand in cryptographic technology over the other side was considered a strategical factor which could turn the tide in another world-war. That meant that any knowledge-transfer of cryptography know-how from the Western to the Eastern world had to be prevented. As a result of this doctrine, cryptographic technology was considered of military value and thus filed under Category XIII in the United States Munitions List. That meant any data storage medium which contained cryptographic software was legally considered like live ammunition when it came to moving it across borders. From today's point of view it might seem absurd to try to contain knowledge through export restrictions designed for physical goods , but it fitted into the isolationist viewpoint of the military strategists of the cold war era. Also remember that this was the 70s, long before the internet age. This was decades before the time where you were able to obtain any software in the world via the internet through your favorite piracy website. Getting a piece of software from computer A to computer B usually meant to put it on a physical medium like a floppy disk, magnetic tape or (even earlier) punch-cards, and the movement of such physical media across borders seemed controllable (at least in theory). Technology marched on. In the 80s, the first international computer networks emerged, and the hacker community began to flourish. The world became increasingly interconnected and soon it became apparent that containing knowledge within geographical borders was an exercise in futility. But as usual, politics and laws didn't keep up with technical innovation, so when PGP emerged in the 90s, it was still subject to cold war era laws regarding cryptography exporting. The algorithms it used were open secrets, available to anyone in the world capable of buying a modem and making long-distance phone calls. Hackers were tattooing them on their bodies to ridicule the cryptography export restrictions. But as a commercial company, PGP had to play along and find a loophole in the form of exporting their source code in printed form and re-transcribe it. Although the restrictions on cryptographic technology have been relaxed in the past decades, some of them are still in place . | {
"source": [
"https://security.stackexchange.com/questions/55023",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36540/"
]
} |
55,061 | I have a small number of employees who use a company computer but these people aren't very tech-savvy. They use an email client and a messaging client. I'm pretty sure they wouldn't click on a .exe or .zip file in an email without thinking, and I know that's one area of concern. However, I'm thinking about images. In fact, regardless of how capable a person is with technology, I believe that attaching things (code or anything else) to an image can be a security risk. What can be attached to images to harm another? I believe that images can pose a security risk as they 'automatically execute' or something. There are so many ways that images can be received by a computer (or a phone or tablet, of course): email iMessage (or any other messaging app) someone right-clicking and saving an image from a web page just viewing a web page of course downloads the image to cache What precautions do I need to take regarding the above four things? Can someone just attach some code to an image and it execute? What do I need to do to prevent images being used against my computers? I'm guessing you couldn't just attach code to an image and iMessage someone's iPhone. What about Android? | The other answers mostly talk about attaching arbitrary code to images via steganographic techniques, but that's not very interesting since it requires that the user be complicit in extracting and executing that. The user could just execute malicious code directly if that's their goal. Really you're interested in whether there's a possibility of unexpected, arbitrary code execution when viewing an image. And yes, there is such a possibility of an attacker constructing a malicious image (or something that claims to be an image) that targets specific image viewing implementations with known flaws. For example, if an image viewer allocates a buffer and computes the necessary buffer size from a naive width * height * bytes_per_pixel calculation, a malicious image could report dimensions sufficiently large to cause the above calculation to overflow, then causing the viewer to allocate a smaller buffer than expected, then allowing for a buffer overflow attack when data is read into it. Specific examples: http://technet.microsoft.com/en-us/security/bulletin/ms05-009 http://technet.microsoft.com/en-us/security/bulletin/ms04-028 http://www.adobe.com/support/security/bulletins/apsb11-22.html https://www.mozilla.org/security/announce/2012/mfsa2012-92.html http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-1205 http://en.wikipedia.org/wiki/Windows_Metafile_vulnerability In general, these sorts of things are difficult to protect against. Some things you can do: Keep your systems and applications updated. Enable DEP . Enable ASLR if possible. Avoid running programs with administrative privileges. On Windows, Microsoft's EMET could also provide some protection. | {
"source": [
"https://security.stackexchange.com/questions/55061",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32062/"
]
} |
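A purely illustrative Python sketch of the integer-overflow pattern described in the answer above. No real image parser works exactly like this, and Python integers do not overflow, so the 32-bit wrap-around is simulated with a mask:
MASK_32 = 0xFFFFFFFF

def naive_buffer_size(width, height, bytes_per_pixel):
    # Simulates a 32-bit size calculation that silently wraps around.
    return (width * height * bytes_per_pixel) & MASK_32

w, h, bpp = 0x10000, 0x10000, 4           # dimensions claimed by a malicious header
allocated = naive_buffer_size(w, h, bpp)  # 2**34 wraps to 0 in 32 bits
actual = w * h * bpp                      # the amount of pixel data actually copied

print(f"buffer allocated: {allocated} bytes, data copied into it: {actual} bytes")
# Copying 'actual' bytes into an 'allocated'-sized buffer is the overflow the answer describes.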
55,075 | If you haven't heard of the Heartbleed Bug , it's something to take a look at immediately. It essentially means that an attacker can exploit a vulnerability in many versions of OpenSSL to be able to gain access to a server's private key . It is not a theoretical threat, it is a demonstrable and reproducible threat. See the above link for more information. The question I think most organizations are asking themselves is the following: Does every company now need to create new public/private keypairs and ask their CA to invalidate the original signed keypairs? | It means much more than just new certificates (or rather, new key pairs) for every affected server. It also means: Patching affected systems to OpenSSL 1.0.1g Revocation of the old keypairs that were just superseded Changing all passwords Invalidating all session keys and cookies Evaluating the actual content handled by the vulnerable servers that could have been leaked, and reacting accordingly. Evaluating any other information that could have been revealed, like memory addresses and security measures Summarized from heartbleed.com (emphasis mine): What is leaked primary key material and how to recover? These are the crown jewels, the encryption keys themselves . Leaked
secret keys allows the attacker to decrypt any past and future traffic
to the protected services and to impersonate the service at will. Any
protection given by the encryption and the signatures in the X.509
certificates can be bypassed. Recovery from this leak requires
patching the vulnerability, revocation of the compromised keys and
reissuing and redistributing new keys. Even doing all this will still
leave any traffic intercepted by the attacker in the past still
vulnerable to decryption. All this has to be done by the owners of the
services. What is leaked secondary key material and how to recover? These are for example the user credentials (user names and
passwords) used in the vulnerable services. Recovery from this leaks
requires owners of the service first to restore trust to the service
according to steps described above. After this users can start
changing their passwords and possible encryption keys according to the
instructions from the owners of the services that have been
compromised. All session keys and session cookies should be invalided
and considered compromised. What is leaked protected content and how to recover? This is the actual content handled by the vulnerable services . It
may be personal or financial details, private communication such as
emails or instant messages, documents or anything seen worth
protecting by encryption. Only owners of the services will be able to
estimate the likelihood what has been leaked and they should notify
their users accordingly. Most important thing is to restore trust to
the primary and secondary key material as described above. Only this
enables safe use of the compromised services in the future. What is leaked collateral and how to recover? Leaked collateral are other details that have been exposed to the
attacker in the leaked memory content. These may contain technical
details such as memory addresses and security measures such as
canaries used to protect against overflow attacks. These have only
contemporary value and will lose their value to the attacker when
OpenSSL has been upgraded to a fixed version. | {
"source": [
"https://security.stackexchange.com/questions/55075",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2374/"
]
} |
55,076 | CVE-2014-0160 http://heartbleed.com This is supposed to be a canonical question on dealing with the Heartbeat exploit. I run an Apache web server with OpenSSL, as well as a few other utilities relying on OpenSSL (as client). What should I do to mitigate the risks? The bug dissected Check if your site is vulnerable (Duckduckgo.com is, for instance!) I looked at some of the data dumps
from vulnerable sites,
and it was ... bad.
I saw emails, passwords, password hints.
SSL keys and session cookies.
Important servers
brimming with visitor IPs.
Attack ships on fire off
the shoulder of Orion,
c-beams glittering in the dark
near the Tannhäuser Gate.
I should probably patch OpenSSL. Credit: XKCD . | There is more to consider than just new certificates (or rather, new key pairs) for every affected server. It also means: Patching affected systems to OpenSSL 1.0.1g Revocation of the old keypairs that were just superseded Changing all passwords Invalidating all session keys and cookies Evaluating the actual content handled by the vulnerable servers that could have been leaked, and reacting accordingly. Evaluating any other information that could have been revealed, like memory addresses and security measures Neel Mehta (the Google Security engineer who first reported the bug) has tweeted : Heap allocation patterns make private key exposure unlikely for #heartbleed #dontpanic. Tomas Rzepka (probably from Swedish security firm Certezza ) replied with what they had to do to recover keys: We can extract the private key successfully on FreeBSD after
restarting apache and making the first request with ssltest.py Private key theft has been also demonstrated by CloudFlare Challenge . And Twitter user makomk chimed in with : I've recovered it from Apache on Gentoo as a bare prime factor in
binary, but your demo's a lot clearer...It has a lowish success rate,
more tries on the same connection don't help, reconnecting may,
restarting probably won't...Someone with decent heap exploitation
skills could probably improve the reliability. I'm not really trying
that hard. I summarized the bullet points above from heartbleed.com (emphasis mine): What is leaked primary key material and how to recover? These are the crown jewels, the encryption keys themselves . Leaked
secret keys allows the attacker to decrypt any past and future traffic
to the protected services and to impersonate the service at will. Any
protection given by the encryption and the signatures in the X.509
certificates can be bypassed. Recovery from this leak requires
patching the vulnerability, revocation of the compromised keys and
reissuing and redistributing new keys. Even doing all this will still
leave any traffic intercepted by the attacker in the past still
vulnerable to decryption. All this has to be done by the owners of the
services. What is leaked secondary key material and how to recover? These are for example the user credentials (user names and
passwords) used in the vulnerable services. Recovery from this leaks
requires owners of the service first to restore trust to the service
according to steps described above. After this users can start
changing their passwords and possible encryption keys according to the
instructions from the owners of the services that have been
compromised. All session keys and session cookies should be invalided
and considered compromised. What is leaked protected content and how to recover? This is the actual content handled by the vulnerable services . It
may be personal or financial details, private communication such as
emails or instant messages, documents or anything seen worth
protecting by encryption. Only owners of the services will be able to
estimate the likelihood what has been leaked and they should notify
their users accordingly. Most important thing is to restore trust to
the primary and secondary key material as described above. Only this
enables safe use of the compromised services in the future. What is leaked collateral and how to recover? Leaked collateral are other details that have been exposed to the
attacker in the leaked memory content. These may contain technical
details such as memory addresses and security measures such as
canaries used to protect against overflow attacks. These have only
contemporary value and will lose their value to the attacker when
OpenSSL has been upgraded to a fixed version. | {
"source": [
"https://security.stackexchange.com/questions/55076",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13820/"
]
} |
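For readers who want the bug itself in a nutshell, here is a hedged, pure-Python simulation of the missing bounds check (no real TLS code is involved; "memory" simply stands in for heap contents sitting next to the heartbeat payload):
memory = b"hello" + b" user=alice; pass=hunter2"  # 5-byte payload followed by unrelated heap data

def heartbeat_reply(claimed_len):
    # Vulnerable behaviour: echo back 'claimed_len' bytes without checking that
    # the real payload ("hello", 5 bytes) is actually that long.
    return memory[:claimed_len]

def heartbeat_reply_fixed(claimed_len, actual_len=5):
    # The fix: discard requests whose claimed length exceeds the real payload.
    if claimed_len > actual_len:
        return b""
    return memory[:claimed_len]

print(heartbeat_reply(5))    # honest request: b'hello'
print(heartbeat_reply(30))   # oversized request: 'hello' plus adjacent secrets leak out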
55,119 | If I have a web crawler (using a non-patched version of OpenSSL) that can be coaxed to connect to an evil https-site, can they get everything from my process memory? To attack a server you can keep reconnecting to get more 64kb blocks (if I understand correctly), but can a client be forced to reconnect many times, to get more blocks? | Yes, clients are vulnerable to attack. The initial security notices indicated that a malicious server can use the Heartbleed vulnerability to compromise an affected client. Sources below (all emphasis is mine). Since then, proof of concept attacks have validated this position - it is utterly certain that clients running apps that use OpenSSL for TLS connections may be vulnerable. heartbleed.com : ...When [Heartbleed] is
exploited it leads to the leak of memory contents from the server to
the client and from the client to the server . Ubuntu Security Notice USN-2165-1 : An attacker could use this issue to obtain up to 64k of memory
contents from the client or server RFC6520 : 5. Use Cases Each endpoint sends HeartbeatRequest messages... OpenSSL Security Advisory 07 Apr 2014 : A missing bounds check in the handling of the TLS heartbeat extension
can be used to reveal up to 64k of memory to a connected client or
server . Client applications reported to be vulnerable (Credit to @Lekensteyn except where otherwise stated): MariaDB 5.5.36 wget 1.15 (leaks memory of earlier connections and own state) curl 7.36.0 git 1.9.1 (tested clone / push, leaks not much) nginx 1.4.7 (in proxy mode, leaks memory of previous requests) links 2.8 (leaks contents of previous visits!) All KDE applications using KIO (Dolphin, Konqueror). Exim mailserver OwnCloud Version Unknown | Source Note that some of these programs do not use OpenSSL. For example, curl can be built with Mozilla NSS and Exim can be built with GnuTLS (as is done on Debian). Other common clients: Windows (all versions): Probably unaffected ( uses SChannel/SSPI ), but attention should be paid to the TLS implementations in individual applications. For example, Cygwin users should update their OpenSSL packages. OSX and iOS (all versions): Probably unaffected. SANS implies it may be vulnerable by saying " OS X Mavericks has NO PATCH available ", but others note that OSX 10.9 ships with OpenSSL 0.9.8y, which is not affected. Apple says : "OpenSSL libraries in OS X are deprecated, and OpenSSL has never been provided as part of iOS" Chrome (all platforms except Android): Probably unaffected ( uses NSS ) Chrome on Android: 4.1.1 may be affected ( uses OpenSSL ). Source . 4.1.2 should be unaffected, as it is compiled with heartbeats disabled . Source . Mozilla products (e.g. Firefox, Thunderbird, SeaMonkey, Fennec): Probably unaffected, all use NSS | {
"source": [
"https://security.stackexchange.com/questions/55119",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43672/"
]
} |
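A small related check for your own client-side tooling (illustrative only; it covers Python itself, not curl, wget or the other clients listed above): Python's ssl module reports which OpenSSL build it links against, and the affected releases were 1.0.1 through 1.0.1f:
import ssl

print(ssl.OPENSSL_VERSION)       # e.g. 'OpenSSL 1.0.1f 6 Jan 2014' would fall in the affected range
print(ssl.OPENSSL_VERSION_INFO)  # the same information as a tuple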
55,127 | When accepting public keys from someone setting up an identity provider for access to resources protected by a service provider using SAML 2.0, do you absolutely need to have a unique certificate? Is this covered in the SAML specifications? If they don't, I assume that use of certificates as a layer of defense is rendered void. An example might be someone setting up a test IdP and reusing the certificate for production. | Yes, clients are vulnerable to attack. The initial security notices indicated that a malicious server can use the Heartbleed vulnerability to compromise an affected client. Sources below (all emphasis is mine). Since then, proof of concept attacks have validated this position - it is utterly certain that clients running apps that use OpenSSL for TLS connections may be vulnerable. heartbleed.com : ...When [Heartbleed] is
exploited it leads to the leak of memory contents from the server to
the client and from the client to the server . Ubuntu Security Notice USN-2165-1 : An attacker could use this issue to obtain up to 64k of memory
contents from the client or server RFC6520 : 5. Use Cases Each endpoint sends HeartbeatRequest messages... OpenSSL Security Advisory 07 Apr 2014 : A missing bounds check in the handling of the TLS heartbeat extension
can be used to reveal up to 64k of memory to a connected client or
server . Client applications reported to be vulnerable (Credit to @Lekensteyn except where otherwise stated): MariaDB 5.5.36 wget 1.15 (leaks memory of earlier connections and own state) curl 7.36.0 git 1.9.1 (tested clone / push, leaks not much) nginx 1.4.7 (in proxy mode, leaks memory of previous requests) links 2.8 (leaks contents of previous visits!) All KDE applications using KIO (Dolphin, Konqueror). Exim mailserver OwnCloud Version Unknown | Source Note that some of these programs do not use OpenSSL. For example, curl can be built with Mozilla NSS and Exim can be built with GnuTLS (as is done on Debian). Other common clients: Windows (all versions): Probably unaffected ( uses SChannel/SSPI ), but attention should be paid to the TLS implementations in individual applications. For example, Cygwin users should update their OpenSSL packages. OSX and iOS (all versions): Probably unaffected. SANS implies it may be vulnerable by saying " OS X Mavericks has NO PATCH available ", but others note that OSX 10.9 ships with OpenSSL 0.9.8y, which is not affected. Apple says : "OpenSSL libraries in OS X are deprecated, and OpenSSL has never been provided as part of iOS" Chrome (all platforms except Android): Probably unaffected ( uses NSS ) Chrome on Android: 4.1.1 may be affected ( uses OpenSSL ). Source . 4.1.2 should be unaffected, as it is compiled with heartbeats disabled . Source . Mozilla products (e.g. Firefox, Thunderbird, SeaMonkey, Fennec): Probably unaffected, all use NSS | {
"source": [
"https://security.stackexchange.com/questions/55127",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43701/"
]
} |
55,249 | On several pages , it is reiterated that attackers can obtain up to 64K of memory from a server or client that uses an OpenSSL implementation vulnerable to Heartbleed (CVE-2014-0160). There are dozens of tools that reveal the bug in server applications. So far I have not seen a single tool that exploits the bug in client applications. Is it that hard to exploit the bug at clients? Are clients actually vulnerable or not? | As a matter of fact, yes , clients are vulnerable. So far the attention has been focused on servers as they are much more open to exploitation. (Almost) everyone can connect to a public HTTP/SMTP/... server. This blog describes how the bug actually works (it mentions dtls_process_heartbeat() , but tls_process_heartbeat() is affected in the same way). This function is used both for client and server applications, so indeed clients should be vulnerable too. According to RFC 6520, heartbeats should not be sent during handshakes. In practice, OpenSSL accepts heartbeats right after sending a ServerHello (this is what Jared Stafford's ssltest.py does). Upon further testing, I have discovered that servers can abuse clients by sending a Heartbeat right after sending the ServerHello too. It triggers the same bug. A proof of concept can be found in my repo at https://github.com/Lekensteyn/pacemaker . From its README: The following clients have been tested against 1.0.1f and leaked
memory before the handshake: MariaDB 5.5.36 wget 1.15 (leaks memory of earlier connections and own state) curl 7.36.0 (https, FTP/IMAP/POP3/SMTP with --ftp-ssl) git 1.9.1 (tested clone / push, leaks not much) nginx 1.4.7 (in proxy mode, leaks memory of previous requests) links 2.8 (leaks contents of previous visits!) KDE 4.12.4 (kioclient, Dolphin, tested https and ftps with kde4-ftps-kio) Exim 4.82 (outgoing SMTP) It has been demonstrated that 64 KiB of memory (65535 bytes) can indeed returned. It has also been demonstrated that clients ( wget , KDE Dolphin, ...) can leak data like previous requests possibly containing passwords. | {
"source": [
"https://security.stackexchange.com/questions/55249",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2630/"
]
} |
55,279 | I read a lot about IP spoofing but I am not sure how easy it is really to do. Let's say I am in Spain, can I somehow connect to a server in the US with an IP address that is allocated to Mexico? Won't the routers simply refuse to forward my traffic? I know you won't get any response as it would be routed to Mexico but I am confused how you can contact the US server at all with the wrong IP. | Actually, you can't. Whenever you need IP traffic to be bidirectional , IP spoofing is no use. The contacted server would not reply to you but to someone else, the address you spoofed. IP spoofing is then normally "useful" only to disrupt communications - you send harmful packets, and you don't want them being traceable to yourself. In specific situations you can use a double spoofing to gather a measure of bidirectionality. For example let us suppose that we know of a system somewhere that has poor sequence generators - whenever you send a packet to it, it will reply with a packet containing a monotonically incrementing number. If nobody was connecting to the system except you, you would expect getting 1, 2, 3, 4... . Now let us further suppose that you're interested in whether another system is replying to specific packets (e.g. you're running a port scan), and you wish to receive some information but don't want the target system to have your real address. You can send to that system a spoofed packet pretending to be from the poorly-sequencing machine. Now there are three possibilities: the target system does not reply, it replies, or it actively counterattacks and (e.g.) scans the pretended source to determine the why's and wherefore's of that first packet. What you do is, you scan - without spoofing - the poorly-sequencing machine (PSM). If nobody except you has connected to it, which means that the target machine hasn't replied to the PSM, you'll get 1-2-3-4-5. If it replied once , you will get 1-3-5-7 (the packets 2, 4 and 6 having been sent by the PSM to the TM in response to the TM-PSM replies to the spoofed packets from you to the TM. If the TM made more connections, you'll get something like 2-11-17-31 or such. The PSM knows your real address, of course, but the TM does not. This way you can spoof a connection and still gather some information. If the PSM's security level is low enough, this, combined with the fact that your "scan" of the PSM is harmless, is (hopefully) enough to prevent consequences to you. Another possibility is to spoof a nearby machine. For example you are in network 192.168.168.0/24, have IP 192.168.168.192, and you have promiscuous access to some other machine address space, say 192.168.168.168. You just have to "convince" the router serving both you and the .168 machine that you are indeed the .168 machine , and take the latter offline or disrupt its communications (or wait until it is offline for reasons of its own, e.g. a colleague logging off for lunch). Then the replies to the spoofed .168 packets will sort of whizz by past you, but as long as you can sniff them while they pass, and the real .168 isn't able to send a "That wasn't me!" reply, from the outside the communication will appear to be valid and point back to the .168 machine. This is sort of like pretending to be your front door neighbour, while that apartment is really untenanted. You order something through mail, the packet gets delivered to the other's front door, you tell the deliveryman "Oh yes, mr. Smith will come back in half an hour, I'll just sign for him" and get the package. | {
"source": [
"https://security.stackexchange.com/questions/55279",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/172721/"
]
} |
55,283 | Should I change all of my online passwords due to the heartbleed bug? Edit: I found a list of vulnerable sites on GitHub and checked all my critical sites. Everything I was really concerned with was not vulnerable according to the list. | Short answer: Yes, all passwords. Long answer: At first sight, you only need to change the secret key of the certificate. But due to several reasons, all passwords are affected. Here's why: Reason 1: Chained attack Someone captured the secret key of the certificate. From that time on, he could decrypt all the traffic to that site. If you logged on for whatever service on that website, your password was revealed. Probably the most common service is Webmail, so let's use that as an example. Reading your emails, the attacker found out which other services you are using. Using the password reset mechanism, the attacker simply reset all the passwords, confirmed the reset emails and deleted those emails of course. The attacker now has access to all your services. The website owner (whose secret key was stolen) became aware of the security leak and fixes it. The service (we assumed Webmail) is no longer vulnerable. The website owner informs you about the leak and asks you to change your password. The attacker can still use all the other services as long as you don't notice that you cannot login anymore (because he changed the password). This means: For services which you use often, it's more likely to detect that it was misused. For services you don't use often, you'll not notice. Therefore you at least have to check each single password, whether it is affected or not. Good thing on this attack vector: You'll be aware of the issue. Reason 2: Access to database Someone captured the secret key of the certificate. From that time on he could decrypt all the traffic to that site. If the admin of the website logged on to do some administration, moderation or whatever, the attacker now has the password of the admin. With that password, the attacker gets access to the database. Depending on the security of the database, the attacker can read the usernames the passwords in plaintext (worst case) vulnerable password hashes (e.g. MD5 hash, unsalted) (bad case) secure salted hashes (best case) The attacker calculates the password from the hash The website owner fixes the problem You are informed by the website owner to change the password. Since you as the user cannot know how securely the password was stored in the database, you need to consider that the attacker has the password (and username). This is a problem, if you reused the password for other services. Bad thing: You don't know whether you are affected, because logging in to other services still works (for you and for the attacker). Only solution: Change all passwords. Reason 3: not only secrets certificate keys are leaked I looked at some of the data dumps
from vulnerable sites,
and it was ... bad.
I saw emails, passwords, password hints. As posted by XKCD #1353 : So the attacker could already have your password in plain text even without access to the database and without a chained attack. Notes The second problem, described by @Iszy , still remains: Wait to change your password until the service has fixed the Heartbleed issue. This is a critical chicken-and-egg issue, because you can only reliably change all the passwords once all the services you use are updated. | {
"source": [
"https://security.stackexchange.com/questions/55283",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43914/"
]
} |
55,343 | Most of my friends who are not experienced in computers want to know what Heartbleed is and how it works. How would one explain Heartbleed to someone without a technical background? | The analogy of the bank and bank employee You call the bank to request a new bank account, to make an appointment - whatever. Somehow you and the bank make sure that you are who you are, and the bank is actually the bank. This is the TLS process that secures the connection between you and the bank, and we assume this is handled properly. The roles in this play The bank: a webserver The bank employee: the OpenSSL service for that server You (the bank robber): a bot fetching all it can get from that server Staying connected - the heartbeat A bank employee answers your call. You request some information. The employee says to wait a minute and disables his microphone. He can hear you, you cannot hear him. Then it's quiet. For a long time. You start to wonder if he hung up. So you say "hello?" The employee is instructed to echo whatever you say, and replies with "hello". This is the heartbeat to check if there is still a connection. Now with this peculiar bank employee, you need to say first how many words you are going to use before you ask if the employee is still online. So instead of saying "hello", you need to say "one: hello", or "two: hello there". The employee now knows he can reply with repeating those (first) two words, and then can continue to work on your request. This is the heartbeat protocol. The problem - the heartbleed - no check on what is returned OK, you're bored, and you make a joke. You say "thousand: hello". The employee doesn't check that you only said one word (hello), and starts to reply with "hello" plus then the next 999 words that he says, or thinks about, or has in memory, before putting the mic off. This is the bug that causes the problem. Those 999 words are unrelated. Most of it will be useless, remarks about the weather, request for coffee, lunch appointments etc. Some of it can be important information about the bank or other customers. A transport of gold, a customer is going to bring in $1m, the code for entering the bank or the safe, etc. There is no check if there are actually 1000 words to be replied. Plus you can do this request over and over again - the employee won't complain and nobody else is going to notice what is going on. There is one limit. You will only get information from this one bank employee, and only the stuff he talks or thinks about. Other employees are not affected. You cannot see what is on his desk or in his rolodex. (Analogy: only data in memory (RAM) is at risk; data on the harddisk which is not read into memory, and data from other programs and processes is safe.) Doing this you don't know what information you will get, but doing it for a long time over and over again, you will get enough information to finally be able to break in without anyone noticing it. You can enter the bank after hours, open the safe, etc. This is the risk involved. The solution - check request and renew codes If the employee would think for a moment he would only reply with one word and then disable the microphone so you cannot hear anymore what he is discussing. By making this check, you will stay connected and know that the employee has not hung up, but will not hear any random info anymore. In effect the employee needs new instructions on what to echo. This is fixed with the update to the latest version of OpenSSL. 
The bank will have to renew security keys for entering the bank and safe, because it is unknown whether someone has the old codes. | {
"source": [
"https://security.stackexchange.com/questions/55343",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36976/"
]
} |
55,355 | I am at a small firm (12 PC on the internal network). We use this network architecture now: Internet(Modem) --> Router --> Firewall --> Switch -- > Internal Network (clients PCs)
| |
|- Wifi Router |- DHCP & AD Server
|- HTTP file server The Main server and the clients PCs are (relatively) new (to other components), they were bought in 2005. But the other hardwares are creepy old ones, lot of them is older than 15 year. They became the bottlenecks of the network. Here I describe all of the components in a short shoot: The Router is an old ASUS (Rx3041) Firewall is a MS Server 2003 R2 with ISA 2006 installed (now formerly Forefront Threat Management Gateway) Definitely this is the main bottleneck of our network, we need to restart this server more than one per week, because it randomly start to drop allowed connections. DHC & AD Server is a MS SVR 2008 R2 and it also operates as an internal file server and internal HTTP server (We run a Redmine issue tracker on it. But not on IIS.) Wifi Router is just simply an access point to the internet outside of our firewall for our wireless devices such phones and tablets (but no laptops or computers, which we use for work.) HTTP File server is an old laptop, which is out of the firewall too (However we have a rule for it to reach it from internal network, so we use it as a network drive to publish some stuff. While our programmers work out of office, so we don't have any sensitive information on most of the client PCs except one. Our server store sensitive information too. (source code, our clients information, etc..) We want to fresh up our network a bit. We plan to use only one router instead of 2 routers and a firewall. It looks like this: Internet(Modem) --> Router with wifi --> Switch -- > Internal Network (clients PCs)
|
|- HTTP file server (DMZ maybe?) We want to use the Asus RT-AC66U router. I read this great answer from Bill Frank, but that question is three year old and things change fast. Maybe routers evolved enough to be able to secure a network. So my question: is this a viable option? Could a router protect us? Or must we use (still) a hardware firewall to protect our network? (We should, but must we?) We choose this option because of the matter of money. If I could I setup a new router and place the whole network behind a firewall (on new machine) but management say there is no easy money can be spent for that. Should I fight for a new firewall server? Or the router will be enough? | The analogy of the bank and bank employee You call the bank to request a new bank account, to make an appointment - whatever. Somehow you and the bank make sure that you are who you are, and the bank is actually the bank. This is the TLS process that secures the connection between you and the bank, and we assume this is handled properly. The roles in this play The bank: a webserver The bank employee: the OpenSSL service for that server You (the bank robber): a bot fetching all it can get from that server Staying connected - the heartbeat A bank employee answers your call. You request some information. The employee says to wait a minute and disables his microphone. He can hear you, you cannot hear him. Then it's quiet. For a long time. You start to wonder if he hung up. So you say "hello?" The employee is instructed to echo whatever you say, and replies with "hello". This is the heartbeat to check if there is still a connection. Now with this peculiar bank employee, you need to say first how many words you are going to use before you ask if the employee is still online. So instead of saying "hello", you need to say "one: hello", or "two: hello there". The employee now knows he can reply with repeating those (first) two words, and then can continue to work on your request. This is the heartbeat protocol. The problem - the heartbleed - no check on what is returned OK, you're bored, and you make a joke. You say "thousand: hello". The employee doesn't check that you only said one word (hello), and starts to reply with "hello" plus then the next 999 words that he says, or thinks about, or has in memory, before putting the mic off. This is the bug that causes the problem. Those 999 words are unrelated. Most of it will be useless, remarks about the weather, request for coffee, lunch appointments etc. Some of it can be important information about the bank or other customers. A transport of gold, a customer is going to bring in $1m, the code for entering the bank or the safe, etc. There is no check if there are actually 1000 words to be replied. Plus you can do this request over and over again - the employee won't complain and nobody else is going to notice what is going on. There is one limit. You will only get information from this one bank employee, and only the stuff he talks or thinks about. Other employees are not affected. You cannot see what is on his desk or in his rolodex. (Analogy: only data in memory (RAM) is at risk; data on the harddisk which is not read into memory, and data from other programs and processes is safe.) Doing this you don't know what information you will get, but doing it for a long time over and over again, you will get enough information to finally be able to break in without anyone noticing it. You can enter the bank after hours, open the safe, etc. This is the risk involved. 
The solution - check request and renew codes If the employee would think for a moment he would only reply with one word and then disable the microphone so you cannot hear anymore what he is discussing. By making this check, you will stay connected and know that the employee has not hung up, but will not hear any random info anymore. In effect the employee needs new instructions on what to echo. This is fixed with the update to the latest version of OpenSSL. The bank will have to renew security keys for entering the bank and safe, because it is unknown whether someone has the old codes. | {
"source": [
"https://security.stackexchange.com/questions/55355",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43436/"
]
} |
55,606 | "The fix for this bug is simple: check that the length of the message actually matches the length of the incoming request." Why do we even have the client report the length at all? If we can know the length of the incoming request, can't we just infer the length of the message from that? (This is a programming and protocol design question.) | For TLS with the purpose of liveliness (keep-alive) checks, there's no reason to: Encode a payload size field in the heartbeat request/response header (the length of the payload comes from the record layer rrec.length in OpenSSL code -- you just have to subtract off the fixed HB header size from this), Allow HBs to be variable size -- a small HB size (in the range of ~4-32 bytes) would work perfectly -- just enough for sequence number, Add padding to the payload, OR Perform PMTU discovery (defined below) So the design is flawed and overcomplicated in regards to ordinary TLS. Note TLS is the widely used protocol we really care about, encrypting all HTTPS traffic. In the vulnerable OpenSSL commit all the generated Heartbeat requests have a small fixed payload (18 bytes) and when processing a received HB response, OpenSSL only checks the first two bytes of it which contain the HB sequence number. Source: t1_lib.c (containing all the TLS HB code) when generating a HB (only described in tls1_heartbeat ), it fixes the payload size at 18. Processing a HB response in tls1_process_heartbeat also only does any meaningful processing if the payload is exactly 18. Note processing of a request in TLS is the vulnerable part that undermined HTTPS. Background Before getting to the claimed justification, I have to introduce three concepts: DTLS, PMTU, and PMTU discovery that are all unrelated to liveliness checks, but deal with the other proposed use for the Heartbeat extension. Skip to proposed justification if you are familiar. TLS (encryption on TCP) and DTLS (encryption on UDP) Regular TLS adds encryption on top of TCP. TCP is a transport layer protocol that provides a reliable transport stream, meaning the application receives a reconstructed data stream with all packets presented to the application in the original order once everything is there, even if some had to wait some extra time for packets to be resent. TCP also provides congestion control (if packets are being dropped because of congestion, TCP will adjust the rate packets are sent). All HTTP, HTTPS, SFTP traffic is sent over TCP. Datagram TLS (DTLS) is a newer protocol that adds encryption on top of UDP (and similar datagram protocols like DCCP where an application has full control on how to send packets). These are transport layer protocols that do not provide reliable streams, but send packets directly between client/server applications as controlled by an application. With TCP if a packet is lost it automatically gets resent and delays sending further packets until the missing packets get through. UDP gives packet level control to the application, which is often desirable for real-time communication like two-way video chat. If packets A, B, C, D were sent but packet C was lost, it doesn't make sense to either wait for C to be resent before showing packet D to the user -- causing a lengthy pause. PMTU For DTLS, it is desirable to know the path maximum transmission unit . An MTU for a single link between routers is maximum packet size that can be sent. Different routers and types of links often support different MTUs. 
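As an aside, before finishing the MTU background, here is a minimal sketch of the sanity check the question alludes to ("check that the length of the message actually matches the length of the incoming request"). This is Python-style pseudocode, not OpenSSL's actual C; the constant and function names are illustrative, but the field sizes follow RFC 6520: HEARTBEAT_HEADER_LEN = 1 + 2          # 1-byte message type + 2-byte payload length field
MIN_PADDING_LEN = 16                  # RFC 6520 requires at least 16 bytes of padding

def handle_heartbeat_request(record):
    # Hypothetical safe handler; names are illustrative, not OpenSSL's.
    if len(record) < HEARTBEAT_HEADER_LEN + MIN_PADDING_LEN:
        return None                                   # silently drop malformed requests
    claimed_len = int.from_bytes(record[1:3], "big")  # attacker-controlled length field
    # The check the vulnerable code skipped: the claimed payload must fit inside
    # what was actually received, otherwise the echo reads past the buffer.
    if HEARTBEAT_HEADER_LEN + claimed_len + MIN_PADDING_LEN > len(record):
        return None                                   # drop, do not echo
    payload = record[3:3 + claimed_len]               # echo only bytes that really arrived
    return (b"\x02"                                   # heartbeat_response type
            + claimed_len.to_bytes(2, "big")
            + payload
            + b"\x00" * MIN_PADDING_LEN)              # padding (random bytes in practice) With that aside done, back to the MTU background.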
The Path MTU (the smallest MTU on the path your packets take through the network) will generally not be known beforehand as its a property of the path through the network. If you send datagrams that are larger than the PMTU, they would have to fragment at the smallest MTU point which is undesirable for several reasons (inefficient, fragmented packets may be dropped by firewalls/NAT, its confusing to the application layer, and ipv6 by design will never fragment packets). So in the context of DTLS, the RFC forces the data from your record layer to fit in a single DTLS packet (that is smaller than the PMTU). (With TLS these PMTU issues are handled at the TCP level; not the application layer so you TLS can be agnostic to PMTU). PMTU discovery There are protocols to discover PMTU — specifically packetization layer Path MTU discovery (RFC 4821) . In this context, you probe the network by sending packets of various size (configured to not fragment) and keep track of the upper bound and lower bound of the PMTU depending whether your packets made it through the network or not. This is described in RFC4821. If a probe packet makes it through, you raise the lower bound, if it gets lost you lower the upper bound until the upper/lower bound are close and you have your estimated PMTU, which is used to set upper size on your DTLS packets. Claimed Justification of HB Having Payload Header, padding, having up to 2-byte size field The heartbeats RFC RFC6520 says you can use Heartbeats for path MTU discovery for DTLS: 5.1. Path MTU Discovery DTLS performs path MTU discovery as described in Section 4.1.1.1 of
[RFC6347]. A detailed description of how to perform path MTU
discovery is given in [RFC4821]. The necessary probe packets are the
HeartbeatRequest messages. DTLS applications do need to estimate PMTU. However, this is not done by DTLS, its done by the application using DTLS. Looking at the quoted section of the DTLS RFC Section 4.1.1.1 of RFC6347 it states "In general, DTLS's philosophy is to leave PMTU discovery to the application." It continues to give three caveats for why DTLS has to worry about PMTU (DTLS applications must subtract off the DTLS header to get effective PMTU size for data, DTLS may have to communicate ICMP "Datagram Too Big" back to the application layer, and DTLS handshakes should be smaller than PMTU. Earlier in the DTLS RFC it declares the DTLS record MUST fit in a single datagram smaller than the PMTU, so PMTU discovery/estimation must be done by the application using DTLS. In PMTU discovery it makes sense to have a small field describing the length of the payload, have a large amount of arbitrary padding, and having something that echos back, yup got your request with this size MTU even though I'm only sending you back the sequence number (and for efficiency can drop the padding on the response). Granted it doesn't make sense if you describe the size of the payload to allow the payload to be bigger than about ~4-32 bytes, so payload size could be fixed or described by a one byte field, even if arbitrarily long padding could be concatenated. Analysis of Claim OpenSSL HB implementation and description in the HB RFC, doesn't describe or perform this PMTU discovery protocol. MTU is not present in the code. OpenSSL does provide only one mechanism to generate a HB request in TLS and DTLS, but it is of fixed size (18 byte payload, not configurable). There's no functionality to send a sequence of HBs to probe for PMTU discovery in the OpenSSL code, or detailed description of how HBs are used in a probing process, There's no indication that HBs are configured to not fragment (so they could even be used in this probing manner), If you wanted to use HB to do PMTU discovery, the application writer would have to write all the code themselves for client and server. The payload field even in the context of finding PMTU only needs to be one byte (even if its not fixed) There's no reason for them not to just do the entire probing process to use a HB packet versus any arbitrary type of packet in their client/server applications; e.g., using an ordinary UDP packet. PMTU discovery only makes sense in context of DTLS, so why are these completely unnecessary features present in TLS heartbeats -- applications that do not need to be PMTU aware? At best this was a seriously flawed design (in TLS) incorporating a YAGNI features, that was then coded up badly to fully trust a user provided header field without any sanity testing. At worst, the PMTU sections were just a complicated cover story to allow insertion of vulnerable code that provides some semblance of justification. Searching through the IETF TLS mailing list If you search the IETF TLS mailing list , you can find interesting nuggets. Why is the payload/padding length uint16, and why is there padding if its to be discarded? PMTU discovery . The same asker (Juho Vähä-Herttua) states he would strongly prefer packet verification: read payload length, padding length, and verify that it matches record length (minus header) . Also Simon Josefsson : I have one mild concern with permitting arbitrary payload. What is the
rationale for this? It opens up for a side channel in TLS. It could
also be abused to send non-standardized data. Further, is there any
reason to allow arbitrary sized payload? In my opinion, the
payload_length, payload and padding fields seems unnecessary to me. Michael Tüxen's response is largely inadequate (maybe some feature wants to be added on top to say calculate RTT) and summarizes with "The point here is that for interoperability, it
does not matter what the payload is, it is only important that it is
reflected." Also of note, the reason given for random padding was "[we] randomize ... the data in the heartbeat message to attempt to head of any issues occurring from weak or flawed ciphers. " followed by the question "Are there any papers or cipher documentation discussing how using randomized data in a packet would solve possible future cipher flaws?", followed by a paper on deterministic authenticated encryption. The response is great: Indeed, but this is not a generic encryption mode like CBC or CTR. It
is specifically designed to encrypt random keys, and thus depends on
its randomness. Typical encryption modes are specifically designed to
prevent someone distinguishing a given plaintext encryption from a
random one. Now, if one would like to use a subliminal channel in TLS, the
heartbeat extension now provides an unbounded channel. It's interesting that that comment was never addressed. (Other than someone else commenting " Well that's a whole different issue...."). | {
"source": [
"https://security.stackexchange.com/questions/55606",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/44405/"
]
} |
55,723 | Are there any programming languages that are designed to be robust against hacking? In other words, an application can be hacked due to a broken implementation, even though the design is perfect. I'm looking to reduce the risk of a developer incorrectly implementing a specification. For example Heartbleed would not have happened if the language used could guard against a Buffer Over-Read . SQL Injections might not happen if there was a language enforced way to encode/decode HTML data Sensitive data can be saved to Pagefiles in some languages where low-level controls of securely erasing memory aren't available. Pointer issues/overflows occur more often in C when compared to managed code Numerical rounding errors can occur when using the developer uses the wrong datatype for the wrong data Denial Of Service attacks might be reduced if the app is correctly is multi-threaded Code signing may reduce the threat of runtime security issues ( link , link ) Question Is there a language that addresses many or most of these issues? It's acceptable for the language to be scoped for a particular use-case such as WebApps, Desktop, Mobile, or Server usages. Edit:
A lot of people addressed the buffer-overflow issue, or say that the programmer is responsible for security. I'm just trying to get an idea if there exist languages whose main purpose was to lend itself to security as much as possible and reasonable. That is, do some languages have features that make them clearly more (or less) secure than most other languages? | The Ada language is designed to prevent common programming errors as much as possible and is used in critical systems where a system bug might have catastrophic consequences. A few examples where Ada goes beyond the typical built-in security provided by other modern languages: Integer range type allows specifying an allowed range for an integer. Any value outside of this range will throw an exception (in languages that do not support a range type, a manual check would have to be performed). := for assignment = for equality checks. This avoids the common pitfall in languages that use = for assignment and == for equality of accidentally assigning when an equality check was meant (in Ada, an accidental assignment would not compile). in and out parameters that specify whether a method parameter can be read or written avoids problems with statement group indentation levels (e.g. the recent Apple SSL bug ) due to the use of the end keyword contracts (since Ada 2012, and previously in the SPARK subset) allow methods to specify preconditions and postconditions that must be satisifed There are more examples of how Ada was designed for security provided in the Safe and Secure Booklet (PDF). Of course, many of these issues can be mitigated through proper coding style, code review, unit tests, etc. but having them done at the language level means that you get it for free. It is also worth adding that despite the fact that a language designed for security such as Ada removes many classes of bugs, there is still nothing stopping you from introducing business logic bugs that the language doesn't know anything about. | {
"source": [
"https://security.stackexchange.com/questions/55723",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/30618/"
]
} |
55,924 | After installing a CAcert personal certificate, every time I land on the BBC weather site it asks me to identify myself with a certificate. Why would any non-malicious web site do that unless I've requested to sign in first? The weather report is visible, so it's not like authentication is needed for any content. The certificate question is asked every time, and a single mistype would be enough to give them my certificate. Since I don't want that to happen, how do I tell browsers to never identify on this site with a certificate? "Remember this decision" on Firefox does not account for pressing Cancel. I’m using HTTPS Everywhere . | You shouldn't really be worrying about this, the certificate contains only your public key, which is supposed to be public anyway. The only issue is the privacy concern of giving away the information in your certificate to any site that asks for it. Summary of the issue: The BBC weather page has a request to http://www.live.bbc.co.uk . HTTPS Everywhere is changing the request to httpS://www.live.bbc.co.uk . The HTTP server at www.live.bbc.co.uk is configured to ask for a client certificate for secure connections. It's likely that BBC just want to identify their employees in order to show special functionalists in the page (Inline editing of the news articles, corrections, etc.) Remember: By using HTTPS Everywhere, you're overriding the default behavior of the sites you're visiting. The problem you're having is the result of that. The quickest solution is to disable the HTTPS Everywhere rule/option for BBC. How did I find this? I dug in Wireshark a bit when making a request to the weather page, and looked for who's sending me the Certificate Request TLS message. Voilà! (Credit to Daniel Kahn Gillmor for the idea ) Why wasn't this message popping before? Because before you configured your client certificate, Firefox had thought you're not interested in client authentication thing (After all, you had no certificates installed, so no point of giving you the option to choose one). Once you added one certificate, Firefox started thinking "Maybe my human does have a certificate for this site". The certificate can be valid for any number of domain if not explicitly specified. (Note: I'm not sure if it can even be explicitly specified) How can I make it disappear? You have a couple of options here. You can either disable the BBC rule in HTTPS Everywhere because t's only partially supported anyway (BBC doesn't officially have HTTPS enabled for normal browsing). Another solution would be to configure your browser to automatically make the selection for you. From your browser's settings/options/configurations. Why would any non-malicious web site do that unless I've requested to sign in first? Convenience. Have a valid certificate? You're automatically logged in once you visit the site. | {
"source": [
"https://security.stackexchange.com/questions/55924",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1220/"
]
} |
55,991 | I have moved this question from stackoverflow to this place. I know it may be a question about 'opinion' but I am not looking for a private opinion but a source of the final decision to keep it this way. I have been taught that nobody tries to open a door when one does not know that the door even exist. The best defense would be then to hide a door. It could be easily seen in the old war movies - nobody would keep a hideout in the light. It was always covered with something suggesting that 'there is nothing interesting there.' I would assume that in cryptography that would work the same way.
Why would then hash generated by MD5 started from $1$, and telling what this is a hash in the first place, and then what kind of hash it is (MD5)? Now, I see that sha512 does exactly the same thing. Isn't it a weakness by itself? Is there any particular reason why we would have it done this way? The main question the is: Should I scramble my hash before storing it to hide this from a potential enemy? If there is no need for that then why? To avoid answers that suggest that obscurity is not security, I would propose this picture. It is WWII. You have just received a hint that SS is coming to your house suspecting that you are hiding partisans, and this is true. They have no time to escape. You have two choices where you could hide them - in the best in the world safe, or in the hidden hole underneath the floor, hidden so well that even your parents would did not suspect that it is there. What is your proposal? Would you convince yourself that the best safe is the best choice? If I know there is a treasure hidden on an island then I would like to know which island it is or I will not start searching. I am still not convinced. Chris Jester-Young so far gave me something to think about when suggesting that there can be more algorithms generating the same hash from different data. | First, there's Kerckhoffs's principle which is always desirable: A cryptosystem should be secure even if everything about the system, except the key, is public knowledge. where in this case the password is the key. So its not a goal to keep the cryptosystem secret. Second, you are wrong about those being md5 or sha512 hashes; the values stored in your /etc/shadow are md5crypt or sha512crypt, which involves a strengthening procedure (many rounds of a md5 or sha512 hash). Now if your four choices are MD5crypt, sha256crypt, sha512crypt, and bcrypt (the most popular choices in linux systems), here are four hashes all generated with $saltsalt$ (or equivalent) as a salt and hashing the password not my real password : >>> import crypt
>>> crypt.crypt('not my real password','$1$saltsalt')
'$1$saltsalt$4iXfpnrgHRXkrDbPymCE4/'
>>> crypt.crypt('not my real password','$5$saltsalt')
'$5$saltsalt$E0bMpsLR71z8LIvd6p2tD4LZ984JxyD7B9lPLhq4vY7'
>>> crypt.crypt('not my real password','$6$saltsalt')
'$6$saltsalt$KnqiStSM0GULvZdkTBbiPUhoHemQ7Q06YnvuJ0PWWZbjzx3m0RCc/hCfq54Ro3fOwaJdEAliX9igT9DD2oN1u/'
>>> import bcrypt
>>> bcrypt.hashpw('not my real password', "$2a$12$saltsaltsaltsaltsalt..")
'$2a$12$saltsaltsaltsaltsalt..FW/kWpMA84AQoIE.Qg1Tk5.FKGpxBNC' Even without the annotation, its fairly straightforward to figure out which scheme they each use (md5crypt, sha256crypt, sha512crypt, and bcrypt are 34,55,98, and 60 chars long respectively (in base64 encoding with annotation and salt). So unless you suggest truncating the hash, or altering the hashes properties the annotation for consistency doesn't lose any security. It also gives you a method to gracefully update user passwords. If you decide that md5crypt is no longer secure, you can switch users' hashes to bcrypt on next login (and then after a period of time deactivate all accounts left on md5crypt). Or if your algorithm like bcrypt (when it was $2$) needs to be updated, because of a flaw in design you can readily identify flawed schemes when the fixed scheme went to $2a$. Even worse, you could try saying, I'm going to modify sha512 with new constants and round keys. That would make it superhard to break -- right? No, it just makes it super hard for you to know you didn't accidentally introduce a major vulnerability. If they can get at your /etc/shadow, they probably can also get at the library used to log you in and with time could reverse engineer your hashing scheme and this will be MUCH MUCH simpler than breaking a strong password. Again, the expected time to brute force a very strong passphrase stored in sha256 hash is O(2^256 ), e.g., a billion computers doing a billion sha256crypts per nanosecond (each involving ~5000 rounds of sha256), would take 300000000000000000000000 (3 x 10^23) times the the age of the universe to break it. And with sha512crypt, if each of the ~10^80 atoms in the observable universe each did a billion sha512crypts every nanosecond it would still take 10^38 times the age of the universe. (This assumes you have a 256-bit and 512-bit or higher entropy passphrase). | {
"source": [
"https://security.stackexchange.com/questions/55991",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/44832/"
]
} |
56,022 | According to news reports , arrests have already been made in relation to the Heartbleed bug. It sounds like this person managed to gain access to the website's database by capturing the credentials the app used to access the database. This person then apparently used those credentials to access the database. My question is, what part is illegal here? He was charged with "one count of unauthorized use of a computer and one count of mischief in relation to data." So, is it illegal to send a heartbeat request to a server knowing that the request will result in data leakage? If that data contained nothing but random bits, would it still be illegal or must it contain sensitive data to become illegal? Say passwords or other such info was present, does it then become illegal to have done it? Or does it become illegal to then take those credentials and log into a public-facing admin interface to the database? What I'm confused about is where is the line between illegal hacking and just using information which is publicly visible? If a website leaves its DB credentials on its homepage with a link to a phpMyAdmin frontend to that DB, is it illegal to log in and look around? At risk of asking multiple and broad questions which will lead to this question being closed, are there any rules of thumb to abide by when curious snooping around to see how something works crosses the line to become illegal? | The fundamental question here is authorization , not access . If you break into your neighbor's house, clearly you are in violation of the law. But if he lets you in, then you are not. So what if you have a key? If he gave you the key along with permission to enter (to feed his dog while he's away), then you have authorization to enter. No trespass there. On the other hand, if you find the key under his doormat, that does not imply authorization , even though it grants you access . You can get in easily enough, but it's trespassing. Now, say you go door-to-door checking to see if anyone left a key under their doormat. You just go inside the vulnerable houses and have a look around; you don't steal anything, you're just looking. That's what's happening here with the Heartbleed problem. Someone is using their knowledge of a vulnerability (e.g. key sometimes appears under the doormat) to gain access, but they are not authorized to have access. Yes, the keys they retrieve are accessible to anyone who understands the vulnerability, just as a key under a doormat is likewise technically accessible to the public. But that doesn't make it legal to use it. | {
"source": [
"https://security.stackexchange.com/questions/56022",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/44864/"
]
} |
56,069 | I just read about Indosat trying to take over the Internet by mistake . I read on a Polish infosec-related website that most of their announced routes failed to propagate, but some of them reached the whole internet. This made me wonder - what security mechanisms protect the BGP protocol and why did they fail for these few particular routes, hurting Akamai and others? | A word on BGP BGP is the routing protocol that makes the internet work. The current revision is BGP v4 , and has been in use since 1995. Internet Service Providers (ISPs) are in control of one or many networks, and they use BGP to advertise their networks to their peers by exchanging routing information (internet routes) about the network they control. Those networks are independent routing domains called Autonomous Systems (AS) . Each AS is attributed a unique identifier (AS Number, or ASN), and there are about 40'000 ASN allocated today. The process of routing between those ASes is called interdomain routing, and BGP is the protocol used to achieve this. Each BGP router contains a Routing Information Base (RIB) which stores the routing information. When a router decides to update its RIB, it will subsequently propagate this information to all of the other neighboring BGP routers to which it is connected by saying " Hey, I can route to network 1.2.3.0/12 via AS numbers X,Y,Z ". They will in turn decide whether to update their own tables and propagate the information further. Note that there is no automatic peers discovery in BGP, instead peers have to be manually configured to exchange routing information. BGP exposure to attacks The BGP protocol itself doesn't provide many security mechanisms. BGP design did not include specific protections against attacks or deliberate errors coming from peers or outsiders that could cause disruptions of routing behavior. Examples of such attacks include: As stated in BGP-VULN (RFC4272 ), there are no mechanisms internal to BGP that protect against attacks that modify, delete, forge, or replay data , any of which has the potential to disrupt overall network routing behavior. As a TCP/IP protocol, BGP is subject to all TCP/IP attacks, e.g., IP spoofing, session stealing, etc. Any outsider can inject believable BGP messages into the communication between BGP peers, and thereby inject bogus routing information; When a router advertise new routing information, BGP does not ensure that it uses the AS number it has been allocated, which means a BGP router can advertise routes with any AS number; An AS can advertise a prefix from address space unassigned by or belonging to another AS. This is called prefix hijacking and it can be used to perform man-in-the-middle attack which was initially demonstrated at DefCon 16 by Alex Pilosov and Anton "Tony" Kapela; BGP does not provide a shared, global view of correct routing
information that would make it much easier to detect invalid or
malicious routes. In short, BGP is highly exposed to false route advertisements and there is no plan today to remediate this situation. BGP Security protections So it looks pretty bad on a security standpoint. But on the other hand, there are also some positive security aspects in BGP: BGP peering associations tend to be long-lived and static. This means
that those associations between peers and route advertisement can be
monitored efficiently, and allows for a quick reaction from the
various stakeholders in case of an internet hijacking. There are a
lot of services providing BGP monitoring (some free and
open-source and commercial products); BGP routers can implement route filtering, which allows them to perform
inbound announcement selection, although there is no largely deployed standard for that and this is up to each ISP to decide if and how they want to implement it; BGP implements a TCP MD5 signature option (a MAC , as defined in RFC 2385 ) to protect BGP sessions against TCP Reset (RST) Dos
attacks. It is currently supported by most router manufacturers (e.g. Cisco ) and open-source OSes (e.g. FreeBSD ). But this
protection only covers a limited number of attacks, and relies on weak
technologies; The lack of cumbersome security protection measures also allows quick
reactions against DDoS and malware spreading via sinkholes and
blackhole routing . Future improvements Recent efforts within the standards bodies and in the research community have attempted to provide new architectures for BGP security. This includes: Using Public Key Infrastructure to share public cryptographic keys
between peers and allow routers to establish the integrity of BGP
announcements on a global scale, as well as attesting that an entity is authorized to advertise a particular resource; Using IPsec to secure the BGP session and messages passed between peers; Establishing an independent authority for validating interdomain
route information. This third-party authority would be used by BGP
routers to verify the announcements they receive. A major update to BGP would be required to remediate those issues and offer an adequate level of protection against sophisticated BGP attacks. RFC 4278 , a maturity study of BGP security mechanisms , considers the marginal benefit of such schemes in this situation would be low, and not worth the transition effort. Additional resources you might find interesting: Implementing the TCP authentication option TCP-AO and The TCP Authentication Option RFC What is the purpose of BGP TTL security? BGP Looking Glass : As per their website: BGP Looking Glass servers are servers on Internet which can be
accessed remotely for the purpose of viewing routing info.
Essentially, the server acts as a limited, read-only portal to routers
of whatever organization is running the Looking Glass server.
Typically, publicly accessible looking glass servers are run by ISPs
or NOCs. | {
"source": [
"https://security.stackexchange.com/questions/56069",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15648/"
]
} |
56,078 | I know that some of PHP's random functions are insecure due to them not being completely random and are considered a bad practice. My question is how an attacker go about finding that the developer used an insecure function to create a token, such as a forgotten password token or CSRF token? | A word on BGP BGP is the routing protocol that makes the internet work. The current revision is BGP v4 , and has been in use since 1995. Internet Service Providers (ISPs) are in control of one or many networks, and they use BGP to advertise their networks to their peers by exchanging routing information (internet routes) about the network they control. Those networks are independent routing domains called Autonomous Systems (AS) . Each AS is attributed a unique identifier (AS Number, or ASN), and there are about 40'000 ASN allocated today. The process of routing between those ASes is called interdomain routing, and BGP is the protocol used to achieve this. Each BGP router contains a Routing Information Base (RIB) which stores the routing information. When a router decides to update its RIB, it will subsequently propagate this information to all of the other neighboring BGP routers to which it is connected by saying " Hey, I can route to network 1.2.3.0/12 via AS numbers X,Y,Z ". They will in turn decide whether to update their own tables and propagate the information further. Note that there is no automatic peers discovery in BGP, instead peers have to be manually configured to exchange routing information. BGP exposure to attacks The BGP protocol itself doesn't provide many security mechanisms. BGP design did not include specific protections against attacks or deliberate errors coming from peers or outsiders that could cause disruptions of routing behavior. Examples of such attacks include: As stated in BGP-VULN (RFC4272 ), there are no mechanisms internal to BGP that protect against attacks that modify, delete, forge, or replay data , any of which has the potential to disrupt overall network routing behavior. As a TCP/IP protocol, BGP is subject to all TCP/IP attacks, e.g., IP spoofing, session stealing, etc. Any outsider can inject believable BGP messages into the communication between BGP peers, and thereby inject bogus routing information; When a router advertise new routing information, BGP does not ensure that it uses the AS number it has been allocated, which means a BGP router can advertise routes with any AS number; An AS can advertise a prefix from address space unassigned by or belonging to another AS. This is called prefix hijacking and it can be used to perform man-in-the-middle attack which was initially demonstrated at DefCon 16 by Alex Pilosov and Anton "Tony" Kapela; BGP does not provide a shared, global view of correct routing
information that would make it much easier to detect invalid or
malicious routes. In short, BGP is highly exposed to false route advertisements and there is no plan today to remediate this situation. BGP Security protections So it looks pretty bad on a security standpoint. But on the other hand, there are also some positive security aspects in BGP: BGP peering associations tend to be long-lived and static. This means
that those associations between peers and route advertisement can be
monitored efficiently, and allows for a quick reaction from the
various stakeholders in case of an internet hijacking. There are a
lot of services providing BGP monitoring (some free and
open-source and commercial products); BGP routers can implement route filtering, which allows them to perform
inbound announcement selection, although there is no largely deployed standard for that and this is up to each ISP to decide if and how they want to implement it; BGP implements a TCP MD5 signature option (a MAC , as defined in RFC 2385 ) to protect BGP sessions against TCP Reset (RST) Dos
attacks. It is currently supported by most router manufacturers (e.g. Cisco ) and open-source OSes (e.g. FreeBSD ). But this
protection only covers a limited number of attacks, and relies on weak
technologies; The lack of cumbersome security protection measures also allows quick
reactions against DDoS and malware spreading via sinkholes and
blackhole routing . Future improvements Recent efforts within the standards bodies and in the research community have attempted to provide new architectures for BGP security. This includes: Using Public Key Infrastructure to share public cryptographic keys
between peers and allow routers to establish the integrity of BGP
announcements on a global scale, as well as attesting that an entity is authorized to advertise a particular resource; Using IPsec to secure the BGP session and messages passed between peers; Establishing an independent authority for validating interdomain
route information. This third-party authority would be used by BGP
routers to verify the announcements they receive. A major update to BGP would be required to remediate those issues and offer an adequate level of protection against sophisticated BGP attacks. RFC 4278 , a maturity study of BGP security mechanisms , considers the marginal benefit of such schemes in this situation would be low, and not worth the transition effort. Additional resources you might find interesting: Implementing the TCP authentication option TCP-AO and The TCP Authentication Option RFC What is the purpose of BGP TTL security? BGP Looking Glass : As per their website: BGP Looking Glass servers are servers on Internet which can be
accessed remotely for the purpose of viewing routing info.
Essentially, the server acts as a limited, read-only portal to routers
of whatever organization is running the Looking Glass server.
Typically, publicly accessible looking glass servers are run by ISPs
or NOCs. | {
"source": [
"https://security.stackexchange.com/questions/56078",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36337/"
]
} |
56,268 | What are the benefits of storing known_hosts in a hashed form? From what I read, it is supposed to protect the list of servers I am connecting to, presumably in a scenario where my account has been compromised (and known_hosts file stolen) If my account were indeed to be compromised, having known_hosts hashed would be very little consolation. An attacker could see from my bash history to which servers I am connecting. And also from my .ssh/config where all my servers are listed. Are there any benefits that I am missing in my description here? | I don't think you are missing much. The only change is that if a machine is compromised, the idea is to minimize how much usable information is given to an attacker. In the known_hosts file, more information is not necessary to include (computing a few hundred HMACs is not onerous work), unlike in ~/.ssh/config where it needs to be included on the Address line if you wish to connect via your alias (hashing wouldn't work) and in your command line history - if you choose to keep one. Presumably you could have a very large known_hosts (e.g., if you sync it with another computer when you setup the account), but say not use .ssh/config and not keep a command line history or have never connected to most machines in the commandline history. In those situations, hashing the IP addresses used in your known_hosts could lessen exposure in the event of a compromise. Furthermore, HashKnownHosts is a configurable option, and the default is to not hash (probably for reasons you specified -- it doesn't help much). See man ssh_config : HashKnownHosts Indicates that ssh(1) should hash host names and addresses when they are added to ~/.ssh/known_hosts.
These hashed names may be used normally by ssh(1) and sshd(8), but they do not reveal identifying information should the file's contents be disclosed. The default is “no”. Note that existing names and addresses in known hosts files will not be converted automatically, but may be manually hashed using ssh-keygen(1). Use of this option may break facilities such as tab-completion that rely on being able to read unhashed host names from ~/.ssh/known_hosts. Note the format of a hashed known_hosts line (example taken from here - my current configuration is not to hash) for an entry for 192.168.1.61 : |1|F1E1KeoE/eEWhi10WpGv4OdiO6Y=|3988QV0VE8wmZL7suNrYQLITLCg= ssh-rsa ... where the first part F1E1KeoE/eEWhi10WpGv4OdiO6Y= is a random salt - that acts as a key for the HMAC-SHA1 to hash 192.168.1.61. You can verify in the command line with (BSD / Mac OS X): #### key=`echo F1E1KeoE/eEWhi10WpGv4OdiO6Y= | base64 -D | xxd -p`
#### echo -n "192.168.1.61" | openssl sha1 -mac HMAC -macopt hexkey:$key|awk '{print $2}' | xxd -r -p|base64
3988QV0VE8wmZL7suNrYQLITLCg= or on GNU/linux with: #### key=`echo F1E1KeoE/eEWhi10WpGv4OdiO6Y= | base64 -d | xxd -p`
#### echo -n "192.168.1.61" | openssl sha1 -mac HMAC -macopt hexkey:$key|awk '{print $2}' | xxd -r -p|base64
3988QV0VE8wmZL7suNrYQLITLCg= where we just decoded the salt and used it as a key in a sha1 HMAC, and then re-encode the hash in base64. Just specifying as another answer originally presumed that the HMAC may have used the user's private ssh key to compute hash-based message authentication code, but this is not the case. | {
"source": [
"https://security.stackexchange.com/questions/56268",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28654/"
]
} |
56,271 | example4.php?id=id=2-1 I came across the above injection and I can not figure out why(how) does this work and what's interesting is it returns all the records from the database. Anything guys? educate me Thanks | I don't think you are missing much. The only change is that if a machine is compromised, the idea is to minimize how much usable information is given to an attacker. In the known_hosts file, more information is not necessary to include (computing a few hundred HMACs is not onerous work), unlike in ~/.ssh/config where it needs to be included on the Address line if you wish to connect via your alias (hashing wouldn't work) and in your command line history - if you choose to keep one. Presumably you could have a very large known_hosts (e.g., if you sync it with another computer when you setup the account), but say not use .ssh/config and not keep a command line history or have never connected to most machines in the commandline history. In those situations, hashing the IP addresses used in your known_hosts could lessen exposure in the event of a compromise. Furthermore, HashKnownHosts is a configurable option, and the default is to not hash (probably for reasons you specified -- it doesn't help much). See man ssh_config : HashKnownHosts Indicates that ssh(1) should hash host names and addresses when they are added to ~/.ssh/known_hosts.
These hashed names may be used normally by ssh(1) and sshd(8), but they do not reveal identifying information should the file's contents be disclosed. The default is “no”. Note that existing names and addresses in known hosts files will not be converted automatically, but may be manually hashed using ssh-keygen(1). Use of this option may break facilities such as tab-completion that rely on being able to read unhashed host names from ~/.ssh/known_hosts. Note the format of a hashed known_hosts line (example taken from here - my current configuration is not to hash) for an entry for 192.168.1.61 : |1|F1E1KeoE/eEWhi10WpGv4OdiO6Y=|3988QV0VE8wmZL7suNrYQLITLCg= ssh-rsa ... where the first part F1E1KeoE/eEWhi10WpGv4OdiO6Y= is a random salt - that acts as a key for the HMAC-SHA1 to hash 192.168.1.61. You can verify in the command line with (BSD / Mac OS X): #### key=`echo F1E1KeoE/eEWhi10WpGv4OdiO6Y= | base64 -D | xxd -p`
#### echo -n "192.168.1.61" | openssl sha1 -mac HMAC -macopt hexkey:$key|awk '{print $2}' | xxd -r -p|base64
3988QV0VE8wmZL7suNrYQLITLCg= or on GNU/linux with: #### key=`echo F1E1KeoE/eEWhi10WpGv4OdiO6Y= | base64 -d | xxd -p`
#### echo -n "192.168.1.61" | openssl sha1 -mac HMAC -macopt hexkey:$key|awk '{print $2}' | xxd -r -p|base64
3988QV0VE8wmZL7suNrYQLITLCg= where we just decoded the salt and used it as a key in a sha1 HMAC, and then re-encode the hash in base64. Just specifying as another answer originally presumed that the HMAC may have used the user's private ssh key to compute hash-based message authentication code, but this is not the case. | {
"source": [
"https://security.stackexchange.com/questions/56271",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40221/"
]
} |
56,290 | I received an email to the email address listed on our website (it's a generic [email protected] to help "weed out" spam, whereas each employee has something like [email protected] to which I manually forward any "real" email that comes in through the website.) I received the following email (with actual names replaced with placeholders): Dear Sir, We are the department of Asian Domain Registration Service in China. I have something to confirm with you. We formally received an application on April 14, 2014 that a company which self-styled "Some Other Corp. Ltd." . were applying to register some "ourcompany" Asian countries top-level domain names. Now we are handling this registration, and after our initial checking, we found the name were similar to your company's, so we need to check with you whether your company has authorized that company to register these names. If you authorized this, we will finish the registration at once. If you did not authorize, please let us know within 7 workdays, so that we will handle this issue better. Best Regards, Some Fake Person The email is "plain" (no attachments, no embedded images, just some basic HTML), the email address is "normal" and not consisting of random characters. Is this a common email scam; and if so, what is their motive? | Yes, this email is a scam. Ignore it! I work at a major web hosting firm, and our customers receive these emails on a frequent basis. There are a number of characteristics that are visible from this perspective which confirm that they are a scam: The emails are never sent by a recognizable, reputable domain registrar. Most of them use generic names, such as "Asian Domain Registration Service" in your email, "China domain registration center", or the like. The emails never have senders, headers, or signatures which explicitly link them with a registrar, and there is usually not even any accredited registrar with the right name. The domain registrations are never being made by a recognizable company or organization. You've obscured the relevant name as "Some Other Corp, Ltd" here, but the names are often generic ("Foo Trading Co") or incomprehensible ("FANGSHI Co"). Attempts to identify or contact them are never successful (or find only unrelated companies). They are frequently sent in relation to a domain name which would be meaningless under an Asian top-level domain. Many of them involve domain names containing names of people or locations — for example, these types of emails might claim that a Chinese trading company is attempting to register the domain "johndoe-hardware.hk" or "newyork-blahblah.asia". There is no apparent logic to these registrations. Despite supposedly coming from many different registrars, these emails always follow a very similar format. There are multiple templates, so the wording can vary, but the formula is always precisely the same. In particular, the domain names being registered are ALWAYS only under "Asian" TLDs (typically .asia , .cn , .hk , .tw , and .in ), never under any other TLDs. Additionally, many of these emails also claim that the bundle includes an "Internet keyword" or "Internet trademark", which doesn't even exist. We have advised a very large number of our customers to disregard these emails, and not once has any of the domains mentioned actually been purchased by the individual or company that was supposedly attempting to acquire them. None of the people who have written about receiving these emails online has had this outcome, either. 
Everything points to these emails being a widespread scam! Further reading: Canadian Trade Commissioner Service: Domain name registration in China SCAMWatch (Australian Competition & Consumer Commission): Think carefully about unsolicited offers to register domain names overseas I can only post two links, but you'll find a bunch more if you do a Google search for "Chinese domain scam" or something of the sort. It's widely attested. | {
"source": [
"https://security.stackexchange.com/questions/56290",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/38377/"
]
} |
56,296 | To clarify, my question isn't on how to protect myself from phishing. What I'm curious about is how exactly software can identify whether or not a website is designed for phishing, ignoring word identifiers/scanners to looking for spam/phishing sounding material. | Yes, this email is a scam. Ignore it! I work at a major web hosting firm, and our customers receive these emails on a frequent basis. There are a number of characteristics that are visible from this perspective which confirm that they are a scam: The emails are never sent by a recognizable, reputable domain registrar. Most of them use generic names, such as "Asian Domain Registration Service" in your email, "China domain registration center", or the like. The emails never have senders, headers, or signatures which explicitly link them with a registrar, and there is usually not even any accredited registrar with the right name. The domain registrations are never being made by a recognizable company or organization. You've obscured the relevant name as "Some Other Corp, Ltd" here, but the names are often generic ("Foo Trading Co") or incomprehensible ("FANGSHI Co"). Attempts to identify or contact them are never successful (or find only unrelated companies). They are frequently sent in relation to a domain name which would be meaningless under an Asian top-level domain. Many of them involve domain names containing names of people or locations — for example, these types of emails might claim that a Chinese trading company is attempting to register the domain "johndoe-hardware.hk" or "newyork-blahblah.asia". There is no apparent logic to these registrations. Despite supposedly coming from many different registrars, these emails always follow a very similar format. There are multiple templates, so the wording can vary, but the formula is always precisely the same. In particular, the domain names being registered are ALWAYS only under "Asian" TLDs (typically .asia , .cn , .hk , .tw , and .in ), never under any other TLDs. Additionally, many of these emails also claim that the bundle includes an "Internet keyword" or "Internet trademark", which doesn't even exist. We have advised a very large number of our customers to disregard these emails, and not once has any of the domains mentioned actually been purchased by the individual or company that was supposedly attempting to acquire them. None of the people who have written about receiving these emails online has had this outcome, either. Everything points to these emails being a widespread scam! Further reading: Canadian Trade Commissioner Service: Domain name registration in China SCAMWatch (Australian Competition & Consumer Commission): Think carefully about unsolicited offers to register domain names overseas I can only post two links, but you'll find a bunch more if you do a Google search for "Chinese domain scam" or something of the sort. It's widely attested. | {
"source": [
"https://security.stackexchange.com/questions/56296",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45127/"
]
} |
56,307 | I often use cat on the console to view the contents of files, and every now and then I accidentally cat a binary file which basically produces gibberish and system beeps. However today I've encountered a situation where the output from the cat utility got redirected to the console input so I got stuff like this: -bash: 2c: command not found
-bash: 1: command not found
-bash: 1: command not found
-bash: 112: command not found
-bash: 112: command not found
-bash: 1: command not found
-bash: 0x1: command not found
-bash: 2c1: command not found
-bash: 2c: command not found
-bash: 1: command not found
-bash: 1: command not found
-bash: 112: command not found
-bash: 112: command not found
-bash: 1: command not found
-bash: 0x1: command not found
-bash: 2c1: command not found
-bash: 2c1: command not found
-bash: 2c1: command not found
-bash: 2c1: command not found ... ... This got me thinking that a specifically crafted binary file could create quite a mess on the system?!... Now I do realize using cat recklessly like this is not particularly smart, but I would actually like to know what is going on here. What characters produce the effect of suddenly dumping the content on standard input... Note: I was in Mac OS X terminal while doing this, I've actually called diff -a to compare two firmware rom images and print the differences out(I thought there would be just a few bytes of differences but there where almost 8 MB of differences printed to the screen) Later I tried, on purpose, to cat one of the files and got the same effect like I've pasted here. - UPDATE - - UPDATE - - UPDATE - I've posted this here late at night yesterday and this morning I tried to replicate the behaviour and I can not. Unfortunately I can not be sure if some escape characters caused the gibberish from the binary to be executed on the console automatically or if at the end of the cat I just got a bunch of characters left(as If I've pasted them) on the command line and I've probably pressed enter accidentally to get a clear line... When I try to cat the file in question now I get this when it completes(scroll right to see): D?k(Fli9p?s?HT?78=!g??Ès3?&é?? =??7??K?̓Kü<ö????z(;???????j??>??ö?Ivans-MacBook-Pro:FI9826W-2.11.1.5-20140121 NA ivankovacevic$ 1;2c1;2c1;2;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c;1;1;112;112;1;0x1;2c1;2c;1;1;112;112;1;0x1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c My actual prompt is: Ivans-MacBook-Pro:FI9826W-2.11.1.5-20140121 NA ivankovacevic$ Where: FI9826W-2.11.1.5-20140121 NA is the current working dir.
As you see it was camouflaged in the binary gibberish and I might have pressed enter reflexively or something. This in itself is a bit wrong of cat because obviously my prompt might have been even better "camouflaged." But it is less serious than I initially thought. Although I'm still not 100% sure that it did not execute automatically last night when I tried, because there was also another peculiar thing that happened last night, before this. I've called cat on another very similar file that caused Terminal app to quit with: Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x00007fcb9a3ffffa Now I'm thinking that maybe a combination of these two events have caused the auto execution of gibberish on the console. But I can not replicate that behaviour again. The files in question are firmwares for a Foscam IP camera, here are the links: International site: http://foscam.com/Private/ProductFiles/FI9826W-2.11.1.5-20140120.zip And then the file inside: FI9826W_app_ver1.11.0.40_OneToAll.bin calling cat on that one will cause Terminal to quit. US site: http://foscam.us/downloads/FI9826W-2.11.1.5-20140121%20NA.zip and then the file: FI9826W_app_ver1.11.0.40_OneToAll_A.bin cat-ing that one will cause that paste of 1;2c1;2c1;2;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c.... characters on the command line | Yes , it's a potential risk, see CVE-2003-0063 , or CVE-2008-2383 or CVE-2010-2713 , or CVE-2012-3515 or OSVDB 3881 , or CVE-2003-0020 or any of the similar ones listed here ... Some more in comments below also. Update it's not just a potential risk, it's a real risk .
rxvt-unicode (versions 2.7—9.19, patched in 9.20) allows read/write access to X window properties , this can enable user-assisted execution of arbitrary commands, this issue has been assigned CVE-2014-3121 , more details here https://bugzilla.redhat.com/show_bug.cgi?id=1093287 . More recently (October 2019) iTerm2 versions up to v3.3.5 were found to have the same class of problem : display of malicious content can enable the integrated tmux and permit command execution, see CVE-2019-9535 . This topic also has good coverage here: https://unix.stackexchange.com/questions/73713/how-safe-is-it-to-cat-an-arbitrary-file and a thorough analysis of the underlying problem from Gilles here: https://unix.stackexchange.com/questions/15101/how-to-avoid-escape-sequence-attacks-in-terminals . Explanation What you are observing is a side-effect of how certain escape sequences behave: some of them stuff characters (usually also containing escape sequences) directly into the terminal input buffer . All in the name of backward compatibility, of course. The standard xterm escapes which are described using the term "Report <something>" do this. This behaviour permits programs to query/set terminal (or other) properties " in band " rather than via ioctls or some other API. As if that wasn't bad enough, some such sequences can contain a newline , which means that whatever is reading from the terminal (your shell) will see what appears to be a complete user command. Here's a neat way to use this, with bash's read to print an escape (as a prompt) then immediately read and split the reply into variables: IFS=';' read -sdt -p $'\e[18t' csi8 rows cols
echo rows=$rows cols=$cols These sequences can vary by terminal, but for rxvt and derived, the graphics query escape includes a newline (example using bash and $'' strings, see doc/rxvtRef.txt in the source): $ echo $'\eGQ'
$ 0
bash: 0: command not found This escape sends \033G0\n into the terminal input buffer (or digit 1 instead of 0 if you have a graphics-capable rxvt ). So, combine this escape with other sequences which behave similarly: echo $'\x05' $'\e[>c' $'\e[6n' $'\e[x' $'\eGQ' for me this causes 11 attempts to run various commands: 1 , 2c82 , 20710 (my rxvt version string), 0c5 , 3R (5 and 3 were the cursor coords), 112 and 0x0 . Exploitable? With rxvt and most recent terminal emulators you should "only" be able to create a limited set of mostly numeric sequences. In old terminal emulators it was possible (some CVEs listed above) to access the clipboard, the window icon and titlebar text to construct more malicious strings for invocation (one current slight exception is if you set the answerbackString X resource string, but that cannot be directly set using this method). The flaw then is allowing arbitrary read and write access to something that passes for state or storage within escape sequences that stuff data into the input buffer. rxvt requires compile time changes to activate, but urxvt helpfully has an -insecure command line option that enables some of the more exciting features: $ echo $'\e]2;;uptime;\x07' $'\e[21;;t' $'\eGQ'
bash: l: command not found
17:59:41 up 1448 days, 4:13, 16 users, load average: 0.49, 0.52, 0.48
bash: 0: command not found The three sequences are: \e]2;...\x07 set window title; \e[21;;t query window title, place in input buffer; \eGQ query graphics capability, which adds \n to input buffer. Again, depending on terminal, other features such as font size, colors, terminal size, character set, alternate screen buffers and more may be accessible though escapes. Unintended modification of those is at least an inconvenience, if not an outright security problem. Current versions of xterm restrict potentially problematic features via "Allow*" resources. CVE-2014-3121 Prior to v9.20, urxvt did not also guard read and write access to X properties ( mostly used by window managers ). Write read access (or more precisely, access to sequences which echo potentially arbitrary strings) now requires the -insecure option. $ echo $'\e]3;xyzzy=uptime;date +%s;\x07'
$ xprop -id $WINDOWID xyzzy
xyzzy(UTF8_STRING) = 0x75, 0x70, 0x74, 0x69, 0x6d, 0x65, 0x3b, 0x64, 0x61, 0x74, \
0x65, 0x20, 0x2b, 0x25, 0x73, 0x3b This can be trivially used to stuff arbitrary strings into the terminal input buffer. When the escape sequence to query a property is invoked (along with helpful \eGQ which adds a newline): $ echo $'\e]3;?xyzzy\x07' $'\eGQ'
$ 3;uptime;date +%s;0
bash: 3: command not found
17:23:56 up 1474 days, 6:47, 14 users, load average: 1.02, 1.20, 1.17
1400603036
bash: 0: command not found Multiple commands, preserving whitespace and shell metacharacters.
This can be exploited in a variety of ways, starting with cat-ing an untrusted binary of course, further ideas in H.D. Moore's short paper (2003). Followup For the escape sequence you ask about: 1;112;112;1;0x1;2 This is: Request Terminal Parameters (DECREQTPARM) and Send Device Attributes : $ echo $'\e[x' $'\e[0c'
;1;1;112;112;1;0x1;2c The second one ( \e[0c ) is the same as ^E (using rxvt ). There are some escape sequences in there too. The full sequence written for each is, respectively: \e[1;1;1;112;112;1;0x
\e[?1;2c | {
"source": [
"https://security.stackexchange.com/questions/56307",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32489/"
]
} |
56,371 | How can I create a password, which when directly hashed (without any salt) with md5 will return a string containing the 8 characters "SALT ME!". The hope is that a naive developer browsing through his user database will see the "hash", realize the insecurity of his application, and eventually make the world a better place for everyone. Md5 outputs 128 bits, which is 16 bytes. If I had a 16-byte message, getting the original plaintext password would be equivalent to a pre-image, which to my knowledge is practically impossible. However, I'm only looking for 8 specific bytes in my hash. Is obtaining such a password feasible in day-timeframes on a typical computer? If so, how can I compute such a password? | The output of MD5 is binary: a sequence of 128 bits, commonly encoded as 16 bytes (technically, 16 octets , but let's use the common convention of bytes being octets). Humans don't read bits or bytes. They read characters . There are numerous code pages which tell how to encode characters as bytes, and, similarly, to decode bytes into characters. For almost all of them (because of ASCII ), the low-value bytes (0 to 31) are "control characters", hence not really representable as characters. So nobody really reads MD5 output directly. If someone is "reading" the hash values, then these values are most probably encoded into characters using one of the few common conventions for that. The two most prevalent conventions are hexadecimal and Base64 . With hexadecimal, there are only digits, and letters 'a' to 'f' (traditionally lowercase for hash values). You won't get "SALT ME!" in an hexadecimal output... With Base64, encoding uses all 26 unaccentuated latin letters (both lowercase and uppercase), digits, and the '+' and '/' signs. You could thus hope for "SaltMe" or "SALTME". Now that is doable, because each character in Base64 encodes 6 bits, so a 6-letter output corresponds to 36 bits only. Looking for a password which yields either "SaltMe" or "SALTME" will be done in (on average) 2 35 tries, i.e. within a few minutes or hours with some decently optimized code. Note, though, that someone who actually spends some time to read Base64-encoded hash values probably has some, let's say, "social issues", and as such might not react the way you hope. And it is done: When hashing with MD5 then Base64-encoding the result: infjfieq yields: SALTMEnBrODYbFY0c/tf+Q== lakvqagi yields: SaltMe+neeRdUB6h99kOFQ== | {
"source": [
"https://security.stackexchange.com/questions/56371",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/19671/"
]
} |
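A brute-force sketch, in PHP, of the search described in the answer above (an illustrative addition, not part of the original answer; expect on the order of 2^35 attempts for a six-character Base64 prefix, so a naive single-threaded loop like this will take a while):
$targets = array('SALTME', 'SaltMe');
while (true) {
    $candidate = bin2hex(random_bytes(6));              // a random 12-character password
    $encoded   = base64_encode(md5($candidate, true));  // Base64 of the raw 16-byte MD5 digest
    foreach ($targets as $t) {
        if (strncmp($encoded, $t, 6) === 0) {           // digest starts with the wanted 6 characters
            echo $candidate . ' -> ' . $encoded . PHP_EOL;
            break 2;
        }
    }
}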
56,389 | (Sorry I know this is a complete noob question and at the risk of posting a somewhat duplicate topic. I have a basic understanding of public/private key, hashing, digital signature... I have been searching online & stack forum last couple days but cannot seem to find a satisfactory answer.) Example: I am surfing on open wifi and I browse to for the 1st time. Server sends back its SSL certificate. My browser does its thing and verifies that the cert is signed by a CA that it trusts and all is well. I click around on the website. BUT! Question: Can someone actually please explain to me in a simple way how does my browser actually verify that the server certificate is legitimate? Yeah okay so on the certificate itself it says it is issued by, say "Verisign" but what is the actual cryptographic magic happens behind the scene to validate that it isn't a bogus certificate? I have heard people explain "SSL certificates are verified using the signing CA's public key" but that doesn't make sense to me. I thought public key is to encrypt data, not to decrypt data. So confused... appreciate it if someone could enlighten me. Thanks in advance! | You are correct that SSL uses an asymmetric key pair. One public and one private key is generated which also known as public key infrastructure (PKI). The public key is what is distributed to the world, and is used to encrypt the data. Only the private key can actually decrypt the data though. Here is an example: Say we both go to walmart.com and buy stuff. Each of us get a copy of
Walmart's public key to encrypt our transaction with. Once the
transaction is encrypted with Walmart's public key, only Walmart's private
key can decrypt the transaction. If I use my copy of Walmart's public
key, it will not decrypt your transaction. Walmart must keep
their private key very private and secure, else anyone who gets it can
decrypt transactions to Walmart. This is why the DigiNotar breach was such a big deal Now that you get the idea of the private and public key pairs, it's important to know who actually issues the cert and why certs are trusted. I'm oversimplifying this, but there are specific root certificate authorities (CA) such as Verisign who sign certs, but also sign for intermediary CA's. This follows what is called Chain of Trust, which is a chain of systems that trust each other. See the image linked below to get a better idea (note the root CA is at the bottom). Organizations often purchase either wildcard certs or get registered as a intermediate CA themselves who is authorized to sign for their domain alone. This prevents Google from signing certs for Microsoft. Because of this chain of trust, a certificate can be verified all the way to the root CA. To show this, DigiCert (and many others) have tools to verify this trust. DigiCert's tool is linked here . I did a validation on gmail.com and when you scroll down it shows this: This shows that the cert for gmail.com is issued by Google Internet Authority G2, who is in turn issued a cert from GeoTrust Global, who is in turn issued a cert from Equifax. Now when you go to gmail.com, your browser doesn't just get a blob of a hash and goes on it's way. No, it gets a whole host of details along with the cert: These details are what your browser uses to help identify the validity of the cert. For example, if the expiration date has passed, your browser will throw a cert error. If all the basic details of the cert check out, it will verify all the way to the root CA, that the cert is valid. Now that you have a better idea as to the cert details, this expanded image similar to the first one above will hopefully make more sense: This is why your browser can verify one cert against the next, all the way to the root CA, which your browser inherently trusts. Hope this helps you understand a bit better! | {
"source": [
"https://security.stackexchange.com/questions/56389",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45113/"
]
} |
56,514 | I have a friend who has a website developed in PHP on which we can browse all his files one after another (of course, we cannot read the content of the PHP files). Do you think this is a security hole?
If yes, in which sense? | What you're describing is normal directory listing . In itself, directory listing is not a security issue. If the security of your system is compromised after figuring out the structure of your files and directories, then you're relying on security through obscurity , which is bad. Examples of this bad practice include: Using secret directory names to access sensitive files. Restricting privileged functions only by hiding their URLs rather than using proper permissions. Leaving special doors/backdoors for developers. However , as part of a good security policy, after implementing proper security measures, it's beneficial to obscure the working parts of your system. The less you show about your system, the less information an attacker can get on you, which means you're making their job more difficult. "So, what should I do?" you ask. Simple: disable directory listing in your web server configuration. In Apache, go to your httpd.conf and find the line that says Options Includes Indexes , remove Indexes from that line, then restart Apache. | {
"source": [
"https://security.stackexchange.com/questions/56514",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
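For reference, a minimal sketch of the Apache change described in the answer above (the directory path is an illustrative assumption; the same directive can also go in a .htaccess file when overrides are allowed):
<Directory /var/www/html>
    Options -Indexes
</Directory>
The relative -Indexes form switches off directory listings without having to restate whatever other options are already in effect.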
56,623 | My application interacts with many other HTTPS based services. As we use them at considerable frequency, I am worried about the performance impact of using HTTPS. Is there any mechanism ( time bound or any other permanent) which I can use to prevent the HTTPS handshake and other potential bottlenecks ? Ofcourse I do not want to go with HTTP :) | In SSL there are connections , and there are sessions . A connection starts with a handshake, and ends when either party states it by sending a close_notify alert message. Typical Web browsers and servers will maintain connections open for some time, closing them after one or two minutes of inactivity; one or several HTTP requests and responses are sent over that connection. In normal HTTPS contexts, there is a one-to-one mapping between the SSL connections and the underlying TCP connections: for each TCP connection (to port 443), there will be a single SSL connection, and when the SSL connection ends, the underlying TCP connection is closed. A session relates to the asymmetric cryptography which occurs in a "full handshake". The handshake process, which occurs at the beginning of a connection, is about establishing the cryptographic algorithms and keys which will be used to protect the data for that connection. There are two sorts of handshakes: The full handshake is what a client and server do when they don't know each other (they have not talked previously, or that was long ago). In the full handshake, certificates are sent, and asymmetric cryptography (RSA, Diffie-Hellman...) occurs. The abbreviated handshake is what a client and server remember each other; more accurately, they remember the algorithms and keys that they established in a previous full handshake, and agree to reuse them (technically, they reuse the "master secret" and derive from it fresh encryption keys for this connection). A connection with a full handshake, and the set of connections with abbreviated handshake who reuse that full handshake, constitute together the session . The abbreviated handshake is more efficient than the full handshake, because it implies one less network round-trips, smaller handshake messages, and no asymmetric cryptography at all. For performance , in all generality, you want the following: Keep connections open as much as possible. An "open connection" uses a few memory resources (both parties must remember the keys) and system resources (for the underlying TCP connection, which is kept open). However, there is no network traffic involved in keeping a connection alive, except possibly the optional " TCP keepalive ". When connections cannot be kept open (e.g. to free system resources), it is best if the client and server remember sessions so that they may do abbreviated handshakes if they reconnect. Web servers have various default and configurable policies. For instance, Apache (with mod_ssl ) uses a cache which is defined both for size and for duration ; the server "forgets" a session when either its cache is full and it needs some extra room, or when the timeout has been reached, whichever condition occurs first. If you have control over the servers' configuration, then you may want to increase the "inactivity timeout" for connection termination, and also to increase the session cache size and duration. If you do not have control over the servers, then your question is somewhat moot: whatever you do, it will have to be compatible with what the servers offer. 
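As an illustration of the server-side knobs mentioned above, a hypothetical Apache/mod_ssl fragment (the directive names are standard mod_ssl and core directives, but the sizes and timeouts are made-up examples, not recommendations):
SSLSessionCache shmcb:/var/run/ssl_scache(512000)
SSLSessionCacheTimeout 600
KeepAlive On
KeepAliveTimeout 120
A larger session cache and a longer timeout let more clients come back with the cheap abbreviated handshake; longer keep-alives avoid handshakes altogether, at the cost of memory and open sockets on the server.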
You can somehow force a server not to forget a session by regularly opening a new connection with an abbreviated handshake, but that is not necessarily a good idea (usually, when a server forgets sessions, it is for a more-or-less good reason). Anyway, you should take actual measurements. This is a question about performance; abstract reasoning cannot give definitive answers. The usual rule of thumb is that performance issues do not exist until they have been duly measured in real life, or at least in a reasonably representative prototype system. In any case, it is hard to obtain the same security functionalities of SSL with a smaller cost. Replacing SSL with "something custom" is unlikely to provide much improvement performance-wise without sacrificing security in some way. For a walk-through of SSL, see this answer . Having some knowledge of the SSL internals really helps a lot in thinking about any design involving SSL. | {
"source": [
"https://security.stackexchange.com/questions/56623",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6862/"
]
} |
56,697 | Given a certificate¹ and a private key file², how can I determine if the public key on the certificate matches the private key? My initial thought was to simply encrypt some text with the public key on the cert, and attempt to decrypt it with the private key. If it roundtrips, we've got a winner. I just can't figure out how to do this with OpenSSL. Alternatively, if I could generate the public key from the private key, I could just compare their fingerprints. SSH seems to have a command for this: ssh-keygen -y -f my_key > my_key.pub But the hashes don't match. (I'm nearly certain I have the key corresponding to the cert, as the webserver is serving with it, but I'd like an easier way that spinning up a server to check.) ¹ a .crt file, in x509 format, I think. OpenSSL can read it with: openssl x509 -text -in that_cert.crt ² An RSA private key. | I'm going to assume you have ssl.crt and ssl.key in your current directory. If you want to see what's in your certificate it's # openssl x509 -in ssl.crt -text -noout Two of the things in here will be the RSA public Key's Modulus and Exponent (in hex). If you want to see what's in your private key it's # openssl rsa -in ssl.key -text -noout Note the public key is usually in there (at the very least the modulus is required to be in there for the private key to work, and the public exponent is usually 65537 or 3). So you can simply check if the modulus and public exponent match. Granted, if you want to check that the private key is actually valid (that is d and e are valid RSA exponents for the modulus m), you would need to run # openssl rsa -check -in ssl.key -noout EDIT (2018): Please note if you are checking that a private key coming from an untrusted source corresponds with a certificate, you MUST CHECK that the private key is valid. See here for an example where not checking the validity of a "leaked" private key lead to a CA improperly revoking a certificate. You may skip this step if you know you validly generated the keypair. Now you can simply generate the public key from both the certificate and the private key and then use diff to check that they don't differ: # openssl x509 -in ssl.crt -pubkey -noout > from_crt.pub
# openssl rsa -in ssl.key -pubout > from_key.pub
# diff from_crt.pub from_key.pub Or as a one liner that doesn't create files (using process substitution ): # diff <(openssl x509 -in ssl.crt -pubkey -noout) <(openssl rsa -in ssl.key -pubout) If the keys match, diff shouldn't return anything. (You probably will see "writing RSA key" output to stderr from the second command). Note your webserver probably would loudly complain if the certificate and private key didn't match. E.g., with nginx using the wrong key (same size, same public exponent, but last year's key) for the certificate nginx is using: # sudo /etc/init.d/nginx restart
* Restarting nginx nginx
nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/private/wrong_key.key") failed
(SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch)
nginx: configuration file /etc/nginx/nginx.conf test failed | {
"source": [
"https://security.stackexchange.com/questions/56697",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/17749/"
]
} |
56,863 | Fairly frequently, the contact form on my blog gets comments that look similar to this (each field represents a text box users can enter into the HTML form on the blog): Name: 'ceguvzori' Email: '[email protected]' Website: 'QrSkUPWK' Comment: vaB5LN <a href="http://pepddqfgpcwe.com/">pepddqfgpcwe</a>,
[url=http://hvyhfrijavkm.com/]hvyhfrijavkm[/url],
[link=http://cwiolknjxdry.com/]cwiolknjxdry[/link], http://ubcxqsgqwtza.com/ I'd consider them to be spam, but the sites they link to don't exist, so they aren't helping SEO or spreading malicious links. Not even the email host, avbhdu.com , exists. What is the purpose of these comments? | They're probing your site. First, whether the comment will be published. Second, note how they use several popular syntaxes for links - it's an attempt to check which of them will result in an actual HTML link. If your site lets those posts through, expect more spam, this time more malicious. | {
"source": [
"https://security.stackexchange.com/questions/56863",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/38377/"
]
} |
56,951 | While scanning my website with uniscan it found my robots.txt file which disallows access to /cgi-bin/ and other directories, but they are not accessible in browser. Is there a way to access the directories or files which are Disallowed? | The robots.txt does not disallow you to access directories. It tells Google and Bing not to index certain folders. If you put secret folders in there, Google and Bing will ignore them, but other malicious scanners will probably do the opposite. In effect you're giving away what you want to keep secret. To disallow folders you should set this in Apache vhost or .htaccess. You can set a login on the folder if you want. | {
"source": [
"https://security.stackexchange.com/questions/56951",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45693/"
]
} |
56,955 | I saw many posts here on this site dishing out advice on disabling HTTP TRACE method to prevent cross site tracing . I sought to do the same thing. But when I read the Apache documentation , it gives the opposite advice: Note Despite claims to the contrary, TRACE is not a security vulnerability
and there is no viable reason for it to be disabled. Doing so
necessarily makes your server non-compliant. Which should I follow? | One of the wisest security principles says that what is unused should be disabled. So the first question is: Are you really going to use it? Do you need it to be enabled? If you are not going to use the TRACE method then in my opinion it should be switched off. It will protect your app not only against XST , but also against undiscovered vulnerabilities related to this channel, which may be found in the future. | {
"source": [
"https://security.stackexchange.com/questions/56955",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9312/"
]
} |
57,057 | Whenever an unhandled exception makes it into production somehow - whatever the reason - there's generally an option (especially with .NET programs) to print out a stack trace to the end user before the program ends completely. Even though this helps with debugging the program, if the user sends a copy of the stack trace in a bug report, it is definitely a security concern. You don't want them being able to see your code like that - not without them going through a lot of extra hassle. But what if the text of the stack trace were encrypted before being printed to the screen? Would this be something which is safe, viable, etc.? Or would it still be something particularly worth avoiding? NOTE I'm aware of decompilation and the issues that produces. There are obfuscators though, and even though they're pretty much anything but perfect, they are better than no protection at all. It has been said that locks are for honest people, but everybody still uses them. | Think about why you want to do this. It is in my opinion, entirely pointless. If the exception occurred on the server side, handle and log it there. There is absolutely no point in displaying the stack trace to the user. If the exception occurred on the client side, in a thick client style web app, desktop app, mobile app etc, there is absolutely no point in encrypting it. Any determined enough user can decompile or reverse engineer client side code. In fact, how would you encrypt the data? Encryption implies an encryption key, and this key has to be stored somewhere. I also question what a normal user can do when you display the stack trace. If crash data is important information, implement some sort of reporting functionality for diagnostics. | {
"source": [
"https://security.stackexchange.com/questions/57057",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45803/"
]
} |
57,143 | CNet is reporting that all OpenID and OAuth sites are vulnerable to an attack called "Covert Redirect". What is this attack, how does it work, and as an end user, how can I mitigate the risk? | This isn't a vulnerability of/in OAuth 2.0 at all. The issue has been wildly overblown and misstated by CNET and the original finder. Here it is in a nutshell: If your web site (example.com) implements an open redirect endpoint - that is, implements a URL that will redirect the browser to any URL given in the URL parameters - AND your redirect copies URL parameters from the incoming URL to the outgoing redirect URL, then it is possible for third parties to exploit this artifact of your web site in a wide variety of nasty ways. Worst case: evil.com is able to get the auth code originally intended for your web site (example.com) and may be able to use that auth code to extract user information from the auth provider (Google, Facebook, etc) or possibly even take control of the user's account on your web site. Would evil.com be able to take control of the user's Google account using that access code? No, because the access code was minted for your site, example.com, and only works there. Who's fault is it? Yours, for implementing an open redirect on your site. Don't blame Google or Facebook or others for your poor implementation. There are a few legitimate use cases for having a redirect on your site. The biggest one is to redirect the browser after login to the web page (on your site) that the user originally requested. This only needs to redirect from your web site to pages in your web site, on your domain. How to fix it? Have your redirect endpoint (example.com/redirect?&destination=ht.tp://foo.com...) validate the destination URL. Only allow redirects to pages on your site, in your domain. More info on my blog: http://dannythorpe.com/2014/05/02/tech-analysis-of-serious-security-flaw-in-oauth-openid-discovered/ Update: There is an open redirect issue when using Facebook for OAuth user login. When you configure your app definition on Facebook, be sure to enter your domain-specific redirect URL in the redirect field provided. Facebook allows this field to be left blank. If left blank, Facebook itself acts as an open redirect while processing user logins for your web site. Facebook should fix this by simply not allowing the redirect URL field to be left blank. | {
"source": [
"https://security.stackexchange.com/questions/57143",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15444/"
]
} |
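A minimal PHP sketch of the fix described in the answer above: validate the destination before redirecting. The parameter name and the allowed host are illustrative assumptions, and real code should be stricter (scheme checks, canonicalisation, ideally a whitelist of known paths):
// refuse to forward the browser anywhere but our own site
$target = isset($_GET['destination']) ? $_GET['destination'] : '/';
$host = parse_url($target, PHP_URL_HOST);
if ($host !== null && strcasecmp($host, 'example.com') !== 0) {
    $target = '/';   // off-site (or unparseable) destination: fall back to the home page
}
header('Location: ' . $target);
exit;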
57,295 | I got this email from [email protected] , with the title: Your account has been limited until we hear from you. I think this is a scam / spoof email because I don't see any notification in my Paypal account and this Hotmail account is not used as my Paypal login. (It used to be, but hasn't been for more than a year.) But the troubling thing is, the TO: field has my old password as my name, then my email in brackets. A screenshot below should clarify what I'm saying. I've blurred my email and the two red arrows are pointing to what was my old password in plain text. Is there anything I could do to protect myself? Does that mean the sender has me under their "contact book" with my name as my password? I have already forwarded the email to [email protected]. | It seems like the spammer got your personal information, including your password, through a security breach somewhere. Why did they use your password instead of your name? I would say it was an honest mistake on their side. They just mixed up the fields when designing the spam mail. If you are still using the password somewhere, you should change it ASAP. In the future you should avoid using the same password for different services. Data breaches become more and more frequent, and they even hit larger companies which really should know how to secure their systems. Using a password manager like KeePass can help you to manage all the different passwords. | {
"source": [
"https://security.stackexchange.com/questions/57295",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43949/"
]
} |
57,310 | On this ISC article on DVR compromise the author talks about the compromise of an embedded system. In particular, the attacker executes a series of echo commands on the remote host, and: This DVR has no "upload" feature. There is no wget nor is there an ftp or telnet client. ... The first echo writes 51 bytes to "/var/run/rand0-btcminer-arm" and
the second echo returns "done", indicating that the system is ready
for the next echo command. Unlike the name implies, "rand0-btcminer-arm" is not a bitcoin miner.
Instead, it just appears to be a version of "wget". I do not understand how even the basic fundamentals of wget could fit in 51 bytes. The article contains a packet dump, so I guess I could write it to file and try to reverse engineer the binary, but I suspect there's something else going on here. Could anyone help me understand how this is happening? Is the "binary" doing a library call to network functionalities? | The author made the mistake of being ambiguous and confused the readers a bit. I must admit that, like you, I was confused at first, until I saw the PCAP dump. First of all, the box indeed doesn't have wget . The attacker didn't use that one echo statement; he used a series of echo statements. I counted about 107 echo statements progressively building the executable rand0-btcminer-arm . At about 50 bytes each, that's about 5350 bytes . Way more than enough to achieve a simple HTTP download. Here's a snippet of them (highlighted in red): | {
"source": [
"https://security.stackexchange.com/questions/57310",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7896/"
]
} |
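To make the mechanism from the answer above concrete, an illustrative sketch of how a file can be rebuilt with nothing but the shell's echo builtin (the byte values here are just the ELF magic number used as an example, not bytes taken from the actual capture; the target path is the one named in the question):
echo -ne '\x7f\x45\x4c\x46\x01\x01\x01\x00' > /var/run/rand0-btcminer-arm
echo -ne '\x28\x00\x00\x00\x01\x00\x00\x00' >> /var/run/rand0-btcminer-arm
# ...roughly a hundred more ~50-byte chunks appended the same way...
chmod +x /var/run/rand0-btcminer-arm
Each command is short enough to fit through a constrained command-injection channel, and together they reproduce the file byte for byte.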
57,562 | I was wondering about md5 encryption. It is good, and I agree that it is unbreakable. But this is why we have rainbow tables. What if bunch of people gather together and start brute forcing and creating a hash for every single possible combination of characters. Especially if you are someone like NSA then you probably have computational power to generate hash tables for all possible combinations within relatively "short" time. Therefore wouldn't that render md5 encryption pointless? Sorry if this is inadequate question, but I simply couldn't stop thinking about this. | First of all, MD5 is not an encryption algorithm. It is a hash function. Encryption generally implies decryption, which you cannot do with a hash function. Who said MD5 is good or unbreakable? It is 'breakable'. The complexity of obtaining a collision for MD5 is around 2^64. This is the equivalent of an exhaustive key search of 64 bits, quite weak for modern cryptosystems. Another aspect you need to know is you dont need to know the hash of every possible plaintext, but only the first collision they obtain. If A and B for example hashed give the same value, you would only need to store one of them. MD5 is no longer used in 'reliable' systems. Unix passwords for example stopped being hashed using MD5 quite a long time ago. Now they use SHA-512 (equivalent of a 256 symmetric key). | {
"source": [
"https://security.stackexchange.com/questions/57562",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27627/"
]
} |
57,646 | I really like the Java programming language, but I continuously hear about how insecure it is. Googling 'java insecure' or 'java vulnerabilities' brings up multiple articles talking about why you should uninstall or disable Java to protect your computer. Java often releases a huge number of security patches at a time, and yet there are still tons of vulnerabilities left to patch. I understand that there will always be bugs in software, but the amount of vulnerabilities Java has had does not seem normal (or am I imagining that?). What's even more confusing is that if there is a single architectural decision that is creating these vulnerabilities, why not change that design? There are tons of other programming languages that don't have this problem, so there must be a better way to do whatever Java is doing wrong. So why is Java still so insecure? | If you use Java like most other programming languages, e.g. to write standalone applications, it is no less secure than other languages and more secure than C or C++ because of no buffer overflows etc. But Java is regularly used as a plugin inside the web browser, e.g. similar to Flash. Because in this case the user runs untrusted code without having explicitly installed it, the idea is to have the code run inside a limited sandbox, where it should not be able to somehow act against the system or the user (e.g. read local files and send them to the website, scan the local network etc). And this is where Java failed in the recent years, e.g. new bugs popped up sometimes on a daily basis which allowed escaping from the sandbox. Also, sometimes bugs in the byte code interpreter or native libraries lead to buffer overflows and could compromise the system, but in this regard Flash is usually considered worse. And as for the other languages being better: these usually can't even run as untrusted code inside a sandbox (exception is JavaScript and maybe Flash), so they would be even worse because there is no inherent way to limit their interaction with the system. | {
"source": [
"https://security.stackexchange.com/questions/57646",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6159/"
]
} |
57,856 | I don't want it to just check the extension of the file as these can easily be forged even MIME types can be forged using tools like TamperData. So is there a better way to check file types in PHP ? | You want PHP's Fileinfo functions, which are the PHP moral equivalent of the Unix 'file' command. Be aware that typing a file is a murky area at best. Aim for whitelist ("this small set of types is okay") instead of blacklist ("no exes, no dlls, no ..."). Do not depend on file typing as your sole defense against malicious files. | {
"source": [
"https://security.stackexchange.com/questions/57856",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25497/"
]
} |
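A minimal sketch of the whitelist approach recommended in the answer above, using PHP's Fileinfo extension (the form field name and the allowed types are illustrative assumptions):
// determine the type from the file's content, not from its name or the client-supplied MIME type
$finfo = new finfo(FILEINFO_MIME_TYPE);
$mime  = $finfo->file($_FILES['upload']['tmp_name']);
$allowed = array('image/png', 'image/jpeg', 'image/gif');
if (!in_array($mime, $allowed, true)) {
    exit('File type not allowed');
}
Treat this as one layer only: also store uploads outside the web root, rename them, and never rely on content sniffing alone.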
57,909 | Jetblue's password requirements specify that, among other stringent requirements: Cannot contain a Q or Z I can't fathom a logical reason for this, unless it were say, extremely common for the left side of keyboards to break, but then you wouldn't allow 'A' either :) What would be the reason for this security requirement? | It's a leftover from the time when keypads didn't have the letters Q and Z . Security-wise, there's no reason. It's just because of old systems. To clarify: You used to be able to enter your password over the phone. Some phones didn't have the letters Q or Z, like the one on the picture below. Image courtesy: Bill Bradford on flickr.com Because of this, passwords including these characters were disallowed. They haven't changed this requirement for whatever reason: Legacy systems, poor documentation, or they just don't care. | {
"source": [
"https://security.stackexchange.com/questions/57909",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/46535/"
]
} |
57,950 | A few years ago we had that awesome Linux distribution called Damn Vulnerable Linux.
But unfortunately it looks like the project is dead. So my question is are there other
Linux distributions which are meant to be hacked (explicit in the view of exploit development). Also welcome would be applications on the Windows platform for exploit exercises (like vulnerable server). Thanks in advance | Vulnhub is a collection of vulnerable distributions along with walkthroughs contributed by the community. exploit-exercises.com provides a variety of virtual machines, documentation and challenges that can be used to learn about a variety of computer security issues such as privilege escalation, vulnerability analysis, exploit development, debugging, reverse engineering, and general cyber security issues. PentesterLab has interesting exercises, some o them are about exploit development. RebootUser has a lab that includes a Vulnix - a vulnerable Linux machine, VulVoIP - a relatively old AsteriskNOW distribution and has a number of weaknesses, and VulnVPN - a VM that you can practice exploiting the VPN service to gain access to the sever and ‘internal’ services. BackTrack PenTesting Edition lab is an all-in-one penetration testing lab environment that includes all of the hosts, network infrastructure, tools, and targets necessary to practice penetration testing. It includes: a DMZ network with two hosts targets, an “internal” network with one host target and a pre-configured firewall. PwnOS is a Debian VM of a target on which you can practice penetration testing with the goal of getting root. Holynix is an Linux vmware image that was deliberately built to have security holes for the purposes of penetration testing. Kioptrix VM is targeted at the beginner. Scene One is a pentesting scenario liveCD made for a bit of fun and learning. Sauron is a Linux system with a number of vulnerable web services. LAMPSecurity training is designed to be a series of vulnerable virtual machine images along with complementary documentation designed to teach Linux, Apache, PHP and MySQL security. OSCP , OSCE , SANS 660 and HackinkDOJO are some of the paid courses that have good practical labs. Hacking challenge websites can also provide challenges that are increasing in difficulty, fun and addictive. WeChall is a website that aggregates scores on other challenge websites and it has a category for websites with exploits . CTF (Capture The Flag) events have challenges where you are required to exploit local or remote software. Most live events are available on CTFTime but there are repositories of past events and some CTFs are still available after the live event. But for exploit development, I suggest installing vulnerable applications on your own computer where you could easily perform analysis. The application doesn't necessarily have to be a server or run on a different computer. Go to exploit-db and find old exploits there, then look for that version of the vulnerable software and start working on it. If you need hints, the actual exploit can point you in the right direction. | {
"source": [
"https://security.stackexchange.com/questions/57950",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43907/"
]
} |
58,025 | If a certificate has a limited duration of, say 5 years, but it gets somehow compromised after 2 years, waiting the 3 remaining years for it to get invalid is not a real solution to the breach problem. (3 years is eternity in IT, I guess) Also, if the encryption method gets cracked (à la WEP), you also need to update everything immediately. What are the advantages to limit the validity in time (except for the issuer to make money on a regular basis, I mean)? | The technical reason is to keep CRL size under control: CRL list the serial numbers of revoked certificates, but only for certificates which would otherwise be still valid, and in particular not expired. Without an end-of-validity period, revoked certificates would accumulate indefinitely, leading to huge CRL over time. However, since network bandwidth keeps on increasing, and since modern revocation uses OCSP which does not suffer from such inflation, this technical reason is not the main drive behind certificate expiration. In reality, certificates expire due to the following: Inertia: we set expiration dates because we have always set expiration dates. Traditions, in information security, often lead to dogma: when security is involved, people feel very inhibited when it comes to changing long-standing habits, in particular when they don't understand why the habit is long-standing in the first place. Mistrust: ideally, certificate owner should seek a renewal when technological advances are such that their keys become inadequately weak (e.g. they had 768-bit RSA because it was fine back in 1996). But they don't. Expiry dates enforce renewals at predictable moments, allowing for gradual, proactive evolution. Confusion: expiry dates can be viewed as a translation of housekeeping practices from earlier military cryptosystems, before the advent of the computer, where frequent key renewal was required to cope with the weakness of such systems. Some people think that certificate expiration somehow implements that practice (which is obsolete, but hey, Tradition is Tradition). Interoperability: existing deployed implementations of X.509 expect certificates with an expiry date; the field is not optional. Moreover, some of these implementation will refuse dates beyond January 2038 (that's the year 2038 problem ). Greed: if you are a CA in the business of selling certificates, well, you really like it when people have to come buy a new one on a yearly basis. | {
"source": [
"https://security.stackexchange.com/questions/58025",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/46649/"
]
} |
58,077 | Similar to how it can be easily done for RSA: openssl req -x509 -nodes -newkey rsa:2048 -rand /dev/urandom -keyout example.key -out example.crt -days 365 I'd like to generate an ECDSA cert/key in one step. I've tried: openssl req -x509 -nodes -newkey ec:secp384r1 -keyout ecdsa.pem -out mycert.crt -days 30 Returns the below error Can't open parameter file secp384r1 . I am trying to specify the curve to use. If a key file exists, then you can specify it with ec:example-ecdsa.pem and it will work. Possibly something like this could work with tweaking: openssl req -new -x509 -nodes -newkey ec:$(openssl ecparam -name secp384r1) -keyout cert.key -out cert.crt -days 3650 | This seemed to be the command you want: openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) -keyout cert.key -out cert.crt -days 3650 | {
"source": [
"https://security.stackexchange.com/questions/58077",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/44203/"
]
} |
58,214 | My friend just posted a picture of her key to instagram and it occurred to me that with such a high res photo, the dimensions of the key could easily be worked out.
Therefore the key could be duplicated.
What's to stop someone malicious from abusing this? | The simple answer is: nothing. This has already been done for many years, with keys being cast or created from blanks using hand drawn copies, photographs, remembered shapes etc all being successfully used, both by locksmiths and criminals. A 3D printed key will do just as well, if strong enough, or it could be used to cast a key if necessary, or as pointed out by @EkriirkE - you could use a torque bar to turn the barrel. You should not ever post picture of keys to a public site, unless it is for something unimportant. | {
"source": [
"https://security.stackexchange.com/questions/58214",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/50573/"
]
} |
58,215 | I would like to move from sequential to random user IDs, so I can host profile photos publicly, i.e. example.com/profilepics/asdf-1234-zxcv-7890.jpg . How long must user IDs be to keep anyone from finding any user photos for which they have not been given the link? Does 16 lowercase letters and zero through nine provide a reasonable complexity? I'm basing this on 36^16 = 8x10^24 ; conservatively estimating 10 billion user accounts reduces the space to 8x10^14 . At 1000 guesses/second, it would take 25 000 years to find an account. Unless I'm overlooking something. | It depends entirely on what you mean by "safe". If your only concern is an attacker guessing URLs, then 16 alphanumerics gives roughly 8,000,000,000,000,000,000,000,000 possible addresses, which is plenty to stop random guessing -- in order for an attacker to have a 50% chance of finding even one picture on a site with a thousand users in a year, they'd need to make 100 trillion tries per second, enough traffic to bring down even something like Amazon or Google. But there are other ways for URLs to leak: people putting them in emails or blog posts, web crawlers finding pages you didn't secure adequately, and so on. If you really need to protect something, you need to put it behind the same sort of security as the rest of your website. Personally, for making hard-to-guess URLs, I'd use GUIDs/UUIDs. The search space is absurdly huge, you don't need to coordinate generation between multiple servers, and most languages have standard routines for handling them. | {
"source": [
"https://security.stackexchange.com/questions/58215",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/35027/"
]
} |
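For illustration, a short PHP sketch of generating such an identifier (a hedged example, not taken from the answer; any language's UUID routine achieves the same effect):
// 128 bits of randomness, rendered as 32 hex characters
$photoId = bin2hex(random_bytes(16));
// e.g. serve the picture as /profilepics/$photoId.jpg
random_bytes() draws from a cryptographically secure source, which is what makes the identifiers practically unguessable.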
58,509 | As someone who usually works with people in other countries it has always been a problem to send login information to each-other. For development login details like debug databases etc sure I can send them over in clear text email or something but when it comes to actual production information such as SSH keys how do you securely send them to someone when face to face contact isn't possible. | I usually use SMS. While not perfectly secure, this is more secure than email and generally adequate. It has the major benefit of not requiring any setup, such as exchanging PGP keys. You could make this more secure by sending half of the password by email and half by SMS. Alternatively (as suggested by Michael Kropat) send a file with the symmetrically encrypted password by email, and SMS the decryption password. For SSH keys, you should only transfer the public key. If you're granting a user access to a server, they should send you their public key, rather than you send them a private key. You still need to confirm the received key is authentic, but you don't need to keep it confidential. | {
"source": [
"https://security.stackexchange.com/questions/58509",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36976/"
]
} |
58,541 | There is a service called ProtonMail . It encrypts email on the client side, stores encrypted message on their servers, and then the recipient decrypts it, also on the client side and the system "doesn't store keys". My question is this: How does the decryption work? I'm a bit confused. Do I have to send my decryption key to each recipient before he can read my messages? | I am Jason, one of the ProtonMail developers. Decryption uses a combination of asymmetric (RSA) and symmetric (AES) encryption. For PM to PM emails, we use an implementation of PGP where we handle the key exchange. So we have all the public keys. As for the private keys, when you create an account, it is generated on your browser, then encrypted with your mailbox password (which we do not have access to). Then the encrypted private key is pushed to the server so we can push it back to you whenever you login. So do we store your private key, yes, but since it is the encrypted private key, we don't actually have access to your key. For PM to Outside emails, encryption is optional. If you select to encrypt, we use symmetric encryption with a password that you set for that message. This password can be ANYTHING. It should NOT be your Mailbox password. You need to somehow communicate this password to the recipient. We have a couple other tricks as well for getting around the horrible performance of RSA. We will eventually write a whitepaper with full details that anybody can understand. But something like that is a week long project in itself. I apologize in advance if my answer only makes sense to crypto people. | {
"source": [
"https://security.stackexchange.com/questions/58541",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47094/"
]
} |
58,704 | When storing user's passwords that you need to verify against (but not use as plaintext) the current state of the art is: Hash the password Use a salt Use a slow hash function - bcrypt, scrypt, etc. This provides the best possible protection against an attacker who has stolen the password database. It does not solve other issues like phishing, malware, password re-use, etc. But there is one remaining problem: the slow hash function can allow a denial of service attack. A single request burns a lot of CPU, which makes a DOS possible with a relatively small number of concurrent requests, making it difficult to use defences like IP throttling. Given the improving performance of JavaScript in browsers, it may now make sense to do this in the browser. I'm assuming here that the site is using SSL, so the JavaScript is delivered securely. If you're not using SSL, then using a slow hash function is unlikely to be a priority for you. There is at least one JavaScript implementation of bcrypt. But using this in a simple way would introduce two problems: The client needs to fetch the salt from the server. This introduces latency on login and, unless care is taken, can reveal whether a particular user account exists. If hashing is done purely on the client then the benefits of storing hashes are lost. At attacker who has stolen the password hashes can simply login using the hashes. However, I think there are acceptable solutions to both of those problems: The salt can be generated as hash(server_salt + user_name) - where server_salt is a random number that is unique to the server, public, and the same for all users. The resulting hash appears to have the required properties of a salt. The server should do a single, fast, hash operation on the hash it receives. As an example: the server stores SHA-256(bcrypt(salt, password)). The client sends bcrypt(salt, password) then the server applies SHA-256 and checks the hash. This does NOT allow an attacker to conduct a fast offline brute force attack. They can do a fast brute force of SHA-256(password) because password has a limited amount of entropy - 2^50 or 2^60 or so. But a 128-bit bcrypt(salt, password) has entropy or 2^128, so they cannot readily brute force it. So, is this a reasonable and secure approach? I am aware of the general advice to "don't roll your own crypto". However, in this case, I am attempting to solve a problem that is not solved by off-the-shelf crypto. For some basic credibility, this has been looked at by John Steven (a recognised expert in the field) with positive outcome from a "brief" analysis. | Using servername+username as salt (or a hash thereof) is not ideal, in that it leads to salt reuse when you change your password (since you keep your name and still talk to the same server). Another method is to obtain the salt from the server as a preparatory step; this implies an extra client-server roundtrip, and also means that the server would find it more difficult to hide whether a given user name exists or not (but it does not matter much in practice). What you describe is a well-known idea, usually called server relief , which can be applied to just any password hashing function in just the way you describe: Client obtains (or computes) the salt. Client computes the slow salted hash value V and sends it as password. Server stores h(V) for some fast hash function h , and uses it to verify the V value from the client. This is safe. 
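To make the scheme above concrete, a minimal PHP sketch of the server side (assuming, as the question does, that the browser sends V = bcrypt(salt, password); the variable names are illustrative):
// at registration: store only a fast hash of the slow client-side hash
$stored = hash('sha256', $clientHashV);
// at login: recompute from the submitted value and compare in constant time
$ok = hash_equals($stored, hash('sha256', $clientHashV));
A stolen database then holds SHA-256(bcrypt(...)) values, which cannot be replayed directly as login tokens and remain expensive to brute-force.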
The main drawback is that the slow hashing will go at the speed of the client, so the number of iterations may have to be lowered, possibly considerably so if the client is computationally feeble -- as is the case for anything involving Javascript. If the system must work with a browser on a not-so-recent smartphone, then the iteration count will have to be 10 to 100 times smaller than what could be done on the server, implying a corresponding reduction in resistance to brute force (in case the attacker steals the hash values stored on the server). So that's a trade-off: since the busy server offloads work on clients, it can use a higher iteration count without itself drowning under the load; but since the clients are weak, the iteration count must be lowered, usually much more than it was increased thanks to the offloading; this means that the construction is not worth the effort. Usually. Of course, if the clients are computationally strong, e.g. this is a native-code application on a gaming machine, then server relief is a good idea and will look like the way you describe it. Another possibility is offloading the work to another, third-party system, e.g. other clients (already connected): this can be done securely if the hash function includes the necessary mathematical structures. Such offloading is known as delegation. To my knowledge, delegation is offered by only a single password hashing function, called Makwa (see the specification), one of the candidates of the current Password Hashing Competition. | {
"source": [
"https://security.stackexchange.com/questions/58704",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31625/"
]
} |
58,781 | In TrueCrypt I noticed the option to encrypt a volume with multiple encryption algorithms i.e. AES-Twofish-Serpent. Would it be useful to encrypt something with the same algorithm multiple times? For example AES-AES-AES. I would guess if a flaw or backdoor in the algorithm was discovered this defense would be useless, but would it make brute force attacks harder? EDIT: how is applying multiple iterations any different? | Yes, it makes a difference. It makes your system more risky. In answering this question, I'm going to assume that you're implementing an AES-AES cascade using a sensible mode of operation (e.g. CBC or GCM) and with independent keys. The benefit you seem to be proposing is that you could prevent brute-force attacks by using multiple layers. The problem is that, if an adversary has any chance of breaking a 128-bit key, then having to break three of them makes almost zero difference to them. You're talking about the difference between 2^128 and 2^129.58 operations. Considering that the limits of computation put the costs involved with cracking a 128-bit key somewhere in the region of 1/100th of all of the power ever produced by man, that little bit extra barely matters. The only benefit that an AES-AES cascade would bring is that many classes of attack against block ciphers are made more difficult. For example, chosen plaintext attacks rely upon getting a system to encrypt attacker-selected plaintexts, and usually involve comparing the resulting ciphertexts. However, with AES-AES cascades, you can't possibly select the input to the second cipher. There's a problem, though. Remember when I said I'd assume you'd made sane decisions? That's where things start to fall apart. By increasing the complexity of your system, you increase the number of security decisions you have to make, and increase the potential for security failures and bugs. Do you perform two sequential AES block transforms for each block process in CBC, or do you encrypt everything with AES-CBC and then encrypt it all again? Have you used two separate IVs? Are you selecting IVs independently? Are you selecting keys independently? How are you applying and checking authenticity? How are you going to safely exchange, store, or derive the keys and IVs in your protocol? Are all of your implementations strong against side-channel attacks such as DPA and timing attacks? That's a lot of questions to get right, and a lot of potential areas for failure. Finally, I'd like to remind you of the purpose of cascades: to ensure that a weakness in one cipher doesn't result in a loss of confidentiality of data. By correctly implementing a cascade of, say, AES-Serpent, you ensure that both AES and Serpent need to be broken before the data is compromised. | {
"source": [
"https://security.stackexchange.com/questions/58781",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10714/"
]
} |
58,827 | Since an IP address does not necessarily represent a specific device, but probably a whole network/company/etc., does it make sense at all to block an IP address if there is a significant number of failed login attempts from it? I was planning to implement IP checking as well as tries for a specific user/account/email, but I am not sure whether it is therefore better to leave the IP check out completely. On the other hand this allows an attacker to pretty much try a certain number of passwords for every user without ever getting banned (at the same time blocking those users from being able to log in since their accounts will be locked for a while). What is the correct approach to prevent something like that (possibly without using dedicated hardware)? | The answer to this question very much depends on the security posture of your site, which decides whether the risk of unauthorised access is greater or lower than the risk of Denial of Service for some users. For high risk sites, I might go with the blocking option, especially where most of the user base is likely to be home users and therefore is likely to have distinct IP addresses. One compromise might be that, where you detect password-guessing attacks, you add some anti-automation (e.g. CAPTCHA) to logins from that IP address for a while. That has the effect of making the attack harder to pull off while not completely blocking legitimate users from the site. If you still get lots of invalid logins with the CAPTCHA completed then it would sound like you're seeing a more targeted attack (as they'd likely need to pay for a CAPTCHA solving service if your CAPTCHA is any good), and at that point I'd be more inclined to block the IP address for a while and redirect users to a message explaining the block (something like "malicious activity has been detected from your IP address, please contact support on [your_support_email_here]"). | {
"source": [
"https://security.stackexchange.com/questions/58827",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28814/"
]
} |
58,849 | I've heard from several people that private repository servers like BitBucket are not really safe. I've heard rumours about code being stolen and used by people even out of private repositories. Is it true? Is there any evidence that cases like that could have happened? | A git repository is just files. So you're asking "Are private files safe?" To which the answer is "you're asking the wrong question". A git repository is exactly as safe as the place that is storing it for you. No more, no less. If it's GitHub, then it's exactly as safe as GitHub is. And before you ask how safe GitHub is: nobody knows the answer but them. Same story for BitBucket, Gitorious, Dropbox, Google Apps, Microsoft OneDrive and literally everywhere else you can store files (including your Git repo): Nobody can tell you how safe they are because nobody knows but the vendor. And the vendor always says they're safe. If you're paranoid, keep your files on your own hard drive. In a mattress. Buried behind the shed. | {
"source": [
"https://security.stackexchange.com/questions/58849",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47377/"
]
} |
58,857 | Is there an easy way to test an SMTP server to check for configuration issues associated with STARTTLS encryption, and report on whether it has been configured properly so that email will be encrypted using STARTTLS? Think of the Qualys SSL server tester as an analogy: it is a great tool to quickly check a webserver to see whether use of SSL has been properly configured, and identify opportunities for improving the configuration to provide stronger encryption. It knows how to recognize many common configuration errors and gives a grade. Is there anything like that for STARTTLS on SMTP servers? In particular, given an SMTP server, I would like to tell: whether it supports STARTTLS, whether its STARTTLS configuration has been set up properly so that email with other major email providers will end up being encrypted, whether it supports perfect forward secrecy and whether it is configured so that the perfect forward secrecy ciphersuites will be used in practice (where possible), whether it provides a suitable certificate that will pass strict validation checks, whether it has any other configuration errors. How can I do this? Facebook and Google have recently highlighted the state of STARTTLS usage on the Internet and called for server operators to enable STARTTLS and configure it appropriately so that email will be encrypted while in transit. Are there easy-to-use tools to support this goal? | Here are several websites that provide tests that you may be interested in. SSL-Tools is a web-based tool that tests an SMTP server for each of the items you mentioned; it tests for STARTTLS support, a certificate that passes strict validation checks, support for perfect forward secrecy, and other stuff: https://ssl-tools.net/mailservers StartTLS is a web-based tool that tests an SMTP server and provides a simple grade, along with many details on the configuration of the SMTP server (though no testing of whether perfect forward secrecy is used): https://starttls.info/ (see the about page for information about the service, or statistics about sites checked with their service) CheckTLS is a web-based tool that provides a way to test an SMTP server for STARTTLS support as well as whether the certificate is "ok" (i.e., it passes strict validation) and partial information on what cipher was negotiated when they connected to that SMTP server (but no information about perfect forward secrecy support): https://www.checktls.com/ The following web-based tools check whether an SMTP server supports STARTTLS, but do not perform any of the other checks mentioned in the question: https://luxsci.com/extranet/tlschecker.html (see http://luxsci.com/blog/how-to-tell-who-supports-tls-for-email-transmission.html for introduction) https://mxtoolbox.com/ If you have to check only one or two, try SSL-Tools and StartTLS. | {
"source": [
"https://security.stackexchange.com/questions/58857",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/971/"
]
} |
58,940 | The official TrueCrypt webpage now states: WARNING: Using TrueCrypt is not secure as it may contain unfixed security
issues This page exists only to help migrate existing data encrypted by
TrueCrypt. The development of TrueCrypt was ended in 5/2014 after Microsoft
terminated support of Windows XP. Windows 8/7/Vista and later offer
integrated support for encrypted disks and virtual disk images. Such
integrated support is also available on other platforms (click here
for more information). You should migrate any data encrypted by
TrueCrypt to encrypted disks or virtual disk images supported on your
platform. with detailed instructions for how to migrate to BitLocker below. Is it an official announcement or just a tricky defacement attack? | At this point, it is still unclear. Speculation runs rampant as to whether it's a defacement or an official retirement. That said, it is noteworthy that the latest version of TrueCrypt (before the 7.2 version that's now posted) is over two years old. Also, no apparent effort has been made to support whole-disk encryption on Windows 8, which is even older than TrueCrypt 7.1a if you count the publicly-available pre-release versions of the former. Many Windows 8 users who used to rely on TrueCrypt have probably already migrated to BitLocker for whole-disk encryption, so moving the rest of their TrueCrypt-protected data (if they haven't already) is a logical next step anyway. For anyone else, it would probably be preferable to wait until this whole mess is cleared up. The first phase of the TrueCrypt audit, covering the bootloader and Windows kernel drivers, turned up less than a dozen vulnerabilities - the worst of which were rated as "Medium" severity. The report also said the source code "did not meet expected standards for secure code". One of their recommendations mentioned: Due to lax quality standards, TrueCrypt source is difficult to review and
maintain. This will make future bugs harder to find and correct. Another note stated: The current required Windows build environment
depends on outdated build tools and software packages that are hard to get from trustworthy
sources. All of this, along with the two-year lapse in new releases and lack of full support for the latest OSs, does lend credence to the belief that TrueCrypt's team may indeed be throwing in the towel. If they did choose to do that, then TrueCrypt would in fact become insecure in very much the same way as Windows XP now is - any newly discovered security vulnerabilities would not be patched. A key difference between TrueCrypt and Windows XP, however, is that compatible alternatives may still be developed and updated since TrueCrypt is open-source software. Still, the very sudden and unexpected announcement is definitely worth some amount of skepticism. Until there's been further validation of the news, I would suggest that you not trust anything posted on TrueCrypt's website or SourceForge page - especially not the new "7.2" download. Update: 2014-05-29 0645Z Brian Krebs has reported on the issue, and given some sound reasoning as to why this is not likely a hoax. Additionally, he mentions that the people behind IsTrueCryptAuditedYet.com will continue their work despite the software project's current status. Speculation still runs rampant online, of course. However, the continued anonymity of the TrueCrypt development team makes any undeniably authentic confirmation of their status nigh impossible. Matthew Green made a fair point in this tweet though: But more to the point, if the Truecrypt signing key was stolen & the TC devs can't let us know -- that's reason enough to be cautious. Really, regardless of the signing key's status specifically, the fact that the TrueCrypt developers can't (or at least so far appear to not even have made any efforts to) issue any separate and authoritative communication to validate what's happened to their website should be enough to raise significant concern. If the TrueCrypt team is calling it quits, it's time to move on and find/make alternatives. If not, their lack of out-of-band response to this incident raises serious questions (more so now than ever) as to how much we can really trust them to maintain the sort of software we want to trust with our most valuable secrets. Regardless of the status either way, it's probably best to seek alternative solutions. The recommendations on the TrueCrypt site aren't bad in general. However, they fall short of a few features TrueCrypt was known and loved for: Cross-platform compatibility Plausible deniability Hidden partitions Encrypted container files (you can do this with Bitlocker and VHDs, but it's not nearly as smooth and seamless as with TrueCrypt) Update: 2014-05-29 1450Z Jack Daniel has summed up my feelings on the topic quite well now, in a recent tweet: So, yeah: hack, troll, ragequit, whatever- silence means TrueCrypt org can't be trusted, so neither can TrueCrypt. Damn. Update: 2014-05-30 1545Z GRC has posted claims that the TrueCrypt developers have been heard from, via Steven Barnhart. https://www.grc.com/misc/truecrypt/truecrypt.htm If the source can be believed (again, the public anonymity of the TrueCrypt development team makes certain authentication nearly impossible) then TrueCrypt is indeed no longer being actively worked on by the original team. Additionally, the license prevents anyone else from legitimately being allowed to write a new "TrueCrypt" (though it is possible they may be able to fork it under a different name).
One important thing to note, though GRC perhaps is a bit overly dramatic about it and may even be over-stating its value, is that the latest fully-functional version of TrueCrypt (7.1a) is - to public knowledge - still "safe" to use. Until such time as significant, and exploitable, vulnerabilities are discovered there's really no reason to consider 7.1a as inherently any more "unsafe" at the time of the truecrypt.org announcement than it was at any time before. That said, one must also bear in mind (as noted earlier in this post) that any discovered vulnerabilities in TrueCrypt 7.1a will not be fixed in any future releases. Thus, it is still wise to begin seeking other alternatives. The same holds true here as it does for Windows XP - the only substantial difference being that XP has a much higher profile and will very likely accrue a very long list of un-patchable vulnerabilities (some likely exist already) much more quickly. The Open Crypto Audit Project has tweeted a link to a "trusted archive" of TrueCrypt versions for those seeking older copies no longer available on truecrypt.org: https://github.com/DrWhax/truecrypt-archive Thanks to @Xander for pointing out the GRC article. | {
"source": [
"https://security.stackexchange.com/questions/58940",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16789/"
]
} |
59,093 | Watching the Snowden interview last night, Brian Williams asks him what degree of control the NSA has over smartphones -- in particular, whether or not they can remotely turn them on in order to collect data. Snowden replies "Yes" and goes on to say some scary things about the kinds of data that government agencies can collect. I've never heard of this before. What kind of mechanism would facilitate this? Do iPhones have some kind of wake-on-LAN feature? Is this an actual feature which is well known, or conjecture by Snowden? I see this question provides concrete evidence in the case of smart TVs in addition to some hazy assertions that "anything is possible" -- has such a thing been demonstrated to exist? | There is a semantics issue at play here that makes answering definitively very difficult. What precisely did Mr. Snowden mean when he said "Yes they can turn your phone on"? Did he mean activate a device that is in a shutdown (not standby, low-power-ready-to-function) state? Doubtful. Did he mean activate a device in a low-power, standby state? Possibly. This is a no brainer, and exactly one of the features a "stand by" state is intended to facilitate. A carrier or gov agency exploiting it via code or warrant is nothing surprising. Did he mean 'turn on the microphone or other sensors when an active call is not in progress, to allow recording of ambient noises and conversations near the device?' Probably, and this has been a known capability of service providers, and thus government agencies, for some time.[1] [1] http://en.wikipedia.org/wiki/Covert_listening_device#Remotely_activated_mobile_phone_microphones | {
"source": [
"https://security.stackexchange.com/questions/59093",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47587/"
]
} |