source_id | question | response | metadata |
---|---|---|---|
20,814 | The title basically covers it. I want to know if using HTTPS, TLS, S/MIME, SSL will protect you from Deep Packet Inspection and Big Data Analytics? (I know that if someone fakes a cert, gets you to install a fake CA, or uses MITM software they can fool the average user; I want to know if a semi-diligent, semi-intelligent IT security person is safe using those technologies to encrypt the data and connections) | Deep packet inspection (DPI) is a term that commonly refers to standard network middle-men, such as the routers at an ISP, examining content at a protocol layer higher than the layer they need to in order to process the packet (thus inspecting "deeper" into the packet than necessary). For example, an IP router may only need to look at the IP layer (layer 3 in the OSI model) of the packet, but if it also inspects the application layer data (layer 5/7) then it is performing deep packet inspection. HTTPS (referring to the wider suite you mentioned) is an application layer protocol. It will defend against deep packet inspection that reads the content of the application layer, but not against DPI at lower levels of the protocol wrapping. For example, HTTPS will not prevent DPI from looking at the TCP packet and examining the destination port to guess what protocol it is for. But it will prevent the DPI from learning the actual application data payload of the protocol. Big Data Analysis refers to analysis performed on very large databases of collected data, but how that data is collected has little to do with HTTPS. HTTPS is only designed to protect data in network transit; once the destination server in the HTTPS exchange reads the data, the data is decrypted and HTTPS protection no longer exists. What happens at that point is up to whatever the client and server are using on top of HTTPS. Likely, the server can do pretty much whatever it wants at that point. (In other words, Big Data generally refers to Data At Rest, whereas HTTPS protects Data In Motion. Since they address different stages of the data, it makes sense that protection in one stage won't apply to data when it is moved to a different stage.) | {
"source": [
"https://security.stackexchange.com/questions/20814",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13466/"
]
} |
20,941 | How can I decrypt a message that was encrypted with a one-time pad? Would I need to get that pad, and then reverse the process? I'm confused here. | One-Time Pad is unbreakable, assuming the pad is perfectly random, kept secret, used only once, and no plaintext is known. This is due to the properties of the exclusive-or (xor) operation. Here's its truth table: A xor B = X
A | B | X
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
Number of 0s in column A = 2
Number of 1s in column A = 2
Number of 0s in column B = 2
Number of 1s in column B = 2
Number of 0s in column X = 2
Number of 1s in column X = 2 Note that it introduces no bit-skew - the number of 0s and 1s in the inputs is equal to the number of 0s and 1s in the output, i.e. two of each. Furthermore, if you know only one element from a row, you cannot predict the values of the other two, since they are equally probable. For example, let's say we know that X is 0. There's an equal probability that A = 0 and B = 0, or A = 1 and B = 1. Now let's say we know that X is 1. There's an equal probability that A = 0 and B = 1, or A = 1 and B = 0. It's impossible to predict. So, if you only know one element, you cannot possibly determine any information about A or B. The next interesting property is that it is reversible, i.e. A xor A = 0
B xor B = 0
A xor 0 = A
B xor 0 = B
A xor B xor B = A xor 0 = A
A xor B xor A = B xor 0 = B So, if we take any value and xor it with itself, the result is cancelled out and it always results in 0. This means that, if we xor a value A with a value B, then later xor that result with either A or B, we get B or A respectively. The operation is reversible. This lends itself well to cryptography, because: xor introduces no bit-skew; xor has equally probable inputs for any given output; and given any two of A, B, X we can compute the third. As such, the following is perfectly secure: ciphertext = message xor key but only if message is the same length as key, key is perfectly random, key is only used once, and only one element is known to an attacker. If they know the ciphertext, but not the key or message, it's useless to them. They cannot possibly break it. In order to decrypt the message, you must know the entire key and the ciphertext. Keep in mind that the key must be completely random, i.e. every bit must have an equal probability of being 1 or 0, and be completely independent of all other bits in the key. This actually turns out to be rather impractical, for a few reasons: Generating perfectly random keys is hard. Software generators (and many hardware generators) often have minuscule biases and odd repeating properties. It's almost impossible to gain truly random data in anything but tiny amounts. If the attacker knows the ciphertext and can correctly guess parts of the message (e.g. he knows it's a Windows executable, and therefore must start with MZ ) he can get the corresponding key bits for the known range. These bits are useless for decrypting other parts of the message, but can reveal patterns in the key if it's poorly generated. You must be able to distribute the key, and your key must be as long as your message. If you can keep your key 100% secret between those of you who are authorised to read the message, why not just keep your message 100% secret instead? The weak link here is your random number generator. The security of the one-time pad is entirely limited by the security of your generator. Since a perfect generator is almost impossible, a perfect one-time pad is almost impossible too. The final problem is that the key can only be used once. If you use it for two different messages, and the attacker knows both ciphertexts, he can xor them together to get the xor of the two plaintexts. This leaks all sorts of information (e.g. which bits are equal) and completely breaks the cipher. So, in conclusion, in a perfect one-time pad you need to know the ciphertext and key in order to decrypt it, but perfect one-time pads are almost impossible. | {
"source": [
"https://security.stackexchange.com/questions/20941",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13659/"
]
} |
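A minimal Python sketch of the mechanics described in the answer above: XOR a message with a random pad of the same length, then XOR again with the same pad to recover it. The message and key handling here are purely illustrative, not a production design.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"ATTACK AT DAWN"
key = os.urandom(len(message))           # the pad: random, as long as the message, used once

ciphertext = xor_bytes(message, key)     # encrypt: C = M xor K
recovered = xor_bytes(ciphertext, key)   # decrypt: C xor K = M xor K xor K = M

assert recovered == message
print(ciphertext.hex())
```

Reusing `key` for a second message would let an eavesdropper XOR the two ciphertexts together and cancel the pad out, which is exactly the key-reuse failure the answer warns about.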
20,951 | I was looking at the tripartite structure of a virus, and it seems to check to see if a computer is infected before infecting it with the virus. Would this be an attempt to use files that are already present to infect, instead of transferring another copy of the virus onto the machine? | The goal of most malware is to remain active as long as possible. The longer it can collect keystrokes, participate in DDoS attacks, redirect search results, send spam emails, show popup ads, etc., the more profitable it is for the creator. To reach this goal, it has to remain undetected. If a piece of malware infects a machine twice, it may leave the machine in an undefined state, or cause conflicts. It may also eat up more resources than normal. This can lead to detection through a variety of operations: Attempting to lock the same file twice, causing a crash. Injecting into running processes twice, causing memory corruption and crashes. Infecting the same file twice, causing it to be corrupted. Attempting to install multiple hooks on the same APIs / objects / system messages, causing erratic or undefined behaviour. Increased CPU, network and disk usage from the burden of having multiple copies installed. The fewer resources used and the less disturbance caused, the less likely the user is to notice something is wrong. Once a user notices something isn't working properly, they'll try to fix it, which might result in the malware being removed. As such, it's better for the malware creator to prevent these problems and remain covert. | {
"source": [
"https://security.stackexchange.com/questions/20951",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13659/"
]
} |
21,027 | I'm seeing a lot of log entries that appear to be failed login attempts from unknown IP addresses. I am using private and public keys to log in with SSH, but I have noticed that even with private and public keys set I am able to log in to my server with filezilla without running pageant . Is this normal? What should I do to further protect myself from what seems like a brute force attack? Here's the log: Oct 3 14:11:52 xxxxxx sshd[29938]: Invalid user postgres from 212.64.151.233
Oct 3 14:11:52 xxxxxx sshd[29938]: input_userauth_request: invalid user postgres [preauth]
Oct 3 14:11:52 xxxxxx sshd[29938]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:52 xxxxxx sshd[29940]: Invalid user postgres from 212.64.151.233
Oct 3 14:11:52 xxxxxx sshd[29940]: input_userauth_request: invalid user postgres [preauth]
Oct 3 14:11:52 xxxxxx sshd[29940]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:52 xxxxxx sshd[29942]: Invalid user postgres from 212.64.151.233
Oct 3 14:11:52 xxxxxx sshd[29942]: input_userauth_request: invalid user postgres [preauth]
Oct 3 14:11:52 xxxxxx sshd[29942]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:52 xxxxxx sshd[29944]: Invalid user postgres from 212.64.151.233
Oct 3 14:11:52 xxxxxx sshd[29944]: input_userauth_request: invalid user postgres [preauth]
Oct 3 14:11:52 xxxxxx sshd[29944]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:52 xxxxxx sshd[29946]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:52 xxxxxx sshd[29948]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:52 xxxxxx sshd[29950]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:52 xxxxxx sshd[29952]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:53 xxxxxx sshd[29954]: Invalid user admin from 212.64.151.233
Oct 3 14:11:53 xxxxxx sshd[29954]: input_userauth_request: invalid user admin [preauth]
Oct 3 14:11:53 xxxxxx sshd[29954]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:53 xxxxxx sshd[29956]: Invalid user admin from 212.64.151.233
Oct 3 14:11:53 xxxxxx sshd[29956]: input_userauth_request: invalid user admin [preauth]
Oct 3 14:11:53 xxxxxx sshd[29956]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:53 xxxxxx sshd[29958]: Invalid user admin from 212.64.151.233
Oct 3 14:11:53 xxxxxx sshd[29958]: input_userauth_request: invalid user admin [preauth]
Oct 3 14:11:53 xxxxxx sshd[29958]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:53 xxxxxx sshd[29960]: User mysql not allowed because account is locked
Oct 3 14:11:53 xxxxxx sshd[29960]: input_userauth_request: invalid user mysql [preauth]
Oct 3 14:11:53 xxxxxx sshd[29960]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:53 xxxxxx sshd[29962]: User mysql not allowed because account is locked
Oct 3 14:11:53 xxxxxx sshd[29962]: input_userauth_request: invalid user mysql [preauth]
Oct 3 14:11:53 xxxxxx sshd[29962]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:53 xxxxxx sshd[29964]: Invalid user prueba from 212.64.151.233
Oct 3 14:11:53 xxxxxx sshd[29964]: input_userauth_request: invalid user prueba [preauth]
Oct 3 14:11:53 xxxxxx sshd[29964]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:53 xxxxxx sshd[29966]: Invalid user prueba from 212.64.151.233
Oct 3 14:11:53 xxxxxx sshd[29966]: input_userauth_request: invalid user prueba [preauth]
Oct 3 14:11:53 xxxxxx sshd[29966]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:53 xxxxxx sshd[29968]: Invalid user usuario from 212.64.151.233
Oct 3 14:11:53 xxxxxx sshd[29968]: input_userauth_request: invalid user usuario [preauth]
Oct 3 14:11:53 xxxxxx sshd[29968]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:54 xxxxxx sshd[29970]: Invalid user usuario from 212.64.151.233
Oct 3 14:11:54 xxxxxx sshd[29970]: input_userauth_request: invalid user usuario [preauth]
Oct 3 14:11:54 xxxxxx sshd[29970]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:54 xxxxxx sshd[29972]: Invalid user admin from 212.64.151.233
Oct 3 14:11:54 xxxxxx sshd[29972]: input_userauth_request: invalid user admin [preauth]
Oct 3 14:11:54 xxxxxx sshd[29972]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:54 xxxxxx sshd[29974]: Invalid user nagios from 212.64.151.233
Oct 3 14:11:54 xxxxxx sshd[29974]: input_userauth_request: invalid user nagios [preauth]
Oct 3 14:11:54 xxxxxx sshd[29974]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:54 xxxxxx sshd[29976]: Invalid user nagios from 212.64.151.233
Oct 3 14:11:54 xxxxxx sshd[29976]: input_userauth_request: invalid user nagios [preauth]
Oct 3 14:11:54 xxxxxx sshd[29976]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:54 xxxxxx sshd[29978]: Invalid user nagios from 212.64.151.233
Oct 3 14:11:54 xxxxxx sshd[29978]: input_userauth_request: invalid user nagios [preauth]
Oct 3 14:11:54 xxxxxx sshd[29978]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:54 xxxxxx sshd[29980]: Invalid user nagios from 212.64.151.233
Oct 3 14:11:54 xxxxxx sshd[29980]: input_userauth_request: invalid user nagios [preauth]
Oct 3 14:11:54 xxxxxx sshd[29980]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:54 xxxxxx sshd[29982]: Invalid user oracle from 212.64.151.233
Oct 3 14:11:54 xxxxxx sshd[29982]: input_userauth_request: invalid user oracle [preauth]
Oct 3 14:11:54 xxxxxx sshd[29982]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:54 xxxxxx sshd[29984]: Invalid user oracle from 212.64.151.233
Oct 3 14:11:54 xxxxxx sshd[29984]: input_userauth_request: invalid user oracle [preauth]
Oct 3 14:11:54 xxxxxx sshd[29984]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:54 xxxxxx sshd[29986]: Invalid user oracle from 212.64.151.233
Oct 3 14:11:54 xxxxxx sshd[29986]: input_userauth_request: invalid user oracle [preauth]
Oct 3 14:11:54 xxxxxx sshd[29986]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:55 xxxxxx sshd[29988]: Invalid user oracle from 212.64.151.233
Oct 3 14:11:55 xxxxxx sshd[29988]: input_userauth_request: invalid user oracle [preauth]
Oct 3 14:11:55 xxxxxx sshd[29988]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:55 xxxxxx sshd[29990]: Invalid user ftpuser from 212.64.151.233
Oct 3 14:11:55 xxxxxx sshd[29990]: input_userauth_request: invalid user ftpuser [preauth]
Oct 3 14:11:55 xxxxxx sshd[29990]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:55 xxxxxx sshd[29992]: Invalid user ftpuser from 212.64.151.233
Oct 3 14:11:55 xxxxxx sshd[29992]: input_userauth_request: invalid user ftpuser [preauth]
Oct 3 14:11:55 xxxxxx sshd[29992]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:55 xxxxxx sshd[29994]: Invalid user ftpuser from 212.64.151.233
Oct 3 14:11:55 xxxxxx sshd[29994]: input_userauth_request: invalid user ftpuser [preauth]
Oct 3 14:11:55 xxxxxx sshd[29994]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:55 xxxxxx sshd[29996]: Invalid user guest from 212.64.151.233
Oct 3 14:11:55 xxxxxx sshd[29996]: input_userauth_request: invalid user guest [preauth]
Oct 3 14:11:55 xxxxxx sshd[29996]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:55 xxxxxx sshd[29998]: Invalid user guest from 212.64.151.233
Oct 3 14:11:55 xxxxxx sshd[29998]: input_userauth_request: invalid user guest [preauth]
Oct 3 14:11:55 xxxxxx sshd[29998]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:55 xxxxxx sshd[30000]: Invalid user guest from 212.64.151.233
Oct 3 14:11:55 xxxxxx sshd[30000]: input_userauth_request: invalid user guest [preauth]
Oct 3 14:11:55 xxxxxx sshd[30000]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:55 xxxxxx sshd[30002]: Invalid user guest from 212.64.151.233
Oct 3 14:11:55 xxxxxx sshd[30002]: input_userauth_request: invalid user guest [preauth]
Oct 3 14:11:55 xxxxxx sshd[30002]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:56 xxxxxx sshd[30004]: Invalid user test from 212.64.151.233
Oct 3 14:11:56 xxxxxx sshd[30004]: input_userauth_request: invalid user test [preauth]
Oct 3 14:11:56 xxxxxx sshd[30004]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:56 xxxxxx sshd[30006]: Invalid user test from 212.64.151.233
Oct 3 14:11:56 xxxxxx sshd[30006]: input_userauth_request: invalid user test [preauth]
Oct 3 14:11:56 xxxxxx sshd[30006]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:56 xxxxxx sshd[30008]: Invalid user test from 212.64.151.233
Oct 3 14:11:56 xxxxxx sshd[30008]: input_userauth_request: invalid user test [preauth]
Oct 3 14:11:56 xxxxxx sshd[30008]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:56 xxxxxx sshd[30010]: Invalid user test from 212.64.151.233
Oct 3 14:11:56 xxxxxx sshd[30010]: input_userauth_request: invalid user test [preauth]
Oct 3 14:11:56 xxxxxx sshd[30010]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:56 xxxxxx sshd[30012]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:56 xxxxxx sshd[30014]: Invalid user user from 212.64.151.233
Oct 3 14:11:56 xxxxxx sshd[30014]: input_userauth_request: invalid user user [preauth]
Oct 3 14:11:56 xxxxxx sshd[30014]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:56 xxxxxx sshd[30016]: Invalid user user from 212.64.151.233
Oct 3 14:11:56 xxxxxx sshd[30016]: input_userauth_request: invalid user user [preauth]
Oct 3 14:11:56 xxxxxx sshd[30016]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:56 xxxxxx sshd[30018]: Invalid user user from 212.64.151.233
Oct 3 14:11:56 xxxxxx sshd[30018]: input_userauth_request: invalid user user [preauth]
Oct 3 14:11:56 xxxxxx sshd[30018]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:57 xxxxxx sshd[30020]: Invalid user user from 212.64.151.233
Oct 3 14:11:57 xxxxxx sshd[30020]: input_userauth_request: invalid user user [preauth]
Oct 3 14:11:57 xxxxxx sshd[30020]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:57 xxxxxx sshd[30022]: Invalid user jboss from 212.64.151.233
Oct 3 14:11:57 xxxxxx sshd[30022]: input_userauth_request: invalid user jboss [preauth]
Oct 3 14:11:57 xxxxxx sshd[30022]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:57 xxxxxx sshd[30024]: Invalid user jboss from 212.64.151.233
Oct 3 14:11:57 xxxxxx sshd[30024]: input_userauth_request: invalid user jboss [preauth]
Oct 3 14:11:57 xxxxxx sshd[30024]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:57 xxxxxx sshd[30026]: Invalid user squid from 212.64.151.233
Oct 3 14:11:57 xxxxxx sshd[30026]: input_userauth_request: invalid user squid [preauth]
Oct 3 14:11:57 xxxxxx sshd[30026]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:57 xxxxxx sshd[30028]: Invalid user squid from 212.64.151.233
Oct 3 14:11:57 xxxxxx sshd[30028]: input_userauth_request: invalid user squid [preauth]
Oct 3 14:11:57 xxxxxx sshd[30028]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:57 xxxxxx sshd[30030]: Invalid user temp from 212.64.151.233
Oct 3 14:11:57 xxxxxx sshd[30030]: input_userauth_request: invalid user temp [preauth]
Oct 3 14:11:57 xxxxxx sshd[30030]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:57 xxxxxx sshd[30032]: Invalid user svn from 212.64.151.233
Oct 3 14:11:57 xxxxxx sshd[30032]: input_userauth_request: invalid user svn [preauth]
Oct 3 14:11:57 xxxxxx sshd[30032]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth]
Oct 3 14:11:57 xxxxxx sshd[30034]: Invalid user ts from 212.64.151.233
Oct 3 14:11:57 xxxxxx sshd[30034]: input_userauth_request: invalid user ts [preauth]
Oct 3 14:11:57 xxxxxx sshd[30034]: Received disconnect from 212.64.151.233: 11: Bye Bye [preauth] | It is very common. Many botnets try to spread that way, so this is a wide-scale, mindless attack. Mitigation measures include: Use passwords with high entropy which are very unlikely to be brute-forced. Disable SSH login for root . Use an "unlikely" user name, which botnets will not use. Disable password-based authentication altogether. Run the SSH server on a port other than 22. Use fail2ban to reject attackers' IPs automatically or slow them down. Allow SSH connections only from a whitelist of IPs (beware not to lock yourself out if your home IP is nominally dynamic!). Most of these measures are about keeping your log files small; even when the brute force does not succeed, the thousands of log entries are a problem since they can hide actual targeted attacks. A bit of security through obscurity (such as the unlikely user name and the port change) works marvels against mindless attackers: yeah, security through obscurity is bad and wrong and so on, but sometimes it works and you will not get fried by a vengeful deity if you use it sensibly. A high-entropy password will be effective against intelligent attackers, though, and can only be recommended in all situations. | {
"source": [
"https://security.stackexchange.com/questions/21027",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12705/"
]
} |
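The fail2ban suggestion in the answer above boils down to counting pre-authentication failures per source address and banning noisy offenders. A rough Python sketch of that idea follows; the log path and threshold are assumptions for illustration, and a real deployment should simply use fail2ban or a similar tool rather than this.

```python
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"   # assumed location; varies by distribution
THRESHOLD = 10                   # arbitrary cut-off chosen for this example

# Matches lines like: sshd[29938]: Invalid user postgres from 212.64.151.233
invalid_user = re.compile(r"sshd\[\d+\]: Invalid user \S+ from (\d{1,3}(?:\.\d{1,3}){3})")

counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = invalid_user.search(line)
        if match:
            counts[match.group(1)] += 1

for ip, hits in counts.most_common():
    if hits >= THRESHOLD:
        # A real tool would insert a firewall rule here (iptables/nftables) instead of printing.
        print(f"would ban {ip} ({hits} failed pre-auth attempts)")
```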
21,036 | I'm doing a credential harvesting attack using SET from backtrack. I want to sent a spoof email using an open relay server. However, any outbound connection for smtp is blocked by the firewall. I want to know is it possible that using tor i can tunnel my traffic and go un-detected? If not possible with tor what others ways i can use to tunnel the traffic. | It is very common. Many botnets try to spread that way, so this is a wide scale mindless attack. Mitigation measures include: Use passwords with high entropy which are very unlikely to be brute-forced. Disable SSH login for root . Use an "unlikely" user name, which botnets will not use. Disable password-based authentication altogether. Run the SSH server on another port than 22. Use fail2ban to reject attackers' IP automatically or slow them down. Allow SSH connections only from a whitelist of IP (beware not to lock yourself out if your home IP is nominally dynamic !). Most of these measures are about keeping your log files small; even when the brute force does not succeed, the thousands of log entries are a problem since they can hide actual targeted attacks. A bit of security through obscurity (such as the unlikely user name and the port change) works marvels against mindless attackers: yeah, security through obscurity is bad and wrong and so on, but sometimes it works and you will not get fried by a vengeful deity if you use it sensibly . A high entropy password will be effective against intelligent attackers, though, and can only be recommended in all situations. | {
"source": [
"https://security.stackexchange.com/questions/21036",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/21272/"
]
} |
21,050 | Whenever I look at password entropy, the only equation I ever see is E = log2(R^L) = log2(R) * L, where E is password entropy, R is the range of available characters, and L is the password length. I was wondering if there are any alternate equations for calculating entropy, which factor weak passwords into the equation. For instance, passwords with sequential characters ( 0123456789 ), common phrases ( logmein ), repeating words ( happyhappy ) or words with numbers appended ( password1 ) would all receive a lower entropy grade due to their various shortcomings. Does such an equation exist? If so, is it commonly used in the security field, or do people tend to stick with the "standard equation"? | There are equations for when the password is chosen randomly and uniformly from a given set; namely, if the set has size N then the entropy is that of N equally likely possibilities (to express it in bits, take the base-2 logarithm of N ). For instance, if the password is a sequence of exactly 8 lowercase letters, such that all sequences of 8 lowercase characters could have been chosen and no sequence was to be chosen with higher probability than any other, then N = 26^8 = 208827064576 , i.e. about 37.6 bits of entropy (because this value is close to 2^37.6 ). Such a nice formula works only as long as uniform randomness occurs, and, let's face it, uniform randomness cannot occur in the average human brain. For human-chosen passwords, we can only do estimates based on surveys (have a look at that for some pointers). What must be remembered is that entropy qualifies the password generation process , not the password itself. By definition, "password meter" applications and Web sites do not see the process, only the result, and uniformly return poor results (e.g. they will tell you that "BillClinton" is a good password). When the process is an in-brain one, anything goes. (I generate my passwords with a computer, not with my head, and I encourage people to do the same.) | {
"source": [
"https://security.stackexchange.com/questions/21050",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3676/"
]
} |
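A small Python illustration of the "standard equation" from the question, E = log2(R^L) = L * log2(R), reproducing the 8-lowercase-letter example from the answer:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Entropy, in bits, of a password drawn uniformly at random: L * log2(R)."""
    return length * math.log2(charset_size)

print(entropy_bits(26, 8))    # 8 random lowercase letters      -> ~37.6 bits
print(entropy_bits(62, 10))   # 10 random alphanumeric characters -> ~59.5 bits
```

As the answer stresses, this figure describes the generation process; it says nothing useful about a password a human invented by hand.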
21,112 | On the 2nd of October NIST decided that SHA-3 is the new standard hashing algorithm; does this mean we need to stop using SHA-2 as it is not secure? What is this SHA-3 anyway? | The SHA-3 hash competition was an open process by which the NIST defined a new standard hash function (standard for US federal usages, but things are such that this will probably become a worldwide de facto standard). The process was initiated in 2007. At that time, a number of weaknesses and attacks had been found on the predecessors of the SHA-2 functions (SHA-256, SHA-512...), namely MD5 and SHA-1, so it was feared that SHA-256 would soon be "broken" or at least weakened. Since choosing and specifying a cryptographic primitive takes time, NIST began the SHA-3 process, with the unspoken but clearly understood intention of finding a replacement for SHA-2. SHA-2 turned out to be more robust than expected. We do not really know why; there are some half-baked arguments (the message expansion is non-linear, the function accumulates twice as many elementary operations as SHA-1...) but there is also a suspicion that SHA-256 remained unharmed because all the researchers were busy working on the SHA-3 candidates. Anyway, with SHA-2 doom being apparently postponed indefinitely, NIST shifted its objectives, and instead of choosing a replacement , they defined a backup plan : a function which can be kept in a glass cabinet, to be used in case of emergency. Correspondingly, performance lost most of its relevance. This highlights the choice of Keccak : among the competition finalists, it was the function which was most different from both SHA-2 and the AES; so it reduced the risk that all standard cryptographic algorithms be broken simultaneously, and NIST metaphorically be caught with their kilt down. Let's not be hasty: not only is SHA-2 still fine (both officially and scientifically), but SHA-3 is just announced : it will take a few more months before we can get a specification (although we can prepare implementations based on what was submitted for the competition). What must be done now (and should have been done a decade ago, really) is to prepare protocols and applications for algorithm agility , i.e. the ability to switch functions if the need arises (in the same way that SSL/TLS has "cipher suites"). | {
"source": [
"https://security.stackexchange.com/questions/21112",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3339/"
]
} |
21,143 | There seem to be many different 'kinds' of entropy. I've come across two different concepts: A) The XKCD example of correcthorsebatterystaple . It has 44 bits of entropy because four words randomly chosen from a list of 2048 words is 4 * log2(2048) = 44 bits of entropy. This I understand. B) The Shannon entropy of the actual string i.e. the entropy is calculated based on frequencies of the letters/symbols. Applying the Shannon formula on correcthorsebatterystaple the result is 3.36 bits of entropy per character. # from http://stackoverflow.com/a/2979208
import math
def entropy(string):
"Calculates the Shannon entropy of a string"
# get probability of chars in string
prob = [ float(string.count(c)) / len(string) for c in dict.fromkeys(list(string)) ]
# calculate the entropy
entropy = - sum([ p * math.log(p) / math.log(2.0) for p in prob ])
return entropy
print(entropy('correcthorsebatterystaple'))
# => 3.36385618977 Wikipedia only adds to my confusion: It is important to realize the difference between the entropy of a set of possible outcomes, and the entropy of a particular outcome. A single toss of a fair coin has an entropy of one bit, but a particular result (e.g. "heads") has zero entropy, since it is entirely "predictable". -- Wikipedia: Entropy (information theory) I don't quite understand the distinction between the entropy of the toss (generation) and the entropy of the result (the string). When is B used and for what purpose? Which concept accurately reflects the entropy of the password? Is there terminology to differentiate between the two? True randomness could give us correctcorrectcorrectcorrect . Using
A we still have 44 bits. Using B the entropy would be the same as
that of correct . When is the difference between the two important? If a requirement specifies that a string needs to have 20 bits of
entropy—do I use A or B to determine the entropy? | The Wikipedia article explains mathematical entropy, which isn't identical to what people mean when they talk about password entropy. Password entropy is more about how hard it is to guess a password under certain assumptions, which is different from the mathematical concept of entropy. A and B are not different concepts of password entropy; they just use different assumptions about how a password is built. A treats correcthorsebatterystaple as a string of English words and assumes that words are randomly selected from a collection of 2048 words. Based on these assumptions each word gives exactly 11 bits of entropy and 44 bits of entropy for correcthorsebatterystaple . B treats correcthorsebatterystaple as a string of characters and assumes that the probability of any character appearing is the same as it is in the English language. Based on these assumptions correcthorsebatterystaple has 84 bits of entropy. So which definition you use really depends on what assumptions you make about the password. If you assume the password is an XKCD-style password (and that each word indeed has a chance of one in 2048 to appear in the password) then A is the correct way to calculate entropy. If you don't assume the password is built as a collection of words but do assume that the probability of any character appearing is equal to the probability of its appearance in the English language then B is the correct way to calculate entropy. In the real world none of these assumptions are correct. So if you have a "requirement that specifies that a string needs to have 20 bits of entropy" and this is for user-generated passwords it's very difficult to give a precise definition of entropy. For more on this see Calculating password entropy? . If, on the other hand, you can use computer-generated strings (and are using a good PRNG) then each alphanumeric character (a-z, A-Z, 0-9) will give almost 6 bits of entropy. | {
"source": [
"https://security.stackexchange.com/questions/21143",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13753/"
]
} |
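The two figures quoted in the answer can be reproduced directly: estimate A counts random word choices, while estimate B multiplies a per-character entropy by the string length (here approximated, as in the question's own script, from the string's letter frequencies, which lands on roughly the same ~84-bit figure the answer cites). A quick sketch:

```python
import math
from collections import Counter

password = "correcthorsebatterystaple"

# Estimate A: four words chosen uniformly at random from a 2048-word list.
bits_a = 4 * math.log2(2048)            # 44.0 bits

# Estimate B: per-character entropy times the number of characters.
freqs = Counter(password)
per_char = -sum((n / len(password)) * math.log2(n / len(password)) for n in freqs.values())
bits_b = per_char * len(password)       # ~84 bits

print(f"A: {bits_a:.1f} bits, B: {bits_b:.1f} bits")
```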
21,168 | The card in question is a VISA, if that's of any importance. I've noticed this only on Amazon. All other sites I've purchased something from, ever , have needed the CVC code for the card. However, I know I never entered the CVC on Amazon when I added my card to it, and this has been bugging me ever since. How do they successfully charge the card without the CVC code? | That code isn't necessary. This may cause more fraud and more chargebacks, but Amazon keeps those numbers low so that they can offer a faster shopping experience such as one-click. The only thing necessary to make a purchase is the card number and, in all but rare cases, expiration date, whether in number form or magnetic. Most systems require more information (such as matching full name, bank phone number, physical billing address with zip code, et al) so that they can deal with fraud and/or chargebacks, and sometimes this is enforced by the issuing bank. | {
"source": [
"https://security.stackexchange.com/questions/21168",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7410/"
]
} |
21,238 | This is something I never understood about hashing functions. I know that algorithms like whirlpool and blowfish both produce outputs that don't follow this pattern, but why is it that most do? Is it some kind of hardware/software thing? If they did produce outputs that went a-z0-9 instead of a-f0-9, wouldn't that increase their complexity? | It's just hex encoded. A 16-byte MD5 digest can contain non-printable bytes, so it is conventionally encoded as a 32-character hexadecimal string, which only uses the characters 0-9 and a-f. | {
"source": [
"https://security.stackexchange.com/questions/21238",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10212/"
]
} |
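A two-line Python demonstration of the point above: the raw MD5 digest is 16 bytes of arbitrary (often non-printable) data, and the familiar a-f0-9 form is simply its hexadecimal encoding.

```python
import hashlib

digest = hashlib.md5(b"hello world").digest()
print(len(digest), digest)                      # 16 raw bytes, many of them non-printable
print(hashlib.md5(b"hello world").hexdigest())  # 5eb63bbbe01eeed093cb22bb8f5acdc3 (32 hex chars)
```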
21,305 | I've heard of these tools and from what I understand, they just send tons of random data at different services and observe their reaction to it. What is the purpose of a fuzzer? How can it be applied during a pentest? | What is the purpose of a fuzzer? A fuzzer tries to elicit an unexpected reaction from the target software by providing input that wasn't properly planned for. It does this by throwing "creatively constructed" data as input to software. Expecting a phone number? Ha! I'm going to give you 1,024 characters of 0x00! Random hex! Unicode! A zero-width field! Just the punctuation: "()-" The goal is to get an error response that isn't clean... extra access, a buffer overflow, etc. Improperly handled input is essentially the crux of just about every security problem, whether by extending into unwanted memory ranges or any number of other reactions. An SQL error came back from the phone number field? SQL Injection time! So, for all that, fuzzers are useful in finding flaws, and particularly during a pentest for using those flaws as a point to try and launch exploits from. | {
"source": [
"https://security.stackexchange.com/questions/21305",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4556/"
]
} |
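A toy illustration of the idea in Python: throw unexpected inputs at an input handler and record anything that blows up in an unplanned way. The parse_phone_number function is a made-up stand-in for real input-handling code, and real fuzzers (AFL, libFuzzer, Burp Intruder, and so on) are far more sophisticated than this sketch.

```python
import random
import string

def parse_phone_number(value: str) -> str:
    """Hypothetical target standing in for real input-handling code."""
    digits = "".join(ch for ch in value if ch.isdigit())
    if len(digits) != 10:
        raise ValueError("expected 10 digits")
    return digits

test_cases = [
    "",                      # empty field
    "\x00" * 1024,           # 1,024 NUL bytes
    "()-",                   # just the punctuation
    "' OR '1'='1",           # classic injection probe
] + ["".join(random.choices(string.printable, k=50)) for _ in range(5)]

for case in test_cases:
    try:
        parse_phone_number(case)
    except ValueError:
        pass                  # a clean, expected rejection
    except Exception as exc:  # anything else is an interesting crash worth investigating
        print(f"unexpected {type(exc).__name__} for input {case!r}")
```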
21,413 | Many security scanners like nikto , nessus , nmap , and w3af sometimes show that certain HTTP Methods like HEAD, GET, POST, PUT, DELETE, TRACE, OPTIONS, CONNECT, etc. are vulnerable to attack. What do these methods do and how can they be exploited? I'm looking for something more creative than common exploits like POST or GET injections (e.g., changed fields). It would help me to understand if your answer showed me a brief example of the normal usage of the header as compared to an exploit technique of a header. | Some of these methods are typically dangerous to expose, and some are just extraneous in a production environment, which could be considered extra attack surface. Still, it is worth shutting those off too, since you probably won't need them: HEAD, GET, POST, CONNECT - these are completely safe, at least as far as the HTTP Method itself. Of course, the request itself may have malicious parameters, but that is separate from the Method... these are typically (note exception below) the only ones that should be enabled. PUT, DELETE - as mentioned by @Justin, these methods were originally intended as file management operations. Some web servers still support these in their original format. That is, you can change or delete files from the server's file system, arbitrarily. Obviously, if these are enabled, it opens you to some dangerous attacks. File access permissions should be very strictly limited, if you absolutely MUST have these methods enabled. But you shouldn't, anyway - nowadays, there are simple scripts you can use (if this is a static website - if it's an actual application, just code it yourself) to support this feature if you need it. NOTE: One valid scenario to enable these methods (PUT and DELETE) is if you are developing a strictly RESTful API or service; however, in this case the method would be handled by your application code, and not the web server. OPTIONS - this is a diagnostic method, which returns a message useful mainly for debugging and the like. This message basically reports, surprisingly, which HTTP Methods are active on the webserver. In reality, this is rarely used nowadays for legitimate purposes, but it does grant a potential attacker a little bit of help: it can be considered a shortcut to find another hole. Now, this by itself is not really a vulnerability; but since there is no real use for it, it just affects your attack surface, and ideally should be disabled. NOTE: Despite the above, the OPTIONS method IS used for several legitimate purposes nowadays; for example, some REST APIs require an OPTIONS request, CORS requires pre-flight requests, and so on. So there definitely are scenarios wherein OPTIONS should be enabled, but the default should still be "disabled unless required". TRACE - this is the surprising one... Again, a diagnostic method (as @Jeff mentioned) that returns, in the response body, the entire HTTP Request. This includes the request body, but also the request headers, including e.g. cookies, authorization headers, and more. Not too surprisingly, this can be substantially misused, such as the classic Cross-Site Tracing (XST) attack, wherein an XSS vector can be utilized to retrieve HttpOnly cookies, authorization headers, and such. This should definitely be disabled. One other set of Methods bears mentioning: ALL OTHERS . For some webservers, in order to enable/disable/restrict certain HTTP Methods, you explicitly set them one way or another in the configuration file.
However, if no default is set, it can be possible to "inject" additional methods, bypassing certain access controls that the web server may have implemented (poorly). See for example some more info on OWASP. | {
"source": [
"https://security.stackexchange.com/questions/21413",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4556/"
]
} |
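What a scanner's method check boils down to can be approximated with the Python standard library alone: send an OPTIONS request and read the Allow header, if the server reports one. The host below is a placeholder; only probe systems you are authorized to test.

```python
import http.client

host = "www.example.com"   # placeholder target

conn = http.client.HTTPSConnection(host, timeout=10)
conn.request("OPTIONS", "/")
response = conn.getresponse()

print(response.status, response.reason)
print("Allow:", response.getheader("Allow"))   # e.g. "GET, HEAD, OPTIONS", or None if not reported
conn.close()
```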
21,509 | I did a Wireshark capture of my login into a drupal-based website. The website does not use https. And so, quite obviously, I was able to capture my username and password in plain text by simply using the http.request.method==POST filter in Wireshark. Then, I tried to access the same web page by manually adding a s to http in the url. Quite naturally, my browser showed this: Then I went ahead and did my login again, and did the Wireshark capture again. To my surprise, I do not have the any captures corresponding to the http.request.method==POST filter. So, my questions: When I am not really using https, why wasn't I able to capture login id and password in plain text? What effect did manually adding s to http have? | Despite what you may think, you actually were using HTTPS. This is perhaps an over-simplification, but here's more or less what happened: When you accessed the website with http:// as the protocol in your Address Bar, you effectively told your computer "Make a request for this webpage, from this server, and send that request to port 80 on the server". If the server is configured to allow access via plain HTTP (apparently, it was), the server will respond and your session (unless re-directed) will continue over HTTP. By changing the protocol to https:// , you instead directed your computer to attempt an SSL/TLS handshake via port 443 on the server and then send the HTTP requests through that tunnel. Obviously, this worked or you would not have been able to access the page. Since the SSL/TLS connection was successful, that meant that all subsequent HTTP requests sent through that tunnel would be secured from casual eavesdropping (e.g.: via Wireshark). But now, you ask, what about that nasty red slash through https , and the "x" on the lock? These indicators do not mean that the SSL/TLS connection was unsuccessful, or that your communications to the website are not encrypted. All these negative indicators mean is that the website is not signed by an authority your browser recognizes. This is often the case for the web interfaces on SOHO networked equipment, business applications, or websites where the administrator has chosen to use a self-signed certificate instead of purchasing one from a well-recognized authority (e.g.: VeriSign). This is analogous in some ways to selling alcohol in a jurisdiction where such sales are age-restricted. Since I don't know all the intricacies of the real-world laws, let's say for the sake of argument that the law only goes so far as to say it is illegal to sell alcohol to a person who is younger than a given age. What the law in this hypothetical case does not specify is whether you are required to check I.D. prior to a sale, or which forms of I.D. are considered to be valid proofs of age at time of sale. Authoritative proof of the buyer's age is only ultimately required in court if the legality of the sale is ever challenged. However, it is still of course a good idea to check I.D.s on every sale just to be safe. Here, you are a clerk at a convenience store in the great state of Comodo. Most of your customers will be fellow Comodoans, and so you are naturally able to easily recognize and verify their I.D.s which are issued by the Comodo Government. One day though, you get a customer from the distant state of VeriSign. What do you do now? Fortunately, your store has a book called Trusted Root Certificates which has pictures and tips on how to verify I.D.s issued from various states in your country. 
You check your book, compare the customer's I.D. to the relevant photograph and notes, and judge that the I.D. is indeed issued by an authority trusted by your store. Given this, you can now trust that the information on the I.D. (particularly, stating the customers identity and age) is accurate, and therefore be comfortable in knowing that you are making a legal sale. On another day, a customer comes in from overseas. His name is Drupal, and he hails from the land of DigiNotar. He says he is old enough to buy alcohol, and his DigiNotarian Government I.D. concurs with his statement. However, your Trusted Root Certificates book does not have any information to help you verify an I.D. from his country. What do you do here? Strictly speaking, by the letter of the law in this hypothetical country, your sale would still be legal if your customer is in fact as old as he says he is. You could choose to assume he is telling the truth and, if he actually is, go on with your life without ever being convicted of any crime related to that action. But, without any documentation from an authority you recognize, you are still taking the risk that he is not telling the truth. It's very possible that he is not of the proper age, despite what he and his I.D. may claim. In this case, the sale will still be completed. The product will still only be transferred between you and your customer (not immediately available to anyone else, unless your customer chooses to distribute it), but now the problem is that the alcohol has been given to someone who legally should not have it - and you could be in trouble for this. TL;DR: As long as your browser shows https:// as the protocol, you can be assured that the data in your communications are secured between your computer and some endpoint. If there's any warning signs around that https:// area though, that means that the browser does not trust that endpoint to be what it claims to be. It is then up to you to decide whether you trust the endpoint's claims to identity enough to transfer sensitive data over the connection. The connection is still secure in the sense that nobody between you and the other end of the HTTPS connection can sniff the data, but you are taking the chance that the other end isn't what it claims to be. UPDATE: Some browsers are now beginning to incorporate checks on the cipher suites and other properties of the HTTPS connection, and providing warnings accordingly. In these cases, the above still applies, but you should be aware that it will be easier for an eavesdropper or Man-in-the-Middle to break that security than if the site used stronger encryption protocols. | {
"source": [
"https://security.stackexchange.com/questions/21509",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9778/"
]
} |
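The distinction the answer draws between encryption and trust in the endpoint can also be seen from code. With Python's ssl module, the default context accepts only certificates chaining to a recognized authority and rejects a self-signed one, even though a cipher could still be negotiated if you explicitly chose to trust that certificate anyway. The hostname below is a placeholder.

```python
import socket
import ssl

host = "self-signed.example"   # placeholder for a site presenting an untrusted/self-signed certificate

context = ssl.create_default_context()   # trusts only the bundled root authorities
try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("chain verified; connected using", tls.version())
except ssl.SSLCertVerificationError as err:
    # The failure is about identity, not about the strength of the encryption:
    # the certificate simply isn't signed by an authority in the trust store.
    print("certificate not trusted:", err.verify_message)
```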
22,711 | This question was inspired by this answer which states in part: The generic firewall manifest file finishes off by dropping everything I didn't otherwise allow (besides ICMP. Don't turn off ICMP). But, is it truly a good practice for a firewall to allow ICMP? What are the security implications, and are there cases where ICMP should be turned off? | Compared to other IP protocols ICMP is fairly small, but it does serve a large number of disparate functions. At its core ICMP was designed as the debugging, troubleshooting, and error reporting mechanism for IP. This makes it insanely valuable, so a lot of thought needs to go into shutting it down. It would be a bit like tacking >/dev/null 2>&1 to the end of all your cron entries. Most of the time when I talk to people about blocking ICMP they're really talking about ping and traceroute. This translates into 3 types: 0 - Echo Reply (ping response) 8 - Echo Request (ping request) 11 - Time Exceeded That's 3 types out of 16. Let's look at a couple of the other ICMP types that are available. 4 - Source Quench (sent by a router to ask a host to slow down its transmissions) 3 - Destination Unreachable (consists of 16 different kinds of messages ranging from reporting a fragmentation problem up to a firewall reporting that a port is closed) Both of which can be invaluable for keeping non-malicious hosts operating properly on a network. In fact there are two (probably more, but these are the most obvious to me) very good cases where you don't want to restrict ICMP. Path MTU Discovery - We use a combination of the Don't Fragment flag and type 3 code 4 (Destination Unreachable - Fragmentation required, and DF flag set) to determine the smallest MTU on the path between the hosts. This way we avoid fragmentation during the transmission. Active Directory requires clients to ping the domain controllers in order to pull down GPOs. They use ping to determine the "closest" controller and if none respond, then it is assumed that none are close enough. So the policy update doesn't happen. That's not to say that we should necessarily leave everything open for all the world to see. Reconnaissance is possible with ICMP and that is generally the reason given for blocking. One can use pings to determine if a host is actually on, or Time Exceededs (as part of a traceroute) to map out network architectures, or Rory forbid a Redirect (type 5 code 0) to change the default route of a host. Given all that, my advice is, as always, take a measured and thoughtful approach to your protections. Blocking ICMP in its entirety is probably not the best idea, but picking and choosing what you block and to/from where probably will get you what you want. | {
"source": [
"https://security.stackexchange.com/questions/22711",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3483/"
]
} |
22,762 | As the title says, my company has a policy that all passwords to e.g. our workstations and server logins must be stored in an online safe. I won't say which one but there are some out there you can look at promising the end of password pain. These passwords are then shared with the company's management - I don't know how that bit works, but they can read the passwords too. Is this really that secure? I was given two reasons why it is by my boss: If I forget my password, I can just ask him for my password. If I turn evil, they can lock me out. I don't agree with these. For the first one, surely there must be a better way for most of the things we use like Google Apps - e.g. the admin has a reset button. For the second one I can just change my password anyway and not update the password safe. So am I right that this is not secure? Or is this the only way? | None of the reasons you've given are valid reasons for escrowing your password. There's only a couple valid reasons for escrowing any sort of "authenticator" information. A couple others have touched on these, but I'll try to clarify a bit. Encryption Keys: It makes absolute sense for the organization to have access to escrow copies of your encryption keys. After all, the data you're encrypting (provided you're only using your company's encryption for work purposes, of course) is their data in the end anyway. So, they need to retain access to that data in the event you lose your key or you are separated from the company. However, the encryption key should not be the same key you use for digital signatures . Also, they should not have actual access to your authenticator - the passcode you use for the key. Instead, they should have their own escrow key that works with their authenticator to decrypt your data. Failsafe Accounts: It also makes sense that the organization should have backup copies of credentials necessary to access an Administrator-level account in the event the System Administrator's own account is locked out, or they depart the company. However, the credentials should not be for the System Administrator's own account. They should be for a local system account whose sole purpose is for emergency use. To that end, the account should also never be used for non-emergencies and its usage should be closely monitored and alerted. Traditionally, credentials for accounts like these are sealed in tamper-evident envelopes and stored in a secure, physical vault. It's conceivable that there may be digital equivalents, but I personally wouldn't trust those without a thorough review. There's two big reasons why it's a bad idea for management to have your password. The first reason is potentially very bad for you, as it could end up causing otherwise unnecessary work for you if things go wrong. However, the second actually turns this around and makes it potentially worse for the company than it is for you if things go really wrong. Potential For Abuse: The obvious one - managers now effectively have unrestricted access to the systems, regardless of whether they should, with the same privileges you have. Most simply this means that the managers may leverage this to do things on the system that they otherwise should not be doing. This also leaves the potential for them to bypass your position whenever they want to rush a particular change along without following standard procedure. 
Loss of Non-Repudiation: Once someone else has your credentials - and, especially in a case like this where it can be proven they do - they can impersonate you on any systems where those credentials are valid. This makes it difficult to definitively prove that any actions taken by your account were actually taken by you. If a manager does decide to use your account, and ends up royally screwing up the system, it won't be very easy to use you as a scapegoat even though your account is in the logs. Worse for the company is, if you do something to royally screw up the system while your managers have your password, they'll have a harder time proving that it was actually you that did it. TL;DR: There's no good reason I can think of for management to have any of your passwords. As for the reasons they've given: "If you forget your password..." another System Administrator can reset it for you. Or, management can "break the glass" on the emergency account (see "Failsafe Accounts" above) and do it themselves. "If you turn evil..." again you can be locked out by another System Administrator, or the emergency account. | {
"source": [
"https://security.stackexchange.com/questions/22762",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13716/"
]
} |
22,824 | I have recently learned of a way for person B to verify that a document sent from person A is indeed from person A. Person A gives Person B his public RSA key. This must be done reliably. Person A uses SHA1 to hash the document in a 160-bit hash. Person A uses his private RSA key to sign the 160-bit SHA1 hash of the document. Person A sends both the RSA-signed SHA1 hash and the document to person B. Person B verifies the hash with Person A's public RSA key. Person B generates a SHA1 to compare the document Person A sent him. If the two SHA1 hashes match, then the document is indeed from Person A. This process seems reliable, but also cumbersome to me. Why can't Person A just encrypt the document directly with his RSA private key and send that to Person B. Why encrypt a hash of the document instead and send it alongside with the document? | There is more than one reason: 1) Actually the RSA algorithm is slower. For instance : By comparison, DES (see Section 3.2) and other block ciphers are much faster than the RSA algorithm. DES is generally at least 100 times as fast in software and between 1,000 and 10,000 times as fast in hardware, depending on the implementation. Implementations of the RSA algorithm will probably narrow the gap a bit in coming years, due to high demand, but block ciphers will get faster as well. Then if you look here you can see that they list DES as taking 54.7 cycles per byte and SHA-1 taking 11.4 cycles per byte. Therefore computing the SHA-1 hash of the document and signing that is a performance optimization vs. encrypting the entire document with your private key. 2) By splitting the document from the signature you have a more flexible system. You can transmit them separately or store them in different places. It might be a case where everyone already has a copy of the document, and you just want person A to verify to person B that they have the same document (or a hash of it). 3) Thinking about it, if someone encrypts the document with a fake private key, and you decrypt it with the real public key, your algorithm can't actually tell you the result (signed or not). Unless your program can interpret the meaning of the resulting document (perhaps you know it's supposed to be XML, etc.) then you can't reliably say it was 'signed'. You either got the right message, or the wrong message. Presumably a human could tell, but not a machine. Using the hash method it assumes I already have the plain text and I just want to verify that person A signed it. Say I have a program that launches nuclear weapons. It gets a command file that's encrypted using your method, so I decrypt with the public key and send the result to my command processor. You're then relying on the command processor to know if it's a valid command. That's scary. What if the protocol of the command is just the latitude and longitude of where to target the missile, encoded into binary? You could easily just launch at the wrong target. Using a hash, you get a command and a signed hash over a plain text channel. You hash the command, check the signature, and if they don't match then you don't bother sending anything to the command processor. If they match, you send the commands on. If you want to hide the contents of the command, then you take the commands and the signature, zip them together and encrypt the whole thing with the public key of the receiving station before you send them. | {
"source": [
"https://security.stackexchange.com/questions/22824",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15146/"
]
} |
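A sketch of the hash-then-sign flow the answer describes, using the third-party cryptography package (an assumption; the discussion doesn't prescribe a library) and SHA-256 in place of SHA-1, which is no longer recommended for signatures. The sign() call hashes the document internally and signs the digest, matching the hash-and-sign steps in the question.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Person A's key pair; in practice the public key is shared with Person B reliably, out of band.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"the document being sent to Person B"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Person A: hash the document and sign the digest with the private key.
signature = private_key.sign(document, pss, hashes.SHA256())

# Person B: re-hash the received document and verify the signature with A's public key.
# Raises cryptography.exceptions.InvalidSignature if either part was tampered with.
public_key.verify(signature, document, pss, hashes.SHA256())
print("signature verified")
```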
22,903 | In many tutorials and guides I see that a CSRF token should be refreshed per request. My question is why do I have to do this? Isn't a single CSRF token per session much easier than generating one per request and keeping track of the ones requested? Generating the token on a per-request basis doesn't seem to improve security beyond what a per-session token would already do. The only argument seems to be XSS protection, but this doesn't apply as when you have a XSS vulnerability the script could read out new tokens anyway. What are the benefits of generating new tokens per request? | For the reasons already discussed, it is not necessary to generate a new token per request. It brings almost zero security advantage, and it costs you in terms of usability: with only one token valid at once, the user will not be able to navigate the webapp normally. For example if they hit the 'back' button and submit the form with new values, the submission will fail, and likely greet them with some hostile error message. If they try to open a resource in a second tab, they'll find the session randomly breaks in one or both tabs. It is usually not worth maiming your application's usability to satisfy this pointless requirement. There is one place where it is worth issuing a new CSRF token, though: on principal-change inside a session. That is, primarily, at login. This is to prevent a session fixation attack leading to a CSRF attack possibility. For example: attacker accesses the site and generates a new session. They take the session ID and inject it into a victim's browser (eg via writing cookie from a vulnerable neighbour domain, or using another vulnerability like jsessionid URLs), and also inject the CSRF token into a form in the victim's browser. They wait for the victim to log in with that form, and then use another form post to get the victim to perform an action with the still-live CSRF token. To prevent this, invalidate the CSRF token and issue a new one in the places (like login) that you're already doing the same to the session ID to prevent session fixation attacks. | {
"source": [
"https://security.stackexchange.com/questions/22903",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15195/"
]
} |
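A rough sketch of the per-session token plus rotate-on-login pattern recommended in entry 22,903. The plain dict standing in for a session store and the helper names are assumptions made for illustration only.

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Bind a fresh random token to the session (one token per session)."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session: dict, submitted: str) -> bool:
    """Compare the submitted token against the session copy in constant time."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

def on_login(session: dict, user_id: str) -> None:
    """On a principal change (login), rotate the CSRF token along with the
    session identifier to blunt session-fixation-based CSRF."""
    # session id rotation would happen here via whatever the framework provides
    session["user_id"] = user_id
    issue_csrf_token(session)
```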
22,941 | It's common knowledge that if somebody has physical access to your machine they can do whatever they want with it 1 . So why do we always lock our computers? If somebody has physical access to my computer, it doesn't really matter if it's locked or not. They can either boot a live CD and reset my password or read my files (if it's not encrypted), or perform a cold boot attack to get my encryption keys from memory (if it is encrypted). What's the point of locking a computer besides keeping the average coworker from messing with your stuff? Does it provide any real security benefit, or is it just a convenience to deter untrained people? 1. unless the computer has been off for a while and you're using full-disk encryption | In some places they have a saying: "opportunity makes the thief". All you're doing by screen-locking a computer is making the cost of hacking it just a little bit harder. Security is an economic good, with a price and a value. The value of locking is somewhat larger than the price of locking it. Sort of like how in good neighborhoods, you don't need to lock your front door. In most neighborhoods, you do lock your front door, but anyone with a hammer, a large rock or a brick could get in through the windows. In some neighborhoods, not only do you lock the door, you have a solid-core door with a deadbolt, and you have steel gratings over the windows. In the best neighborhoods, the value of the steel gratings isn't worth the price, but in bad neighborhoods, the value does exceed the price. | {
"source": [
"https://security.stackexchange.com/questions/22941",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6813/"
]
} |
23,006 | Edit : Updated to put more emphasis on the goal - peace of mind for the user, and not beefing up the security. After reading through a few discussions here about client side hashing of passwords, I'm still wondering whether it might be OK to use it in a particular situation. Specifically I would like to have the client - a Java program - hash their password using something like PBKDF2, using as salt a combination of their email address and a constant application-specific chunk of bytes. The idea being that the hash is reproducible for authentication, yet hopefully not vulnerable to reverse-engineering attacks (to discover the literal password) other than brute force if the server data is compromised. Goal: The client side hashing is for the peace of mind for the user that their literal password is never being received by the server, even if there is the assurance of hashing it in storage anyway. Another side benefit (or maybe a liability?) is that the hashing cost of an iterated PBKDF2 or similar rests with the client. The environment characteristics are: All client-server communication is encrypted. Replayed messages are not permitted. ie. the hash sent from the client cannot effectively be used as a password by an eavesdropper. Temp-banning and blacklisting IPs is possible for multiple unsuccessful sign in attempts within a short time frame. This may be per user account, or system wide. Concerns: "Avoid devising homebaked authentication schemes." The salt is deterministic for each user, even if the hashes produced will be specific to this application because of the (identical) extra bytes thrown into the salt. Is this bad? Authentications on the server end will happen without any significant delay, without the hashing cost. Does this increase vulnerability to distributed brute force authentication attack? Rogue clients can supply a weak hash for their own accounts. Actually, not too worried about this. Should the server rehash the client hashes before storing? Thoughts? | Hashing on the client side doesn't solve the main problem password hashing is intended to solve - what happens if an attacker gains access to the hashed passwords database. Since the (hashed) passwords sent by the clients are stored as-is in the database, such an attacker can impersonate all users by sending the server the hashed passwords from the database as-is. On the other hand, hashing on the client side is nice in that it ensures the user that the server has no knowledge of the password - which is useful if the user uses the same password for multiple services (as most users do). A possible solution for this is hashing both on the client side and on the server side. You can still offload the heavy PBKDF2 operation to the client and do a single hash operation (on the client side PBKDF2 hashed password) on the server side. The PBKDF2 in the client will prevent dictionary attacks and the single hash operation on the server side will prevent using the hashed passwords from a stolen database as is. | {
"source": [
"https://security.stackexchange.com/questions/23006",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15295/"
]
} |
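A sketch of the split scheme proposed in the answer to entry 23,006: heavy PBKDF2 on the client, one cheap salted hash on the server. The iteration count, salt construction and application constant are illustrative assumptions.

```python
import hashlib
import hmac
import os

APP_CONSTANT = b"example-app-v1"  # application-specific bytes mixed into the client salt

def client_side_hash(email: str, password: str) -> bytes:
    """Client: deterministic per-user salt, expensive PBKDF2."""
    salt = hashlib.sha256(email.lower().encode() + APP_CONSTANT).digest()
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def server_store(client_hash: bytes) -> tuple:
    """Server: re-hash the client's result with a random salt before storing,
    so a stolen database cannot be replayed against the server as-is."""
    server_salt = os.urandom(16)
    stored = hashlib.sha256(server_salt + client_hash).digest()
    return server_salt, stored

def server_verify(client_hash: bytes, server_salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.sha256(server_salt + client_hash).digest()
    return hmac.compare_digest(candidate, stored)
```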
23,021 | There are quite a few cases where people are called out for disclosing the front-face of a credit or debit card (e.g. this tweet from Brian Krebs or this twitter account ). So I was wondering what the impact of this disclosure for the card holder is likely to be. From the front of a card, a fraudster could get the card PAN (16-digit number) start date/expiry date and cardholder name. Also for debit cards, the cardholders account number and sort code (that may vary by region). So the question is, what's the likely impact of the disclosure of this information (i.e. what frauds could be committed). Some initial thoughts I had were :- Cardholder Not Present transactions shouldn't be possible as the CVV hasn't been disclosed The card wouldn't be clonable with just that information as there's other information needed for the magstripe. | You don't actually need the CVV to perform transactions, they're just required by most retailers as a means of verifying that you have the physical card in your possession. From Wikipedia (unsourced): It is not mandatory for a merchant to require the security code for making a transaction, hence the card is still prone to fraud even if only its number is known to phishers. On most EFTPOS systems, it's possible to manually enter the card details. When a field is not present, the operator simply presses enter to skip, which is common with cards that don't carry a start date. On these systems, it is trivial to charge a card without the CVV. When I worked in retail, we would frequently do this when the chip on a card wasn't working and the CVV had rubbed off. In such cases, all that was needed was the card number and expiry date, with a signature on the receipt for verification. | {
"source": [
"https://security.stackexchange.com/questions/23021",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/37/"
]
} |
23,192 | I'm the main developer for an Open Source JavaScript library. That library is used in the company I work for, for several clients. Every now and then there is a client that feels paranoid about implementing a library he has never heard about and ask me why he should trust this code. I understand the concern and usually I point them to a github repository where the library is hosted and show other projects and clients that are using the same library. Sometimes they review the code on github, and everything runs smoothly after that. But this time the client is a little bit more paranoid. He asked me what kind of security check the library has gone through and told me that their systems are "validated with the top 10 OWASP checks/scans". After some research the closest thing I found is this document that list top 10 vulnerabilities in web applications in 2010, by OWASP . I think not all of these apply, since I'm not providing a web application but just a javascript library. And my understanding is that these vulnerabilities most of the time need to be checked manually by a security specialist rather than an automated scan. Is there any way I can assert security standards in a JavaScript library? UPDATE 1 Even though I'm not a security expert I'm a web developer and I understand the common flaws that can cause vulnerabilities on Web Applications. What I need is some way to prove especially for a non-technical person that this library has been checked at least for minimal threats and exploits and is in fact secure to be used on their website. What comes to my mind is maybe a neutral company or consultant specializing in web security that can review the code and attest to its quality. Is this a common practice? UPDATE 2 Imagine someone hands you a large javascript file to include in your site as part of an integration. That script will be running inside your site. You probably want to make sure where that file comes from and who was the developer that created it. Imagine some rogue developer at Facebook decided to inject some malicious code inside the like button script to steal data or cookies from sites where it's run at. When you include libraries from well-known companies or Open Source projects that are reviewed by multiple people (like jQuery) this is a very unlikely case. But when you include a script from a small company or a solo developer I can see that as being a concern. I don't want to look for exploits in my library as I know I have included none. I just want to prove somehow that the code is safe, so users don't have this kind of concern when using it. | To avoid client-side security issues, you need to learn about the security requirements for client-side code and the common mistakes. OWASP has good resources. Make sure you read about DOM-based XSS, as that is one of the most common security mistakes. As far as security best practices, I have several suggestions: To avoid XSS, abide by the rules found in Adam Barth on Three simple rules for building XSS-free web applications . Avoid setInnerHtml() and .innerHtml = . Instead, use setInnerText() or DOM-based operations (to make sure you don't introduce script tags, i.e., to avoid DOM-based XSS). Avoid document.write() . Avoid eval() . Its use tends to be correlated to security flaws. Similarly, avoid other APIs that turn a string into code and execute it, like setTimeout() with a string argument, setInterval() with a string argument, or new Function() . Turn on Javascript "strict mode" . 
It helps avoid some subtle areas of Javascript that have been responsible for security problems before. Make sure your code is compatible with a strict Content Security Policy (here's a tutorial ), such as script-src 'self'; object-src 'self' . See also Security Concerns on clientside(Javascript) , which is on a related topic. I don't know of any static analysis tools to scan Javascript and look for security problems. If you follow Doug Crockford's recommendations about how to use Javascript (e.g., as per his book, Javascript: The Good Parts), you could use JSLint . It's a pretty aggressive lint tool. If your code is JSLint-clean, that's a positive mark. But JSLint is not focused primarily on security. And, if you take legacy code and run JSLint on it, you're probably going to get inundated with a pile of warnings. | {
"source": [
"https://security.stackexchange.com/questions/23192",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10762/"
]
} |
23,256 | It was not until recently that I began to question the use for the Server field in the HTTP Response-Header. I did some research: RFC 2616 states: 14.38 Server The Server response-header field contains information about the
software used by the origin server to handle the request. The field
can contain multiple product tokens (section 3.8) and comments
identifying the server and any significant subproducts. The product
tokens are listed in order of their significance for identifying the
application. Server = "Server" ":" 1*( product | comment ) Example: Server: CERN/3.0 libwww/2.17 If the response is being forwarded through a proxy, the proxy
application MUST NOT modify the Server response-header. Instead, it
SHOULD include a Via field (as described in section 14.45). Note: Revealing the specific software version of the server might
allow the server machine to become more vulnerable to attacks
against software that is known to contain security holes. Server
implementors are encouraged to make this field a configurable option. This, however, makes no mention of the purpose of this field. This seems like information disclosure to me. These server strings give away a lot of information that is great for anyone trying to fingerprint the server. Automated scanning tools would quickly identify unpatched or vulnerable servers. Having my web server present version information for itself and modules like OpenSSL seems like a bad idea. Is this field needed... for anything? If so, what? Is it already best practice / commonplace to disable or change this field on servers? I would think that, from a security perspective, we would want to give the enemy (i.e. everyone) as little information as possible while still allowing business to continue. Here is an interesting write-up on information warfare. | Server information should be removed from HTTP responses, and it's an insecure default to leak this data. This isn't a major security risk, or even a medium security risk - but I don't feel comfortable just announcing such details to my adversaries. Having an exact version number leaks when, and how often, you patch your production systems - even if the version is current. An adversary knowing the patch cycle means that they know when you are the weakest. The HTTP Server header is probably most useful for the Netcraft Web Server Survey. But in terms of HTTP it shouldn't matter. That is why we have standards, so that clients and servers written by different vendors can work together.
"source": [
"https://security.stackexchange.com/questions/23256",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10753/"
]
} |
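For entry 23,256, a small illustration of stripping the header at the application layer: a generic WSGI middleware that replaces whatever Server value the Python stack would emit. In most real deployments the header is actually set by the front-end web server and must be configured there, so treat this only as a sketch of the idea; the replacement string is arbitrary.

```python
def hide_server_header(app, replacement="webserver"):
    """Wrap a WSGI app so responses carry a generic Server header."""
    def wrapped(environ, start_response):
        def filtered_start_response(status, headers, exc_info=None):
            headers = [(k, v) for k, v in headers if k.lower() != "server"]
            headers.append(("Server", replacement))
            return start_response(status, headers, exc_info)
        return app(environ, filtered_start_response)
    return wrapped

# usage: application = hide_server_header(application)
```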
23,270 | I am looking for the best intelligence forensic software that will give me the best solution for gathering information around the web. Please recommend the best software out there ... and if you think you are qualified enough to gather information about import/export, I can pay top dollar for your time. | Server information should be removed from HTTP responses, and it's an insecure default to leak this data. This isn't a major security risk, or even a medium security risk - but I don't feel comfortable just announcing such details to my adversaries. Having an exact version number leaks when, and how often, you patch your production systems - even if the version is current. An adversary knowing the patch cycle means that they know when you are the weakest. The HTTP Server header is probably most useful for the Netcraft Web Server Survey. But in terms of HTTP it shouldn't matter. That is why we have standards, so that clients and servers written by different vendors can work together.
"source": [
"https://security.stackexchange.com/questions/23270",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15451/"
]
} |
23,357 | Some sites I have been a member of in the past don't go through the normal "Forgot Password?" process. Instead of e-mailing me a unique password reset link or something of the like, I have received the password in my e-mail in plain text. I would imagine that to make this possible, the password would have to be stored in plain text somewhere in the database. Is this necessarily true? | Yes, you should be concerned if you use the password on other websites or have personal data stored on the site. This is one of the reasons you have to have different passwords for every website. They are storing your password in plain text. I'd stay away from such a site with personal data. It's one of those basic security principles they violated; chances are that there will be more.
"source": [
"https://security.stackexchange.com/questions/23357",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5008/"
]
} |
23,371 | For a REST-api it seems that it is sufficient to check the presence of a custom header to protect against CSRF attacks, e.g. client sends "X-Requested-By: whatever" and the server checks the presence of "X-Requested-By" and drops the request if the header isn't found. The value of the header is irrelevant. This is how Jersey 1.9's CsrfProtectionFilter works and it is described in this blog post: http://blog.alutam.com/2011/09/14/jersey-and-cross-site-request-forgery-csrf/ . The blog post also links to NSA and Stanford papers stating that the custom header itself is sufficient protection: The first method involves setting custom headers for each REST request
such as X-XSRF-Header. The value of this header does not matter;
simply the presence should prevent CSRF attacks. If a request comes
into a REST endpoint without the custom header then the request
should be dropped. HTTP requests from a web browser performed via form, image, iframe,
etc are unable to set custom HTTP headers. The only way to create a
HTTP request from a browser with a custom HTTP header is to use a
technology such as Javascript XMLHttpRequest or Flash. These
technologies can set custom HTTP headers, but have security policies
built in to prevent web sites from sending requests to each other
unless specifically allowed by policy. This means that a website
www.bad.com cannot send a request to http://bank.example.com with the
custom header X-XSRFHeader unless they use a technology such as a
XMLHttpRequest. That technology would prevent such a request from
being made unless the bank.example.com domain specifically allowed
it. This then results in a REST endpoint that can only be called via
XMLHttpRequest (or similar technology). It is important to note that this method also prevents any direct
access from a web browser to that REST endpoint. Web applications
using this approach will need to interface with their REST endpoints
via XMLHttpRequest or similar technology. Source: Guidelines for implementing REST It seems however, that most other approaches suggest that you should generate a token and also validate this on the server. Is this over-engineering? When would a "presence of" approach be secure, and when is also token validation required? | Security is about defence in depth. Simply checking the value is sufficient at the moment , but future technologies and attacks may be leveraged to break your protection. Testing for the presence of a token achieves the absolute minimum defence necessary to deal with current attacks. Adding the random token improves the security against potential future attack vectors. Using a per-request token also helps limit the damage done by an XSS vulnerability, since the attacker needs a way to steal a new token for every request they make. This is the same reasoning used in modern cryptographic algorithms, where n rounds are considered a minimum for safety, but 2n+1 rounds (for example) are chosen in the official implementation to ensure a decent security margin. Further reading: CSRF with JSON POST Why refresh CSRF token per form request? | {
"source": [
"https://security.stackexchange.com/questions/23371",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15512/"
]
} |
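A minimal server-side check for the presence-only approach discussed in entry 23,371, sketched with Flask as an assumed framework; the header name and route are placeholders.

```python
from flask import Flask, abort, request

app = Flask(__name__)

@app.before_request
def require_custom_header():
    # Browsers can only attach custom headers through XMLHttpRequest/fetch,
    # which are bound by the same-origin policy, so a plain cross-site form
    # post or image request cannot carry this header.
    if request.method in ("POST", "PUT", "PATCH", "DELETE"):
        if "X-Requested-By" not in request.headers:
            abort(403)

@app.route("/api/transfer", methods=["POST"])
def transfer():
    return "ok"
```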
23,383 | As someone who knows little about cryptography, I wonder about the choice I make when creating ssh-keys. ssh-keygen -t type , where type is either of dsa,rsa and ecdsa. Googling can give some information about differences between the types, but not anything conclusive. So my question is, are there any "easy" answers for developers/system administrators with little cryptography knowledge, when to choose which key type? I'm hoping for an answer in the style of "Use DSA for X and Y, RSA for Z, and ECDSA for everything else", but I also realise it's quite possible such simple answers are not available. | In practice, a RSA key will work everywhere. ECDSA support is newer, so some old client or server may have trouble with ECDSA keys. A DSA key used to work everywhere, as per the SSH standard ( RFC 4251 and subsequent), but this changed recently: OpenSSH 7.0 and higher no longer accept DSA keys by default. ECDSA is computationally lighter, but you'll need a really small client or server (say 50 MHz embedded ARM processor) to notice the difference. Right now , there is no security-related reason to prefer one type over any other, assuming large enough keys (2048 bits for RSA or DSA, 256 bits for ECDSA); key size is specified with the -b parameter. However, some ssh-keygen versions may reject DSA keys of size other than 1024 bits, which is currently unbroken, but arguably not as robust as could be wished for. So, if you indulge in some slight paranoia, you might prefer RSA. To sum up, do ssh-keygen -t rsa -b 2048 and you will be happy. | {
"source": [
"https://security.stackexchange.com/questions/23383",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15519/"
]
} |
23,407 | I ran a scan with nmap -n -vv -A x.x.x.x --min-parallelism=50 --max-parallelism=150 -PN -T2 -oA x.x.x.x With the following result: Host is up (0.032s latency).
Scanned at 2012-10-25 16:06:38 AST for 856s
PORT STATE SERVICE VERSION
1/tcp open tcpwrapped
3/tcp open tcpwrapped
4/tcp open tcpwrapped
.
.
19/tcp open tcpwrapped
20/tcp open tcpwrapped
21/tcp open tcpwrapped
22/tcp open tcpwrapped
23/tcp open tcpwrapped
.
.
64623/tcp open tcpwrapped
64680/tcp open tcpwrapped
65000/tcp open tcpwrapped
65129/tcp open tcpwrapped
65389/tcp open tcpwrapped I'm sure that this is a firewall's or load balancer's game.
I tried many ways, such as changing the source port, source IP, fragmentation, etc. Do you have any idea/suggestion to bypass this case? On the other hand, do you know how to do that in a firewall policy (on any firewall)? | " tcpwrapped " refers to tcpwrapper , a host-based network access control program on Unix and Linux. When Nmap labels something tcpwrapped , it means that the behavior of the port is consistent with one that is protected by tcpwrapper. Specifically, it means that a full TCP handshake was completed, but the remote host closed the connection without receiving any data. It is important to note that tcpwrapper protects programs , not ports. This means that a valid (not false-positive) tcpwrapped response indicates a real network service is available, but you are not on the list of hosts allowed to talk with it. When such a large number of ports are shown as tcpwrapped , it is unlikely that they represent real services, so the behavior probably means something else. What you are probably seeing is a network security device like a firewall or IPS. Many of these are configured to respond to TCP portscans, even for IP addresses which are not assigned to them. This behavior can slow down a port scan and cloud the results with false positives. EDIT: Since this post was flagged as plagiarism and deleted, I would like to point out that the assumed source ( this page on SecWiki.org ) was also written by me. This Security.StackExchange answer (October 31, 2013) predates that page (November 12, 2013) by nearly two weeks.
"source": [
"https://security.stackexchange.com/questions/23407",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6454/"
]
} |
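A rough, simplified illustration of the behaviour described in the answer to entry 23,407: the TCP handshake completes but the peer closes the connection without sending a byte. The host and port are placeholders, no error handling is attempted, and real service detection (as nmap performs it) involves far more probing than this.

```python
import socket

def looks_tcpwrapped(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if the port accepts the connection and then immediately closes
    it without sending any data (no banner, no response)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            data = s.recv(1024)
        except socket.timeout:
            return False      # open but silent: may be a real service awaiting input
        return data == b""    # immediate EOF with nothing sent

# example with a placeholder target: looks_tcpwrapped("192.0.2.10", 22)
```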
23,452 | We want to study for the CEH program and have downloaded 12 DVDs; 6 of those DVDs contain software such as key-loggers, Trojans, etc. that is all detected by antivirus. This prevents us from examining them and learning how they work. I have instructed students not to uninstall the antivirus, as running these malicious files is not safe on its own. It might even spread on the network. One of the students suggests using Windows XP mode . Is this safe? I see these articles 1 and 2 here but the answers are contradictory and confuse us. Are virtual machines safe for downloading and installing Trojans, key-loggers, etc.? Is there another way to solve this problem, e.g. set up a lab, to show what happens to victims of the malware? | Are virtual machines safe for this? The answer is the same as for a lot of questions of the form "Is X safe?": no, it's not absolutely safe. As described elsewhere, bugs in the virtual machine or poor configuration can sometimes enable the malware to escape. So, at least in principle, sophisticated malware might potentially be able to detect that it's running in a VM and (if your VM has a vulnerability or a poor configuration) exploit the vulnerability or misconfiguration to escape from your VM. Nonetheless, it's pretty good. Probably most malware that you run across in the field won't have special code to escape from a VM. And running the malware in a VM is certainly a lot safer than installing it directly onto your everyday work machine! Probably the biggest issue with analyzing malware samples in a VM is that some malware authors are starting to get smart and are writing their malware so that it can detect when it is run in a VM and shut down when running inside a VM. That means that you won't be able to analyze the malicious behavior, because it won't behave maliciously when it's run inside a VM. What alternatives are there? You could set up a sacrificial machine on a local network, install the malware on there, then wipe it clean. Such a test network must be set up extremely carefully , to ensure that the malware can't propagate, can't spread to other machines of yours, and can't do any harm to others. References: Is it safe to install malware in a VM (Summary: "There is no simple answer", and there are some risks) How secure are virtual machines really? False sense of security? (Summary: there are definitely some risks that could allow malware to escape the VM) Does a Virtual Machine stop malware from doing harm? (Summary: there have occasionally been vulnerabilities that have enabled malware to escape the VM)
"source": [
"https://security.stackexchange.com/questions/23452",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15505/"
]
} |
23,509 | Recently, I found that a user had plugged a USB WiFi stick into his desktop and set up an AP without a password.
How can we detect or block this via firewall rules or some other approach? | Something left unsaid: why does the user want WiFi? As long as the user feels they have a legitimate need, they will continue to find workarounds to any of your attempts at blocking it. Discuss with the users what they are trying to accomplish. Perhaps create an official WiFi network (use all the security methods you wish - it will be 'yours'). Or, better, two - Guest and Corporate WAPs. WiFi is not something which needs to be banned "just because"; however, it does need specific attention, just like all other aspects of security.
"source": [
"https://security.stackexchange.com/questions/23509",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15580/"
]
} |
23,561 | When browsing to a website over HTTPS, the web browser typically does a lot of work in the background - negotiating a secure channel, validating the site's certificate, verifying the trust chain, etc. If your browser is configured to use a web proxy, the current HTTP protocol supports a CONNECT method - this enables your browser to setup the TLS tunnel to the website, without revealing any of the request contents to the proxy (other than the name of the site, of course). What does the browser do regarding the identity of the proxy server? If the proxy does not have a certificate, then potentially it could be impersonated by a MitM. Even if the proxy does have a certificate, if it's not checked, it might still be possible to impersonate it. Does the protocol (or any companion RFC) define how this should be handled? How do common web browsers typically handle this? If the proxy does not have a certificate, is there any feedback to the user? If the proxy does have a certificate, but it is invalid, is there feedback then? Additionally, is there a provision for securely authenticating the web proxy, even when the request is plain HTTP? For example, connecting to the proxy over HTTPS even though the request to the website is over HTTP... NOTE that I am not asking how to identify the website , whether tunnelled through the proxy or having it intercept the SSL chain. Rather, I want to verify the identity of the proxy itself, to make sure no unauthorized server is MitMing me... | As is customary, let's first answer the exact question which was asked. Right now, using HTTPS to connect to the proxy is not widely supported. The squid documentation has some information on the subject; to sum things up: Chrome supports it, but it must be configured through a proxy auto-configuration script because there is no GUI support. This also means not using the "system-wide proxy configuration". Firefox does not support it, although the feature has been marked as requested since 2007. (UPDATE: Supported in FF 33+ ) No indication on other browsers, which can be surmised to mean "no support whatsoever". What Chrome does when encountering a "somewhat invalid" certificate for the proxy should be tested, but it is likely to depend on the exact browser version (which changes all the time, often transparently), the operating system (since certificate handling tends to be delegated to the OS), and in what ways the certificate is not valid (expired, bad server name, unknown trust anchor,...). Also, there is an inherent chicken-and-egg issue here: full certificate validation, as is mandated by X.509 , includes checking revocation, i.e. download of CRL for all certificates in the path. URL for CRL download are usually located in the certificates themselves. But if you are using a proxy for your HTTP traffic, then the CRL download will need to go through that proxy... and you have not yet validated the proxy certificate. No chicken, no egg. We may also note that the proxy auto-configuration file format does not really support HTTPS proxying. There is no formal standard for this file, but custom is to follow an old Netscape draft which says that a PAC file defines a Javascript function which can return proxy hostnames and ports , but not protocol. So, no HTTPS. For its HTTPS-proxy support, Chrome implicitly uses an extension to this de facto convention by stuffing an HTTPS URL where it has no right to be, and hoping for the browser to make some sense out of it. 
As is customary, let's see what alternate proposals can be made. To ensure protected communication between the client and the proxy, the two following solutions may be applicable: Use a VPN . This is highly generic, and since it operates at OS level, it will apply to all browsers. Of course, it requires that whoever installs the thing on the client machine has administrative rights on that machine (you cannot do this as a simple unprivileged user). Use a SSH-based SOCKS proxy . On your client system, run: ssh -N -D 5000 theproxy ; then set your browser to use localhost:5000 as SOCKS proxy. This solution requires that you have the proxy server ( theproxy ) is also a SSH server, and that you have an account on it, and that the SSH port is not blocked by some ill-tempered firewall. The SOCKS proxy solution will ensure that the traffic goes through the proxy machine, but does not include caching , which is one of the reasons we usually want to use a proxy in the first place. It can be altered by using some additional tools, though. SOCKS proxying is about redirecting all TCP/IP traffic (generically) through a custom tunnel. Generic support for applications is possible, by "replacing" the normal OS-level network calls with versions which use SOCKS. In practice, this uses a specific DLL which is pushed over the standard OS libraries; this is supported in Unix-based systems with LD_PRELOAD , and I suppose this can be done with Windows too. So the complete solution would be: You use a SSH-based SOCKS tunnel from your client to the proxy machine. The SOCKS client DLL is applied on the browser, and configured to use localhost:5000 as SOCKS proxy. The browser just wants to use theproxy:3128 as plain HTTP proxy. Then, when the browser wants to browse, it opens a connection to theproxy:3128 , which the DLL intercepts and redirects to a SOCKS tunnel that it opens to localhost:5000 . At that point, SSH grabs the data and sends it to theproxy under the protection of the SSH tunnel. The data exits on theproxy , at which point the connection to port 3128 is purely local (thus immune from network-based attackers). Another way to add caching on the SSH-SOCKS setup is to apply a transparent proxy . Squid can do that (under the name "interception caching"). Yet another way to do caching with SSH protection is to use SSH to build a generic tunnel from your machine to the proxy. Run this: ssh -N -L 5000:localhost:3128 theproxy and then set your browser to use localhost:5000 as an HTTP proxy (not SOCKS). This will not apply proxying on alternate protocols such as FTP or Gopher, but who uses them anyway ? As is customary, let's now question the question. You say that you want to protect against a man-in-the-middle attack . But, really, the proxy is a MitM. What you really want is that the proxy is the only entity doing a MitM. HTTPS between the browser and the proxy (or SSH-SOCKS or a VPN) can protect only the link between the client and the proxy, and not at all between the proxy and the target Web server. It would be presumptuous to claim that MitM attacks are reliably thwarted without taking into account what happens on the Wide Internet, which is known to be a harsh place. For end-to-end protection against MitM, use SSL, i.e. browse HTTPS Web servers. But if you do that, then extra protection for the browser-to-proxy link is superfluous. Protecting traffic between browser and proxy makes sense if you consider proxy authentication . 
Some proxies require explicit authentication before granting access to their services; this is (was ?) common in big organizations, which wanted to reserve "unlimited Internet access" to some happy few (usually, the Boss and his favourite underlings). If you send a password to the proxy, then you do not want the password to be spied upon, hence SSL. However, an attacker who can eavesdrop on the local network necessarily controls one local machine with administrator privileges (possibly, a laptop he brought himself). His nuisance powers are already quite beyond simple leeching on the Internet bandwidth. Also, limiting Internet access on a per user basis maps rather poorly to modern operating systems, which tend to rely on Internet access for many tasks, including software updates. Internet access is more efficiently controlled on a per usage basis. Therefore, I think that protecting access to a HTTP proxy has some merits, but only in rather uncommon scenarios. In particular, it has very little to do with defeating most MitM attacks. | {
"source": [
"https://security.stackexchange.com/questions/23561",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/33/"
]
} |
23,627 | I know there are three methods for WiFi security. What are the relative strengths of the password encryption in WEP, WPA and WPA2 PSK? | The schemes you mention are protocols for securing 802.11x traffic over wireless networks. They don't mandate how the AP password is encrypted or hashed during storage. However, the security of the protocol does rely on making the key secure. WEP relies on a broken RC4 implementation and has severe flaws in various aspects of its protocol which make breaking WEP near-trivial. Anyone with a laptop and a $20 wifi antenna can send special de-auth packets, which cause legitimate clients to re-authenticate to the AP. The attacker can then capture the initialization vectors from these re-authentication packets and use them to crack the WEP key in minutes. Due to the severity of the break, WEP is now considered deprecated. See this question for more details on WEP security. WPA improves upon this, by combining RC4 with TKIP , which helps defend against the IV-based attacks found in WEP. It also improves upon the old handshake mechanism, to make it more resistant to de-auth attacks. Whilst this makes a large improvement, vulnerabilities were found in the way that the protocol worked, allowing an attacker to break TKIP keys in about 15-20 minutes. You can read more about the attack at this other question . WPA2 closes holes in WPA, and introduces an enhanced version of TKIP, as well as CCMP . The standard also brings support for AES , which provides even further security benefits. At present there are no known generic attacks against WPA2.
"source": [
"https://security.stackexchange.com/questions/23627",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13687/"
]
} |
23,648 | Whilst the current CA system works very well for a lot of people, it does put a lot of power into individual CAs' hands, and makes a CA hack potentially devastating for customers and business. What alternatives are there to certificate authorities, and what are the challenges in switching to such systems? I've seen proposals for a variety of distributed P2P-like systems, but nothing has ever come of them. | "Alternatives" depend on whether you want something which may work in the future, or something which works right now, with existing browsers. Right now , Web browsers expect that servers send X.509 certificates , and then they validate them against the set of root CA integrated in the browser (or operating system). This means that if you want something that works without any extra software installation, then all you can do is alter the set of root CA (your trust anchors ). You can: Remove some of the CA ( it has been said that you can remove most of them without really impacting your browsing experience, because not all trusted CA are equally active). Add new trust anchors. Note that a trust anchor is "a name-and-public-key that you trust a priori ". Trust anchors are usually encoded as certificates (because the format is convenient for that), and traditionally are self-signed (mostly because in a certificate, there is a field for a "signature" which cannot be left empty). But you can trust certificates which are not self-signed; you can even trust end-entity certificates (i.e. you trust the specific certificate of a given SSL server: this is called direct trust ). With direct trust, you manually manage what you trust with very fine granularity. Note that, with direct trust, you lose the benefits of revocation (a directly trusted certificate cannot be untrusted except by a manual intervention on your part, whereas revocation is about propagating mistrust information automatically). (One can note that the direct trust model is exactly what SSH follows, and it works well in the usage context of SSH .) In the future , there could be other models. There are several proposals, some being backed by existing software add-ons. To my knowledge, none has reached the "generally usable" level yet. There are, for instance: DNSSEC and DANE : DNSSEC is about mapping a PKI on the DNS system, primarily so as to authenticate the data returned by the DNS itself, but this can be leverage to distribute SSL server keys (what DANE is about). This is still a hierarchical PKI, with a limited set of trust anchors; the difference with the existing set of root CA is mostly a change of players: the new set of gatekeepers would be smaller, and only partially overlaps with existing root CA. Also, DNSSEC/DANE binds the process of authenticating Web servers to the DNS infrastructure and actors, and it is not clear whether this is good or bad. A Web of Trust like OpenPGP does. The WoT model relies on graph superconnectivity. To keep things simple, with a WoT everybody is a CA, and each actor is its own and unique root CA; to cope with gullible or downright malicious "CA", WoT users (i.e. Web browsers) accept a target certificate as valid only if they can verify it through many paths which go through distinct CA and all concur to posit the target server key. The decentralized nature of WoT is very popular among people who distrust governments and anything which looks like an administration. 
However, WoT security is very dependent on critical mass : it will not give you much until sufficiently many people collaborate. See the Monkeysphere project for an implementation (with browser add-ons)(I don't know much about the project quality, though). Convergence is a descendant of the Perspectives project . If you get down to it, Convergence relies on Hope and Laziness . What Convergence assumes is that wannabe attackers will find it too hard / expensive / tiresome to actively impersonate a given server with a fake certificate for both a long time and in the view of many people. Namely, practical Man-in-the-Middle attacks are either close to the victim (e.g. users of a given WiFi hotspot), in which case they impact only very few people, or close to the server, which requires something wide-scale (like heavy DNS poisoning) which will be quite conspicuous, hence will not last long. Under this model, Convergence is a nice, relatively inexpensive solution, with only minimal centralization (it still requires notaries, and de facto centralization can be expected, but not as thorough as the current set of root CA). An important point is that the current system "works". Actual attacks which target the PKI are very rare (much publicized, but still rare). The actual system with roots CA is sufficiently hard to break that attackers find it easier to bypass it through some other ways (foremost of which being abusing the credulity of human users). As such, there is little incentive for replacing it with another system which, at best, will do the same. So, while the alternate proposals have some merit (mostly in financial and political terms), I deem it relatively improbable that any of them dislodges the existing root CA in the near future, for "the Web" as we know it. For closed architectures which are not the Web, and where you control both client and server, you can use whatever system you want, but closed architecture are precisely the situation where hierarchical PKI work very well. | {
"source": [
"https://security.stackexchange.com/questions/23648",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5400/"
]
} |
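A toy illustration of the "direct trust" model mentioned in entry 23,648: pin one specific server certificate and refuse anything else, SSH-style. The host and fingerprint values are placeholders, and a real client would still need a plan for key rotation, since pinning forgoes the automatic revocation noted in the answer.

```python
import hashlib
import ssl

PINNED_HOST = "www.example.com"   # placeholder
PINNED_SHA256 = "0" * 64          # placeholder fingerprint of the expected certificate

def certificate_matches_pin(host: str = PINNED_HOST, port: int = 443) -> bool:
    """Fetch the server's certificate and compare its SHA-256 fingerprint
    against the locally pinned value instead of walking a CA chain."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest() == PINNED_SHA256
```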
23,690 | HTTP Strict Transport Security (HSTS) is a very useful feature for preventing OWASP A9 violations and attacks like SSLStrip which try to prevent the client from making a secure connection. This technology, however, isn't available in older versions of web browsers (most notably IE). In June 2015 Microsoft finally added support for HTTP Strict Transport Security to IE 11 on Windows 7, 8.1, and 10. Microsoft Edge supports it as well. Both of those will do HSTS pre-load for sites that are on the Chromium pre-load list. However, not all users use the latest web browsers. So how do you protect users with browsers that don't support HSTS? What is the "best" level of transport security that a web application can provide despite serving content to an insecure client? (Shout out to Tylerl for bringing up this question.) | You can't. The best you can do is to use SSL sitewide. Have all HTTP connections immediately redirect the user over to HTTPS (redirect over to the front page via HTTPS, e.g., http://www.example.com/anything.html should redirect to https://www.example.com/ ). Don't serve any content over HTTP (other than an immediate redirect to your front page, over HTTPS). Set the secure flag on all cookies (obviously). This isn't going to stop a man-in-the-middle attack that downgrades the user to HTTP. Without HSTS, you can't prevent that kind of attack. You just can't do any better than that. Oh well. That's just how it goes with IE. See also Options when defending against SSLstrip? .
"source": [
"https://security.stackexchange.com/questions/23690",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/975/"
]
} |
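A sketch of the fallback posture from entry 23,690: redirect every plain-HTTP request to the HTTPS front page, mark cookies secure, and still emit HSTS for the browsers that do understand it. Flask and the exact max-age value are illustrative choices, not requirements.

```python
from flask import Flask, redirect, request

app = Flask(__name__)
app.config.update(SESSION_COOKIE_SECURE=True, SESSION_COOKIE_HTTPONLY=True)

@app.before_request
def force_https():
    # Redirect to the HTTPS front page rather than mirroring the HTTP path,
    # as recommended in the answer above.
    if not request.is_secure:
        return redirect("https://" + request.host + "/", code=301)

@app.after_request
def add_hsts(response):
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```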
23,691 | I did an nmap scan on an advanced office printer that has a domain name and is accessible from outside the corporate network. Surprisingly I found many open ports like http:80, https:443, and svrloc:427 and some others. The OS fingerprint says somewhere: "...x86_64-unknown-linux-gnu...", which may indicate that this is some sort of an embedded Linux running some server software for the printer's functionality. How can I know whether this printer is increasing the attack surface on the network? For example, can it be exploited through a remote privilege escalation vul. and then launch a tunneling attack on other hosts in the network? | You can have some serious fun playing with printers, photocopiers and other such devices - even UPSes. Security is usually an afterthought at best, if not totally absent. Stuff I've seen: Default credentials used everywhere , and web-based config panels storing passwords in plain-text, often within a generated config file. I've never seen anything better than plain MD5 on passwords, and in one case I saw CRC32. Document names and usernames leaked via SNMP, usually via open read access to the device and over SNMPv1/2 where no transport security is used. Default or hilariously weak SNMP private namespace names (usually "private", "SNMP" or the manufacturer name), allowing you to reconfigure TCP/IP settings, inject entries into the routing table, etc. remotely, and there are often ways to alter settings that can't be set in the control panel. It's pretty trivial to soft-brick the device. UPnP enabled on the device in default setup, allowing for more remote configuration fun. Often you can print test pages, hard-reset the device, reset web-panel credentials, etc. Again it's usually possible to modify TCP/IP settings and other networking properties. Very outdated 2.2.x and 2.4.x kernels, often with lots of nice root privilege escalation holes. Badly written firmware upgrade scripts on the system, allowing you to flash arbitrary firmware to internal microcontrollers. You could use this to brick the device, or install a rootkit if you're willing to spend a lot of time developing it. Custom or old SMB daemons, often vulnerable to RCE. Easy to pwn remotely. Services running as root, user groups set up incorrectly, file permissions improperly set. Printing jobs ran asynchronously by executing shell scripts, making it easy to escalate your privileges up to that of the daemon (often root). Poorly written FTP servers built into the device. I'd bet good money that a fuzzer could crash most of those FTP daemons. All of the usual webapp fails, but especially file upload vulnerabilities. Here's where things get extra fun. Once you've pwned the printer, you can usually get hold of usernames and other juicy information from SMB handshakes. You'll also often find that the password to the printer's web control panel is re-used for other network credentials. At the end of the day, though, the printer is an internal machine on the network. This means that you can use it to tunnel attacks to other machines on the network. On several occasions I've managed to get gcc and nmap onto a photocopier, which I then used as a base of operations. What's the solution? First, you need to recognize that printers and photocopiers are usually fully-fledged computers, often running embedded Linux on an ARM processor. Second, you need to lock them down: Update the firmware of the device to the latest version. Firewall the printer off from the internet. 
This should be obvious, but it's often missed. TCP/IP-based printers / photocopiers usually bind to 0.0.0.0 , so they can quite easily sneak onto the WAN. If you can make the printer listen only to traffic from the LAN, do so. Change the default credentials on the web control panel. Again, obvious, but still not done very often. Find any services running on the device and attempt to break into them yourself. Once you're in, change passwords and turn off what's unnecessary. Get yourself an SNMP discovery tool and dig around what's available for your printer. SNMP has a bit of a learning curve, but it's worth taking a look. If you do internal network monitoring, set up a rule to watch for anything unusual coming out of the printer. This cuts false positives right down and gives you a good indication of when something dodgy is happening. All in all, if it's a device plugged into your network it is probably pwnable, and should be part of your risk management. | {
"source": [
"https://security.stackexchange.com/questions/23691",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13641/"
]
} |
23,755 | I am developing a vending machine and want to make it secure. In a comment to my previous question , @Polynomial said "Vending machines (and similar devices) can often be pwned via buffer overflows on all sorts of easily accessible interfaces." I thought that buffer-overruns were the result of user input, and I certainly don't plan to have any in my vending machine (well, cash of course and the keypad to select a vend, but otherwise ... I will update the firmware and the pricing data from files on an SD card, but that is being discussed in my previous question). I quibble over "easily accessible interfaces" since all interfaces are inside a physically locked cabinet and, as was pointed out in the previous question, once the door is open it's easy enough to take the cash and run. But, I freely confess that am a total n0b at this security thing. Please advise me of what I need to watch out for. How would you - theoretically - hack or otherwise steal from a vending machine? What could cause me the greatest loss? And what can I do to prevent it? [Update] v1.0 is a soft-drinks vending machine, but we could use it for other porpoises - maybe even ticketing. V1.0 is not networked, but I hope that v2 will be (at which point, I plan to encrypt all network traffic). I am thinking about @tylerl saying that most attacks would be physical and am doing a lot of testing on pulling the power during vend/refund, to see if I can get both the soda and the cash. It doesn't sound much, but if it can be done consistently and word gets around ... I also think a tilt sensor would be a good idea and cheap to include. | When I wrote that comment, I was referring to vending machines in a generalised sense - including things like ticket machines as well as the obvious food-dispensing ones. I was specifically thinking about a model of ticket vending machine at a place I frequent, which has the following interfaces: Keypad Credit / debit card terminal RFID for contactless use of a customer loyalty card, but also for employee override. Ethernet cable plugged in the back. JTAG port behind the front panel (requires you to unlock the front panel, but it's only a tubular pin lock and can easily be defeated) So, how might we own such a vending machine? Well, it's got enough interfaces for us to try... Keypad The keypad is an interesting vector, but it's unlikely to fall to any form of buffer overflow since there aren't really any buffers involved. At most we might be able to figure out some sort of back-door access code that gets us into a config screen, but it's doubtful. Credit / debit card terminal The one near me has an Ingenico i3300 card reader fixed into a recess in the side of the machine. I happen to have one to hand (yay eBay!) and can approach the reverse engineering of it in two ways: Attack the hardware. There's an FCC ID on the device, which I used to pull up the regulatory information from the Office of Engineering and Technology . The FCC deal with emissions testing and a bunch of other stuff, and as part of an application the company must provide detailed documentation of the product, internal and external photos (great for me, since opening the device myself would trip the tamper detect) and other test data. From there, I might discover a weakness in how the card reader detects intrusions, and find a way to open it and mess with the internal firmware. If I screw up, it's not a big deal - I can pick up another for less than £10 on eBay. 
Alternatively, I could remove the real board and replace it with my own, with an XBee / bluetooth / 802.11x device that transmits card info and pin numbers. Attack the software. There has been a lot of research into this (e.g. PinPadPwn) and many devices are vulnerable to buffer overflows from custom cards. It's possible to program the chips on chip & pin cards to install firmware mods onto a device, simply via putting them in the device as if you were a normal customer. It's then possible to come back later and download card numbers and pins onto another special card. Scary, eh? RFID This is a likely source of ownage. It's a bi-directional communications port that allows us to send and receive data that will be directly handled by code on the machine itself, rather than a separate module. A lot of RFID data contains strings and integers, so overflows are likely. We could also take a look at capturing the data from an employee override swipe, which could open up new possibilities to steal stuff from the machine. In order to actually fuzz the device, we'd need to have the vending machine in our possession. This time, I don't happen to have one to hand. The physical possession requirement with such a large bit of kit does give a barrier to entry, but it's possible to get them second hand. A discrete RFID sniffer should be able to record data from live transactions, though, which could be used to replay communications. Ethernet When I saw the ethernet cable, I giggled like a script kiddy finding an SQL injection hole. It's trivial to unplug one of these cables and insert a pass-through device to record and alter traffic going to and from the device. You could do this with an embedded device like a Wifi Pineapple. It's low cost and potentially high-yield, because you can monitor and fuzz live devices from a distance. I've got no idea what data is going down those lines, but it'd be fun to find out. JTAG If you can get the cover off, the JTAG port is the pinnacle of pwnage. The device is probably an embedded Linux system running on an ARM chip, so getting access to the JTAG gives you full control over the processor and RAM. You'd be able to pull out a memory image (probably containing firmware) and analyse it, and later go back and make changes. If the bad guys can get at your JTAG, you're owned. So, how can you stop this from being attacked? For a large part you can't, but mitigation is an important thing. Here are a few tips: Remember that you're dealing with money, and take security precautions as such. Tip sensors on food vending machines aren't useful if a bad guy can turn them off with a magnet, or change the software to avoid the alarm. Choose card machine vendors that have a long track history of no security issues, and get some insurance against any failings in that department. Lock communications ports down, and have a way to lock any ethernet cables into the device so that they can't be easily replaced. Use transport security (e.g. SSL) if you're talking to external devices via TCP/IP. If they're network-connected devices, segment them from your internal network. Plugging a device into an ethernet socket is so damn easy . Have your software reviewed by a security consultant - especially if you've got RFID or NFC involved. Don't use tubular pin locks that can be defeated with a damn pen. Use a proper lock. Stay paranoid. | {
"source": [
"https://security.stackexchange.com/questions/23755",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4321/"
]
} |
23,874 | I'm careful to use strong passwords (according to How Big is Your Haystack , my passwords would take a massive cracking array 1.5 million centuries to crack), I don't reuse passwords across sites, and I use two-factor authentication where it's available. But typing in those passwords all the time is a real hassle, especially on a phone or tablet. A friend recently asked why I don't just use a relatively weak password for sites like Gmail where I have two-factor authentication enabled, and I didn't really have a good answer. Even if someone brute-forced my password, they'd still need to be in physical possession of my phone to get in. So: Is it safe to weaken my Gmail password for my own convenience? Or is there a realistic scenario that I'm not taking into consideration? Clarification: I'm not talking about using a trivial password (e.g. "a") - no site will let me do that anyway. I'm talking about going from a 16-character password with a search space on the order of 10 30 to an 8-character password with a search space on the order of 10 14 . | Specifically for Google , if you use two-factor authentication it is safe to "weaken" your password "from a 16-character password with a search space on the order of 10 30 to an 8-character password with a search space on the order of 10 14 " as long as you use a good 8-character password (i.e. completely random and not re-used across sites). The strength of two-factor authentication lies in the assumption that the two factors require different kinds of attack and it is unlikely that a single attacker would perform both kinds of attacks on a single target. To answer your question we need to analyze what attacks are possible on weaker passwords compared to stronger passwords and how likely it is that someone who is able to attack weaker passwords but not longer passwords will attack the second authentication factor. Now the security delta between "a 16-character password with a search space on the order of 10 30 " and "an 8-character password with a search space on the order of 10 14 " isn't as large as you may think - there aren't that many attacks that the weaker password is susceptible to but the stronger one isn't. Re-using passwords is dangerous regardless of the password length. The same is true for MITM, key loggers and most other common attacks on passwords. The kind of attacks in which the password length is meaningful are dictionary attacks - i.e. attacks in which the attacker does an exhaustive search for your password in a dictionary. Trying all possible passwords in the login screen is obviously not feasible for a search space of 10 14 , but if an attacker obtains a hash of your password then it may be feasible to check this hash for a search space of 10 14 but not for a search space of 10 30 . Here is where the fact that you've specified Google in your question is important. Google are serious about password security and do what it takes to keep your hashed passwords secure. This includes protecting the servers on which the hashed passwords reside and using salt, pepper and key stretching to thwart a hacker who has somehow managed to get the hashed passwords. If an attacker has succeeded in circumventing all the above, i.e. 
is able to obtain Google's database of salts and hashed passwords and is able to obtain the secret pepper and is able to do an exhaustive search with key stretching on a search space of 10 14 , then unless you're the director of the CIA that attacker won't be wasting any time on hacking your phone to bypass the second authentication factor - they will be too busy hacking the hundreds of millions of Gmail accounts that don't use two-factor authentication. Such a hacker isn't someone targeting you specifically - it's someone targeting the whole world. If your data is so valuable that such a powerful hacker would target you specifically then you really shouldn't be putting your data in Gmail in the first place. For that matter you shouldn't be putting it on any computer that is connected to the Internet. | {
"source": [
"https://security.stackexchange.com/questions/23874",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15942/"
]
} |
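To make the search-space comparison above concrete, here is a small back-of-the-envelope calculation; the guess rates are illustrative assumptions (a fast unsalted hash versus a heavily key-stretched one), not measurements of any real system.
# Rough worst-case exhaustive-search times for the two password search spaces discussed above.
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_exhaust(search_space, guesses_per_second):
    return search_space / guesses_per_second / SECONDS_PER_YEAR

for label, space in [("8-character password", 10**14), ("16-character password", 10**30)]:
    for rate in (10**9, 10**4):  # assumed rates: fast raw hash vs. heavy key stretching
        print(f"{label} at {rate:.0e} guesses/s: {years_to_exhaust(space, rate):.3g} years")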
23,880 | Is there any reason for this other than key/certificate management on the client-side? | What is authentication ? That's making sure that whoever is at the other end of the tunnel is who you believe . It really depends on the kind of identity that you want to use. For most Web sites, the interesting notion is continuity . They do not really care who is connected; indeed, the point of a Web site is to be readable by everybody . The kind of authenticity that a Web site wishes to achieve is to make sure that two successive requests are really from the same client (whoever that client may be). This is because the Web site experience is that of a logical succession of pages, driven by the user's actions (the links on which he clicks), and meddling with that movie-like framework is what the attacker is after. The user and the Web site designers think in terms of sessions and the attacker wants to hijack the session. This authenticity is achieved by several mechanisms: Successive HTTP requests from the same client go through the same connection (with the "keep-alive" feature). SSL offers a session resumption mechanism, which reuses the negotiated shared key (that's the "abbreviated handshake" described in section 7.3 of the SSL/TLS standard ). The HTTP request can include a cookie which the server chooses; typically, a random, user-specific value is sent as a cookie to the user, and, upon seeing it coming back in subsequent requests, the server is convinced that the requests come from the same user. The cookie is sufficient to ensure continuity. What extra value would client certificates add ? Well, not much. Certificates are a way to distribute key/name bindings. There are mostly three scenarios where client certificates (in a Web context) are relevant: The Web server needs an extended notion of user identity, which is defined by someone else . For instance, imagine a governmental service that can be accessed by citizens, but only after proper authentication, e.g. an online election system. What makes the citizen, with his definite name and birth date, is managed by the State at large, but not the same part of it than the one running the service. Client certificates are then a way to transport the authentication from the PKI which issued the certificate to the citizen, to the online election system which is not at all entitled to say who is named what, but must nonetheless keep clear records of who connects. The system designer has little trust in the robustness of existing Web browsers. If a user's browser is hijacked, then the secret cookie can be stolen and the user has basically lost, forever. On the other hand, if the user has a smart card, and that smart card stores a private key (which is used in combination with a client certificate), then a complete browser hijacking is still a big issue, but it is more contained : since the smart card will commit honourable seppuku instead of letting the precious private key be revealed, the situation can be recovered from. Once the mandatory format-and-reinstall has been performed, things are secure once again. The Web site does not only want authenticity, it also wishes to get non-repudiation . Non-repudiation is a legal concept which needs some support from the technical parts of the system, and that support is digital signatures . We are outside of what SSL/TLS provides, here. 
In no way can a SSL/TLS client authentication be a proof which could be used to resolve some legal conflict between the user and the server itself (a bank server cannot show the transcript of the connection and say "see, that user really asked me to buy these actions at that price", because the bank could easily have fabricated the whole transcript). For such things, one needs client certificates and some client-side code which uses the certificate to sign the actual data. However, once the hard work of issuing certificates to clients has been performed, it makes sense to just use them for HTTPS. In the common case of a Web server, none of these scenarios apply. So client certificates are not used, because they would raise practical issues while not adding any extra value. That would be a solution in search of a problem. | {
"source": [
"https://security.stackexchange.com/questions/23880",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15936/"
]
} |
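For reference, requiring client certificates is mostly a server-side switch. A minimal sketch using Python's standard ssl module follows; the file names are placeholders for whatever PKI actually issued the certificates.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
ctx.verify_mode = ssl.CERT_REQUIRED                 # handshake fails if the client presents no certificate
ctx.load_verify_locations(cafile="client-ca.pem")   # CA that issued the client certificates
# ctx.wrap_socket(tcp_socket, server_side=True) would then enforce client authentication
# during the TLS handshake, before any application data is exchanged.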
23,897 | My bank has recently sent me a Digipass/Secure Key, which looks like a tiny calculator . You press the green button to turn it on, type a PIN to unlock it, then press the green button again to generate a 6-digit code that you type when logging in. However, I don't actually understand how this device increases security. I'm assuming every device is linked to a person's account. But there is no communication from the device. I can press the button 10 times and generate 10 different codes, any of which seem to work. How does the bank know the code is genuine? | There are two standard ways to build such a device: Time-based. The device has a secret key K (known only to the device and to your bank). When you press the button, The device computes F(K, T) (where T is the current time) and outputs it as a 6-digit code. Your bank, which also knows K , can compute the same function. To deal with the fact that the clocks might not be perfectly synchronized, the bank will compute a range of values and test whether the 6-digit code you provide falls anywhere in that range. In other words, the bank might compute F(K, T-2) , F(K, T-1) , F(K, T) , F(K, T+1) , F(K, T+2) , and if the code you provide matches any of those 5 values, the bank accepts your login. I suspect this is not how your device works, since your device always gives you a different value every time you press the button. Sequence-based. The device has a secret key K (known only to the device and to your bank). It also contains a counter C , which counts how many times you have pressed the button so far. C is stored in non-volatile memory on your device. When you press the button, the device increments C , computes F(K, C) , and outputs it as a 6-digit code. This ensures that you get a different code every time. The bank also tracks the current value of the counter for your device, and uses this to recognize whether the 6-digit code you provided is valid. Often, the bank will test a window of values. For instance, if the last counter value it saw was C , then the bank might compute F(K, C+1) , F(K, C+2) , F(K, C+3) , F(K, C+4) and accept your 6-digit code if it matches any of those four possibilities. This helps ensure that if you press the button once and then don't send it to the bank, you can still log on (you aren't locked out forevermore). In some schemes, if there is a gap in codes (e.g., because you pressed the button a few times and then didn't send the code to the bank), you will need to enter two consecutive valid codes before the bank will log you on. Based upon what you've told us, I would hypothesize that your device is probably using the sequence-based approach. | {
"source": [
"https://security.stackexchange.com/questions/23897",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15950/"
]
} |
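The sequence-based scheme described above is standardised as HOTP (RFC 4226). A minimal sketch of the code generation, using the RFC's own test secret; the bank-side window check works by evaluating the same function for the next few counter values.
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter C
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation, RFC 4226 section 5.3
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

key = b"12345678901234567890"                         # RFC 4226 test vector secret
print([hotp(key, c) for c in range(3)])               # ['755224', '287082', '359152']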
24,036 | I recently did a Bing search for PuTTY and can only guess at which distribution is "trusted" and contains no malware or sleuthing code. If you needed to download PuTTY for a high-security Windows installation, where would you get the binaries from? Would you compile from source? | The official site is www.chiark.greenend.org.uk/~sgtatham/putty, and you can find the download in its download section. If you want to play it safe, you can verify the signature of the download. In my opinion, compiling it from source is as safe as downloading the binary and checking the signature (make sure to also verify the key itself with at least one trusted signer). Unless you review the source code (including all needed libraries) there is no point in spending the added effort of compiling it yourself, since both parts, the source code and the binaries, are signed with the same key. The only advantage you gain by compiling it yourself is the opportunity to review the code so as to mitigate the risk that the authors of PuTTY could have added some backdoors or malware to it. But again, you would have to thoroughly review the code and all needed libraries to actually gain that benefit. | {
"source": [
"https://security.stackexchange.com/questions/24036",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
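PuTTY's downloads are signed with GnuPG, so the primary check is gpg --verify against the project's signing keys. As a complementary sanity check, comparing a download against a digest obtained over a trusted channel takes only a few lines; the expected value and file name below are placeholders.
import hashlib

EXPECTED_SHA256 = "0" * 64   # placeholder: paste the digest published over a trusted channel

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

actual = sha256_of("putty-installer.msi")   # placeholder file name
print("OK" if actual == EXPECTED_SHA256 else f"MISMATCH: {actual}")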
24,163 | I'm a moderator at a forum. We want to have a new style for the forum.
We're thinking about announcing a competition to the users to come up with the best CSS design; we would adopt the best submission. How dangerous would this be? How dangerous is it to use CSS styles from someone we don't trust? Is it possible for a CSS designer to add a malicious code or function to the CSS style itself? | It is not advisable to use CSS styles from a source you don't trust, without some sort of review. There are some risks, particularly on older browsers. Some older browsers provide a way to embed JavaScript inside of CSS, so that the JavaScript will be automatically executed as soon as the browser loads the CSS. Browsers with this problem include IE6, IE7, as well as later versions of IE in IE7 compatibility mode; also IE Mobile 8. (In those older browsers, this is supported through CSS constructs like url , expression(...) , behavior , -moz-binding , -o-link , and probably more.) This weakness of older browsers allows an attacker who supplies malicious CSS to do anything an XSS attack can do. Using CSS styles from an attacker is basically a self-inflicted XSS vulnerability. Fortunately, modern browsers have closed all of these JavaScript pathways. Unfortunately, some users still use older browsers, so if you use CSS from an untrusted source, you'll be putting those users at risk. That said, I would recommend taking a risk management perspective. How great is the risk? How great is the benefit? In this case, I suspect the benefits are probably worth taking a slight risk, particularly if you adopt some mitigations to protect yourself. I would recommend: Review all of the proposed CSS before loading it into your site. Make sure you understand it, and it isn't obfuscated. Make sure it looks clean and well-organized and readable. Make sure it doesn't load external CSS or other external resources. See whether it looks reasonable to you. If you spot it doing stuff you don't understand, maybe don't use it. Check the source. Are they a trusted user of your community, who have been spending time on your site for a long time? Or are they a new user who you have little history of? There's probably less risk from a trusted member of the site, and more risk from an unknown. If it were a site I was running, I'd probably do it. Yes, I'd use the above mitigations to protect myself -- but I wouldn't let security get in the way of having fun things. Other resources: CSS security, from ha.ckers , Ending expressions, from MSDN | {
"source": [
"https://security.stackexchange.com/questions/24163",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10158/"
]
} |
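A crude first-pass screen for the review step suggested above can flag the legacy constructs that historically allowed script execution from CSS; this is a triage aid under the stated assumptions, not a substitute for actually reading the stylesheet.
import re

SUSPICIOUS_PATTERNS = [
    r"expression\s*\(",            # IE dynamic properties
    r"behavior\s*:",               # IE .htc behaviors
    r"-moz-binding\s*:",           # old Gecko XBL bindings
    r"javascript\s*:",             # javascript: URLs inside url(...)
    r"@import",                    # pulls in CSS that was never reviewed
    r"url\s*\(\s*['\"]?\s*data:",  # data: URLs worth a closer look
]

def audit_css(css: str):
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, css, re.IGNORECASE)]

print(audit_css("a { width: expression(alert(1)); color: red; }"))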
24,195 | When a machine has been infected with malware, most of us here immediately identify the appropriate action as "nuke it from orbit" - i.e. wipe the system and start over. Unfortunately, this is often costly for a company, especially if backups are configured in a less-than-optimal fashion. This can produce resistance from management and users who just want to carry on using the machine for work. After all, as far as they're concerned, they can "just run AV over it" and everything will be fine. How do you explain the problem to management and users of a non-technical nature? I could easily produce a technical reason, but I'm having trouble coming up with the appropriate wording for non-technical users. I'd especially appreciate any way of speaking that the recipient can identify with, e.g. appealing to a manager's sense of risk management. | In my experience management doesn't like to listen to clever analogies. Depending on the person they care about the bottom line in dollars or hours of productivity. I would explain: The actual bottom line is that a compromise of our data will cost the
company approximately X dollars + Y hours to recover. This is Z%
likely to happen given the malware that is on this machine. A new
install will cost A dollars + B hours to recover. You pick the
appropriate action. It's short and clear and doesn't really leave them any room to argue. They will clearly understand the risk and should make the right decision. | {
"source": [
"https://security.stackexchange.com/questions/24195",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5400/"
]
} |
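The comparison in the reply above is essentially an expected-cost calculation; a toy version with made-up numbers shows the shape of the argument.
HOURLY_RATE = 75  # assumed loaded cost of one hour of lost productivity, in dollars

def total_cost(dollars, hours):
    return dollars + hours * HOURLY_RATE

expected_keep_using = 0.30 * total_cost(dollars=50_000, hours=400)  # Z = 30% chance of compromise
wipe_and_reinstall  = total_cost(dollars=500, hours=8)

print(f"Expected cost of 'just run AV over it': ${expected_keep_using:,.0f}")
print(f"Cost of wiping and reinstalling:        ${wipe_and_reinstall:,.0f}")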
24,291 | In a web applications context, when a user wants to change their current password, generally they would have to enter their current password first. However at this point, the user has already been authenticated using their current password to log in. I somewhat understand the existing password is required to prevent malicious users (who may access the current session on the user's machine) from changing the password. However can't this argument be used in any situation? Why not ask for the password every time a request for sensitive information is made? How is the act of changing a password any different? | If a user leaves their computer unattended for a few minutes (while logged in), we don't want someone else to be able to walk by and quickly change their password. For one thing, this would allow the attacker to change the associated email address, too, and now the legitimate owner is never getting his/her account back. For another thing, just think of the potential for office pranks! Changing your password is a sensitive enough operation that it makes sense to require the user to re-authenticate. And, since changing your password is a relatively rare operation, this doesn't introduce much inconvenience for users: it only changes the user experience in the rare cases where you change your password. | {
"source": [
"https://security.stackexchange.com/questions/24291",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16206/"
]
} |
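In code, the rule above usually appears as an explicit re-check of the current password inside the change-password handler, independent of the already-authenticated session. A self-contained sketch with an in-memory user record and PBKDF2, for illustration only.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = os.urandom(16)
user = {"salt": salt, "pw_hash": hash_password("old-secret", salt)}  # toy stored record

def change_password(user, current_password: str, new_password: str) -> bool:
    # Re-check the current password even though the session is already authenticated,
    # so someone at an unattended, logged-in machine cannot take over the account.
    supplied = hash_password(current_password, user["salt"])
    if not hmac.compare_digest(user["pw_hash"], supplied):
        return False
    user["salt"] = os.urandom(16)
    user["pw_hash"] = hash_password(new_password, user["salt"])
    return True

print(change_password(user, "wrong-guess", "x"))           # False
print(change_password(user, "old-secret", "new-secret"))   # True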
24,310 | In terms of a home network, is there any reason to set up a router firewall so that all outgoing ports are blocked, and then open specific ports for things such as HTTP, HTTPS, etc. Given that every computer on the network is trusted, surely the amount of extra security provided by blocking outgoing ports would be pretty much negligible? | Blocking outbound traffic is usually of benefit in limiting what an attacker can do once they've compromised a system on your network. So for example if they've managed to get malware onto a system (via an infected e-mail or browser page), the malware might try to "call home" to a command and control system on the Internet to get additional code downloaded or to accept tasks from a control system (e.g. sending spam) Blocking outbound traffic can help stop this from happening, so it's not so much stopping you getting infected as making it less bad when it's happened. Could be overkill for a home network tho' as there's a lot of programs which make connections outbound and you'd need to spend a bit of time setting up all the exceptions. | {
"source": [
"https://security.stackexchange.com/questions/24310",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16216/"
]
} |
24,400 | I came across this article when looking for ways of hardening my laptop, and this guide seemed extremely thorough. It is perhaps overly paranoid, but it will be good to have a guide I can trust should I come in a position, most likely work related, that requires this level of protection. My questions are: are there any holes in the above guide that the author has failed to mention that would allow an attacker access? And to what lengths would a would be attacker have to go to gain access, without resorting to beating you with a spanner? | Some commentary: Defending from common thieves Rather than wiping Windows 7, I've left it as a honeypot operating
system. If a thief steals the laptop, when they turn it on, it will
automatically boot up into Windows, without so much as even being
prompted for a password. I installed a free application called Prey
which will allow me to grab loads of information from the laptop, such
as its location, and pictures from the built in webcam. Unlikely. Most laptop thieves have been in prison at one point in their lives, so they've learnt a thing or two from their fellow inmates. They are wise to this trick. When they steal electrical items that can communicate, they take batteries out (and SIM cards if applicable). Most mobile phone thieves carry opening tools for iPhones, so they can quickly remove the SIM. They'll then sell the laptop onto a fence (someone who sells on stolen goods) who will take out the drive and wipe it, and install a new OS on it. Usually a pirated copy of Windows. Defending from experts I consider full disk encryption to be essential if you want to secure your laptop. However, there are several attacks against machines that use full disk encryption; I decided to address as many of them as possible. Agreed that full-disk crypto is good! Let's move on... Evil maid attacks Even if you have a machine which uses full disk encryption, the boot partition and boot loader need to be stored somewhere unencrypted. Typically, people store it on the hard drive along with the encrypted partitions. The problem with doing this is, whenever you go to your machine, you don't know if somebody has tampered with the unencrypted data to install a software keylogger to capture your password. To get around this, I installed my boot partition and boot loader on a Corsair Survivor USB stick. I wanted a USB stick which would never leave my side. This is a real threat, albeit rather unlikely. However, the USB stick booter does not protect you. It makes the bad guy's job more difficult, but if they're going to the trouble of tampering with hardware, they could just as easily install a hardware keylogger under your keyboard. Lock the laptop in a case when it's out of your sight. I always do this in hotels, because it helps prevent theft too. I also suggest investing in a decent Kensington lock , which can be attached around pipes and various other strong fixtures. On a typical system with disk encryption, the encryption key is stored in RAM. This would be fine, if it weren't for the fact that there are several ways for an attacker with physical access, to read the contents of the RAM on a machine which is running, or which has been running recently. You might think that your machine's RAM is wiped as soon as it loses power, but that is not the case. It can take several minutes for the RAM to completely clear after losing power. Cooling the RAM with spray from an aircan, can extend that time period. This is misleading. At room temperature, modern DDR3 loses integrity below the 50% confidence mark at around 3-15 seconds after power-down. DDR2 tends to do so at around 20-30 seconds. This makes a cold-boot attack on DRAM modules very unlikely, even if you almost immediately leave the room and someone jumps in and drops your laptop into a convenient vat of liquid nitrogen. It's just not going to happen. However, you should avoid sleep mode, which puts the system in a low-power state and continues to refresh the RAM. In such a case, an attacker could freeze the DRAM modules on-site and take them to a lab for analysis. Hibernate isn't a problem if you're using full-disk encryption - the machine state is stored on disk and the system is completely shut down. You could password protect the BIOS and disable booting from anything other than the hard drive, but that still doesn't protect you. It won't protect against coldboot attacks, but it protects against a lot of other stuff. 
You'll also find that a lot of BIOS implementations offer a boot password too, which will make things more difficult for an attacker or thief, since the machine won't even POST without the password. The second defence I used is far more interesting. I use something called TRESOR. TRESOR is an implementation of AES as a cipher kernel module which stores the keys in the CPU debug registers, and which handles all of the crypto operations directly on the CPU, in a way which prevents the key from ever entering RAM. This only prevents your crypto keys from being stolen. The RAM will still contain all sorts of other important data, such as file system cache. It'll also contain your LSA keys, LSA vault and in-memory SAM (or the equivalent structures in Linux) which can be used to recover account password hashes. If these are cracked, it might give an attacker an idea of what your disk encryption password is. As such, I highly recommend using a unique password for your disk encryption. Attacks via firewire If a machine has a firewire port, or a card slot which would allow an attacker to insert a firewire card, then there's something else you need to address. It is possible to read the contents of RAM via a firewire port. Yep, and the same is possible via cardbus. If you're paranoid, you'll need to physically disable these interfaces. So instead, I built firewire as a set of kernel modules, and prevent the modules from loading under normal circumstances using /etc/modprobe.d/blacklist . Won't work. The OS can't interact with the device if you disable support or remove drivers, but a firewire device or cardbus module can still function in any way it likes at the hardware level - it just can't interact directly with your OS software. DMA requests and interrupts can still be sent by the device, allowing it to read memory. This allows the attacker to collect it later, or have it transmitted over RF. Preparing for disk encryption you should completely wipe a new hard drive with random data before setting up disk encryption. This is to make it impossible for somebody to be able to detect which parts of the drive have had encrypted data written to them. Doing this, is as simple as creating a partition on the space you want to fill with random data, and then using the "dd" command to copy data directly to that partition device in /dev/ from /dev/urandom . This took a few hours to run on my system. This is a silly way to do it. TrueCrypt automatically wipes the entire disk with random data as part of the volume creation wizard. It's also faster than direct reads from /dev/urandom , because TC does its own strong AES-based random data generation on top of what the system provides. Don't bother doing it manually with dd - you might get it wrong. I complicated this procedure slightly by using something I purchased called an EntropyKey. The EntropyKey provides a much larger source of "real" random data, as opposed to the much more limited "pseudo" random data that is generated by the operating system. This is maybe a good idea. I don't see any real technical analysis of this device, so I can't really say whether it's good or not. Passing various statistical checks doesn't really mean a whole lot. It uses avalanche noise from the P-N junction of a transistor, which is a known good source of random noise, since it is created by the probabilistic effect of quantum tunnelling. However, the noise isn't without bias. 
Due to various electrical properties of a circuit, such as inductance and capacitance, you'll see small biases towards particular waveforms that "resonate" with various power planes and loops within the board's copper pathways. This bias can be reduced with a software filter called von Neumann whitening , which involves translating bit pairs from 01 to 0, 10 to 1, and discarding 11 and 00 bit pairs. This reduces the output speed of the generator by a factor of at least 1/4 (for an idea random source), but removes bias. I can't tell if this is being done within the EntropyKey. More on disk encryption When I initially did the installation, I chose to protect the full disk encryption key with a passphrase. It is also possible to protect it with a keyfile. The advantage of using a keyfile is that you can store it on an external device. An attacker can't just observe you entering the password, they also need to get hold of the keyfile. It's also much more difficult to brute force. This is fine as long as you have both the keyfile and a password. It essentially provides two-factor authentication. If you need to use swap. Make sure it is encrypted too. The easiest way to make sure everything is encrypted is to create an encrypted device, and then use LVM on top of that so that all of your partitions and swap end up on top of the same encrypted device. Wise words. On Windows you should configure your virtual memory (swap) to be placed only on disks that are part of your full-disk encryption regime. The laptop I purchased has something called a Trusted Platform Module. This TPM can handle a number of crypto operations it's self. It also provides a random number generator similar to the EntropyKey. Apparently a lot of modern laptops contain one of these. I'd avoid this, due to past problems with TPMs and government backdoors. Whilst I can't cite any concrete evidence, it makes me nervous enough to avoid it. I use Firefox as my web browser. Surfing the web scares me; the browser strikes me as the most likely way in for a remote attacker. And yet, most people run the browser under the same user id as the rest of their programs. So if the browser is compromised, all of the files that your user can access are also instantly compromised. To try and minimise any damage if this happens, I decided to run Firefox in its own account. Again, smart. It'd be even smarter to sandbox it entirely, as well as run it under its own user. This is more difficult for Windows, so I'd say your best bet is to have your user account set as a limited user for most tasks (with UAC switched on!) and only switch to an admin account when you actually need to. Something like Sandboxie can also help. All of my incoming email is encrypted using my public GPG key. I detailed how I do this here. This means that I need to store my private GPG keys on my laptop. They're protected by a passphrase, but is this enough? If my account was compromised, an attacker could key log my passphrase and then steal my keys. Luckily, when I purchased my laptop, I ticked the "Smartcard Reader" option. I then purchased an OpenPGP Smartcard. My encryption and signing subkeys have been transferred to the smartcard, and the master key has been removed from my laptop. Useful, but this fails to take into account the "evil maid" attack mentioned earlier. It's possible to sniff the card's communications, or tamper with it so that it stores the keys for later retrieval. 
I use the following Firefox addons to minimise the chance of MITM attacks against my browsing, and to prevent most XSS/CSRF attacks: Certificate Patrol, Cipherfox, DNSSEC Validator, HTTPS Everywhere, HTTPS Finder, NoScript, Perspectives and Request Policy. Good list! I'd also include Collusion, AdBlock Plus and Greasemonkey. Collusion catches various tracking cookies and allows you to block them. AdBlock Plus is useful for killing off ads, which can be a source of various nasties from tracker cookies to malware. Greasemonkey is a user-script addon which lets you inject your own JavaScript into selected sites or pages. It can be useful for disabling various functionality on particular pages, or adding your own functionality. For example, I have a script that looks for various URL shortener links and replaces their target with a JavaScript popup box, allowing me to decide what to do with the link. I installed an application called blueproximity. It detects when my phone is in range, via bluetooth. If my phone moves out of range, the screen automatically locks. I've no doubt that this can be prevented via spoofing my phone, but it adds another layer of security. A cool gimmick, but easily spoofed. Bluetooth security is crap. All in all, a pretty neat article. It has its flaws, but it also has a lot of sound advice. If you follow even half of it you should be secure against all but the most determined attackers. At the end of the day, it's a balancing act between the risks you care about and the time/money you want to put into it. | {
"source": [
"https://security.stackexchange.com/questions/24400",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15903/"
]
} |
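The von Neumann whitening step mentioned above is only a few lines; a sketch operating on a string of raw bits from any noise source.
def von_neumann_whiten(bits: str) -> str:
    # 01 -> 0, 10 -> 1, 00 and 11 are discarded; removes bias at the cost of throughput.
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return "".join(out)

print(von_neumann_whiten("0110101100"))  # pairs 01,10,10,11,00 -> "011"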
24,407 | I was giving a presentation to my colleagues about cryptography basics in which I explained asymmetric algorithms and their use. One of the common questions from the audience about asymmetric encryption/decryption is: why can't we decrypt the cipher data using the same key (e.g. the public key) that we used to encrypt, as with a symmetric algorithm? I know it is a mathematical property that prevents this, but I really don't know how to explain it in plain English. The question is more like: if we do "10 + 2" (assume 10 is the plaintext and 2 is the key), then why can't we do "12 - 2" (12 is the ciphertext and 2 is the encryption key) to get the original data? Can anyone help me explain the principle of asymmetric algorithms in plain English? | It's like one of those padlocks that snap shut: Say you want to secure something in a box. Anyone can close the lock (the public key). This means anyone will be able to put something into the box and lock it; they won't be able to open the lock once it's locked, since you just pinch these closed. The key to open the lock is something only you have (the private key). You are the only one who will be able to open the lock and see what's inside the box. I suggest you buy one of these to demonstrate how they work. | {
"source": [
"https://security.stackexchange.com/questions/24407",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16288/"
]
} |
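If the audience wants to see the padlock in numbers rather than metaphor, a textbook-sized RSA example makes the point: anyone can lock with the public pair (e, n), but unlocking needs d, which is only easy to derive if you know the secret factors of n. Toy key sizes and no padding, strictly for explanation (requires Python 3.8+ for the modular inverse).
p, q = 61, 53                  # the secret factors
n = p * q                      # 3233, published as part of the public key
e = 17                         # public exponent, published
phi = (p - 1) * (q - 1)        # 3120, computable only if you know p and q
d = pow(e, -1, phi)            # 2753, the private key that "opens the padlock"

message = 10
ciphertext = pow(message, e, n)           # locking the box: anyone can do this with (e, n)
print(pow(ciphertext, d, n))              # 10 again; only the holder of d can do this
print(pow(ciphertext, e, n) == message)   # False: applying the public key again does NOT undo it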
24,444 | What set of GCC options provide the best protection against memory corruption vulnerabilities such as Buffer Overflows, and Dangling Pointers? Does GCC provide any type of ROP chain mitigation? Are there performance concerns or other issues that would prevent this GCC option from being on a mission critical application? I am looking at the Debian Hardening Guide as well as GCC Mudflap . Here are the following configurations I am considering: -D_FORTIFY_SOURCE=2
-fstack-protector --param ssp-buffer-size=4
-fPIE -pie
-Wl,-z,relro,-z,now (ld -z relro and ld -z now) Are there any improvments that can be made to this set of options? We are most worried about protecting WebKit. | I don't code for gcc, so hopefully someone else can add to this, or correct me. I'll edit it with responses. Some of these will not work for all circumstances. -Wall -Wextra Turn on all warnings to help ensure the underlying code is secure. -Wconversion -Wsign-conversion Warn on unsign/sign conversion. -Wformat-security Warn about uses of format functions that represent possible security problems. -Werror Turns all warnings into errors. -arch x86_64 Compile for 64-bit to take max advantage of address space (important for ASLR; more virtual address space to chose from when randomising layout). -mmitigate-rop Attempt to compile code without unintended return addresses, making ROP just a little harder. -mindirect-branch=thunk -mfunction-return=thunk Enables retpoline (return trampolines) to mitigate some variants of Spectre V2. The second flag is necessary on Skylake+ due to the fact that the branch target buffer is vulnerable. -fstack-protector-all -Wstack-protector --param ssp-buffer-size=4 Your choice of "-fstack-protector" does not protect all functions (see comments). You need -fstack-protector-all to guarantee guards are applied to all functions, although this will likely incur a performance penalty. Consider -fstack-protector-strong as a middle ground. The -Wstack-protector flag here gives warnings for any functions that aren't going to get protected. -fstack-clash-protection Defeats a class of attacks called stack clashing . -pie -fPIE Required to obtain the full security benefits of ASLR. -ftrapv Generates traps for signed overflow (currently bugged in gcc, and may interfere with UBSAN). -D_FORTIFY_SOURCE=2 Buffer overflow checks. See also difference between =2 and =1 . -Wl,-z,relro,-z,now RELRO (read-only relocation). The options relro & now specified together are known as "Full RELRO". You can specify "Partial RELRO" by omitting the now flag.
RELRO marks various ELF memory sections readonly (E.g. the GOT ). -Wl,-z,noexecstack Non-executable stack. This option marks the stack non-executable, probably incompatible with a lot of code but provides a lot of security against any possible code execution. ( https://www.win.tue.nl/~aeb/linux/hh/protection.html ) -fvtable-verify=[std|preinit|none] Vtable pointer verification. It enables verification at run time, for every virtual call, that the vtable pointer through which the call is made is valid for the type of the object, and has not been corrupted or overwritten. If an invalid vtable pointer is detected at run time, an error is reported and execution of the program is immediately halted.( https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html ) -fcf-protection=[full|branch|return|none] Enable code instrumentation of control-flow transfers to increase program security by checking that target addresses of control-flow transfer instructions (such as indirect function call, function return, indirect jump) are valid. Only available on x86(_64) with Intel's CET. ( https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html ) If compiling on Windows, please Visual Studio instead of GCC, as some protections for Windows (ex. SEHOP) are not part of GCC, but if you must use GCC: -Wl,dynamicbase Tell linker to use ASLR protection. -Wl,nxcompat Tell linker to use DEP protection. | {
"source": [
"https://security.stackexchange.com/questions/24444",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/975/"
]
} |
24,449 | On the surface, the inadvisability of security through obscurity is directly at odds with the concept of shared secrets (i.e. "passwords"). Which is to say: if secrecy around passwords is valuable, then by extension surely it must be of some value to keep the algorithm that uses the password secret as well. Many (arguably misguided) organizations may even go as far as to say that the system is the password. But to what degree can the secrecy of an algorithm be relied upon? Or perhaps more appropriately, in what way is secrecy surrounding an algorithm doomed to fail while secrecy surrounding passwords is not? If, on the other hand, secrecy of an algorithm is desirable, then how important is it? To what lengths should a developer reasonably go to keep his crypto secret? EDIT To be clear, this isn't about creating a new, untested algorithm, but rather keeping secret the details of which algorithm you choose. For example, the technique Windows uses to hash passwords is to apply known hashing algorithms in a sequence which is not published and which appears to change between different versions of Windows. | Much of the work on passwords and keys is related to controlling where they are stored and copied. A password is stored in the mind of a human user. It is entered on a keyboard (or equivalent) and goes through the registers of a CPU and the RAM of the computer, while it is processed. Unless some awful blunder is done, the password never reaches a permanent storage area like a hard disk. An algorithm exists as source code somewhere, on the machine of a developer, a source versioning system, and backups. There are design documents, which have been shown to various people (e.g. those who decide whether to fund the development of the system or not), and often neglectfully deposited on an anonymous shelf of in some layer of crust on the typical desktop. More importantly, the algorithm also exists as some executable file on the deployed system itself; binary is not as readable as source code but reverse engineering works nevertheless. Therefore we cannot reasonably consider that the algorithm is secret, or at least as secret as the password (or the key). Really, cryptographic methods were split one century ago into the algorithm and the key precisely because of that: in a functioning system, part of the method necessarily leaks traces everywhere. Having a key means concentrating the secrecy in the other half, the part which we can keep secret. "Security through obscurity" is an expression which uses the term obscurity , not secrecy . Cryptography is about achieving security through secrecy . That's the whole difference: a password can be secret ; an algorithm is, at best, obscure . Obscurity is dispelled as soon as some smart guy thinks about bringing a metaphorical lantern. Secrecy is more like a steel safe: to break through it, you need more powerful tools. Smart guy Auguste Kerckhoffs already wrote it more than a century ago. Despite the invention of the computer and all of today's technology, his findings still apply. It took a while for practitioners of cryptography to learn that lesson; 60 years later, Germans were still putting a great deal in the "secrecy" of the design of the Enigma machine . 
Note that when Germans put the 4-rotor Navy Enigma into use, Allied cryptographers were inconvenienced (routine cracking stopped for a few months) but were not totally baffled because some captured documents from the preceding year alluded to the development of the new version, with a fourth "reflector" rotor. There you have it: algorithm secrecy could not be achieved in practice. An additional twist is that algorithm obscurity can harm security . What I explain above is that obscurity cannot be trusted for security: it might increase security, but not by much (and you cannot really know "how much"). It turns out that it can also decrease security. The problem is the following: it is very hard to make a secure cryptographic algorithm. The only known method is to publish the algorithm and wait for the collective wisdom of cryptographers around the world to gnaw at it and reach a conclusion which can be expressed as either "can be broken that way" or "apparently robust". An algorithm is declared "good" only if it resisted the onslaught of dozens or hundreds of competent cryptographers for at least three or four years. Internet, academic procrastination and human hubris are such that, with the right communications campaign, you can get these few hundreds of cryptographers to do that hard assessing job for free -- provided that you make the algorithm public (and "attractive" in some way). If you want to maintain the algorithm obscure, then you cannot benefit from such free consulting. Instead, you have to pay. Twenty good cryptographers for, say, two years of effort: we are talking about millions of dollars, here. Nobody does that, that's way too expensive. Correspondingly, obscure algorithms are invariably much less stress-tested than public algorithms, and therefore less secure . (Note the fine print: security is not only about not being broken, but also about having some a priori knowledge that breaches won't happen. I want to be able to sleep at night.) Summary: You should not keep your algorithm secret. You do not know how much your algorithm is secret. You cannot keep your algorithm secret. But you can and must keep your password secret, and you can know "how much" secret it is (that's all the "entropy" business). | {
"source": [
"https://security.stackexchange.com/questions/24449",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2264/"
]
} |
24,493 | I am curious because, I experienced something bizarre recently. About a month ago, someone asked me to find out a price for a T-shirt printing machine, and probably for the first time, I pressed these keys and started searching, searching, for long and many found many results only using through the Google search engine. But now, even after two weeks has passed.. I see almost all the Google ads in the site I visit including YouTube have been changed, and only advertise T-shirt printing machine. Like the ones I was searching. There is only one explanation for this, and Google has been controlling what I search and keeps log of these keywords. This might be normal for some people, and most of you might have known about this, but I find it unjust that a site can keep track of what you type and what you look for without asking permission. That's right! isn't it? We can agree to the terms and conditions when we signup to Google, but, they should be mentioned in the terms, or be specified in a very apparent manner. edit: There have been responses from users, which according to them: Google has been snooping information from their private emails. | NOTE : I work at Google now. I didn't work there when I wrote this. This is my own opinion, not Google's. But it's the only opinion that makes any sense. It's also probably the most important thing I've written on this site; you must understand this to understand what online privacy is. Advertisers use what information they have to try to best guess what sort of ads you will be most interested in. In Google's case, your search activity is probably the best indicator they have, but ad clicks and ad impressions are also considered. In Amazon's case, for example, your purchase and product browsing history is their best indicator, and you'll probably notice that their suggestions closely mirror your recent history — even if that most recent history dates back to two years ago. My own search and browsing habits tend to favor highly technical content; servers, programming, malware, etc. The ads I see when browsing under that profile therefore tend to also favor technical content: colocation, hosting, software, etc. This is totally Fine By Me™ . When I watch TV, I have to endure a depressing amount of ads about feminine incontinence, retirement homes, and herpes medication. But on the Internet, the ads are all software and servers. Do I think that's creepy? Hell no. The fewer herpes ads the better, IMO. To be clear: I'm a strong proponent of online privacy. However, I manage my online privacy by controlling the information I make available online. I don't expect others to maintain my privacy for me; the concept doesn't even make sense. If you don't want them to know something, then don't tell them. Telling them and then demanding that they forget is a recipe for disaster on numerous fronts, and even comical from a security standpoint. My search history is carefully curated; if I don't want a search associated with me, I use a private browsing session. Sure, I could use a service that promises to not remember what I tell them, but I would be an idiot if I were to depend on that promise. Remember Hushmail? Still, I prefer to use a service that allows me to craft my own online preference profile so that they can filter out all the crap I clearly don't want. Is this legal? — So far yes. 
I would hope that it remains so, since the unintended consequences of making it illegal would be so far reaching and unexpected that it would have devastating consequences for completely innocent Internet users and site operators. Internet regulation reliably makes things worse. Does this bother me? — Of course not. If I buy an apple from a market, is it creepy for the vendor to ask me the next day whether I liked my apple? Do I think he's spying on me? If I tell him I liked it, is it creepy for him to suggest that I buy more apples at a subsequent visit? No, of course not. It's just good customer service. If he tells the fruit vendor next door that I like apples, should that be illegal? Of course not: It's his information to give, just like any conclusions I make about him are my information to share as I see fit. Vendors online remember what we tell them just like vendors at your local market. My fruit vendor may remember that I visited his store even though I didn't buy anything, and yet I don't assume that he's spying on me. I'm visiting him, not the other way around. Likewise, when I visit Google, I don't think it's spying for them to remember what I ask them. The biggest problem with online privacy is the implicit belief that because I connect to the Internet from the privacy of my own home, anything I do on the Internet also happens in the privacy of my own home. This is lunacy. Everything you do on the Internet is absolutely public unless you can verifiably prove otherwise. | {
"source": [
"https://security.stackexchange.com/questions/24493",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16343/"
]
} |
24,557 | In Windows Firewall with Advanced Settings I can create a rule which blocks all inbound or outbound traffic for particular program by pointing to its .exe file. The problem is that this program has many .exe files in its directory, as well as additional ones in its sub directories. So my question is: do I need to make separate rules for each .exe file, which in this case would mean about 50 rules? Or is there a way to block the traffic for a group of .exe files based on their location on the local hard drive? | You can use a Simple Batch File. Open Notepad and copy/paste the script below into a blank document. Save the file as BLOCKALL.BAT. Now copy that file to the same directory as the EXEs you want to block and double click it. It will add outbound rules to advanced Windows Firewall settings blocking all EXEs in that folder and sub-folders as well. It is tested with Windows 7, but it should work with other versions of Windows that use Windows Firewall. NOTE : Batch starts itself in system32. Thus you need to prepend it with cd /d "%~dp0" to make it work in current directory. The resulting script would be as follows: @ setlocal enableextensions
@ cd /d "%~dp0"
for /R %%a in (*.exe) do (
netsh advfirewall firewall add rule name="Blocked with Batchfile %%a" dir=out program="%%a" action=block
) | {
"source": [
"https://security.stackexchange.com/questions/24557",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16238/"
]
} |
24,561 | I've run ssltest on web application and it found "Chain issues - Contains anchor" (section "Additional Certificates (if supplied)") What does it mean? Should it be fixed? Can it be exploited? | When the server sends its certificate to the client, it actually sends a certificate chain so that the client finds it easier to validate the server certificate (the client is not required to use exactly that chain, but, in practice, most client will use the chain and none other). This is described in the SSL/TLS standard , section 7.4.2, with, in particular, this enlightening excerpt: The sender's
certificate MUST come first in the list. Each following
certificate MUST directly certify the one preceding it. Because
certificate validation requires that root keys be distributed
independently, the self-signed certificate that specifies the root
certificate authority MAY be omitted from the chain, under the
assumption that the remote end must already possess it in order to
validate it in any case. Since that's a "MAY" case (the "MAY", "MUST", "SHOULD"... terminology in RFC has very precise meanings explained in RFC 2119 ), the server is allowed to include the root certificate (aka "trust anchor") in the chain, or omit it. Some servers include it, others do not. A typical client implementation, intent on using exactly the chain which was sent, will first try to find the chain certificates in its trust store; failing that, it will try to find an issuer for the "last" chain certificate in its trust store. So, either way, this is standards compliant, and it should work. (There is a minor source of confusion with regards to chain order. In true X.509 , the chain is ordered from trust anchor to end-entity certificate. The SSL/TLS "Certificate" message is encoded in reverse order, the end-entity certificate, which qualifies the server itself, coming first. Here, I am using "last" in SSL/TLS terminology, not X.509.) The only bad thing that can be told about sending the root in the chain is that it uses a bit of network bandwidth needlessly. That's about 1 kB data per connection which includes a full handshake . In a typical session between a client (Web browser) and a server, only one connection will be of that type; the other connections from the client will use "abbreviated handshakes" which build on the initial handshake, and do not use certificates at all. And each connection will be kept alive for many successive HTTP requests. So the network overhead implied by the root-sending is slight. | {
"source": [
"https://security.stackexchange.com/questions/24561",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5501/"
]
} |
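If you want to test one of your own PEM chain files for this finding, the check is simply whether the last certificate in the chain is self-issued. A sketch assuming the third-party cryptography package; the file name is a placeholder.
from cryptography import x509

def chain_contains_anchor(pem_path: str) -> bool:
    pem = open(pem_path, "rb").read()
    marker = b"-----BEGIN CERTIFICATE-----"
    blocks = [marker + part for part in pem.split(marker)[1:]]
    certs = [x509.load_pem_x509_certificate(b) for b in blocks]
    last = certs[-1]
    return last.subject == last.issuer   # self-issued last cert => the anchor was included

print(chain_contains_anchor("fullchain.pem"))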
24,567 | We use OpenLDAP version 2.4.24 $ /usr/local/libexec/slapd -VV
@(#) $OpenLDAP: slapd 2.4.24 (Mar 5 2011 06:36:43) $
steve@sunblade2500:/bigdisk/SOURCES/S10/openldap-2.4.24/servers/slapd OpenLDAP version 2.4.33 is currently available (27.11.2012). We need to decide whether to upgrade; the OpenLDAP expert has left us. Are there risky vulnerabilities in our version? Is there a good web interface to check $SOFTWARE_NAME and $VERSION
and see known vulnerabilities with risk score. I found http://www.cvedetails.com/vulnerability-list/vendor_id-439/Openldap.html but can't filter on version number. | | {
"source": [
"https://security.stackexchange.com/questions/24567",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16066/"
]
} |
24,582 | On my work laptop I regularly create a VPN connection that I use to remote desktop to our web server. Is this safe to do at a coffee shop where random people are connected to the same wifi network? | Yes, a VPN connection encrypts the connection between your computer and the remote VPN host. The connection would just look like gibberish to anyone sniffing the traffic, either in the coffee shop or on the Internet. It is worth noting that the same applies to any content sent over HTTPS even if you aren't using a VPN. It is also worth noting that if you are using the current version of Microsoft Terminal Services (i.e. Remote Desktop), the VPN connection isn't even strictly necessary (from a security standpoint) as the remote desktop connection itself is also encrypted. Note that this encryption setting can be reduced by administrative configuration on the network, though, so the VPN still isn't a bad idea. | {
"source": [
"https://security.stackexchange.com/questions/24582",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5169/"
]
} |
24,637 | I work for a company which has ~16,000 employees. Periodically, our VP of IT sends out a newsletter with "tech-tips" and misc IT stuff. The topic of this week's newsletter was "password security". The introductory paragraph caught my attention: We just decrypted all user passwords in use to see if employees are using strong passwords. We used a combination of brute force,rcracki, hashcat/oclHhashcat and john-the-ripper tools to decrypt the passwords. This was followed by a typical newsletter discussing good password practices: Don't use dictionary words; be sure to use mixed case/symbols; don't write your password on a yellow-sticky by your monitor; etc... Now, I'm no cryptography whiz, but I was skeptical that he claimed that they had "decrypted all user passwords". I can believe that they maybe ran all the hashes through their tools and "decrypted" a large portion of them, but is it really reasonable that they would have the computing resources to claim to have cracked them all ? (BTW, is "decrypted" even correct in this context?) I emailed him asking if he meant to say that they had run all passwords THROUGH the cracking tools, and merely found a large number of weaker ones. However he replied that, no, they had indeed decrypted ALL of the user passwords. I can appreciate the security lesson he's trying to teach here, but my password is 8 random characters, generated by KeePass. I thought it was pretty good, it's something similar to Q6&dt>w} (obviously that's not really it, but it's similar to that). Are modern cracking tools really that powerful? Or is this guy probably just pulling my leg in the name of a good security lesson? P.S. I replied to his email asking if he could tell me what the last two characters of my password were. No reply yet, but I will update if he manages to produce it! EDIT: Some answers are discussing my particular password length. Note that not only is he claiming that they cracked MY password (which is believable if they singled me out), but that he's claiming that they did this for ALL users - and we have well over 10,000 employees!! I'm not so naive as to think that that means 10,000 good, secure passwords, but if even 1% of the users have a good password, that's still 100 good secure passwords they claim to have cracked! | The only realistic way that 100% of passwords got cracked is if you're storing LM hashes on windows. LM hashes split into 2 seven character chunks making brute force/rainbow table attacks practicable (they're also case insensitive for added ease). Rainbow tables exist for this and it's easily do-able. Outwith that, anyone with 10+ character passwords that aren't in a dictionary (or findable by mutating dictionary words) aren't going to get cracked on any reasonable system, even with weak algorithms (e.g. md5) and no salt. AFAIK rainbow tables aren't practical on passwords that long (for reference free rainbow tables have a 2.8 TB pack of MD5 hashes which tops out at some nine character passwords (not full char set). One point I would make is that if I was the VP of IT I'd be concentrating on getting rid of LM hashes rather than just telling people about good password practices for the very reason that he was able to retrieve 100% of passwords :) | {
"source": [
"https://security.stackexchange.com/questions/24637",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16510/"
]
} |
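To make the LM-hash weakness described in the answer above concrete, the arithmetic below compares the work needed to brute-force one 14-character mixed-case password against the work needed to crack the two independent, case-insensitive 7-character halves that LM actually stores. This is a minimal Python sketch of the keyspace sizes only, not an implementation of the LM algorithm itself, and the 69-symbol LM character count is an approximation.
# Rough keyspace comparison: why LM hashes fall so quickly.
# LM uppercases the password, pads it to 14 bytes and hashes each
# 7-byte half independently, so an attacker cracks two small halves
# instead of one long password.
FULL_CHARSET = 95        # printable ASCII, mixed case
LM_CHARSET = 69          # approx.: uppercase letters, digits, symbols only

proper_14_char = FULL_CHARSET ** 14          # one 14-char search
lm_two_halves = 2 * (LM_CHARSET ** 7)        # two independent 7-char searches

print(f"14-char mixed-case search space : {proper_14_char:.3e}")
print(f"LM (2 x 7-char, upper-case only): {lm_two_halves:.3e}")
print(f"LM is smaller by a factor of    : {proper_14_char / lm_two_halves:.3e}")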
24,714 | We have users who sync data at the company with their home computers.
What's the best way to block it? Block *.dropbox.com. Find out all Dropbox IPs and block those IPs. For Windows users, deploy a GPO to prevent Dropbox installation. | Using Dropbox is not inherently a greater security risk than other methods of data transfer. I work at a security consulting firm and we often use Dropbox to move encrypted archives to our clients. We also use SFTP, but this seems to be problematic for some of our clients. A better policy is that all company data must be encrypted at rest. This policy should include company laptops, servers, cloud services and anywhere else you may be storing company data. Make sure you educate your employees about storing and transferring data in a secure manner. Blocking Dropbox may have adverse effects, such as forcing employees to use less secure methods of transferring data. I have found that employees will find creative ways of doing their job, and it's not always secure. | {
"source": [
"https://security.stackexchange.com/questions/24714",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15580/"
]
} |
24,748 | I'm a university lecturer and a web and desktop software developer. For many reasons I want to learn software security to change my field in the long run. It's been a few days since I started studying TCP/IP as my first step in this self-learning process. I thought it's better to share my efforts with the community to get as much as I can from experienced experts. I appreciate any guidelines you can share. Do you think I'm on the right track? Do you think it's a must to learn TCP/IP in depth? | IT in general, IT security in particular, is an area where you should always learn. When you do not want to learn any further, then it is time to retire. Therefore, you should already be eager to learn TCP/IP, and your question should be: "do I learn TCP/IP first, or is there something more urgent?" Knowing the internals of TCP/IP is an invaluable tool for understanding what is going on; it is very enlightening. I warmly recommend that you study it. Similarly, I recommend some knowledge of assembly, possibly electronics. Grasping the internal structure of protocols and languages and architectures allows you to keep track of the ever-changing field of IT security with much less effort than simply looking at the surface of things. (For instance, in my everyday work, knowing how SSL works turns out to be extremely useful, on a daily basis.) | {
"source": [
"https://security.stackexchange.com/questions/24748",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16347/"
]
} |
24,850 | I am developing an application which has a client-server relationship, and I am having trouble deciding on the algorithm by which the session identifier is determined. My goal is to restrict imposters from acquiring other users' private data. I'm considering two options: Option 1: Generate a random 32-character hex string, store it in a database, and pass it from the server to the client upon successful client login. The client then stores this identifier and uses it in any future request to the server, which would cross-check it with the stored identifier. Option 2: Create a hash from a combination of the session's start time and the client's login username and/or hashed password and use it for all future requests to the server. The session hash would be stored in a database upon the first request, and cross-checked for any future request from the client. Other info: Multiple clients can be connected from the same IP simultaneously, and no two clients should have the same session identifier. Question: Which of these options is a better approach, with regards to my concerns (below) about each? My concern over the first option is that the identifier is completely random and therefore could be replicated by chance (although it's a 1 in 3.4 × 10^38 chance), and used to "steal" one user's (who would also need to be using the client at the time) private data. My concern over the second option is that it has a security flaw, namely that if a user's hashed password is intercepted somehow, the entire session hash could be duped and the user's private data could be stolen. Thanks for any and all input. | The basic concept of a session identifier is that it is a short-lived secret name for the session, a dynamic relationship which is under the control of the server (i.e. under the control of your code). It is up to you to decide when sessions start and stop. The two security characteristics of a successful session identifier generation algorithm are: No two distinct sessions shall have the same identifier, with overwhelming probability. It should not be computationally feasible to "hit" a session identifier when trying random ones, with non-negligible probability. These two properties are achieved with a random session ID of at least, say, 16 bytes (32 characters with hexadecimal representation), provided that the generator is a cryptographically strong PRNG (/dev/urandom on Unix-like systems, CryptGenRandom() on Windows/Win32, RNGCryptoServiceProvider on .NET...). Since you also store the session ID in a database server side, you could check for duplicates, and indeed your database will probably do it for you (you will want this ID to be an index key), but that's still time wasted because the probability is very low. Consider that every time you get out of your house, you are betting on the idea that you will not get struck by lightning. Getting killed by lightning has probability about 3 × 10^-10 per day (really). That's a life-threatening risk, your own life, to be precise. And yet you dismiss that risk, without ever thinking about it. What sense does it make, then, to worry about session ID collisions which are millions of times less probable, and would not kill anybody if they occurred? There is little point in throwing an extra hash function in the thing. Properly applied randomness will already give you all the uniqueness you need. Added complexity can only result in added weaknesses.
Cryptographic functions are relevant in a scenario where you not only want to have sessions, but you also want to avoid any server-based storage cost; say, you have no database on the server. This kind of state offloading requires a MAC and possibly encryption (see this answer for some details). | {
"source": [
"https://security.stackexchange.com/questions/24850",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16679/"
]
} |
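As a concrete illustration of the advice in the preceding answer, the sketch below generates a 16-byte session identifier from a cryptographically strong source. It uses Python's secrets module purely as an example of "use the platform CSPRNG"; the same idea applies to /dev/urandom, CryptGenRandom() or RNGCryptoServiceProvider.
import secrets

def new_session_id() -> str:
    # 16 random bytes from the OS CSPRNG, rendered as 32 hex characters.
    # Collisions are so improbable that checking for them is optional.
    return secrets.token_hex(16)

print(new_session_id())   # e.g. 'f3a9c1...' - store this server-side, keyed to the user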
24,896 | Knowledge of a CA private key would allow MitM attackers to transparently supplant any certificates signed by that private key. It would also allow cyber criminals to start forging their own trusted certificates and selling them on the black market. Given the huge profits that could be made with such knowledge, and the fact that a highly trusted and ubiquitous certificate (such as any of the main Verisign keys) would be a very difficult thing to revoke quickly, it stands to reason that there would be highly motivated and well funded criminal elements attempting to get their hands on such keys on a regular basis. How do certification authorities deal with this threat? It sounds like a real nightmare, having to ring-fence the keys away from all human eyes, even the sysadmins. All the while the keys have to be used on a daily basis, often by internet-connected signing services. | Serious certification authorities use heavy procedures. At the core, the CA key will be stored in a Hardware Security Module ; but that's only part of the thing. The CA itself must be physically protected, which includes proactive and retrospective measures. Proactive measures are about preventing attacks from succeeding. For instance, the CA will be stored in a vault, with steel doors and guards. The machines themselves are locked, with several padlocks, and nobody holds more than one padlock key. Physical security is of paramount importance; the HSM is only the deepest layer. Retrospective measures are about recovering after an incident. The HSM will log all signatures. The machine is under 24/7 video surveillance, with off-site recording. These measures are about knowing what has happened (if you prefer, knowing a priori that, should a problem occur, we will be able to analyze it a posteriori ). For instance, if "illegal" certificates have been emitted but the complete list of such certificates can be rebuilt, then recovery is as "easy" as revoking the offending certificates. For extra recovery, the CA is often split into a long-lived root CA which is kept offline, and a short-lived intermediate CA. Both machines are in the cage and bunker; the root CA is never connected to a network. The root CA is physically accessed, with dual control (at least two people together, and video recording) on a regular basis, to emit the certificate for the intermediate CA, and the CRL. This allows revoking an intermediate CA if it got thoroughly hacked (to the point that its private key was stolen, or the list of fraudulently emitted certificates cannot be rebuilt). Initial setup of a serious root CA involves a Key Ceremony with herds of auditors with prying eyes, and a formalism which would not have been scorned by a Chinese Emperor from the Song dynasty. No amount of auditing can guarantee the absence of vulnerabilities; however, that kind of ceremony can be used to know what was done, to show that security issues have been thought about , and, come what may, to identify the culprit if trouble arises. I have been involved in several such ceremonies; they really are a big "security theater" but have merits beyond the mere display of activities: they force people to have written procedures for everything . The question is now: are existing CA really serious, in the way described above ? In my experience, they mostly are. 
If the CA has anything to do with VISA or MasterCard, then you can be sure that HSM, steel and ill-tempered pitbulls are part of the installation; VISA and MasterCard are about money and take it very seriously. For the CA which are included in Web browsers and operating systems, the browser or OS vendor tends to require a lot of insurance. There again, this is about money; but the insurance company will then require all the physical measures and accounting and auditing. Formally, this will use certifications like WebTrust . This is true even for infamous CA like DigiNotar or Comodo: note that while they got hacked and fake certificates were issued, the said certificates are known and were revoked (and Microsoft added them to a list of "forbidden certificates" which can be seen as a kind of "revoked, and we really mean it" -- software must go out of his way to accept them nonetheless). The "weak" CA are mostly the State-controlled root CA. Microsoft can refuse to include a root key from a private venture, if Microsoft feels that enough insurance has not been provided (they want to be sure that the CA is financially robust, so that operations can be maintained); I know of a bank with millions of customers who tried to get their root key included in Windows, and were dismissed on the basis that they were "too small". However, Microsoft is much weaker against official CA from sovereign states; if they want to do business in country X, they cannot really afford to reject the root CA key from government X. Unfortunately, not all governments are "serious" when it comes to protecting their CA keys... | {
"source": [
"https://security.stackexchange.com/questions/24896",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9377/"
]
} |
25,113 | If you purchase an existing domain name that was already used by someone else, what are the ways in which the domain could have been broken by the previous owner? Are such problems common, and are there tools to detect them before purchasing a domain? Two examples:
An HTTP server serving the domain could have returned a permanent redirect, making the domain unusable for visitors that received the redirect until they clear their browser caches. Similarly, a server could have returned an HTTP "Strict-Transport-Security" header, making the domain unusable over HTTP for visitors that received the header. Any other examples? | Some common risks to check: Domain has a bad reputation - check for any existing negative online reviews of the domain. Domain is blocked in search results - risk of a search engine excluding the domain from its search results due to the previous content, malware etc. Domain is blacklisted - the domain appears on blacklists such as Web of Trust and spam lists. Sometimes the Wayback Machine can show you the domain history. EDIT: IP / Trademark infringement - the domain you purchased may infringe registered trademarks: consult your legal advisor before purchasing | {
"source": [
"https://security.stackexchange.com/questions/25113",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16847/"
]
} |
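The two technical examples in the question above - a lingering permanent redirect and a Strict-Transport-Security policy - can be spotted by simply fetching the site and inspecting the response, at least while the previous owner's configuration is still being served. The sketch below uses the third-party requests library as an assumed HTTP client; note it only shows what the domain currently sends, not policies already cached in past visitors' browsers.
import requests

def inspect_domain(domain):
    # Plain-HTTP fetch without following redirects, so a lingering 301/308 is visible.
    r = requests.get(f"http://{domain}/", allow_redirects=False, timeout=10)
    if r.status_code in (301, 308):
        print("permanent redirect to:", r.headers.get("Location"))
    # HSTS is only honoured over HTTPS, so check the TLS side for the header.
    s = requests.get(f"https://{domain}/", allow_redirects=False, timeout=10)
    print("HSTS header:", s.headers.get("Strict-Transport-Security"))

inspect_domain("example.com")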
25,119 | I have a program that has to detect scanning activity of worms. I need some samples of worms' traffic (pcap files) to try the program, and notes about the procedures which must be taken when running such files to keep my computer safe. Are there any safe capture files that contain the scanning activity only, without the worm payload? There is a sample capture for the Slammer worm at wireshark.org, but when I tried to open it I got a warning. What should I do in this case? | | {
"source": [
"https://security.stackexchange.com/questions/25119",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16852/"
]
} |
25,170 | If an attacker obtains a file that has been encrypted using an OpenPGP public key, what information can the attacker deduce? For example, to what degree of certainty can the attacker deduce the identity of the intended recipient? | The key ID of the recipient is included in plain-text in the encrypted file. Other possibly interesting information "hidden in plain sight" is just the size of the file, or the name of the encrypted file (if someone just sends it without alteration of course.) What you might not realise is that the recipient Key ID is effectively an optional field. Section 5.1 goes on to say: An implementation MAY accept or use a Key ID of zero as a "wild card"
or "speculative" Key ID. In this case, the receiving implementation
would try all available private keys, checking for a valid decrypted
session key. This format helps reduce traffic analysis of messages. You can encrypt using the -R (or --hidden-recipient ) flag with gpg to avoid revealing the recipient's public key in an encrypted message. $ gpg -e -R [email protected] message.txt
$
$ gpg --verbose --verbose --decrypt message.txt.gpg
:pubkey enc packet: version 3, algo 1, keyid 0000000000000000
data: [2047 bits]
gpg: public key is 00000000
gpg: anonymous recipient; trying secret key aaaaaaaa ...
gpg: anonymous recipient; trying secret key bbbbbbbb ...
gpg: anonymous recipient; trying secret key cccccccc ...
:encrypted data packet:
length: 76
mdc_method: 2
gpg: encrypted with RSA key, ID 00000000
gpg: decryption failed: secret key not available
$ As this point, gpg iterates through all the private keys it has trying to obtain a valid session key, as it cannot identify the public key used for encryption. However, also see this answer for ways to differentiate between recipients if the attacker has access to a large number of messages. A practical aside -- secondary clues may be in various logs. For instance, an attacker who obtains such a message might also be able to access (say) a .bash_history file with the recipient's address, or a web-server log with IP addresses that provide clues to who POST'ed or GETs the file, etc. | {
"source": [
"https://security.stackexchange.com/questions/25170",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15712/"
]
} |
25,172 | An OpenPGP encrypted file will include the key ID of the intended recipient's public encryption key, as explained in this question . Is there any way to remove that information from the resulting encrypted file? Does gpg provide an option to not include that information? If not, what workarounds are recommended? I want to encrypt a file for a specific recipient and share it with any third party without revealing the identity of the recipient or of the sender. (It may be assumed that the recipient's public key is widely shared and associated with the recipient's real identity.) | Use the -R (or --hidden-recipient ) flag in gpg to do this. More details in this answer. | {
"source": [
"https://security.stackexchange.com/questions/25172",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15712/"
]
} |
25,241 | Is there a concept where pre-computed tables can be used for prime number factorization? Is it possible that a computer can generate millions of prime numbers, store them and then effectively determine the factors? | It's unlikely. The primes involved are huge, so the keyspace is massive. Just how massive depends on your key size, but let's pick 512-bit primes for a lower bound example. The prime counting function gives us an estimate of how many prime numbers are below a given number. It is difficult to compute precisely, but a close estimate is defined as π(x) = x / ln(x), where ln is the natural logarithm. As such, we can compute an estimate of the expected number of primes below the highest value in an n-bit number by computing π(2^n). If we want to exclude all numbers that aren't exactly n-bit, we compute π(2^n) - π(2^(n-1)). This isn't technically required, but it gives us a nice lower bound of how many large primes there are for that key size. For n = 512 the number of primes required for an exhaustive list is 1.885×10^151. If we can store every prime in a 512-bit entry, that's 1.207×10^153 bytes, which is 132 orders of magnitude more than we have disk storage capacity in the world. So no, not really feasible. | {
"source": [
"https://security.stackexchange.com/questions/25241",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11218/"
]
} |
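The figures quoted in the answer above can be reproduced in a few lines; this Python sketch just evaluates the prime-counting estimate π(x) ≈ x/ln(x) for primes that are exactly 512 bits and the storage an exhaustive list would need.
from math import log

def pi_approx(x):
    # Prime counting estimate pi(x) ~ x / ln(x).
    return x / log(x)

n = 512
primes = pi_approx(2.0 ** n) - pi_approx(2.0 ** (n - 1))   # primes that are exactly n bits
storage_bytes = primes * (n // 8)                          # one 512-bit (64-byte) entry each

print(f"~{primes:.3e} 512-bit primes")          # ~1.885e+151
print(f"~{storage_bytes:.3e} bytes to store them all")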
25,282 | Our company has been using several tools (Veracode, AppScan, etc.) to verify that our applications meet minimum security requirements. I have inherited a legacy application which has several flaws. I've managed to mitigate most of the vulnerabilities, but am having trouble fixing this next one because I simply do not know what it is doing. In our report, it seems that it is changing one of our POST variables to 25%27+having+1%3D1--+ What is this and what is it effectively doing? What can I do to prevent this type of attack? | When this string is decoded from its URL-encoded form it becomes the following: 25' having 1=1-- This string, when placed as is into, for example, the following (PHP) database query function: mysql_query("SELECT * FROM users WHERE username = '$username'"); Becomes this: mysql_query("SELECT * FROM users WHERE username = '25' having 1=1--'"); Note here that the %27 (') breaks out of the argument in the WHERE clause and continues the executable part of the statement. The -- after 1=1 makes the rest of the statement a comment which is not executed. The HAVING statement in SQL is supposed to be used within queries which use the GROUP BY operator, and should fail in queries which do not. My guess here is that this string is being used to check simply for the presence of an unsanitised variable which gets placed into an executed query. To prevent this type of attack I would suggest using a good input sanitisation function, or parameterised queries. The implementation of this depends on the programming environment in question. Addition: The normal use of 1=1 in SQL injection queries is to cause all rows to be returned, voiding any other WHERE conditions. An example input could be: a' OR 1=1 -- When inserted as the $password parameter in a query such as: SELECT * FROM users WHERE username = '$username' AND password = '$password' The statement becomes: SELECT * FROM users WHERE username = 'mark' AND password = 'a' OR 1=1 -- The resulting dataset will include all entries in the 'users' table, as 1=1 is always true. | {
"source": [
"https://security.stackexchange.com/questions/25282",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16944/"
]
} |
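To see the parameterised-query fix from the answer above in action, here is a small self-contained sketch using Python's built-in sqlite3 module (the same principle applies to PHP's PDO or mysqli prepared statements). Because the payload travels separately as bound data, the HAVING clause never becomes part of the SQL text.
import sqlite3
from urllib.parse import unquote_plus

payload = unquote_plus("25%27+having+1%3D1--+")   # -> "25' having 1=1-- "
print(repr(payload))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.execute("INSERT INTO users VALUES ('25')")

# Vulnerable pattern: the payload is spliced into the SQL text itself.
vulnerable_sql = f"SELECT * FROM users WHERE username = '{payload}'"
print(vulnerable_sql)   # the quote breaks out of the string literal

# Safe pattern: the payload is passed as a bound parameter, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE username = ?", (payload,)).fetchall()
print(rows)             # [] - no user is literally named "25' having 1=1-- "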
25,332 | This might be far too narrow, but it is a unique problem to ITSec professionals. A loved one is just starting out in a new programming career and I get the joy of watching her learn the most basic programming concepts from scratch. She is at the top of her class in each of her college courses, producing high quality work, and she has attracted so much attention that she is already getting contract work. As ITSec pros, we talk about infusing the development cycle with secure coding practices and design, but how does that apply to a brand-new learner? A new programmer is at the start of their own 'lifelong development cycle', as it were. At what point is it appropriate, from an educational perspective, to switch from the mindset of 'getting it to work' to 'it absolutely must be secure'? At what point should a student 'fail' an assignment because of a security issue? She completely understands the need to produce secure code, and wants to, but none of her classes have introduced the idea, and she keeps coming to me for code review and analysis. When should the switch be made to force her to re-design all the class assignments to use secure design? I don't want to cut a promising career short by introducing frustrating requirements, but I also want to give that new career the best possible start. In addition, are there resources to help me teach her the basics of secure coding from a beginning programmer's perspective? I find that I'm making it up as I go... I welcome your advice. At what point is it appropriate, from an educational perspective, to switch from the mindset of 'getting it to work' to 'it absolutely must be secure'? At what point should a student 'fail' an assignment because of a security issue? When should the switch be made to force her to re-design all the class assignments to use secure design? Are there resources to help me teach her the basics of secure coding from a beginning programmer's perspective? As a side note: I have noticed that by raising the bar in my code review of her class assignments, just in terms of basic 'validate and sanitize' security, she has ended up producing very high quality code in general. From this one example, I think I can see the value of starting one's education this way because it forces an even deeper understanding of data flow and programming logic. | I would say a great way to learn is for her to break the applications she has already written. Assuming she is writing web applications, point her towards the OWASP Top 10. Have her see if she can find any of those flaws in her own code. There is no better way to learn about security concepts than actually seeing it happen in your own code. Once a flaw has been found, have her rewrite the application to fix the flaw. Doing so will allow her to appreciate the effect of things like sanitisation and validation of user inputs and parameterized queries. Take incremental steps. I wouldn't jump straight into designing a new application with security in mind before truly understanding what kinds of code result in security flaws. | {
"source": [
"https://security.stackexchange.com/questions/25332",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6253/"
]
} |
25,374 | Everyone knows that if they have a system that requires a password to log in, they should be storing a hashed & salted copy of the required password, rather than the password in plaintext. What I started to wonder today is why don't they also store the user ID in a similarly hashed & salted form? To me this would seem logical because I can't see any drawbacks, and if the DB was compromised, the attackers would need to "crack" the password and the username hash before they could compromise that account. It would also mean that if usernames were hashed and salted email addresses, they would be more protected from being sold on to spammers. | You see that thing up there where it displays your username? They can't do that if the username is stored hashed, now can they? One word, usability. | {
"source": [
"https://security.stackexchange.com/questions/25374",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11137/"
]
} |
25,375 | RSA Security commonly uses keys of sizes 1024-bit, 2048-bit or even 3072-bit. And most symmetric algorithms only between 112-bit and 256-bit. I do realize that the current keys are secure enough for today's hardware, but as computers get faster, should we not consider an insanely large key size like a million bits or so to protect ourselves against super computer systems that have not been invented yet? So in other words, what are the consequences of choosing a cipher key that is too large, and why does everyone restrict their key sizes? | I dug out my copy of Applied Cryptography to answer this. Concerning symmetric crypto, 256 is plenty and probably will be for a long long time. Schneier explains: Longer key lengths are better, but only up to a point. AES will have 128-bit, 192-bit, and 256-bit key lengths. This is far longer than needed for the foreseeable future. In fact, we cannot even imagine a world where 256-bit brute force searches are possible. It requires some fundamental breakthroughs in physics and our understanding of the universe. One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzman constant. (Stick with me; the physics lesson is almost over.) Given that k = 1.38 × 10^−16 erg/K, and that the ambient temperature of the universe is 3.2 Kelvin, an ideal computer running at 3.2 K would consume 4.4 × 10^−16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump. Now, the annual energy output of our sun is about 1.21 × 10^41 ergs. This is enough to power about 2.7 × 10^56 single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn't have the energy left over to perform any useful calculations with this counter. But that's just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states. These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space. The boldness is my own addition. Remark: Note that this example assumes that there is a 'perfect' encryption algorithm. If you can exploit weaknesses in the algorithm, the key space might shrink and you'd end up with effectively fewer bits of your key. It also assumes that the key generation is perfect - yielding 1 bit of entropy per bit of key. This is often difficult to achieve in a computational setting. An imperfect generation mechanism might yield 170 bits of entropy for a 256-bit key. In this case, if the key generation mechanism is known, the size of the brute-force space is reduced to 170 bits.
Assuming quantum computers are feasible, however, any RSA key will be broken using Shor's algorithm. (See https://security.stackexchange.com/a/37638/18064 ) | {
"source": [
"https://security.stackexchange.com/questions/25375",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/17017/"
]
} |
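The physics in the quoted passage is easy to check numerically. The sketch below redoes the arithmetic with the same constants Schneier uses (Boltzmann's constant in erg/K, an ambient temperature of 3.2 K, the Sun's annual output); the only assumption is a perfectly efficient computer that spends exactly kT per bit change.
from math import log2

k = 1.38e-16            # Boltzmann constant, erg/K
T = 3.2                 # ambient temperature of the universe, K
erg_per_bit = k * T     # minimum energy per bit change: ~4.4e-16 erg
sun_year = 1.21e41      # annual energy output of the Sun, erg

flips = sun_year / erg_per_bit
print(f"energy per bit change   : {erg_per_bit:.2e} erg")
print(f"bit changes per solar yr: {flips:.2e}")              # ~2.7e56
print(f"counter exhausted       : {int(log2(flips))} bits")  # a 187-bit counter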
25,437 | Scenario: I am hosting a website for a my client, who we'll call S. S owns the domain s.com and I own the servers that actually host the website. S now wants to enable SSL on their website. I have generated a private key and CSR on my server, and sent the CSR to S so that they can submit it to their CA. However, like all clients, S is cheap. They don't want to spend the money on a new certificate. Instead, they want to give me their existing SSL certificate for *.s.com and its corresponding private key. This certificate and key is already in use by S and some of S's other vendors. I already know that this is a bad idea, but why? What problems does this cause now and potentially in the future? For bonus points, does sharing a private key like this cause any problems in the context of PCI standards? | There are several reasons why wildcard certificates are bad: The same private key has to go on the systems that have different security levels, so your key is only as good as your least-protected system. Giving it out to third-party vendors is a really bad idea , as then it completely escapes your control. You have to keep meticulous records that show exactly where your wildcard private key is installed, so that when you have to replace it, you don't have to play "Where is Waldo" across all your sites. Most importantly, if the private key and wildcard certificate are stolen at any point, the attackers can then impersonate any system in that wildcard space. Common mistake is to say "we'll use wildcard certs on low-security systems, but named certs on all important systems" -- you have to do the exact opposite! If the attackers have *.s.com, they can impersonate any domain in .s.com space, regardless if it's using any other certificates. So if you have "www.s.com" with a wildcard and "login.s.com" with a one-off cert, attackers can impersonate "login.s.com" regardless of what's on "login.s.com". The easiest way to take advantage of stolen wildcard keys is by DNS poisoning or via rogue wireless APs. All that being said, you have to evaluate the risks. If your friend S is already giving out this wildcard cert and key to all the vendors, then refusing to accept it doesn't lower his risks in any significant way. You'll just appear obstinate and uncooperative and lose his business. Do your best to educate, but also understand that some people just won't care. If, however, he's bound to conform to PCI-DSS, then I'm pretty sure he's not compliant, as I think you're supposed to have logs of all access to private encryption keys. I'll let others more familiar with PCI-DSS expand on that. | {
"source": [
"https://security.stackexchange.com/questions/25437",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4906/"
]
} |
25,585 | A developer, let's call him 'Dave', insists on using home-brew scripts for password security. See Dave's proposal below. His team spent months adopting an industry standard protocol using Bcrypt . The software and methods in that protocol are not new, and are based on tried and tested implementations that support millions of users. This protocol is a set of specifications detailing the current state of the art, software components used, and how they should be implemented. The implementation is based on a known-good implementation. Dave argued against this protocol from day one. His reasoning was that algorithms like Bcrypt, because they are published, have greater visibility to hackers, and are more likely to be targeted for attack. He also argued that the protocol itself was too bulky and difficult to maintain, but I believe Dave's primary hangup was the fact that Bcrypt is published. What I'm hoping to accomplish by sharing his code here, is to generate consensus on: Why home-brew is not a good idea, and What specifically is wrong with his script /** Dave's Home-brew Hash */
// user data
$user = '';
$password = '';
// timestamp, random #
$time = date('mdYHis');
$rand = mt_rand().'\n';
// crypt
$crypt = crypt($user.$time.$rand);
// hash
function hash_it($string1, $string2) {
$pass = md5($string1);
$nt = substr($pass,0,8);
$th = substr($pass,8,8);
$ur = substr($pass,16,8);
$ps = substr($pass,24,8);
$hash = 'H'.sha1($string2.$ps.$ur.$nt.$th);
return $hash
}
$hash = hash_it($password, $crypt); | /** Dave's Home-brew Hash^H^H^H^H^Hkinda stupid algorithm */
// user data
$user = '';
$password = '';
// timestamp, "random" #
$time = date('mdYHis'); // known to attackers - totally pointless
// ^ also, as jdm pointed out in the comments, this changes daily. looks broken!
// different hashes for different days? huh? or is this stored as a salt?
$rand = mt_rand().'\n'; // mt_rand is not secure as a random number generator
// ^ it's even less secure if you only ask for a single 31-bit number. and why the \n?
// crypt is good if configured/salted correctly
// ... except you've used crypt on the username? WTF.
$crypt = crypt($user.$time.$rand);
// hash
function hash_it($string1, $string2) {
$pass = md5($string1); // why are we MD5'ing the same pass when crypt is available?
$nt = substr($pass,0,8); // <--- BAD BAD BAD - why shuffle an MD5 hash?!?!?
$th = substr($pass,8,8);
$ur = substr($pass,16,8);
$ps = substr($pass,24,8); // seriously. I have no idea. why?
// ^ shuffling brings _zero_ extra security. it makes _zero_ sense to do this.
// also, what's up with the variable names?
// and now we SHA1 it with other junk too? wtf?
$hash = 'H'.sha1($string2.$ps.$ur.$nt.$th);
return $hash
}
$hash = hash_it($password, $crypt); // ... please stop. it's hurting my head.
summon_cthulhu(); Dave, you are not a cryptographer. Stop it. This home-brew method offers no real resistance against brute force attacks, and gives a false impression of "complicated" security. In reality you're doing little more than sha1(md5(pass) + salt) with a possibly-broken and overly complicated hash. You seem to be under the illusion that complicated code gives better security, but it doesn't. A strong cryptosystem is strong regardless of whether the algorithm is known to an attacker - this fact is known as Kerckhoff's principle . I realise that it's fun to re-invent the wheel and do it all your own way, but you're writing code that's going into a business-critical application, which is going to have to protect customer credentials. You have a responsibility to do it correctly. Stick to tried and tested key derivation algorithms like PBKDF2 or bcrypt , which have undergone years of in-depth analysis and scrutiny from a wide range of professional and hobbyist cryptographers. If you'd like a good schooling on proper password storage, check out these links: How to store salt? Do any security experts recommend bcrypt for password storage? How to securely store passwords? | {
"source": [
"https://security.stackexchange.com/questions/25585",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/17288/"
]
} |
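In contrast to the home-brew construction above, a standard key-derivation function needs only a few lines. This sketch uses PBKDF2 from Python's standard library as a stand-in for whatever bcrypt/PBKDF2 binding your platform offers; the iteration count is an illustrative figure to be tuned to your own hardware.
import hashlib, hmac, os

def hash_password(password, iterations=200_000):
    salt = os.urandom(16)   # unique, random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest     # store both, plus the iteration count

def verify_password(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))   # True
print(verify_password("guess", salt, digest))                          # False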
25,684 | I need to explain SQL injection to someone without technical training or experience. Can you suggest any approaches that have worked well? | The way I demonstrate it to complete non-techies is with a simple analogy. Imagine you're a robot in a warehouse full of boxes. Your job is to fetch a box from somewhere in the warehouse, and put it on the conveyor belt. Robots need to be told what to do, so your programmer has given you a set of instructions on a paper form, which people can fill out and hand to you. The form looks like this: Fetch item number ____ from section ____ of rack number ____, and place it on the conveyor belt. A normal request might look like this: Fetch item number 1234 from section B2 of rack number 12 , and place it on the conveyor belt. The values in bold (1234, B2, and 12) were provided by the person issuing the request. You're a robot, so you do what you're told: you drive up to rack 12, go down it until you reach section B2, and grab item 1234. You then drive back to the conveyor belt and drop the item onto it. But what if a user put something other than normal values into the form? What if the user added instructions into them? Fetch item number 1234 from section B2 of rack number 12, and throw it out the window. Then go back to your desk and ignore the rest of this form. and place it on the conveyor belt. Again, the parts in bold were provided by the person issuing the request. Since you're a robot, you do exactly what the user just told you to do. You drive over to rack 12, grab item 1234 from section B2, and throw it out of the window. Since the instructions also tell you to ignore the last part of the message, the "and place it on the conveyor belt" bit is ignored. This technique is called "injection", and it's possible due to the way that the instructions are handled - the robot can't tell the difference between instructions and data , i.e. the actions it has to perform, and the things it has to do those actions on. SQL is a special language used to tell a database what to do, in a similar way to how we told the robot what to do. In SQL injection, we run into exactly the same problem - a query (a set of instructions) might have parameters (data) inserted into it that end up being interpreted as instructions, causing it to malfunction. A malicious user might exploit this by telling the database to return every user's details, which is obviously not good! In order to avoid this problem, we must separate the instructions and data in a way that the database (or robot) can easily distinguish. This is usually done by sending them separately. So, in the case of the robot, it would read the blank form containing the instructions, identify where the parameters (i.e. the blank spaces) are, and store it. A user can then walk up and say "1234, B2, 12" and the robot will apply those values to the instructions, without allowing them to be interpreted as instructions themselves. In SQL, this technique is known as parameterised queries. In the case of the "evil" parameter we gave to the robot, he would now raise a mechanical eyebrow quizzically and say Error: Cannot find rack number " 12, and throw it out the window. Then go back to your desk and ignore the rest of this form. " - are you sure this is a valid input? Success! We've stopped the robot's "glitch". | {
"source": [
"https://security.stackexchange.com/questions/25684",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9633/"
]
} |
25,772 | A stranger walks up to you on the street. They say they lost their phone and need to make a phone call (this has happened to me twice, and maybe to you). What's the worst a phone call could do? Let's assume they don't run, don't plug any devices into the phone, they just dial a number and do whatever, and hang up. | A few scams I've seen making the rounds: Use it to dial a premium rate number owned by the group. In the UK, 09xx numbers can cost up to £1.50 per minute, and most 09xx providers charge around 33%, so a five-minute call syphons £5 into the group's hands. If you're a good social engineer, you might only have a 10 minute gap between calls as you wander around a busy high-street, so that's £15 an hour (tax free!) - almost triple minimum wage. Use it to send premium rate texts. The regulations there are tighter, but if you can get a premium rate SMS number set up, you can charge up to £10 per text. A scammer would typically see between £5 and £7 of that, after the provider takes a cut. It's also possible to set up a recurring cost, where the provider sends you messages every day and charges you up to £2.50 for each one. By law the service provider must automatically cancel it if they send a text saying STOP, but every extra message you send gains you money. Set up an app in the app store, then buy it on people's phones. This can be very expensive for the victim, since apps can be priced very high - some up to £80. In-app purchases also work. This is precisely why you should be prompted for your password on every app purchase and in-app purchase, but not all phones do so! Install a malicious app, such as the mobile Zeus trojan. This can then be used to steal banking credentials and email accounts. This seems to be gaining popularity on Android phones. | {
"source": [
"https://security.stackexchange.com/questions/25772",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/17662/"
]
} |
25,876 | Let's say I have a message that I want to keep safe for the next 100 years. Is it theoretically possible? Let's say the message is unique (raw picture data, raw video video data, raw text data) and the key is unique. Can anything prevent (near)future brute force attack? | TL;DR in bold: We don't have crystal balls to predict where technology will take us, but the purpose of cryptography is to develop algorithms that have just this kind of resistance on a very fundamental level. Mathematically speaking, in terms of "honestly" brute-forcing a single plaintext from a single ciphertext , knowing everything that would otherwise be needed to decrypt the plaintext other than the key, a 256-bit block cipher should be considered uncrackable by the laws of thermodynamics ; our Sun will not emit enough photons in its active life (before it becomes a black dwarf) to enable a computer that works at the electron level with perfect efficiency to try every possible key in the 256-bit keyspace. The computer would be starved for energy after trying about 2^218 key values, meaning that if it tried keys perfectly randomly without replacement, and the key was also chosen perfectly randomly, it would only have a one in 275 billion chance to have found the correct key, given roughly 5 billion years before the Sun goes nova. There is, however, one distinct disadvantage to the use of any computational hardware or software in existence today; it is already obsolete. It would be foolish to assume that any binary implementation currently used to encipher messages, nor any computer, language or even logical structure used to develop or execute that binary code, will still be in existence in 100 years. We already have problems with the availability of hardware that can read or write storage devices that were universal standards only 20 years ago, like floppy disks and tape drives. Even power plugs have changed fundamentally in the last 50 years, and those plugs and receptacles (and the exact specs of the power they use) are only regional standards as it is. Hell, 100 years ago we were riding horses, not driving cars. So, if you plan to encrypt something and then store the device used to produce it, I can virtually guarantee that in 100 years everything about the device will be so obsolete it will be unusable. Given this 100-year expected timespan, I would rely on one of the simplest and yet most secure ciphers known to man; the one-time pad. Simply create a numeric "alphabet" relating a number value to every character symbol you wish to be able to encipher (don't forget any needed punctuation, such as spaces, commas, periods, single/double quotes etc; you must not use more than 100 symbols, but most fully-formatted English-language messages will be well underneath this limit, even with capital letters). Then, gather a series of numbers, from zero to 99. You will need one of these numbers per character of the message. These numbers must be genuinely random; serious users of one-time pads usually gather bit data from environmental sources, such as measurements of background radiation detected on a radio telescope. For today's purposes, you could purchase a 100-sided die and give it a roll around the inside of a bowl to produce each number. It should also be acceptable to use a CSPRNG, provided that it is properly seeded from a source of true entropy (provided that the seed value is discarded after use, you probably won't need enough numbers from the PRNG for an attacker to be able to predict them). 
Make sure the memory is fully cleared from any electronic device used to generate random data. To encrypt, take the random pad, the numeric alphabet, and the message you wish to encrypt into a place where you can't be seen by anyone else or any surveillance equipment (you know, just in case). Take the first character of the message, look up its alphabet code number, then get the first number from the random pad, and subtract the random number from the character code. If you end up with a number less than zero, subtract that number from 100. Look the resulting number up on the alphabet chart, and write down the character as the first character of the ciphertext. Repeat with the second character of the plaintext and the second number of the random pad. Continue through the message until you have encoded every character of the plaintext message. You must have used every number of the random pad, in order, once and only once. When you're done, you burn or otherwise completely destroy your copy of the plaintext, and physically secure the random pad and alphabet. This is important; nobody should be able to get their hands on the random pad for 100 years, but they must be able to get to it in 100 years. The alphabet is not technically a secret, but it's inclusion with the ciphertext is a clue as to the use of a one-time pad (leading people to go searching for the pad), so I'd recommend keeping it with the pad. The ciphertext itself is perfectly secure in its encrypted form; you can chisel it into the stonework of the Supreme Court building if you wanted (and could do it fast enough to finish before the DC Metro Police took you away). It could be any of a number of alphabet ciphers, lending a little entropy to mask the discovery that it's a OTP in the first place. To decrypt, someone must have the ciphertext, the same one-time pad and the same alphabet table that you used to encrypt. They take the ciphertext's first character, look up the number in the alphabet table, then add the number from the one-time-pad and modulo by 100 (simply truncating the hundreds place wherever it shows up). They turn the resulting number back into a character using the alphabet, and write it down. They then continue, character by character and number by number. Again, they must use every number of the random pad, in order, once and only once. The one-time pad is a very old system, first being described in 1882 and re-invented in 1917, and it has the ultimate advantage; it is provably impossible to crack . Being genuinely random, there is no mathematical pattern to any of the numbers of the random pad (which forms the "key" of the cipher), and because each number is only ever used once (if you have more than 100 characters you will see the same two-digit number in the pad twice, but you can never predict when), the pad itself is never repeated. Thus, any attempt to decipher the message without the exact same pad is futile; it could produce gibberish, or it could produce a message of exactly the same number of characters, but whose content is something completely different. The only way to be sure that every character of the message is decrypted correctly is to have the exact same sequence of random numbers that encrypted the thing in the first place. 
The disadvantages that keep it from being used more universally are that it ideally requires a pad of infinite length (to handle any number of messages of any length), and there's a large possibility of offset error; someone can miss some or all of a message sent by the other person, so one person has crossed off fewer numbers on their pad, resulting in all further messages between the two parties being indecipherable and no real way to recover without exchanging a new pad. In your case, these are moot, but pretty much every other cipher system invented in the last 100 years has attempted to mitigate these disadvantages (primarily the infinite-length key) while trading as little as possible of the ideal security of the system. | {
"source": [
"https://security.stackexchange.com/questions/25876",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2337/"
]
} |
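The pencil-and-paper procedure described above maps directly onto a few lines of code. The sketch below uses Python's 100-character printable set as the "alphabet" (so the 0-99 pad values line up exactly) and the secrets module as a stand-in for dice rolls; with a truly random pad used once, decryption is just the same walk in reverse.
import secrets, string

ALPHABET = string.printable   # exactly 100 symbols, codes 0..99

def make_pad(length):
    # One truly random number 0..99 per character; a stand-in for dice rolls.
    return [secrets.randbelow(100) for _ in range(length)]

def encrypt(plaintext, pad):
    return "".join(ALPHABET[(ALPHABET.index(c) - r) % 100]
                   for c, r in zip(plaintext, pad))

def decrypt(ciphertext, pad):
    return "".join(ALPHABET[(ALPHABET.index(c) + r) % 100]
                   for c, r in zip(ciphertext, pad))

message = "Meet at dawn."
pad = make_pad(len(message))
ct = encrypt(message, pad)
assert decrypt(ct, pad) == message   # same pad, used once, recovers the text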
25,996 | Say I have an X.509 cert and a private key that corresponds to it. I can import X.509 certs easily enough into Windows but what about private keys? Is the only way I can do that by converting both the cert and the private key to a "Personal Information Exchange (PKCS #12)" file and importing that? Maybe this question would be better on superuser.com? Either way, thanks! | The answer to your question is Yes . You must convert the X.509 into a PFX and import it. There is no separate key store in Windows. You can convert your certificate using OpenSSL with the following command: openssl pkcs12 -export -out cert.pfx -inkey private.key -in cert.crt -certfile CACert.crt | {
"source": [
"https://security.stackexchange.com/questions/25996",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15922/"
]
} |
26,049 | I'm trying to go to the URL below, and Chrome warns me that the wildcard certificate is not valid for this domain. https://chart.apis.google.com/chart?cht=qr&chs=100x100&chl=otpauth%3A%2F%2Ftotp%2FTest123%204%3Fsecret%3DTKQWCOOJ7KJ4ZIR At first I thought it was a quirk in how Chrome handles URLs with many dots in a wildcard cert, however I see this text in the warning, and am also unable to click through: You cannot proceed because the website operator has requested heightened security for this domain. Question: Is this a problem with Chrome's wildcard certs and subdomains that are more than 2 layers deep? Does the error mean that the website owner has "done something" to make wildcard certificates not work for subdomains? | Please try this Chrome hack: when the browser shows the page with the invalid certificate message, type the word "proceed" on your keyboard and then hit Enter. You should be able to proceed to the requested page. On newer versions of Chrome, you may have to type "danger" and hit Enter instead. | {
"source": [
"https://security.stackexchange.com/questions/26049",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
26,142 | I'm looking for a way to protect clients from a MITM attack without using a VPN, and ToR. My thought is that client certificates might do the job, but I'm not entirely sure since not too much server or browser support has been extended in this area. Can anyone tell me if Client Certificates provide any MITM protection? Is that protection just for the authentication exchange or does it protect the entire TLS session? Is there any good resource for studying this in deep detail? | A man-in-the-middle attack is when an attacker inserts himself between client and server, and impersonates the client when talking to the server, and impersonates the server when talking to the client. "Impersonation" makes sense only insofar as there is an expected peer identity; you cannot impersonate an anonymous client. From the point of view of the server , if the client shows a certificate and uses his private key as part of a CertificateVerify message (as described in the SSL/TLS standard , section 7.4.8), and the server validates the client certificate with regards to a set of trust anchors which does not include an hostile or incompetent root CA, then the server has a guarantee that it is talking to the right client (the one identified by the client certificate). The guarantee holds for all subsequent data within the connection (it does NOT extend to data exchanged prior to the handshake where the client showed a certificate, in case that handshake was a "renegotiation"). This means that a client certificate indeed protects against the specific scenario of a rogue CA injected in the client trust store. In that scenario, the "attacker" succeeded in making the client trust a specific root CA that is attacker-controlled, allowing the attacker to run a MitM attack by creating on-the-fly a fake certificate for the target server (this is exactly what happens with some "SSL content filtering" proxies that are deployed in some organizations). Such MitM breaks client certificate authentication, because what the client signs as part of a CertificateVerify message is a hash computed over a lot of data, including the server certificate -- in the MitM scenario, the client does not see the "right" certificate and thus computes a wrong signature, which the server detects. From the point of view of the client , if a client certificate was requested by the server and shown by the client, then the client knows that the true server will detect any MitM. The client will not detect the MitM itself; and if there is an ongoing MitM attack, then the client is really talking to the attacker, and the attacker will not tell the client "by the way, you are currently being attacked". In that sense, a client certificate prevents a true MitM (aka "two-sided impersonation") but does not protect against simpler frauds (one-sided impersonation, i.e. fake servers). Bottom line: In the presence of SSL with mutual client-server authentication (both send a certificate to the other), a successful MitM requires the attacker to plant rogue CA in both the client and the server. However, if the attacker can plant a rogue CA in the client only, the client is still in a rather bad situation (even though a complete MitM is thwarted). | {
"source": [
"https://security.stackexchange.com/questions/26142",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
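For completeness, this is roughly what "the server requests and verifies a client certificate" looks like in code. The sketch uses Python's ssl module; the file names are placeholders, and a real deployment would hang this off a proper server loop rather than a single accept().
import socket, ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")   # server identity
context.load_verify_locations(cafile="trusted_client_ca.pem")          # trust anchors for client certs
context.verify_mode = ssl.CERT_REQUIRED   # handshake fails unless the client proves key possession

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with context.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()   # the CertificateVerify is checked during this handshake
        print("client cert subject:", conn.getpeercert().get("subject"))
        conn.close()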
26,179 | Which one is more secure and less likely to be broken through cryptanalysis, AES or 3DES (performance aside)? I need to use encryption for my projects to store and secure sensitive information which includes bank accounts, sort codes, and third-party bank-related data. I am currently considering using 3DES in CFB mode, but I am not very sure if it is the best option and what the other alternatives are. I know the title does not give much idea what the question is about, but I couldn't think of something better. | Go for AES. AES is the successor of DES as standard symmetric encryption algorithm for US federal organizations. AES uses keys of 128, 192 or 256 bits, although 128-bit keys provide sufficient strength today. It uses 128-bit blocks, and is efficient in both software and hardware implementations. It was selected through an open competition involving hundreds of cryptographers during several years. DES is the previous "data encryption standard" from the seventies. Its key size is too short for proper security. The 56 effective bits can be brute-forced, and that was done more than ten years ago. DES uses 64-bit blocks, which poses some potential issues when encrypting several gigabytes of data with the same key. 3DES is a way to reuse DES implementations, by chaining three instances of DES with different keys. 3DES is believed to still be secure because it requires 2^112 operations, which is not achievable with foreseeable technology. 3DES is very slow, especially in software implementations, because DES was designed for performance in hardware. Resources: http://www.differencebetween.net/technology/difference-between-aes-and-3des http://www.icommcorp.com/downloads/Comparison%20AES%20vs%203DES.pdf (offline, still in the Web Archive) | {
"source": [
"https://security.stackexchange.com/questions/26179",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15038/"
]
} |
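If you do go with AES, use it through a well-reviewed library and in an authenticated mode rather than assembling modes by hand. A minimal sketch with the third-party "cryptography" package (AES-128-GCM, fresh random 96-bit nonce per message); the key handling here is illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # 128-bit AES is ample today
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # never reuse a nonce with the same key
ciphertext = aesgcm.encrypt(nonce, b"account=12345678 sort=04-00-04", b"record-42")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"record-42")   # also verifies integrity
print(plaintext)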
26,245 | Possible Duplicate: Do any security experts recommend bcrypt for password storage? I'm no security expert and do not pretend to be; that's why I'm asking here. I write many PHP-based applications and up to now I have been using bcrypt to hash my passwords. The scrypt website claims to be 4000 times slower than bcrypt; can this claim really be correct? If so, would it be 'better' for a safety-conscious developer to switch over to use scrypt instead of bcrypt? | Scrypt is supposed to be "better" than bcrypt, but it is also much more recent, and that's bad (because "more recent" inherently implies "has received less scrutiny"). All these password hashing schemes try to make processing of a single password more expensive for the attacker, while not making it too expensive for your server. Since your server is, basically, a PC, and the attacker can choose the most efficient hardware for his task, password hashing schemes try to be such that the best platform for them will be a PC. PBKDF2 can be thoroughly optimized with GPU, while bcrypt and scrypt are much less GPU-friendly. Bcrypt and scrypt both require fast RAM, which is a scarce resource in a GPU (a GPU can have a lot of RAM, but will not be able to access it from all cores simultaneously). It so happens that modern FPGAs embed many small RAM blocks which are very handy to optimize a parallel dictionary attack with bcrypt: this means that an attacker will get a huge boost by using 1000$ worth of FPGA instead of 1000$ worth of generic PC. This is the kind of boost that we want to avoid. Hence scrypt, which requires a lot more RAM; not too much for a PC (we are only talking about a couple megabytes here), but enough to make life hard for a FPGA (see this answer for a detailed treatment of the question). Thus, theoretically, scrypt should be better than bcrypt. However, this is all subject to whether scrypt lives up to its cryptographic promises. This kind of guarantee of robustness can only be achieved through time: the scheme will be deemed secure if it survives years of relentless assaults from cryptographers. How much time is needed, is of course a bit subjective, and also depends a lot on exposure (the more widely deployed a scheme is, the more "interesting" a target it becomes, in that breaking it would increase the academic fame of the perpetrator; thus attracting more scrutiny). My own rule of thumb is to wait for about 5 years after publication, thus 2014 in the case of scrypt. There is also the question of availability: if you want to use a function, then you need an implementation which can be inserted in the programming framework you use. | {
"source": [
"https://security.stackexchange.com/questions/26245",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18231/"
]
} |
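For completeness, here is a minimal password-hashing sketch with scrypt using only the Python standard library (hashlib.scrypt, available since Python 3.6 when built against OpenSSL 1.1+). The language choice and the cost parameters (the commonly cited interactive-login values n=2^14, r=8, p=1) are assumptions for illustration, not recommendations made by the answer above.

import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # n, r and p trade CPU and memory cost against login latency; tune them for your hardware
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1, dklen=32)
    return salt, digest

def check_password(password, salt, expected_digest):
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1, dklen=32)
    return hmac.compare_digest(candidate, expected_digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)

Store the salt and the digest together with the user record; both are public values, and only the work factor and the secrecy of the password protect you.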
26,332 | Background: I want to implement something like this in our websites, and I'm looking for advice and possibly APIs that allow this out of the box rather than re-inventing the wheel, but I can't even figure out the right search terms. As seen on my bank account: When I registered, I was asked to pick a phrase that I would remember. Now, when I log onto the bank's website, the process is as follows: I enter my Username and click "next". The bank site shows me this phrase. This helps me to be assured that I am actually on my bank's site, and not some fake site set up to steal my login credentials. If the pass-phrase matches, I enter my password to complete the authentication process. If the pass-phrase doesn't match, I know that either I entered my username wrong or I'm on a phishing site, and I go back to my bank's home page and start over. In my mind, this sounds like "multi-step authentication". However, when I search for that, I keep getting results for multi-factor authentication - authentication using a token, or two-step authentication as implemented by Google and other sites. While I'm a HUGE proponent of multi-factor authentication using tokens or codes sent to your mobile device, I also want to figure out how to do what my bank is doing. Is there a name or term for this authentication pattern? | SiteKey is the name many banks give this feature, so it can be searched for under that name. It adds minimal security, if any: anything that your server can present to the user, a man in the middle can obtain by acting as if they were the client, and then present to the victim in turn. SiteKey (which is likely what your bank calls it) is not secure and doesn't add meaningful security. It can actually be harmful, as it may give users a false sense of security and make them ignore otherwise good indicators, such as SSL indicators, because the "secure" image or phrase is there. My general recommendation: do not use such flawed mechanisms, as they can do more harm than good. | {
"source": [
"https://security.stackexchange.com/questions/26332",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5248/"
]
} |
26,347 | As described here , some banks use a SiteKey mechanism in an attempt to provide a defense against phishing. (This is a scheme where the user is shown a personalized image (each user has their own custom image) after the user enters their username but before they enter their password. In theory, this is supposed to let the user verify they are on BOA's page, not on a phishing site.)
The comments and answers indicate some issues with this approach, but that question wasn't about whether or not the approach is effective. For future visitors who might be considering using the same scheme, I'd like to have the question on the site: Is this a valid defense against Phishing? Perhaps not by itself, but as one step in a defense-in-depth strategy, or does this give a false sense of security? If so, is it just useless, or is it more dangerous to employ this scheme because of the false sense of security? | SiteKey is not an effective defense against phishing. In principle, it could be helpful for a tiny population of expert users who are very conscientious about examining the image and know how web security works, but those users are rare. However, a mechanism like this really needs to help protect average users, not just computer security experts. And, for typical users, SiteKey is not an effective defense against phishing, for reasons explained below. The good news is that, today, phishing appears to be relatively low on the scale of risks. Phishing attacks don't seem to be very successful today. Therefore, the deficiencies in SiteKey may be acceptable. That said, SiteKey is mostly security theater: it doesn't add much security, for the typical user. I should elaborate on how I can make such strong statements. As it happens, this question has been studied in the research literature and there is experimental data on it -- and the data is fascinating. The data turns out to have some surprises for all of us! Experimental methodology. SiteKey's use of a custom "security images" (and security phrase) has been evaluated in a user study, conducted with ordinary users who were asked to perform online banking in the lab. Unbeknownst to them, some of them were 'attacked' in a controlled way, to see whether they would behave securely or not and whether the security images helped or not. The researchers evaluated two attacks: MITM attack: The researchers simulated a man-in-the-middle attack that strips off SSL. The only visible indication of the attack is that lack of a HTTPS indicator (no HTTPS in the address bar, no lock icon, etc.). Security image attack: The researchers simulated a phishing attack. In this attack, it looks like the users are interacting with the real bank site, except that the SiteKey security image (and security phrase) is missing. In its place, the attack places the following text: SiteKey Maintanance Notice:
Bank of America is currently upgrading our award winning SiteKey feature. Please contact customer service if your SiteKey does not reappear within the next 24 hours. I find this a brilliant attack. Rather than trying to figure out what security image (or security phrase) to show to the user, don't show any security image at all, and just try to persuade the user that it's OK that there is no security image. Don't try to defeat the security system where it is strongest; just bypass the entire thing by undermining its foundation. Anyway, the researchers then proceeded to observe how users behaved when they were attacked in these ways (without their knowledge). Experimental results. The results? The attacks were incredibly successful. Not a single user avoided the MITM attack; every single one who was exposed to the MITM attack fell for it. (No one noticed that they were under attack.) 97% of those exposed to the security image attack fell for it. Only 3% (2 out of 60 participants) behaved securely and refused to log in when hit with this attack. Conclusions. Let me attempt to draw some lessons from this experiment. First, SiteKey (and security images) is ineffective. SiteKey is readily defeated by very simple attack techniques. Second, when assessing what security mechanisms will be effective, our intuitions are not reliable. Even expert security professionals can draw the wrong conclusions. For instance, I've seen some competent and knowledgeable security folks argue that security images add some security because they force the attacker to work harder and implement a MITM attack. From this experiment, we can see that this argument does not hold water. Indeed, a very simple attack (clone the website and replace the security image with a notice saying the security image feature is currently down for maintenance) is extremely successful in practice. So, when the security of a system depends upon how users will behave, it is important to conduct rigorous experiments to evaluate how ordinary users will actually behave in real life. Our intuitions and "from-first-principles" analyses are not a substitute for data. Third, ordinary users don't behave in the way security folks sometimes wish they would. Sometimes we talk about a protocol as "the user will do such-and-such, then the server will do thus-and-such, and if the user detects any deviation, the user will know he is under attack". But that's not how users think. Users don't have the suspicious mindset that security folks have, and security is not at the forefront of their mind. If something isn't quite right, a security expert might suspect she is under attack -- but that's usually not the first reaction of an ordinary user. Ordinary users are so used to the fact that web sites are flaky that their first reaction, upon seeing something odd or unusual, is often to shrug it off and assume that the Internet (or the web site) isn't quite working right at the moment. So, if your security mechanism relies upon users to become suspicious if certain cues are absent, it's probably on shaky grounds. Fourth, it's not realistic to expect users to notice the absence of a security indicator, like an SSL lock icon. I'm sure we've all played "Simon Says" as a kid. The fun of the game is entirely that -- even when you know to look out for it -- it is easy to overlook the absence of the "Simon Says" cue. Now think about an SSL icon.
Looking for the SSL icon is not the user's primary task, when performing online banking; instead, users typically just want to pay their bills and get the chore done so they can move on to something more useful. How much easier it is to fail to notice its absence, in those circumstances! By the way, you might wonder how Bank of America (or other banks who use similar methods) have responded to these findings. After all, Bank of America emphasizes their SiteKey feature to users; so how have they reacted to the discovery that the security image feature is all but useless in practice? Answer: they haven't. They still use SiteKey. And if you ask them about their response, a typical response has been something of the form "well, our users really like and appreciate SiteKey". This tells you something: it tells you that SiteKey is largely a form of security theater. Apparently, SiteKey exists to make users feel good about the process, more than to actually protect against serious attacks. References. For more details of the experiment I summarized above, read the following research paper: The Emperor's New Security Indicators . Stuart E. Schechter, Rachna Dhamija, Andy Ozment, and Ian Fischer. IEEE Security & Privacy 2007. | {
"source": [
"https://security.stackexchange.com/questions/26347",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5248/"
]
} |
26,356 | What are the security risks of Bluetooth and what technologies and best practices should be used to protect my device? What can an attacker do once a malicious device is paired with mine? Specifically Is it a good idea to remove & re-pair my devices on a set interval (thinking that this is changing the Bluetooth PIN) What is the security impact of making my device or computer "discoverable"? What kind of access does a Bluetooth enabled device get on my system? How can I control the scope of access a Bluetooth device has? (if my phone were compromised I'd want to limit the exposure my PC has) Are there Bluetooth security features that may (or may not be enabled)? How can I audit for the presence (or lack of) these features? Assuming encryption is a security feature that can be enabled, is it required or is it optional? (moreover could an SSL Strip for Bluetooth be created?) I'm interested in information that addresses mobile phones ( iOS ), OSX, Windows, and Linux operating systems | (Note: This answer is from 2013. A lot has changed in Bluetooth since then, especially the sharp rise in BLE popularity, new attacks, deprecated features. Having that said, most of it is still applicable.) Introduction I'll try to the best of my knowledge to approach your questions without touching the technical parts of the Bluetooth technology itself. I've learned a lot of the following while I had to write a security report to shape a BYOD policy. Knowing you, I won't have to lecture you on that there's nothing 100% secure, everything we do is just to make it harder for the bad guys. What Bluetooth is NOT Bluetooth itself as a technology isn't secure, it's not only about the implementation, there are some serious flaws in the design itself. Bluetooth isn't a short range communication method - just because you're a bit far doesn't mean you're safe. Class I Bluetooth devices have a range up to 100 meters. Bluetooth isn't a mature communicate method (security-wise). With smart phones, it has turned into something totally different from what it was meant to be. It was created as a way to connect phones to peripherals. My advice: Don't use Bluetooth for anything serious. How is Bluetooth being secured now? Frequency hopping like crazy: Bluetooth uses something called AFH (Adaptive Frequency Hopping). It basically uses 79 channels in the 2.4 Ghz ISM band and it keeps hopping between them on a rate of 1600 hops/s, while observing the environment and excluding any existing frequencies from the hopping list. This greatly reduces interference and jamming attempts. E0 cipher suite : Uses a stream cipher with a 128-bit key. Undiscoverability: Unless you set your device to "discoverable" it won't respond to scanning attempts and your 48-bit BD_ADDR (the address that identifies your Bluetooth-enabled device) won't be revealed. Pairing: Unless the devices are paired with the parties' consent they won't be able to communicate. A pairing request can only be made if you know the other device's BD_ADDR (through scan or previous knowledge). Is it a good idea to remove & re-pair my devices on a set interval
(thinking that this is changing the Bluetooth PIN) Yes. It's a very good idea. You're eliminating the risks of being exploited by your trusted devices. Given that we usually pair devices for insignificant reasons (sending a file to an acquaintance, getting a VCard from a person you met somewhere..) it's very likely to build a huge list of "trusted" devices if you use Bluetooth a lot. What is the security impact of making my device or computer
"discoverable"? The problem with making your device discoverable is that you're advertising your BD_ADDR to anybody asking for it. The only way to pair with another device is by knowing the BD_ADDR. In a targeted attack, it's gonna take some time to brute-force the 48-bit BD_ADDR. In the normal case, knowing your BD_ADDR shouldn't be a big problem, but in case there's a vulnerability in your phone or the Bluetooth software on your computer, it's better to be under the radar. Another problem is the impact on privacy; by being discoverable you're letting unpaired (untrusted) parties know when you're around. What kind of access does a Bluetooth enabled device get on my system? In the normal case (no vulnerability to allow arbitrary code execution) it all depends on the Bluetooth profiles supported by your device. Usually, you can assume that your computer supports all profiles. I'll just list a few: BHIDP (Bluetooth Human Interface Device Profile) will give access to your mouse and keyboard event firing (moving the mouse and sending keyboard keys). BIP (Basic Imaging Profile) will give access to your camera. A2DP (Advanced Audio Distribution Profile) will give access to your MIC and to your audio output. OBEX (OBject EXchange) is what you usually need to worry about. Depending on the implementation, it could give access to your files, phonebook, messages, etc. Are there Bluetooth security features that may (or may not be
enabled)? How can I audit for the presence (or lack of) these
features? Prior to Bluetooth V2.1, when implementing the protocol itself, the developer has the option to use Security Mode #1, which means no security at all. Devices are allowed to communicate without the need of pairing, and encryption isn't used. Bluetooth V2.1 and newer require encryption. As a user, there are a few things you can do to make your Bluetooth usage more secure. (See below) Assuming encryption is a security feature that can be enabled, is it required or is it optional? As in the previous question, it's implementation-dependent. Usually encryption is used by default in PC-PC, smartphone-smartphone, and PC-smartphone communication. As of Bluetooth V2.1, encryption is enabled by default. What can an attacker do once a malicious device is paired with mine? Basically, anything that your device supports. To demonstrate this, just use an application called Super Bluetooth Hack; you'll see very scary things including: - Ringing: playing sounds of incoming call, alarm clock. - Calls: dialing number, ending a call. - Keys, Pressed keys: pressing and watching pressed keys - Contacts - Reading SMS - Silent mode: turning on or off - Phone functionality: turning off the network / phone - Alarms - Clock: change date and time - Change network operator - Java: start, delete java applications - Calendar - Memory status - Keylock So what's wrong with Bluetooth? Complete trust in the paired device: A paired device has access to virtually all of the profiles supported by the other device. This includes OBEX and FTP (File Transfer Profile). Profiles have too much freedom: Profiles are allowed to choose whatever security mode they want. You're even able to implement your own version of OBEX without Bluetooth requiring you to use encryption or authentication at all. (Prior to Bluetooth V2.1) Weaknesses in E0: Since 1999, E0 vulnerabilities started to show. It was proven that it's possible to crack E0 with 2^64 operations rather than the 2^128 previously believed. Year after year, researchers have discovered more vulnerabilities, leading to the 2005 attack by Lu, Meier and Vaudenay. The attack demonstrated the possibility to recover the key with 2^38 operations. Pairing is loosely defined: Devices are allowed to implement their own pairing methods, including a 4-digit PIN which can be cracked in no time. Finally, for good practice guidelines: I'll list some of the important NSA Bluetooth Security Recommendations (I've modified some of them and added some of my own): Enable Bluetooth functionality only when necessary. Enable Bluetooth discovery only when necessary. Keep paired devices close together and monitor what's happening on the devices. Pair devices using a secure long passkey. Never enter passkeys or PINs when unexpectedly prompted to do so. Regularly update and patch Bluetooth-enabled devices. Remove paired devices immediately after use. Update: An hour ago I was diving into the Bluetooth V4.0 specs, and, to my surprise, it appears they're still using E0 for encryption, and there were no good changes to the pairing mechanism. What's even worse is that they're pushing forward the number-comparison pairing mechanism, in which the users are shown a six-digit number on the two devices and asked to verify that they're the same. This, in my opinion, opens huge doors for social engineering attacks. For pairing scenarios that require user interaction, eavesdropper protection makes a simple six-digit passkey stronger than a 16-digit alphanumeric character random PIN code. Source Update 2: It seems like this "Just Works" 6-digit PIN is indeed problematic. Mike Ryan has demonstrated an attack on BLE and released the code as his tool "crackle" to brute-force the temporary key and decrypt the traffic. | {
"source": [
"https://security.stackexchange.com/questions/26356",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
26,489 | After yet another failure of the public key infrastructure, I was thinking about how broken the whole thing is. This business of undeniably associating an identity with a public key, and all the work we put into achieving it, is starting to feel like ice-skating uphill. Forgive me, I'm mostly just thinking aloud here. I started thinking about the ToR Hidden Service Protocol and their method for solving this. The 'hidden service name' (which is typed into the address bar like any other URL) is actually derived from the public key - so you end up visiting sites like kpvz7ki2v5agwt35.onion - but you have no need for certificates or PKI; the public key and the domain alone are enough information to prove that they belong together (until you are able to generate collisions, but that's another story). Now clearly there is a problem with this - the whole point of the DNS is to provide a human-readable mapping to IP addresses. Which leads me to my final, possibly flawed, suggestion: why do we not use an IP address that is generated from a public key? (The other way around sounds more convenient at first, but can't work for obvious cryptographic reasons). We have a HUGE address space with IPv6. A 1024-bit RSA keypair is believed to have around 80 bits of entropy at the most. So why not split off an 80-bit segment, and map public RSA keys to IP addresses in this segment? Some downsides off the top of my head: An attacker can generate a key pair, and know immediately which server that key pair would be used on, if such a server existed. Perhaps the 80-bit space could be expanded to use 4096-bit RSA keys, believed to have around 256 bits maximum entropy, making such a search infeasible (we would unfortunately require IPv7+ with a 512 or so bit address for this to fit, however). This attack is also not as bad as it might at first sound, as it is untargeted. This attack could be mitigated by including a salt into the key->IP process, which the server sends to clients when they connect. This salt makes each server's key->IP process unique. An attacker could potentially brute-force the key space using a known salt until they match a chosen IP. This is a targeted attack, so is a bit more scary. However, using a slow (1-3 second) algorithm to make the mapping from public key to IP could mitigate this. Use of the salt also means that such a brute-force would only apply to a single IP, and would have to be repeated per target IP. In order to try to stop the mods closing this, I'll do my best to turn it into a question: Is this idea completely flawed in some way? Has it been attempted in the past? Am I just rambling? | Mapping the public key to an IP address is easy (just hash it and keep the first 80 bits -- see the short sketch after this entry) and you have listed the ways to make this somehow robust (i.e. make the transform slow). It has the drawback that it does not solve the problem at all: it just moves it around. The problem is about binding the cryptographically protected access (namely, the server public key) to the notion of identity that the human user understands. Human users grasp domain names. You will not make them validate hash-generated IPv6 addresses... Of course there is a deployed system which maps names to technical data such as IP addresses; this is the DNS. You could extend it to map domain names to public key verification tokens (i.e. put the hash of the public key somewhere in the DNS), or even public keys themselves.
If you use the DNS to transfer security-sensitive name-key bindings, then the DNS becomes a valuable target, so you would have to add security to the DNS itself. At that point, you have DNSSEC , which is a current proposal for a replacement of the X.509 PKI for HTTPS Web sites. Whether DNSSEC would fare better than existing CAs is unclear; that's switching actors, but the conceptual certification business would still lurk here. Humans want human-readable names, and public keys are unreadable. All the certificate-based solutions (be they X.509 or DNSSEC or whatever) try to bind a public key to an arbitrarily chosen name. Another distinct method would be to make the public key readable . Strangely enough, there are cryptographic protocols for that: ID-based cryptography . They use some rather tortuous mathematical tools (pairings on elliptic curves). They do not change the core concept (really, at some point, someone must make the link between a societal identity, like "Google", and the world of computers) but they change the dynamics . In an ID-based system for SSL, each server would have very short-lived private keys, and a central authority would issue to each server a new private key every day, matching its name. The net effect would be like an X.509 PKI where revocation inherently works well , so damage containment would be effective. Yet another twist would be to replace the notion of identity. Since humans cannot read public keys, then, accept it: they will not read them. Instead, track down active attacks with specialized entities, who do know how to read keys. That's the whole point of the "notaries" in Convergence . The notaries keep track of what public key is used by which site, and they scream and kick whenever they see something fishy. Anyway , the current system is not broken -- not in an economically relevant way. The breach you are linking to will join the Comodo and DigiNotar mishaps; that's a short list. Such problems occur way too rarely to even show up on the financial radar: if you add up the cost of all frauds which used a fake server certificate obtained from a "trusted CA", you will get an amount which is ludicrously small with regards to the billions of dollars from more mundane credit card frauds. From the point of view of banks and merchants and people who do commerce on the Internet, the X.509 PKI works . There is no incentive for them to promote a replacement. If there were a fake Google certificate every day , used to actually steal money from people, then the situation would be different. Right now, we are around one event per year. | {
"source": [
"https://security.stackexchange.com/questions/26489",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9377/"
]
} |
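The "hash it and keep the first 80 bits" step from the answer above is easy to show concretely. The following sketch is an illustration only: the language (Python), the SHA-256 choice, and the fixed 2001:db8::/48 documentation prefix are all assumptions of mine, not part of the original proposal.

import hashlib
import ipaddress

PREFIX = 0x20010DB80000 << 80  # fixed 48-bit routing prefix (2001:db8::/48, the documentation range)

def key_to_ipv6(public_key_der):
    digest = hashlib.sha256(public_key_der).digest()
    low80 = int.from_bytes(digest[:10], "big")  # keep the first 80 bits of the hash
    return ipaddress.IPv6Address(PREFIX | low80)

print(key_to_ipv6(b"any DER-encoded public key bytes would go here"))

The sketch also makes the answer's objection tangible: the resulting address proves possession of a key, but nothing binds it to a name a human can recognise, so the readable-identity problem is merely moved, not solved.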
26,499 | I am getting emails with smime.p7s attachments. when I look at my mail on my Linux box, I can see base64 encoded block in mail body. I can extract that block and open it on Windows using certmgr and everything looks ok. I need to verify this certificate that I extracted to a file for CA path CRLs expiration I want to perform all the task on linux using a script. How can I use openssl or some other command to do this? | {
"source": [
"https://security.stackexchange.com/questions/26499",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18367/"
]
} |
26,656 | If we know CAPTCHAs can be beaten, why are we still using them? A 35% to 90% success rate, as Wikipedia states, means software is better at solving CAPTCHAs than I am. | CAPTCHAs are a trade-off between the patience of the attackers and the patience of the normal users. Even if they can be beaten, they still serve their purpose if they slow down attackers sufficiently to discourage at least some of them, while not frightening too many potential users. Of course, as is customary in IT, a lot of systems are used and deployed and adopted because of cargo cult . CAPTCHAs are fashionable and this is sufficient to ensure their widespread usage. | {
"source": [
"https://security.stackexchange.com/questions/26656",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18219/"
]
} |
27,776 | Everyone knows that ECB operation mode with a block cipher should be avoided because of clear and obvious weaknesses. But little attention is given to comparison of the other modes in the context of security, and people instead appear to simply prefer the oldest mode: CBC. Are there security caveats with respect to other common operation modes? This is specifically in the context of stream encryption rather than special-purpose functions that might have very specific operational requirements (i.e. TrueCrypt et al.). For example, the simplicity of OFB mode is appealing as it completely masks the nature of the underlying block cipher, turning it into an easy-to-use stream cipher. But the fact that the cipher "output" is directly XORed with the plaintext to produce the ciphertext is vaguely unsettling, and smells like there's room for a chosen plaintext vulnerability. Of the common operation modes, are there any that we should avoid for stream encryption , or situations in which we should avoid any given one of them, or reasons to prefer one over another? Besides ECB, of course. Specifically, CBC, OFB, and CFB operation modes, as those are nearly universally supported. And possibly CTR 'cause everyone knows what it is. | OFB and CTR turn the block cipher into a stream cipher. This means that care must be taken when using them, like for RC4 ; reusing the same stream for encrypting two distinct messages is a deadly sin. OFB is more "delicate" in that matter. Since OFB consists in encrypting the same value repeatedly, it is, in practice, an exploration of a permutation cycle : the IV selects a point and OFB walks the cycle which contains this point. For a block cipher with block size n bits, the average size of a cycle should be around 2^(n/2), and if you encrypt more than that, then you begin to repeat a previous segment of the stream, and that's bad. This can be a big issue when n = 64 (e.g. you use 3DES). Moreover, this is an average: you can, out of (bad) luck, hit a smaller cycle; also, if you encrypt two messages with the same key but distinct IVs, you could (there again if unlucky) hit the same cycle as previously (only at a different point). The bad point of OFB is that it is hard to test for these occurrences (if the implementation includes the necessary code, it can test whether these unwanted situations occur, but this cannot be done in advance , only when part of the encryption has already been done). For CTR, things are easier: CTR is encryption of successive counter values; trouble begins when a counter value is reused. But a counter's behaviour is easy to predict (it is, after all, a counter), hence it is much easier to ensure that successive messages encrypted with the same key use distinct ranges of counter values. Also, when encrypting with CTR, the output begins to be distinguishable from pure random after about 2^(n/2) blocks, but that's rarely lethal. It is a worry and is sufficient to warrant use of block ciphers with big blocks (e.g. AES with 128-bit blocks instead of 3DES and its 64-bit blocks), but that's a more graceful degradation of security than what occurs with OFB. To sum up, don't use OFB; use CTR instead . This does not make CTR easy to use safely, just easier. To avoid botching it, you should try to use one of the nifty authenticated encryption modes which do things properly and include an integrity check, a necessary but often overlooked component.
EAX and GCM are my preferred AE modes (EAX will be faster on small architectures with limited L1 cache, GCM will be faster on modern x86, especially those with the AES opcodes which were defined just for that). To my knowledge, CFB does not suffer as greatly as OFB from the cycle length issues, but encrypting a long sequence of zeros with CFB is equivalent to OFB; therefore, it seems safer to prefer CTR over CFB as well. Almost all block cipher modes of operation have trouble when reaching the 2^(n/2) barrier, hence it is wise to use 128-bit blocks anyway. Note: CFB and OFB have an optional "feedback length". Usually, we use full-block feedback, because that's what ensures the maximum performance (production of n bits of ciphertext per invocation of the block cipher). Modes with smaller feedback have also been defined, e.g. CFB-8 which encrypts only one byte at a time (so it is 8 times slower than full-block CFB when using a 64-bit block cipher). Such modes are not as well supported by existing libraries; also, small feedback loops make the OFB issues worse. Therefore, I do not recommend using CFB or OFB with less than full-block feedback. As pointed out by @Rook: CBC mode, like ECB but unlike CFB, OFB and CTR, processes only full blocks, therefore needs padding . Padding can imply padding oracle attacks , which is bad (arguably, a padding oracle attack is possible only if no MAC is used, or is badly applied; the proper way being encrypt-then-MAC). For this reason, padding-less modes are preferable over CBC. This leads us to a clear victory of CTR over other modes, CFB being second, then CBC and OFB in a tie, then ECB (this is a bit subjective, of course). But, really, use EAX or GCM. (A short CTR nonce-handling sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/27776",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2264/"
]
} |
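Since the counter discipline is the whole game with CTR, here is a sketch of one common way to keep counter ranges disjoint: give every message a fresh 96-bit random nonce and let the low 32 bits of the counter block count blocks within that message. Python and its cryptography package are assumptions for illustration; note that, exactly as the answer above says, bare CTR has no integrity check, so in real code you would pair it with a MAC or simply use GCM/EAX.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr_encrypt(key, plaintext):
    nonce = os.urandom(12)  # unique per message under this key
    initial_block = nonce + (0).to_bytes(4, "big")  # low 32 bits are the per-message block counter
    encryptor = Cipher(algorithms.AES(key), modes.CTR(initial_block)).encryptor()
    return nonce, encryptor.update(plaintext) + encryptor.finalize()

def ctr_decrypt(key, nonce, ciphertext):
    initial_block = nonce + (0).to_bytes(4, "big")
    decryptor = Cipher(algorithms.AES(key), modes.CTR(initial_block)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()

key = os.urandom(16)
nonce, ct = ctr_encrypt(key, b"attack at dawn")
assert ctr_decrypt(key, nonce, ct) == b"attack at dawn"  # confidentiality only; no authenticity guarantee here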
27,787 | I have some doubts about how the RSS private feeds, get transferred between one portal to another. The whole point is to allow private RSS feeds to be consumed by external clients, and in 99% of cases, they support basic authentication and SSL. The problem is that external clients can't use FORM authentication of the portal to reach the private pages, making existing private RSS feeds all but useless. Is there anyway this problem can be resolved, or is it already resolved I need to know.This will help me in a project that I started working on,I want a service that will enable two separate bloggers to share their private posts with each other without necessarily requiring a user name. | {
"source": [
"https://security.stackexchange.com/questions/27787",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18434/"
]
} |
27,806 | My VPN provider gives me the choice between using UDP and TCP for connections. According to this site, UDP is faster over short distances. I'm on the same continent as my server; is that considered a short distance? Is there a test I can run to compare the two? | A VPN is for wrapping raw IP packets into some kind of "tunnel" between two sites (one of the sites possibly being reduced to one computer, i.e. yours). TCP is a protocol which sits on top of IP, and uses IP packets (which are "unreliable": they can get lost, duplicated, reordered...) to provide a reliable two-directional channel for data bytes, where bytes always reach the receiver in the order they were sent. TCP does that by using a complex assortment of metadata with explicit acknowledgements and retransmissions. Thus, TCP incurs a slight network overhead. If the VPN uses TCP, then your own TCP connections will use IP packets sent through the VPN, so you end up paying the TCP overhead twice. A UDP-based VPN thus has the potential for slightly better performance. On the other hand, the cryptographic protection of the VPN requires some state management, which may be harder for the VPN implementation when using UDP, hence it is possible that the UDP-based VPN has an extra overhead to contend with. Therefore, the performance situation is not clear, and you should measure. | {
"source": [
"https://security.stackexchange.com/questions/27806",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18500/"
]
} |
27,810 | My security department insists that I (the system administrator) make a new private key when I want an SSL certificate renewed for our web servers. They claim it's best practice, but my googling attempts have failed to verify their claim. What is the correct way? The closest I've found is Should I regenerate new SSL private key yearly? , but it doesn't really explain why it would be necessary to change the keys. I know it's not a big deal to change the keys while I'm at it, but I've never been one to just do what I'm being told without a proper reason or explanation :) | I would say that their suggestion isn't a very solid one, unless you're using horrifically small key sizes - in which case you have a different problem altogether. A 2048-bit key, by most estimates , will keep you safe until at least the year 2020, if not longer than that. If you're running with 1024-bit keys or less, you're below the standard, and I recommend updating to 2048-bit immediately. If you're currently using 1536-bit keys, you should be safe for a year or two. Of course, this is all academically speaking. The likelihood of someone being able (or inclined) to crack your 1024-bit SSL key within a year is extremely low. As mentioned in the question you linked, there are benefits and drawbacks. Benefits Gives an attacker less time to crack the key. Somewhat of a moot point if you're using reasonable key sizes anyway. Halts any evil-doers that may have compromised your private key. Unlikely, but unknown. Gives you a chance to increase your key size to be ahead of the curve. Drawbacks Doesn't really give you any concrete protection against key cracking unless you're using terribly small keys. SSL checking / anti-MitM plugins for browsers might alert the user that the key has changed. This is, in my opinion, a weak drawback - most users won't be using this. Might cause temporary warnings in relation to more strict HSTS policies and implementations. It requires work. You could be doing something else. So on both sides there are some weak reasons, and some corner-cases you might need to consider. I'd lean slightly towards the "don't do it unless you need to" angle, as long as you're using 2048-bit keys or higher. The most important thing is to ask them why they think it's necessary - it may be that you have an infrastructure-related reason for updating the keys which we don't know about. If they can't come up with a solid argument ("why not?" isn't really valid) then they should probably re-evaluate their advice. (If you do decide to rotate the key at renewal time, a short key-generation sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/27810",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1415/"
]
} |
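If you do regenerate the key when renewing, producing a fresh 2048-bit key pair and a CSR is quick. The sketch below uses Python and the cryptography package, with a placeholder hostname; these choices are assumptions for illustration, and the same job is more commonly done with the openssl command-line tool.

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # brand-new private key, not reused from the old certificate

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .sign(key, hashes.SHA256())
)

with open("server.key", "wb") as f:  # keep this file readable by the service account only
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))
with open("server.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))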
27,889 | I have a Node.JS HTTPS webserver instantiated as follows: var https = require('https');
var fs = require('fs');
var privateKey = fs.readFileSync('./../ssl/localhost.key').toString();
var certificate = fs.readFileSync('./../ssl/localhost.crt').toString();
https.createServer({
key: privateKey,
cert: certificate
}, server).listen(80, 'localhost'); My private key is on my server, which Node.JS reads to create the HTTPS webserver. If a hacker has read access to server files he can access the private key and impersonate the webserver. Should the private key be stored on the server? Should it be destroyed once the HTTPS server has been instantiated? | You do not want to destroy your private key: you will need it again if your server restarts. Reboots happen sometimes... That's a generic observation: you want your server to be able to restart in an unattended fashion. Therefore it must contain the private key, and that private key must be available to the server software. If an attacker hijacks your complete machine, then he gets a copy of your key. That's a fact of life. Live with it. There are mitigations, though: You can use the "DHE" cipher suites for SSL. With these cipher suites, the server key is used for signatures, not for encryption. This means that the attacker who steals your key can thereafter impersonate your server (i.e. run a fake server) but not decrypt SSL sessions which he previously recorded. This is a good property known as Perfect Forward Secrecy . If you store the key in a Hardware Security Module then the key will not be stolen. The attacker may play with the key as long as he has effective control of the server, but not extract it. This decreases his power of nuisance (but HSMs are expensive). In practice, you will just make sure that the file containing the private key is readable only to the system account which runs the server ( chown and chmod on Unix-like systems, NTFS access rights on Windows). | {
"source": [
"https://security.stackexchange.com/questions/27889",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11902/"
]
} |
27,906 | There are 2 sites: http://www.site1.com
http://www.site2.com http://www.site1.com contains a link to http://www.site2.com as <a href="http://www.site2.com/">link</a> When a user clicks the link on http://www.site1.com , the browser sends a Referer header to http://www.site2.com . Based on the Referer header, http://www.site2.com performs some processing. I wonder if I can fake/change (maybe with javascript, PHP, ...) the Referer header, or not send it at all? | There are two situations in which you would want to control the Referer header. By the way, Referer is a misspelling of the word "referrer". If you want to control your personal browser not to pass the Referer to site2.com , you can do that with many browser extensions: For Firefox there is RefControl (which I use and am happy with; I use the option "Forge - send the root of the site"). Chrome has Referer Control. The other situation is where you are a webmaster and you want the users of your site (site1.com) not to send the Referer to other sites linked on your site. You can do that in multiple ways: Use SSL/TLS (https) on your site; a security feature of the browser is not to pass the Referer from pages served up with SSL/TLS to plain HTTP links. However, if the links on your pages use HTTPS, then the Referer will still be passed unless explicitly turned off by other means described below. Use the HTML5 rel="noreferrer" attribute. It is supported by all major browsers. Use a Data URL ('data:') to hide the actual page the link is coming from: <a href='data:text/html;charset=utf-8, <html><meta http-equiv="refresh" content="0;URL='http://site2.com/'"></html>'>Link text</a> . Hide the Referer by redirecting through an intermediate page. This type of redirection is often used to prevent potentially-malicious links from gaining information using the Referer , for example a session ID in the query string. Many large community websites use link redirection on external links to lessen the chance of an exploit that could be used to steal account information, as well as make it clear when a user is leaving a service, to lessen the chance of effective phishing. Here is a simplistic redirection example in PHP: <?php
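// Illustration only: redirecting to an attacker-supplied $url like this is an open redirect;
// real code should validate the target against a whitelist of allowed destinations first.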
$url = htmlspecialchars($_GET['url']);
header( 'Refresh: 0; url=http://'.$url );
?>
<html>
<head>
<title>Redirecting...</title>
<meta http-equiv="refresh" content="0;url=http://<?php echo $url; ?>">
</head>
<body>
Attempting to redirect to <a href="http://<?php echo $url; ?>">http://<?php echo $url; ?></a>.
</body>
</html> | {
"source": [
"https://security.stackexchange.com/questions/27906",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9633/"
]
} |
27,926 | First it was Apple, now it's the US government... U.S. urges users to disable Java; Apple disables some remotely New malware exploiting Java 7 in Windows and Unix systems How serious is this "unspecified vulnerability"? Should all users be disabling Java until we know things have been patched? Edit:- Some people have asked that the following distinction be made: Java is completely different from Javascript, so any recommendations below pertain only to Java. | Apple apparently takes this seriously, since they "disabled Java" in users' computers, which is a rather drastic move. This actually smells like a pretext to kill off the technology, as part of a wider strategy. For this specific hole, there are a few details there . It is all about the Java applet model. To understand: Java is a programming language and a huge library of code, all running within a virtual machine . The VM means that code is much easier to port between architectures. So far so good; the same applies to several other frameworks, including .NET. The strong typing of Java and the VM conceptually allow an extra feature, which you cannot have (at least not easily) with more bare-metal languages like C++: the possibility of safely running potentially hostile code . With C or C++ or assembly or whatever, such a feat requires some help from the hardware and the operating system (namely the privilege levels of protected mode, or, for the extreme cases, specialized virtualization opcodes ). Strong types and the VM allow for a software-only sandbox solution, which could be integrated in, for instance, a Web browser. Java applets are just that: a sandbox for running Java code which is downloaded from the Web, as part of a Web page. However, the people at Sun/Oracle didn't know where to stop, and were a bit too eager. Since sandboxed code is severely limited in what it can do, there are only two choices: learn to live with limitations (that's what Javascript developers do), or include in the sandbox an escape mechanism allowing some applets to do out-of-sandbox things, such as reading and writing files, opening arbitrary sockets, and running native code. The VM allows that, provided that the applet asks nicely , which entails fine-grained permissions, a digital signature and an explicit authorization popup. It turns out that managing this system of "permissions" is very hard to do for the VM and library; namely, the library is very rich in code which offers access to various OS facilities, and they must all be plugged without forgetting any. There are hundreds, maybe thousands of "sensitive calls" to care about. The long history of security holes in Java is a testimony to the nigh impossibility of the task. If the competing technology from Microsoft ( Silverlight , built over .NET) seems a bit less impacted, it is mostly because it is much less used worldwide, giving it far less exposure. For the time being , the safest thing to do is to disable support for Java applets in your browser. Note that Java applications , and in particular anything which runs server-side, are not impacted. The problem of safely running hostile code, while simultaneously maintaining rich functionality and fine-grained access control, is not a new problem. What yet another Java mishap shows is that this old problem is still unsolved. | {
"source": [
"https://security.stackexchange.com/questions/27926",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1662/"
]
} |
29,019 | I just realized that, in any language, when you save a password in a variable, it is stored as plain text in memory. I think the OS does its job and forbids processes from accessing each other's allocated memory. But I also think this is somehow bypassable. So I wonder if it is really safe and if there is a safer way to store passwords to ensure that foreign processes can't access them. I didn't specify the OS or the language because my question is quite general. This is rather a computer literacy question than a specific purpose one. | You are touching a sore point... Historically, computers were mainframes where a lot of distinct users launched sessions and processes on the same physical machine. Unix-like systems (e.g. Linux), but also VMS and its relatives (and this family includes all Windows of the NT line, hence 2000, XP, Vista, 7, 8...), have been structured in order to support the mainframe model. Thus, the hardware provides privilege levels . A central piece of the operating system is the kernel which runs at the highest privilege level (yes, I know there are subtleties with regards to virtualization) and manages the privilege levels. Applications run at a lower level and are forcibly prevented by the kernel from reading or writing each other's memory. Applications obtain RAM by pages (typically 4 or 8 kB) from the kernel. An application which tries to access a page belonging to another application is blocked by the kernel, and severely punished ("segmentation fault", "general protection fault"...). When an application no longer needs a page (in particular when the application exits), the kernel takes control of the page and may give it to another process. Modern operating systems "blank" pages before giving them back, where "blanking" means "filling with zeros". This prevents leaking data from one process to another. Note that Windows 95/98/Millennium did not blank pages, and leaks could occur... but these operating systems were meant for a single user per machine. Of course, there are ways to escape the wrath of the kernel: a few doorways are available to applications which have "enough privilege" (not the same kind of privileges as above). On a Linux system, this is ptrace() . The kernel allows one process to read and write the memory of the other, through ptrace(), provided that both processes run under the same user ID, or that the process which does the ptrace() is a "root" process. Similar functionality exists in Windows. The bottom line is that passwords in RAM are no safer than what the operating system allows. By definition, by storing some confidential data in the memory of a process, you are trusting the operating system not to give it away to third parties. The OS is your friend , because if the OS is an enemy then you have utterly lost. Now comes the fun part. Since the OS enforces a separation of processes, many people have tried to find ways to pierce these defenses. And they found a few interesting things... The "RAM" which the applications see is not necessarily true "memory". The kernel is a master of illusions, and gives pages that do not necessarily exist. The illusion is maintained by swapping RAM contents with a dedicated space on the disk, where free space is present in larger quantities; this is called virtual memory . Applications need not be aware of it, because the kernel will bring back the pages when needed (but, of course, disk is much slower than RAM).
An unfortunate consequence is that some data, purportedly held in RAM, makes it to a physical medium where it will stay until overwritten. In particular, it will stay there if the power is cut. This allows for attacks where the bad guy grabs the machine and runs away with it, to inspect the data later on. Or leakage can occur when a machine is decommissioned and sold on eBay, and the sysadmin forgot to wipe out the disk contents. Linux provides a system call named mlock() which prevents the kernel from sending some specific pages to the swap space (a small sketch of this follows this entry). Since locking pages in RAM can deplete available RAM resources for other processes, you need some privileges (root again) to use this function. An aggravating circumstance is that it is not necessarily easy to keep track of where your password really is in RAM. As a programmer, you access RAM through the abstraction provided by the programming language. In particular, programming languages which use Garbage Collection may transparently copy objects in RAM (because it really helps for many GC algorithms). Most programming languages are thus impacted (e.g. Java, C#/.NET, Javascript, PHP,... the list is almost endless). Hibernation brings back the same issues, with a vengeance. By nature, hibernation must write the whole RAM to the disk -- this may include pages which were mlocked, and even the contents of the CPU registers. To avoid leaks through hibernation, you have to resort to drastic measures like encrypting the whole disk -- this naturally implies typing the unlock password whenever you awake the machine. The mainframe model assumes that it can run several processes which are hostile to each other, and yet maintain perfect peace and isolation. Modern hardware makes that very difficult. When two processes run on the same CPU, they share some resources, including cache memory; memory accesses are much faster in the cache than elsewhere, but cache size is very limited. This has been exploited to recover cryptographic keys used by one process, from another. Variants have been developed which use other cache-like resources, e.g. branch prediction in a CPU. While research on that subject concentrates on cryptographic keys, which are high-value secrets , it could really apply to just any data. On a similar note, video cards can do Direct Memory Access . Whether DMA can be abused to read or write memory from other processes depends on how well undocumented hardware, closed-source drivers and kernels collaborate to enforce the appropriate access controls. I would not bet my last shirt on it... Conclusion: yes, when you store a password in RAM, you are trusting the OS to keep it confidential. Yes, the task is hard, even nigh impossible on modern systems. If some data is highly confidential, you really should not use the mainframe model, and not allow potentially hostile entities to run their code on your machine. (Which, by the way, means that hosted virtual machines and cloud computing cannot be ultimately safe. If you are serious about security, use dedicated hardware.)
"source": [
"https://security.stackexchange.com/questions/29019",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/19660/"
]
} |
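A footnote to the answer above: the mlock()-and-wipe advice can be sketched in a few lines. This is a minimal illustration only, assuming Linux and CPython and reaching the C library through ctypes; the buffer name and size are arbitrary, and, as the answer explains, this mitigates the swap problem without guaranteeing that no other copy of the secret exists.

import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

# A fixed ctypes buffer, not a Python str: str objects are immutable and may be
# copied by the runtime, so they cannot be reliably locked or wiped.
secret = ctypes.create_string_buffer(64)

# Ask the kernel to keep this buffer out of swap (fails if RLIMIT_MEMLOCK is too low).
if libc.mlock(secret, ctypes.sizeof(secret)) != 0:
    raise OSError(ctypes.get_errno(), "mlock failed")

secret.value = b"hunter2"   # ...use the password here...

# Overwrite the buffer before releasing it, then allow paging again.
ctypes.memset(secret, 0, ctypes.sizeof(secret))
libc.munlock(secret, ctypes.sizeof(secret))

Hibernation and language-level copies can still defeat this, which is why the answer treats such measures as mitigations rather than guarantees.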
29,087 | Whenever I see someone on the internet ask the question, "Can someone find out my IP Address from my tweets/my tumblr/facebook posts/whatever else", the response I always see is, "Who cares! An IP Address doesn't even tell anyone anything! It's just a ruse, a scare tactic, people are so stupid to worry about it!" I've seen how much information a skilled hacker can get once they trace back from an ip address and even for the average person, simply having a general vicinity of where someone is from or being able to show the ip address linked to an anonymous posting as identical to something posted under someone's real world info can be enough to fuel a fire. I personally was being harassed online by someone where it was common knowledge that she lived in a very remote part of Wyoming. We had a traffic tracker on my website which included a breakdown of visitors by ip address and then more detailed information attached to the ip address (location, time visited, pages visited, etc). Even though it's not enough to prove definitively that it was this person, everyone thought it was an awful coincidence that the same ip address listed as being from that very same small area of Wyoming kept popping up at the very same times when the woman was visiting the site to harass me. Especially since it was the only ip address tied to Wyoming. It was enough to have the ip banned. Also, in the case of a static ip address, it seems it can be even worse. Example, a friend of mine was being stalked by someone. She used to post to her journal from her office. Her company's office had a static ip address. Her stalker was able to get her ip address from her journal posts somehow and unlike a dynamic ip address, when he looked up the info on the static one it gave, it showed all of the company's information, including the street address of the building. He showed up there one day and security had to be called to have him removed. My question, then, is whether or not the real "ruse" is saying that no one can get any meaningful information from an ip address and the fact that people are constantly being told not to worry about it when they should at least know how people can get their ip address if at all when on the web? As people who work with security issues and/or programming with security in mind, do you personally feel that the privacy of someone's ip address should be a concern when building sites or do you guys agree that it's nothing worth worrying about? | Revealing your IP address doesn't compromise the security of your machine . If an attack on your machine is untargetted (i.e. the attacker just wants to use it to send spam or fishing emails, or as a proxy for targetted attacks), your machine will be scanned at random, not based on the IP address that may be posted in a forum. If the attack is targetted, the person conducting the attack will usually know enough about you to find out your IP address anyway, the real security comes from not having a vulnerable machine. On the other hand, revealing your IP address compromises your privacy . It usually reveals what general geographic area you are accessing the Internet from, and who your Internet provider is; depending on your Internet provider, it may be possible to locate you quite precisely. It may also be possible to correlate your IP address with one online identity with your IP address with another online identity. So it's often not something you want to publish to the whole world. 
Any computer you directly connect to knows your IP address by construction. As a website designer, treat IP addresses the same way you'd treat any other private data such as name, age, gender, street address, telephone number, ... Do not expose them to anyone who isn't a site administrator. Remember that webserver logs will usually contain IP addresses for every request, so protect the logs like you protect your user database. Note however that it often isn't difficult to obtain someone's IP address online. All you have to do is host an image on a server that you control (costs <$10/month), and arrange for the person to browse that image in their browser. The IP address of everyone who viewed the image will be in the server logs. This is why email programs usually require you to confirm whether you want to view an image, and one of the reasons why many social sites require all images to be uploaded to their own servers. As a user, if you're really worried about revealing your identity, use a proxy. You gain privacy at the cost of bandwidth and latency, as well as some privacy towards the proxy itself (it knows what sites you've visited). You can go further and use Tor , which is a “split” proxy where different entities get to know your IP address (the entry node) and what site you're visiting (the exit node); you trade more bandwidth and latency for a bigger privacy gain. | {
"source": [
"https://security.stackexchange.com/questions/29087",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/19700/"
]
} |
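As a small illustration of the logging point in the answer above: any server you run learns the caller's address for every request, including a request for an embedded image. The sketch below uses only Python's standard library; the file name and port are arbitrary, and the point is simply that such a log deserves the same protection as a user database.

from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        visitor_ip = self.client_address[0]      # known to the server by construction
        with open("visitors.log", "a") as log:   # treat this file as private data
            log.write("%s requested %s\n" % (visitor_ip, self.path))
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("", 8080), LoggingHandler).serve_forever()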
29,101 | Possible Duplicate: Effectiveness of Security Images Information about the efficacy of Yahoo's SignIn Seal is scarce, the best I could find was this section on Wikipedia's entry on Phishing, claiming that "few users refrain from entering their password when images are absent" and "this feature [...] is susceptible to other attacks" (which?). But even if users were attentitve to it, I'm having trouble wraping my head around the concept. How does one ensure the seal will only be shown when the user is actually at that particular site? Looking at the source code from Yahoo's login page I see a myriad of techniques being used, but I find it hard to comprehend what are their purposes, and how they work together to achive its goal: JavaScript is used to check whether or not the page is in an iframe (not showing the image if it is - or if JavaScript is disabled); The inserted image has a long random-looking token in its src , which I presume is to keep it secret; The image url expires pretty quickly, so it can't be stolen and used somewhere else. What I could infer from the above: The login page request must have been started from the user's browser, or else the cookie that holds the seal wouldn't be sent; If the attacker puts the page in an iframe , he can't access its contents due to same-origin policy. Likewise, the attacker can't request it via Ajax, for the same reason. On request of the login page, the server prepares an unique URL to serve that image (using the contents of the cookie - the image is not permanently stored in the server), with an unguessable token that the attacker could not have access; The image will only be displayed if the page successfully determines it is not in an iframe ; thus, if the user sees the image, he can be confident that the site is legit. Is my reasoning correct? Are there any known attacks to this scheme? (maybe something involving MitM, etc) | {
"source": [
"https://security.stackexchange.com/questions/29101",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6939/"
]
} |
29,106 | A large amount of files were encrypted by openssl enc -aes-256-cbc -pass pass:MYPASSWORD Openssl should derive key+IV from passphrase. I'd like to know key+IV equivalent of that MYPASSWORD . Is that possible? I know MYPASSWORD. I could decrypt and then re-encrypt with new known key+IV with: openssl enc -d -aes-256-cbc -pass pass:MYPASSWORD
openssl enc -aes-256-cbc -K MYKEY -IV MYIV But the problem is that the amount of data is quite large. | Usage of the openssl enc command-line option is described there . Below, I will answer your question, but don't forget to have a look at the last part of my text, where I take a look at what happens under the hood. It is instructive. OpenSSL uses a salted key derivation algorithm. The salt is a piece of random bytes generated when encrypting, stored in the file header; upon decryption, the salt is retrieved from the header, and the key and IV are re-computed from the provided password and salt . At the command-line, you can use the -P option (uppercase P) to print the salt, key and IV, and then exit. You can also use the -p (lowercase P) to print the salt, key and IV, and then proceed with the encryption. First try this: openssl enc -aes-256-cbc -pass pass:MYPASSWORD -P If you run this command several times, you will notice each invocation returns different values ! That's because, in the absence of the -d flag, openssl enc does encryption and generates a random salt each time. Since the salt varies, so do the key and IV. Thus, the -P flag is not very useful when encrypting; the -p flag, however, can be used. Let's try again; this time, we have the file foo_clear which we want to encrypt into foo_enc . Let's run this: openssl enc -aes-256-cbc -pass pass:MYPASSWORD -p -in foo_clear -out foo_enc This command will encrypt the file (thus creating foo_enc ) and print out something like this: salt=A68D6E406A087F05
key=E7C8836AD32C688444E3928F69F046715F8B33AF2E52A6E67A626B586DE8024E
iv=B9F128D827203729BE52A834CC0890B7 These values are the salt, key and IV actually used to encrypt the file. If I want to get them back afterwards, I can use the -P flag in conjunction with the -d flag: openssl enc -aes-256-cbc -pass pass:MYPASSWORD -d -P -in foo_enc which will print the same salt, key and IV as above, every time. How so? That's because this time we are decrypting , so the header of foo_enc is read, and the salt retrieved. For a given salt value, derivation of the password into key and IV is deterministic. Moreover, this key-and-IV retrieval is fast, even if the file is very long, because the -P flag prevents actual decryption; it reads the header , but stops there. Alternatively , you can specify the salt value with the -S flag, or de-activate the salt altogether with -nosalt . Unsalted encryption is not recommended at all because it may allow speeding up password cracking with pre-computed tables (the same password always yields the same key and IV). If you provide the salt value, then you become responsible for generating proper salts, i.e. trying to make them as unique as possible (in practice, you have to produce them randomly). It is preferable to let openssl handle that, since there is ample room for silent failures ("silent" meaning "weak and crackable, but the code still works so you do not detect the problem during your tests"). The encryption format used by OpenSSL is non-standard: it is "what OpenSSL does", and if all versions of OpenSSL tend to agree with each other, there is still no reference document which describes this format except OpenSSL source code. The header format is rather simple: magic value (8 bytes): the bytes 53 61 6c 74 65 64 5f 5f
salt value (8 bytes) Hence a fixed 16-byte header, beginning with the ASCII encoding of the string Salted__ , followed by the salt itself. That's all! No indication of the encryption algorithm; you are supposed to track that yourself. The process by which the password and salt are turned into the key and IV is un-documented, but the source code shows that it calls the OpenSSL-specific EVP_BytesToKey() function, which uses a custom key derivation function (KDF) with some repeated hashing. This is a non-standard and not-well vetted construct (!) which relies on the MD5 hash function of dubious reputation (!!); that function can be changed on the command-line with the undocumented -md flag (!!!); the "iteration count" is set by the enc command to 1 and cannot be changed (!!!!). This means that the first 16 bytes of the key will be equal to MD5(password||salt) , and that's it. This is quite weak! Anybody who knows how to write code on a PC can try to crack such a scheme and will be able to "try" several dozens of millions of potential passwords per second (hundreds of millions will be achievable with a GPU). If you use openssl enc , make sure your password has very high entropy! (i.e. higher than usually recommended; aim for 80 bits, at least). Or, preferably, don't use it at all; instead, go for something more robust ( GnuPG , when doing symmetric encryption for a password, uses a stronger KDF with many iterations of the underlying hash function). | {
"source": [
"https://security.stackexchange.com/questions/29106",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/19713/"
]
} |
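To make the key-derivation scheme described in the answer above concrete, here is a short Python sketch (standard library only) of OpenSSL's EVP_BytesToKey derivation for aes-256-cbc with the MD5 digest and an iteration count of 1, which is what the enc command used at the time of the answer; newer OpenSSL releases default to SHA-256 (selectable with -md) and can use PBKDF2 instead. The salt below reuses the example value printed in the answer, and the output can be checked against what openssl enc -P reports for the same password and salt.

import hashlib

def evp_bytes_to_key(password: bytes, salt: bytes, key_len: int = 32, iv_len: int = 16):
    # D_1 = MD5(password || salt), D_i = MD5(D_{i-1} || password || salt); key || iv = D_1 || D_2 || ...
    derived = b""
    block = b""
    while len(derived) < key_len + iv_len:
        block = hashlib.md5(block + password + salt).digest()
        derived += block
    return derived[:key_len], derived[key_len:key_len + iv_len]

key, iv = evp_bytes_to_key(b"MYPASSWORD", bytes.fromhex("A68D6E406A087F05"))
print("key =", key.hex().upper())
print("iv  =", iv.hex().upper())

The loop also makes the answer's warning visible: one MD5 call per candidate password is essentially all an attacker needs to test a guess.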
29,110 | From their FAQ : What are "Alternate Inbox Names" ? There are 2 ways to get email into any given inbox. When you check an inbox, listed at the top is the Alternate Inbox name. Emailing that alternate name is the same as emailing the regular name of the inbox. For example, the alternate name for "joe" is "M8R-yrtvm01" (all alternate names start with "M8R-"). So basically if you cracked the Alternate Inbox Names , you will have access to the URL of the original word used, in this case M8R-yrtvm01 will lead to joe.mailinator.com Do you think it's possible or easy to crack? I think for that you should brute force attack their site, which will be sooner down due to heavy loads. | {
"source": [
"https://security.stackexchange.com/questions/29110",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18620/"
]
} |
29,172 | What did the DTLS (TLS over UDP) authors have to change so that it could run without TCP? Bonus points:
Do any of the protocol difference affect the way it should be used, both in terms of interface but also best-practices? | DTLS is currently (version 1.2) defined in RFC 6347 by explaining the differences with TLS 1.2 ( RFC 5246 ). Most of the TLS elements are reused with only the smallest differences. The context is that the client and the server want to send each other a lot of data as "datagrams"; they really both want to send a long sequence of bytes, with a defined order, but do not enjoy the luxury of TCP . TCP provides a reliable bidirectional tunnel for bytes, where all bytes eventually reach the receiver in the same order as what the sender used; TCP achieves that through a complex assembly of acknowledge messages, transmission timeouts, and reemissions. This allows TLS to simply assume that the data will go unscathed under normal conditions; in other words, TLS deems it sufficient to detect alterations, since such alterations will occur only when under attack. On the other hand, DTLS works over datagrams which can be lost, duplicated, or received in the wrong order. To cope with that, DTLS uses some extra mechanisms and some extra leniency. Main differences are: Explicit records . With TLS, you have one long stream of bytes, which the TLS implementation decides to split into records as it sees fit; this split is transparent for applications. Not so with DTLS: each DTLS record maps to a datagram. Data is received and sent on a record basis, and a record is either received completely or not at all. Also, applications must handle path MTU discovery themselves. Explicit sequence numbers . TLS records include a MAC which guarantees the record integrity, and the MAC input includes a record sequence number which thus verifies that no record has been lost, duplicated or reordered. In TLS, this sequence number (a 64-bit integer) is implicit (this is always one more than the previous record). In DTLS, the sequence number is explicit in each record (so that's an extra 8-byte overhead per record -- not a big deal). The sequence number is furthermore split into a 16-bit "epoch" and a 48-bit subsequence number, to better handle cipher suite renegotiations. Alterations are tolerated . Datagrams may be lost, duplicated, reordered, or even modified. This is a "fact of life" which TLS would abhor, but DTLS accepts. Thus, both client and server are supposed to tolerate a bit of abuse; they use a "window" mechanism to make sense of records which are "a bit early" (if they receive records in order 1 2 5 3 4 6, the window will keep the record "5" in a buffer until records 3 and 4 are received, or the receiver decides that records 3 and 4 have been lost and should be skipped). Duplicates MAY be warned upon, as well as records for which the MAC does not match; but, in general, anomalous records (missing, duplicated, too early beyond window scope, too old, modified...) are simply dropped. This means that DTLS implementation do not (and, really, cannot) distinguish between normal "noise" (random errors which can occur) and an active attack. They can use some threshold (if there are too many errors, warn the user). Stateless encryption . Since records may be lost, encryption must not use a state which is modified with each record. In practice, this means no RC4. No verified termination . DTLS has no notion of a verified end-of-connection like what TLS does with the close_notify alert message. 
This means that when a receiver ceases to receive datagrams from the peer, it cannot know whether the sender has voluntarily ceased to send, or whether the rest of the data was lost. Note that such a thing was considered one of the capital sins of SSL 2.0, but for DTLS, this appears to be OK. It is up to whatever data format which is transmitted within DTLS to provision for explicit termination, if such a thing is needed. Fragmentation and reemission . Handshake messages may exceed the natural datagram length, and thus may be split over several records. The syntax of handshake messages is extended to manage these fragments. Fragment handling requires buffers, therefore DTLS implementations are likely to require a bit more RAM than TLS implementations (that is, implementations which are optimized for embedded systems where RAM is scarce; TLS implementations for desktop and servers just allocate big enough buffers and DTLS will be no worse for them). Reemission is done through a state machine, which is a bit more complex to implement than the straightforward TLS handshake (but the RFC describes it well). Protection against DoS/spoof . Since a datagram can be sent "as is", it is subject to IP spoofing: an evildoer can send a datagram with a fake source address. In particular a ClientHello message. If the DTLS server allocates resources when it receives a ClientHello , then there is ample room for DoS . In the case of TLS, a ClientHello occurs only after the three-way handshake of TCP is completed, which implies that the client uses a source IP address that it can actually receive. Being able to DoS a server without showing your real IP is a powerful weapon; hence DTLS includes an optional defense. The defensive mechanism in DTLS is a "cookie": the client sends its ClientHello , to which the server responds with an HelloVerifyRequest message which contains an opaque cookie, which the client must send back as a second ClientHello . The server should arrange for a type of cookie which can be verified without storing state; i.e. a cookie with a time stamp and a MAC (strangely enough, the RFC alludes to such a mechanism but does not fully specify it -- chances are that some implementations will get it wrong). This cookie mechanism is really an emulation of the TCP three-way handshake. It implies one extra roundtrip, i.e. brings DTLS back to TLS-over-TCP performance for the initial handshake. Apart from that, DTLS is similar to TLS. Non-RC4 cipher suites of TLS apply to DTLS. DTLS 1.2 is protected against BEAST-like attacks since, like TLS 1.2, it includes per-record random IV when using CBC encryption. To sum up, DTLS extra features are conceptually imports from TCP (receive window, reassembly with sequence numbers, reemissions, connection cookie...) thrown over a normal TLS (the one important omission is the lack of acknowledge messages). The protocol is more lenient with regards to alterations, and does not include a verified "end-of-transmission" (but DTLS is supposed to be employed in contexts where this would not really make sense anyway). The domain of application of DTLS is really distinct from that of TLS; it is meant to be applied to data streaming applications where losses are less important than latency, e.g. VoIP or live video feeds. For a given application, either TLS makes much more sense than DTLS, or the opposite; best practice is to choose the appropriate protocol. | {
"source": [
"https://security.stackexchange.com/questions/29172",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2264/"
]
} |
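As an illustration of the record-window bookkeeping described in the answer above, here is a toy Python sketch of anti-replay tracking. It is deliberately simplified (real DTLS stacks use a fixed-size bitmap and verify the MAC before updating the window); the window size and sequence numbers are arbitrary examples.

class ReplayWindow:
    def __init__(self, size=64):
        self.size = size
        self.top = -1        # highest sequence number accepted so far
        self.seen = set()    # numbers already accepted within the window

    def accept(self, seq):
        if seq <= self.top - self.size:
            return False                 # too old: drop silently
        if seq in self.seen:
            return False                 # duplicate: drop silently
        self.top = max(self.top, seq)    # slide the window forward if the record is new
        self.seen.add(seq)
        self.seen = {s for s in self.seen if s > self.top - self.size}
        return True

w = ReplayWindow()
for seq in (1, 2, 5, 3, 4, 6, 3):        # the reordered example from the answer, plus one replayed record
    print(seq, "accepted" if w.accept(seq) else "dropped")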
29,375 | The main difference being TrueCrypt creates containers and 7-Zip encrypts the file itself, so file sizes can be guessed. Now let's just talk about the strength and breakability of the encryption. Update: http://forums.truecrypt.org/viewtopic.php?p=107396 | If implemented correctly, AES is AES; the output between two different implementations is identical, and therefore no distinction is possible in after-the-fact comparison -- if done correctly, the one is exactly the same as the other. But there are a few points where differences can creep in: Operation Mode TrueCrypt implements a tweaked block-cipher mode called XTS. It's pretty well vetted and has withstood some serious abuse from some powerful attackers (such as the US Government). From examining the p7zip source code, it appears that AES encryption for the 7-zip format operates in CBC mode. This is certainly not necessarily insecure; it's the mode most popularly used in protocols such as TLS, but it is potentially vulnerable to padding oracle attacks. See this discussion on operation modes for more information. Key Derivation TrueCrypt uses PBKDF2 to turn your password into an encryption key. It's difficult to come up with a better alternative than that. p7zip uses a salted SHA256 hash repeated over a configurable number of iterations. PBKDF2 is a bit more configurable, but 7-zip's alternative is functionally similar and arguably reaches the same goals. Vetted Implementation Here's probably the biggest difference: TrueCrypt's code has been pored over by cryptographers and carefully examined for implementation mistakes. 7-zip's has not (at least not to the same degree). This means that there is a higher probability that 7-zip's code contains some sort of mistake that could allow for some sort of as-yet-unknown attack. That's not to say that such a mistake exists, and that's not to say that such a mistake couldn't be found in TrueCrypt instead. But this is a matter of probability, not certainty. All in all, the differences are minor, and for most use cases you shouldn't expect any difference at all from a security perspective. If it's a matter of life-and-death, I'd probably pick TrueCrypt. But for matters of mere secrecy, I'd recommend going with whichever solution fits your problem the best. | {
"source": [
"https://security.stackexchange.com/questions/29375",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/18620/"
]
} |
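To illustrate the key-derivation contrast in the answer above, here is a small Python sketch (standard library only). The iteration count, salt and password are illustrative, and the second loop only mirrors the general shape of 7-Zip's scheme rather than the exact file-format algorithm.

import hashlib

password = b"correct horse battery staple"
salt = b"illustrative salt"
iterations = 100_000

# PBKDF2 (HMAC-SHA-512 here), the style of derivation TrueCrypt relies on
pbkdf2_key = hashlib.pbkdf2_hmac("sha512", password, salt, iterations, dklen=32)

# Iterated, salted SHA-256, the style of derivation 7-Zip relies on
h = hashlib.sha256()
for counter in range(iterations):
    h.update(salt + password + counter.to_bytes(8, "little"))
iterated_key = h.digest()

print("PBKDF2 key  :", pbkdf2_key.hex())
print("iterated key:", iterated_key.hex())

Either way, the attacker's cost per password guess grows linearly with the iteration count, which is the property both schemes are after.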
29,378 | I was reviewing several different comparisons of AppArmor and SELinux which include: Why I Like AppArmor More Than SELinux SELinux and AppArmor: An Introductory Comparison From these articles I conclude that AppArmor is better than SELinux, based on AppArmor being far less complex and having a far shorter learning curve. Thus the majority of comparisons are in favour of AppArmor, but how can I say that AppArmor is more secure than SELinux? | These security systems provide tools to isolate applications from each other... and in turn isolate an attacker from the rest of the system when an application is compromised. SELinux rule sets are incredibly complex, but with this complexity you have more control over how processes are isolated. Generating these policies can be automated. A strike against this security system is that it's very difficult to independently verify. AppArmor (and SMACK) is very straightforward. The profiles can be hand-written by humans, or generated using aa-logprof. AppArmor uses path-based control, making the system more transparent so it can be independently verified. | {
"source": [
"https://security.stackexchange.com/questions/29378",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11801/"
]
} |
29,425 | What is the difference between .pfx and .cert certificate files? Do we distribute .pfx or .cert for client authentication? | There are two objects: the private key , which is what the server owns, keeps secret, and uses to receive new SSL connections; and the public key which is mathematically linked to the private key, and made "public": it is sent to every client as part of the initial steps of the connection. The certificate is, nominally, a container for the public key. It includes the public key, the server name, some extra information about the server, and a signature computed by a certification authority (CA). When the server sends its public key to a client, it actually sends its certificate, with a few other certificates (in a chain: the certificate which contains the public key of the CA which signed its certificate, and the certificate for the CA which signed the CA's certificate, and so on). Certificates are intrinsically public objects. Some people use the term "certificate" to designate both the certificate and the private key; this is a common source of confusion. I personally stick to the strict definition for which the certificate is the signed container for the public key only. A .pfx file is a PKCS#12 archive : a bag which can contain a lot of objects with optional password protection; but, usually, a PKCS#12 archive contains a certificate (possibly with its assorted set of CA certificates) and the corresponding private key. On the other hand, a .cert (or .cer or .crt ) file usually contains a single certificate, alone and without any wrapping (no private key, no password protection, just the certificate). | {
"source": [
"https://security.stackexchange.com/questions/29425",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/19893/"
]
} |
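A short sketch of the distinction drawn in the answer above, using the third-party Python cryptography package; the file names and password are placeholders, and the .cer file is assumed to be DER-encoded (PEM variants exist too).

from cryptography import x509
from cryptography.hazmat.primitives.serialization import pkcs12

# PKCS#12 (.pfx/.p12): a password-protected bag holding the private key,
# the certificate, and usually the CA certificates of the chain.
with open("server.pfx", "rb") as f:
    key, cert, extra_certs = pkcs12.load_key_and_certificates(f.read(), b"changeit")
print("has private key:", key is not None, "- chain length:", 1 + len(extra_certs))

# Bare certificate (.cer/.crt): public information only, no private key inside.
with open("server.cer", "rb") as f:
    lone_cert = x509.load_der_x509_certificate(f.read())
print("subject:", lone_cert.subject.rfc4514_string())

This is also why a .pfx is something to guard carefully, while a certificate file can be handed out freely.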
29,449 | Most (if not all) of us know that a Google Doc link looks something like this: https://docs.google.com/document/d/13P3p5bA3lslqEJT1BGeTL1L5ZrQq_fSov_56jT9vf0I/edit There are becoming several tools (like Trello) that allow you to "attach" a document from your Google Drive. When you attach a document, you have to manually go in and add people to the document - or say that anyone with a link can edit, view, or comment. From a security standpoint, how risky is just saying that everyone can edit? What is the likelihood that someone could brute force guess your Google Doc link, and thus gain access to your document? My guess here is that there are a lot easier avenues (e.g. guessing someone's Trello PW, rubber hose decryption) to gain access to whatever information the attacker was looking for, mainly on the obvious fact that there are a lot of characters there, plus the assumption that Google probably keeps an eye out for that sort of sneaky behavior... But let's say that you were able to brute force the links - what are the vulnerabilities with this approach? | Assuming the document ID distribution is uniform and unpredictable, here's the math: 44 characters long Uppercase, lowercase, digits and underscore = 26 + 26 + 10 + 1 = 63 character alphabet Therefore: Total possible combinations: 63^44 keyspace: 263 bits ⇐ 44 * log2(63) And we know that brute-forcing a 263-bit key in any reasonable amount of time (lifetime of the universe) is well beyond what the laws of physics will allow, no matter how advanced and magical and "quantum" the computers may become. This may seem a bit bold an assertion, but it comes from the fact that the sun simply doesn't put out enough energy in such a timeframe to count that high. See page 157 of Schneier's Applied Cryptography for the details, or look at this answer here where I summarized the math, or this answer where lynks quoted the entire section from Schneier's book. Specifically, the sun's energy is only sufficient to count to 2^187 per year, meaning it will take 2^76 years with our own sun, 2^75 years if we could harness 2 suns, etc. You might barely have enough power to count to 2^256 if you were to power your computer with the supernova destruction of every star in the Milky Way Galaxy. So that's getting somewhere. | {
"source": [
"https://security.stackexchange.com/questions/29449",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3053/"
]
} |
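The arithmetic in the answer above is easy to reproduce; a short Python check (the variable names are mine):

import math

alphabet = 26 + 26 + 10 + 1             # upper case, lower case, digits, underscore
id_length = 44
print(alphabet ** id_length)            # total number of possible document IDs, 63^44
print(id_length * math.log2(alphabet))  # ~263 bits of keyspace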
29,539 | I found myself telling a coworker today "Email is insecure, that's why we developed our secure report application." But then it struck me, Why is email considered insecure? If it is so insecure, why do we trust it for password resets? I never questioned it before... | We trust email for password resets because we do not have anything better . It is not really a matter of trust as in "we have full faith in the email"; it is more like "eh, as if we had a choice...". In particular with Web-based business with consumers: a consumer is authenticated by his dynamic IP address (cannot be used except as part of a police operation with warrants to uncover the ISP logs), possible credit card details (idem), and whatever the customer accepted to tell us, which, at best, is a valid email address. So, email. | {
"source": [
"https://security.stackexchange.com/questions/29539",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10574/"
]
} |