Dataset columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k characters), response (string, 0 to 28.8k characters), metadata (dict).
41,247
When Lotus Notes asks for the password, it displays a screen with a picture that changes each time a new character is entered after the fifth character. I have noticed that the sequence of pictures is the same between closing and reopening Lotus Notes. Is this meant to distract an attacker from looking at the keyboard as someone types? Has this ever been proven effective? Also, as far as I can tell, a random number of x's are added after each character is typed. I guess this is so an attacker can't see the password length, but is there any point to it at all, since the user doesn't know how many characters have been typed either? EDIT: for what it's worth, I didn't even realize the pictures were the same each time I typed in a password.
There is some information on this defunct page. Apparently, the idea that the "moving picture" is there to distract shoulder surfers is widespread, and wrong. That's not how this picture works; what it does is actually worse, although it proceeds from good intentions. When you type the password letters, Lotus employs a "fairly complicated" but deterministic algorithm to map the password as entered to a picture; this basically is a hash function with a very small output size (the output is a "value" in the set of possible pictures). It is possible that the said hash function includes some server-specific secret, but it won't matter much. The real point is that, as you observe, when you enter your password you always end up with the same picture for a given password. The good intention is to achieve the following two properties: Give an early visual warning as to whether the password was entered correctly or not; the user will soon learn the sequence of pictures for his own password, and thus if a picture changes, then the user knows that he typed the wrong key at that point (or possibly a few characters before). An attacker who tries to mimic this login popup and make the user type his password in a fake popup would supposedly find it "difficult" to recompute the pictures and display them correctly. The second reason is pure baloney, when you think about it: the "complicated algorithm" cannot be kept secret (especially if the fake popup is actually a man-in-the-middle attack and the true popup is used under the covers to get the actual pictures), and making pictures which move on the screen is really easy: that's what 99.9% of the Web is about. The first reason, however, includes the seeds of destruction: this leaks information on the password. The pictures are on the screen and very visible; quite prominent, even. The "shoulder surfer" can see them from afar. And he can use them to prune out potential passwords. Indeed, if there are four possible pictures, then this leaks 2 bits per character: for an 8-character password, which would have, realistically, about 30 bits of entropy, this is then reduced to a meagre 14 bits. Indeed, this feature is analogous to a system which would write on the screen, in big letters, and for each password character: "this character is a digit" / "this character is an uppercase letter between A and M" / ... Therefore, this "picture" system is downright dangerous and should be banned. As for the password length, the number of characters is very easy to get for the attacker, because each key stroke is highly audible. The shoulder surfer just needs to be within earshot of the victim, and could easily record the sequence with his smartphone to listen to it later, suitably slowed down, and thus obtain the password length. Under these conditions, hiding the length from the user himself is pointless.
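To see how much a deterministic prefix-to-picture mapping gives away, here is a small sketch (not the actual Lotus algorithm, whose details and picture count are not public; the hash, the alphabet and the four-picture assumption are made up for illustration). It models the picture as a hash of the typed prefix and counts how many candidate passwords a shoulder surfer who saw the picture sequence still has to consider:

```python
import hashlib
import itertools
import string

NUM_PICTURES = 4  # hypothetical; stands in for Lotus' undisclosed picture set

def picture(prefix: str) -> int:
    """Deterministic prefix -> picture index, standing in for the real hash."""
    return hashlib.sha256(prefix.encode()).digest()[0] % NUM_PICTURES

alphabet = string.ascii_lowercase
password = "pass"                      # toy 4-character password
observed = [picture(password[:i + 1]) for i in range(len(password))]

# Keep only candidates whose picture sequence matches what was observed.
candidates = sum(
    1 for combo in itertools.product(alphabet, repeat=len(password))
    if all(picture("".join(combo)[:i + 1]) == observed[i]
           for i in range(len(password)))
)
total = len(alphabet) ** len(password)
print(f"{total} candidates shrink to roughly {candidates}")
```

With four pictures, the candidate set shrinks by roughly a factor of four per character, which is exactly the 2-bits-per-character leak described above.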
{ "source": [ "https://security.stackexchange.com/questions/41247", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10714/" ] }
41,309
I am a little confused about the contextual difference between permission and privilege from a computer security perspective. Though I have read the definitions of both terms, it would be nice if someone could give me a practical example, e.g.: User A can read and write file X. What is the privilege and what is the permission in this case?
In computer security, they are often used interchangeably. In the context of rights, permission implies consent given to an individual or group to perform an action. A privilege is a permission given to an individual or group. Privileges are used to distinguish between different granted permissions (including no permission). A privilege is a permission to perform an action. Also, from the english.se link referenced above: A permission is a property of an object, such as a file. It says which agents are permitted to use the object, and what they are permitted to do (read it, modify it, etc.). A privilege is a property of an agent, such as a user. It lets the agent do things that are not ordinarily allowed. For example, there are privileges which allow an agent to access an object that it does not have permission to access, and privileges which allow an agent to perform maintenance functions such as restarting the computer. So in your example, the privilege is having the permission to write the file X.
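A toy model can make the distinction concrete (this is only an illustrative sketch, not any particular operating system's access-control model; all class and attribute names are invented): permissions hang off the object, privileges hang off the agent.

```python
from dataclasses import dataclass, field

@dataclass
class File:
    name: str
    # Permission: a property of the object, listing what each agent may do with it.
    permissions: dict = field(default_factory=dict)

@dataclass
class User:
    name: str
    # Privilege: a property of the agent, granting abilities beyond ordinary permissions.
    privileges: set = field(default_factory=set)

def can_write(user: User, obj: File) -> bool:
    permitted = "write" in obj.permissions.get(user.name, set())
    overridden = "override_file_permissions" in user.privileges
    return permitted or overridden

x = File("X", permissions={"A": {"read", "write"}})
print(can_write(User("A"), x))                                               # True: permission on the file
print(can_write(User("root", privileges={"override_file_permissions"}), x))  # True: privilege of the agent
print(can_write(User("B"), x))                                               # False: neither
```

In the question's example, "User A can read and write file X" is a permission recorded on X; if an administrator can write X without appearing in its permission list, that ability is a privilege of the administrator.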
{ "source": [ "https://security.stackexchange.com/questions/41309", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11801/" ] }
41,399
OpenSSL 1.0.1c onwards seems to offer CMS support. I have difficulty understanding the difference between smime and pkcs7. S/MIME specs are layered on PKCS#7 (so says Wikipedia). Now in OpenSSL I have the following commands: openssl smime and openssl cms. Are these equivalent commands? Why then have two implementations? When should either be used?
PKCS#7 is an old standard from RSA Labs, published later on as an "informational RFC". Then, a new version was produced, as an "Internet standard", i.e. with the seal of approval from the powers-that-be at IETF; a new name was invented for that: CMS. Newer versions were subsequently defined: RFC 3369, RFC 3852, and RFC 5652. You can consider CMS and PKCS#7 to both designate the same standard, which has several versions. CMS (PKCS#7) is a format for applying encryption, signatures and/or integrity checks on arbitrary binary messages (which can be large). It can be nested. It is the basis for some protocols such as time stamping. It is also frequently (ab)used as a container for X.509 certificates: the SignedData type includes a field for a set of signatures (and that set can be empty), and also a field to store arbitrary X.509 certificates which are considered as "potentially useful" for whoever processes the object. CMS is nominally backward compatible: there are version fields at various places, and a library which understands a version of CMS should be able to process messages from older versions; moreover, such a library should also produce messages which are compatible with other versions, inasmuch as it is feasible given the message contents. See section 5.1 for an example: the "version" fields are set to the minimal values which still make sense given what else is in the structure. S/MIME is a protocol for doing "secure emails". S/MIME uses CMS for the cryptographic parts; what S/MIME adds over CMS is the following:
- Rules for encoding CMS objects (which are binary) into something which can travel unscathed in emails (which are text-based). These rules build on MIME, so basically Base64 with some headers.
- Canonicalization rules for signed but not encrypted emails, with "detached signatures". When an email is just signed, and the recipient might not be S/MIME-aware, it is convenient to send the signature as a CMS SignedData object which does not contain the email data itself; the signature is joined to the email as an attachment. It is then important to define exactly how the email contents are to be encoded into a sequence of bytes for the purpose of signature verification (a single mis-encoded bit would imply verification failure).
- Restrictions and usage practices on the actual CMS objects: how much nesting is allowed, what types may appear...
- Some extra attributes so that a sender may publish, in his signed messages, what kinds of cryptographic algorithms he supports and similar metadata.
In a way, we can think of S/MIME as the glue between CMS and emails. OpenSSL is a library which implements some protocols, including some versions of PKCS#7 and CMS and S/MIME. The library also comes with command-line tools which expose, as a command-line interface, some functionalities of the library. The tools won't support anything that the library does not implement (the contrary would be surprising, to say the least), but the converse is not true: the library may implement features to which the command-line tools give no easy access, or no access at all. Generally speaking, the command-line tools are not very "serious". They are convenient for some manual operations or a few scripts, but for robust applications you should use the library directly (it is a C library, but bindings exist for many programming languages). Among the command-line tools are the subcommands pkcs7, cms and smime.
Confusingly, the cms command is meant to be used for S/MIME, and supports S/MIME, and turns out to be usable as a "generic" CMS support tool only as a byproduct. The OpenSSL developers created the new command cms in order to support recent versions of S/MIME (and CMS) because they felt that they could not update the smime command without breaking interface compatibility: third-party scripts using openssl smime would have become invalid if the parameters for that command had been adjusted to account for newer S/MIME versions. So they just created a new command, called cms. Theoretically, you should use openssl cms for everything related to S/MIME; but since that command gives access to the OpenSSL support code for the "new" versions of S/MIME and CMS, it necessarily allows you to produce S/MIME messages which are perfectly valid but use features that older implementations of S/MIME might not support. This is summarily described in the man page. Extensive tests with other S/MIME implementations (e.g. Thunderbird, Outlook...) are needed to know exactly what you can do with S/MIME without making your emails unreadable by other people.
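As a quick illustration of how the two subcommands overlap, the following sketch signs a message with openssl cms and verifies it with openssl smime. The file names (message.txt, cert.pem, key.pem, ca.pem) are hypothetical placeholders for a message, a signing certificate, its private key and the issuing CA; exact behaviour should be checked against your OpenSSL version.

```python
import subprocess

def run(args):
    """Thin wrapper so a non-zero exit status surfaces immediately."""
    subprocess.run(args, check=True)

# Sign with the newer cms subcommand (default output is S/MIME-wrapped).
run(["openssl", "cms", "-sign",
     "-in", "message.txt", "-signer", "cert.pem", "-inkey", "key.pem",
     "-out", "signed.msg"])

# Verify the same blob with the older smime subcommand.
run(["openssl", "smime", "-verify",
     "-in", "signed.msg", "-CAfile", "ca.pem",
     "-out", "verified.txt"])
```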
{ "source": [ "https://security.stackexchange.com/questions/41399", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16579/" ] }
41,447
After reading this article , I can see the benefits of password hashing as a second layer of defence, in the event of an intruder gaining access to a password database. What I still don't understand is this: Isn't password hashing only important if the system is weak enough to give an intruder access to the password database? If so, then why is such an emphasis put on building secure password databases, instead of secure systems that would prevent unauthorized access to important user information? How is it that hashed password databases are so often stolen?
Experience has taught the community that it's effectively impossible to keep intruders out. It is a question of when, not if, somebody will gain access to your password database. It doesn't matter whether you are a random blog or a multi-billion dollar government department, you need to assume that somebody will one day gain access. And quite often, they will have enough access to read the database but not enough access to, for example, insert a man-in-the-middle which grabs plaintext passwords as they're used to authenticate someone. For example, they might not hack your primary server, they might only hack a server that contains backup copies of your database. Also, most organisations have many employees. An employee doesn't need to hack your network to view the database, they might already have unfettered access (especially if they're an engineer or sysadmin). There are many reasons why it is a bad idea for your employees to know customer passwords. Even if your website is completely worthless and it wouldn't matter if somebody hacked into it, the username and password used to log into your website is often exactly the same as the one used to log into other, much more important services. For example, somebody might write a bot that attempts to log into Apple's iTunes store with every username/password in your database, and if successful it starts purchasing things through the store. This attack might be successful for as many as 10% of the users in your database, and many of those users will never even notice they've been billed $4.99. This is not a theoretical attack, it happens all day long every day and attempts to stop it do not always succeed. EDIT: And in the comments, @emory made another point I forgot: somebody might file a subpoena or use some other legal process to gain access to the database, allowing them to see plaintext passwords unless you have a good hash. Note it's not just law enforcement who can do this, any private lawyer with a strong legal case against you can get access to your database.
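This is why storing only a slow, salted hash matters: even the copy of the database that an attacker (or employee, or subpoena) obtains does not directly reveal passwords that users reuse elsewhere. A minimal sketch using the third-party bcrypt package (pip install bcrypt):

```python
import bcrypt  # third-party package: pip install bcrypt

password = b"correct horse battery staple"

# Store only this value; the per-password salt is embedded in the hash string.
stored_hash = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# At login, compare the submitted password against the stored hash;
# the plaintext never needs to be written anywhere.
print(bcrypt.checkpw(password, stored_hash))        # True
print(bcrypt.checkpw(b"wrong guess", stored_hash))  # False
```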
{ "source": [ "https://security.stackexchange.com/questions/41447", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30122/" ] }
41,527
If I hover the mouse pointer over a link without clicking on it, I can see the link's URL displayed in the bottom-left corner. Q: Could this URL (in the bottom-left) be different from the one my web browser will actually go to? (Don't count server-side redirects, e.g. HTTP 302.) I am asking because I want to know whether telling users to check the bottom-left of the browser before clicking a link is good, usable security advice. Please provide authentic links/descriptions too :) PS: maybe if JavaScript is enabled, the user can be sent to another website, different from the one displayed in the bottom-left? UPDATE: Does disabling JavaScript with e.g. NoScript solve this problem 100%? (Opened a bounty for this part of the question.) It looks like it could be prevented with NoScript.
Could this URL (in the left/bottom) be different from the one that my web browser will go to? Yes.
- For a simple link click, the whole click could be captured by JavaScript and made to do something else, including navigating to a different page.
- The link could be substituted onmousedown (this is common behaviour for some link-click tracking scripts).
- For browsers like Chrome where the address pop-up appears inside the page area, that pop-up could be faked with JavaScript and page elements.
In general you can only trust browser UI that appears in the chrome border, outside of page control. Consequently the address pop-up is a convenience feature but offers no security function.
{ "source": [ "https://security.stackexchange.com/questions/41527", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30175/" ] }
41,617
First of all, my motive is to avoid storing the salt in the database as plain text. As far as this question is concerned, the salt is not stored in the database. After discussion in comments and in chat, I've come up with a theory: it appears that using domain_name + user ID alongside a pepper will provide a sufficient combination of randomness and uniqueness. Would a method such as this provide just as much security as a random salt, without having to store a designated salt in the database as plain text?
The only requirement of a salt is to be globally unique. It doesn't have to be random and it most certainly does not have to be unknown. Note the keyword globally. A salt must be unique not only in the context of your database, but in the context of every single database out there. An email or an incremental user ID is (very) unlikely to fulfill this requirement. Using a randomly-generated salt does. EDIT I'm going to address the whole using a domain + userid as a salt idea that came up in the comments. I do not think this scheme is a good idea, especially when compared with just using a random salt. If your site is a high-profile target, an attacker might find it worthwhile to generate a rainbow table targeting the admin account before the attack occurs. This might allow him to get access before the attack gets discovered. The domain + userid scheme does not work well when you consider this situation, because the admin account is usually within the first few userids in the system, especially when userid is incremented using a counter. That being said, this whole discussion is really moot in practice. Generating random salts is easy and fast. Many password hashing libraries, especially the bcrypt ones, already do it for you. There really isn't a point in rolling your own scheme even if it's just as secure. Let's face it, we are all lazy people, aren't we? :P Why reinvent the wheel?
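A minimal sketch of the conventional approach the answer recommends: generate a random salt per password, store it alongside the hash, and recompute at verification time (the iteration count and salt size below are illustrative choices, not a recommendation tuned for any particular system):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """PBKDF2-HMAC-SHA256 with a random per-user salt stored next to the hash."""
    if salt is None:
        salt = os.urandom(16)  # 128 random bits: effectively globally unique
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

salt, digest = hash_password("hunter2")

# Verification: recompute with the stored salt, compare in constant time.
_, candidate = hash_password("hunter2", salt)
print(hmac.compare_digest(digest, candidate))  # True
```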
{ "source": [ "https://security.stackexchange.com/questions/41617", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
41,797
And if it is possible, why has it been decided to keep using a smart card for this task? I will be grateful if you can provide some practical examples on how to bypass the use of a smart card (if possible).
A satellite TV system must face the following challenge: it is one-way. The receivers cannot do anything but receive; they cannot emit anything. The generic problem is known as broadcast encryption. In practice, things go this way:
1. Each subscriber has a smartcard, and that card contains a key Ks specific to that subscriber.
2. The media stream is encrypted with a key K. That key is updated regularly.
3. Along with the media stream, the publisher sends K encrypted with each Ks in circulation. That is, every minute or so, thousands of small blobs are sent in some "holes" in the data stream (apparently there is sufficient free bandwidth for that); all these blobs contain K, but each encrypted with the key of one subscriber. Thus, a given decoder will just wait for the blob which contains K encrypted with its card-specific key Ks, and use the card to obtain K and decrypt the stream.
4. When a subscriber is no longer a subscriber (he ceased to pay), the publisher simply stops sending the blob containing K encrypted with the corresponding Ks. The next time K is updated (it happens several times per day), the ex-subscriber is "kicked out".
In older times, media publishers had the habit of doing everything "their way", which means that they designed their own encryption algorithms, and they were very proud of them, and kept them secret. Of course, such secrets were never maintained for long because reverse engineering works well; and, inevitably, these homemade encryption algorithms almost invariably turned out to be pathetically weak and breakable. Nowadays, the publishers have begun to learn, and they use proper encryption. In such a situation, the only recourse for attackers is to clone smartcards, i.e. break their way through the shielding of a legally obtained smartcard, to get the Ks of that card. Breaking through a smartcard is expensive, but not infeasible, at least for the kind of smartcards that are commonly used for such things; it requires a high-precision laser and an electronic microscope, and is rumoured to cost "a few thousands of dollars" for each break-in. Attackers do just that. Publishers react in the following way: with traditional police methods. The cloning method is worth the effort only if thousands of clones can be sold, as part of some underground market. Inspectors just masquerade as buyers, obtain a clone, see what card identity the clone is assuming, and deactivate the corresponding subscriber identity on the publisher side, evicting all clones of that card in one stroke. From what I have seen, the break-clone-sell-detect-deactivate cycle takes about two weeks. It is more-or-less an equilibrium: the non-subscribers who accept the semi-regular breakage of connectivity are in sufficient numbers to maintain professional pirates, but they are not numerous enough to really endanger the publishers' business model. We may note that Blu-ray discs are a much more challenging model, because readers can be off-line and must still work; and though each Blu-ray reader embeds its own reader-specific key, there is not enough room on a Blu-ray disc to include "encrypted blobs" for all readers in circulation. They use a much more advanced algorithm called AACS. However, for satellite TV, the simple method described above works well.
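A toy version of the key-blob scheme described above, using Fernet from the third-party cryptography package purely as a convenient symmetric cipher (real pay-TV systems use their own formats and dedicated hardware; the card IDs and messages here are invented):

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Each subscriber card holds its own card-specific key Ks.
card_keys = {cid: Fernet.generate_key() for cid in ("card01", "card02", "card03")}

# The publisher encrypts the media stream with the current stream key K...
K = Fernet.generate_key()
media_blob = Fernet(K).encrypt(b"one second of video")

# ...and broadcasts K wrapped under every active subscriber's Ks.
key_blobs = {cid: Fernet(ks).encrypt(K) for cid, ks in card_keys.items()}

# A decoder picks out its own blob, asks its card to unwrap K, then decrypts.
my_id = "card02"
my_K = Fernet(card_keys[my_id]).decrypt(key_blobs[my_id])
print(Fernet(my_K).decrypt(media_blob))

# Revocation: when K is next rotated, simply stop emitting a blob for "card03".
```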
{ "source": [ "https://security.stackexchange.com/questions/41797", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30388/" ] }
41,803
What are the threats of having the KDC accessible via the internet for remote clients? It's my understanding that the authentication is a challenge/response protocol and that the password is never transmitted. Are brute-force attacks the reason this isn't common? Is it because Kerberos authentication is more valuable than brute-forcing an individual SSH or mail account, since multiple services on multiple hosts might be compromised at once? Is it because of the attack surface of the KDC itself, and that a pinhole firewall is not seen as a sufficient barrier to prevent access to this valuable lynchpin for authentication and authorization?
Arguably the reason Kerberos isn't used over the public Internet doesn't have to do with the security of the protocol, or the exposure of the KDC, but rather that it's an authentication model that doesn't fit the needs of most "public Internet" applications. To quote Wikipedia, Kerberos "provides mutual authentication — both the user and the server verify each other's identity." That means that the client machine needs to have the necessary keys to vouch for its identity before user authentication takes place. Distributing those keys for "public Internet" applications isn't practical, for several reasons:
- Consider how many PCs access your banking web site, owned by a wide variety of people whose ability to install Windows patches regularly is strained. It's not like corporate IT will come around and configure Kerberos for them.
- Kerberos requires tightly synchronized clocks - not that hard to do in a centrally managed corporate or educational environment, a lot harder when the machines are all owned and managed by completely different people.
- One of the advantages of Kerberos is the ability to seamlessly leverage an initial authentication into multiple application accesses. On the public Internet, the multiple applications rarely have anything to do with each other - my bank, my mail, and my /. account have no call to trust each other or the same set of people. (This falls into the area of Federated Identity, and there is work in this area, but it doesn't need the same mix of things Kerberos brings to the table.)
In short, Kerberos is a heavyweight solution, and public Internet application access control is a lightweight problem. To answer your actual question directly, no, I don't believe concerns about brute-force attacks or vulnerability of the KDC keep Kerberos off the Internet. The protocol has been reasonably well vetted, attacked, fixed, and updated over 20+ years. The client (machine) authentication piece alone provides tremendous protection against a number of attacks that are expected on a large open network like the Internet.
{ "source": [ "https://security.stackexchange.com/questions/41803", "https://security.stackexchange.com", "https://security.stackexchange.com/users/29420/" ] }
41,872
This question is meant as a canonical question in regard to the US and UK spy agencies compromising end nodes and encryption between nodes to spy on people they suspect to be terrorists. However, this has the side effect of significantly elevating the risk of exposing innocent people's personal data. Recently an article was published by the Guardian detailing: US and British intelligence agencies have successfully cracked much of the online encryption relied upon by hundreds of millions of people to protect the privacy of their personal data, online transactions and emails, according to top-secret documents revealed by former contractor Edward Snowden. What is the impact of this and have they really broken all of the crypto out there?
There will be a lot of speculation regarding this question. I will try to provide as much information as stated in the articles. I will also update the answer regularly with facts provided in the comments. Relevant articles to this answer:
- NSA: How to remain secure against surveillance
- The US government has betrayed the internet. We need to take it back
- US and UK spy agencies defeat privacy and security on the internet
First of all I would like to say: Is this threat real? Depending on how well we can trust the papers, it should be considered a real threat in the sense that security agencies have successfully implemented backdoors in software or at encryption end-points. I believe this is probably true, as three reputable newspapers were kindly requested not to publish the article. This means that there is a high likelihood that at least part of the story is true. Have they really broken crypto? As far as we can tell from the articles, they mainly have three strategies:
1. Use supercomputers (clusters) to brute-force encryption protocols. This probably means they can efficiently brute-force encrypted files.
2. Implement backdoors into the software which does the encryption.
3. Make technology companies comply with their demands, some of which may include #2.
Options 2 and 3 suggest that they have not succeeded at real-time decryption of, for instance, SSL. As Bruce Schneier stated: The NSA deals with any encrypted data it encounters more by subverting the underlying cryptography than by leveraging any secret mathematical breakthroughs. First, there's a lot of bad cryptography out there. If it finds an internet connection protected by MS-CHAP, for example, that's easy to break and recover the key. It exploits poorly chosen user passwords, using the same dictionary attacks hackers use in the unclassified world. They still require people at either end node to implement a backdoor covertly, or make the technology company help them in decrypting traffic passed through their systems. Chances are high they have the encryption/decryption/signing keys of some of the Certificate Authorities, which would allow them to set up proxies and perform man-in-the-middle attacks. Due to the trusted certificates (because they can sign them themselves), these attacks will not be noticed by the users they spy on. Note that they have proposed a system to perform real-time decryption, but there are no indicators that they have actually succeeded in building this. All we can do, at this moment, is speculate. There has also been one case where they, allegedly, backdoored a random generator (according to Wired) used by encryption algorithms. Please note: Encryption standards are public, which means anyone who wants to scrutinize them can look into them. The NSA has made it a lot more difficult to review them though. (Implementations are a whole other thing though.) Note that for option 2 it seems they specifically target commercial software. If you want to be more confident you're not using compromised software, you should use open-source products and compile the binaries yourself. (Although, theoretically speaking, the compiler could also be backdoored.) The code could be peer reviewed, or you could review it yourself. (Unfortunately, the latter is often not feasible or practical.) Furthermore, this quote from Edward Snowden would also suggest that they haven't managed to crack strong crypto: "Encryption works.
Properly implemented strong crypto systems are one of the few things that you can rely on," he said, before warning that the NSA can frequently find ways around it as a result of weak security on the computers at either end of the communication. What are the consequences? The NSA endangers everyone on a certain system by deliberately installing backdoors. They are not just risking the privacy of those they are investigating, but everyone using the system. Open-source software is less likely to be compromised. It's clearly stated they attack commercial, closed-source software. Backdoors are much more likely to be spotted, and spotted more quickly, in open-source software. Commercial software, especially software published in the U.S., is more likely to be backdoored than any other software. (This is personal speculation, but I believe that it would be harder to do in many other regions - e.g. the E.U. - due to more stringent privacy laws and the fact that multiple governments would often need to be involved. However, this is still no guarantee.) It also seems that they want to compromise as many internet nodes/hubs (tier 1 providers probably) as possible. This is logical because most traffic on the internet will pass through the tier 1 providers at some point. The biggest risk is data leaking because of their negligent practices with introducing backdoors. If the NSA is really after you, I doubt some crypto will help you to save your ass. They will probably just round you up at some point and make you disappear. The NSA is not generally going to come after copyright infringers or script kiddies/hackers. They're more interested in the hard-core, dedicated (cyber) terrorists. I highly doubt that, unless they have indicators that you are a terrorist, they will use their information to sue you or even pass that information to another agency. The danger within the NSA, however, is when someone like Snowden, but with bad intentions, decides to leak all your private data or use it for personal gain (or any other purpose that is not in the interest of the citizens the NSA tries to "protect"). They have very limited oversight - and much less so publicly - at the moment, which greatly serves to facilitate abuse of the system. What can I do? Start by reading the article NSA: How to remain secure against surveillance written by Bruce Schneier. My personal opinion is that the NSA probably has access to tons of sensitive data and that, even when using strong crypto, they will still be able to get access to sensitive data due to backdoors they introduced in systems or because of the cooperation companies give the NSA. There are some precautions you can take:
- Use strong passwords
- Use strong cryptography (websites with SSL certificates should be verified to be running a secure, strong version of TLS)
- Use VPN/proxies/Tor (not located in the US or UK - maybe not even Europe - though even they can still be backdoored)
We also need to open up software and protocols, as Bruce Schneier said: We can make surveillance expensive again. In particular, we need open protocols, open implementations, open systems – these will be harder for the NSA to subvert. My 2 cents: It's also an illusion to think the NSA are the only ones doing something like this. It would surprise me if the Chinese and the Russians (or any other state with a large secret police budget for that matter) didn't have similar programs. For the Chinese we already have indicators (APT-1) that they are involved in practices similar to the NSA's.
Does this make it any less wrong/hypocritical of the US/UK? Probably not. As Bruce Schneier said: I am saddened to say it, but the US has proved to be an unethical steward of the internet. The UK is no better. The NSA's actions are legitimizing the internet abuses by China, Russia, Iran and others. We need to figure out new means of internet governance, ones that makes it harder for powerful tech countries to monitor everything. For example, we need to demand transparency, oversight, and accountability from our governments and corporations.
{ "source": [ "https://security.stackexchange.com/questions/41872", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3339/" ] }
41,925
I just read the article written by Bruce Schneier, the crypto guru. In the article, he says: Prefer symmetric cryptography over public-key cryptography. But he doesn't shed any light on why. Now, I thought that public-key systems avoid problems with key distribution and some man-in-the-middle attacks, which are the problems with symmetric-key cryptography, right? Can anybody please explain Bruce Schneier's view, or am I missing something?
This preference of symmetric cryptography over asymmetric cryptography is based on the idea that asymmetric cryptography uses parametrized mathematical objects, and there is a suspicion that such parameters could be specially chosen to make the system weak. For instance, when using Diffie-Hellman, DSA or ElGamal, you have to work modulo a big prime p. A randomly chosen prime p will be fine, but it is possible to select a special p which will "look fine" but will allow easy (or easier) breakage of the algorithms for whoever knows how that special p value was generated. Such primes p which make crypto weak are very rare, so you won't hit one out of bad luck (as I say, a randomly chosen prime will be fine). This means that good cryptographic parameters are nothing-up-my-sleeve numbers. If you look at FIPS 186-4 (the standard for DSA), you will see a description of a generation system for DSA parameters (namely, the big modulus p, the generator g, and the group order q) which is demonstrably non-malicious. This works simply by showing the deterministic PRNG which presided over the generation of these values; by revealing the seed, you can show that you produced the parameters faithfully, and thus did not indulge in "special prime crafting". A jibe can be made against the standard NIST elliptic curves (see FIPS 186-4 again) because these curves again have some parameters, and NIST forgot to use a deterministic PRNG as described above (actually, NIST is not at fault here; these curves were inherited from SEC, so the mistake was probably Certicom's). The Brainpool curves attempt to fix that mistake. (Note that among the 15 NIST curves, at least the 5 "Koblitz curves" cannot have been deliberately weakened since there is no random parameter in them at all.) Ole' Bruce turns all of the above into a generic anathema against asymmetric cryptography because he wanted a punchy line, not a more correct but lengthy explanation on group parameter validation, as described above. The lesson to remember is that the complete design of any cryptographic system shall be as transparent as possible, and this includes the generation of "seemingly random" parameters.
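The "revealed seed" idea can be illustrated in a few lines: derive the parameter candidate deterministically from a published seed, so anyone can re-run the derivation and confirm that no special value was slipped in. This is a simplified sketch of the principle only, not the exact FIPS 186-4 procedure; in the real standard the derived candidate is then subjected to primality testing and further checks, and the seed value below is purely hypothetical.

```python
import hashlib

def candidate_from_seed(seed: bytes, bits: int = 1024) -> int:
    """Expand a public seed into a parameter candidate (illustrative only)."""
    stream = b""
    counter = 0
    while len(stream) * 8 < bits:
        stream += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    n = int.from_bytes(stream[:bits // 8], "big")
    n |= (1 << (bits - 1)) | 1  # force the top bit and make the candidate odd
    return n

# Anyone holding the published seed can reproduce the candidate bit for bit,
# which rules out "special prime crafting".
seed = bytes.fromhex("0123456789abcdef")  # hypothetical published seed
print(hex(candidate_from_seed(seed))[:34] + "...")
```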
{ "source": [ "https://security.stackexchange.com/questions/41925", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30468/" ] }
41,939
These days, there's pretty much three forms of authentication in general use on the web: Single-factor authentication, e.g.: PIN or password. Two-factor authentication, e.g.: Single-factor plus a software- or hardware-generated token code, or a smart card. "Two-step" authentication, e.g.: Single-factor plus a code sent to the user out-of-band. Usually, the second step in two-step authentication involves the user receiving a code via e-mail or SMS and entering it alongside (or after) their pin/password on the website/app being used. The e-mail inbox or receiving phone could be considered as "something you have", thus qualifying this as two-factor authentication. However, the code that is actually used (and the credentials used to access the account/device which receives the code) in the second step is still a "something you know". So, is two-step authentication a new form of two-factor authentication? Or is it really just multi-single-factor authentication?
Two-factor authentication refers specifically and exclusively to authentication mechanisms where the two authentication elements fall under different categories with respect to "something you have", "something you are", and "something you know". A multi-step authentication scheme which requires two physical keys, or two passwords, or two forms of biometric identification is not two-factor, but the two steps may be valuable nonetheless. A good example of this is the two-step authentication required by Gmail. After providing the password you've memorized, you're required to also provide the one-time password displayed on your phone. While the phone may appear to be "something you have", from a security perspective it's still "something you know". This is because the key to the authentication isn't the device itself, but rather information stored on the device which could in theory be copied by an attacker. So, by copying both your memorized password and the OTP configuration, an attacker could successfully impersonate you without actually stealing anything physical. The point of multi-factor authentication, and the reason for the strict distinction, is that the attacker must successfully pull off two different types of theft to impersonate you: he must acquire both your knowledge and your physical device, for example. In the case of multi-step (but not multi-factor), the attacker needs only to pull off one type of theft, just multiple times. So, for example, he needs to steal two pieces of information, but no physical objects. The type of multi-step authentication provided by Google or Facebook or Twitter is still strong enough to thwart most attackers, but from a purist point of view, it technically isn't multi-factor authentication.
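The Gmail example can be made concrete with a minimal RFC 6238 TOTP sketch: the code shown on the phone is just a function of the current time and a shared secret, so anyone who copies that secret (a piece of information, not a physical object) can produce the same codes. The secret below is an arbitrary example value, not tied to any real account.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 of the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example shared secret
```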
{ "source": [ "https://security.stackexchange.com/questions/41939", "https://security.stackexchange.com", "https://security.stackexchange.com/users/953/" ] }
41,950
Suppose a server hosts https://www.master.com/ and thus is equipped with a (single-domain) SSL certificate. Furthermore, suppose there are some web apps below master.com: http://wiki.master.com/ http://docs.master.com/ http://cal.master.com/ ... To protect those web apps, one could use a wildcard certificate for master.com, which reaches one level down. This wildcard SSL certificate would thus protect connections to each of these web apps. Question: Could this wildcard certificate be issued, even though another (non-wildcard) certificate for www.master.com has already been issued? Do CAs exchange data to ensure that the domains of issued certificates do not overlap?
{ "source": [ "https://security.stackexchange.com/questions/41950", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6324/" ] }
41,988
I've been reading up on SSLstrip and I'm not 100% sure of my understanding of how it works. A lot of documentation seems to indicate that it simply replaces occurrences of "https" with "http" in traffic that it has access to. So a URL passing through, such as "https://twitter.com", would be passed on to the victim as "http://twitter.com". At this point, does SSLstrip continue to communicate with Twitter via HTTPS on our behalf? Something like this: Victim <== HTTP ==> Attacker <== HTTPS ==> Twitter Or is it just the fact that the client is now communicating with Twitter over HTTP that gives us access to the traffic? Victim <== HTTP ==> Attacker <== HTTP ==> Twitter My guess is it would be the first option, where the Attacker continues to communicate with Twitter via HTTPS as it is enforced by Twitter, but I would just like some clarification, thanks.
You should watch Moxie Marlinspike's talk Defeating SSL using SSLStrip. In short, SSLStrip is a type of MITM attack that forces a victim's browser into communicating with an adversary in plain text over HTTP, while the adversary proxies the modified content from an HTTPS server. To do this, SSLStrip "strips" https:// URLs and turns them into http:// URLs. HSTS is a proposed solution to this problem.
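So the first diagram in the question is the right one. Here is a toy illustration of the core rewrite step only (nothing like the full tool, which also has to track cookies and redirects; the function and variable names are invented): links are downgraded before reaching the victim, and stripped URLs are remembered so the upstream request can still be made over HTTPS.

```python
import re

STRIPPED = set()  # URLs the victim now expects over plain HTTP

def strip_https(html: bytes) -> bytes:
    """Downgrade links before handing the page to the victim."""
    for match in re.finditer(rb"https://[^\s\"'<>]+", html):
        STRIPPED.add(match.group(0))
    return html.replace(b"https://", b"http://")

def upgrade(url: bytes) -> bytes:
    """When the victim requests a stripped URL, talk HTTPS to the real server."""
    candidate = url.replace(b"http://", b"https://", 1)
    return candidate if candidate in STRIPPED else url

page = b'<a href="https://twitter.com/login">log in</a>'
print(strip_https(page))                     # victim sees an http:// link
print(upgrade(b"http://twitter.com/login"))  # attacker still uses https:// upstream
```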
{ "source": [ "https://security.stackexchange.com/questions/41988", "https://security.stackexchange.com", "https://security.stackexchange.com/users/28535/" ] }
42,046
When OpenSSL generates keys you'll always see a series of periods/dots ( . ) and pluses ( + ). openssl dhparam -text -noout -outform PEM -5 2048 ............+........+......................+..........................+......+......+........................+....................+..............+.......+............+..+.......................+.........+.....+.........................+..........+.......................+..........+...............+....++*++* What do they mean?
When running openssl dhparam, you will see these symbols in the output while the Diffie-Hellman parameters are being computed:
. : A potential prime number was generated.
+ : The number is being tested for primality.
* : A prime number was found.
References: the dh_cb callback function in dhparam.c (source code) and the dhparam man page.
{ "source": [ "https://security.stackexchange.com/questions/42046", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11447/" ] }
42,134
Let's say I'm setting up a new account on a website. Should I bother using a strong password? My reasoning is this: Password strength is a rough measure of how long it would take to brute-force my password from a hash. Brute-forcing even a 'weak' password is difficult via the authentication endpoint for the website - it's too slow, and limitations on the number of incorrect password attempts are commonplace. Brute-forcing even 'strong' passwords is becoming trivial when you have the password hash, and this situation will only get worse. In the situation that password hashes are leaked from a website, passwords are reset anyway. It seems to me that there's no difference between using the password fredleyyyy or 7p27mXSo4%TIMZonmAJIVaFvr5wW0%mV4KK1p6Gh at the end of the day.
Brute-forcing even 'strong' passwords is becoming trivial when you have the password hash, and this situation will only get worse. This is not true. A highly random password is nearly impossible to brute-force, given that the web application in question is using a strong hashing algorithm like bcrypt or PBKDF2. On the other hand, weak passwords are laughably easy to brute-force even if a strong hashing algorithm is used. So yes, there is merit to using a strong password.
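A back-of-the-envelope calculation shows the gap (the guess rates below are rough assumptions chosen for illustration, not benchmarks of any particular hardware):

```python
SECONDS_PER_YEAR = 3600 * 24 * 365
FAST_HASH_GUESSES_PER_SEC = 1e10   # assumed offline rate against a fast hash like MD5
BCRYPT_GUESSES_PER_SEC = 1e4       # assumed rate against bcrypt with a high work factor

def years_to_exhaust(entropy_bits: float, rate: float) -> float:
    return 2 ** entropy_bits / rate / SECONDS_PER_YEAR

for label, bits in [("weak password (~30 bits)", 30),
                    ("random 40-character password (~240 bits)", 240)]:
    print(f"{label}: "
          f"fast hash {years_to_exhaust(bits, FAST_HASH_GUESSES_PER_SEC):.2e} years, "
          f"bcrypt {years_to_exhaust(bits, BCRYPT_GUESSES_PER_SEC):.2e} years")
```

The weak password falls in well under a second against a fast hash, while the random one stays out of reach even with absurdly optimistic hardware; that asymmetry is the point of the answer.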
{ "source": [ "https://security.stackexchange.com/questions/42134", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1953/" ] }
42,154
Let's take MD5 for example: It outputs a 128-bit hash. Does it make sense (in theory) to choose an input (password) which is itself longer than 128-bit? Does it increase the probability of a collision in any way? I know that MD5 is broken, so what about more modern algorithms like bcrypt or scrypt?
If you consider a set of potential passwords of size P, with a hash function with N possible output values, then the probability that there exists at least one collision in this set is quite low when P is less than the square root of N, and quite high beyond. See the birthday problem. As the Wikipedia page says, that probability is approximately equal to 1 - e^(-P^2 / (2·N)). With figures: if you use passwords consisting of 10 characters (uppercase letters, lowercase letters and digits), then P = 62^10 = 839299365868340224; with a 128-bit hash function, N = 2^128. The formula above means that there may exist at least one colliding pair among all these passwords with probability close to 0.1%. On the other hand, if you add one character to your passwords (i.e. the potential passwords have length 11, not 10), then the probability that there exists at least one collision rises to 98.1%. Now all of this is about the probability of existence of a collision; not the probability of hitting a collision. Collisions are not relevant to password hashing. Password hashing works on preimage resistance: given the hash, how hard or easy it is to guess a corresponding password. Note that I said "a", not "the": for the attacker, it does not matter whether he finds the same password as the user did choose; he just wants a password which grants access, and any password which matches the hash output will do the trick. Note that while MD5 is "broken" for collisions, it is not so for preimages (well, for preimages it is "slightly dented", but not significantly for the purposes of this question). There are two ways to break preimage resistance:
1. Guess the password. This means trying all potential passwords until a/the right one is found. If there are P possible passwords with uniform probability, then this has an average cost of P/2, because the user did choose one of the passwords, and the attacker will need, on average, to try half of them before hitting that exact password.
2. Get lucky. Try passwords (random, consecutive... it does not matter) until a matching hash value is found. This has average cost N/2.
The password hashing strength will be no more than the lower of the two. In that sense, using a set of possible passwords which is larger than the output of the hash function (e.g. P > 2^128 for a 128-bit hash function) does not bring additional security, because beyond that point, the "get lucky" attack becomes a better bargain for the attacker than the "guess the password" attack, and the "get lucky" attack does not depend on how the user actually chooses his password. Please note that I say "size of the set of passwords" and NOT "password length". All the analysis above is based on how many password values could have been chosen, with uniform probability. If you use only 200-letter passwords, but you may only choose ten thousand of them (e.g. because each "password" is a sentence from your favourite book, and the attacker knows that book), then the size of the set of potential passwords is 10000, not 62^200. In practice, P is bounded by the user's brain (the user has to remember the password) and is invariably lower than N. A "very strong" password is a password from a selection process which uses a P of 2^80 or more; that's sufficient for security, and yet far below the 2^128 of MD5 or the 2^192 of bcrypt. But it seems unrealistic to expect average users to choose very strong passwords.
Instead, we must cope with weak passwords, with P around 2^30 or so (meaning: try one billion possible passwords, and you'll have broken the passwords of half your users). Mitigation measures are then slow hashing (make each guess expensive) and salts (don't allow the attacker to parallel-attack several passwords at reduced cost). See this answer.
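The figures above can be reproduced directly from the birthday-bound approximation:

```python
import math

def collision_probability(P: int, N: int) -> float:
    """Approximate chance that at least one collision exists among P inputs
    hashed into N possible outputs: 1 - e^(-P^2 / (2N))."""
    return 1 - math.exp(-(P * P) / (2 * N))

N = 2 ** 128  # output space of a 128-bit hash such as MD5
for length in (10, 11):
    P = 62 ** length  # uppercase letters, lowercase letters and digits
    print(f"{length}-character passwords: {collision_probability(P, N):.1%}")
# Prints roughly 0.1% for length 10 and about 98% for length 11.
```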
{ "source": [ "https://security.stackexchange.com/questions/42154", "https://security.stackexchange.com", "https://security.stackexchange.com/users/27234/" ] }
42,164
" Linus Torvalds , in response to a petition on Change.org to remove RdRand from /dev/random, has lambasted the petitioner by called him ignorant for not understanding the code in Linux Kernel. Kyle Condon from UK raised a petition on Change.org to get Linus to remove RdRand from /dev/random in a bid 'to improve the overall security of the linux kernel.' What is the problem with RdRand from /dev/random?
It's a hardware implementation that hasn't been tested formally, and it's proprietary. The potential worry is that Intel could have backdoored the implementation at the NSA's demand. The current way of mixing the rdrand output into the Linux kernel PRNG is that it's xor'ed into the pool, which mathematically means that there's no possible way for a weak output from the rdrand implementation to weaken the overall pool - it will either strengthen it or do nothing to the security. However, the real risk is that the xor instruction is backdoored in a way that detects for the use of rdrand in a special scenario, then produces a different output when xor is called, causing only the purposefully weakened rdrand output to be placed into the pool. Feasible? Yes. Plausible? Given recent revelations, maybe. If it is backdoored, is Linus complicit in it? Your guess is as good as mine. Also, there's a great paper [PDF] on hiding hardware backdoors at transistor level in CPUs. Edit, Feb 2019. User Luc commented below that things have changed since this answer was originally written: As of Linux 4.19, the kernel trusts RDRAND to seed its CSPRNG fully, unless one passes the random.trust_cpu=0 flag on boot (or sets it compile time). This should not be an issue if this is not your first boot, but newly installed systems or newly created VMs might have a predictable startup seed file (or no seed file at all), so for those systems this is relevant to gather good entropy.
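The soundness of the XOR-mixing argument is easy to see: XOR with any fixed value is a bijection on the possible pool values, so a uniformly random pool stays uniformly random no matter what the other input was. A minimal illustration (the 0x42 value is an arbitrary stand-in, not anything taken from the kernel):

# XOR-ing a possibly backdoored byte into a pool byte is a bijection on bytes,
# so if the pool byte is uniform, the mixed byte is uniform too -- whatever the
# backdoored source produced. Only sabotaging the combining step itself helps.
backdoored = 0x42                      # attacker-chosen contribution (hypothetical)
mixed = {pool ^ backdoored for pool in range(256)}
assert mixed == set(range(256))        # still all 256 possible values: no entropy lost

This is why the worry described above is specifically about the combining step being subverted, not about rdrand's output on its own.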
{ "source": [ "https://security.stackexchange.com/questions/42164", "https://security.stackexchange.com", "https://security.stackexchange.com/users/27274/" ] }
42,246
I have an issue with support for a system and I don't know what approach to take. It boils down to the question: Should I know my users' passwords so that I can check with certainty that a particular user can logon and there is not a problem with their account? If I can view my users' passwords I can logon as a particular user, but this ability to see their passwords 'feels' wrong and I'm sure that there is a better design to overcome this problem. Environment: We have a bespoke Java web application that supports users being able to log in only with the correct user id and password. A systems administrator is responsible for creating users (and also their initial password), but users can go to a settings page to change their password. Our system stores the usernames and passwords but the passwords are encrypted. A salt is also applied to the encrypted password. The goal here is that only the user knows their password. We don't want to know it and we don't want to be responsible for it. Hopefully we are following a good password strategy! Problem: Alice, a registered user, cannot login to our system for some reason. She gets in touch with support and complains that our system does not work. Often this is user error but we have to follow procedure and check that the system is running. We can test the system with our own administrator accounts and see that the system is working fine. We can also look at Alice's credentials and see that her account is not disabled or locked. But Alice insists that she has a problem and that our system is broken. Since we do not have Alice's password we cannot log on as Alice so we cannot state with 100% certainty that her account is actually working. We cannot rule out that the account has become corrupt in some manner. So should we have a system where we can log on as a particular individual (Alice) and prove that her account is working? We really don't want to be responsible for the users' passwords, but without a user's password how can we have confidence that the system is working? I also don't want to get into a situation where we are asking the users for their passwords since they (probably) share these passwords with other accounts. I need to present a strong case to management as to how we resolve this sort of situation. They are of the belief that we should know all of the users' passwords to overcome issues such as these. Advice please.
You can replace the user's password hash with a hash you DO know the password for, after saving the user's original password hash so you can put it back later, and then log in to the system with the user's credentials.
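A rough sketch of that procedure (the database, table and column names here are hypothetical, and a real schema will differ; the point is simply: back up the real hash, swap in a known one, test the login, restore):

import sqlite3

# Hypothetical schema: users(username TEXT PRIMARY KEY, password_hash TEXT).
conn = sqlite3.connect("app.db")

def swap_in_known_hash(username, known_hash):
    # Save the user's real hash so it can be restored after the test.
    row = conn.execute("SELECT password_hash FROM users WHERE username = ?",
                       (username,)).fetchone()
    original_hash = row[0]
    # Temporarily replace it with a hash whose password support does know.
    conn.execute("UPDATE users SET password_hash = ? WHERE username = ?",
                 (known_hash, username))
    conn.commit()
    return original_hash

def restore_hash(username, original_hash):
    # Put the real hash back as soon as the login test is finished.
    conn.execute("UPDATE users SET password_hash = ? WHERE username = ?",
                 (original_hash, username))
    conn.commit()

Doing this through a logged, audited admin tool rather than ad hoc SQL is obviously preferable, so that the temporary swap itself leaves a trail.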
{ "source": [ "https://security.stackexchange.com/questions/42246", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30687/" ] }
42,264
Say an ASP.NET page, or any html page for that matter, has a drop down list with a bunch of prices. On posting the page, the code looks at the selection of the drop down list for a computation. Is it possible for someone to alter the values and post the page without the server knowing the page has been tampered with? Update I have been told ASP.NET does offer some protection from this with Page.EnableEventValidation Property . With this enabled (enabled by default), trying to change the value of an ASP control will result in an error: Invalid postback or callback argument. Event validation is enabled using <pages enableEventValidation="true"/> in configuration or <%@ Page EnableEventValidation="true" %> in a page. For security purposes, this feature verifies that arguments to postback or callback events originate from the server control that originally rendered them. If the data is valid and expected, use the ClientScriptManager.RegisterForEventValidation method in order to register the postback or callback data for validation.
Dropdown lists are an HTML/UI construct. There isn't any such concept in HTTP, which is how the client and the server ultimately talk to one another. So, while yes, a client could alter the page, that isn't absolutely required, because there doesn't actually need to be a page. In the end a client simply sends an HTTP request back to the server and it contains some data, and that data could be the values entered into the HTML form, or it could be arbitrary values chosen at the user's whim. The bottom line is, you can't trust input . Anything sent by the client should be suspect, there's no guarantee that it's what you expect, and it must be validated on the server before acceptance.
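To make the point concrete, here is a hedged sketch (the URL, field names and prices are invented for illustration) of how trivially a client can submit values the dropdown never offered, and why the server must treat posted data as untrusted:

import urllib.parse, urllib.request

# Client side: the dropdown only constrains the browser's UI. Nothing stops a
# client from posting whatever it likes.
data = urllib.parse.urlencode({"productId": "42", "price": "0.01"}).encode()
request = urllib.request.Request("https://shop.example/checkout", data=data)
# urllib.request.urlopen(request)   # the server would simply receive price=0.01

# Server side: treat the posted price as untrusted and look it up yourself.
CATALOGUE = {"42": "19.99"}          # authoritative prices, hypothetical
def price_for(form):
    return CATALOGUE[form["productId"]]   # ignore any client-supplied price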
{ "source": [ "https://security.stackexchange.com/questions/42264", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30699/" ] }
42,268
I have a public key generated with ssh-keygen and I'm just wondering how I get information on the keylength with openssl?
With openssl , if your private key is in the file id_rsa , then openssl rsa -text -noout -in id_rsa will print the private key contents, and the first line of output contains the modulus size in bits. If the key is protected by a passphrase you will have to enter that passphrase, of course. If you only have the public key, then OpenSSL won't help directly. @Enigma shows the proper command line (with ssh-keygen -lf id_rsa.pub ). You can still do that with OpenSSL the following way: Open the public key file with a text editor. You will find something like this: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDDo2xko99piegEDgZCrobfFTvXUTFDbWT ch4IGk5mk0CelB5RKiCvDeK4yhDLcj8QNumaReuwNKGjAQwdENsIT1UjOdVvZOX2d41/p6J gOCD1ujjwuHWBzzQvDA5rXdQgsdsrJIfNuYr/+kIIANkGPPIheb2Ar2ccIWh9giwNHDjkXT JXTVQ5Whc0mGBU/EGdlCD6poG4EzCc0N9zk/DNSMIIZUInySaHhn2f7kmfoh5LRw7RF3c2O 5tCWIptu8u8ydIxz9q5zHxxKS+c7q4nkl9V/tVjZx8sneNZB+O79X1teq7LawiYJyLulUMi OEoiL1YH1SE1U93bUcOWvpAQ5 [email protected] Select the first characters of the middle blob ( after ssh-rsa ); this is Base64 and OpenSSL can decode that: echo "AAAAB3NzaC1yc2EAAAADAQABAAABAQDDo2xko99piegEDgZC" | openssl base64 -d | hd OpenSSL is picky, it will require that you input no more than 76 characters per line, and the number of characters must be a multiple of 4. The line above will print out this: 00000000 00 00 00 07 73 73 68 2d 72 73 61 00 00 00 03 01 |....ssh-rsa.....| 00000010 00 01 00 00 01 01 00 c3 a3 6c 64 a3 df 69 89 e8 |.........ld..i..| 00000020 04 0e 06 42 |...B| This reads as such: 00 00 00 07 The length in bytes of the next field 73 73 68 2d 72 73 61 The key type (ASCII encoding of "ssh-rsa") 00 00 00 03 The length in bytes of the public exponent 01 00 01 The public exponent (usually 65537, as here) 00 00 01 01 The length in bytes of the modulus (here, 257) 00 c3 a3... The modulus So the key has type RSA, and its modulus has length 257 bytes , except that the first byte has value "00", so the real length is 256 bytes (that first byte was added so that the value is considered positive, because the internal encoding rules call for signed integers, the first bit defining the sign). 256 bytes is 2048 bits.
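The same walk through the wire format can be automated; a small sketch that assumes a standard one-line OpenSSH ssh-rsa public key and mirrors the manual parsing above:

import base64, struct

def rsa_bits_from_openssh(pubkey_line):
    # "ssh-rsa AAAA... comment" -> size of the RSA modulus in bits
    blob = base64.b64decode(pubkey_line.split()[1])
    fields = []
    while blob:
        (length,) = struct.unpack(">I", blob[:4])   # 4-byte big-endian length prefix
        fields.append(blob[4:4 + length])
        blob = blob[4 + length:]
    key_type, exponent, modulus = fields[0], fields[1], fields[2]
    assert key_type == b"ssh-rsa"
    modulus = modulus.lstrip(b"\x00")   # drop the leading sign byte, as described above
    return len(modulus) * 8

# Example: print(rsa_bits_from_openssh(open("id_rsa.pub").read()))   # -> 2048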
{ "source": [ "https://security.stackexchange.com/questions/42268", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11447/" ] }
42,383
Given the ongoing leaks concerning mass surveillance and the fact that the NSA is the original developer of SELinux , I'm wondering whether that means that backdoors should be expected in there? As every obfuscated C contest, not least the Underhanded C Contest , shows, well-written backdoors can elude the reviewer. And just because software is FLOSS doesn't imply people always make use of the opportunity to read the code (not to mention the vast majority that wouldn't be able to comprehend it). In the case of SELinux it's not so much about crypto as the recent NIST RNG debacle , but backdoors in there would certainly provide an inroad into seemingly secure hosts. Need we be worried? And if not, why not?
Expecting backdoors is a bit strong... There are several strong arguments against the plausibility of such backdoors: Linux is used by a lot of people, including US corporations. A big part of the mandate of modern security agencies is to protect the interests of their country. In particular, the NSA shall, as much as possible, protect US corporations against spying from foreign competitors. Putting a backdoor in Linux implies the risk of allowing "bad people" (from the NSA point of view) to spy on US corporations through this backdoor. Linux is open-source and the kernel is believed to be under rather thorough scrutiny from competent programmers. This is the "many eyes" theory . SELinux is right in the middle of all this inspection. Whether the "many eyes" theory actually holds is debatable (and debated). However, there are people who do PhD theses on SELinux, so it is not preposterous to assume that this particular piece of code was thoroughly investigated. Any patch committed into the Linux kernel is followed through revision control . SELinux comes from the NSA and is tagged as such. If a backdoor was inserted and then subsequently discovered, it would be easy to track it back to the apparent author. A very basic protection measure is to not do such things in your own name ! If I were the NSA, I would first build up a virtual persona who is not associated with the NSA, so that even if he gets caught pushing backdoors, this will not incriminate my organization. Spy agencies know a lot about spy network segmentation. It would be singularly dumb of them to inject backdoors in their own name. There is also a strong argument for the existence of such backdoors: Spying on a lot of people and organizations is the core business of the NSA. Honestly, until you find the corpse (i.e. the backdoor itself), your question is unanswerable. It is a matter of many parameters which can only be know through subjective estimates... (Personally, I still find that backdoors in PRNG, especially hardware PRNG, are much more plausible than backdoors "hidden in plain sight".)
{ "source": [ "https://security.stackexchange.com/questions/42383", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1609/" ] }
42,428
Forgive my ignorance on the subject, but I wish to know more and asking (stupid) questions are one way. I was reading http://www.random.org/randomness/ and this idea popped into my head (before the bit about lava-lamps) Considering the following: Things like atmospheric properties and "real" life in general are supposedly random in the truest sense, they count as TRNGs. Computers' pseudo random number generators are not as random (hence the pseudo) and, judging by all the NSA/GHCQ revelations lately, not to be trusted. Smartphones have increasingly sensitive cameras. Smartphone photos are usually taken by hand. Would taking a photo using a smartphone and using the RAW file's bytes count as a good way to get a large random number quickly? The sensitivity and the naturally differing position would make even several photos of the same thing quite different, and photos are of the real world, making them as random as the things they point at (prior to loss introduced by the camera). If it is a good way to get a large random number, I could see that it would be an accessible, easy way for average Joe to generate a key. An added benefit would be that it could be used as a key, or a random number for a key, that the holder could recognise by sight, and yet could also deny was a key, as it's just a photo. Finally, since market forces and increasing technology mean more sensitive cameras will become more widespread, would this be one way to protect against intentional flaws being introduced? I imagine that poor camera quality would quickly be noticed and can easily be tested for (the linked article gave an example of how humans are good at testing things visually) - and it'd be news that would harm uptake of a model (happened to Apple at least once, not sure if it put people off though). Hence, market forces could work against the introduction of flaws. If this is stupid, please say why and point me to a resource to further my knowledge. If it's not, I'm going to write an app to do this.
Using a camera as random source is a good idea (not a new one, but still a good one). However, you should do it correctly: take the photo, then hash it with a cryptographic hash function , e.g. SHA-256. Then use the output as a seed for a cryptograhically secure PRNG to generate as many random bytes as you need. Using the file size will yield only very few bits of entropy from your picture: if a typical picture compresses to a size around, say, 2 MB, plus or minus 128 kB on average, then you will get at most 18 bits of entropy from the size. With a hash function, you harvest all the entropy that there is in the photo itself, up to the internal limit of the hash function (about 255 bits for a 256-bit output), which is way beyond that which is necessary for all realistic purposes. A single photo ought to contain a lot of entropy, unless the camera output is covered in some way and the picture is uniformly black. A word of warning, though: if the hash of the photo is used as a secret (and that's the case if you want to use it as seed for a PRNG to produce keys), then that photo must remain confidential: hash it, but never let it be written as a file in the Flash memory of the phone. The photo should be obtained in RAM only, hashed, and then discarded. I don't know what API for photo capture applications have in a typical phone; it seems probable that you can obtain the photo without hitting the Flash in any way, but I invite you to check.
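A minimal sketch of the hash-then-seed idea (raw_capture stands for the in-memory photo bytes obtained from whatever camera API is available; the counter-mode expansion is only illustrative, and in a real application the digest would be fed to a vetted CSPRNG instead, as described above):

import hashlib

def seed_from_photo(photo_bytes):
    # Hash the raw capture in memory; never write the photo itself to storage.
    return hashlib.sha256(photo_bytes).digest()

def expand(seed, n):
    # Illustrative counter-mode expansion of the seed into n pseudorandom bytes.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

# key = expand(seed_from_photo(raw_capture), 32)   # raw_capture: bytes straight from the camera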
{ "source": [ "https://security.stackexchange.com/questions/42428", "https://security.stackexchange.com", "https://security.stackexchange.com/users/8518/" ] }
42,498
These are things I do when users submit data: substr if extra characters found. htmlspecialchars() + ENT_QUOTES + UTF-8 str_replace '<' '>' in user input What more things need to be done?
“Sanitisation” is an unhelpful and misleading term. There are two different animals here: Output escaping. This is an output-stage concern . When you take variable strings and inject them into a larger string that has a surrounding syntax, you must process the injected string to make it conform to the requirements of that syntax. What exactly that processing is depends on the context: if you are putting text in HTML, you must HTML-escape that text at the point of making the HTML. If you are putting text in SQL queries, you must SQL-escape the text at the point of creating the query.(*) Input validation. This is an input-stage concern , making sure that user input is within the accepted possible values for a data item. This is primarily a matter of business rules, to be considered on a field-by-field basis, although there are some kinds of validation that it makes sense to do to almost all input fields (primarily checking for control characters). Input validation does have security impact in that it can mitigate the damage when you've made a mistake with your output escaping. But it is not enough to rely on input validation as your only text-handling measure because you're always going to need to allow the user to use some characters that are special in some syntax or the other. You're going to want to be able to have a web page about fish & chips and a customer in your database called O'Reilly . “Sanitisation” confuses these two concepts and encourages you to address them at the same stage, which can never work consistently. A common anti-pattern is to HTML-escape all your input. But you don't know if each input element is going to be output to HTML (and only output to HTML) at that input processing phase. If you do this: you end up with HTML-encoded material in the database, that can't be cut up and processed without the entity references getting in the way; if you need to create content from that data that isn't HTML, like send an e-mail or write some CSV, you've got ugly mangled text in it; if you get content in your database from any other source it might not be HTML-escaped and so outputting it straight to the page still gives you XSS vulnerabilities. “Sanitisation” as a concept should be destroyed by fire, then drowned, cut into little pieces and destroyed by some more fire again. (*: in both cases it is wiser to choose a method that does the processing for you implicitly so you don't get it wrong: use an HTML templating language that escapes output by default, and a data access layer that uses parameterised queries or object-relational mapping. Similarly for other kinds of escaping: prefer a standards-compliant XML serialiser to manual XML escaping, use a standard JSON serialiser to pass data to JavaScript, and so on.) substr if over limited values found. Do you mean truncating too-long input strings? That's OK as a form of input validation where your business rules have valid reason to limit the length of an input. But you might prefer returning an error to the user if you have a too-long input string, as depending on what field it is it might not be appropriate to quietly discard data. htmlspecialchars() + ent_quotes + UTF-8 This is output escaping. Do it on the values at the point you drop them into HTML, not on input. If you are using native PHP templating you may like to define yourself a shortcut to make it quicker to type, for example: function h($s) { echo htmlspecialchars($s, ENT_QUOTES, 'UTF-8'); } ... 
<p>Hello, <?php h($user['name']); ?>!</p> str_replace < > users input What for? If you are HTML-escaping correctly, these characters are perfectly fine, and unless your business rules says otherwise may be quite valid to include in a field—just as both characters are valid for me to type in this comment box for SO. Of course you may want to disallow them in input validation for specific fields—you wouldn't want them in a phone number or email address.
{ "source": [ "https://security.stackexchange.com/questions/42498", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30886/" ] }
42,500
In Information and IT Security there is a nasty tendency for specific "best practices" to become inviolable golden rules, which then leads to people recommending that they are applied regardless of whether they are appropriate for a given situation (similar to Cargo Cult Programming ) A good example of this is the common approach to password policies which applies a one-size fits all 8-character length requirement combined with high complexity requirements, 12 previous passwords stored in a history to stop re-use, 3 incorrect attempt lockout and 30 day rotation. The 30 day rotation is intended to lower the window of opportunity for an attacker to use a stolen password, however it is likely to lead users to use sequence passwords meaning that if an attacker can crack one instance they can easily work out others, actually reversing the intended security benefit. The high length and complexity requirements are intended to stop brute-force attacks. Online brute-force attacks are better mitigated with a combination of sensible lockout policies and intrusion detection, offline brute-force usually occurs when an attacker has compromised the database containing the passwords and is better mitigated by using a good storage mechanism (e.g. bcrypt, PBKDF2) also an unintended side effect is that it will lead to users finding one pattern which works and also increases the risk of the users writing the password down. The 3 incorrect lockout policy is intended to stop online brute-force attacks, but setting it too low increases account lockouts and overloads helpdesks and also places a risk of Denial of service (many online systems have easily guessed username structures like firstname.lastname, so it's easy to lock users out) What are other examples of Cargo-Cult security which commonly get applied inappropriately?
Closed source is more secure than open-source as attackers can view the source code and find and exploit vulnerabilities. While I'm not claiming this is always false, with open source software it's at least possible for outside experts to review the software looking for gaping vulnerabilities/backdoors and then publicly patch them. With closed source software that simply isn't possible without painstakingly disassembling the binary. And while you and most attackers may not have access to the source code, there likely exist powerful attackers (e.g., US gov't) who may be able to obtain the source code or inject secret vulnerabilities into it. Sending data over a network is secret if you encrypt the data . Encryption needs to be authenticated to prevent an attacker from altering your data. You need to verify the identity of the other party you are sending information to or else a man-in-the-middle can intercept and alter your traffic. Even with authentication and identification, encryption often leaks information. You talk to a server over HTTPS? Network eavesdroppers (anyone at your ISP) know exactly how much traffic you sent, to what IP address, and what the size of each of the responses (e.g., you can fingerprint various webpages based on the size of all the resources transferred). Furthermore, especially with AJAX web sites, the information you type in often leads to a server response that's identifiable by its traffic patterns. See Side-Channel Leaks in Web Applications . Weak Password Reset Questions - How was Sarah Palin's email hacked ? A person went through the password reset procedure and could answer every question correctly from publicly available information. What password reset questions would a facebook acquaintance be able to figure out? System X is unbreakable -- it uses 256-bit AES encryption and would take a billion ordinary computers a million billion billion billion billion billion years to likely crack. Yes, it can't be brute-forced as that would require ~2^256 operations. But the password could be reused or in a dictionary of common passwords. Or you stuck a keylogger on the computer. Or you threatened someone with a $5 wrench and they told you the password . Side-channel attacks exist. Maybe the random number generator was flawed . Timing attacks exist. Social engineering attacks exist. These are generally the weakest links. This weak practice is good enough for us, we don't have time to wait to do things securely. The US government doesn't need to worry about encrypting the video feeds from their drones - who will know to listen to the right carrier frequencies; besides encryption boxes will be heavy and costly - why bother? Quantum Computers can quickly solve exponential time problems and will break all encryption methods . People read popular science articles on quantum computers and hear they are these mystical super-powerful entities that will harness the computing power of a near infinite number of parallel universes to do anything. It's only part true. Quantum computers will allow factoring and discrete logarithms to be done in polynomial time O(n^3) via Shor's algorithm rendering RSA, DSA, and encryption based on those trap-door functions easily breakable.
Similarly, quantum computers can use Grover's algorithm to brute force a password that should take O(2^N) time in only O(2^(N/2)) time; effectively halving the security of a symmetric key -- Granted Grover's algorithm is known to be asymptotically optimal for quantum computers, so don't expect further improvement.
{ "source": [ "https://security.stackexchange.com/questions/42500", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37/" ] }
42,555
Sometimes it happens that I have to access my email or some other web service from public machines. Are there any options to make it secure or at least reduce risks to acceptable levels? I am most worried about keyloggers. But even if I can copy-paste the password from somewhere there are still tools to intercept that. So there is basically no security by definition? One measure I can think of is two-factor authentication, but it is not always available (for every service). Another possible measure is to have a server/machine somewhere and access it using some sort of remote access, but this requires considerable resources.
The usual recommendations about accessing private data from a public computer are: Don't do it. Don't do it; instead, use a smartphone or tablet. If you really must do it (there is no other choice, it would be life-threatening not to read your email), then the following mitigations can be used: Use two-factor authentication (e.g. with Gmail ), the second factor using your phone (presumed safe). Use One-Time Passwords . Change your password from a "safe" computer as soon as possible after the access. The first two mitigations suppose that you can enable these extra authentication methods on your Webmail, and that you have some extra device with you (a phone to send or receive a password by SMS, a generator for one-time passwords, a printed list of your next 20 one-time passwords...). The third mitigation depends on how soon you may access a safe computer, but, of course, if you can use a safe computer within the next five minutes, then why not read your emails from that safe computer? In any case, whatever you do, you are betting that possible attackers won't hijack your connections automatically; instead, that the attackers passively gather passwords and won't do anything else immediately, leaving you with some minutes or hours to "plug holes". There are some attackers who act that way; but there also are other attackers who recognize connections to various Webmail systems and automatically hijack them. So, really, don't do it .
{ "source": [ "https://security.stackexchange.com/questions/42555", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30941/" ] }
42,586
Preface My mobile app allows users to create accounts on my service. In addition to being able to log in with external authentication providers, like Facebook, I want to give the user the option to create an account using an e-mail address. Normally, all calls to my web service are authenticated via basic authentication over HTTPS. However, the create account function (also over HTTPS) does not have any authentication - since the user does not yet have any credentials. If I was writing a website, I would use Captcha to stop my database from being filled up with bogus accounts via script. Question How can I verify that new user requests are coming from an instance of my application and not from a bot? If all data is sent over HTTPS, is it sufficient for the application to have a stored password to say "hey, it's me!"? What are the best practices for doing something like this? Elaboration The server is written in Java using the Spring Framework and Spring Security. It is hosted on App Engine. Cost is a concern (both network and computation). The app is a mobile game. I do not store sensitive information like credit card numbers. However, I do keep track of user purchases on the Apple and Android stores. My biggest concern is player experience. I don't want a hacker bringing down the system and ruining someone's enjoyment of the game. I also, need to make sure that the player faces as few obstacles as possible when creating an account. Update/Clarification I am looking for a way to ensure all calls to the service are coming from an instance of my application. User accounts are already protected because the stateless service requires that they send their credentials on every request. There are no sessions and no cookies. I need to stop bot-spam on the unsecured calls, such as create-new-account. I cannot use captcha because it does not fit into the flow of the application.
The bottom line is that you will need to embed a secret into your app. It is an unfortunate truth that DRM (which is more or less what you are trying to achieve) is impossible. A person with access to your app will always be able to recover the embedded secret, no matter what you do to protect it. That said, there are plenty of things you can do to make your embedded secret very difficult to recover. Construct it at Runtime - Do not store the secret in a string or configuration file anywhere in your app. Instead derive it from a series of computations at runtime. This stops attackers from simply browsing your app with a hexeditor. Never Send it Over the Wire - Use a challenge-response system of some kind with a >128bit nonce, that way an attacker cannot MitM the SSL stream (which is easy when he controls the mobile device remember) and see the secret in the clear. In any case, try to find a tried and tested key-scrambling mechanism and authentication protocol. Do not roll your own.
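One shape such a challenge-response could take, sketched for the server and client sides (the names and the exact flow are illustrative, not a drop-in protocol; and, as noted above, the embedded secret is still recoverable by anyone who pulls the app apart):

import hashlib, hmac, secrets

APP_SECRET = b"derive me at runtime, never store me as a literal"   # placeholder

# Server: issue a fresh nonce (>= 128 bits) per signup attempt and remember it briefly.
def make_challenge():
    return secrets.token_bytes(16)

# Client (inside the app): prove knowledge of the secret without ever sending it.
def answer_challenge(nonce):
    return hmac.new(APP_SECRET, nonce, hashlib.sha256).digest()

# Server: recompute the expected answer and compare in constant time.
def verify(nonce, response):
    expected = hmac.new(APP_SECRET, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)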
{ "source": [ "https://security.stackexchange.com/questions/42586", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30962/" ] }
42,752
Can't see how they find this over the web. I'm on Windows, its in my network config. It changes when I use a VPN provider, but not when I use a local socks proxy connected to a remote server over SSH (ProxyCap). None of the publicly available sites are finding my correct IP, but leaktest finds my ISP DNS. I imagine the protection is in using a VPN that is creating a virtual network adapter and therefore setting DNS providers there, but how is the website detecting my provider?
From the source of https://www.dnsleaktest.com/ :
<iframe style="display:none" src="https://1segRNWUwPK0Y21Bm1M0.dnsleaktest.com/"></iframe>
<iframe style="display:none" src="https://ldJT4mFLnijeQDBhQX2D.dnsleaktest.com/"></iframe>
<iframe style="display:none" src="https://nC4B4vChnPXPshinJoyw.dnsleaktest.com/"></iframe>
That site generates a random host name and arranges for your browser to request content from that host name. Since the host name has never been served to anyone else before, your browser's request to resolve it is sure to make its way to the primary server for the dnsleaktest.com domain. Since they run their own DNS, whoever they receive the DNS requests from is your DNS provider. Whether it matters that your DNS traffic goes through a tunnel instead of via your ISP depends on who you want privacy from. If you don't want the sites you visit to know where you're from, it's a mild leak: most sites don't bother to track where your DNS requests come from, and anyway they'd have a hard time figuring out which ones are yours, and your requests are likely to be served by some cache anyway. However any site could use the same trick as dnsleaktest if they cared. If you're using a tunnel because you don't want your ISP to know which sites you're visiting, making sure that no DNS request ever reaches your ISP is crucial.
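For illustration, roughly how such a test page could be generated (a sketch, not the site's actual code; the 20-character labels simply mimic the ones quoted above):

import secrets, string

ALPHABET = string.ascii_letters + string.digits

def leak_test_iframes(count=3, domain="dnsleaktest.com"):
    # Each randomly generated host name has never been resolved before, so the
    # lookup must travel all the way to the domain's own authoritative DNS,
    # which then sees the address of whatever resolver the visitor is using.
    frames = []
    for _ in range(count):
        label = "".join(secrets.choice(ALPHABET) for _ in range(20))
        frames.append('<iframe style="display:none" src="https://%s.%s/"></iframe>'
                      % (label, domain))
    return "\n".join(frames)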
{ "source": [ "https://security.stackexchange.com/questions/42752", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31104/" ] }
42,779
In this iteration of the OWASP top 10 application security vulnerabilities list ( https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project ), a new category 'A9 Using Components with Known Vulnerabilities' has been introduced. This appears to require the investigation of all libraries and imported code in any application to ensure compliance. I have a number of clients who, because of their PCI-DSS audit requirements, use the OWASP top 10 to ensure the security of their own software platforms for those portions of their code base written to process credit card payments. With this new set of requirements it would appear that they would have to find/list all of their imported libraries (Perl modules from CPAN in one instance, and Java libs in another) and go through them line by line - probably a million lines of someone else's code!. This can't be practical or, probably, very useful! Can OWASP seriously be suggesting that those organisations that write their own applications, importing common libraries, have to review all third-party library code? Has anyone else come across this problem, and how do you think I can deal with this?
In a formal review of an application's security, all libraries should be vetted for security defects. However, this is not the point of OWASP-2013 A9. The core of OWASP-2013 A9 is about having policies in place to ensure that an application isn't compromised due to negligence. OWASP states the following: 1. Identify all components and the versions you are using, including all dependencies. (e.g., the versions plugin). 2. Monitor the security of these components in public databases, project mailing lists, and security mailing lists, and keep them up to date. 3. Establish security policies governing component use, such as requiring certain software development practices, passing security tests, and acceptable licenses. 4. Where appropriate, consider adding security wrappers around components to disable unused functionality and/or secure weak or vulnerable aspects of the component. Number 2 is the most important. If you are dependent on a library or platform, these components need to be updated regularly. Internally there should be a cycle to review all components and versions, and ensure that these are fully updated. A monthly cycle to review these components would be ideal. In short, number 4 requires strong validation of input passed to untrusted libraries. If a library hasn't been fully tested for security defects, then data passed to this library must be validated. It is a very good security practice to do this for all input. An example of this is using an OWASP ESAPI validation routine for all input. So if it is an email address, it should match a regex for email addresses.
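A hedged sketch of what points 1 and 2 can look like in practice; everything here is hypothetical (a Python requirements file is used purely as an example of a component inventory, and KNOWN_BAD stands in for whatever advisory feed or database you actually monitor):

# Hypothetical component inventory check: compare pinned versions against an
# internally maintained list of known-vulnerable releases.
KNOWN_BAD = {("example-lib", "1.2.3"): "CVE-XXXX-YYYY (placeholder advisory)"}

def audit(requirements_path="requirements.txt"):
    findings = []
    for line in open(requirements_path):
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue                      # only exact pins can be checked
        name, version = line.split("==", 1)
        advisory = KNOWN_BAD.get((name.lower(), version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

The same idea applies to CPAN modules or Java dependencies; the essential part is having a complete, versioned inventory that is checked against advisories on a regular cycle.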
{ "source": [ "https://security.stackexchange.com/questions/42779", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30713/" ] }
42,952
I'd like to generate a bunch of keys for long term storage on my MacBook. What's a good way to: measure the amount of entropy and ensure it is sufficient before each key is generated, and increase the entropy if needed? Something similar to Linux's /dev/urandom and /dev/random …
First, note that on OS X, /dev/random and /dev/urandom have the same behavior; /dev/urandom is provided as a compatibility measure with Linux. So, while you will see much advice online to use /dev/random because it will block if insufficient entropy is available, that is not the case under OS X. /dev/random uses the Yarrow CSPRNG (developed by Schneier, Kelsey, and Ferguson). According to the random(4) manpage, the CSPRNG is regularly fed entropy by the 'SecurityServer' daemon. Unlike Linux, reading values from /dev/urandom does not exhaust an entropy "pool" and then fall back on a CSPRNG. Instead, Yarrow has a long-term state and is reseeded regularly from entropy collected in two different pools (a fast and slow pool), and reading from /dev/random pulls directly from the CSPRNG. Still, the random(4) manpage notes: If the SecurityServer system daemon fails for any reason, output quality will suffer over time without any explicit indication from the random device itself. which makes one feel very unsafe. Furthermore, it does not appear that OS X exposes a Linux-style /proc/sys/kernel/random/entropy_avail interface, so there is no way to measure how much entropy the SecurityServer daemon has fed Yarrow, nor does there appear to be a way to obtain the current size of the entropy pools. That being said, if you are concerned about the amount of entropy currently available, you can write directly to /dev/random to feed the CSPRNG more data, e.g. echo 'hello' >/dev/random (of course, hopefully you would use actual good, random data here). Truth be told, though, the default behavior probably suffices; but if you're feeling paranoid, use a Linux live distro, maybe with a hardware RNG attached, and generate your keys from /dev/random .
{ "source": [ "https://security.stackexchange.com/questions/42952", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4264/" ] }
42,990
Maybe you could help me with a small problem. Would you recommend IPSec or TLS for a Server-to-Server-Connection? I need two or three arguments to justify a decision within my final paper, but sadly I didn't find a criterion for excluding one. The reflection of IPSec and TLS within my thesis is only a minor aspect, therefore, one or two main reasons for selecting TLS or IPSec would be enough. Do you think the possible compression within IPSec could be one of these main reasons? More complete explanation of the scenario: A user wants to log in on a portal to make use of a service. The portal backend consists of the portal itself, a registration authority, user management, access control authority etc. Should I choose TLS or IPSec to secure those backend connections? The delivered service does not have to be positioned within the backend, but it is reliable and is only connected with the portal. Here's a list with the connections I want to secure:
service (may be an external one) ---- portal
registration authority ---- user management
portal ---- user management
For a user-server-connection I've already made my decision, but the reasons I've used for this decision don't work for a server-to-server-connection. Thanks a lot in advance! A picture of the scenario: (red = problematic connection) (in fact the model is by far larger, since I also treat other mechanisms like authentication, storage-mechanisms, storage access, access control, software-architecture, including possible technologies and so on. Therefore, securing the connections is only a partial aspect)
There are two main usage modes for IPsec : AH and ESP . AH is only for authentication, so I suppose that you are talking about an ESP tunnel between the two servers. All IP packets get encrypted and authenticated, including some header details such as the source and target ports. There are several encryption and MAC algorithms which can be used with IPsec; AES (CBC mode, 128-bit key) and HMAC/SHA-1 (truncated to 96 bits) are fine, and are MUST-implement , so any conforming IPsec implementation should support them. For this "tunnelling" part, IPsec does things correctly, and so does TLS (assuming TLS 1.1 or 1.2, for IV selection with block ciphers in CBC mode). In fact IPsec can be deemed to be "more correct" than TLS because it uses encrypt-then-MAC instead of MAC-then-encrypt (see this ); however, properly implemented TLS 1.1 or 1.2 will be fine too. Practical differences will occur at other levels: To encrypt and decrypt data, your two servers must first agree on a shared secret. This is what the handshake in TLS is about; the equivalent in IPsec would be IKE . You might want to rely on X.509 certificates ; or maybe on a "shared key" manually configured in both servers. Both TLS and IPsec support both, but any specific implementation of either may make one option easier or more complex than the other. IPsec acts at the OS level; application software needs not be aware of the presence of IPsec. This is great in some cases, not so great in others. On the one hand, it allows some legacy applications to be "secured" with IPsec even if this was not envisioned during application development; and the security management can be done without being constrained by what the application developers thought of (for instance, if using X.509 certificates, you can enforce revocation checks, whereas a TLS-powered application might use a library which does not support CRL). On the other hand, with IPsec, applications cannot do anything application-specific with security: a server may not, for instance, vary its security management depending on which other server is talking to it. Though TLS is normally used applicatively, it is also possible to use it as building block for an OS-level VPN between the two servers. This basically means using TLS as if it was IPsec. IPsec being OS-level, its operation implies fiddling with OS-level network configuration, whereas TLS can usually be kept at application level. In big organizations, Network people and application people tend to live in separate worlds (and, sadly, are often at war with each other). Keeping everything at the application layer may make the deployment and management easier, for organizational reasons. Summary: it depends. Use IPsec if you want to apply it on uncooperative applications (e.g. an application which does not natively support any kind of protection for its connections to other servers). Use TLS if you want to abstract away OS-level configuration. If you still want to use TLS and yet do so at OS level, then use a TLS-powered VPN.
{ "source": [ "https://security.stackexchange.com/questions/42990", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31270/" ] }
43,205
I see a lot of these messages in /var/log/messages of my Linux server:
kernel: nf_conntrack: table full, dropping packet.
kernel: __ratelimit: 15812 callbacks suppressed
This happens while my server is under a DoS attack, but the memory is still not saturated. I am wondering what the significance of the message is and how to counter possible security implications.
The message means your connection tracking table is full. There are no security implications other than DoS. You can partially mitigate this by increasing the maximum number of connections being tracked, reducing the tracking timeouts or by disabling connection tracking altogether, which is doable on a server, but not on a NAT router, because the latter will cease to function.
sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=54000
sysctl -w net.netfilter.nf_conntrack_generic_timeout=120
sysctl -w net.ipv4.netfilter.ip_conntrack_max=<more than currently set>
{ "source": [ "https://security.stackexchange.com/questions/43205", "https://security.stackexchange.com", "https://security.stackexchange.com/users/22605/" ] }
43,266
According to the official complaint , on Page 28 they mention: on May 24, 2013, a Silk Road user sent him a private message warning him that "some sort of external IP address" was "leaking" from the site, and listed IP address of the VPN Server. What does it mean that an IP Address "was leaking" and when would this occur?
It is an information leak on the Silk Road server. It appears somebody located a debug or info screen on the Silk Road server that dumped configuration and environment variables. Some possibilities: The output of Apache's mod_status ( example ) Output of phpinfo() ( example ) A custom debug page that is part of the Silk Road application It could have been found by checking known locations of status and debug pages or checking common locations (eg. /phpinfo.php ). It looks like the debug output contained the servers public IP address and the IP address of another server that was being used by the admin as a VPN proxy to administer the Silk Road site (the IP was stored because it was being used to whitelist). In March of 2013, less than a few months prior to when the FBI took a snapshot of the Silk Road server, a blogger posted a warning to Silk Road users that the server contained a misconfiguration that revealed the servers IP address and other information: WARNIG TO SILK ROAD USERS: SR is leaking their public IP address It is possible that this is the same person who messaged and warned Dread Pirate Roberts about the info leak. From a related thread on reddit that user says: Last night, while SR was down for maintenance, a brief few moments allowed a certain set of circumstances that caused me to be able to view the public IP of the httpd server of Silk Road. This isn't an obvious flaw, but it is extremely simple if you know where to look - the server basically will publish a page containing all of the configuration data of the httpd server including the public IP address. While we can't know yet if this is the same user or the same bug it is indicative of potential problems with information leaks on the Silk Road server. The FBI could have later discovered the same information and used it to trace back to the VPN server. What we do know is that the Silk Road server had a public IP address, which was used by Dread Pirate Roberts to administer the server. Accessing this public interface using a VPN server (which in turn revealed his real IP address) was a good part of the case against the accused. Most important, this leak had nothing to do with Tor or a flaw in Tor, it was all down to how the site was hosted and how it was accessed for administration. Edit: I have since been able to confirm this theory I found the following thread on /r/SilkRoad, it was posted 5 months ago : Should we be worried? Showing on login page I have removed the IPs Array ( [USER] => X [HOME] => X [FCGI_ROLE] => RESPONDER [QUERY_STRING] => [REQUEST_METHOD] => GET [CONTENT_TYPE] => [CONTENT_LENGTH] => [SCRIPT_FILENAME] => X [SCRIPT_NAME] => /index.php [REQUEST_URI] => / [DOCUMENT_URI] => /index.php [DOCUMENT_ROOT] => X[SERVER_PROTOCOL] => HTTP/1.0 [GATEWAY_INTERFACE] => CGI/1.1 [SERVER_SOFTWARE] => X[REMOTE_ADDR] => xx.xx.xx.xx.xx [REMOTE_PORT] => X [SERVER_ADDR] => xx.xx.xx.xx.xx [SERVER_PORT] => 443 [SERVER_NAME] => _ [HTTPS] => on [REDIRECT_STATUS] => 200 [HTTP_HOST] => xx.xx.xx.xx.xx [HTTP_CONNECTION] => close [HTTP_USER_AGENT] => Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20100101 Firefox/17.0 [HTTP_ACCEPT] => text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8 [HTTP_ACCEPT_LANGUAGE] => en-us,en;q=0.5 [HTTP_ACCEPT_ENCODING] => gzip, deflate [HTTP_COOKIE] => session=X [PHP_SELF] => /index.php [REQUEST_TIME] => 1367610063 ) The above environment variables were being dumped in the source of the login page on Silk Road. They contained the real IP address of the server. 
Edit: September 2014 - Confirmed by the FBI In a filing during Ross Ulbricht's trial and in response to a defense motion the FBI revealed how they located the Silk Road server: they found a misconfiguration in an element of the Silk Road login page, which revealed its Internet Protocol (IP) address and thus its physical location. As they typed “miscellaneous” strings of characters into the login page’s entry fields, Tarbell writes that they noticed an IP address associated with some data returned by the site didn’t match any known Tor “nodes,” the computers that bounce information through Tor’s anonymity network to obscure its true source. And when they entered that IP address directly into a browser, the Silk Road’s CAPTCHA prompt appeared
{ "source": [ "https://security.stackexchange.com/questions/43266", "https://security.stackexchange.com", "https://security.stackexchange.com/users/396/" ] }
43,321
According to the Popular Mechanics article RFID Credit Cards and Theft: Tech Clinic , many new credit/debit cards have an RFID chip embedded in them, and there is a risk (albeit a small one, according to the article) that the card could be 'skimmed' - from the article: RFID cards do have a unique vulnerability. "Your card can be read surreptitiously. Unless you were paying attention to the guy behind you with a reader, you'd never know you were being skimmed." Now, even though the risk is low, there is always a chance. With that in mind, a friend bought me a wallet - a Stainless Steel RFID Blocking Wallet to be precise, that claims to prevent 'accidental' reading of your information. I still have this wallet (it is rather nice looking), so my question is really two-fold: Can a steel woven wallet prevent RFID scanning of credit card information? and Is there a practical way I can test this myself? (Note: I have no affiliation with anything to do with the manufacture or sale of these wallets)
Any Faraday cage will do the trick. So a shielding of just about anything conductive, be it aluminum foil, conductive paint, wire mesh, or any of a number of similar alternatives is going to be opaque to radiation. That means no radio waves in or out, which means the RFID signal is blocked. Note that the size of the mesh has to be significantly smaller than the wavelength in question; RFID specs are mostly in the MHz range but go as high as 2.4GHz, which is about a 10cm wavelength. So your mesh should be just fine. But aluminum foil is cheaper.
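The wavelength figure is easy to check; a small sketch (the 13.56 MHz value is an assumption on my part, being the usual frequency for contactless payment cards, while the 2.4 GHz figure comes from the answer above):

# Wavelength = c / f; the mesh openings must be much smaller than this.
c = 299_792_458                       # speed of light, m/s
for label, f in [("13.56 MHz (typical contactless card)", 13.56e6),
                 ("2.4 GHz (upper end of RFID specs)", 2.4e9)]:
    print(label, "->", round(c / f, 3), "m")
# Roughly 22.1 m and 0.125 m respectively, so a woven steel wallet is far finer than needed.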
{ "source": [ "https://security.stackexchange.com/questions/43321", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
43,369
It has been known in the security community that a tool as versatile as Tor is likely the target of intense interest from intelligence agencies. While the FBI has admitted responsibility for a Tor malware attack, the involvement of SIGINT organizations has not been confirmed. Any doubt was removed in early October, 2013 with The Guardian's release of " Tor Stinks ," an NSA presentation (vintage June 2012) outlining current and proposed strategies for exploiting the network. Some salient points: Fundamentally, Tor is secure ...however, de-anonymization is possible in certain circumstances "Dumb users" will always be vulnerable (designated internally as "EPICFAIL") NSA/GCHQ operate Tor nodes Traffic analysis, in various forms, appears to be the tool of choice After reviewing the literature, what changes should Tor users implement to ensure -- to the greatest degree technically feasible -- their continuing security?
As a very long time Tor user, the most surprising part of the NSA documents for me was how little progress they have made against Tor . Despite its known weaknesses, it's still the best thing we have, provided it's used properly and you make no mistakes. Since you want security of "the greatest degree technically feasible", I'm going to assume that your threat is a well-funded government with significant visibility or control of the Internet, as it is for many Tor users (despite the warnings that Tor alone is not sufficient to protect you from such an actor ). Consider whether you truly need this level of protection. If having your activity discovered does not put your life or liberty at risk, then you probably do not need to go to all of this trouble. But if it does, then you absolutely must be vigilant if you wish to remain alive and free. I won't repeat Tor Project's own warnings here, but I will note that they are only a beginning, and are not adequate to protect you from such threats. When it comes to advanced persistent threats such as state actors, you are almost certainly not paranoid enough . Your Computer To date the NSA's and FBI's primary attacks on Tor have been MITM attacks (NSA) and hidden service web server compromises and malware delivery (FBI) which either sent tracking data from the Tor user's computer, compromised it, or both. Thus you need a reasonably secure system from which you can use Tor and reduce your risk of being tracked or compromised. Don't use Windows. Just don't. This also means don't use the Tor Browser Bundle on Windows. Vulnerabilities in the software in TBB figure prominently in both the NSA slides and FBI's recent takedown of Freedom Hosting. It has also been shown that malicious Tor exit nodes are binary patching unsigned Windows packages in order to distribute malware. Whatever operating system you use, install only signed packages obtained over a secure connection. If you can't construct your own workstation capable of running Linux and carefully configured to run the latest available versions of Tor, a proxy such as Privoxy, and the Tor Browser, with all outgoing clearnet access firewalled , consider using Tails or Whonix instead, where most of this work is done for you. It's absolutely critical that outgoing access be firewalled so that third party applications or malware cannot accidentally or intentionally leak data about your location. If you must use something other than Tails or Whonix, then only use the Tor Browser (and only for as long as it takes to download one of the above). Other browsers can leak your actual IP address even when using Tor, through various methods which the Tor Browser disables. If you are using persistent storage of any kind, ensure that it is encrypted. Current versions of LUKS are reasonably safe, and major Linux distributions will offer to set it up for you during their installation. TrueCrypt is not currently known to be safe. BitLocker might be safe, though you still shouldn't be running Windows. Even if you are in a country where rubber hosing is legal, such as the UK, encrypting your data protects you from a variety of other threats. Remember that your computer must be kept up to date. Whether you use Tails or build your own workstation from scratch or with Whonix, update frequently to ensure you are protected from the latest security vulnerabilities. Ideally you should update each time you begin a session, or at least daily. Tails will notify you at startup if an update is available. 
Be very reluctant to compromise on JavaScript, Flash and Java. Disable them all by default. The FBI has used tools which exploit all three in order to identify Tor users. If a site requires any of these, visit somewhere else. Enable scripting only as a last resort, only temporarily, and only to the minimum extent necessary to gain functionality of a web site that you have no alternative for. Viciously drop cookies and local data that sites send you. Neither TBB nor Tails do this well enough for my tastes; consider using an addon such as Self-Destructing Cookies to keep your cookies to a minimum. Of zero. Your workstation must be a laptop; it must be portable enough to be carried with you and quickly disposed of or destroyed. Don't use Google to search the Internet. A good alternative is Startpage ; this is the default search engine for TBB, Tails and Whonix. Another good option is DuckDuckGo . Your Environment Tor contains weaknesses which can only be mitigated through actions in the physical world. An attacker who can view both your local Internet connection, and the connection of the site you are visiting, can use statistical analysis to correlate them. Never use Tor from home, or near home. Never work on anything sensitive enough to require Tor from home, even if you remain offline. Computers have a funny habit of liking to be connected... This also applies to anywhere you are staying temporarily, such as a hotel. Never performing these activities at home helps to ensure that they cannot be tied to those locations. (Note that this applies to people facing advanced persistent threats. Running Tor from home is reasonable and useful for others, especially people who aren't doing anything themselves but wish to help by running an exit node , relay or bridge .) Limit the amount of time you spend using Tor at any single location. While these correlation attacks do take some time, they can in theory be completed in as little as a day. (And if you are already under surveillance, it can be done instantly; this is done to confirm or refute that a person under suspicion is the right person.) While the jackboots are very unlikely to show up the same day you fire up Tor at Starbucks, they might show up the next day. I recommend for the truly concerned to never use Tor more than 24 hours at any single physical location; after that, consider it burned and go elsewhere. This will help you even if the jackboots show up six months later; it's much easier to remember a regular customer than someone who showed up one day and never came back. This does mean you will have to travel farther afield, especially if you don't live in a large city, but it will help to preserve your ability to travel freely. Avoid being electronically tracked. Pay cash for fuel for your car or for public transit. For instance, on the London Underground, use a separate Travelcard purchased with cash instead of your regular Oyster card or contactless payment. Pay cash for everything else, too; avoid using your normal credit and debit cards, even at ATMs. If you need cash when going out, use an ATM close to home that you already frequently use. If you drive, try to avoid number plate readers by avoiding major bridges, tunnels, motorways, toll roads and primary arterial roads and traveling on secondary roads. If the information is publicly available, learn where these readers are installed in your area. When you go out to perform these activities, leave your mobile phone turned on and at home. 
If you need to make and receive phone calls, purchase an anonymous prepaid phone for the purpose. This is difficult in some countries, but it can be done if you are creative enough. Pay cash; never use a debit or credit card to buy the phone or top-ups. Never insert its battery or turn it on if you are within 10 miles (16 km) of your home, nor use a phone from which the battery cannot be removed. Never place a SIM card previously used in one phone into another phone, as this will irrevocably link the phones. Never give its number or even admit its existence to anyone who knows you by your real identity. This may need to include your family members. Your Mindset Many Tor users get caught because they made a mistake, such as using their real email address in association with their activities, or allowing a hostile adversary to reach a high level of trust. You must avoid this as much as possible, and the only way to do so is with careful mental discipline. Think of your Tor activity as pseudonymous , and create in your mind a virtual identity to correspond with the activity. This virtual person does not know you and will never meet you, and wouldn't even like you if he knew you. He must be kept strictly mentally separated. Consider using multiple pseudonyms, but if you do, you must be extraordinarily vigilant to ensure that you do not reveal details which could correlate them . If you must use public Internet services, create completely new accounts for this pseudonym. Never mix them; for instance do not browse Facebook with your real email address after having used Twitter with your pseudonym's email on the same computer. Wait until you get home. By the same token, never perform actions related to your pseudonymous activity without using Tor, unless you have no other choice (e.g. to sign up for a service which blocks signup via Tor). Take extra precautions regarding your identity and location if you must do this. Hidden Services These have been big in the news, with the takedown of high-profile hidden services such as Silk Road and Freedom Hosting in 2013, and Silk Road 2.0 and dozens of other services in 2014. The bad news is, hidden services are much weaker than they could or should be . The Tor Project has not been able to devote much development to hidden services due to the lack of funding and developer interest (if you're able to do so, consider contributing in one of these ways). Further, it is suspected that the FBI is using traffic confirmation attacks to locate hidden services en masse , and an early 2014 attack on the Tor network was actually an FBI operation . The good news is, the NSA doesn't seem to have done much with them (though the NSA slides mention a GCHQ program named ONIONBREATH which focuses on hidden services, nothing else is yet known about it). Since hidden services must often run under someone else's physical control, they are vulnerable to being compromised via that other party. Thus it's even more important to protect the anonymity of the service, as once it is compromised in this manner, it's pretty much game over. The advice given above is sufficient if you are merely visiting a hidden service. If you need to run a hidden service, do all of the above, and in addition do the following. Note that these tasks require an experienced system administrator who is also experienced with Tor; performing them without the relevant experience will be difficult or impossible, or may result in your arrest. 
The operator of both the original Silk Road and Silk Road 2.0 were developers who, like most developers, were inexperienced in IT operations. Do not run a hidden service in a virtual machine unless you also control the physical host. Designs in which Tor and a service run in firewalled virtual machines on a firewalled physical host are OK, provided it is the physical host which you are in control of, and you are not merely leasing cloud space. It is trivial for a cloud provider to take an image of your virtual machine's RAM, which contains all of your encryption keys and many other secrets. This attack is far more difficult on a physical machine. Another design for a Tor hidden service consists of two physical hosts, leased from two different providers though they may be in the same datacenter. On the first physical host, a single virtual machine runs with Tor. Both the host and VM are firewalled to prevent outgoing traffic other than Tor traffic and traffic to the second physical host. The second physical host will then contain a VM with the actual hidden service. Again, these will be firewalled in both directions. The connection between them should be secured with a VPN which is not known to be insecure, such as OpenVPN. If it is suspected that either of the two hosts may be compromised, the service may be immediately moved (by copying the virtual machine images) and both servers decommissioned. Both of these designs can be implemented fairly easily with Whonix . Hosts leased from third parties are convenient but especially vulnerable to attacks where the service provider takes a copy of the hard drives. If the server is virtual, or it is physical but uses RAID storage, this can be done without taking the server offline. Again, do not lease cloud space, and carefully monitor the hardware of the physical host. If the RAID array shows as degraded, or if the server is inexplicably down for more than a few moments, the server should be considered compromised, since there is no way to distinguish between a simple hardware failure and a compromise of this nature. Ensure that your hosting provider offers 24x7 access to a remote console (in the hosting industry this is often called a KVM though it's usually implemented via IPMI ) which can also install the operating system. Use temporary passwords/passphrases during the installation, and change them all after you have Tor up and running (see below). Use only such a tool which is accessible via a secured (https) connection, such as Dell iDRAC or HP iLO. If possible, change the SSL certificate on the iDRAC/iLO to one you generate yourself, as the default certificates and private keys are well known. The remote console also allows you to run a fully encrypted physical host, reducing the risk of data loss through physical compromise; however, in this case the passphrase must be changed every time you reboot the system (even this does not mitigate all possible attacks, but it does buy you time). If the system was rebooted without your knowledge or explicit intent, consider it compromised and do not attempt to decrypt it in this manner. Silk Road 2.0 apparently failed to encrypt its hard drives, and also failed to move service when it went down in May 2014, when it was taken offline by law enforcement to be copied. 
Your initial setup of the hosts which will run the service must be in part over clearnet (via a Tor exit node), albeit via ssh and https; however, to reiterate, they must not be done from home or from a location you have ever visited before. As we have seen, it is not sufficient to simply use a VPN. This may cause you issues with actually signing up for the service due to fraud protection that such providers may use. How to deal with this is outside the scope of this answer, though. Once you have Tor up and running, never connect to any of the servers or virtual machines via clearnet again. Configure hidden services which connect via ssh to each host and each of the virtual machines, and always use them. If you run multiple servers, do not allow them to talk to each other over the clearnet; have them access each other via unique Tor hidden services. If you must connect via clearnet to resolve a problem, again, do so from a location you will never visit again. Pretty much any situation which would require you to connect via clearnet indicates a possible compromise; consider abandoning it and moving service instead. Hidden services must be moved occasionally, even if compromise is not suspected. A 2013 paper described an attack which can locate a hidden service in just a few months for around $10,000 in cloud compute charges, which is well within the budget of even some individuals. As noted earlier, a similar attack took place in early 2014 and may have been involved in the November 2014 compromise of dozens of hidden services. How often is best to physically move the hidden service is an open question; it may be anywhere from a few days to a few weeks. My best guess right now is that the sweet spot will be somewhere between 30 to 60 days. While this is an extremely inconvenient timeframe, it is much less inconvenient than a prison cell. Note that it will take approximately an hour for the Tor network to recognize the new location of a moved hidden service. Conclusion Anonymity is hard . Technology alone, no matter how good it is, will never be enough. It requires a clear mind and careful attention to detail, as well as real-world actions to mitigate weaknesses that cannot be addressed through technology alone. As has been so frequently mentioned, the attackers can be bumbling fools who only have sheer luck to rely on, but you only have to make one mistake to be ruined. The guidelines I have given above are intended to make it harder, more time-consuming and more expensive for attackers to locate you or your service, and whenever possible to give you warning that you or your service may be under attack. We call them "advanced persistent threats" because, in part, they are persistent . As the US attorney Preet Bharara said announcing the Silk Road 2.0 raid, "We don't get tired." They won't give up, and you must not. Further reading Chatting in Secret While We're All Being Watched Mostly good advice from one of the journalists who communicated with Edward Snowden. The only part I can really disagree with is the possibility of using your existing operating system or smartphone for communication. As we've seen already, this cannot be done safely, and you must prepare a computer with something like Whonix or Tails. Selected Papers in Anonymity An extensive collection of anonymity-related research, some of which has been presented here. Go through this to get a feel for just how difficult remaining anonymous really is.
{ "source": [ "https://security.stackexchange.com/questions/43369", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21882/" ] }
43,574
According to an article I just read, the functions printf and strcpy are considered security vulnerabilities due to buffer overflows. I understand how strcpy is vulnerable, but could someone explain how/if printf is really vulnerable, or whether I am just understanding it wrong? Here is the article: https://www.digitalbond.com/blog/2012/09/06/100000-vulnerabilities/#more-11658 The specific snippet is: The vendor had mechanically searched the source code and found some 50,000-odd uses of buffer-overflow-capable C library functions such as “strcpy()” and “printf().” Thanks!
It is possible to have issues with printf() by using a user-provided argument as the format string, i.e. printf(arg) instead of printf("%s", arg). I have seen it done way too often. Since the caller did not push extra arguments, a string with some spurious % specifiers can be used to read whatever is on the stack, and with %n some values can be written to memory (%n means: "the next argument is an int *; go write there the number of characters emitted so far"). However, I find it more plausible that the article you quote contains a simple typographical mistake, and really means sprintf(), not printf(). (I could also argue that apart from gets(), there is no inherently vulnerable C function; only functions which need to be used with care. The so-called "safe" replacements like snprintf() don't actually solve the problem; they hide it by replacing a buffer overflow with a silent truncation, which is less noisy but not necessarily better.)
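To make the difference concrete, here is a minimal, hypothetical C illustration (the function and variable names are invented; they are not from the quoted article):

```c
#include <stdio.h>

/* Hypothetical example: "name" is attacker-controlled input. */
void greet_bad(const char *name)
{
    /* Vulnerable: the user input *is* the format string.  Input such as
       "%x %x %x" leaks stack contents, and "%n" writes to memory. */
    printf(name);
}

void greet_good(const char *name)
{
    /* Safe: the format string is a constant; the input is printed as data. */
    printf("%s\n", name);
}

int main(void)
{
    greet_good("%x %x %n");   /* prints the literal text, nothing more */
    return 0;
}
```

Modern compilers can flag the first pattern (e.g. GCC and Clang warn with -Wformat-security), which is a cheap way to audit a codebase for such call sites.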
{ "source": [ "https://security.stackexchange.com/questions/43574", "https://security.stackexchange.com", "https://security.stackexchange.com/users/27846/" ] }
43,639
I understand the purpose of the Access-Control-Allow-Credentials header , but can't see what problem the Access-Control-Allow-Origin header solves. More precisely, it's easy to see how, if cross-domain AJAX requests with credentials were permitted by default, or if some server were spitting out Access-Control-Allow-Credentials headers on every request, CSRF attacks would be made possible that could not otherwise be performed. The attack method in this scenario would be simple: Lure an unsuspecting user to my malicious page. JavaScript on my malicious page sends an AJAX request - with cookies - to some page of a target site. JavaScript on my malicious page parses the response to the AJAX request, and extracts the CSRF token from it. JavaScript on my malicious page uses any means - either AJAX or a traditional vessel for a CSRF request, like a form POST - to perform actions using the combination of the user's cookies and their stolen CSRF token. However, what I can't see is what purpose is served by not allowing uncredentialed cross-domain AJAX requests without an Access-Control-Allow-Origin header. Suppose I were to create a browser that behaved as though every HTTP response it ever received contained Access-Control-Allow-Origin: * but still required an appropriate Access-Control-Allow-Credentials header before sending cookies with cross-domain AJAX requests. Since CSRF tokens have to be tied to individual users (i.e. to individual session cookies), the response to an uncredentialed AJAX request would not expose any CSRF tokens. So what method of attack - if any - would the hypothetical browser described above be exposing its users to?
If I understand you correctly, you are saying why is the browser blocking access to a resource that can be freely obtained over the internet if cookies are not involved? Well consider this scenario: www.evil.com - contains malicious script code looking to exploit CSRF vulnerabilites. www.privatesite.com - this is your external site, but instead of locking it down using credentials, you have set it up to be cookieless and to only allow access from your home router's static IP. mynas (192.168.1.1) - this is your home server, only accessible on your home wifi network. Since you are the only one that you allow to connect to your home wifi network, this server isn't protected by credentials and allows anonymous, cookieless access. Both www.privatesite.com and mynas generate tokens in hidden form fields for protection against CSRF - but since you have disabled authentication these tokens are not tied to any user session. Now if you accidentally visit www.evil.com this domain could be making requests to www.privatesite.com/turn_off_ip_lockdown passing the token obtained by the cross-domain request, or even to mynas/format_drive using the same method. Unlikely I know, but I guess the standard is written to be as robust as possible and it doesn't make sense to remove Access-Control-Allow-Origin since it does add benefit in scenarios like this.
{ "source": [ "https://security.stackexchange.com/questions/43639", "https://security.stackexchange.com", "https://security.stackexchange.com/users/29805/" ] }
43,683
This article states: Brute-force techniques trying every possible combination of letters, numbers, and special characters had also succeeded at cracking all passwords of eight or fewer characters. There are 6.63 quadrillion possible 8-character passwords that could be generated using the 94 numbers, letters, and symbols that can be typed on my keyboard. I'm skeptical that that many password combinations could actually be tested. Is it really possible to test that many possibilities in less than a year in this day and age?
As per this link, with a speed of 1,000,000,000 passwords/sec, cracking an 8-character password composed from a 96-character set takes 83.5 days. Research presented at Password^12 in Norway shows that 8-character NTLM passwords are no longer safe. They can be cracked in 6 hours on a machine which cost ~$8000 in 2012. One important thing to consider is which algorithm is used to create these hashes (assuming you are talking about hashed passwords). If some computationally intensive algorithm is used, then the rate of password cracking can be reduced significantly. In the link above, the author highlights that "the new cluster, even with its four-fold increase in speed, can make only 71,000 guesses against Bcrypt and 364,000 guesses against SHA512crypt."
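As a back-of-the-envelope check on those figures, the arithmetic is easy to reproduce; the sketch below simply plugs in the numbers quoted above (a 96-character set, 10^9 guesses per second), which are assumptions about the attacker rather than measurements:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double charset = 96.0;   /* size of the character set quoted above */
    double length  = 8.0;    /* password length */
    double rate    = 1e9;    /* assumed guesses per second */

    double keyspace = pow(charset, length);      /* 96^8, about 7.2e15 */
    double days     = keyspace / rate / 86400.0; /* exhaustive search */

    printf("keyspace: %.2e candidates\n", keyspace);
    printf("time at 1e9 guesses/s: %.1f days\n", days);  /* ~83.5 days */
    return 0;
}
```

Replacing the 10^9 rate with the bcrypt figure quoted above turns the result from days into thousands of years, which is the whole argument for slow, computationally intensive hashes.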
{ "source": [ "https://security.stackexchange.com/questions/43683", "https://security.stackexchange.com", "https://security.stackexchange.com/users/26160/" ] }
43,809
A quick Google search doesn't reveal whether it is important to logout of webapps (online banking, Amazon, Facebook, etc.), or if I am safe just closing the tab or browser. I am sure I heard on some TV show that it's best to logout... What possible threats am I exposing myself to if I don't properly log out?
This is not a trivial, simplistic question. There are several different aspects you need to consider, and several different mechanisms and countermeasures that apply to several different threats in several different scenarios that are affected by several different clients. Let's examine these one at a time. (There will be a TL;DR at the end...) If you're using a Public computer: LOG OUT. Any service you keep an account on should not be left logged in on a publicly accessible machine. If you're using a Trivial unsensitive service: STAY LOGGED IN. This applies only to throw-away, temporary accounts, such as for Internet radio, where giving away access is nothing more than a nuisance. If you're using Public Wifi: LOG OUT. Since the network is inherently untrusted, there is one big obvious THREAT: Session cookie theft. It is possible that your session was hijacked, and someone - either someone else on the network, or the hotspot itself - stole your session cookie. Of course, if this was the case, you wouldn't know, but then you may not be able to really log out either (if it is a malicious network or MITM, they have control of your entire connection - they might simply drop your logout request). That said, 3rd party theft of just your session cookie IS a valid threat (e.g. FireSheep ), and explicit logging out prevents unlimited use. (Basically the damage may have been done already, but this stops it from continuing.) Even better would be to go to a trusted network, login, and explicitly log back out, just in case the MITM blocked your logout. Better yet is to change your password on the trusted site... But best to never access a non-trivial, sensitive site from an untrusted network. If you're using All-day applications: STAY LOGGED IN. For services you use all day and want quick/easy access to, e.g. Facebook, email, etc - IF this is your own private (or work) computer on a trusted network, it is a sensible trade-off to leave your browser logged in long-term. THREAT: Malicious bystander Lock your computer any time you step away, even to get a cup of coffee. Or lock your office, if you have a physical door that noone else can get through. (Or have a home office, wheee!) Periodically log out and back in. Monitor any posts "you" make. THREAT: Other sites can register that you are logged in (e.g. to show you that important "Like" icon from Facebook). This is part of the trade-off that applies, while there are wider implications that are out of scope for this answer. If you are using Any application that uses HTTP Basic Authentication (e.g. many routers): LOG OUT, AND CLOSE ALL BROWSER WINDOWS. Here is where it gets interesting, and this applies to the next section too. When you log in to a webapp using Basic AuthN, the browser caches your password, and sends it on every request. The browser's BasicAuth mechanism has no concept of session. Even if you repeatedly logout, the webapp does not - neither serverside nor clientside - have any way of "killing" the session. The only way to clear those cached credentials, is to kill the browser process. HOWEVER . Your choice of browser matters for this concept of "browser process". E.g.: Firefox : Always a single process, no matter how many tabs and windows you open. Chrome : Each tab is a separate process. However, there is another, "global", parent process. All the tab processes are child processes of this one (aka "job process" in Windows parlance), and they all share process memory through the parent . This is also true if you open a new window. 
So while Chrome's ample use of child processes with shared parent make its tabs particularly lively and robust, the downside is sharing process state. In other words, the only way to remove cached BasicAuth credentials from Chrome is to close all Chrome windows, every last one. Closing the tab alone will not help. IE : tab/process model is identical (or similar) to Chrome... with one exception. By default , IE also opens all tabs in a child of the parent process. (Actually, this is not 100% accurate - some tabs share a child process with other tabs - but this doesnt matter in reality). However, if you add " -NoFrameMerging " to the IE commandline, it will create a completely new IE parent process. The difference here is that you can e.g. create a new parent window just to login to your router, and then close just that window when you are done. This clears your BasicAuth cache, without touching any other open IE windows. (Side note: it is actually possible to do this with Chrome too! It is a lot more involved, though, and requires you to create another browser profile on your machine.) If you are using Sensitive applications, e.g. banking apps - ALWAYS EXPLICITLY LOG OUT, AND CLOSE ALL BROWSER WINDOWS. This part is a little more complicated, but a lot of the dependencies were already covered above. THREAT: Malicious bystander Locking your computer, as above, would make sense, however there is no need for the trade-off from before. Just log out. Session Timeout: In addition, most sensitive (e.g. banking) apps should implement some form of automatic idle timeout, so if you go out for the afternoon your session will automatically die at some point. This might not help with this threat, since the malicious bystander may just hop on your computer if you step out for 4 1/2 minutes to refill your coffee. THREAT: Session cookie theft Hopefully, sensitive apps are actively preventing this, with e.g. HTTPS, IDS, geo/fraud detection, etc. That said, it still makes sense to close that "window of opportunity", just in case - defense in depth, and all that. Session Timeout: As before, most sensitive (e.g. banking) apps should implement some form of automatic idle timeout, and will help minimize this threat too. However , even if you do know for a fact that this app does implement idle timeouts correctly, there is still a window of opportunity for the attacker. That said, in a relatively-secure app this is not much of a threat. THREAT: Cross Site Request Forgery (CSRF) This is the one you need to worry about. Say you are logged in to your bank. In the same window, in a different tab, you are browsing some dubious website. While viewing this website, it might be surreptitiously testing various well-known banksites, to see if you happen to be logged in to one of them. If you are, it will mount the CSRF attack (not all bank sites are vulnerable to this, but many still are). CSRF'd! Okay. Now say you are smarter than that other guy , and dont browse suspicious sites the same time your banksite is open. So, after you finish on your bank, you carefully close the tab. Only then do you open a new tab to browse to the dodgy site. Well, problem is, you are still logged in, and will be for a while (typically around 30 minutes, but it could be as little as 10 or as much as an hour...). CSRF'd! . (Note that the session timeout here does help, by shortening the window of opportunity, but there is still a chance of this happening within the window). Hmm. Well, I know, let's open a new browser window! 
Use that for bank work, then again CLOSE the tab, and again open a new one for the malware sites I like to play with. Whoops, see the above section on Basic Authentication - your choice of browser matters. Unless you're using "incognito/private browsing", or the " -NoFrameMerging " flag for IE, you are still in the same process family , and this still-open session will be shared between all your windows, at least until the server hits the idle timeout. Assuming it hasn't already been co-opted. CSRF'd! Okay, one more, just one. I read this overly long post somewhere, about how I always need to logout from my sensitive apps - so I do just that, before popping on to my criminal sites. Unfortunately, the application "forgot" to do a proper logout, it just redirects me out of the application (or erases my cookie, or...) instead of invalidating it on the server... CSRF'd! So, TL;DR? If you care about your account on this site: LOG OUT. If you care about your account, and it uses Basic Authentication: LOG OUT AND CLOSE ALL BROWSER TABS AND WINDOWS. If you don't care about your account - it doesn't matter what you do, so stop asking :-). P.S. I did not cover things like Flash cookies, non-http sessions, and Integrated Windows Authentication. Enough is enough.
{ "source": [ "https://security.stackexchange.com/questions/43809", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16032/" ] }
43,888
One time while I was printing my documents at a copy center, I saw they dragged-and-dropped my PDFs from my flash drive to their desktop. How do I protect my files without using password? Is there a way to make a file open once and after that encrypt it?
As far as I know there is no feature like that in Adobe Reader. But even if there were such a feature, it couldn't be effective. PDF is an open format, so they could just use another PDF-capable program to view it which doesn't support this feature. They could create a copy of the file before opening it; Adobe Reader couldn't know about that second copy, so it wouldn't be able to encrypt it too. They could make the file write-protected, which means that it can't be encrypted after opening it. They could log the generated print job in the printer spool of the operating system. They could use a custom printer driver which outputs each printed document to a file. The printer itself could cache the print job (it likely does by default - professional printers often come with quite large hard drives). Bottom line: when you want to print a sensitive document, either find a copy shop you trust or print it yourself.
{ "source": [ "https://security.stackexchange.com/questions/43888", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32034/" ] }
44,018
Assume that I'm using bcrypt with a unique salt (or some other best practice ) to hash user passwords before storing them in my database. Is there any security advantage to be gained by character-shifting the password before encrypting? (i.e. A -> B , a -> b , 9 -> : ) Is there any security advantage to be gained by character-shifting the hash after encrypting, excluding the == ? (i.e. AB+6/== -> BC,70== ) If there's no advantage to be gained, is there any loss of security by doing this? Or is it a wash? In light of the rise in attacks based on standard password patterns , does obscuring these patterns help at all?
Applying a transform on passwords before hashing them does no direct harm to security as long as it is injective: two passwords that are distinct before applying the transform shall still be distinct once both are transformed. Your "character shifting" is fine for that if you do it properly (i.e. beware of transforming a byte of value 255 into a byte of value 0, which will "truncate" the password). Indirectly, there is a bit of harm in that your extra transform is extra code, thus extra complexity, and complexity is always bad for security. Moreover, if the transform is computationally expensive, then it comes at odds with the iteration count used in PBKDF2/bcrypt (i.e. you have less CPU available, so you must use a lower iteration count). A transform like the one you envision does any good to security only insofar as the attacker is not aware of it, i.e. the attacker did not do his homework. Basic, low-power attackers who just got lucky with a SQL injection attack may indeed lack the knowledge, but these attackers are not the scary ones. Your extra transform will not deter strong attackers, and it is strong attackers you should worry about. Transforms applied after hashing have no influence whatsoever on security, except if you are using something like extra hashing, in which case you are just trying to build a custom hash function, which, as usual, is a bad idea. So don't do it. The proper way to use your CPU is not to waste it on voodoo character shifts and other rituals; instead, use your CPU to have a bcrypt/PBKDF2 iteration count as high as is tolerable for your overall application performance.
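Purely to illustrate the injectivity caveat, here is a hypothetical sketch of such a shift done "properly" in C: it never produces a 0x00 byte (which would truncate a C string) and never maps two distinct bytes to the same value. As argued above, this still buys essentially nothing against a competent attacker.

```c
#include <stdint.h>
#include <stdio.h>

/* Rotate within the 255 non-zero byte values: 0x01..0xFE map to the next
 * value and 0xFF wraps around to 0x01.  The output never contains 0x00,
 * so nothing gets truncated, and the mapping is injective: two distinct
 * passwords remain distinct after the transform. */
static uint8_t shift_byte(uint8_t b)
{
    return (b == 0xFF) ? 0x01 : (uint8_t)(b + 1);
}

static void shift_password(char *buf)
{
    for (size_t i = 0; buf[i] != '\0'; i++)
        buf[i] = (char)shift_byte((uint8_t)buf[i]);
}

int main(void)
{
    char pw[] = "correct horse";
    shift_password(pw);
    printf("%s\n", pw);   /* "dpssfdu!ipstf" -- then feed this to bcrypt */
    return 0;
}
```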
{ "source": [ "https://security.stackexchange.com/questions/44018", "https://security.stackexchange.com", "https://security.stackexchange.com/users/17569/" ] }
44,065
I'm wondering how to use NAT with IPv6. Seems that you don't even need it any more. So what exactly is the concept behind firewall configurations in IPv6 environments?
There is some widespread confusion about NAT . NAT has never been meant to be used as a security feature. However, it so happens that in most cases (not all), when a machine has access to the Internet through NAT only, then the machine is somehow "protected". It is as if the NAT system was also, inherently, a firewall. Let's see how it works: An IP packet has a source and a destination address. Each router, upon seeing the destination address, decides to which subsequent router the packet shall be sent. When a router implements NAT, it forwards outgoing packets under a guise; namely, the packets bear the router's external IP as source address, not the actual source. For incoming packets, the router does the reverse operation. The TCP/UDP port numbers are used to know to what internal host the packets relate. However, from the point of view of the router, the internal hosts have (private) IP addresses which are directly reachable. NAT is for communications between the internal hosts and machines beyond the router. Let's take an example: Inner <---> HomeRouter <---> ISPRouter <---> The Internet "Inner" is your PC. "HomeRouter" is the router which does the NAT. "ISPRouter" is the router at your ISP. The "firewall effect" is the following: usually , even if "Inner" has an open port (it runs a remotely reachable service, e.g. a local Web server on port 80), people from "the Internet" will not be able to connect to it. The reason is the following: there are two ways by which an IP packet may be transferred by HomeRouter to Inner: An incoming packet may come with HomeRouter's address as destination, and targeting a port which HomeRouter knows to be associated with an outgoing connection from Inner to somewhere on the Internet. This works only for a connection which was initiated by Inner, and this implies that the port will not match that of the server which runs on Inner. An IP packet contains Inner's private IP address as destination and is somehow brought to the attention of HomeRouter. But ISPRouter does not know Inner's private IP, and would not forward an IP packet meant for that address to HomeRouter. Source routing could be used to tag a packet with Inner's private IP address as destination and HomeRouter's public IP address as intermediate host. If ISPRouter supports source routing, then such a packet will reach Inner, regardless of NAT. It so happens that almost no ISP actually supports source routing. Therefore, the "firewall effect" of NAT relies on two properties: Attackers are far : attackers do not inject packets directly on the link between the home router and the ISP; all their attempts must go through the ISP routers. ISP don't allow source routing . This is the (very) common case. So in practice there are a lot of machines, in private homes and small business, which could be hacked into in a matter of seconds except that they benefit from the "firewall effect" of NAT. So what of IPv6 ? NAT was designed and deployed ( widely deployed) in order to cope with the scarcity of free IPv4 addresses. Without NAT, the IPcalypse would have already destroyed civilization (or triggered IPv6 actual usage, maybe). IPv6 uses 128-bit addresses, instead of the meagre 32-bit IPv4 addresses, precisely so that crude workarounds like NAT need not be used. You can use NAT with IPv6, but it makes little sense - if you can live with NAT, why would you switch to IPv6 at all ? However, without NAT, then no "firewall effect", flimsy as it could be. 
Most operating systems are now IPv6-ready, and will use it automatically if given the chance. Therefore, if an ISP decides to switch IPv6 on, just like that, then a lot of machines which were hitherto "hidden" behind a NAT will become reachable from the outside. This could well turn into a worldwide hacking orgy. It is no wonder that ISPs are somewhat... reluctant. To switch to IPv6 nicely, you have to couple its enabling with some solid, well-thought-out firewalling rules, which will prevent incoming connections which were not possible in a NAT world (with the caveats explained above), but are now feasible thanks to the magic of IPv6. The operative word here is "think": this will require some time from some people, and that's not free. So it can be predicted that IPv4 will be used and maintained as long as it can be tolerated, and, thanks to NAT and transparent proxies, this will be a long time (especially if we succeed at keeping the human population below 10 billion).
{ "source": [ "https://security.stackexchange.com/questions/44065", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32139/" ] }
44,094
I know that one shouldn't rely on "obscurity" for their security. For example, choosing a non-standard port is not really security, but it also doesn't usually hurt to do so (and may help mitigate some of the most trivial attacks). Hashing and encryption relies on strong randomization and secret keys. RSA, for instance, relies on the secrecy of d , and by extension, p , q , and ϕ(N) . Since those have to be kept secret, isn't all encryption (and hashing, if you know the randomization vector) security through obscurity? If not, what is the difference between obscuring the secret sauce and just keeping the secret stuff secret? The reason we call (proper) encryption secure is because the math is irrefutable: it is computationally hard to, for instance, factor N to figure out p and q (as far as we know). But that's only true because p and q aren't known. They're basically obscured . I've read The valid role of obscurity and At what point does something count as 'security through obscurity'? , and my question is different because I'm not asking about how obscurity is valid or when in the spectrum a scheme becomes obscure , but rather, I'm asking if hiding all our secret stuff isn't itself obscurity, even though we define our security to be achieved through such mechanisms. To clarify what I mean, the latter question's answers (excellent, by the way) seem to stop at "...they still need to crack the password" -- meaning that the password is still obscured from the attacker.
See this answer . The main point is that we make a sharp distinction between obscurity and secrecy ; if we must narrow the difference down to a single property, then that must be measurability . Is secret that which is not known to outsiders, and we know how much it is unknown to these outsiders. For instance, a 128-bit symmetric key is a sequence of 128 bits, such that all 2 128 possible sequences would stand an equal probability of being used, so the attacker trying to guess such a key needs to try out, on average, at least 2 127 of them before hitting the right one. That's quantitative . We can do math, add figures, and compute attack cost . The same would apply to a RSA private key. The maths are more complex because the most effective known methods rely on integer factorization and the involved algorithms are not as easy to quantify as brute force on a symmetric key (there are a lot of details on RAM usage and parallelism or lack thereof). But that's still secrecy. By contrast, an obscure algorithm is "secret" only as long as the attacker does not work out the algorithm details, and that depends on a lot of factors: accessibility to hardware implementing the algorithm, skills at reverse-engineering, and smartness . We do not have a useful way to measure how smart someone can be. So a secret algorithm cannot be "secret". We have another term for that, and that's "obscure". We want to do security through secrecy because security is risk management: we accept the overhead of using a security system because we can measure how much it costs us to use it, and how much it reduces the risk of successful attacks, and we can then balance the costs to take an informed decision. This may work only because we can put numbers on risks of successful attacks, and this can be done only with secrecy, not with obscurity.
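To see what "measurable" buys you in practice, here is a toy computation; the attacker's guessing rate is an arbitrary, deliberately generous assumption:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double guesses_per_second = 1e12;            /* assumed, very generous attacker */
    double expected_guesses   = ldexp(1.0, 127); /* 2^127: average for a 128-bit key */

    double years = expected_guesses / guesses_per_second / (365.25 * 86400.0);
    printf("expected brute-force time: %.2e years\n", years);  /* ~5.4e18 years */

    /* This is a number we can plug into a risk calculation.  There is no
     * equivalent number for "time until someone reverse-engineers our
     * secret algorithm" -- which is exactly what makes that obscurity. */
    return 0;
}
```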
{ "source": [ "https://security.stackexchange.com/questions/44094", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9086/" ] }
44,238
I've seen something like: sshd[***]: Invalid user oracle from **.**.**.** // 1st line sshd[***]: input_userauth_request: invalid user oracle [preauth] // 2nd line sshd[***]: Received disconnect from **.**.**.**: 11: Bye Bye [preauth] // 3rd line and I know that's someone tries to log into my server, but what does it mean when there's only the 3rd line repeating over and over again for, like, 3000+ times? I mean, like this (there's no Invalid user or input_userauth_request ): sshd[***]: Received disconnect from **.**.**.**: 11: Bye Bye [preauth] sshd[***]: Received disconnect from **.**.**.**: 11: Bye Bye [preauth] sshd[***]: Received disconnect from **.**.**.**: 11: Bye Bye [preauth] What's the purpose of doing so, what's he trying to do since it's "disconnect" instead of trying to login?
This error rises from a fatal error in the authentication process (see monitor.c of OpenSSH versions 6.1p1+). It is likely that the attacker is using some custom code to brute-force the server which is ending up in malformed authentication requests being sent, resulting in the server killing the connection. So from the code it appears they are in fact trying to login, but the server doesn't like how they're attempting that. As such, these log entries aren't anything to worry about unless you think you are likely to be a targeted victim for any reason (in which case you should be taking extra precautions such as refusing password-based logins). In any case, I suggest you install the simple fail2ban program if you haven't already which will significantly hinder cookie-cutter brute-force authentication attempts.
{ "source": [ "https://security.stackexchange.com/questions/44238", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32289/" ] }
44,251
Does anyone know how to use OpenSSL to generate certificates for the following public key types: DSA - For DHE_DSS key exchange. Diffie-Hellman - For DH_DSS and DH_RSA key exchange. ECDH - For ECDH_ECDSA and ECDH_RSA key exchange. ECDSA - For ECDHE_ECDSA key exchange. Most that I can find online teaches you how to generate a RSA type certificate though.
(EC)DSA For a DSA key pair, use this: openssl dsaparam -out dsakey.pem -genkey 1024 where "1024" is the size in bits. The first DSA standard mandated that the size had to be a multiple of 64, in the 512..1024 range. Then another version deprecated sizes below 1024, so a valid DSA key size had length 1024 bits and nothing else. Current version specifies that a valid DSA key has length 1024, 2048 or 3072 bits. OpenSSL accepts other lengths. If you want to maximize interoperability, use 1024 bits. For an ECDSA key pair, use this: openssl ecparam -genkey -out eckey.pem -name prime256v1 To see what curve names are supported by OpenSSL, use: openssl ecparam -list_curves (For optimal interoperability, stick to NIST curve P-256, that OpenSSL knows under the name "prime256v1".) Once you have a DSA or ECDSA key pair, you can generate a self-signed certificate containing the public key, and signed with the private key: openssl req -x509 -new -key dsakey.pem -out cert.pem (Replace "dsakey.pem" with "eckey.pem" to use the EC key generated above.) (EC)DH For Diffie-Hellman (with or without elliptic curves), things are more complex, because DH is not a signature algorithm: You will not be able to produce a self-signed certificate with a DH key. You cannot either make a PKCS#10 request for a certificate with a DH key, because a PKCS#10 request is supposed to be self-signed (this self-signature is used as a proof of possession ). While OpenSSL, the library , has the support needed for issuing a certificate which contains a DH public key; this page may contain pointers. The challenge is to convince OpenSSL, the command-line tool , to do it. In the jungle of the OpenSSL documentation, I have not found a complete way to do it. Key pairs are easy enough to generate, though. To generate a DH key pair, with the OpenSSL command-line tool, you have to do it in two steps: openssl dhparam -out dhparam.pem 1024 openssl genpkey -paramfile dhparam.pem -out dhkey.pem For an ECDH key pair, use this: openssl ecparam -out ecparam.pem -name prime256v1 openssl genpkey -paramfile ecparam.pem -out ecdhkey.pem However, it so happens that the format for certificates containing ECDH public keys is completely identical to the format for certificates containing ECDSA public keys; indeed, the format contains "an EC public key" without indication of the intended algorithm (ECDH or ECDSA). Therefore, any private key and certificates for ECDSA (private key for generating ECDSA signatures, certificate self-signed or signed by any other CA) will be fit for ECDH-* cipher suites. The one case that I don't know how to produce with the OpenSSL command-line tool is a static Diffie-Hellman (non-EC) certificate. Note, though, that OpenSSL does not support SSL/TLS with static DH cipher suites either, so even if you could produce the certificate, it would not work with OpenSSL. (And, in fact, nobody uses static DH in practice.)
{ "source": [ "https://security.stackexchange.com/questions/44251", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31012/" ] }
44,257
Is there a PCI-DSS rule that a merchant cannot capture the funds for an order until we have actually shipped it? I can't seem to find an official reference that describes this as a PCI DSS requirement. Is it one? If so, where can I find it?
{ "source": [ "https://security.stackexchange.com/questions/44257", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32308/" ] }
44,340
A few basic questions around self-signed certificates. Question 1: Does a self-signed certificate have to have a CA? I realize that a "real" certificate must, as otherwise it cannot be chain-validated, but what about a self-signed one? Question 2: If a self-signed certificate hasn't got a CA, should it still go in the trusted CA store so as not to cause chain validation errors when using it for testing? Question 3: Is there a way to see if a self-signed cert is signed by a CA, and if it has a CA, does that affect the question above? Do I then have to have the CA to trust it, or can I still just put the actual cert in the CA store to create a secure SSL channel, for example?
A certificate is signed by the CA which issues it. A self-signed certificate, by definition, is not issued by a CA (or is its own CA, if you want to view it like this). A certificate may have CA power , i.e. be trusted to sign other certificates, or not, depending on whether it contains a Basic Constraints extension with the cA flag set to TRUE (cf the standard ). What happens is the following: when some application wants to validate a certificate (e.g. a Web browser who wants to do SSL with a server, and just obtained the server's certificate), it will want to build a certificate chain which begins with a "trust anchor" (one of the certificates in its "trusted root" certificate store) and ends with the certificate to validate (called "EE" as "end-entity"). Exact rules for a chain to be valid are intricate and full of details; for the purposes of this answer, let's limit ourselves to these necessary conditions: The chain must start with a trust anchor. Each certificate is signed by the previous certificate in the chain (i.e. the signature on each certificate is to be verified relatively to the public key as is stored in the previous certificate). Each certificate except the end-entity has a Basic Constraints extension with the cA flag set to TRUE . So a self-signed but not CA certificate, when used as a trust anchor, will be accepted as valid as an end-entity certificate (i.e. in a chain reduced to that certificate exactly) but not otherwise. This is the normal case. When, as a browser user, you want to accept a given self-signed certificate as valid, you actually tell your browser that the self-signed certificate should become a trust anchor -- but you certainly do not want to trust that certificate for issuing other certificates with other names ! You want to trust it only for authenticating a specific site. As usual, details may vary -- not all browsers react in the exact same ways. But the core concepts remain: A self-signed certificate lives outside of the CA world: it is not issued by a CA. A client (browser) uses trust anchors as the basis for what it trusts. A self-signed certificate for a Web server, usually, should be trusted (if at all) only for that server, i.e. added as a trust anchor, but not tagged as "good for issuing certificates". When a chain is reduced to a single certificate, i.e. the end-entity is also the trust anchor, then this is known as direct trust : a specific certificate is trusted by being already known, exactly, instead of being trusted by virtue of being issued by a trusted CA.
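Purely as an illustration, the three necessary conditions above can be sketched in code. This is a deliberately simplified, hypothetical model — real validation (name matching, validity dates, revocation, path building) is done by a library, not by hand:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, stripped-down view of a certificate: just the fields
 * needed to express the three rules listed above. */
typedef struct cert {
    bool is_trust_anchor;       /* present in the trusted root store       */
    bool basic_constraints_ca;  /* Basic Constraints extension, cA = TRUE  */
    const struct cert *issuer;  /* previous certificate in the chain       */
    bool signature_valid;       /* signature verifies against issuer's key */
} cert;

/* chain[0] is the trust anchor, chain[n-1] the end-entity certificate. */
bool chain_is_acceptable(const cert *chain[], size_t n)
{
    if (n == 0 || !chain[0]->is_trust_anchor)
        return false;                                   /* rule 1 */

    for (size_t i = 0; i + 1 < n; i++)
        if (!chain[i]->basic_constraints_ca)
            return false;                               /* rule 3 */

    for (size_t i = 1; i < n; i++)
        if (chain[i]->issuer != chain[i - 1] || !chain[i]->signature_valid)
            return false;                               /* rule 2 */

    return true;
}

int main(void)
{
    /* Direct trust: a self-signed, non-CA certificate used as both trust
     * anchor and end-entity -- valid as a chain of length one. */
    cert self_signed = { .is_trust_anchor = true, .basic_constraints_ca = false,
                         .issuer = NULL, .signature_valid = true };
    const cert *chain[] = { &self_signed };
    return chain_is_acceptable(chain, 1) ? 0 : 1;
}
```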
{ "source": [ "https://security.stackexchange.com/questions/44340", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32074/" ] }
44,512
In a blog post in response to the Amazon privacy controversy , Mark Shuttleworth wrote: Don’t trust us? Erm, we have root . You do trust us with your data already. You trust us not to screw up on your machine with every update. You trust Debian, and you trust a large swathe of the open source community. And most importantly, you trust us to address it when, being human, we err. What does he mean by "we have root"? Surely Canonical doesn't have root access to every machine running Ubuntu?
The wording of that sentence may seem a bit worrying because in a way it implies that they have root access as a backdoor that is already installed and in use. The truth is that it was just bad wording from Mark and what he tried to explain is that, yes, they have potential root access to your machine because every package update runs as root and at that point they can do and install anything they want or anything that could potentially sneak in from some open source project. If you also go through the comments on that blog post you will find Mark giving the answer to your question(his username in comments is 'mark') Someone asked him: Sebastian says: ( http://www.markshuttleworth.com/archives/1182#comment-396204 ) September 23rd, 2012 at 11:42 am Ermm. You have root? Details please. and then Mark replied: mark says: ( http://www.markshuttleworth.com/archives/1182#comment-396225 ) September 23rd, 2012 at 1:00 pm @Sebastian – Every package update installs as root.
{ "source": [ "https://security.stackexchange.com/questions/44512", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32194/" ] }
44,611
I am very confused by the difficult jargon available on the web about OAuth, OpenID and OpenID Connect. Can anyone tell me the difference in simple words?
OpenID is a protocol for authentication while OAuth is for authorization . Authentication is about making sure that the guy you are talking to is indeed who he claims to be. Authorization is about deciding what that guy should be allowed to do. In OpenID, authentication is delegated: server A wants to authenticate user U, but U's credentials (e.g. U's name and password) are sent to another server, B, that A trusts (at least, trusts for authenticating users). Indeed, server B makes sure that U is indeed U, and then tells to A: "ok, that's the genuine U". In OAuth, authorization is delegated: entity A obtains from entity B an "access right" which A can show to server S to be granted access; B can thus deliver temporary, specific access keys to A without giving them too much power. You can imagine an OAuth server as the key master in a big hotel; he gives to employees keys which open the doors of the rooms that they are supposed to enter, but each key is limited (it does not give access to all rooms); furthermore, the keys self-destruct after a few hours. To some extent, authorization can be abused into some pseudo-authentication, on the basis that if entity A obtains from B an access key through OAuth, and shows it to server S, then server S may infer that B authenticated A before granting the access key. So some people use OAuth where they should be using OpenID. This schema may or may not be enlightening; but I think this pseudo-authentication is more confusing than anything. OpenID Connect does just that: it abuses OAuth into an authentication protocol. In the hotel analogy: if I encounter a purported employee and that person shows me that he has a key which opens my room, then I suppose that this is a true employee, on the basis that the key master would not have given him a key which opens my room if he was not.
{ "source": [ "https://security.stackexchange.com/questions/44611", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32407/" ] }
44,797
Can they be used together? ....or are they two separate protocols that may or may not be useful depending on the context? The reason I ask is because I'm trying to implement the following: User "Bob" goes to a Client implemented as a User-Agent only application. The protected resources are controlled by the same domain as the authentication/authorization server, but they are on different subdomains. However, no session is found in the cookies, so... Bob clicks "login," and gets redirected to authorization/authentication server using something like the following: GET https://accounts.example.com/authorize?response_type=token&client_id=123&redirect_uri=http://original.example.com&scope=openid profile token custom Bob is given a list of options to choose from to authenticate, i.e., "example, google, twitter," etc. which leads to his authentication at example.com, which in turn is used for his authorization for a specific API hosted by example.com. Should I be using OpenID Connect, OpenID 2.0, both? What? This is my first time implementing any of them. I'm only asking about the authentication part of this. I'm just trying to get Bob authenticated so that I can move on to issuing the token to the client.
I don't think either of the other previous responses answers the question, which is asking for the difference between OpenID Connect and OpenID 2.0. OpenID 2.0 is not OAuth 2.0. OpenID 2.0 and OpenID Connect are very different standards with completely different parameters and response body formats. OpenID Connect is built on top of OAuth 2.0, putting additional values into otherwise valid OAuth 2.0 requests and responses in order to provide the identity information needed for authentication (whereas OAuth 2.0 only provides authorization, not authentication); OpenID 2.0, by contrast, is an older, standalone protocol that is not layered on OAuth at all. OpenID Connect also improved the naming and structure of the fields and parameters that OpenID 2.0 used, in order to be easier to use. I can easily read the OpenID Connect specification and understand what all the values are used for and what to set them to, but trying to read the OpenID 2.0 specification is a bit more difficult and convoluted. At this point the choice is easy: OpenID 2.0 is deprecated. You should use OpenID Connect.
{ "source": [ "https://security.stackexchange.com/questions/44797", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32716/" ] }
44,811
I have been reading about SSL/TLS for the last few days and looks like it has never been practically cracked. Given SSL/TLS is used for all communications between client application and the server, and given the password/API key is random and strong and securely stored on the server-side (bcryted, salted), do you think Basic Auth is secure enough, even for banking services? I see a lot of people disliking the idea of username/password getting sent on the wire with every request. Is this a real weak point of Basic Auth (even with SSL/TLS used), or is it just out of irrational fear?
HTTP Basic Authentication is not much used in browser-server connections because it involves, on the browser side, a browser-controlled login popup which is invariably ugly. This of course does not apply to server-server connections, where there is no human user to observe any ugliness, but it contributes to a general climate of mistrust and disuse for Basic Authentication. Also, in the 1990s, before the days of SSL, sending plaintext passwords over the wire was considered a shooting offence, and, in their folly, people considered that challenge-response protocols like HTTP Digest were sufficient to ensure security. We now know that it is not so; regardless of the authentication method, the complete traffic must be at least cryptographically linked to the authentication to avoid hijack by active attackers. So SSL is required . But when SSL is in force, sending the password "as is" in the SSL tunnel is fine. So, to sum up, Basic Authentication in SSL is strong enough for serious purposes, including nuclear launch codes, and even money-related matters. One should still point out that security relies on the impossibility of Man-in-the-Middle attacks which, in the case of SSL (as is commonly used) relies on the server's certificate. The SSL client (another server in your case) MUST validate the SSL server's certificate with great care, including checking revocation status by downloading appropriate CRL. Otherwise, if the client is redirected to a fake server, the fake server owner will learn the password... In that sense, using something like HTTP Digest adds some extra layer of mitigation in case the situation got already quite rotten, because even with HTTP Digest, a fake server doing a MitM can still hijack the connection at any point. If we go a bit further, we may note that when using password-based authentication, we actually want password-based mutual authentication. Ideally, the SSL client and the SSL server should authenticate each other based on their knowledge of the shared password. Certificates are there an unneeded complication; theoretically, SSL client and server should use TLS-PSK or TLS-SRP in that situation, and avoid all the X.509 certificate business altogether. In particular, in SRP, what the server stores is not the password itself but a derivative thereof (a hash with some extra mathematical structure). One shall note an important point: in the case of a Web API, both the client and the server are machines with no human involved. Therefore, the "password" does not need to be weak enough to be remembered by the meat bags. That password could be, say, a sequence of 25 random characters, with an entropy gone through the roof. This makes the usual password hashing methods (slow hashing, salts) kind of useless. We still want to avoid storing in the server's database (thus as a prey to potential SQL injections) the passwords "as is", but, in that case , a simple hash would be enough. This points to the following: ideally, for a RESTful API to be used by one server to talk to another, with authentication based on a shared (fat) secret, the communication shall use TLS with SRP. No certificate, only hashes stored on the server. No need for HTTP Basic Auth or any other HTTP-based authentication, because all the work would have already occurred on the SSL/TLS level. Unfortunately, the current state of deployment of SRP-able SSL/TLS implementations usually means that you cannot use SRP yet. 
Instead, you will have to use a more mundane SSL/TLS with an X.509 certificate on the server side, that the client dutifully validates. As long as the validation is done properly, there is no problem in sending the password "as is", e.g. as part of HTTP Basic Authentication.
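To make that last point concrete, here is a minimal sketch (my own illustration, not code from any particular system; names are made up) of how a server could keep only a plain hash of a high-entropy API key and still verify HTTP Basic Auth. TLS certificate validation is assumed to happen in the HTTP layer, e.g. a client library that verifies certificates by default.

    import base64, hashlib, hmac, secrets

    def new_api_key():
        # 32 random bytes of entropy: far too much to brute force, and no human
        # ever has to remember it, so a single fast hash is enough for storage.
        key = secrets.token_urlsafe(32)
        stored_digest = hashlib.sha256(key.encode()).hexdigest()
        return key, stored_digest   # hand `key` to the client, keep only the digest

    def check_basic_auth(authorization_header, stored_digest):
        # Parse "Authorization: Basic base64(user:password)".
        scheme, _, blob = authorization_header.partition(" ")
        if scheme.lower() != "basic":
            return False
        user, _, password = base64.b64decode(blob).decode().partition(":")
        candidate = hashlib.sha256(password.encode()).hexdigest()
        # Constant-time comparison avoids leaking information through timing.
        return hmac.compare_digest(candidate, stored_digest)

The point of the sketch is only that nothing password-shaped needs to sit in the database in clear form, and nothing beyond the standard Basic header needs to travel inside the TLS tunnel.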
{ "source": [ "https://security.stackexchange.com/questions/44811", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32724/" ] }
44,931
The difference between an IDS and a firewall is that the latter prevents malicious traffic, whereas an IDS works as follows:

- Passive IDS: the IDS only reports that there was an intrusion.
- Active IDS: the IDS also takes action against the issue to fix it or at least lessen its impact.

However, what's the difference between an IPS and a firewall? Both are preventative technical controls whose purpose is to guarantee that incoming network traffic is legitimate.
The line is definitely blurring somewhat as technological capacity increases, platforms are integrated, and the threat landscape shifts. At their core we have:

- Firewall - A device or application that analyzes packet headers and enforces policy based on protocol type, source address, destination address, source port, and/or destination port. Packets that do not match policy are rejected.
- Intrusion Detection System - A device or application that analyzes whole packets, both header and payload, looking for known events. When a known event is detected a log message is generated detailing the event.
- Intrusion Prevention System - A device or application that analyzes whole packets, both header and payload, looking for known events. When a known event is detected the packet is rejected.

The functional difference between an IDS and an IPS is a fairly subtle one and is often nothing more than a configuration setting change. For example, in a Juniper IDP module, changing from Detection to Prevention is as easy as changing a drop-down selection from LOG to LOG/DROP. At a technical level it can sometimes require redesign of your monitoring architecture.

Given the similarity between all three systems there has been some convergence over time. The Juniper IDP module mentioned above, for example, is effectively an add-on component to a firewall. From a network flow and administrative perspective the firewall and IDP are functionally indistinguishable even if they are technically two separate devices.

There is also much market discussion of something called a Next Generation Firewall (NGFW). The concept is still new enough that each vendor has their own definition as to what constitutes a NGFW but for the most part all agree that it is a device that enforces policy unilaterally across more than just network packet header information. This can make a single device act as both a traditional Firewall and IPS. Occasionally additional information is gathered, such as from which user the traffic originated, allowing even more comprehensive policy enforcement.
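As a purely illustrative sketch (not how any real product is implemented), the conceptual split above can be reduced to a few lines: the firewall decision looks only at header fields, while IDS and IPS run the same payload inspection and differ only in the action taken on a hit.

    # Toy illustration only; signatures and policy entries are made up.
    FIREWALL_POLICY = {("tcp", 443), ("tcp", 22)}       # allowed (protocol, dest port) pairs
    IDS_SIGNATURES = [b"cmd.exe", b"' OR 1=1 --"]       # pretend payload patterns

    def firewall_allows(packet):
        # Firewall: decision from header fields only.
        return (packet["proto"], packet["dport"]) in FIREWALL_POLICY

    def ids_ips_check(packet, prevent=False):
        # IDS/IPS: identical signature match on the whole payload; only the
        # reaction differs (log versus drop).
        for sig in IDS_SIGNATURES:
            if sig in packet["payload"]:
                return "DROP" if prevent else "LOG"
        return "PASS"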
{ "source": [ "https://security.stackexchange.com/questions/44931", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15194/" ] }
45,041
When entering your email address publicly, a common practice is to replace . with the text "dot" and @ with the text "at". I assume that the reasoning is that this way automatic email-collector robots won't match your address so easily. I still see updated websites using this. However, this practice is not very hard to work around with a program, and it has been around for more than a decade (as of 2013). Anyone in the business of collecting emails has had quite enough time to update all their robots to handle this. Are there still robots that don't handle this? Why? Are there any reasons remaining today to use this kind of mangling?
To understand this, we must understand how crawlers find the email. While steering away from the technicalities, the basic idea is this (today's algorithms are, of course, smarter than that):

1. Find @ in the page.
2. Is there a dot within 255 characters after the @?
3. Grab what's behind the @ until you reach a space or the beginning of the line.
4. Grab the . and what's behind it until you reach the @.
5. Grab what's after the . until you reach the end of the line or a space.

Now, an easy countermeasure would be to replace the @ with at and the . with dot. The most intuitive counter-countermeasure would be to teach the crawler that at is actually @. Well, it's not that simple. Take the following text:

We climbed into the attic and found a dotted piece of wood. Please email us: adnan at gmail dot com.

Now let's run our new crawler on it. First it will find the at in attic, then it will find the dot in dotted. The resulting email would be [email protected], then it will find the second email [email protected].

Then spammers started teaching crawlers about finding certain domains, ignoring spaces, taking spaces into account, considering certain domain names, etc. Then we started using images; spammers used OCR. We started using JavaScript tricks, inserting comments, URL-encoding, etc., and the spammers always found a way around them. It's a race.

Having said that, the most basic techniques usually give good enough results (apparently, in some place in the world, that link is NSFW. Personally, I disagree), and the more you obfuscate, the better results you get.

So, to directly answer your question: Is using 'dot' and 'at' in email addresses in public text still useful? Yes, I think so, at least to some degree. But this solution has been around long enough for us to assume that some crawlers have already found a way around it. My advice? Either use some fancy advanced munger, or simply use images.
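To see why the plain trick still raises the bar at all, here is a deliberately naive toy harvester in the spirit of the steps above (a sketch, nothing like a production crawler): handling "at"/"dot" requires extra, error-prone pattern work that the simplest bots simply do not bother with.

    import re

    # Naive patterns; real harvesters are both smarter and sloppier.
    PLAIN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    DEOBFUSCATED = re.compile(r"([\w.+-]+)\s+at\s+([\w-]+)\s+dot\s+(\w+)", re.I)

    def harvest(text):
        found = set(PLAIN.findall(text))
        # Counter-countermeasure: rewrite "user at domain dot tld" back to an address.
        for user, domain, tld in DEOBFUSCATED.findall(text):
            found.add(f"{user}@{domain}.{tld}")
        return found

    print(harvest("We climbed into the attic. Please email us: adnan at gmail dot com"))

The false positives described in the answer come from looser patterns than this one, e.g. treating any occurrence of the letters "at" or "dot" as a separator instead of requiring whole words.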
{ "source": [ "https://security.stackexchange.com/questions/45041", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15262/" ] }
45,053
If you enable 2FA for Google Apps the shared secret is 160 bits. The google_authenticator PAM module on the other hand seems to use 80 bits for the shared secret. According to RFC 4226 : R6 - The algorithm MUST use a strong shared secret. The length of the shared secret MUST be at least 128 bits. This document RECOMMENDs a shared secret length of 160 bits. Note "MUST" not "SHOULD". So that makes Google Authenticator non compliant. What's the rationale behind doing this?
The initial commit for this code already includes the "80 bits" secret key length. It was not changed afterwards. Now let's analyze things more critically.

HOTP is specified in RFC 4226. Authentication uses a "shared secret", which is the value that we are talking about. What does RFC 4226 say about it? Essentially, there is a requirement in section 4:

R6 - The algorithm MUST use a strong shared secret. The length of the shared secret MUST be at least 128 bits. This document RECOMMENDs a shared secret length of 160 bits.

And then section 7.5 explains at length various methods for generating the shared secret.

Now the google_authenticator code generates an 80-bit shared secret with /dev/urandom and then proceeds to encode this secret with Base32: this encoding maps each group of 5 bits to a printable character. The resulting string thus consists of 16 characters, which are written "as is" in a file, with ASCII encoding, meaning that in the file the shared secret has length... 16 bytes, i.e. 128 bits. It is thus possible that the initial confusion comes from that: what is stored has a length which can be seen, in a way, to comply with requirement R6 of RFC 4226.

Of course, the RFC talks about the "shared secret" by calling it "K" in section 5.1 and then proceeds to using it as the key for HMAC (section 5.3, step 1). With the shared secret generated with the command-line tool in google_authenticator, what enters HMAC really is a sequence of 80 bits, not 128 bits, even though these 80 bits happen to be encoded as 128 bits when stored (but they are decoded back to 80 bits upon usage). Thus, this 80-bit secret cannot really, in a legalistic way, comply with requirement R6 of RFC 4226. However, the confusion on "length" (after or before encoding) may explain this feature of google_authenticator. Note, though, that this is just for the command-line tool which can be used to generate the secret as an initial step. The rest of the code supports longer secret values.

(Another theory is that the author wanted to test his code and, in particular, test the situation in which there is no QR code. In that case, the user must type the code, and an 80-bit secret is easier to type than a 128-bit or 160-bit secret. Possibly, the author first used a short secret to ease development, and subsequently forgot to set it back to its nominal length afterwards. This sort of mishap happens quite often.)

Is it critical? With my cryptographer's hat on, I must answer: no. An 80-bit secret key is still quite strong against brute force, because even with a lot of GPUs, 2^79 evaluations of HMAC/SHA-1 will still take quite some time (with an 80-bit key, the average cost of brute force is that of trying half the possible keys, i.e. 2^79 evaluations). Indeed, HMAC/SHA-1 is deemed "cryptographically strong", meaning that the best known attack is brute force on the key.

Let's put figures on it: HMAC/SHA-1 uses two SHA-1 invocations, so the attack cost is, on average, the cost of calling SHA-1 2^80 times. This page shows benchmarks at 2.5 billion SHA-1 calls per second for a good GPU. If you are mounting a cracking farm, you will usually use a "middle range" GPU, not a top-notch model, because you will get more power per dollar that way. Let's assume that you use a $100 GPU that can do 2^31 SHA-1 per second (that's a bit more than 2 billion). With a budget of one billion dollars you can have ten million such GPUs, and they will run the attack in an average of... 652 days.

Of course, ten million GPUs take quite some room and, more importantly, use a substantial amount of power. If we suppose that each GPU can run on 50 W (a quite optimistic figure), then each attack run will need a bit less than 8 TWh (terawatt-hours). I live in Canada, in the province of Québec, where electricity is known to be very cheap due to huge dams and substantial government subsidies, resulting in a price of about $0.05 per kWh (see the prices). At this price, each attack run will cost around 400 million dollars on electricity alone. This does not include cooling costs, because all this energy will become heat and will have to be dissipated (to some extent, a Canadian winter can help). Also, notice that all the GPUs will collectively draw 500 MW, hardly a discreet amount (that's about half of a nuclear power plant...).

What this amounts to is the following: in practice, an 80-bit key is strong enough. I would be nervous if an 80-bit key protected the launch code for nuclear missiles; however, if the strategic dissuasion was protected by Google's 2FA, I would also be quite nervous for... other reasons. So we can say that while this 80-bit secret is non-compliant and academically "a bit short", it still is quite strong and does not mandate immediate and drastic actions. It would be cool if the code was fixed; the world won't stop spinning if it is not.
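For anyone who wants to check the arithmetic, the figures above can be reproduced in a few lines; the GPU rate, the 50 W power draw and the electricity price are the answer's assumptions, not measurements.

    # Back-of-the-envelope reproduction of the 652 days / ~8 TWh / ~$400M figures.
    sha1_calls = 2 ** 80                # average attack cost, in SHA-1 invocations
    gpu_rate   = 2 ** 31                # assumed SHA-1 per second per ~$100 GPU
    gpus       = 10_000_000             # a one-billion-dollar farm

    seconds    = sha1_calls / (gpu_rate * gpus)
    days       = seconds / 86_400                       # ~652 days
    energy_twh = gpus * 50 * (seconds / 3600) / 1e12    # 50 W per GPU -> ~7.8 TWh
    cost_musd  = energy_twh * 1e9 * 0.05 / 1e6          # $0.05 per kWh -> ~390 M$

    print(round(days), round(energy_twh, 1), round(cost_musd))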
{ "source": [ "https://security.stackexchange.com/questions/45053", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32889/" ] }
45,066
I'd describe myself as an average tech-savvy computer user. I have many accounts on forums, shopping sites, etc., where I recycle two moderately strong passwords with small variations. These are accounts where I don't care if anybody gains access to them, and that's why I have them saved in the browser's password manager. For example, I don't care if somebody gains access to my Alfa Romeo forum account or my Deal Extreme account, because they can't do me any harm. Now for my internet banking and main email, it's a different story. I use a strong password for my internet banking which I DON'T recycle and don't save in my browser's password manager. For banking transactions I use a hardware token. For my Gmail I use two-step verification with another strong password. To me that sounds like a secure enough method where I'm keeping what's important safe, and at the same time I'm not clogging my mind with too many passwords or worrying about what the latest security breach in my password manager would be. Thanks!
Yes. The average user should use long random passwords for every site. Passwords should not be repeated, and passwords should not follow a discernible pattern. The compromise of any one password (e.g. your Adobe or LinkedIn login) must not be allowed to make it any easier for the attacker to guess your other passwords. These requirements make remembering passwords very nearly impossible. But that's not the primary reason why you should use a password manager. The primary reason is that it reliably protects you against phishing attacks. A browser-integrated password manager will only fill in a site-specific password if you're actually visiting the correct site. So you won't accidentally type your Paypal.com password into www.paypal.com.us.cgi-bin.webscr.xzy.ru. This is doubly true for average users, who, on average, rely on the general familiarity of a site to determine whether or not it's legitimate (a terribly ineffective heuristic). Since you don't know your password, you can't type it in. Instead, it will only auto-fill if you're at the authentic site. Use a browser-integrated password manager, don't get phished. It literally is that simple.
{ "source": [ "https://security.stackexchange.com/questions/45066", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32908/" ] }
45,112
From reading a lot of info on this website I came to the conclusion that if someone with enough skill really badly wants to gain access somewhere, then there is absolutely nothing stopping them from doing so. Additionally, I learned that gaining access to a company computer is much easier than to a single computer at home. However, I am completely confused. What exactly prevents someone from, for example, stalking one (or more) of a bank's employees, getting information on them and then gaining access to their system? As far as I have seen, the financial field has some of the most soul-sucking programming tasks, which usually end up leaving security holes. So what would prevent someone from waltzing into the bank's system, causing gigantic chaos (not for the sake of stealing money, but for the sake of screwing them up), and then walking out? For example, I know from internal sources that this has one of the worst security methods implemented. This makes me assume that this is not the first piece of software a bank would release that would allow data to be stolen and/or modified. Does this actually happen but the banks do not care because they suffer very small damages? Or is a bank completely impossible to get into and destroy data?
I think it's fair to say that the idea that any large organisation is entirely impervious to attack has been proven false over the last five years or so. Everyone from nation states through large corporations, security consultancies and other security minded companies have had breaches. One reason that a bank hasn't been thrown into "complete chaos" as you put it is likely down to a combination of the security measures they have in place to react to attacks, the size and complexity of their systems and the motivations of people who have the resources to effectively attack them. If you think about people who are most motivated to attack banking systems, it's criminals who want to steal money from them. From their perspective there's no reason to cause chaos, they want to get in, steal things and get away without being noticed, if possible.
{ "source": [ "https://security.stackexchange.com/questions/45112", "https://security.stackexchange.com", "https://security.stackexchange.com/users/27627/" ] }
45,153
I would like to stay out of the automatic filters in place by security agencies and not be accidentally placed on a no-fly list or such. Say I'm having a political debate with a friend about democracy and stuff, and terms like revolution, capitalism, freedom and such (oh, hello there NSA!) are thrown around a lot. Sending an encrypted email where normally I would send emails in plain text is a sure fire way to trigger some of the filters, I assume. Is there a way to encrypt emails so that the cypher text is hard for an algorithm to distinguish from regular (let's say spam, hard to get any more regular than spam) email? For example:

Normal PGP cyphertext, easy to distinguish:

-----BEGIN PGP MESSAGE-----
Comment: GPGTools - http://gpgtools.org

hQIMA7t6lidYOUd0AQ//Z7y+/tvQQ0TRoOT0ydUwVjJZh5sLQOEVQNDHGEUjfvL9
7UJhtEaisVwlDsqTEqpa04FWzgehBBDnxgOUFcPB3xSGD9Bi61MItK6gm1phTnEn
hOezHmGqAyrCarofkYn5vpwPZtpSmRvpS9tykhRTKMlhsN5EOLvaDa8TsqMnqwGm
pPC8j219YG2U/OmRa96GTslMaDtIx6470Ea4fcJf2jdo3RlgLEc7BGQVcrOpHj/0
-----END PGP MESSAGE-----

Cyphertext, less easy to distinguish:

pen3s grow for cheap
russion brides are looking for parntners in Detroid area
visit our website now
click to unsubscribe
Say what you actually want to do is make your encrypted email look like spam. OK, how to accomplish that? One possible way would be to take the ciphertext and break it down into manageable chunks of, say, nine bits each. Using a set of dictionaries, these nine-bit quantities are mapped to one or more words in a target language (nine bits would require a dictionary of 512 words, which is feasible while at the same time providing variation). A Markov chain could possibly be used to pick the next dictionary based on the word selected from the previous dictionary, which likely could be made to make the output resemble very poorly written text in the given language. By tweaking the interaction between the two parts, the output of such a scheme could conceivably be anything from nonsense to semi-legible text (much like a lot of spam emails). And it'll be text, not binary data. An even simpler variant would be to simply encode the ciphertext using something like the PGP word list. The result of that will of course be complete and utter nonsense, but it'll probably pass the simplest statistical tests for a given target language. Now that I've described these ideas, they are of course totally useless. You'll have to come up with something of your own. ;-)
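As a toy illustration of the "even simpler variant" (a made-up 256-entry dictionary standing in for something like the PGP word list; a sketch of the idea, not a recommendation), byte-to-word encoding and decoding is only a few lines:

    # Map each ciphertext byte to a word and back; the word list here is a placeholder.
    WORDS = [f"word{i:03d}" for i in range(256)]
    LOOKUP = {w: i for i, w in enumerate(WORDS)}

    def encode(ciphertext: bytes) -> str:
        return " ".join(WORDS[b] for b in ciphertext)

    def decode(text: str) -> bytes:
        return bytes(LOOKUP[w] for w in text.split())

    blob = b"\x8a\x01\xfe"            # pretend PGP output
    assert decode(encode(blob)) == blob

Note that the output of such a trivial scheme still has a very distinctive statistical fingerprint (uniform word frequencies, fixed vocabulary), which is exactly why the nine-bit, multi-dictionary, Markov-chain variant above would be needed to look anything like real spam.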
{ "source": [ "https://security.stackexchange.com/questions/45153", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32985/" ] }
45,170
I use LastPass to store and use my passwords, so I do not have duplicate passwords even if I have to register four to five different accounts a day, and the passwords are long. How safe are password manager services like LastPass? Don't they create a single point of failure? They are very attractive services for hackers. How can I trust the people behind these services and their security mechanisms? I imagine that a third party (government, company, etc.) would be very easy to 'bribe' and get all of my passwords. Are there any other solutions that offer similar services with similar ease of use?
We should distinguish between offline password managers (like Password Safe ) and online password managers (like LastPass ). Offline password managers carry relatively little risk. It is true that the saved passwords are a single point of failure. But then, your computer is a single point of failure too. The most likely cause of a breach is getting malware on your computer. Without a password manager, malware can quietly sit and capture all the passwords you use. With a password manager, it's slightly worse, because once the malware has captured the master password, it gets all your passwords. But then, who cares about the ones you never use? It is theoretically possible that the password manager could be trojaned, or have a back door - but this is true with any software. I feel comfortable trusting widely used password managers, like Password Safe. Online password managers have the significant benefit that your passwords are available on anyone's computer, but they also carry somewhat more risk. Partly that the online database could be breached (whether by hacking, court order, malicious insider, etc.) Also because LastPass integrates with browsers, it has a larger attack surface, so there could be technical vulnerabilities (which are unlikely with a standalone app like Password Safe). Now, for most people these risks are acceptable, and I would suggest that the approach of using a password manager like LastPass for most of your passwords is better than using the same password everywhere - which seems to be the main alternative. But I wouldn't store every password in there; make an effort to memorize your most important ones, like online banking. I know someone who won't use Password Safe and instead has a physical notebook with his passwords in obfuscated form. This notebook is obviously much safer against malware... whether it's at greater risk of loss/theft is an interesting question.
{ "source": [ "https://security.stackexchange.com/questions/45170", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31942/" ] }
45,270
What is the difference between "role based authorization" and "claim based authorization"? Under which circumstances would it be appropriate to implement each of these authorization models?
Claims are a method of providing information about a user, while roles describe a user in terms of which roles they belong to. Claims are generally more useful because they can contain arbitrary data -- including role membership information, e.g. whatever is useful for the given application. Claims-based identities are more useful, but tend to be trickier to use because there's a lot of setup involved in acquiring the claims in the first place. RBAC identities are less useful because they are just a collection of roles, but they are generally easier to set up. The .NET stack, and Windows as a whole, is going claims-based. Windows authn tickets are claims, and Active Directory now has the ability to use claims for certain functions. The .NET stack uses a claims identity as the base identity object now by default.
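To make the distinction concrete, here is a rough, framework-agnostic sketch (field names are invented, not any specific API): a role check can only ask "is the user in this group?", while a claims check can use arbitrary facts carried with the identity.

    # Illustrative data shapes only.
    user_roles = {"admin", "auditor"}
    user_claims = {
        "role": ["admin", "auditor"],     # roles fit inside claims...
        "employee_id": "E-1234",          # ...but claims can carry arbitrary facts
        "clearance": "secret",
        "can_approve_over": 10_000,
    }

    def rbac_can_approve(roles):
        # All-or-nothing: membership in a role is the only available signal.
        return "admin" in roles

    def claims_can_approve(claims, amount):
        # Fine-grained: combine role membership with other attested facts.
        return "admin" in claims["role"] and amount <= claims["can_approve_over"]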
{ "source": [ "https://security.stackexchange.com/questions/45270", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32407/" ] }
45,272
It certainly would be more convenient to store my KeePass database on either S3, Dropbox, or better yet SpiderOak. My fear is having my cloud storage account compromised then having the credentials recovered by either brute force or some other attack vector. How safe is this? What risks do I need to know about?
It is hard to quantify exactly, but if you have the DB on a mobile device then I wouldn't say this is particularly any less secure. KeePass encrypts the DB because the file remaining secure isn't expected to be a guarantee. It's certainly preferable that the DB file not get in the wild, but if your security depends on the encrypted file remaining confidential, then you have bigger problems than whether to use cloud storage or not. A sufficiently strong master password should prevent brute forcing at least long enough for a breach to be detected and for you to change the passwords within it. In this way, it may even be slightly preferable to having a local copy on a mobile device as someone may compromise the file if you take your eyes off your device even momentarily and it would be much harder to identify that breach occurred. If you want to secure it even further, you can add another layer of security by encrypting the file you store in cloud storage online. The master password provides pretty good security as long as you choose a difficult to brute force password (long and truly random), but it still can't compete with an actual long encryption key. If you encrypt the file that you store online and then keep that key with you protected by a similar master password, now the online component alone is much, much harder to decrypt (likely impossible if done correctly) and if your key file gets compromised, you simply re-encrypt your online DB immediately with a new key. You're still in trouble if someone can compromise your cloud account first and get the file, but it requires two points of compromise instead of one. Personally, I'd probably end up using my OwnCloud (which is self hosted), but I have the advantage of having my own personal web server and I realize that's not an option everyone can take advantage of. (The only reason I haven't is that I don't have a particular need to coordinate a key database in that manner.) A public cloud based solution should work as a fine second option though.
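If you want that extra layer, one possible approach (a sketch under the assumption that you can keep a key file off the cloud account; file names are placeholders) is to encrypt the database locally before uploading, for example with the Python cryptography package's Fernet construction (AES-128-CBC plus HMAC-SHA256):

    from cryptography.fernet import Fernet

    def make_key(path="keepass-upload.key"):
        key = Fernet.generate_key()
        with open(path, "wb") as fh:
            fh.write(key)                  # keep this file OFF the cloud account
        return key

    def encrypt_db(key, src="passwords.kdbx", dst="passwords.kdbx.enc"):
        with open(src, "rb") as fh:
            blob = Fernet(key).encrypt(fh.read())
        with open(dst, "wb") as fh:
            fh.write(blob)                 # upload only the .enc file

As noted above, this means an attacker now needs both the cloud copy and the locally held key (plus your KeePass master password) rather than the cloud copy alone, and a compromise of the key file can be handled by re-encrypting the online copy with a fresh key.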
{ "source": [ "https://security.stackexchange.com/questions/45272", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34088/" ] }
45,318
How long is the encryption/decryption key for a symmetric algorithm such as AES? If I use AES 128-bit, how many characters should I type in for my key? What about AES 256-bit? Edit: Here is why I am asking: I am trying to use OpenSSL to encrypt some data using Node.js, PHP and the command line. I need to pass in the key. When I tried a 32-letter key for AES-128, it returned a key length error. When I tried 32 for AES-256 it returned a general key error. I have no idea how I am supposed to enter the encryption key...
An AES 128-bit key can be expressed as a hexadecimal string with 32 characters. It will require 24 characters in base64. An AES 256-bit key can be expressed as a hexadecimal string with 64 characters. It will require 44 characters in base64.
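A quick way to convince yourself of these figures (and of the difference between a 32-character passphrase and a 32-hex-digit key, which may be behind the errors described in the question) is a few lines of Python:

    import base64, os

    k128, k256 = os.urandom(16), os.urandom(32)        # 128-bit and 256-bit raw keys
    print(len(k128.hex()), len(k256.hex()))             # 32, 64 hex characters
    print(len(base64.b64encode(k128)),                  # 24 base64 characters
          len(base64.b64encode(k256)))                  # 44 base64 characters

In other words, the key itself is 16 or 32 raw bytes; the 32/64 and 24/44 character counts are just what those bytes look like once encoded as hex or base64 for tools that take the key as text.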
{ "source": [ "https://security.stackexchange.com/questions/45318", "https://security.stackexchange.com", "https://security.stackexchange.com/users/25919/" ] }
45,344
Imagine that I am hashing the users' passwords with a random, long enough, salt, using key stretching, and a secure hash. Would it be more secure to finally encrypt the hash with a symmetric key?
What's the threat model? It could be of benefit to protect the password storage with an application-side secret, but only if you think it's a realistic scenario that your database is going to get compromised—without your application server, which holds the secret, also having been compromised. Usually, the application layer is considered the more vulnerable part and the database layer more protected, and for this reason it is not considered useful to protect the data with an application-side secret. But that may not be the case for every app, and personally I would somewhat question this assumption given the prevalence of SQL injection. The downside of introducing an app-side secret is that you now have to worry about key management processes. What's your mechanism for replacing the key if it gets compromised? Are you going to rotate it as a matter of course? How is your key stored so that you can move servers and not lose it (potentially making all your data useless)? For short-lived keys (used for eg signing session tokens) you may not care but for long-term password storage it becomes important. The further downside with symmetric ciphers (especially block ciphers) is implementing them correctly. Since you are hashing and don't need recoverability, you could instead include the secret in the hash (as a ‘pepper’)... but then you wouldn't be able to re-hash all the password data on a key change, so you'd have to have a multi-key method for rollover. ETA re comment @Pacerier: Hashing does not prevent an offline guessing attack by someone who has obtained the password database. It only increases the amount of time it takes for that attack to bear fruit; with a hashing scheme of appropriate difficulty (ie rounds of key derivation rather than raw salt length per se), it will hopefully increase that amount of time to long enough for you to notice you've been hacked and change everyone's passwords before too many of their accounts are lost. Protecting the content with a secret key (either using encryption or including the secret key in the hashed material) does prevent the offline guessing attack, but only as long as the secret key itself is not leaked. If the secret key is on the app server separate from the database server, the passwords are only attackable if both machines are compromised. How much of a benefit that is arguable, given that many kinds of attack can end up compromising both, especially if the app server and database server are the same machine. There also is a price to pay in manageability as discussed. So whether there is net benefit needs to be judged on a case-by-case basis with reference to the threat model.
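For concreteness, here is a minimal sketch of the "pepper" variant discussed above (the environment variable name is made up; treat this as an illustration of the idea, not a vetted design). The pepper lives with the application, not in the database, so a database-only leak yields hashes that cannot be guessed offline without it:

    import hashlib, hmac, os

    APP_PEPPER = os.environ.get("APP_PEPPER", "").encode()   # held on the app server only

    def hash_password(password: str, salt: bytes) -> bytes:
        # Slow, salted hash first; then bind the result to the app-side secret.
        stretched = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.new(APP_PEPPER, stretched, hashlib.sha256).digest()

    def verify(password: str, salt: bytes, stored: bytes) -> bool:
        return hmac.compare_digest(hash_password(password, salt), stored)

    # The rollover problem mentioned above: changing APP_PEPPER invalidates every stored
    # hash, because they cannot be recomputed without the original passwords.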
{ "source": [ "https://security.stackexchange.com/questions/45344", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32994/" ] }
45,439
There seems to be similar encryption going into both. They both use asymmetric (RSA, elliptic curves, etc) for the initial key exchange, then go to some symmetric (AES, Blowfish, etc) protocol. I'm wondering if someone here might have a moment to explain the difference between the 2 and how they would appear differently to the user (speed, applications they're present in, etc).
@graham-hill's answer is correct in general but short and pedantically incorrect, so I'll expand upon it. SSH is at the Application level - you can think of it as SSH/TCP/IP, or SSH-over-TCP-over-IP. TCP is the Transport layer in this mix, IP is the Internet layer. Other "Application" protocols include SMTP, Telnet, FTP, HTTP/HTTPS... IPSec is implemented using two separate Transports - ESP ( Encapsulating Security Payload for encryption) and AH ( Authentication Header for authentication and integrity). So, let's say a Telnet connection is being made over IPSec. You can generally envision it as Telnet/TCP/ESP/AH/IP. (Just to make things interesting, IPSec has two modes - Transport (outlined above) and Tunnel. In tunnel mode, the IP layer is encapsulated within IPSec and then transmitted over (usually) another IP layer. So you'd end up with Telnet/TCP/IP/ESP/AH/IP!) So - you ask "explain the difference between the two" - SSH is an Application and IPSec is a Transport. So SSH carries "one" type of traffic, and IPSec can carry "any" type of TCP or UDP traffic.* This has implications described below: You ask, "how they would appear differently to the user" - Because IPSec is transport layer, it should be invisible to the user - just as the TCP/IP layer is invisible to the user of a web browser. In fact, if IPSec is being used, then it is invisible to the writer of the web browser and web server - they don't have to worry about setting up or not setting up IPSec; that's the job of the system administrator. Contrast this to HTTPS, where the server needs a private key and certificate properly enabled, SSL libraries compiled in, and code written; the client browser has SSL libraries compiled in and code written and either has the appropriate CA public key or screams loudly (intruding upon the application) if it isn't there. When IPSec is set up, because it acts at the transport layer, it can support multiple applications without any trouble. Once you've configured IPSec between two systems, it's a really minor difference between working for 1 application or 50 applications. Likewise, IPSec makes an easy wrapper for protocols that use multiple connections like FTP (control and data may be separate connections) or H.323 (not only multiple connections, but multiple Transports (TCP and UDP)!). From the administrator's point of view, though, IPSec is heavier-weight than SSH or SSL. It requires more setup as it works its magic at the system level rather than the application level. Each relationship (host-to-host, host-to-net, or net-to-net) needs to be set up individually. On the other side, SSH and SSL are generally opportunistic - using either a PKI (CA) or web of trust model, they make it reasonably easy to trust the other end of the connection without much setup ahead of time. IPSec can be opportunistic, but it isn't generally used that way, in part because configuring an IPSec connection often requires trial and error. Interoperability between different vendors (Linux, Checkpoint, Cisco, Juniper, Etc) is not seamless; the configuration used to make Cisco<->Checkpoint is likely to be different than the configuration used to make Cisco<->Juniper. Contrast that with SSH - the OpenSSH server doesn't really care if you're running Putty, OpenSSH (*nix or Cygwin) SSH, or Tectia SSH. 
As far as speed goes - I don't have any good numbers to give you, and part of the answer is "it depends" - if IPSec is involved, then the actual IPSec processing is likely to be offloaded to a firewall or VPN concentrator with dedicated fast hardware, making it faster than SSH which is pretty much always crypting using the same general-purpose CPU that's running the server operating system and applications as well. So it's not really apples-to-apples; I'll bet you could find use cases where either of the two options is faster. * This isn't strictly true. SSH supports a terminal application, file copy/ftp application, and TCP tunneling application. But effectively it's just a really rich application, not a transport - its tunneling is not an IP Transport replacement.
{ "source": [ "https://security.stackexchange.com/questions/45439", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15660/" ] }
45,452
I tend to use Git for deploying production code to the web server. That usually means that a master Git repository is hosted somewhere accessible over ssh, and the production server serves that cloned repository, while restricting access to .git/ and .gitignore. When I need to update it, I simply pull to the server's repository from the master repository. This has several advantages:

- If anything ever goes wrong, it's extremely easy to roll back to an earlier revision - as simple as checking it out.
- If any of the source code files are modified, checking is as easy as git status, and if the server's repository has been modified, it will become obvious the next time I try to pull.
- It means there exists one more copy of the source code, in case bad stuff happens.
- Updating and rolling back is easy and very fast.

This might have a few problems though:

- If for whatever reason the web server decides it should serve the .git/ directory, all the source code there was and is becomes readable for everyone. Historically, there were some (large) companies who made that mistake. I'm using an .htaccess file to restrict access, so I don't think there's any danger at the moment. Perhaps an integration test making sure nobody can read the .git/ folder is in order?
- Everyone who accidentally gains read access to the folder also gains access to every past revision of the source code that used to exist. But it shouldn't be much worse than having access to the present version. After all, those revisions are obsolete by definition.

All that said, I believe that using Git for deploying code to production is reasonably safe, and a whole lot easier than rsync, ftp, or just copying it over. What do you think?
I would go as far as to consider using git for deployment very good practice. The two problems you listed have very little to do with using git for deployment itself. Substitute .git/ for the config file containing database passwords and you have the same problem. If I have read access to your web root, I have read access to whatever is contained in it. This is a server hardening issue that you have to discuss with your system administrator.

git offers some very attractive advantages when it comes to security:

- You can enforce a system for deploying to production. I would even configure a post-receive hook to automatically deploy to production whenever a commit to master is made (I am assuming, of course, a workflow similar to git flow); a sketch of such a hook follows below.
- git makes it extremely easy to roll back the code deployed on production to a previous version if a security issue is raised. This can be helpful in blocking access to mission-critical security flaws that you need time to fix properly.
- You can enforce a system where your developers have to sign the commits they make. This can help in tracing who deployed what to production if a deliberate security flaw is found.
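As a sketch of that post-receive idea (paths, branch name and repository layout are assumptions for illustration; a real setup needs permissions and error handling thought through), a hook in hooks/post-receive of the bare server-side repository could look like this:

    #!/usr/bin/env python3
    # Hypothetical deploy hook: check out master into the web root whenever master is pushed.
    import subprocess, sys

    DEPLOY_DIR = "/var/www/example"     # directory served by the web server (assumed)
    GIT_DIR    = "/srv/git/site.git"    # the bare repository this hook belongs to (assumed)

    for line in sys.stdin:              # git feeds "old-sha new-sha refname" per updated ref
        old, new, ref = line.split()
        if ref == "refs/heads/master":
            subprocess.run(
                ["git", f"--work-tree={DEPLOY_DIR}", f"--git-dir={GIT_DIR}",
                 "checkout", "-f", "master"],
                check=True,
            )
            # Rollback is the same command with an earlier commit instead of "master".

Because the web root is a plain work tree rather than a clone, there is no .git/ directory under the document root at all, which also removes the first problem described in the question.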
{ "source": [ "https://security.stackexchange.com/questions/45452", "https://security.stackexchange.com", "https://security.stackexchange.com/users/5767/" ] }
45,491
I am a newbie in this field. I am confused with why SSL certificates cannot be free of charge. In my understanding the certificate is just a text file consisting of cryptic numbers installed on a server. What do the SSL certificates cost?
SSL certificates provide two things: encryption and authentication.

For encryption, any SSL certificate will do. You can use a self-signed certificate, which you can make free of charge, and it will provide encrypted communication between your server and a client. The problem is that since it lacks any authentication, an attacker could simply make their own certificate and claim to be the server you want to connect to. Your browser wouldn't know the difference and would connect to the attacker with an encrypted connection, and the attacker could then attach to the real server and monitor all your communication.

To avoid this problem, SSL certificates also need to provide authentication, and that means that someone has to verify domain ownership and identity information. The policies have to be administered and systems have to be run to handle dealing with lost keys. Relationships also have to be built with browser makers to get the root keys for the certificate authorities into the applications. This all has costs, and so those costs are passed on to those who buy SSL certificates from a Certificate Authority. In exchange for that cost, the CA verifies the identity of the organization and domain that they are issuing the certificate to. Now back in our original case, the attacker may be able to get between the client and the server, but they can't get the client to connect to their SSL certificate since it isn't trusted, and if the client connects with the real SSL certificate, then the encryption kicks in, blocking the attacker from being able to monitor what is happening.

More recently, the service Let's Encrypt has appeared and offers a limited selection of free certificates. They are able to do this for three main reasons:

First, they have generous sponsors who support their operating costs. Lack of encryption and trust on the internet has become a growing problem in recent years as attackers have grown increasingly capable. This need, and the cost of dealing with the lack of trust on the internet, has led to Let's Encrypt being able to get funding.

Second, they offer an extremely limited portfolio of certificate options. They lack the facilities for handling EV certificates or even identity validation. They only offer domain validation and only offer extremely short validity periods due to the automated nature of their verification.

Third, they drastically limited their costs by cutting humans out of the equation. Rather than have conventional validation, Let's Encrypt works purely from automated domain-level validation via the ACME protocol. This is good enough to establish a low level of trust that the domain server is being run by the same person that controls the domain name, but not good for anything else.

While it is a free option, unless you are certain of the identity of the website operator, it isn't nearly as good or as trustworthy as certificates available from paid CAs who do further identity verification prior to issuing certificates (though it's of equal value to domain-validated certificates offered by other CAs, some of which also offer similar automated free or low-cost options, though with even greater limitations on the certs offered).
{ "source": [ "https://security.stackexchange.com/questions/45491", "https://security.stackexchange.com", "https://security.stackexchange.com/users/25463/" ] }
45,509
PPTP is the only VPN protocol supported by some devices (for example, the Asus RT-AC66U WiFi router). If PPTP is configured to only use the most secure options, does its use present any security vulnerabilities? The most secure configuration of PPTP is to exclusively use: MPPE-128 encryption (which uses RC4 encryption with a 128bit key) MS-CHAPv2 authentication (which uses SHA-1) strong passwords (minimum 128 bits of entropy) I realize that RC4 and SHA-1 have weaknesses, but I am interested in practical impact. Are there known attacks or exploits that would succeed against a PPTP VPN with the above configuration?
Yes. The protocol itself is no longer secure, as cracking the initial MS-CHAPv2 authentication can be reduced to the difficulty of cracking a single DES 56-bit key, which with current computers can be brute-forced in a very short time (making a strong password largely irrelevant to the security of PPTP as the entire 56-bit keyspace can be searched within practical time constraints). The attacker can do a MITM to capture the handshake (and any PPTP traffic after that), do an offline crack of the handshake and derive the RC4 key. Then, the attacker will be able to decrypt and analyse the traffic carried in the PPTP VPN. PPTP does not provide forward secrecy, so just cracking one PPTP session is sufficient to crack all previous PPTP sessions using the same credentials. Additionally, PPTP provides weak protection to the integrity of the data being tunneled. The RC4 cipher, while providing encryption, does not verify the integrity of the data as it is not an Authenticated Encryption with Associated Data (AEAD) cipher. PPTP also doesn't do additional integrity checks on its traffic (such as HMAC), and is hence vulnerable to bit-flipping attacks, ie. the attacker can modify PPTP packets with little possibility of detection. Various discovered attacks on the RC4 cipher (such as the Royal Holloway attack) make RC4 a bad choice for securing large amounts of transmitted data, and VPNs are a prime candidate for such attacks as they by nature usually transmit sensitive and large amounts of data. If you want to, you can actually try cracking a PPTP session yourself. For a Wi-Fi user, it involves ARP poisoning your target such that the target sends the MSCHAPv2 handshake through you (which you can capture with Wireshark or any other packet capture tool). You can then crack the handshake with tools like Chap2Asleap, or if you have a few hundred dollars to spare submit the captured handshake to online cracking services. The recovered username, hash, password and encryption keys can then be used to impersonate logins to the VPN as that user, or to retroactively decrypt the target's traffic. Obviously, please do not do this without proper authorisation and outside a controlled environment . In short, please avoid using PPTP where possible. For more information, see http://www.computerworld.com/s/article/9229757/Tools_released_at_Defcon_can_crack_widely_used_PPTP_encryption_in_under_a_day and How can I tell if a PPTP tunnel is secure? . Issues discovered with RC4 (resulting in real world security issues in protocols like TLS) can be found in http://www.isg.rhul.ac.uk/tls/RC4mustdie.html and https://www.rc4nomore.com/ For the cracking portion, refer to https://www.rastating.com/cracking-pptp-ms-chapv2-with-chapcrack-cloudcracker/ and https://samsclass.info/124/proj14/p10-pptp.htm .
{ "source": [ "https://security.stackexchange.com/questions/45509", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34241/" ] }
45,712
On all our boxes we have ssh access via keys. All keys are password protected. At this moment sudo is not passwordless. Because the number of VMs in our setup is growing, we are investigating the use of Ansible. Ansible itself says in the docs: "Use of passwordless sudo makes things easier to automate, but it's not required." This got me thinking about passwordless sudo, and I found some questions/answers here. However, I couldn't really find anything about the security concerns of passwordless sudo per se. It could happen that, in the future, user Bob has a different password on machine X than on Y. When Bob is a sudoer, this causes problems with Ansible, which expects a single sudo password for all boxes. Given the facts that ssh is done via keys, keys are password protected and all user accounts have passwords (so su bob is impossible without a password): how is security affected when NOPASSWD is set in the sudo file?
NOPASSWD doesn't have a major impact on security. Its most obvious effect is to provide protection when the user left his workstation unattended: an attacker with physical access to his workstation can then extract data, perform actions and plant malware with the user's permissions, but not elevate his access to root. This protection is of limited use because the attacker can plant a keylogger-type program that records the user's password the next time he enters it at a sudo prompt or in a screensaver. Nonetheless, requiring the password does raise the bar for the attacker. In many cases, protection against unsophisticated attackers is useful, particularly in unattended-workstation scenarios where the attack is often one of opportunity and the attacker may not know how to find and configure discreet malware at short notice. Furthermore it is harder to hide malware when you don't have root permissions — you can't hide from root if you don't have root. There are some realistic scenarios where the lack of a password does protect even against sophisticated attackers. For example, a stolen laptop: the user's laptop is stolen, complete with private SSH keys; either the thief manages to guess the password for the key file (perhaps by brute force), or he gains access to them from a memory dump of a key agent. If the theft is detected, this is a signal to investigate recent activity on that user's account, and this means that a planted malware should be detected. If the attacker only had user-level access, anything he did will leave traces in logs; if the attacker obtained the user's password and ran sudo, all logs are now compromised. I don't know whether the downsides of NOPASSWD balance the upsides for your use case. You need to balance that against all the other factors of your situation. For example, it seems that you allow but don't enforce having different passwords. Can you instead use a centralized account database? How much containment do you need between your systems? Are you considering alternatives to Ansible that would support differing sudo passwords? Have you considered other authentication mechanisms?
{ "source": [ "https://security.stackexchange.com/questions/45712", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34405/" ] }
45,770
Looking at all the people who question the viability of DNSSEC , it's no wonder that the adoption rates are so poor . However, what about DNSCurve? It supposedly fixes all the DNS security and privacy problems independent of DNSSEC, doesn't suffer from the problems that are specific and unique to DNSSEC, and, should one disregard the maturity of either approach, seems to be a clear win for the situation — yet even though it's much-much younger than DNSSEC, there are still practically no implementations for DNSCurve, other than djbdns and DNSCrypt support by OpenDNS. Why?
There's a problem: DNSCurve is more like TLS for DNS servers, in comparison to DNSSEC, which is signed records. DNSCurve uses point-to-point cryptography to secure communication, while DNSSEC uses pre-calculated signatures to ensure the accuracy of the supplied records. So we can summarize it like this:

DNSSEC: Accurate Results
DNSCurve: Encrypted Traffic

Theoretically you can use traffic encryption to ensure accuracy, the way TLS does for websites. Except that it's not really the encryption that's ensuring your accuracy, it's the authentication provided through the PKI. And there's a set of critical problems with the basic DNSCurve PKI.

The first problem here is that with DNSCurve, each and every DNS server involved needs a private key, and since the key signature is encoded into the resolver's address, then in the case of anycast DNS servers, each server needs the same private key. But even if they use different keys, you're still trusting the local security where the DNS server is installed. If the server is installed somewhere hostile, then the results can be compromised. This is not true with DNSSEC.

ICANN has stated that, in the case of the DNS root zone servers, DNSCurve will not be implemented, ever. Many of the root servers operate in less-trusted locations, and the potential for abuse by local governments would be enormous. This is precisely why DNSSEC was designed such that signing happens outside the DNS server. DNS relies on a vast network of servers which may not be individually trustworthy, so DNSSEC was designed such that the trust is based solely on the information they serve, not the honesty of the operator.

The second problem is that DNSCurve secures the public key by encoding it into the resolver name. But DNSSEC does not sign the resolver name. This means that DNSSEC (which is implemented in the root zone) cannot be used as a trust root for DNSCurve, because the one thing that DNSCurve requires to be accurate is in fact the very thing for which DNSSEC cannot ensure accuracy.

So essentially DNSCurve is pretty much a non-starter. While it can be used to guarantee the security of your communication with a single DNS resolver, there currently is no way of globally anchoring your trust in a way that could guarantee the accuracy of any results you retrieve. Unless DNSCurve is re-designed to allow for trusted key distribution, it will have to remain a client-side security tool rather than a tool for ensuring the authenticity of DNS records.

Since DNSCurve is relatively new and was developed largely by djb in isolation, presumably these show-stopping issues were simple oversights on his part, and may be fixed at some future date. Though given Dr. Bernstein's track record of maintaining his inventions, I wouldn't hold my breath.
{ "source": [ "https://security.stackexchange.com/questions/45770", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16831/" ] }
45,850
I notice that in /usr/share/wordlists in Kali Linux (formerly BackTrack) there are some lists. Are they used to brute force something? Are there specific lists for specific kinds of attacks?
Kali Linux is a distribution designed for penetration testing and computer forensics, both of which involve password cracking. So you are right in thinking that word lists are involved in password cracking, however it's not brute force. Brute force attacks try every combination of characters in order to find a password, while word lists are used in dictionary-based attacks. Many people base their password on dictionary words, and word lists are used to supply the material for dictionary attacks. The reason you want to use dictionary attacks is that they are much faster than brute force attacks. If you have many passwords and you only want to crack one or two, then this method can yield quick results, especially if the password hashes come from places where strong passwords are not enforced.

Typical tools for password cracking (John the Ripper, ophcrack, hashcat, etc.) can do several types of attacks, including:

- Standard brute force: all combinations are tried until something matches. You typically use a character set common on the keyboards of the language used to type the passwords, or you can use a reduced set like alphanumeric plus a few symbols. The size of the character set makes a big difference in how long it takes to brute force a password. Password length also makes a big difference. This can take a very long time depending on many factors.
- Standard dictionary: straight dictionary words are used. It's mostly used to find really poor passwords, like password, password123, system, welcome, 123456, etc.
- Dictionary attack with rules: in this type dictionary words are used as the basis for cracks; rules are used to modify these, for instance capitalizing the first letter, adding a number to the end, or replacing letters with numbers or symbols.

Rules attacks are likely the best bang for the buck if all you have are standard computing resources, although if you have GPUs available brute-force attacks can be made viable as long as the passwords aren't too long. It depends on the password length, the hashing/salting used, and how much computing power you have at your disposal.
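To illustrate the difference in scale (an educational toy only, with a three-word list standing in for files like rockyou.txt and a couple of hand-written rules standing in for John/hashcat rule files):

    import hashlib, itertools

    target = hashlib.md5(b"Welcome1").hexdigest()       # pretend leaked, unsalted hash
    wordlist = ["password", "welcome", "admin"]

    def with_rules(word):
        yield word
        yield word.capitalize()
        for n in range(10):                             # rule: capitalize + append a digit
            yield word.capitalize() + str(n)

    # Dictionary + rules: a handful of guesses per word.
    for guess in itertools.chain.from_iterable(with_rules(w) for w in wordlist):
        if hashlib.md5(guess.encode()).hexdigest() == target:
            print("cracked:", guess)
            break

    # Pure brute force over lowercase+digits of length 8 would be 36**8 (about 2.8e12) guesses.

The few dozen rule-generated guesses here find the password immediately, while the equivalent brute force search space is twelve orders of magnitude larger; that is the whole argument for shipping word lists with the distribution.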
{ "source": [ "https://security.stackexchange.com/questions/45850", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31325/" ] }
45,963
Can someone explain what the Diffie-Hellman Key Exchange algorithm is, in plain English? I have read that Twitter has implemented this technology, which allows two parties to exchange encrypted messages on top of a non-secured channel. How does that work?
Diffie-Hellman is a way of generating a shared secret between two people in such a way that the secret can't be seen by observing the communication. That's an important distinction: you're not sharing information during the key exchange, you're creating a key together.

This is particularly useful because you can use this technique to create an encryption key with someone, and then start encrypting your traffic with that key. And even if the traffic is recorded and later analyzed, there's absolutely no way to figure out what the key was, even though the exchanges that created it may have been visible. This is where perfect forward secrecy comes from. Nobody analyzing the traffic at a later date can break in because the key was never saved, never transmitted, and never made visible anywhere.

The way it works is reasonably simple. A lot of the math is the same as you see in public key crypto in that a trapdoor function is used. And while the discrete logarithm problem is traditionally used (the x^y mod p business), the general process can be modified to use elliptic curve cryptography as well. But even though it uses the same underlying principles as public key cryptography, this is not asymmetric cryptography because nothing is ever encrypted or decrypted during the exchange. It is, however, an essential building block, and was in fact the base upon which asymmetric crypto was later built.

The basic idea works like this:

1. I come up with a prime number p and a number g which is coprime to p-1 and tell these numbers to you. (FYI: p for "Prime", g for "Generator")
2. You then pick a secret number (a), but you don't tell anyone. Instead you compute g^a mod p and send that result back to me. (We'll call that "A" since it came from a.)
3. I do the same thing, but we'll call my secret number b and the computed number B. So I compute g^b mod p and send you the result (called "B").
4. Now, you take the number I sent you and do the exact same operation with it. So that's: B^a mod p.
5. I do the same operation with the result you sent me, so: A^b mod p.

The "magic" here is that the answer I get at step 5 is the same number you got at step 4. Now it's not really magic, it's just math, and it comes down to a fancy property of modulo exponents. Specifically:

(g^a mod p)^b mod p = g^(ab) mod p
(g^b mod p)^a mod p = g^(ba) mod p

Which, if you examine it closer, means that you'll get the same answer no matter which order you do the exponentiation in. So I do it in one order, and you do it in the other. I never know what secret number you used to get to the result and you never know what number I used, but we still arrive at the same result. That result, that number we both stumbled upon in steps 4 and 5, is our shared secret key. We can use that as our password for AES or Blowfish, or any other algorithm that uses shared secrets. And we can be certain that nobody else, nobody but us, knows the key that we created together.

FYI: DH Parameters
The numbers from step 1 (p and g) are the "Diffie-Hellman parameters". You usually compute these well ahead of time since it takes a while. These aren't secret, so they tend to be reused. For a long time many servers all just used the same values for p and g, and not particularly good ones, leading to a bit of a kerfuffle (which got named Logjam) that might be fun to read about if you're interested.
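If it helps, the whole exchange fits in a few lines of Python with toy numbers (deliberately tiny and insecure; real deployments use 2048-bit primes or elliptic-curve equivalents):

    import random

    p, g = 23, 5                      # public "DH parameters" (step 1)

    a = random.randrange(2, p - 1)    # your secret (step 2)
    b = random.randrange(2, p - 1)    # my secret   (step 3)

    A = pow(g, a, p)                  # what you send me:  g^a mod p
    B = pow(g, b, p)                  # what I send you:   g^b mod p

    shared_you = pow(B, a, p)         # step 4: (g^b)^a mod p
    shared_me  = pow(A, b, p)         # step 5: (g^a)^b mod p
    assert shared_you == shared_me    # same secret, never transmitted

An eavesdropper sees p, g, A and B, but recovering a or b from them is the discrete logarithm problem, which is exactly the trapdoor the scheme relies on.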
{ "source": [ "https://security.stackexchange.com/questions/45963", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
46,134
Today I was trying to uninstall an application and was very surprised to see an unfamiliar entry in my applications list. I tried to find out what it was and finally found it in "Program Files". After I opened the application and explored it a little, I found a configuration window, and the email address shown in it is my spouse's email address. What I take from this is that my spouse has installed this application to spy on my computer and then send reports to her email. What should I do with my computer?
Change all your passwords! (no one had mentioned this) This is assuming that you're going to take an open approach to this problem rather than engage in counter-spying or image manipulation of your own. It's fairly basic advice, but do this on a computer you trust (this one cleaned or at work), and don't re-use any of your old passwords. Personally, I like using LastPass to store my passwords (generally random characters) but your mileage may vary. If you have things to hide (or just value your privacy) consider also using TrueCrypt to encrypt your system drive - you have a terrible security situation in that the attacker has physical access to the computer so let's make it hard for them to read the data while you're away. And then, on a personal level, perhaps prepare for the worst regarding your relationship. The fact that she (or he) installed this either shows a desire to leave anyway, or a serious lack of trust from them - and if that's not recognised and dealt with, it underlies all your dealings with each other. Update: On the passwords/account compromised note, you may want to check key accounts to make sure the recovery details haven't been changed or that no forwarding is going on. For example: GMail lets you set recovery options (click on your face - account - security - recovery) or set up a filter that silently forwards everything (gear icon - settings - filters). Consider setting up two-factor authentication on accounts that support it and look for ways to log-off other sessions. Google has a checklist that covers some other things.
{ "source": [ "https://security.stackexchange.com/questions/46134", "https://security.stackexchange.com", "https://security.stackexchange.com/users/26985/" ] }
46,270
I've come across the following website hxxp://politie.nl.id169787298-7128265115.e2418.com/ [Possible malware] Whenever I open it in Firefox it prevents me from closing it; I can't even close the browser without using the task manager. The site asks for money to unlock it, and I can imagine some people could fall for it. How can this happen? I thought Firefox was a more or less secure browser? I haven't checked it in other browsers.
Update: this is no longer a concern on Firefox (29+) and Chrome (version? not sure if it was ever an issue there). Firefox will only display a single dialog now. Firefox 31 additionally makes the dialog non-modal, and will also close the window if you press the close button a second time. Unfortunately, IE11 still shows multiple dialogs. I'm not sure if Microsoft is aware of this issue. The method is pretty simple, actually. If you inspect the source, you'll see a lot of repetition of this line: <iframe test="test" srcdoc="<script> window.onbeforeunload=function(env){ return 'YОUR BRОWSЕR HAS BЕЕN LОCKЕD. АLL PC DАTА WILL BЕ DЕTАINЕD АND CRIMINАL PRОCЕDURЕS WILL BЕ INITIАTЕD АGАINST YОU IF THЕ FINЕ WILL NОT BЕ PАID.';} </script>" src="au/close.php"></iframe> Basically, they have a whole bunch of iframes and each iframe can trigger the 'are you sure you want to leave' message once. It's not infinite by any means (as Braiam said, you can just hold down enter on the leave page), but it's probably enough to trick some people. This is arguably behaving as intended - though it may be better to add one of those "don't show again" checkboxes, much like they do for alert popups. Another way to prevent this kind of thing is to disable JavaScript on sites you don't trust. NoScript works well. This can, however, be somewhat annoying and/or break sites at times. For the interested: Chrome actually displays one long dialog. That comes with its own problems - and this is a browser bug - the buttons get pushed out of the bottom of the page (happens with any too-long Chrome popup). You can still hit enter . The Chrome dialog seems to be initiated by a different alert popup, and it actually doesn't seem to display the iframe unload text at all - it's likely they added this specifically to trap Chrome users: <script type="text/javascript">window.onbeforeunload = function(env){var str = '\n\nUw br' + 'оws' + 'еr is gе' + 'blо' + 'kkе' + 'еrd.\n\nAl' + 'le P' + 'C dа' + 'tа zu' + 'lle' + 'n wо' + 'rde' + 'n vа' + 'stgе' + 'hou' + 'den e' + 'n st' + 'rаf' + 'rес' + 'htе' + 'li' + 'jk' + 'е pr' + 'ocе' + 'dur' + 'es zu' + 'lle' + 'n wо' + 'rd' + 'en ing' + 'еlе' + 'id te' + 'ge' + 'n u al' + 's d' + 'e bo' + 'еt' + 'е ni' + 'et wo' + 'rd' + 't be' + 'taa' + 'ld.\n\n'; alert(str); return str; } </script> Again, it's nothing more than a scary popup that seems to be impossible to get rid of. Malicious? Yes. Actually dangerous? Not really. As long as you don't fall for the scam. Also of potential interest is that they do display your public IP and approximate location. This is a fairly standard scare tactic on these scam sites and really isn't anything special - any site you connect to can get your public IP, and public GeoIP databases can usually provide the approximate location. If you're really worried, go get a VPN or anonymous proxy to connect through.
{ "source": [ "https://security.stackexchange.com/questions/46270", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34890/" ] }
46,569
On some accounts I use my real name online (Google+/Facebook/Wikipedia/personal blog); on others (Q&A/gaming) I use an alias. My question is: security- and privacy-wise, what can people do with my real name? What are the dangers of using your real name online?
This is actually an interesting new field in infosec - reputation management . Employers, Law Enforcement and other government agencies, legal professionals, the press, criminals and others with an interest in your reputation will be observing all online activity associated with your real name. These "interested parties" (snoops) are usually terrible at separating professional and personal life, so you could be made to suffer for unpopular opinions, political or religious convictions, associates or group affiliations they consider "unsavory", and any behavior that can be interpreted in the most uncharitable light. (Teachers have been forced to resign for drinking wine responsibly while vacationing in Europe. No, really. ) Conversely, you need an online presence, otherwise you will be made to suffer for a lack of things for the snoops to spy on - employers, especially (from Forbes): Key takeaway for hiring employers: The Facebook page is the first interview; if you don’t like a person there, you probably won’t like working with them. The bad news for employers, though, who are hoping to take the Facebook shortcut: “So many more profiles are restricted in what the public can access,” says Kluemper. You must carefully balance your public and private personas. Give as little information as possible in your public persona, and be mindful that unknown entities who may be antagonistic toward you will look to use whatever you put online against you. For instance - you announce you're going to visit relatives for the weekend! Robbers and vandals may take notice (from Ars Technica:) 39-year-old Candace Landreth and 44-year-old Robert Landreth Jr. allegedly used Facebook to see which of their friends were out of town. If a post indicated a Facebook friend wasn't home, the two broke into that friend's house and liberated some of their belongings. Social media companies such as Facebook and Google have proven to be hostile to the notion of privacy, and continually change their terms of service and "privacy settings" without consent to share more and more of your information with others. You cannot rely on them to protect your public reputation from your personal life. From NBC: The Internet search giant is changing its terms of service starting Nov. 11. Your reviews of restaurants, shops and products, as well as songs and other content bought on the Google Play store could show up in ads that are displayed to your friends, connections and the broader public when they search on Google. The company calls that feature "shared endorsements.'' It is best to offer information of a more personal nature pseudonymously, and keep the pseudonym(s) carefully firewalled from your real identity. Avoid major social media services when participating online pseudonymously if at all possible.
{ "source": [ "https://security.stackexchange.com/questions/46569", "https://security.stackexchange.com", "https://security.stackexchange.com/users/12810/" ] }
46,666
I'm going to recommend that our users start using a password manager and creating strong random passwords, though I don't know what password length to recommend. Is it possible for a password to be so strong that it stops making sense? I want them to have passwords that are strong, random, and long enough that even if the hashed password table were stolen, no brute-force or rainbow-table attack will ever* be able to guess it. Where ever* means something reasonable. Of course, with sufficient time and resources any password can be brute-forced. But at some point in password strength, I think it must stop making sense. I don't want to sound like I'm going overboard and protecting their passwords from a 4th-dimensional being or something. Oh, and a 64-character web-safe alphabet is what I'm thinking of recommending.
Armed with the knowledge that a search space of 128 bits is more than sufficient for the foreseeable future; with 192 bits being ridiculously high; and 256 bits being something that is unimaginably impossible to cycle through , and assuming that your character set is alphanumeric upper and lower case English, randomly-generated , then we can say that: A password of 22 characters is more than sufficient for the foreseeable future. A password of 33 characters is ridiculously long . A password of 46 characters is just... I don't know what to say. So, to put it in one line: With an alphanumeric upper/lower character set, it starts making less sense after 22 characters. When does it stop making sense? Somewhere between that and 33 characters, very early.
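If you want to check the arithmetic behind those cut-offs yourself, a quick back-of-the-envelope calculation (assuming a 62-character alphanumeric alphabet and truly random generation, as above) looks like this:

```python
import math

alphabet = 62   # a-z, A-Z, 0-9; a 64-character web-safe alphabet gives slightly more
for length in (22, 33, 46):
    bits = length * math.log2(alphabet)
    print(f"{length} chars -> about {bits:.0f} bits of entropy")
# 22 chars -> about 131 bits (past the 128-bit mark)
# 33 chars -> about 196 bits (past 192)
# 46 chars -> about 274 bits (past 256)
```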
{ "source": [ "https://security.stackexchange.com/questions/46666", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35158/" ] }
46,836
I was unable to find any good documentation or anything on mXSS. Can anyone give some info or give a link? I found a video and a PDF of the video's presentation: http://www.youtube.com/watch?v=Haum9UpIQzU https://hackinparis.com/data/slides/2013/slidesmarioheiderich.pdf
mXSS is a new type of XSS attack by Mario Heiderich. I actually saw him present this very talk at Syscan 2013 this year. The vulnerability in question comes from innerHTML, which allows direct manipulation of HTML content, bypassing the DOM. An element's innerHTML is non-idempotent: the browser manipulates the contents to fix and optimize errors in the HTML. This is very visible in the example given in the slides. The problem with this manipulation is that it sometimes introduces flaws that are not apparent at first sight. One good example is also in the slides. The big deal about this attack is that it bypasses (at the time of the talk, at least; the situation may be better now) most of the existing XSS filters and sanitizers. As far as I know, there isn't any other available information about this class of XSS attack besides whatever is present in the presentation slides.
{ "source": [ "https://security.stackexchange.com/questions/46836", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35502/" ] }
47,097
I believe it was leaked recently that the NSA has a long list of zero-day exploits against various software "for a rainy day", i.e. for whenever they would be useful. The question is, how do they find these zero-days? Does someone have to physically sit at a computer and try a whole bunch of random things (e.g. remote code execution encoded in base64 inside of a script in a PDF), or are there automated systems which actively pen-test pieces of software for holes where privilege escalation or remote code execution would work?
Zero-days are found in exactly the same ways as any other kind of hole. What makes a security hole a "zero day" relies exclusively on who is aware of the existence of the hole, not on any other technical characteristic. Holes are found, usually, by inquisitive people who notice a funky behaviour, or imagine a possible bug and then try out to see if the programmer fell for it. For instance, I can imagine that any code which handles string contents and strives to be impervious to case differences (i.e. handles "A" as equivalent to "a") may run into problem when executed on a Turkish computer (because in Turkish language, the lowercase for "I" is "ı", not "i") which can lead to amusing bugs, even security holes (e.g. if some parts of the system checks for string equivalence in a locale-sensitive way, while others do not). Thus, I can try to configure my computer with a Turkish locale, and see if the software I target starts doing weird things (besides talking Turkish). Part of bug-searching can be automated by trying a lot of "unusual combinations". This is known as fuzzing . It can help as a first step, to find input combinations which trigger crashes; anything which makes the target system crash ought to be investigated, because crashes usually mean memory corruption, and memory corruption can sometimes be abused into nifty things like remote code execution. However, such investigations must still be done by human brains. (If there was a fully automatic way to detect security holes, then software developers would use it to produce bug-free code.)
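To give a feel for what that automated first step looks like, here is a minimal, hypothetical fuzzing harness; the target function is an assumption (stand in your own parser), and real fuzzers such as AFL or libFuzzer are far more sophisticated, using coverage feedback rather than purely random input.

```python
import os
import random
import traceback

def fuzz(target, rounds=10_000, max_len=512):
    """Throw random byte strings at `target` and collect inputs that crash it."""
    crashes = []
    for _ in range(rounds):
        data = os.urandom(random.randint(0, max_len))
        try:
            target(data)                      # e.g. a file parser or protocol decoder
        except Exception:                     # any unexpected crash deserves a human look
            crashes.append((data, traceback.format_exc()))
    return crashes

# Example usage: crashes = fuzz(my_pdf_parser)   # `my_pdf_parser` is hypothetical
```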
{ "source": [ "https://security.stackexchange.com/questions/47097", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2374/" ] }
47,183
I would like to get a few opinions on whether it would be safe or not to use PBKDF2 to generate a hash of a password. For my purposes I'd like to assume that the hash itself will be posted on the White House Twitter page (in other words, it will be public). Would I be better off using a massive number of SHA-256 iterations (as a replacement for my 100,000 PBKDF2 iterations)? Are there any attacks that can reverse PBKDF2? This is the code I'm using:

string salt = Utility.RandString(32);
// convert to byte[] and store in both forms
this.Salt = salt;
this.BSalt = Utility.ToByteArray(salt);
// generate the password hash
Rfc2898DeriveBytes preHash = new Rfc2898DeriveBytes(this.Password, this.BSalt, 100000);
byte[] byteHash = preHash.GetBytes(32);
this.Hash = Convert.ToBase64String(byteHash);
The short answer is that PBKDF2 is considered appropriate and secure for password hashing. It is not as good as could be wished for because it can be efficiently implemented with a GPU; see this answer for some discussion (and that one for more on the subject). There are some arguable points, notably that PBKDF2 was designed to be a Key Derivation Function , which is not the same kind of animal as a hash function; but it turned out to be quite usable for password hashing as well, provided that you do not do anything stupid with it (meaning: be sure to keep at least 80 bits of output, and to use a high enough iteration count, ideally as high as you can tolerate on your hardware). Don't try to make your own hash function; these things are surprisingly hard to get right, especially because it is nigh impossible to assess whether a given construction is secure or not. Use PBKDF2. If you really want SHA-256 (or SHA-512), then use PBKDF2 with SHA-256: PBKDF2 is a configurable construction, which is traditionally configured to use SHA-1, but works equally well with SHA-256 or SHA-512.
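As an illustration of that last point, most standard libraries let you pick the underlying hash. A rough Python equivalent of the C# snippet in the question, but using SHA-256 inside PBKDF2, could look like this (the parameter values are examples, not recommendations tuned for your hardware):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)          # random per-user salt, stored alongside the hash

# 100,000 iterations of PBKDF2-HMAC-SHA256, 32-byte output
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
stored_hash = derived.hex()
```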
{ "source": [ "https://security.stackexchange.com/questions/47183", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35240/" ] }
47,293
With data-mining tools like Maltego and other correlation tools for large data sets, we should assume that any transactions we conduct online can be collated to build a good picture of what we do, buy, read, etc. (hence Google etc.). If a normal person with a large online history decides to go off-web, is there an effective way to do this? This question was featured as an Information Security Question of the Week . Read the Jan 27, 2014 blog entry for more details or submit your own Question of the Week .
The problem is heuristics. All the mentioned tools are built on heuristics, and the only way to avoid them is to change how you live completely. You can be fingerprinted by the modules installed in your browser, and by the programs you use and how frequently you use them. These days it goes further than just online behavior: shops know what you buy and in what amounts, and because nobody buys exactly the same combination of brands, you are getting fingerprinted constantly. This is used for targeted advertising, but it can also theoretically be used to track you.

MIT's Reality Mining project proved the same using smartphones. You prefer certain apps, you use your phone at certain intervals, you move around certain places. This all contributes to a somewhat unique pattern (back when I did some research on it during my internship we were getting 91% certainty in simulations; even when people changed their SIM card every few days we were still able to track them based on the SSIDs they encountered, places they went, apps they installed and used, when they checked their phones, Bluetooth devices they connected to, cell towers they passed at a certain moment in time, and what smileys they used in text messages).

Avoiding heuristics means changing everything you do completely: stop using the same apps and accounts, go live somewhere else, and do not buy the same food from the same brands. The problem here is that this might itself stand out as a special pattern, because it is so atypical. Changing your identity is the first step. The second one is not being discovered. As Thomas said, the internet doesn't forget. Photos of you will remain online; messages you posted, and maybe even IDs you shared, will remain on the net. So even after changing your behavior, it only takes one picture to expose you.
{ "source": [ "https://security.stackexchange.com/questions/47293", "https://security.stackexchange.com", "https://security.stackexchange.com/users/485/" ] }
47,413
There's a recent report in the news of a Harvard student who emailed in a bomb threat so as to postpone year-end exams. According to the report, he carefully covered his tracks using the best technology he knew about: he used a throw-away email account, and only accessed it over Tor. It turns out that this last point -- using Tor to send his email -- is what made him easy to find. Officials simply searched Harvard's logs for anyone who had recently accessed the Tor network, which led them directly to the culprit. Arguably his critical mistake was using Harvard's WiFi for his Tor access; going down the street to a coffee shop would possibly have prevented his identity from being tied to his Internet activity. But in that case, Tor would probably have been unnecessary. And in fact, the interesting point is that his use of Tor completely failed at its primary purpose of providing anonymity, but instead simply provided him a completely false sense of security. This principle easily extends to other anonymity tools and techniques as well; encryption, proxies, and others: if the tools are not popular, then the fact that you're using them alone makes your activity suspicious, making you an immediate target for investigation, interrogation, etc. So how do you deal with this? Can you really trust Tor and other anonymity tools to make you anonymous? Would layering these tools help? Or would it just compound the problem?
If you are in a crowd and you wear a mask, but nobody else in the crowd does, then you tend to attract attention... If you want to remain anonymous, then you must use only tools which do not single you out as a potential miscreant, i.e. tools that everybody uses. A good example is when you pay in cash: this is a mostly traceless payment system, and yet sufficiently many people use it so that paying with cash does not appear suspicious (unless you use cash to pay for, say, a big car). To a large extent, this illustrates a tendency to miss the point, which is unfortunately often encountered in circles dealing with anonymity: it is what I would call the "game fantasy". When using Tor, PGP or whatever, the wannabe anonymous sometimes feel that he is playing some game with informal rules, in particular a definite and finished scope. The Tor user tends to believe that his adversaries will meet him only in a network-related way. As rumour has it, one of the first reactions of Kevin Mitnick upon his being arrested was to say that Tsutomu Shimomura had "cheated" by calling the cops, instead of trying to defeat him through technical skills alone. So let there be a lesson: if you want to be anonymous, don't concentrate on the tools. Instead, focus on the big picture. Layering anonymity gimmicks on top of each other does not address the actual problem. In fact it can be argued that no layering can help Tor in any way, since the point of Tor is to randomize the network path so that sender and recipient cannot be correlated with each other; if something else is needed then sender and recipient were correlated with each other, and the actual use of Tor came to naught. This is a property of anonymity through absence of correlation: it is all-or-nothing. You cannot get anonymity incrementally; you have it all in one go, or you have none. This answers one of your questions: layering does not ultimately help . To really be anonymous, you have to blend in the background. You achieve perfect anonymity by doing nothing. However, as soon as you try to act , if only to send an email, then you begin to leave traces of many kinds. For instance, when you use a WiFi access from a coffee shop, then you are physically in the coffee shop, so you are in range of CCTV cameras, you leave fingerprints and DNA traces on the premises,...
{ "source": [ "https://security.stackexchange.com/questions/47413", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2264/" ] }
47,576
Playing devil's advocate, let's assume I purchase a Linux server from a hosting provider. I am given a password for the root user and am told I may log in using SSH. The only purpose this server has is to host one or more websites, possibly with SSL properly configured and enabled. My initial login will be to install (via a well-reviewed and widely used package management system) and configure (by editing files in /etc) a web server, a database, some software that does not access the Internet, and a web-application server (PHP-FPM, Unicorn, that sort of thing). The package manager was smart enough to set up unprivileged users to run the servers, a configuration I maintain. Next I put some website files (PHP, Ruby, Python, etc.) in /var/www, and chown all those files to be owned by the same unprivileged user that the web server process runs as (i.e. www-data). In the future, only I will log in, and only to update the website files and to perform some read-only operations like reviewing logs. In the scenario above, is there any security-related reason why I should create a non-root user account to use rather than the root user? After all, almost every command would be run with sudo had I logged in with a non-root user. Please note: I understand there are many universally compelling security and non-security reasons to use non-root user(s). I am not asking about the necessity for non-root user accounts. My question is strictly limited to the setup I describe above. I think this is relevant because even though my example is limited, it is very common.
There are a few reasons: traceability : Commands run with sudo are logged. Commands run with bash are sometimes logged, but with less detail and using a mechanism that is easy to block. privilege separation : "almost every command" is not the same as "every command". There's still plenty which doesn't require root. file editing : the web files are owned by a non-root user and run by a non-root user... so why would you edit them with root? attack mitigation : Consider the following totally-not-even-hypothetical scenario: Your workstation gets some malware on it which filches your FTP/SCP/SFTP/SSH login out of the stored authentication database from the appropriate client and transmits it to the attacker. The attacker logs on to your device to do some mischief. Now, can they cover their tracks, or will what they do be visible to you? I talk to someone new more than once every week to whom this has recently happened. automated attack mitigation : A hacked server in Brazil is scanning your network and pulls up a listening SSH server. What username does the attacker use for his automated attack? Maybe webuser , or test , or www or admin -- but more than any other: root . There are certainly many more reasons, but these are the first ones to come to my head.
{ "source": [ "https://security.stackexchange.com/questions/47576", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
47,680
A few minutes ago I attempted to ssh to a server I have at my office. Since it is a new server, my public key has not been set up there, so I had to type my password in manually. After three unsuccessful login attempts I noticed that I had typed the domain name wrong -- two characters transposed. The server I had actually been trying to log into was not at my office but somewhere else! My question: does the SSH protocol expose my password in such a situation? In other words, if that server had been deliberately set up in some way to catch a mistake like this, could it do it? I am going to go change that password anyway, but I would like to know if that accident is a real risk. Any insights welcome.
Yes, the remote server will now have had access to your password. And if they've set it up to log that password (which is not the default in any SSH product I know), you should change your password even quicker than you are already doing :)
{ "source": [ "https://security.stackexchange.com/questions/47680", "https://security.stackexchange.com", "https://security.stackexchange.com/users/36370/" ] }
47,749
I've been using Terminal under Mac OS X for years but somehow never spotted this feature: I'm now wondering how it actually works, and is it 100% safe? If it isn't, what technique could be used to still get the keystrokes?
"Secure Keyboard Entry" maps to the EnableSecureEventInput function whose concept is described here . Basically, applications don't access the hardware themselves; they obtain events (e.g. about key strokes) from the operating system. Some elements in the OS decides what application gets what events, depending on its access rights and GUI state (there are details depending on which application is "in the foreground"). Applications can "spy" on each other, meaning (in this case) that an application running on the machine can ask to the OS to send it a copy of all key strokes even if they are meant for another application, and/or to inject synthetic events of its own. This is a feature : it allows things like "password wallets" (which enter a password as if it was typed by the user, from the point of view of the application) or the "Keyboard Viewer" (the GUI-based keyboard which allows you to "type" characters with the mouse and also shows what keys are actually being pressed at any time). EnableSecureEventInput blocks this feature for the application which calls it. Try it ! Run Terminal.app, enable the "Keyboard Viewer", and see that enabling "Secure Keyboard Entry" prevents the Keyboard Viewer from doing its job. All these event routing is done in some user-space process which runs as root . This relies on the process separation enforced by the kernel: normal user process cannot fiddle at will with the memory allocated for a root process. The kernel itself is unaware of the user-level concept of "event". The management of events, in particular the enforcement (or not) of EnableSecureEventInput , is made by non-kernel code. An interesting excerpt of the page linked above is the following: The original implementation of EnableSecureEventInput was such that when a process enabled secure input entry and had keyboard focus, keyboard events were not passed to intercept processes. However, if the secure entry process was moved to the background, the system would continue to pass keyboard events to these intercept processes, since the keyboard focus was no longer to a secure entry process. Recently, a security hole was found that made it possible for an intercept process to capture keyboard events, even in cases where secure event input was enabled and the secure event input process was in the background. The fix for this problem is to stop passing keyboard events to any intercept process whenever any process has enabled secure event input, whether that process is in the foreground or background. This means that a process which enables secure event input and leaves secure event input enabled for the duration of the program, can affect all keyboard intercept processes, even when the secure event process has been moved to the background. This means that the event routing system actually got it wrong in the first installment of the feature. This is now supposed to be fixed. Even assuming that the event routing is now proper and secure, meaning that EnableSecureEventInput 's semantics are really enforced, then you must understand that this is completely relative to the process separation system. Any root process can inspect and modify at will the memory of all other process, and in particular see all the events; and a root process can also hook into the kernel and inspect the actual data from the keyboard bypassing the notion of event completely. A key logger which can be installed as root will do just that, and "Secure Keyboard Entry" will be defenceless against it. 
See this for an opensource proof of concept. So "Secure Keyboard Entry" is secure only against attackers who could get to run some code of their own on the machine, but could not escalate their local privileges to root level. This is a rather restrictive scenario, because local privilege escalation tends to be possible on a general basis: Local process can see a lot of the machine, so the "security perimeter" to defend is huge in that case. Preventing intrusion from remote attackers is much easier, and yet already quite hard. Apple tends to exhibit some lack of reactivity in the case of local privilege escalation holes. Summary: I would find it overly optimistic to believe that "Secure Keyboard Entry" provides sufficient security against key loggers on, say, public shared computers. It is not a bad feature, but it fulfills its promises only if root and the kernel are free from malicious alterations, and that's a very big "if".
{ "source": [ "https://security.stackexchange.com/questions/47749", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32489/" ] }
47,901
I thought I knew how two-factor authentication works: I enter the password. The server generates a random number (token) and sends it to me via SMS. I enter this token. The server checks that the token I entered matches the one generated for my earlier 2FA request. But today, I discovered Authy , and it looks like I don't know how this program's two-factor authentication (2FA) works. Authy shows me secret numbers (2FA tokens) without any connection with the server. How can this be? I suppose these tokens are not random? Maybe it is some kind of number sequence, where knowing the initial seeding parameters makes it a deterministic process? Maybe this sequence is a function of time? Is that how it works? Is it secure? Can I, for example, determine the next 2FA tokens if I know N previous tokens?
Authy show me secret numbers without any connection with server. How can it do it ? Authy is using a one-time passcode (OTP) algorithm which come in a number of flavors, the two most popular being HMAC-based OTP (HOTP) and Time-based OTP (TOTP). Authy is using TOTP. Both algorithms are essentially the same; they require some seed data and a counter to generate the next passcode in the series. HOTP implementations increment the counter each time the user requests/uses a passcode, TOTP increments the counter after a given time interval. In Authy's case, when the user submits a passcode to the server, the server looks up the user's seed data, calculates the counter value based on the timestamp of the request and then generates the proper passcode. The server then checks that the generated passcode matches the user-submitted passcode. Is it secure ? Can I know next number if I know N previous numbers ? Yes and no, it depends on whether or not you trust the server's security. Given N previous tokens an attacker still shouldn't be able to recover the seed data. However, these algorithms require the server to store the seed data for all of the users. If an attacker is able to compromise the database (through SQL injection, etc.) then they will be able to generate valid passcodes. This is what happened to RSA and their SecurID tokens ( http://arstechnica.com/security/2011/06/rsa-finally-comes-clean-securid-is-compromised/ ) Some companies like Duo Security ( https://www.duosecurity.com/ ) and Twitter ( https://blog.twitter.com/2013/login-verification-on-twitter-for-iphone-and-android ) are tackling this issue by implementing challenge-response two-factor authentication with asymmetric key encryption. They only need to store public keys in this case, meaning that if their database is leaked an attacker doesn't have the private keys necessary to generate valid responses. Disclaimer, I worked at Duo. Updated based on questions in the comments The algorithms (HOTP or TOTP) must be the same on the server and the client application? The algorithm is identical, just the way the counter value is generated is different. If Google were HOTP and Authy wanted to support Google accounts, their app would have to generate and store the counter value differently from TOTP accounts. Does HOTP client require connection with server to get next passcode (because it doesn't know how much requests was made from last time), while TOTP doesn't require it? No, HOTP doesn't require a connection to work, but HOTP is generally not used because it's easy for the phone and server to fall out of sync. Say both the server and app start out with a counter value of 0. The server usually has a window, maybe the next 10 passcodes, that it will consider valid. When the user submits a passcode, the server will compare the submitted passcode with the next 10 generated passcodes. If any of the 10 match, the server can update the stored counter value and remain in sync. The problem though, is that the user may be able to generate too many passcodes in the app without using them. If the user is able to increment the counter beyond the passcode window size, then the server can no longer verify that the passcodes are valid. To see in detail how the OTP tokens are generated, see this informative blogpost .
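For reference, the TOTP computation itself is small enough to sketch in a few lines of Python. This follows the HOTP/TOTP RFCs (4226/6238) with the common defaults of HMAC-SHA1, 30-second steps and 6 digits; the Base32 secret below is just a placeholder for the seed data that the app and server share.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                     # the time-based counter
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret; both sides derive the same code
```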
{ "source": [ "https://security.stackexchange.com/questions/47901", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16110/" ] }
47,913
Or is authentication essentially incompatible with anonymity? If we have the idea that authentication is proving that someone is who they say they are and anonymity is essentially having an unknown identity, could you have a system that could authenticate you but that you could still use anonymously? Would adding a 3rd party service to the mix change the answer?
First, authentication is not really “proving that someone is who they say they are”, but linking an action, message or situation with an identity. If I show my passport to prove who I am, what I am really doing is linking my physical presence with the identity conferred to me by the state of which I am a national. A person may well have multiple identities. For example, people with dual citizenship have two passports and what they do with one is somewhat decoupled from what they do with the other. Multiple identities are the basis of anonymity. Indeed, as soon as you interact with anyone else in any way, this creates an identity: you're the person who did this thing at that time. Anonymity is not a lack of identity, but a lack of a link between a certain identity and any other identity that you may have. Coming back to authentication, it is a link between two identities: the person who did that thing at this time is the same person who owns these credentials. Phrased this way, authentication is exactly contradictory with anonymity. However, there are many situations where it is useful to have partial authentication or partial anonymity. An obvious solution to partial anonymity is to have a trusted third party as an intermediary. I'm using one right now: you know me by my Stack Exchange account, and you can use this identity to authenticate my Stack Exchange activity. Stack Exchange knows me by an identity with an OpenID provider, but isn't telling you. My OpenID provider in turn knows some things about me, but they only know the name on my passport if I've been telling them. There is a chain of links between identities (which could be traced all the way to, say, my home address via ISP logs), but you need the collaboration of multiple parties to resolve the chain. How satisfactory such situations are depends on how the identification linkage chain is set up and what parties you're prepared to trust. Having identification linkage chains that are practically impossible to trace is the basis of anonymity systems such as TOR. Going in another direction, authentication is very often used for authorization. I may use this account because I am the account owner. I may enter this building because I am an employee. An authorization system can be set up to decouple the action that is being authorized from the identity that led to the authorization. A well-known example is voting systems: they have both strong authorization requirements (only registered voters may vote, and only once per election) and strong anonymity requirements (even I cannot prove who I voted for, at least not if I want my vote to be counted). In traditional voting systems, anonymity is ensured by stepping into a booth to put a standardized piece of paper in an opaque envelope. Anonymity is ensured by observers who control that everyone steps into the booth and by rules that make non-anonymous ballots void.
{ "source": [ "https://security.stackexchange.com/questions/47913", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37/" ] }
48,022
There's the recent article NSA seeks to build quantum computer that could crack most types of encryption . Now I'm not surprised by the NSA trying anything, but what slightly baffles me is the word "most" - so, what encryption algorithms are known and sufficiently field-tested that are not severely vulnerable to quantum computing?
As usual, journalism talking about technical subjects tends to be fuzzy about details... Assuming that a true Quantum Computer can be built, then: RSA, and other algorithms which rely on the hardness of integer factorization (e.g. Rabin), are toast. Shor's algorithm factors big integers very efficiently. DSA, Diffie-Hellman, ElGamal, and other algorithms which rely on the hardness of discrete logarithm, are equally broken. A variant of Shor's algorithm also applies. Note that this is true for every group, so elliptic curve variants of these algorithms fare no better. Symmetric encryption is weakened; namely, a quantum computer can search through a space of size 2^n in time 2^(n/2). This means that a 128-bit AES key would be demoted back to the strength of a 64-bit key -- however, note that these are 2^64 quantum-computing operations; you cannot apply figures from studies with FPGA and GPU and blindly assume that if a quantum computer can be built at all , it can be built and operated cheaply . Similarly, hash function resistance to various kinds of attacks would be reduced. Roughly speaking, a hash function with an output of n bits would resist preimages with strength 2^(n/2) and collisions up to 2^(n/3) (figures with classical computers being 2^n and 2^(n/2), respectively). SHA-256 would still be as strong against collisions as a 170-bit hash function nowadays, i.e. better than a "perfect SHA-1". So symmetric cryptography would not be severely damaged if a quantum computer turned out to be built. Even if it could be built very cheaply, actual symmetric encryption and hash function algorithms would still offer a very fair bit of resistance. For asymmetric encryption, though, that would mean trouble. We nonetheless know of several asymmetric algorithms for which no efficient QC-based attack is known, in particular algorithms based on lattice reduction (e.g. NTRU), and the venerable McEliece encryption . These algorithms are not very popular nowadays, for a variety of reasons (early versions of NTRU turned out to be weak; there are patents; McEliece's public keys are huge ; and so on), but some would still be acceptable. Study of cryptography under the assumption that efficient quantum computers can be built is called post-quantum cryptography . Personally I don't believe that a meagre 80 million dollar budget would get the NSA far. IBM has been working on that subject for decades and spent a lot more than that, and their best prototypes are not amazing. It is highly plausible that NSA has spent some dollars on the idea of quantum computing; after all, that's their job, and it would be a scandal if taxpayer money did not go into that kind of research. But there is a difference between searching and finding...
{ "source": [ "https://security.stackexchange.com/questions/48022", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3272/" ] }
48,325
A security scan result prior to the deployment of a web application on Windows Server 2008 R2 has raised the below message: Weak SSL Cipher Suites are Supported. Reconfigure the server to avoid the use of weak cipher suites. The configuration changes are server-specific. SSLCipherSuite HIGH:MEDIUM:!MD5!EXP:!NULL:!LOW:!ADH For Microsoft Windows Vista, Microsoft Windows 7, and Microsoft Windows Server 2008, remove the cipher suites that were identified as weak from the Supported Cipher Suite list by following these instructions: http://msdn.microsoft.com/en-us/library/windows/desktop/bb870930(v=vs.85).aspx I've tried understanding the MSDN information but I'm totally lost in there. First of all, I do not understand which cipher suite should be removed or disabled. Then, how am I supposed to run the code given as an example to remove a cipher suite?

#include <stdio.h>
#include <windows.h>
#include <bcrypt.h>

void main()
{
    SECURITY_STATUS Status = ERROR_SUCCESS;
    LPWSTR wszCipher = (L"TLS_RSA_WITH_RC4_128_SHA");

    Status = BCryptRemoveContextFunction(
                 CRYPT_LOCAL,
                 L"SSL",
                 NCRYPT_SCHANNEL_INTERFACE,
                 wszCipher);
}
Figuring out which cipher suites to remove can be very difficult. For Windows, I've used the free IIS Crypto tool in the past: IIS Crypto is a free tool that gives administrators the ability to enable or disable protocols, ciphers, hashes and key exchange algorithms on Windows Server 2003, 2008 and 2012. It also lets you reorder SSL/TLS cipher suites offered by IIS, implement best practices with a single click and test your website. This not only leverages someone's expert knowledge as far as which algorithms are more or less secure, but also takes the pain of figuring out how to actually implement the change in Windows away (hint: it's a bunch of registry entries).
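As a side note, the SSLCipherSuite line quoted by the scanner is an Apache (OpenSSL-style) directive; it does not apply to Windows Schannel, which is what IIS Crypto and the registry changes configure. If you ever need to express a similar policy on an OpenSSL-based stack, a quick Python check of what a given cipher string actually enables might look like this (the string below is an illustrative policy in the same spirit as the scanner's suggestion, not an authoritative recommendation):

```python
import ssl

context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
# OpenSSL-style cipher string, analogous to Apache's SSLCipherSuite directive
context.set_ciphers("HIGH:MEDIUM:!aNULL:!eNULL:!MD5:!RC4:!3DES")
for cipher in context.get_ciphers():        # inspect what is actually enabled
    print(cipher["name"])
```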
{ "source": [ "https://security.stackexchange.com/questions/48325", "https://security.stackexchange.com", "https://security.stackexchange.com/users/36895/" ] }
48,437
How can I see which root certificate authorities my Windows machine trusts, and know which are not default ones installed by Microsoft?
To view your certificate stores, run certmgr.msc as described there . The "root" store contains the root CA, i.e. the CA which are trusted a priori . certmgr.msc shows you an aggregate view of all root CA which apply to the current user; internally, there are several relevant stores (the "local machine" stores apply to all users, the "current user" stores are specific to the current user; and there also are "enterprise" stores which are similar to "local machine" but meant to be filled automatically from the AD server of the current domain). See this page for a list of all CA that Microsoft puts in Windows by default; any discrepancy would be a local variation. The list is occasionally updated, and this is propagated to your computer through the normal Windows update mechanisms.
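If you prefer to script the comparison rather than click through certmgr.msc, Python's ssl module can enumerate the Windows system stores (Windows only, Python 3.4+). A rough sketch, with the actual diff against Microsoft's published list left as a manual step:

```python
import ssl

# Windows-only: enumerate the machine-wide "ROOT" store that certmgr.msc displays
roots = ssl.enum_certificates("ROOT")
print(len(roots), "root certificates installed")
for der_cert, encoding, trust in roots[:5]:          # peek at the first few entries
    purposes = "all purposes" if trust is True else ", ".join(sorted(trust))
    print(encoding, len(der_cert), "bytes, trusted for:", purposes)
```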
{ "source": [ "https://security.stackexchange.com/questions/48437", "https://security.stackexchange.com", "https://security.stackexchange.com/users/36765/" ] }
48,802
My understanding is that when using a client certificate for security, one issues a private key and a public key certificate (for example X.509) of some sort and sends that off to the consumer of the service, who wants to authorize themselves before consuming it. But what's the standard way of checking that it's actually a valid client cert they are presenting? Please present the standard workflow here and also the role of the CA in this case. I'm also wondering what's preventing someone from just exporting the client cert from the client machine and using it somewhere else; is preventing export of the private key safe enough?
From a high level perspective, three things have to happen: The client has to prove that it is the proper owner of the client certificate. The web server challenges the client to sign something with its private key, and the web server validates the response with the public key in the certificate. The certificate has to be validated against its signing authority This is accomplished by verifying the signature on the certificate with the signing authority's public key. In addition, certificate revocation lists (CRLs) are checked to ensure the cert hasn't been blacklisted. The certificate has to contain information which designates it as a valid user of the web service. The web server is configured to look at specific items in the certificate (typically the subject field) and only allow certain values.
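To make the handshake-level checks and the "designates it as a valid user" part concrete, here is a rough Python sketch of a TLS server that requires a client certificate (the handshake covers proof of key possession and chain validation against the configured CA) and then inspects the subject. The file names and CA layout are assumptions for the example, and CRL/OCSP checking is not shown:

```python
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")   # the server's own cert
context.verify_mode = ssl.CERT_REQUIRED                # reject clients presenting no certificate
context.load_verify_locations(cafile="client-ca.crt")  # CA that signed acceptable client certs

with socket.create_server(("0.0.0.0", 8443)) as sock:
    with context.wrap_socket(sock, server_side=True) as tls:
        conn, addr = tls.accept()                      # handshake verifies signature and chain
        subject = dict(rdn[0] for rdn in conn.getpeercert()["subject"])
        print("client authenticated as:", subject.get("commonName"))
```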
{ "source": [ "https://security.stackexchange.com/questions/48802", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32074/" ] }
48,804
HTTPS is an end-to-end encrypted connection. Given this, does the website I am visiting learn my original IP address if I reach it over Tor? The website is only available over HTTPS (not unencrypted HTTP).
No, it won't. The thing is that when you use HTTPS over Tor: you use the public key of the server to encipher your message (so nobody except the server will be able to read your message); then you pass the HTTPS message (which, remember, is encrypted with the public key of the server) to a Tor node, this Tor node to another, and another and...; finally, the last Tor node sends your encrypted HTTPS message to the server (that includes your key for the session); the response is encrypted by the server with this key and you will be the only one able to decrypt the response from the server. So the graph should be as follows:

---> "Tor message"   ===> "HTTPS message"
[T] "Tor node"   [S] "Server"   [U] "User"

[U]-->[T1]-->[T2]-->[T3]-->...[TN]==>[S]
[S]==>[TN]-->...[T3]-->[T2]-->[T1]-->[U]

And yet your communication will still be secret. If you want to learn a bit more about how your connection stays secret, you can read about the key exchange on this page.
{ "source": [ "https://security.stackexchange.com/questions/48804", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9616/" ] }
48,956
This is coming from a joke originally in French: Enter your password. carrot Sorry, your password must be more than 8 characters. carottegéante ( giant carrot ) Sorry, your password must contain a number 1carottegéante ( 1 giant carrot ) Sorry, your password must not contain accented characters 50putaindecarottesgeantes ( 50 damn giant carrots ) Sorry, your password must contain at least one capital letter 50PUTAINdecarottesgeantes ( 50 DAMN giant carrots ) Sorry, your password must not contain two consecutive capital letters 5OPutainDeCarottesGeantesQueJeVaisTeMettreAuCulSiTuNeMedonnesPasImmediatementUnAcces! ( 50 Damn Giant Carrots That I Will Put In Your Ass If You Do Not Immediately Give Me Access! ) Sorry, your password must not contain punctuation characters AttentionMaintenantJeVaisAllerTeTrouverEtTeMettreVraimentLes50CarottesGeantesSiTuContinues ( Caution Now, I'll Go Find You And Really Put 50 Giant Carrots If This Continues ) Sorry, this password is already in use. But talking it over with a friend: I said: this is just a constructed joke; there is a typo in the 3rd answer ("ne s oit pas"), and anyway, I don't know of any user-creation engine that checks for already-used passwords. My friend answered: you're right, it's a joke, but you could configure PAM to do such a check: password required pam_pwcheck.so nullok remember=N Well, I don't understand! Any unauthorized person could check whether a password is in use by simply creating a new (fake) account! What could be the purpose of this kind of check?
You misunderstand what remember does for pam_pwcheck ; see the man page : remember=XX Remember the last XX passwords and do not allow the user to reuse any of these for the next XX password changes. XX is a number between 1 and 400. With this option, pam_pwcheck will reject attempts at reusing a password which was previously used by the same user . It does not do anything cross-users; it will not warn you about whether your password of choice is used by anybody else or not. In fact, such a check would be expensive to implement if proper password hashing is in place (with salts and many iterations for slowness, it would take several minutes to "try" the putative password for thousands of other users). As you say, warning users about how a potential password is already used by another user would be, by itself, a serious security issue; but it would also mean that the passwords are not stored properly, and that is another serious security issue.
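To see why a cross-user "this password is already in use" check would be so expensive with proper hashing, remember that each user has a unique salt, so the candidate password must be re-hashed once per user at the full work factor. A rough timing sketch (PBKDF2 stands in here for whatever slow hash the system actually uses, and the salts are hypothetical):

```python
import hashlib
import os
import time

users = [("user%d" % i, os.urandom(16)) for i in range(1000)]   # hypothetical per-user salts
candidate = b"hunter2"

start = time.time()
for name, salt in users:
    hashlib.pbkdf2_hmac("sha256", candidate, salt, 100_000)     # must re-derive for every salt
print("checked against %d users in %.1f seconds" % (len(users), time.time() - start))
```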
{ "source": [ "https://security.stackexchange.com/questions/48956", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15701/" ] }
48,962
I'm currently studying Computer Science, where we're taught Java programming. I want to get into the IT security field, but it seems to me that Ruby and Python are more relevant for that, so I have a hard time motivating myself to learn Java. But does Java have a place in modern IT security compared to, say, Ruby or Python?
Programming is relevant to IT-Security; it may be subtitled as "yeah, I kinda grasp the concepts of what I am blabbing about". You cannot be a decent practitioner of IT security if you cannot imagine what occurs in a computer beyond something like "then magic occurs". This necessarily implies some basic skills at development. The exact programming language does not matter much. In fact, if the programming language matters to you, then you don't know enough programming yet. If you want to "get into the IT-security field" then you must reach the point where your question feels ridiculous, as in "how could I have been such a twerp to ask such a silly question ?". Meanwhile, go learn how to do basic programming in at least two or three different languages (Java may be one of them); this is the path to enlightenment.
{ "source": [ "https://security.stackexchange.com/questions/48962", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37534/" ] }
49,031
I want to encrypt a file with AES in CBC mode (maybe another mode is better for file encryption... I don't know, but suggestions are welcome!). What I usually do is first write a few random bytes (256 bits, just to muddy the waters), then my salt and my IV (which are both uuid4... or generated from a secure PRNG) at the beginning of the encrypted file, and then the encrypted blocks. I wonder if that solution is less secure than another? Anyway, I really don't know how I could do otherwise!
Salts and IV are not the same thing; salts are for password hashing, IV are for starting up some encryption modes. Neither is meant to be secret, though; otherwise we would call them "keys". It is safe to put the IV and/or salt in file headers. Your adding of "a few random data (256 bits, just to muddy the waters)" is the computer equivalent of sacrificing a chicken to propitiate the gods. If it takes such rituals to make you feel good, then why not; but believing that it changes anything to your actual security would be somewhat naive. CBC mode requires a random, unpredictable IV; it is a hard requirement. Using an UUID is dangerous in several ways: Only the "v4" UUID uses a PRNG. Nobody guarantees that the PRNG used for UUID v4 is cryptographically strong. Even if it is strong, there still are six fixed bits in the UUID (the ones which say "this is a v4 UUID"); only 122 bits are random. CBC is not the best mode in class anymore; we found better. In particular: CBC has strict requirements on IV generation ( Chosen-Plaintext Attacks have been shown to be specially effective against poor IV generation). CBC needs padding, and padding handling has been shown to be delicate (decryption can turn into a "padding oracle" if done improperly). CBC does not ensure integrity. Usually, when you need to encrypt (for confidentiality), you also need to reliably detect hostile alterations. For that, you need a MAC . Assembling a MAC and encryption is tricky . Newer modes solve these issues, by tolerating a simple IV (a non-repeating counter is enough), by not needing padding, and by having an integrated MAC where the integration has been done adequately. See in particular GCM and EAX .
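If you have the option, an authenticated mode spares you most of these pitfalls. Here is a minimal sketch using the Python cryptography package's AES-GCM interface; the file-header layout is just an assumption for the example, and the nonce is stored in the clear, exactly like an IV:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)       # keep this secret, e.g. derive it from a password
aesgcm = AESGCM(key)

nonce = os.urandom(12)                          # 96-bit nonce; must never repeat for the same key
plaintext = b"file contents"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)      # authentication tag is appended

blob = nonce + ciphertext                       # store the nonce in the header; it is not secret

recovered = aesgcm.decrypt(blob[:12], blob[12:], None)   # raises InvalidTag if tampered with
assert recovered == plaintext
```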
{ "source": [ "https://security.stackexchange.com/questions/49031", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37435/" ] }
49,048
I have set up an Apache/2.2.22 (Debian) Server on my Rasberry Pi. Looking at my access.log, I see a bit of strange activity consisting mostly of GETs, which is quite disconcerting as I am a complete newbie in such matters. The strangest thing was a series of POSTs consisting of percentage signs and numbers. I know next to nothing about web technologies, but I assumed they were escaped HTML characters. Decoding the characters of one of them (there were 2 varieties) from "POST php-cgi?" forwards, it resulted in the following: "POST /cgi-bin/php-cgi?-d+allow_url_include=on+-d+safe_mode=off+-d+suhosin.simulation=on-d+disable_functions=""+-d+open_basedir=none+-d+auto_ prepend_file=php://input+-d+ cgi.force_redirect=0+-d+cgi. redirect_status_env=0+-n" And a sample GET: "GET /HNAP1/ HTTP/1.1" What is going on here? Are bots randomly testing IP addresses for vulnerabilities? Should I be worried? Edit: these requests returned 404.
{ "source": [ "https://security.stackexchange.com/questions/49048", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37593/" ] }
49,145
Summary Once a user logs into a web site and his username/password credentials are verified and an active session is established, is it possible to avoid hitting the DB for each and every request from that user? What is the recommended method of securely authenticating subsequent requests for the life of the session, while minimizing DB queries and other internal network traffic? Background In a stateless web app server architecture, where each request has no knowledge of any prior activity from the user, it would be necessary to query the DB on each and every request from that user (typically by querying the session ID stored in a cookie and transferred in the request header). But what if some basic information was encrypted and stored in that Session cookie that had enough information to validate the user for non-sensitive, non-editable requests? For such requests, you could as an example encrypt and store the user ID and something that uniquely identifies his machine as much as possible (user-agent + ip address) in the Session data. The key used to encrypt the data could change daily making it difficult for any hacker to clone the Session data on a different machine. When the Session expires you would need to fully validate the user's credentials. The fact is, the biggest threat to hacking a user's session would be someone using a user's computer that he or she left unattended. Should I just not worry about this and let some level of caching between the web app servers and the DB take care of expediting the authentication process? While it may seem to be unnecessary optimization, it seems like a candidate ripe for improvements in efficiency since each and every request requires this process. Thanks for any suggestions!
Yes, it is possible, and this technique is widely used. It does have some minor drawbacks compared to stateful sessions:

- It does not support strong logout. If a user clicks logout, the cookie is cleared from their browser. However, if an attacker has captured the cookie, they can continue to use it until the cookie expires.
- The use of a server-side secret to create the tokens creates a single point of failure: if the secret is captured, an attacker can impersonate any user.

Deciding whether to use stateless or stateful sessions depends on your performance and security requirements. Online banking would tend to use stateful sessions, while a busy blog would tend to use stateless sessions.

A few tweaks are required to your proposed scheme:

- Encrypting the token does not protect it from tampering. You want to use a Message Authentication Code (MAC), which does protect against tampering (see the sketch after this answer). You may additionally want to use encryption, but that is less important.
- You need to include a timestamp in the token and put a time limit on its validity. Somewhere around 15 minutes is sensible. Normally you would automatically re-issue shortly before the timeout, and usually the reissue would incur a database hit (although even that can be avoided).
- Do not include the user's IP address in the token. Approximately 3% of users will legitimately change IP address during a web session, due to modem resets, changing WiFi hot spots, load-balanced proxies and more.
- While you can include the user agent in the token, it is not normal to do that; consider it an advanced technique to use if you are sure you know what you're doing.
- If an attacker captures a session cookie they can impersonate that user. There is nothing you can really do about that. Instead, put all your effort into preventing an attacker from capturing the cookie in the first place. Use SSL, use the "secure" cookie flag, fix all cross-site scripting flaws, etc. And have some user advice to lock their screen when their computer is unattended.

I hope this is helpful to you. If anything is unclear or you need further information, leave a comment and I will see if I can help you further.
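A minimal sketch of such a stateless token in Python, using only the standard library. The secret, the field layout, the 15-minute lifetime and the function names are all illustrative choices rather than a recommended format.

```python
# Sketch of a stateless session token: user id + timestamp, authenticated
# with an HMAC (a MAC, not mere encryption), verified without a database hit.
import hashlib
import hmac
import time

SECRET = b"server-side secret, rotate carefully"   # single point of failure!
MAX_AGE = 15 * 60                                   # seconds

def issue_token(user_id: str) -> str:
    payload = f"{user_id}|{int(time.time())}"
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def verify_token(token: str):
    try:
        user_id, ts, tag = token.rsplit("|", 2)
    except ValueError:
        return None
    expected = hmac.new(SECRET, f"{user_id}|{ts}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):      # tamper check
        return None
    if time.time() - int(ts) > MAX_AGE:             # expiry check
        return None
    return user_id

if __name__ == "__main__":
    t = issue_token("alice")
    assert verify_token(t) == "alice"
    assert verify_token(t + "x") is None            # forged tag is rejected
```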
{ "source": [ "https://security.stackexchange.com/questions/49145", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37686/" ] }
49,234
After the recent Target hack there has been talk about moving from credit cards with magnetic stripes to cards with a chip. In what ways are chips safer than stripes?
You can't clone the chip. A magnetic stripe holds a secret number, and if someone knows that number they can claim to be the owner of the card. But if a bad guy swipes the card, they then know the number and can make their own card, i.e. "cloning". This has turned out to be a major practical problem with magstripe cards. A chip also holds a secret number. However, it is securely embedded in the chip. When you use the card, the chip performs a public key operation that proves it knows this secret number. However, it never reveals that secret number. If you put a chipped card in a bad guy's machine, they can impersonate you for that one transaction, but they cannot impersonate you in the future. All of the above assumes that the implementation of the chip is good. Some chips have been known to have implementation flaws that leak the secret code. However, chip and PIN is now pretty mature, so I expect most of these issues have been ironed out.
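As a rough analogy (not the actual EMV protocol), the "public key operation" can be pictured as a challenge-response signature: the terminal sends a random challenge, the card signs it with a private key that never leaves the chip, and the terminal verifies the signature with the card's certified public key. The sketch below uses the `cryptography` package and P-256 ECDSA purely for illustration.

```python
# Toy challenge-response: "prove you know the key without revealing it".
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

card_private = ec.generate_private_key(ec.SECP256R1())   # lives inside the chip
card_public = card_private.public_key()                  # certified by the issuer

challenge = os.urandom(32)                                # sent by the terminal
signature = card_private.sign(challenge, ec.ECDSA(hashes.SHA256()))

try:
    card_public.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("card proved knowledge of its private key")
except InvalidSignature:
    print("reject: not the genuine card")
```

Observing the signature (as a malicious terminal would) does not reveal the private key, which is why a skimmed chip transaction cannot be replayed into a working clone.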
{ "source": [ "https://security.stackexchange.com/questions/49234", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10435/" ] }
49,280
Why are chips safer than magnetic stripes? The answers to the above question explain that chip-based cards cannot be cloned, as the "secret number" is embedded in the chip and protected by the use of public key cryptography. The chip also performs some cryptographic operations to authenticate itself without revealing the actual secret information. I was wondering what these cryptographic operations are that enable a secure financial transaction. I have searched sec.SE and did come across a similar question, but the answers do not explain the workings of the protocol. They focus more on the possible attacks and do not say more than "some crypto math". I hope I have made myself clear. P.S. I am curious to know about the workings of the cryptographic protocol and how it enables security. I am aware of the basic concepts of cryptography.
Exact cryptography depends on the bank. The communication standard (ISO 7816) is flexible and does not mandate specific cryptographic algorithms. In practice, you would find the two following models:

1. The card does symmetric cryptography only (symmetric encryption, MAC). The card has a static identifier (which contains, roughly speaking, the card number and similar information) which has been signed by the card issuer (a copy of that signature is stored by the card, which sends it to the payment terminal). The card chip contains a secret value which is also known to the bank; that secret is used as a key for a MAC computed over the transaction details.

2. The card can compute digital signatures. It contains a private key which is known to no other entity (in particular, the issuing bank does not know the private key either). The corresponding public key is certified by the bank, i.e. the bank has signed a package containing that public key and the card ID. That signature is stored on the card, and sent to the payment terminal.

In both cases, the PIN code is sent to the chip, to convince it that its rightful owner is present, and therefore that the chip should MAC/sign the transaction details.

The main differences between the two models are:

- In the second model, the payment terminal can make sure, in an offline way, that it talks to a genuine card, by verifying the bank signature over the card ID and public key, and then the card-computed signature itself, using the card public key and the known transaction details. With the first model, the payment terminal cannot make sure that it talks to a genuine card; through the bank signature over the card ID, the terminal can make sure that a valid card with that specific ID exists, but the card actually inserted in the terminal could be a clone, which spews out random junk instead of the expected MAC.
- In the first model, since the bank has a copy of the card secret key, it could theoretically frame the card owner by computing fake transactions. With the second model, the bank can claim that since it does not own a copy of the card private key, transactions are necessarily genuine. This may matter from a legalistic point of view, in case of litigation between the bank and its customer. This is not an absolute -- we enter here a lawyer-infested battlefield, where mathematics are just an element among others -- but the second model may reduce costs for the bank, through their own insurance system against payment defaults.
- Cards which can do asymmetric cryptography are more expensive than cards which can do only symmetric cryptography. The price difference has much reduced; an order of magnitude would be 0.1$ vs 2$ (building price). However, the card market has a huge latency, because changes in the protocol can percolate only after cards have expired and payment terminals have been updated (the latter being the slowest of the two).

Ten years ago, all payment cards in France ("Carte Bleue") followed the first model. This meant that a payment terminal, typically offline, could be fooled with a card clone, containing a copy of a valid card ID but producing only junk instead of the MAC. So in practice, terminals would allow offline mode only for small amounts (say, less than 60$ or so) and even then would require online mode (the terminal asks to be stowed on its charging base, which contains a modem or ethernet plug) on a random basis.
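As a side note, here is a toy Python sketch of the first (symmetric) model: a MAC over the transaction details, keyed with a secret shared between card and issuing bank. The key, field names and formatting are invented for illustration; real EMV application cryptograms follow a far more detailed specification and use different primitives.

```python
# Toy illustration of model 1: the chip MACs the transaction details with a
# secret it shares with the issuing bank. All values here are made up.
import hashlib
import hmac

card_secret = bytes.fromhex("00112233445566778899aabbccddeeff")  # shared with the bank

transaction = b"amount=42.50;currency=EUR;date=2014-01-25;terminal=12345"
cryptogram = hmac.new(card_secret, transaction, hashlib.sha256).digest()

# The terminal forwards (transaction, cryptogram) to the bank, which recomputes
# the MAC with its own copy of the secret and accepts or rejects the payment.
assert hmac.compare_digest(
    cryptogram, hmac.new(card_secret, transaction, hashlib.sha256).digest()
)
```

An offline terminal has no copy of the card secret, which is precisely why, in the first model, it cannot tell a genuine cryptogram from the random junk produced by a clone.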
In later years, WiFi and 3G allowed terminals to go online much more frequently, in a smooth way (the restaurant waiter can stay with the customer, at his table, for the duration of the operation).

There was also, around 2000, another issue, which was that the signature from the bank (over the card ID) used 320-bit RSA. Factoring a 320-bit modulus is computationally easy. A self-taught engineer called Serge Humpich noticed this, factored the modulus, created a fake card (i.e. not a clone at all, but still accepted by offline payment terminals), and thought himself a genius. He then tried to "sell his expertise" to the group which manages the banking smart card standards in France (anecdote has it that he contacted them through a lawyer, and he selected a blind lawyer so that the man could not describe his facial features, should legal hijinks ensue). The said group thought it was blackmail, and called the police. Humpich was arrested during a demonstration involving a couple of subway tickets bought with a fake card (the dozen innocent bystanders suddenly turned out to be a dozen police officers in plain clothes). The whole story was an accumulation of blunders: a 320-bit RSA modulus, in 2000, was certainly a mark of severe incompetence (even when it was chosen, in the late 1980s, it was already too small with regard to known academic cryptanalytic results); and the ludicrously wacky details of Humpich's actions showed that he was quite severely deluded about the novelty of his "findings".
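To give a feel for why an undersized RSA modulus is worthless: 320 bits is too large for the toy call below to finish quickly, so the example factors a deliberately tiny 64-bit "modulus" instead, assuming sympy is installed. The point is only that undersized moduli fall to off-the-shelf tools, with no cryptanalytic genius required.

```python
# Back-of-envelope illustration: factoring a toy RSA modulus with sympy.
from sympy import factorint, nextprime

p = nextprime(2**31 + 12345)
q = nextprime(2**32 + 67890)
n = p * q                      # a ~64-bit "RSA modulus"

print(factorint(n))            # recovers p and q almost instantly
```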
{ "source": [ "https://security.stackexchange.com/questions/49280", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21234/" ] }
49,326
Currently, there is an HTML form/input attribute called autocomplete which, when set to off, disables autocomplete/autofill for that form or element. Some banks seem to use this to prevent password managers from working. These days sites like Yahoo Mail seem to do it as well, because they feel that password managers are unsafe.

A few weeks ago I implemented a feature in Firefox that gives the user an option to override this for username/password fields only (i.e. where the site is trying to disable the password manager). There is now a request asking for it to override autocomplete=off by default. Quoting the issue:

This behavior is a concession to sites that think password managers are harmful and thus want to prevent them from being effective. In aggregate, I think those sites are generally wrong, and shouldn't have that much control over our behavior.

This makes sense to me, for similar reasons as the ones in this comment by BenB:

autocomplete=off has been abused a lot recently. Yahoo started using it for their login (including webmail and my.yahoo.com), which is why I stopped using Yahoo. Webmail apps - even some bigger providers - now use it, which was decidedly not the purpose. The admins are very self-righteous, and insist that they keep this "for security" because password saving "is unsafe". They are misguided, because:

- keyboard loggers exist and are widespread, probably more widespread than malware that can read the Firefox password store.
- even simple attacks by the little nephew exist: just look over the shoulder.
- possibly most importantly, forcing users to re-enter their password every time practically forces them to use a simple password - easy to remember, easy to type, probably even used on multiple websites. This obviously lowers overall security dramatically and thus poses a danger to security.

So, autocomplete=off is actively harmful to security. And a massive pain for end users, without recourse for them apart from severing entire customer relationships.

There have been many workarounds (usually bookmarklet-based) posted on the Internet. IE11 has already removed support for autocomplete=off.

The question is twofold:

1. Is there any significant increase in security for a website when it uses autocomplete=off on password fields? Or is it actually harmful to security, as per BenB's comment?
2. Should browsers allow this attribute by default and give this much control to the website? (This bit is subjective, feel free to not answer.)

While my situation is specific to autocomplete=off for username/password fields (the code only affects the password manager), I do welcome input on the broader aspect of disabling autocomplete=off.
The problem is that this one setting simultaneously controls the behavior of two similar but sufficiently dissimilar functions in the browser, such that an optimal result is difficult to achieve.

First, we have what you might call "smart" or "naïve" or "automatic" auto-complete.

This is the original auto-complete technology. As you fill in forms on various sites, the browser watches the names of the forms and the contents you fill, and silently remembers the details. Then, when visiting another site with a similar-looking form, it "helpfully" fills in fields using the values it filched from your previous behavior on other sites. The idea here is to save you time without any configuration or decision-making on your part. Filling in your name? We'll automatically fill in the name you used last time. Filling in a credit card? We'll fill in the credit card you used elsewhere.

In its zeal to be helpful, the browser is sharing your secrets from one site with all the others, just in case it's what you wanted. From a security perspective, this is a disaster for all the obvious reasons and for several non-obvious ones as well. It has to be disabled, and probably shouldn't have ever been implemented to begin with.

Second, we have "explicit" or "secure" or "configured" auto-complete.

This is the world, primarily, of saved usernames and passwords. In this incarnation, the browser saves your form data only with your explicit approval. Ideally, it stores that data in an encrypted store, and most critically, the data is firmly associated with a single site. So your Facebook password stays with Facebook, and your Amazon address stays with Amazon.

This technique is critically different in that the browser is replaying saved behavior when the matching environment is detected. By comparison, the other technique is anticipating desired behavior automatically by looking for similarities.

When you visit the site and it presents a login form, your browser should helpfully auto-fill the data you had explicitly saved for that purpose. The interaction should be quick and thought-free for the user. And, critically, it should absolutely BREAK in a phishing attempt. The browser should be so completely unwilling to deliver credentials to a phishing site that it makes her stop and think about why the thing isn't working. This feature is your primary line of defense against phishing. It has to work. You are unavoidably less secure if the user can't depend on this feature working transparently and effortlessly under normal conditions.

And while this is primarily used for credential storage, it's also a secure place to put other secure data as well, such as payment cards, addresses, security questions, etc. Such additional data probably won't be site-specific, but should probably not auto-fill without prompting.

One option to rule them all

The problem here is that in many implementations, the autocomplete=off option controls both behaviors: both the one you want to keep, and the one you want to kill.

Ideally, "secure" auto-complete should never be disabled. We're relying on this feature to add safety, so misguided site operators shouldn't be allowed to jeopardize that. And ideally, "automatic" auto-complete should be disabled by default, to be enabled only for those rare conditions (if any) where you actually want the browser to re-use your input from other sites.
{ "source": [ "https://security.stackexchange.com/questions/49326", "https://security.stackexchange.com", "https://security.stackexchange.com/users/7497/" ] }
49,636
I was listening to Pandora as I logged in here, and the next commercial was about InfoSec. That set me wondering as to whether that was a coincidence (probably) or if they knew somehow. To make a long story short, I was wondering whether a webpage could access cookies that it didn't put there (thus getting a rather accurate browsing history as well as information on the user). It seems to me that this should be (and probably is) defended against, but if it is, how? I can read cookies on my computer and at least see where they came from, so it doesn't seem that they are encrypted...
This is defended against using the same origin policy, which generally prevents one site from reading another's cookies. When you see behaviour where adverts seem to know where you've been, it's likely due to third-party ad tracking cookies. So, as a simplified example, if you go to site A which uses an ad network, that ad network can record that you were on that site by placing a tracking cookie on your PC. Then when you go to site B which uses the same ad network, the ad network reads the cookie that was set when you were on site A (which it can do because it's loading content from its own domains in both cases, so it doesn't break same origin) and can then offer you adverts based on your browsing habits.
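A hedged sketch of that mechanism, assuming Flask is installed: an ad server on its own domain sets one cookie and sees it again from every page that embeds its content. The endpoint, cookie name and logging are invented for illustration; real ad networks are far more elaborate.

```python
# Toy third-party tracker: one cookie on the ad domain, seen across sites.
from uuid import uuid4
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/ad")
def ad():
    visitor = request.cookies.get("tracker") or str(uuid4())
    referer = request.headers.get("Referer", "unknown")
    print(f"visitor {visitor} seen on {referer}")   # builds the browsing profile
    resp = make_response("/* ad content */")
    # Same-origin rules are respected: the cookie belongs to the ad domain,
    # yet it rides along whenever site A or site B embeds this resource.
    resp.set_cookie("tracker", visitor, max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run(port=8000)
```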
{ "source": [ "https://security.stackexchange.com/questions/49636", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34536/" ] }
49,782
There is a new WhatsApp-killer application called Telegram. They say that it is open source and that it has more secure encryption. But they store all the messages on their servers, whereas WhatsApp doesn't store any messages on any server, only a local copy on the phones. Is Telegram more secure than WhatsApp?
TL;DR: No, Telegram is not secure.

I'd like to ignore the comparison to WhatsApp, because WhatsApp does not advertise itself as a "secure" messaging option. I'd like to instead focus on whether Telegram is secure.

Telegram's security is built around their home-spun MTProto protocol. We all know that the first rule of cryptography is Don't Roll Your Own Crypto, especially if you aren't trained cryptographers. Which the Telegram people most certainly aren't.

The team behind Telegram, led by Nikolai Durov, consists of six ACM champions, half of them Ph.Ds in math. It took them about two years to roll out the current version of MTProto. Names and degrees may indeed not mean as much in some fields as they do in others, but this protocol is the result of thoughtful and prolonged work of professionals.

Source: https://news.ycombinator.com/item?id=6916860

Math Ph.Ds are not cryptographers. The protocol they invented is flawed. Here is a nice blog post explaining why. In addition to that, Telegram has issued a rather ridiculous challenge offering a reward to anyone who can break the protocol, except that the terms they set make even the most ridiculously weak protocol difficult to break. Moxie Marlinspike has a nice blog post explaining why the challenge is ridiculous.

So, no. Telegram is by no means secure, for commonly accepted definitions of secure, not the one Telegram made up. If you want a truly secure means of communication on your phone, look to more reputable projects such as Signal or WhatsApp (which, since this answer was first written, now uses the Signal Protocol for end-to-end message encryption).

UPDATE 09 January 2015: A new 2^64 attack on Telegram has been announced.
UPDATE 12 December 2015: A new paper demonstrates that MTProto is not IND-CCA secure.
UPDATE 22 December 2017: Replaced the outdated recommendation for CryptoCat with a more up-to-date recommendation for Signal and WhatsApp.
{ "source": [ "https://security.stackexchange.com/questions/49782", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35564/" ] }
50,878
Among the ECC algorithms available in openSSH (ECDH, ECDSA, Ed25519, Curve25519), which offers the best level of security, and (ideally) why?
In SSH, two algorithms are used: a key exchange algorithm (Diffie-Hellman or the elliptic-curve variant called ECDH) and a signature algorithm. The key exchange yields the secret key which will be used to encrypt data for that session. The signature is so that the client can make sure that it talks to the right server (another signature, computed by the client, may be used if the server enforces key-based client authentication).

ECDH uses a curve; most software uses the standard NIST curve P-256. Curve25519 is another curve, whose "sales pitch" is that it is faster, not stronger, than P-256. The performance difference is very small in human terms: we are talking about less than a millisecond's worth of computation on a small PC, and this happens only once per SSH session. You will not notice it. Neither curve can be said to be "stronger" than the other, neither practically (they are both quite far in the "cannot break it" realm) nor academically (both are at the "128-bit security level").

Even when ECDH is used for the key exchange, most SSH servers and clients will use DSA or RSA keys for the signatures. If you want a signature algorithm based on elliptic curves, then that's ECDSA or Ed25519; for some technical reasons due to the precise definition of the curve equation, that's ECDSA for P-256, Ed25519 for Curve25519. There again, neither is stronger than the other, and the speed difference is far too small to be noticed by a human user. However, most browsers (including Firefox and Chrome) no longer support ECDH (nor DH). Using P-256 should yield better interoperability right now, because Ed25519 is much newer and not as widespread. But, for a given server that you configure, and that you want to access from your own machines, interoperability does not matter much: you control both client and server software.

So, basically, the choice is down to aesthetics, i.e. completely up to you, with no rational reason. Security issues won't be caused by that choice anyway; the cryptographic algorithms are the strongest part of your whole system, not the weakest.
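For a quick hands-on comparison of the two signature options, here is a hedged Python sketch using the `cryptography` package. These are the same underlying primitives, though OpenSSH has its own key file format and tooling (ssh-keygen and friends), so this is an illustration rather than a drop-in replacement for those tools.

```python
# ECDSA over NIST P-256 versus Ed25519: generate, sign, verify.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, ed25519

msg = b"host key proof"

# ECDSA / P-256
ecdsa_key = ec.generate_private_key(ec.SECP256R1())
sig1 = ecdsa_key.sign(msg, ec.ECDSA(hashes.SHA256()))
ecdsa_key.public_key().verify(sig1, msg, ec.ECDSA(hashes.SHA256()))

# Ed25519
ed_key = ed25519.Ed25519PrivateKey.generate()
sig2 = ed_key.sign(msg)
ed_key.public_key().verify(sig2, msg)

print("both signatures verified; the security level is comparable")
```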
{ "source": [ "https://security.stackexchange.com/questions/50878", "https://security.stackexchange.com", "https://security.stackexchange.com/users/39450/" ] }
50,937
As described in Perfecting the Art of Sensible Nonsense, a major breakthrough in cryptography research was published in the summer of 2013: Candidate Indistinguishability Obfuscation and Functional Encryption for all circuits.

In principle it seems to allow for what most computer scientists had assumed was impossible: obfuscating computer code so that secrets within it are preserved even from attackers who can run the code fully under their own control. This kind of obfuscation is enormously powerful, and can be used, for example, to create new forms of public key encryption. It is related to breakthroughs in functional encryption and deniable encryption. It is described this way in the article:

The ... obfuscator works by transforming a computer program into what Sahai calls a "multilinear jigsaw puzzle." Each piece of the program gets obfuscated by mixing in random elements that are carefully chosen so that if you run the garbled program in the intended way, the randomness cancels out and the pieces fit together to compute the correct output. But if you try to do anything else with the program, the randomness makes each individual puzzle piece look meaningless.

But it is also described as "... far from ready for commercial applications. The technique turns short, simple programs into giant, unwieldy albatrosses." A reason for the unwieldiness is the need to use Fully Homomorphic Encryption (FHE), which itself remains unwieldy, as described in related questions here.

Note that this has been discussed on other Stack Exchange sites:

- Practical consequences of using functional encryption for software obfuscation - Cryptography Stack Exchange
- On the indistinguishability obfuscation (informally) - Reverse Engineering Stack Exchange

Crypto is probably a more suitable forum for general discussion, but many discussions of possible practical aspects (or the absence thereof) may be best done here. Given the current state of the art, it seems that practical applications don't exist. But that statement is hard to accept without an example. So here is a question that should be relevant here: can someone provide a concrete example of the sort of useful task that indistinguishability obfuscation or functional encryption might some day be used for, and describe just how unwieldy it is at this point (size, performance, etc.)?
The article actually describes two constructions, the second one using the first one as a building tool. The first construction provides indistinguishability obfuscation while the second one is functional encryption.

Indistinguishability obfuscation is a rather esoteric property, which is not what non-academics think about when they hear "obfuscation"; the terminology is a misnomer. That property means that if you can encode some "processing" as a circuit (roughly speaking, a piece of code which can be unrolled, with no infinite loop), such that there are several possible distinct circuits which yield the same results, then indistinguishability obfuscation allows publication of an "obfuscated circuit" such that anybody can run the circuit and obtain the result, but outsiders cannot know which of the possible circuits was used as a basis internally.

What good IO can do, by itself, is unclear. The authors, in section 1.7 of their article, still present an example, which is rather far-fetched:

Software developers will often want to release a demo or restricted use version of their software that limits the features that are available in a full version. In some cases a commercial software developer will do this to demonstrate their product; in other cases the developer will want to make multiple tiers of a product with different price points. In other domains, the software might be given to a partner that is only partially trusted and the developer only wants to release the features needed for the task. Ideally, a developer could create a downgraded version of software simply by starting with the full version and then turning off certain features at the interface level -- requiring minimal additional effort. However, if this is all that is done, it could be easy for an attacker to bypass these controls and gain access to the full version or the code behind it. The other alternative is for a software development team to carefully excise all unused functionality from the core of the software. Removing functionality can become a very time consuming task that could itself lead to the introduction of software bugs. In addition, in many applications it might be unclear what can and cannot remain for a restricted use version. One immediate solution is for a developer to restrict the use at the interface level and then release an obfuscated version of the program. For this application indistinguishability obfuscation suffices, since by definition a version restricted in the interface is indistinguishable from an obfuscated program with equivalent behavior that has its smarts removed at the start.

The use case is dubious, at best. However, IO has a more useful (theoretical) functionality, which the authors expose afterwards: it enables them to build functional encryption.

Functional encryption is about providing a computable circuit (obfuscated with IO) which receives as input encrypted versions of some value x, and returns F(x) for some function F, without revealing anything else about x. The authors show how they can do that for any function F which can be encoded as a circuit, and the resulting obfuscated circuit is "polynomially-sized" with regard to the original unobfuscated circuit implementing F.

Now this "polynomially-sized" expression tells us that we are in the abstract world of mathematics, and not in the practical world. It relates to asymptotic behaviour when circuit size grows towards infinity.
It does not tell us much about how much the construction would cost in any practical, finite situation; only that if God plays with Universe-sized computers, then He will find the construction to be "tolerably efficient", provided that He first creates a large enough Universe to accommodate the bulk of the involved computers -- with no a priori measure of how large that Universe has to be for the theoretical result to apply. Empirically, we find that most mundane algorithms that we manipulate and that offer polynomial complexity are "somewhat fast", but that's mostly because these algorithms are very simple things; there is no proof or even suggestion that the construction described in the article would be as "mundane".

Functional encryption, if it can be made to work within reasonable CPU and RAM budgets, can be useful for security in a number of situations. For instance, imagine an FPS game opposing players on networked computers. For efficient 3D rendering, the local machine of each player must know the terrain map and the current location of all objects, including other players, so that it may decide whether, from the point of view of the local player, any other player is visible (and must be drawn on the screen) or hidden (e.g. sneakily poised behind a wall, ready to lob a hand grenade) -- but cheaters would find it very convenient to know these locations in real-time. Each player is assumed to have complete control of his machine (he has physical access). With functional encryption, the player's machine may compute the rendering (that's the F function) based on the locations as sent by the server, without obtaining the locations themselves.

Unfortunately, for practical applications, we are far from having a workable solution. What must be understood is that in all these constructions, each circuit gate must map to an instance of Gentry's fully homomorphic encryption scheme, and at every clock cycle for the obfuscated circuit, all gates must be processed, regardless of whether they are "active" or not in the circuit (this is a big part of why the obfuscation theoretically works: it does not reveal active gates, by always making them all active). This article gives performance results: on a rather big PC, we are looking at minutes of computation. That's for each gate in the obfuscated circuit, and for each clock cycle. There are millions or even probably billions of gates in the envisioned circuit, given the setup of functional encryption: the "obfuscated circuit" must include asymmetric decryption and validation of zero-knowledge proofs. So now we are talking about using all the computers currently running on Earth, all collaborating on the computation, and they might make non-negligible progress towards running one instance of functional encryption within a few centuries.

These are just estimates that I make from what I saw in the articles, but they should give you the order of magnitude. Namely, that the albatross metaphor falls very short of the reality; you'd rather have to imagine a full army of spaceships bent on galactic domination. If it flies at all, despite bureaucratic heaviness, it will have tremendous inertia and be quite expensive.
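For what it's worth, the "few centuries" figure can be reproduced with back-of-envelope arithmetic. Every number in the sketch below is an assumption chosen only to be roughly consistent with the estimates above (minutes per gate evaluation, billions of gates, many clock cycles); none of it is a measurement.

```python
# Back-of-envelope arithmetic for the order-of-magnitude claim above.
MINUTES_PER_GATE_PER_CYCLE = 5      # assumed cost of one FHE-based gate evaluation
GATES = 10**10                      # assumed circuit with decryption + ZK-proof checks
CYCLES = 10**6                      # assumed clock cycles to run the obfuscated circuit
MACHINES = 10**9                    # roughly "all the computers on Earth"

total_minutes = MINUTES_PER_GATE_PER_CYCLE * GATES * CYCLES
years_single = total_minutes / 60 / 24 / 365
print(f"one machine:  {years_single:.1e} years")            # ~9.5e10 years
print(f"all machines: {years_single / MACHINES:.0f} years")  # ~95 years, i.e. about a century
```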
{ "source": [ "https://security.stackexchange.com/questions/50937", "https://security.stackexchange.com", "https://security.stackexchange.com/users/453/" ] }