source_id | question | response | metadata
---|---|---|---|
5,447 | I need to send some sensitive information to a client. I thought I would email a password-protected zip file I created using Windows XP and then call them with the password. Presuming I pick a good password, how secure is this approach? Specifically, how difficult is it to decrypt a password-protected zip file? | When creating a password-protected Zip file (with the "compressed folder" utility integrated in the OS), Windows XP uses the "standard" encryption algorithm for Zip files. This is a homemade stream cipher, and it is weak. With 13 bytes of known plaintext, the complexity of the attack is about 2^38 operations, which is doable in a few hours on a PC. 13 bytes are relatively easy to obtain (e.g. if one of the files in the archive is an image, it will probably be uncompressed and begin with a known header). The result has even been improved, notably because the files in an archive are encrypted separately but without proper key diversification. Some years ago (quite a few now, tempus fugit), I saw password-cracking software by Ivan Golubev which put this science to good use and could crack Zip encryption in an hour. The attack on Zip encryption is actually: a nice introduction to cryptanalysis; a good exercise in programming; a reminder that you should not roll your own crypto. Phil Katz was very good in his domain, but the best cryptographers in the world will tell you that it takes much more than one extremely good cryptographer to make a secure algorithm -- it takes many cryptographers who feverishly propose designs and try to break the designs of the others, for a few years, until a seemingly robust design emerges (where "robust" means "none could find the slightest argument to support the idea that they may, possibly, make a dent in it at some unspecified date"). Now, if you use a tool which supports the newer, AES-based encryption, things will look better, provided that the format and implementation were not botched, and the password has sufficient entropy. However, such Zip files will not be opened by the stock WinXP explorer. If an external tool is required, you might as well rely on a tool which has been thoroughly analyzed for security, both in the format specification and the implementation; in other words, as @D.W. suggests: GnuPG. As for self-decrypting archives, they are all wrong, since they rely on the user doing exactly what should never be done, i.e. launching an executable which he received by email. If he does open a self-decrypting archive you send him, then he will just, by this action, demonstrate that he is vulnerable to the myriad of viruses/worms/whatevers which roam the wild Internet, and he is probably already infected with various malware, including keyloggers. Nevertheless, there is a way, in your specific situation, to make a self-decrypting archive reasonable. It still needs your client to install a new piece of software, but at least it is straight from Microsoft: the File Checksum Integrity Verifier -- a pompous name for a tool which computes file hashes. Send the self-decrypting archive to your client, and have him save it as a file (without executing it, of course). Then, have him run FCIV on it, to get the SHA-1 hash of the file. Do the same on your side. Finally, compare the two hashes by phone (it is not hard to dictate 40 hexadecimal characters). If the two hashes match, then your client will know that the file was not modified during the transfer, and he will be able to execute it with confidence.
(That is, if your client trusts you, and trusts that your machine is not chock-full of viruses which could have infected the archive on its way out.) | {
"source": [
"https://security.stackexchange.com/questions/5447",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3496/"
]
} |
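The hash-comparison step described above (FCIV on one side, any SHA-1 tool on the other) can be reproduced in a few lines. The sketch below is an illustrative stand-in, not the FCIV tool itself, and the file name is a placeholder.

```python
import hashlib

def sha1_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-1 hex digest of a file, reading it in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Both sides run this on their copy of the archive and compare the
# resulting 40 hexadecimal characters over the phone.
print(sha1_of_file("archive.exe"))  # "archive.exe" is a placeholder name
```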
5,460 | I work for a virtual organization (we're all remote) that uses a lot of freelancers/subcontractors. Very often I need to exchange SSH login information with developers working on projects for us. How do I do this securely? Most of them know nothing about GPG / Public Key encryption, nor how to integrate GPG into their email clients. (I have a hard enough time getting some of them to use version control properly, much less an alien encryption package.) I could self-generate S/MIME certs and distribute them, but if I'm not mistaken wouldn't this throw an error in most email clients? EDIT: To clarify as per comments -> I need to hand off logins to other devs (username/password combos to allow them to SSH into a server.) Our passwords tend to be strong, such as %:(9h3LUPa&Zk which can be a little awkward to read over the phone. | Ask for their SSH public keys, and add those to the authorized_keys lists for the hosts they'll be logging in to. It's safe to disclose public keys, so they can be distributed over non-confidential media such as email. | {
"source": [
"https://security.stackexchange.com/questions/5460",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3494/"
]
} |
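The "collect their public keys and append them to authorized_keys" step can be scripted; the sketch below is one possible way to do it, with placeholder paths and a made-up key string rather than anything from the original exchange.

```python
import os
from pathlib import Path

def install_public_key(pubkey_line: str, authorized_keys: Path) -> None:
    """Append an OpenSSH public key line to authorized_keys, avoiding duplicates."""
    authorized_keys.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
    existing = authorized_keys.read_text().splitlines() if authorized_keys.exists() else []
    if pubkey_line.strip() not in existing:
        with authorized_keys.open("a") as f:
            f.write(pubkey_line.strip() + "\n")
    os.chmod(authorized_keys, 0o600)  # sshd refuses keys files with loose permissions

# Example: a key received by email from a contractor (placeholder value).
install_public_key("ssh-ed25519 AAAA... dev@example.org",
                   Path.home() / ".ssh" / "authorized_keys")
```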
5,469 | Gmail doesn't give the IP address of the sender in its mail headers for security reasons. I'd like to know whether there is some other way of getting the IP address of the sender. Since Gmail specifies the IP address of its email relay server, which the sender first contacts, is there any way of querying the relay server to get the IP address of the sender by specifying the unique Message-ID of that email? If so, please explain how it is done. And if not, is there any other method of getting the IP address? | There is no technical way to get the IP address of someone sending an email via the Gmail web interface. Google does not put it into the email headers, and there is no API to query Gmail for it. If you really need that IP address for valid reasons, you need to get a court order. | {
"source": [
"https://security.stackexchange.com/questions/5469",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2814/"
]
} |
5,477 | I'm looking at password manager solutions and came across LastPass. I see that they also support two-factor authentication using YubiKeys. How secure is this combination for password management? What are the "weak links" in this scheme that could be targeted by an attack? | The answer everyone hates: it depends on your threat model and risk appetite. What passwords are you protecting in LastPass? Are you storing the whole password in there or a unique value to which you add a passphrase? Who are you concerned would want your passwords? Opportunistic attackers or targeted governments / organized crime? How strong is your master password? Software vulnerabilities can exist. LastPass has had an XSS vulnerability and a suspected intrusion recently. So yes, all software can have vulnerabilities. The YubiKey, as @this.josh states, could also be vulnerable. After all, if RSA got hacked and the attackers were able to use this to get into military contractors, then no two-factor mechanism is invulnerable. (A sample attack tree for defeating two-factor authentication appeared here as an image.) Here is a broader set [PDF]: http://www.redforcelabs.com/Documents/AnalyzingInternetSecurity.pdf The question is: are the risks acceptable to you? Using a password manager is better than not using one and is a simple, cheap solution to improve the security of virtually any application/service you need a password for. Using a YubiKey and a strong master password greatly improves the security of whatever you store in LastPass. The whole point of two-factor authentication is that even if one factor is compromised, the attacker still requires the other. If you or the service discovers the compromise, this gives you time at a minimum. Do a quick threat model, understand your risk appetite. No system will be invulnerable, but you may find the advantages of using LastPass + YubiKey outweigh the risks for you. | {
"source": [
"https://security.stackexchange.com/questions/5477",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/793/"
]
} |
5,534 | Possible Duplicate: From a security point: Is it OK to tell your password to an admin? I am working in a small company (20 employees) as a senior software engineer. After having some email problems, our newly employed IT administrator asked me for my password so he could check with the hosting company what exactly was wrong. Without any thought I gave him my password. After maybe 30 minutes, I realized that in my 10 years of working in several companies nobody had asked me for a password, and I found it rather strange. Immediately after, I changed my password. So, are there cases where the password is really needed, when I really have to tell my password to an IT administrator? EDIT The reason I am asking is this: I have heard of stories where admins asked for the user's password, but only on sites like The Daily WTF. Please note that the answers on this question were not given from a security point of view, and as such should not be considered secure. | I have worked with many companies, and the technical answer is that no one should know your password. In practice, you really have to weigh that against any real threats, and why you are giving it to him. Also, if your password is used for many things (you should not do this either), like your banking, then really NO. If you ever have doubts, you can do one of two things: you can temporarily change it, so giving it to him is not an issue, or you can type it in for him. I often ask people to type their passwords in for me. That said, an administrator can change the password when they need to; the only problem is that they then have to get the user to change it again. | {
"source": [
"https://security.stackexchange.com/questions/5534",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3540/"
]
} |
5,539 | I'm working in a small company (20 employees) as a senior software engineer. After having problems with my email, our newly employed IT administrator asked me to send my user password to someone at our hosting company to help them identify the problem. Without any thought I gave him my user password. After 30 minutes, I realized that in my 10 years of working in several companies nobody had asked me for a password, and I found it rather strange. Immediately after that, I changed my password. Are there cases where the password is really needed, when I really have to tell my password to an IT administrator? I have heard of stories where admins asked for the user's password, but only on sites like The Daily WTF, which prompted this question. (Related: "A client wants to tell me his home laptop's password. Must I push him towards a more-complex alternative?") | Short answer: ABSOLUTELY NOT! Your password is between you and your computer alone. No one else. Not your boss, his boss, the system administrator, your bank official, your insurance agent, your ISP support technician, or your cat. Well, your cat you can tell, if she promises not to share it. There is NEVER a good reason to share a password. There are many reasons NOT to. Mostly, because a password is YOUR authentication, and as soon as even ONE other person knows it, it can no longer prove your identity. Any reason your admin comes up with is bogus, either because he is malicious, lazy, misinformed, or incompetent. That said, it may not be his fault, but the fault of his organization.
Either way, incompetence, ignorance, and laziness abound. If an admin, or ANY support technician, asks for your password, the correct response is to LAUGH. Because there's no way they're serious, right? If your admin insists, explain to him that you will document sharing your password with him... and that, based on this, you are going to send nasty emails all around - not about him, but you will claim that they came from him (using your account, in your name, using the password that you just shared with him). Of course he won't be able to prove that he didn't misuse your password... which is the point. No, on second thought, just don't give him your password. It's yours, between you and the computer alone. | {
"source": [
"https://security.stackexchange.com/questions/5539",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3540/"
]
} |
5,586 | What's wrong with this code? $password = "hello";
$password = md5($password);
for ($i = 1; $i < 20; $i++) {
$password = md5($password);
} I don't think an attacker with access to the hashes storage would be able to decrypt any password using more than 2 characters. The attacker would have to decrypt this list of hashes to be able to gain the plaintext password. 69a329523ce1ec88bf63061863d9cb14
0dcd649d4ef5f787e39ddf48d8e625a5
5d6aaee903365197ede6f325eb3716c5
cbe8d0c48ab0ed8d23eacb1621f6c5c3
8fa852c5f5b1d0d6b1cb0fad32596c71
91a84cf929b73800d2ff81da28834c64
45b7d5e4d3fca6a4868d46a941076b72
e5b7d9d10fef132829731255ef644319
b3af6ff5f5c7ae757ca816a6cb62f092
150f3682b2e58d1d0e1f789f9ba06982
3f76626950bf31dbc815c667ca4b2b43
44f4c75517671e12946aab3c8c293f98
442256b098b2d88e93428b08b5155308
7fd8ebc5bdff94f24a10decaa1ab64e8
d04bbc863839b720d932a697f0bf443b
de737c934db23c2d1d1026863e7583a8
d745f6394700c4ab1e9ded0924ea35d2
ce9428b51d3a63431c57a435423877b6
7017f40bdb4f1be1f5fae7dd0fc7b907 With brute force, an attacker should try 36^32 (×19) combinations, which is pretty unachievable. Isn't that true? | The wrong things about your method are: You use way too few iterations (20 is too low, it should be 20000 or more): password processing is still too fast, an attacker with a basic PC will still be able to "try" dozens of millions of passwords per second. There is no salt: an attacker may attack several passwords with very low per-password cost, e.g. with precomputed tables of hashed passwords (in particular rainbow tables). You are in the process of inventing your own cryptography. There is nothing wrong with being inquisitive and trying to understand things, but since there is no sure test for knowing whether a given algorithm is secure or not, inventing your own cryptography is often a recipe for disaster. Don't do it. What you should do is use bcrypt; there is a PHP implementation in the Portable PHP password hashing framework. | {
"source": [
"https://security.stackexchange.com/questions/5586",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3010/"
]
} |
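To make the "salt plus many iterations" advice concrete, here is a minimal sketch in Python rather than the question's PHP; it uses the standard library's PBKDF2 as a stand-in for the bcrypt/phpass approach the answer actually recommends, and the iteration count and storage format are illustrative choices.

```python
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 200_000) -> str:
    """Salted, deliberately slow password hash: random salt + many PBKDF2 rounds."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"{iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 bytes.fromhex(salt_hex), int(iterations))
    return hmac.compare_digest(digest.hex(), digest_hex)

record = hash_password("hello")
assert verify_password("hello", record) and not verify_password("hola", record)
```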
5,605 | This question made me start thinking about password hashing again. I currently use bcrypt (specifically py-bcrypt ). I've heard a lot about PBKDF2 , and scrypt . What I'm wondering is if there are any "more modern" password hashing methods that I might not know about (because they're new, so people don't talk about them as much), or maybe other methods I don't know about. And then going on from there, which one should I use? Most people seem to recommend bcrypt, but I wonder if that's just because it's old (read: well-known). scrypt seems better (variable amount of memory usage). I don't know much about PBKDF2. So if I was making a user-management scheme, which of these should I use? Or should I use something completely different? | In cryptography, "new" is not synonymous to "good". That bcrypt is twelve years old (12 years... is that really "old" ?) just means that it sustained 12 years of public exposure and wide usage without being broken, so it must be quite robust. By definition, a "newer" method cannot boast as much. As a cryptographer, I would say that 12 years old is just about the right age, and anything younger than, say, 5 years, is definitely "too young" for general deployment (of course, these estimates depend on how much exposure the algorithm got; an early, wide deployment, although risky for those who decide to deploy, will go a long way toward building confidence in security -- or revealing weaknesses at an early stage). Scrypt is much newer than bcrypt ; it dates from 2009. The idea is quite smart. Namely, slow password processing is meant to make dictionary attacks N times more expensive for the attacker, while implying that normal processing is N' times more expensive for the honest systems. Ideally, N = N' ; the scrypt author argues that with PBKDF2 or bcrypt, use of ASIC allow an attacker to get a N much lower than N' (in other words, the attacker can use specialized hardware, because he is interested only in breaking a password, and thus hashes many more passwords per second and per spent dollar than the honest system). To fix that, scrypt relies on an algorithm which requires quite some RAM, since fast access RAM is the specialty of the PC, and a sore point of ASIC design. To which extent scrypt is successful in that area remains to be measured; 2009 is recent times, and the figures given by the scrypt author are based on 130 nm ASIC technology and an hypothesis of "5 seconds worth of processing", which is quite beyond what the average user is ready to wait. For practical usage now , I recommend bcrypt. Scrypt notwithstanding, current research on the concept of password processing is more about specialized transforms that allow more than mere password verification. For instance, the SRP protocol allows for a cryptographic key agreement with mutual password-based authentication, and resilient to dictionary attacks (even in the case of an attacker actively impersonating the client or the server); this calls for a bit of mathematical structure, and the password-hashing in SRP involves modular exponentiation. | {
"source": [
"https://security.stackexchange.com/questions/5605",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1983/"
]
} |
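For readers following the recommendation to use bcrypt, a minimal usage sketch is shown below. The question mentions py-bcrypt; this sketch assumes the current "bcrypt" PyPI package and its hashpw/gensalt/checkpw calls, with an illustrative cost factor.

```python
import bcrypt

# Hash a password: gensalt() embeds a random salt and a work factor (cost)
# in the resulting string, so nothing else needs to be stored alongside it.
hashed = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=12))

# Verify a login attempt: checkpw re-derives the hash using the salt and
# cost stored inside 'hashed' and compares the results.
assert bcrypt.checkpw(b"correct horse battery staple", hashed)
assert not bcrypt.checkpw(b"wrong guess", hashed)
```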
5,637 | I think it definitely isn't, because XSS which isn't saved anywhere would damage ONLY the attacker. Am I right, or are there any cases where XSS could hurt a non-DB application? (I mean the data is not saved anywhere) | Very wrong; the basic form of XSS is Reflected XSS, where the payload is sent in the URL (for example) from the victim himself. This is most commonly used in phishing attacks, where the attacker crafts the malicious link and mails it in social engineering attacks to his victims, or posts it on public forums, etc. In general XSS has nothing to do with the database (unless it's Persistent / Stored XSS). See XSS on OWASP for more details. | {
"source": [
"https://security.stackexchange.com/questions/5637",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3010/"
]
} |
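A reflected XSS hole needs no database at all, as the answer says. The hypothetical Flask handler below illustrates the pattern (the route and parameter names are invented for this example): the query parameter is echoed into HTML without escaping, so a crafted link executes script in the victim's browser.

```python
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search")
def search():
    q = request.args.get("q", "")
    # Vulnerable: reflects attacker-controlled input straight into the page.
    # A link like /search?q=<script>stealCookies()</script> targets the victim.
    return f"<h1>Results for {q}</h1>"

@app.route("/search-safe")
def search_safe():
    q = request.args.get("q", "")
    # Fixed: HTML-escape the reflected value (or use auto-escaping templates).
    return f"<h1>Results for {escape(q)}</h1>"
```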
5,662 | According to the documentation for the "diskscrb" command for wiping conventional hard drives: http://www.forensics-intl.com/diskscrb.html "Conforms to and exceeds the Government Standard set forth in DoD 5220.22-M. Can overwrite ambient data areas 9 times. (Each pass involves 3 separate writes followed by a verify pass.) This helps eliminate the potentials for the recovery of Shadow Data." So it's ok to wipe the HDDs at least 9 times as mentioned. But how about SSDs and USB flash drives? Do I have to wipe the data on them 9 times, or is it only needed once? Here is what I use to regularly delete the data from my memory cards, USB flash drives, etc. (I start it in the evening, and stop in the morning, e.g.: it overwrites my USB flash drive 10 times): loopcountdd=0;
while [ 1 = 1 ];
do (( loopcountdd= $loopcountdd + 1 ));
dd if=/dev/urandom bs=4096 | pv | dd bs=4096 of=/dev/XXX;
echo "overwritten: $loopcountdd x";
done This question was IT Security Question of the Week . Read the Aug 3, 2011 blog entry for more details or submit your own Question of the Week. | Best stop doing that. Never overwrite an SSD/flash storage device completely in order to erase it, except as a last resort. NVRAM has a limited amount of write cycles available. At some point, after enough writes to an NVRAM cell, it will completely stop working. For modern versions, we're in the ballpark of an estimated lifespan of 3,000 write cycles . Furthermore, internally SSDs look nothing like traditional hard disks. SSDs have the following unique properties: Spare area, often on the order of 8% - 20% of the total flash is set aside for wear leveling purposes . The end user cannot write to this spare area with usual tools , it is reserved for the SSD's controller. But the spare area can hold (smaller) amounts of old user data. A "Flash Translation Layer", FTL. How your operating system 'sees' the SSD ( LBA addresses) and the actual NVRAM address space layout has no correlation at all. Very heavy writing to a consumer-grade SSD may bring the controller's garbage collection algorithm behind, and put the controller into a state of reduced performance . What happens then depends on the controller. In an extreme worst case scenario, it cannot recover performance. In a much more likely scenario it will slowly regain performance as the operating system sends "trim" commands . Lastly, from the conclusion of the paper "Reliably Erasing Data From Flash-Based Solid State Drives" : "For sanitizing entire disks, [...] software techniques work most, but not all, of the time." So when you're completely overwriting flash storage, you may be performing an effective secure wipe -- but, you may also be missing some bits. And you're certainly consuming quite much of the drive's expected life span. This isn't a good solution. So, what should we be doing? The 'best' modern drives support a vendor-specific secure erase functionality. Examples of this are Intel's new 320 series, and some SandForce 22xx based drives, and many SSDs which are advertised as having "Full Disk Encryption" or "Self Encrypting Drive". The method is generally something along the lines of: The SSD controller contains a full hardware crypto engine, for example using AES 128. Upon first initialization, the controller generates a random AES key, and stores this in a private location in NVRAM. All data ever written to the drive is encrypted with the above AES key. If/when an end user performs a secure wipe, the drive discards the AES key, generates a new one, and overwrites the old AES key position in the NVRAM. Assuming the old AES key cannot be recovered this effectively renders the old data unrecoverable. Some drives don't have the above, but do support the ATA Secure Erase commands. This is were it gets more tricky -- essentially we're relying on the drive manufacturer to implement a 'strong' secure erase. But it's a black box, we don't know what they're actually doing. If you need high security, then you should not rely on this, or at least you should read the tech docs and/or contact the drive manufacturer to verify how secure their method is. A fair guess as to what they're doing / ought to be doing is that: While the drive isn't using a full cryptographic cipher such as AES, it is still using extensive data compression algorithms & checksumming & RAID-like striping of data across multiple banks of NVRAM. (All modern high-performance SSDs use variants of these techniques.) 
This obfuscates the user data on the drive. Upon receiving an ATA Secure Erase command, the drive erases its "Flash Translation Layer" table, and other internal data structures, and marks all NVRAM as freed. My personal recommendations: If you just need an insecure wipe of an SSD, then use the manufacturer's end user tools, or use the ATA Secure Erase command via for example hdparm on Linux. If you need secure wipe then either: Only use drives which explicitly advertise secure wipe via strong (AES) encryption, and run the manufacturers secure wipe. and/or: Ensure that all data you write to the drive is encrypted before hitting the drive. Typically via software full disk encryption such as PGP Whole Disk Encryption, TrueCrypt, Microsoft BitLocker, BitLocker To Go, OSX 10.7 FileVault or LUKS. or: Physically destroy the drive. | {
"source": [
"https://security.stackexchange.com/questions/5662",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2212/"
]
} |
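The "discard the AES key" style of secure erase that the answer describes can be illustrated with a toy sketch. This is only a conceptual illustration using the third-party "cryptography" package, not how any real SSD controller is implemented: once the key is gone, the stored ciphertext is useless.

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()            # stands in for the controller's hidden AES key
stored_blob = Fernet(key).encrypt(b"user data written to flash")

# Normal operation: the controller holds the key and decrypts transparently.
assert Fernet(key).decrypt(stored_blob) == b"user data written to flash"

# "Secure erase": throw the old key away and generate a new one.
key = Fernet.generate_key()
try:
    Fernet(key).decrypt(stored_blob)   # the old ciphertext is now unrecoverable
except InvalidToken:
    print("old data is unreadable without the discarded key")
```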
5,668 | I'm not so bad at mathematics: I know what p-lists and p-combinations are, I know matrix algebra, I know what an XOR is, I know how to tell if a number is prime, etc.: I'm not the programmer who hates math because he is bad at it, but I don't have a PhD either. I'm not bad at computer science either, well at least in terms of general computer science culture: I know C, C++ (both learned at school), Python, some Haskell, what text encodings are out there, how Unicode works, I know how a file can be compressed or encrypted, what common algorithms are out there (Diffie-Hellman, the LZMA algorithm, DES, AES, Serpent, Blowfish, SHA, MD5...). I got very interested in cryptography on Wikipedia and other websites, but I don't think Wikipedia can teach me cryptography without detailing algorithms or without practice; for example I know what symmetric cryptography is and what asymmetric (public/private key) cryptography is. I'd like to learn how to properly and securely implement the most popular algorithms, and how to make them reliable: a book or good tutorials or courses. I've quickly searched on Khan Academy, but this subject is not trivial and requires knowledge of math, computer science and/or electronics. I don't want to read pages and pages of just theory about basic things I might already know or that might not really be relevant to today's cryptography, like a paper written by a researcher; just something practical, with problems, and cryptanalysis problems, for students. I currently have much free time, I'm only 26, and I'm sure I can learn this stuff, not only for the pay increase it can bring me but also because I've always been fascinated by cryptography without actually understanding it; I just can't find any good material. | (LZMA is a compression algorithm, not cryptographic.) For the purpose of implementing cryptographic algorithms, the generic method is getting the relevant descriptive standard, grabbing your keyboard, and trying. Most standards include "test vectors", i.e. sample values which let you know whether your implementation returns the correct answers. At that point, things differ, depending on what kind of algorithm you are considering. Symmetric cryptography: Symmetric algorithms cover symmetric encryption, hash functions, and message authentication codes (MAC). You do not need to know much mathematics to handle these; most of it is about additions of 32-bit and 64-bit integers (that's modular arithmetic, with 2^32 or 2^64 as the modulus) and bitwise operations (XOR, AND...). Such code is usually done in C. Good performance is achieved by having some notions of how the C compiler will understand and translate the code into instructions for the CPU; knowledge of assembly is not strictly mandatory, but quite useful. An important parameter is cache memory: loop unrolling is usually a good tool, but if you overdo it, performance drops sharply. I suggest beginning by implementing the classical hash functions (the SHA family, described in FIPS 180-3) and trying to make them fast. As a comparison point, get OpenSSL and use the command-line tool openssl speed to see what kind of performance can be obtained (this tool is already included in any decent Linux distribution, and it works on Windows and MacOS too). For instance, on my PC: $ openssl speed sha256
Doing sha256 for 3s on 16 size blocks: 4842590 sha256's in 3.00s
Doing sha256 for 3s on 64 size blocks: 2820288 sha256's in 2.99s
Doing sha256 for 3s on 256 size blocks: 1262067 sha256's in 2.99s
Doing sha256 for 3s on 1024 size blocks: 395563 sha256's in 3.00s
Doing sha256 for 3s on 8192 size blocks: 53564 sha256's in 3.00s
OpenSSL 0.9.8o 01 Jun 2010
built on: Wed Feb 23 00:47:27 UTC 2011
options:bn(64,64) md2(int) rc4(ptr,char) des(idx,cisc,16,int) aes(partial) blowfish(ptr2)
compiler: cc -fPIC -DOPENSSL_PIC -DZLIB -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN
-DHAVE_DLFCN_H -m64 -DL_ENDIAN -DTERMIO -O3 -Wa,--noexecstack -g -Wall -DMD32_REG_T=int
-DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM
available timing options: TIMES TIMEB HZ=100 [sysconf value]
timing function used: times
The 'numbers' are in 1000s of bytes per second processed.
type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes
sha256 25827.15k 60367.37k 108056.57k 135018.84k 146265.43k which means that OpenSSL includes a SHA-256 implementation hand-optimized in assembly, which achieves 146 MB/s when processing 8 kB messages. On the same machine, a pure C implementation ought to get to at least 130 MB/s. For an example of how hash functions are implemented in C and Java, and how hashing speed can be measured in a meaningful way, see sphlib . Afterwards, you can try symmetric encryption, in particular the AES ( FIPS 197 ). It helps a bit to know what a finite field of characteristic 2 is, but the standard is clear enough to guide you through a perfunctory implementation. Then, try to optimize things. OpenSSL can serve as a comparison point, and get inspiration from the AES implementations of Brian Gladman . As for security, there has been some concern about what key-dependent information could be leaked through use of look-up tables in the implementation (try to search for "AES cache timing attack"); trying to reproduce that kind of attack is a very good exercise (mind you, it is not easy, but if you succeed in demonstrating it in lab conditions then you will have learned a good deal on how cryptographic implementations work). Asymmetric cryptography: Asymmetric cryptography is about the algorithms which involve more than one party. This includes asymmetric encryption (RSA, ElGamal), key exchange (Diffie-Hellman) and digital signatures (RSA again, DSA...). The maths contents are much bigger there, and optimization is a much broader subject than for symmetric cryptography, because there are several ways to implement each algorithm, instead of a single "obvious" implementation path. A good reference is the Guide to Elliptic Curve Cryptography . Although it is mainly about elliptic curves, it includes a general treatment of the implementation of operations in finite fields, and it so happens that this is the sample chapter which can be downloaded for free at the URL linked to above. So get it and read it now. Another indispensable reference is the Handbook of Applied Cryptography , which can be freely downloaded; chapter 14, in particular, is about efficient implementation. RSA is simple enough, and is adequately described in PKCS#1 . There are possible timing attacks on RSA, which are countered by masking (yes, this is a paper "written by a researcher", but in the subject of cryptography, researchers are the people who understand what is going on). If you get the hang of modular arithmetic, you can try to implement DSA ( FIPS 186-3 ). Diffie-Hellman is mathematically simple (it needs nothing more than is needed to implement DSA) but its describing standard (ANSI X9.42) is not downloadable for free. Elliptic curves are a popular future replacement for modular arithmetic; EC variants of DSA and Diffie-Hellman are faster and believed more secure with shorter public keys. But that's more mathematics. There again, the Guide to Elliptic Curve Cryptography is the must-have reference. There are other kinds of asymmetric cryptography algorithms, e.g. the McEliece cryptosystem (asymmetric encryption; there is a variant for signatures described by Niederreiter ) and algorithms based on lattice reduction . But they do not (yet) benefit from published standards which take care of the implementation details, and there are not so many existing implementations to compare with. You'd better begin with RSA and DSA. Cryptanalysis: Cryptanalysis uses a much higher dose of mathematics than implementation. 
For symmetric cryptography, the two main tools are differential and linear cryptanalysis; see this tutorial . My own path to cryptography began by implementing DES, and then implementing Matsui's linear cryptanalysis on a reduced version of DES (8 rounds instead of 16). DES is described in FIPS 46-3 , which is officially withdrawn, but still available. From DES can be defined Triple-DES (three DES instances, with three distinct keys, the middle one being use in "decryption" direction) and there are published test vectors for Triple-DES (also known as "TDES", "3DES", or sometimes "DES", which is arguably confusing). For asymmetric algorithms, cryptanalysis mostly involves working on the mathematical structure of the keys, e.g. by trying to factor big non-prime integers in order to break RSA variants. Mathematics here range from the non-trivial to the totally unimaginable, so this might be too steep a learning curve to begin cryptography by trying to break RSA... | {
"source": [
"https://security.stackexchange.com/questions/5668",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1124/"
]
} |
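The suggestion to benchmark your own hash implementation against openssl speed can be mirrored with a small timing harness. The sketch below measures the throughput of Python's built-in SHA-256 for the same block sizes; the numbers will fall well below a hand-optimized C/assembly implementation, which is rather the point of the comparison.

```python
import hashlib, time

def throughput(block_size: int, seconds: float = 1.0) -> float:
    """Return MB/s of SHA-256 over messages of the given size."""
    block = b"\x00" * block_size
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        hashlib.sha256(block).digest()
        count += 1
    elapsed = time.perf_counter() - start
    return count * block_size / elapsed / 1e6

for size in (16, 64, 256, 1024, 8192):
    print(f"sha256, {size:5d}-byte blocks: {throughput(size):8.1f} MB/s")
```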
5,701 | A little background info here: I'm a self-taught web developer with very little experience outside of html/css, and the company I work for has hired a third party web development team to design us an e-commerce site. Anyway, I was beta testing the site today using the TamperData Firefox add-on, and I found two major design flaws which both involve HTTP headers. The first flaw was that when our site asks the user to choose a freight option (ground, express, etc.) the site passes the calculated freight value back to the server in an HTTP header. By manipulating the header, I was able to modify (see: erase) the freight value and so the backend interpreted the calculated freight value as 0, and so it didn't charge me freight! The second flaw, however, is far worse... When the total product value is calculated and I "checkout", all of the transaction information (CC#, CVV2, Expiry, $ total) gets passed to a third party merchant processor via an HTTP header. Once again, I used TamperData and was able to manipulate the header so that the $ value being sent to the merchant was something trivial (I choose $1 for the test). The fact that I -- with absolutely no experience in website security or server side coding -- was able to find these severe flaws has me completely scared, because what does that say about the programmers who designed this? Sure, they will probably fix these two issues somehow. BUT, If sending credit card data in a plaintext HTTP header seemed like a good idea to them, will their new solution realistically be any more secure? What if there are other, completely separate attack vectors that I missed? Thus, my questions for you: Given the information above, what steps would you take to avoid these security holes? (so I know what to request our programmers to do) What books, sites, and/or resources are available so I may teach myself about web security, and how to do actual penetration testing? It will take some time for my company to arrange for an outside security audit, and in the interim I want to fix as much of the site as possible. UPDATE : As I said in a comment below, I am interested to know exactly how secure it is to transmit the payment info in an http header to the cc merchant (we are using an https connection if that matters). Can third parties eavesdrop or intercept these packets? And if they can, is that a realistic scenario, or is it highly unlikely? I ask this because I don't yet have a good understanding of how transmitting data via HTTP headers works, at least on a technical level. This question was IT Security Question of the Week . Read the Aug 12, 2011 blog entry for more details or submit your own Question of the Week. | A couple of suggestions: 1) Don't build a site from the ground up, unless a new kind of e-commerce is your secret sauce. There are plenty of solutions out there that are tried and true--ebay, wordpress with a shopping cart plugin, drupal with plugins, etc. Rolling your own is a quick way to get hacked. 2) Be sure to redirect to a secure payment processor before collecting any card or identity information. Both PayPal and Google Checkout offer great portals with APIs that make it easy to sell stuff safely, and redirect back to your site. 3) Many of the hosting companies (GoDaddy, volusion, etc) offer turn-key setups, so that all you have to do is fill a catalog, pick a style, ship stuff and go to the bank. | {
"source": [
"https://security.stackexchange.com/questions/5701",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3676/"
]
} |
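Beyond the suggestions above, the root cause of both flaws is trusting client-supplied prices. A hypothetical server-side sketch of the fix is shown below (the route, catalog, and field names are invented for illustration): the server recomputes freight and totals from its own data and ignores any amounts sent by the browser.

```python
from flask import Flask, request, abort, jsonify

app = Flask(__name__)

PRICES = {"widget": 19.99, "gadget": 34.50}    # server-side catalog
FREIGHT = {"ground": 5.00, "express": 15.00}   # server-side freight table

@app.route("/checkout", methods=["POST"])
def checkout():
    order = request.get_json(force=True)
    try:
        items = [(item["sku"], int(item["qty"])) for item in order["items"]]
        freight = FREIGHT[order["shipping"]]   # never read a price from the client
        subtotal = sum(PRICES[sku] * qty for sku, qty in items)
    except (KeyError, TypeError, ValueError):
        abort(400)
    total = round(subtotal + freight, 2)
    # The amount sent to the payment processor comes from this server-side
    # calculation only, never from a header or hidden field in the request.
    return jsonify({"subtotal": round(subtotal, 2), "freight": freight, "total": total})
```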
5,734 | I want to put some parts of my secret data into specific file with Steganography method. Is this method as safe as other encryption methods like RSA or SHA? | NO, it isn't safe at all and steganography is not encryption! Encryption means that the method is known, but that's not a problem, the data can't be decrypted without a key. Bad luck for interceptors when a strong method and a strong key have been used. The message is useless for them. Steganography means hiding data in other data and it relies on the method used to hide the data being unknown to interceptors! It isn't encryption at all, but it can be combined with encryption. Simple/public domain steganography techniques can be detected quite easily, if the interceptor expects a hidden message. Pure steganography (just the hiding process) is security by obscurity , which is a bad practice. However, sometimes the combination of steganography and cryptography can be desirable, for example when you don't want anyone to know that a secret message has been sent at all. Interceptors won't be able to prove it, when they can't break the encryption. | {
"source": [
"https://security.stackexchange.com/questions/5734",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3768/"
]
} |
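To see why hiding is not the same as encrypting, here is a toy least-significant-bit scheme of the kind the answer calls simple/public-domain steganography, operating on a plain byte buffer rather than a real image file, purely for illustration: anyone who knows or guesses the method reads the message straight out, because no key is involved.

```python
def hide(cover: bytes, message: bytes) -> bytes:
    """Store each message bit in the least significant bit of a cover byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract(stego: bytes, length: int) -> bytes:
    bits = [b & 1 for b in stego[:length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length))

cover = bytes(range(256)) * 4          # stand-in for image pixel data
stego = hide(cover, b"attack at dawn")
# Knowing the method is enough to recover the message -- encrypt first if
# confidentiality matters.
assert extract(stego, len(b"attack at dawn")) == b"attack at dawn"
```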
5,813 | I recently forgot my password for our cable provider online account, only to discover that they sent it to us via plain text in an email. I quickly sent an email to customer support asking them if they were storing passwords in plain text in their database. I actually got a quick response back from one of their software engineers who said that due to the "application's design" it was necessary to hash the passwords in a recoverable format. I didn't send an email back to ask if they were using a salt, but in general, I thought they were adhering to the lowest common denominator with regard to password security and recovery. Am I in the wrong here? If they are using a strong encryption method, is this perfectly acceptable? | For an ISP it is quite likely that they store your password in plain text or using a reversible encryption. This reversible encryption is not a hash, despite the engineer calling it one. ISPs tend to not use one-way hashes because a number of old protocols use the password as part of a challenge-response digest authentication. Most notable is APOP, which is an extension to the old Post Office Protocol. In normal POP the username and password are transmitted in clear text, which is obviously bad. So people thought of an extension which prevented sniffing and replaying attacks: The server sends a unique identifier (for some strange reason it is called a timestamp in the specification although it is more than that). The client concatenates this identifier and the password before calculating the MD5 hash. The server needs to do the same calculation; therefore it needs the clear password. This protocol is outdated; POP over SSL should be used instead. But it is still in common use. Furthermore, ISPs often offer a number of services, and getting all of them to use a central authentication mechanism is a huge challenge. So often passwords are replicated to those servers instead. Since this replication must be reproducible at any time for reliability reasons, the clear text password is often stored. If central authentication is not possible, it would still be preferable to store the different encrypted formats at the replication source instead of the plain password. It is obviously extremely bad practice to give those passwords out to customers and first-level support personnel. Sending them in plain email makes it even worse. | {
"source": [
"https://security.stackexchange.com/questions/5813",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
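The APOP mechanism described above is easy to show concretely: the client's response is MD5 over the server's timestamp challenge concatenated with the password, which the server can only check if it can rebuild the same string, i.e. if it still has the cleartext (or reversibly encrypted) password. A minimal sketch with made-up challenge and password values:

```python
import hashlib

def apop_digest(challenge: str, password: str) -> str:
    """APOP response: MD5 over the server's challenge string plus the password."""
    return hashlib.md5((challenge + password).encode()).hexdigest()

challenge = "<12345.67890@pop.example.net>"   # made-up server challenge
password = "hunter2"                          # made-up mailbox password

client_response = apop_digest(challenge, password)
# The server must redo the exact same computation, so it needs the password
# itself -- a salted one-way hash stored server-side would not be enough.
assert client_response == apop_digest(challenge, password)
```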
5,906 | I've observed that several of our users are ignoring messages sent from IT Security managers, and also the system generated "You just sent a virus" notifications. The problem seems to be among people who are not computer savvy, who are in no way hostile to IT SEC. They are simply not "computer" people. What guidance is there to ensure that the IT Managers and System Notifications are understood, and acted upon? I'd like to craft a single message for the entire user base, and not take on responsibility for hand-holding the "special" folks. My hope is that I can develop a set of email best practices that are used when communicating with all end users, for the purpose of sending IT Security user notifications through email. How should I lay out the thoughts behind the message? Are HTML messages more effective? How so? Are there any cut-and-paste samples? Does the "From" address matter? What should the subject say? Examples of notifications include (but not limited to): Automatic Email Messages from Antigen or Forefront AV systems Revisions to IT security policy General notifications that are simply informative "Maintenance will be performed at 11pm-6am. Expect a disruption in service" General notifications that are meant to be read and acted upon. They do apply to the end user. "Close all applications and log off for patching" Other notifications that may or may not apply to the end user. SPAM Quarantine Summary Emails: "Enclosed is a list of quarantined messages..." "A security patch for an old version of software that might not be installed" This question was IT Security Question of the Week . Read the Aug 26, 2011 blog entry for more details or submit your own Question of the Week. | A small trick I learned years ago - lay your email out like this: Short Version Small number of very short succinct points If X, then you need to do this Else, then you need to do that (or don't need to do anything) Long Version or Full Details ...and here you lay out whatever full version you want. 97% of your users will never read the long version, so make the short version count. However , the key here is that most users will read the short version if they're given a choice between that and the long version . When you put that "Short Version" section header in, you're enticing them to read that because they feel like they can "get away" with just reading the short version. It's, like, psychology or something. Many of your users still won't read messages no matter what you do . I've gotten better hit rates with this method than not, though. | {
"source": [
"https://security.stackexchange.com/questions/5906",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
6,015 | I recently set up vsftpd on my personal server for sharing files over FTP. In the vsftpd.log file, I see hundreds of failed attempts to log in with usernames like "adminitrator", "adminitrator1", "adminitrator2", "adminitrator123" etc. I am surprised because I just set up my FTP server and I thought no one would know about its existence. I did not tell anyone that my FTP server exists. I guess with port scanning tools, one could have found that the FTP port is open. However, I wonder how someone would have got my IP. I downloaded a torrent file; would that expose my IP address? Is it quite common for attackers to harvest IP addresses from torrent trackers or some other service? Any idea how an attacker gets IP addresses? (like for spamming - spambots are used to harvest email IDs) Any general pointers for a newcomer on securing the server (books, videos, tutorials, blogs, etc.)? | You don't need to find out how they got your IP - the entire Internet is constantly being scanned by malicious individuals, bots etc. If you have an FTP server on the Internet, one of these scans will find it and a whole series of attack attempts will commence. Your downside is - you can't secure an FTP server. FTP just wasn't designed to provide encryption or strong authentication, so it has been deprecated. The recommendation is to replace it with one of the secure alternatives such as SFTP, or only provide access to it via SSH. The good thing is - SFTP is pretty much a drop-in replacement on most operating systems. Update - actually, you are using vsftpd, so you can configure FTPS to add authentication and encryption. Check out http://viki.brainsware.org/?en/Explicit_FTPS | {
"source": [
"https://security.stackexchange.com/questions/6015",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3185/"
]
} |
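If you want to see who is hammering the server while you migrate to SFTP/FTPS, a quick log summary helps. The sketch below assumes vsftpd's usual log lines, which mark failures with "FAIL LOGIN" and put the client address in a quoted field; adjust the pattern and path if your log format differs.

```python
import re
from collections import Counter

FAIL = re.compile(r'FAIL LOGIN.*?"(?P<ip>\d{1,3}(?:\.\d{1,3}){3})"')

def top_attackers(logfile: str, n: int = 10) -> list[tuple[str, int]]:
    """Count failed-login attempts per client IP in a vsftpd-style log."""
    counts = Counter()
    with open(logfile, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = FAIL.search(line)
            if m:
                counts[m.group("ip")] += 1
    return counts.most_common(n)

for ip, hits in top_attackers("/var/log/vsftpd.log"):
    print(f"{ip:15s} {hits} failed logins")
```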
6,050 | During a Q&A period at DEFCON this year, one member of the audience mentioned that we're using "fake salt" when concatenating a random value and a password before hashing. He defined "real salt" as something seen in the original Unix crypt implementation that changed how the algorithm ran, thus requiring different code to be used to crack each password and greatly troubling GPU attacks. Is there any merit to the "real" and "fake" salt discussion? | The distinction is arbitrary. A salt-aware algorithm works by taking input data and scrambling it in various ways, and there is no method for inserting the salt which is more or less "fake" than any other. Trying to devise a password processing algorithm which is efficient on a general purpose CPU but does not scale well on a GPU (or a custom FPGA or ASIC) is a real research topic. This is what scrypt is about, and arguably bcrypt already does a good job at it. The idea here is to use accesses in tables which are constantly modified; table access in RAM is something that general purpose CPU are good at, but which makes things difficult for GPU (they can do it, but not with as full parallelism as what is usually obtained with a GPU). | {
"source": [
"https://security.stackexchange.com/questions/6050",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/836/"
]
} |
6,058 | In the question about real vs. fake salt , the answers describe how real salt 'perturbs the encryption algorithm.' I know roughly how initialization vectors work; is this the same concept, or something different entirely? | A salt and an initialization vector are mostly the same thing in the following sense: they are public data, which should be generated anew for each instance (each hashed password, each encrypted message). A salt is about being able to use the same password several times without opening weaknesses; or, if you prefer, preventing an attacker from sharing password attack costs in case the same password could have been used on several instances -- which is all what precomputed (rainbow) tables are about. The point of an IV in, say, symmetric encryption with CBC, is to tolerate the use of the same key to encrypt several distinct messages. The name "initialization vector" hints at a repetitive process over a given internal state, the IV being what the state is initialized at. For instance, the MD5 hash function is defined as repeated action of a compression function which takes as input the current state (128 bits) and the next message block (512 bits), and outputs the next state value; at the beginning, the state is initialized to a conventional value which is called "the IV". In that sense, most "salts" used in password processing are not "initialization vectors". But this is a bit of an overinterpretation of the expression. Still, naming things is mostly a matter of Tradition. A "salt" is a kind of IV which: is involved in some processing of a password; should be distinct for each processing instance (it cannot be a fixed conventional value); only needs uniqueness ("it is not repeated"), not uniform selection among the space of possible salts (although uniform random selection is a good and cheap way to get uniqueness with overwhelming probability, assuming that the salts are long enough). The particulars (how the salt/IV is exactly inserted and at what point in the algorithm) are a red herring. | {
"source": [
"https://security.stackexchange.com/questions/6058",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3677/"
]
} |
6,095 | How accurate is this XKCD comic from August 10, 2011? I've always been an advocate of long rather than complex passwords, but most security people (at least the ones that I've talked to) are against me on that one. However, XKCD's analysis seems spot on to me. Am I missing something or is this armchair analysis sound? | I think the most important part of this comic, even if it were to get the math wrong ( which it didn't ), is visually emphasizing that there are two equally important aspects to selecting a strong password (or actually, a password policy, in general): Difficulty to guess Difficulty to remember Or, in other words: The computer aspect The human aspect All too often, when discussing complex passwords, strong policies, expiration, etc (and, to generalize - all security), we tend to focus overly much on the computer aspects, and skip over the human aspects. Especially when it comes to passwords, (and double especially for average users ), the human aspect should often be the overriding concern. For example, how often does strict password complexity policy enforced by IT (such as the one shown in the XKCD ), result in the user writing down his password, and taping it to his screen ? That is a direct result of focusing too much on the computer aspect, at the expense of the human aspect. And I think that is the core message from the sage of XKCD - yes, Easy to Guess is bad, but Hard to Remember is equally so. And that principle is a correct one. We should remember this more often, AKA AviD's Rule of Usability : Security at the expense of usability comes at the expense of security. | {
"source": [
"https://security.stackexchange.com/questions/6095",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/416/"
]
} |
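For readers who want to check "the math" the answer alludes to, the comic's entropy estimates are a one-liner each. The sketch below uses the comic's own assumptions (a 2048-word list for the passphrase, roughly 28 bits for the "Tr0ub4dor&3"-style password, 1000 guesses per second) rather than measuring any real password.

```python
import math

guesses_per_second = 1000

# Four words chosen uniformly from a 2048-word list (the comic's assumption).
passphrase_bits = math.log2(2048 ** 4)          # = 44.0 bits

for label, bits in (("complex password (comic's ~28 bits)", 28),
                    ("four random common words", passphrase_bits)):
    days = 2 ** bits / guesses_per_second / 86400
    print(f"{label}: {bits:.0f} bits, ~{days:,.0f} days to exhaust")
```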
6,141 | Cryptographic primitives usually assert some security level given as number of operations to mount an attack. Hash functions, for example, give different security levels for collision attacks, preimage attacks and second preimage attacks. From these, "safe" key sizes are derived for different primitives. There are many different recommendations for safe key sizes and many different means of estimating future capabilities in performing computation. For example, www.keylength.com has a lot of these recommendations combined. What I'm looking for, however, is the number of simple operations that can be obviously seen as out of reach for all humanity for the foreseeable future - or actually, the lowest such value that is still believable. It is very obvious that 2^256 simple operations is something that will never be reached. It is also very obvious that 2^64 simple operations can be reached as it already has been. Many of the recommendations seem to calculate 2^128 as a number that would be safe for 30 years or more. So the value I am looking for is likely between 2^128 and 2^256. I am guessing 2^160 or 2^192 might be safely out of reach. But I want concrete arguments that can be easily reasoned about. I'd love to see arguments that are based on simple laws of physics or relations to concrete constants about the universe. For example, Landauer's principle could be used. Note: the actual simple operations used are not relevant here - they might be operations on a quantum computer, or hash invocations, or whatever. | As a starting point, we will consider that each elementary operation implies a minimal expense of energy; Landauer's principle sets that limit at 0.0178 eV, which is 2.85×10^-21 J. On the other hand, the total mass of the Solar system, if converted in its entirety to energy, would yield about 1.8×10^47 J (actually that's what you would get from the mass of the Sun, according to this page, but the Sun takes the lion's share of the total mass of the Solar system). This implies a hard limit of about 6.32×10^67 elementary computations, which is about 2^225.2. (I think this computation was already presented by Schneier in "Applied Cryptography".) Of course this is a quite extreme scenario and, in particular, we have no idea about how we could convert mass to energy -- nuclear fission and fusion convert only a tiny proportion of the available mass to energy. Let's look at a more mundane perspective. It seems fair to assume that, with existing technology, each elementary operation must somehow imply the switching of at least one logic gate. The switching power of a single CMOS gate is about C×V^2 where C is the gate load capacitance, and V is the voltage at which the gate operates. As of 2011, a very high-end gate will be able to run with a voltage of 0.5 V and a load capacitance of a few femtofarads ("femto" meaning 10^-15). This leads to a minimal energy consumption per operation of no less than, say, 10^-15 J. The current total world energy consumption is around 500 EJ (5×10^20 J) per year (or so says this article). Assuming that the total energy production of the Earth is diverted to a single computation for ten years, we get a limit of 5×10^36, which is close to 2^122. Then you have to take into account technological advances. Given the current trends of ecological concerns and peak oil, the total energy production should not increase much in the years to come (say no more than a factor of 2 until year 2040 -- already an ecologist's nightmare).
On the other hand, there is technological progress in the design of integrated circuits. Moore's law states that you can fit twice as many transistors on a given chip surface every two years. A very optimistic view is that this doubling of the number of transistors can be done at constant energy consumption, which would translate to halving the energy cost of an elementary operation every two years. This would lead to a grand total of 2^138 in year 2040 -- and this is for a single ten-year-long computation which mobilizes all the resources of the entire planet. So the usual wisdom of "128 bits are more than enough for the next few decades" is not off (it all depends on what you would consider to be "safely" out of reach, but my own paranoia level is quite serene with 128 bits "only"). A note on quantum computers: a QC can do quite a lot in a single "operation". The usual presentation is that the QC performs "several computations simultaneously, which we filter out at the end". This assertion is wrong in many particulars, but it still contains a bit of truth: a QC should be able to attack n-bit symmetric cryptography (e.g. symmetric encryption with an n-bit key) in 2^(n/2) elementary quantum operations. Hence the classic trick: to account for quantum computers (if they ever exist), double the key length. Hence AES with a 256-bit key, SHA-512... (the 256-bit key of AES was not designed to protect against hypothetical quantum computers, but that's how 256-bit keys get justified nowadays). | {
"source": [
"https://security.stackexchange.com/questions/6141",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/677/"
]
} |
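The two back-of-the-envelope bounds above are easy to reproduce; the sketch below redoes the arithmetic (Landauer limit versus Solar-system mass-energy, and the 10^-15 J-per-gate-switch estimate versus ten years of world energy production) so the 2^225 and 2^122 figures can be checked.

```python
import math

# Landauer limit at the answer's figure, and the Sun's mass-energy.
landauer_j = 2.85e-21          # joules per elementary operation
solar_system_j = 1.8e47        # joules from converting the Solar system's mass

ops_physics = solar_system_j / landauer_j
print(f"hard physical limit  : 2^{math.log2(ops_physics):.1f}")   # ~ 2^225

# Mundane bound: CMOS gate switching vs. ten years of world energy output.
gate_switch_j = 1e-15          # joules per logic-gate switch (2011-era estimate)
world_energy_j = 5e20 * 10     # 500 EJ/year for ten years

ops_mundane = world_energy_j / gate_switch_j
print(f"ten-year Earth budget: 2^{math.log2(ops_mundane):.1f}")   # ~ 2^122
```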
6,272 | During traveling, especially in poor countries, sometimes you are going to need to use the internet at an internet cafe and you really can't be sure whether anyone has installed anything to listen to your keystrokes. I've been told about this method, although I'm not sure if it works: if you're at an internet cafe, one way to fool any keyloggers is to avoid typing your password in one go. Instead, type part of your password, then insert some random letters, then use your mouse to highlight the random letters, and then overwrite them. This supposedly fools the keyloggers into thinking your password has those random letters in it. So just wondering, does this actually work? Are there any other better methods I could use? | If you do not trust the medium: do not enter sensitive information. Typing in your password in some obscure way is just that: security through obscurity, which never works. Other than that: you might be able to achieve some level of security in such open places if the password you entered changes for the next login; see one-time passwords. (Note: 2-factor authentication is a hybrid scheme where you already know one half of the password (which stays constant/static) and then you get the other half by SMS or any other means; that 2nd half is a one-time password.) | {
"source": [
"https://security.stackexchange.com/questions/6272",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2279/"
]
} |
6,287 | I'm creating a webapp, and part of my authentication design concerns password length. Should I put a maximum length in place (say, 50 characters)? Or should I just set a minimum length (currently 6)? Are there problems with not putting in a maximum length? | You should hash the passwords using a secure algorithm instead of storing them in clear text. The hash function will result in a constant output size regardless of the length of the input string. Using a minimum length and perhaps some other quality rules is a good idea because it helps a little against laziness. If you are afraid of Denial of Service attacks, you could put a server-side limit for ordinary input fields into place, for example 1000 bytes. It's unlikely that someone wants to use such a long password. (A short sketch of this approach follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/6287",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4276/"
]
} |
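The sketch promised in the previous answer, assuming the Python bcrypt library; the 6-character minimum and 1000-byte cap are just the figures mentioned above, not hard requirements. One caveat: bcrypt itself only looks at the first 72 bytes of input, which is why some designs prehash very long passphrases first (a later entry in this list discusses exactly that).
import bcrypt

def validate_and_hash(password: str) -> bytes:
    raw = password.encode("utf-8")
    if len(raw) < 6:
        raise ValueError("password too short")
    if len(raw) > 1000:      # generous server-side cap against DoS, not a usability limit
        raise ValueError("password unreasonably long")
    return bcrypt.hashpw(raw, bcrypt.gensalt())   # fixed-size 60-byte hash, whatever the input length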
6,290 | I've often heard it said that if you're logging in to a website - a bank, Gmail, whatever - via HTTPS, that the information you transmit is safe from snooping by 3rd parties. I've always been a little confused as to how this could be possible. Sure, I understand fairly well (I think) the idea of encryption, and that without knowing the encryption key people would have a hard time breaking the encryption. However, my understanding is that when an HTTPS connection is established, the encryption key is "discussed" between the various computers involved before the encrypted connection is established. There may be many factors involved in choosing an encryption key, and I know it has to do with an SSL certificate which may come from some other server. I do not know the exact mechanism. However, it seems to me that if the encryption key must be negotiated between the server and the client before the encryption process can begin, then any attacker with access to the network traffic would also be able to monitor the negotiation for the key, and would therefore know the key used to establish the encryption. This would make the encryption useless if it were true. It's obvious that this isn't the case, because HTTPS would have no value if it were, and it's widely accepted that HTTPS is a fairly effective security measure. However, I don't get why it isn't true. In short: how is it possible for a client and server to establish an encrypted connection over HTTPS without revealing the encryption key to any observers? | It is the magic of public-key cryptography . Mathematics are involved. The asymmetric key exchange scheme which is easiest to understand is asymmetric encryption with RSA. Here is an oversimplified description: Let n be a big integer (say 300 digits); n is chosen such that it is a product of two prime numbers of similar sizes (let's call them p and q ). We will then compute things "modulo n ": this means that whenever we add or multiply together two integers, we divide the result by n and we keep the remainder (which is between 0 and n-1 , necessarily). Given x , computing x^3 modulo n is easy: you multiply x with x and then again with x , and then you divide by n and keep the remainder. Everybody can do that. On the other hand, given x^3 modulo n , recovering x seems very difficult (the best known methods being far too expensive for existing technology) -- unless you know p and q , in which case it becomes easy again. But computing p and q from n seems hard, too (it is the problem known as integer factorization ). So here is what the server and client do: The server has an n and knows the corresponding p and q (it generated them). The server sends n to the client. The client chooses a random x and computes x^3 modulo n . The client sends x^3 modulo n to the server. The server uses its knowledge of p and q to recover x . At that point, both client and server know x . But an eavesdropper saw only n and x^3 modulo n ; he cannot recompute p , q and/or x from that information. So x is a shared secret between the client and the server. After that this is pretty straightforward symmetric encryption, using x as the key. The certificate is a vessel for the server public key ( n ). It is used to thwart active attackers who would want to impersonate the server: such an attacker intercepts the communication and sends its value n instead of the server's n . 
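A runnable version of that oversimplified exchange, with absurdly small toy primes chosen purely for illustration -- real deployments use moduli of 2048 bits or more, proper padding, and (in modern TLS) usually Diffie-Hellman rather than RSA key transport:
import math
import random

p, q = 1019, 1187                # the server's secret primes (toy-sized)
n = p * q                        # the public modulus the server sends out
x = random.randrange(2, n)       # the client's secret value
c = pow(x, 3, n)                 # the client transmits x^3 mod n

# Only the server, which knows p and q, can undo the cube:
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
d = pow(3, -1, lam)              # modular inverse of 3 (Python 3.8+)
assert pow(c, d, n) == x         # an eavesdropper sees only n and c, and cannot do this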
The certificate is signed by a certification authority, so that the client may know that a given n is really the genuine n from the server he wants to talk with. Digital signatures also use asymmetric cryptography, although in a distinct way (for instance, there is also a variant of RSA for digital signatures). | {
"source": [
"https://security.stackexchange.com/questions/6290",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4274/"
]
} |
6,349 | In the last couple of days there has been a lot of talk about passwords and passphrases, not only here, but on several blogs and forums I follow (especially after XKCD #936 saw the light of this world). I heard quite a few pros and cons of both of them and this got me thinking. Why do we use passwords and passphrases at all instead of biometrics? I know biometrics are not the holy grail of authentication and/or identification, but ( And the most popular password is... from ZDNET) at least I can be pretty sure that the majority of users won't have the very same, easy-to-guess biometrics.
Also, I can't forget my finger or iris (while I can forget a password / passphrase). With the era of cloud coming, the major strength of passphrases (length) might easily prove ephemeral. Like I said, I know biometrics are not perfect , but if we know that passwords / passphrases are the Achilles' heel of almost every system, why are biometrics underused?
According to Tylerl ( Biometric authentication in the real world from this site, second answer), biometrics is used even less than it used to be.
I mean, even if fingerprints are easily forged, it's still better than having many users with password 123456 or qwertz, at least from my point of view (feel free to prove me wrong). So, in short, what are the biggest problems / obstacles which are stalling widespread adoption of biometrics? EDIT I won't comment on each reply, but will put my thoughts here. Also I would like to clarify some things. Problem of normalization I don't know how it is in the USA, but in the UK the law states that you need at least 5 (or 7, I'm not sure) reference points used in matching. This means that even if you don't have a perfect scan, the system can still do matching against the vector (which represents the fingerprint) stored in the DB. The system will just use different reference points.
If you are using the face as the biometric characteristic, EBGM can recognize a person even if the face is shifted by ~45° . Problem of not-changeable (characteristics) Well, you can actually change characteristics - it's called cancelable biometrics . It works similarly to salting. The beauty of cancelable biometrics is that you can apply a new transformation daily if needed (resetting a password every day could result in a lot of complaints). Anyway, I feel like most of you are only thinking about fingerprint and face recognition, while in fact there are many more characteristics which a system can use for authentication.
In brackets I'll mark the chance of forgery - H for high, M for medium and L for low. iris (L) thermogram (L) DNA (L) smell (L - ask dogs if you don't believe me :] ) retina (L) veins [hand] (L) ear (M) walk (M) fingerprint (M) face (M) signature (H) palm (M) voice (H) typing (M) OK, let's say biometric hardware is expensive, while for a simple password you have everything you need - your keyboard. Well, why aren't there systems that use typing dynamics to harden the password? Unfortunately, I can't link any papers as they are written in Croatian (and to be honest, I'm not sure I even have them on this disk); however, a few years ago two students tested authentication based on typing dynamics. They made a simple dummy application with a logon screen. They uploaded the application to a forum and posted the master password. At the end of the test there had been 2,000 unique attempts to log into the application with the correct password. All failed. (A rough sketch of the idea is below.)
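Purely illustrative: the features (inter-keystroke delays in milliseconds) and the tolerance value below are invented for the example, and a real system would need many enrollment samples per user and a proper classifier rather than this naive comparison.
def typing_matches(profile_ms, sample_ms, tolerance=0.35):
    # compare the gaps between keystrokes against an enrolled profile
    if len(profile_ms) != len(sample_ms):
        return False
    devs = [abs(s - p) / p for p, s in zip(profile_ms, sample_ms) if p > 0]
    return bool(devs) and sum(devs) / len(devs) < tolerance

enrolled = [120, 95, 210, 80, 150]    # average delays recorded while typing the password at enrollment
attempt  = [620, 410, 890, 300, 700]  # a stranger typing the same (leaked) password, far more slowly
print(typing_matches(enrolled, attempt))   # False: right password, wrong rhythm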
I know this scenario is almost impossible on the webpages, but locally, this biometric characteristic without need of any additional hardware could turn 123456 password into fairly strong one. P.S. Don't get me wrong, I'm not biometric fanboy, just would like to point out some things. There are pretty nice explanations like - cost, type 2 error, user experience,... | Passwords and biometrics have distinct characteristics. Passwords are secret data. Data is abstract: it flows quite freely across networks. Cryptography defines many algorithms which can use secret data to realize various security properties such as confidentiality and authentication. The shortcomings of passwords are due to the fact that they are meant to be memorized by human beings (otherwise we would just call them "keys") and this severely limits their entropy. Biometrics are measures of the body (in a wide sense) of a human user. Being measures, they are a bit fuzzy: you cannot take a retinal scan and convert it into a sequence of bits, such that you would get the exact same sequence of bits every time. Also, biometrics are not necessarily confidential : e.g. you show your face to the wide World every time you step out of your home, and many face recognition systems can be fooled by holding a printed photo of the user's face. Biometrics are good at linking the physical body of a user to the computer world, and may be used for authentication on the basis that altering the physical body is hard (although many surgeons make a living out of it). However, this makes sense only locally. There is a good illustration in a James Bond movie (one with Pierce Brosnan; I don't remember which exactly): at some point, James is faced with a closed door with a fingerprint reader. James is also equipped with a nifty smartphone which includes a scanner; so he scans the reader, to get a copy of the fingerprint of the last person who used it, and then he just puts his phone screen in front of the reader; and lo! the door opens. This is a James Bond movie so it is not utterly realistic, but the main idea is right: a fingerprint reader is good only insofar as "something" makes sure that it really reads a genuine finger attached to its formal owner. Good fingerprint readers verify the authenticity of the finger through various means, such as measuring temperature and blood pressure (to make sure that the finger is attached to a mammal who is also alive and not too stressed out); another option being to equip the reader with an armed guard, who checks the whole is-a-human thing (the guard may even double as an extra face recognition device). All of this is necessarily local : there must be an inherently immune to attacks system on the premises. Now try to imagine how you could do fingerprint authentication remotely . The attacker has his own machine and the reader under his hand. The server must now believe that when it receives a pretty fingerprint scan, it really comes from a real reader, which has scanned the finger just now : the attacker could dispense with the reader altogether and just send a synthetic scan obtained from a fingerprint he collected on the target's dustbin the week before. To resist that, there must be a tamper-resistant reader, which also embeds a cryptographic key so that the reader can prove to the server that: it is a real reader; the scan it sent was performed at the current date; whatever data will come along with the scan is bound to it (e.g. 
the whole communication is through TLS and the reader has verified the server certificate). If you want to use the typing pattern, the problem is even more apparent: the measuring software must run on the attacker's machine and, as such, cannot be really trustworthy. It becomes a problem of defeating reverse engineering. It might deter some low-tech attackers, but it is hard to know how much security it would bring you. Security which cannot be quantified is almost as bad as no security at all (it can even be worse if it gives a false sense of security). Local contexts where there is an available honest systems are thus the contexts where biometrics work well as authentication devices. But local contexts are also those where passwords are fine: if there is an honest verifying system, then that system can enforce strict delays; smartcards with PINs are of that kind: the card locks out after three wrong PINs in a row. This allows the safe use of passwords with low entropy (a 4-digit PIN has about 13 bits of entropy...). Summary: biometrics can bring good user authentication only in situations where passwords already provide adequate security. So there is little economic incentive to deploy biometric devices, especially in a Web context, since this would require expensive devices (nothing purely software; it needs tamper-resistant hardware). Biometrics are still good at other things, e.g. making the users aware of some heavy security going on. People who have to get their retina scanned to enter a building are more likely to be a bit less careless with, e.g., leaving open windows. | {
"source": [
"https://security.stackexchange.com/questions/6349",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1351/"
]
} |
6,355 | Hypothetical situation: before I hire a web development company I want to test their ability to design secure web apps by viewing their previous client's websites. Issue: this situation raises a big red flag: with regards to viewing a website, what is and is not within the breadth of the law? Or in other words: at what point does poking around a website become illegal ? View Source with Firebug? Naturally that would be legal. But what if I change HTML (like a hidden form value before submission)? Perhaps I then edit or remove JavaScript, like a client side validation script. Would that be legal? What if I put %3Cscript%3Ealert(1)%3C/script%3E at the end of the URL. Or perhaps I type the URL: example.com/scripts/ and I'm able to view their directory due to faulty permission settings? What if I manipulate data passed in HTTP headers, for instance a negative product qty/price to see if they do server side validation (naturally, I won't complete the checkout). To me, all of this seems perfectly harmless because: I'm not causing undue stress to their server by spamming, mirroring the site with wget, or injecting potentially dangerous SQL. I'm not causing any potential loss or monetary damages, because I won't ever exploit the vulnerabilities, only test for their existence (proof of concept). None of my actions will have any implication for user data privacy. In no way would any of my actions potentially reveal confidential or private information about anyone. If I did find anything I would immediately notify the webmaster of the potential exploit so they could patch it. But even though I am logically able to justify my reasons for testing the site, that does not necessarily make my actions legal. In fact, cyber laws are notoriously backwards in the United States, and even the most laughably trivial actions can be considered hacking. Questions: Is there a defined line in the sand that separates illegal hacking from "testing without permission"? Or is this whole scenario a grey area that I should avoid (likely the case). Are there any linkable online resources that could expand my knowledge in this wholly grey area? What are the specific acts or laws that handle this? Please keep in mind that the number one most logical choice would be to simply: ask for permission. However, due to heavy time constraints, by the time I would get permission it would all be for naught. | Don't do it! Don't do it! If you are in the US, the law is very broad. You don't want to even tiptoe up to the line. The relevant law is the Computer Fraud and Abuse Act (18 U.S.C. 1030). In a nutshell (and simplifying slightly), under the CFAA, it is a federal crime to "intentionally access a computer without authorization or exceed authorized access". This language is very broad, and I imagine an ambitious prosecutor could try to use it to go after everything on your list except #1 (view source). Orin Kerr, one of the leading legal scholars in this area, calls the statute "vague" and "extraordinarily broad" , and has said that "no one actually knows what it prohibits" . And, as @Robert David Graham explains, there have been cases where folks were prosecuted, threatened with prosecution, or sued for doing as little as typing a single-quote into a textbox, adding a ../ to a URL, or signing up to Facebook under a pseudonym. It's pretty wild that this alone constitutes a federal offense, even if there is no malicious intent. But that's the legal environment we live in. I'd say, don't take chances. 
Get written authorization from the company whose websites you want to test. | {
"source": [
"https://security.stackexchange.com/questions/6355",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3676/"
]
} |
6,424 | I realise this borders on sci-fi, but there's been some interesting demonstrations regarding security of various satellites . What would be required to hack a satellite (in general terms, any hack really)? Are they all basically connected in the same way, or would I need different equipment, software, or otherwise. Are there different encryption algorithms in use? What communication protocols would I use? How should I pick one? What are the legal repercussions of doing so? Generally, I'd like to find out how secure these computers flying above us really are, as not much is discussed about them in terms of security. This question was IT Security Question of the Week . Read the Feb 4, 2012 blog entry for more details or submit your own Question of the Week. | Overview First, I learned a lot of my information from a combination of my amateur radio experience and an awesome talk I sat in at DEFCON 18. The majority of satellite systems are simple repeaters. The signal that comes in on a transponder is cleaned, amplified, and retransmitted. If you know the location and input frequency, and you pump more effective radiated power than anybody else, you win. Many satellites also require command modules. These are used to interpret instructions to boost back into orbit or at the end of life, de-orbit into a "graveyard" pattern (or right into the atmosphere itself). Because most satellite systems are custom, it is a real crapshoot what you see for commands and security. I suspect that most command sequences are unencrypted and rely on the fact that a MITM attack on something in space is fairly hard. Frequencies vary wildly from MHz to several tens of GHz. Your equipment needs to put out the right frequency through a dish that is the right size. Legally speaking, you will at a minimum foul the FCC or your national equivalent, by violating regulations on licensed broadcasting. Also, "birds" and airtime are expensive, so the civil liability if found can be bankrupting. As far as taking a satellite transponder over is concerned, security relies on rarity of attacks, detection, and triangulation of the signal source. Then people come knocking on your door. Finding a bird First, you've got to have a target. Some satellites are geostationary, so they're easy. Other satellites have orbits that sending them in offset patterns around the world. The satellite will come into view at different elevations in the sky tracing different paths, so you'll need to know where it will be and how it will move in order to communicate. Communications satellites tend to either be geostationary or part of a cluster of many satellites such that one or more is always in view of at least one ground station and any other point on the planet. There are websites all over the place for this, and they often end up with military / disavowed satellites listed as people will track them with a telescope and then wonder why that one isn't listed yet. Talking to a bird: Bands Satellites operate on different frequencies, and the antenna used has to be sized to the frequency of the satellite. Most satellites operate in the microwave spectrum. The ubiquitous (in the United States) DirecTV / Dish Network antennas are usually on the higher end (smaller wavelength) of the spectrum. Because your signal has a lot of travel in its future and your target is small, your goal is to direct as much power in one direction as possible. Anything sent off to the sides, earth, etc. 
is wasted energy, so you will want an appropriately-sized high-gain antenna . Antenna design can be learned from amateur radio books on the topic. Before someone chimes in and says, "You don't NEED a directional antenna and tracking motor," that's true... but it will help a hell of a lot. Just because your spot messenger or GPS doesn't have one doesn't mean you shouldn't use one if you can. It will keep your signal where you want it and limit the possibility of interference from or with other things using the same frequency. It also means that it will be harder for somebody to hunt you down. Being nicked just because you let strangers hear you might have some costs associated. Talking to a bird: Protocol Now we're getting a bit trickier. Some satellites are very simple, particularly amateur radio satellites. They receive a signal and they transmit that signal back. There are different variations of protocol, polarisation, modulation ( QAM is a good one to understand), etc. If your target does more cleanup than just setting a noise floor and spitting things back out, you'll need to know that information as well. Higher-level protocols may be standard IP/TCP, plaintext, encrypted, or some totally imaginary 17 bit codeword system that was dreamed up by a guy like Mel . Taking over You need to deliver more power to the right place with the appropriate protocol. Because almost every satellite is a custom design, that's challenging. If you goal is beyond simple re-broadcast, you're up against a big black box every time. Computers are small, low-power, and probably have next to nothing on them. The best bet for MITM If you can't afford to launch your own satellite, figure out where the ground station is and fly over it. Small aircraft are relatively cheap to rent (under $100 / hour to operate), tethered balloons may get high enough to have an effective angle, and if you're quite sneaky you can put something on the transmitter feed line itself. Many smaller organizations rent their satellite time. I learned when I was 11 that the guy running the local news station's satellite truck is bored as hell when they're in between shots and will definitely show you all the cool things about his rig. Whatever he's renting is probably one of the easier things to get at because that has to be documented and relatively easy to work with. | {
"source": [
"https://security.stackexchange.com/questions/6424",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/488/"
]
} |
6,448 | If an attacker turns on wifi but doesn't have the security key to connect to an access point in range, can he still sniff packets that travel between the access point and clients connected to the access point, and thus get the Mac addresses of the clients? If the access point is a public wifi and there is no security key but there is Mac filtering, does that make a difference? | An attacker can always determine the client's MAC address if they can sniff packets to or from the client. This is true regardless of whether encryption is used or not. The MAC address is in the outer encapsulation layer of the 802.11 packet, and there is no encryption applied to that level. Here's a good link at Microsoft that lays out the packet encapsulation, including where encryption happens in 802.11 . This is kind of the expected result. By definition, the physical and data link layer information has to be openly available to other network devices so that they all can figure out who's supposed to send what where. Standard tools like Netstumbler will display MAC addresses for you. Your followup question will be "But doesn't that make it trivial to bypass MAC address filtering as a security measure on the AP?" And the answer is, yes. Yes it does. | {
"source": [
"https://security.stackexchange.com/questions/6448",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4164/"
]
} |
6,489 | I would like to ask what happens when an email is sent from Gmail, Yahoo or Hotmail public web email services? I don't understand email protocols in details, but as far as I know email traffic is unencrypted and the messages are passed along many mail servers (in plain text) before reaching their destination server. However, this was questioned recently by other people, and their view was that if one of the big providers is used, the email messages are encrypted and there is no need to worry about security. Do you know if they are right about this and are emails moderately secure? | An SMTP session between two mail servers may be encrypted, but only if both ends support it and if both ends choose to use it. So if you're sending mail from Gmail to example.net, then Google could only encrypt if example.net was ready and willing. For this reason, you cannot trust email to be even moderately secure at the transport layer. (The only safe end-to-end method is to encrypt your email using S/MIME or PGP, but the people you're exchanging email with need to be on board too... just like the mail servers). As to whether the big three are performing opportunistic STARTTLS, I haven't seen any evidence of it, but I spend less time reading my mail server logs than I used to. And if they are, they're still only half of every SMTP connection they make, and cannot guarantee the use of encryption. Update: I just banner tested MX hosts for gmail.com, yahoo.com, and hotmail.com. Only gmail advertises STARTTLS, which is to say, only gmail would be willing to encrypt the SMTP session if the other party wanted to. I tested gmail outbound by sending mail to a server I own and watching the wire; Google does indeed take advantage of STARTTLS if it is offered and encrypts the SMTP transaction when a gmail user is sending mail. Props to Google. So as far as "sending" email encryption goes: Google 1, Yahoo 0, Microsoft 0. As per the comments below, if you want to test these yourself, it's very simple: Determine the MX hosts (Mail eXchangers) for the domain Telnet to port 25 on one of them Type in "ehlo yourhostname.domain.com" If you don't see "250-STARTTLS" as one of the responses, they don't support opportunistic encryption. Like this: $ host -t mx yahoo.com
yahoo.com mail is handled by 1 mta5.am0.yahoodns.net.
yahoo.com mail is handled by 1 mta7.am0.yahoodns.net.
yahoo.com mail is handled by 1 mta6.am0.yahoodns.net.
$ telnet mta5.am0.yahoodns.net 25
Trying 66.196.118.35...
Connected to mta5.am0.yahoodns.net.
Escape character is '^]'.
220 mta1315.mail.bf1.yahoo.com ESMTP YSmtpProxy service ready
ehlo myhost.linode.com
250-mta1315.mail.bf1.yahoo.com
250-8BITMIME
250-SIZE 41943040
250 PIPELINING
quit
221 mta1315.mail.bf1.yahoo.com
Connection closed by foreign host.
$ As a side note, Yahoo will close the connection if you don't ehlo right away. I had to cut & paste my ehlo because typing it in took too long. MORE UPDATE: As of January 2014, Yahoo is now encrypting - I just tested (as above) and verified. However, both The Register and Computerworld are reporting that the intricacies of SSL setup (such as Perfect Forward Secrecy) leave a lot to be desired as implemented by Yahoo. EVEN MORER UPDATE: Google is now including SMTP encryption data in their Transparency Report Safer Email section . They're sharing their data about who else is willing to encrypt, and you can look at the top numbers as well as query individual domains. Addendum: @SlashNetwork points out that it is possible to configure a mail server to require that TLS be negotiated before exchanging mail. This is true, but to quote the Postfix documentation : You can ENFORCE the use of TLS, so that the Postfix SMTP server
announces STARTTLS and accepts no mail without TLS encryption, by
setting "smtpd_tls_security_level = encrypt". According to RFC 2487
this MUST NOT be applied in case of a publicly-referenced Postfix SMTP
server. This option is off by default and should only seldom be used. Now, the world is full of implementations that violate the RFCs, but this sort of thing - e.g., something that may break routine required functionality like accepting bounces and mail for the postmaster - is probably more likely to have negative consequences. A better solution which mail gateways often allow is the imposition of TLS requirements on a per-domain policy basis . For example, it is usually possible to say "Require TLS with a valid Certificate signed by Entrust when talking to example.com". This is usually implemented between organizations that are part of the same parent company but have different infrastructure (think: acquisitions) or organizations with a business relationship (think: ACME, Inc., and their outsourced support call center company). This has the advantage of ensuring that specific subsets of mail that you care about get encrypted, but doesn't break the open (accept from anyone by default) architecture of SMTP email. Addendum++ Google has announced that Gmail will percolate information about the security of the mail path out to the reader . So these behind-the-scenes encryption steps will be brought to the notice of the user a little bit more. (Probably still doesn't care about the certificate provenance; just an indicator of encryption of bits). (A short scripted version of the banner test above follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/6489",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3296/"
]
} |
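The banner test described above can also be scripted; this is a minimal Python equivalent, assuming you have already looked up an MX host for the domain (the hostname below is only an example and may change over time):
import smtplib

def advertises_starttls(mx_host: str) -> bool:
    # connect on port 25 and check whether STARTTLS is among the advertised EHLO extensions
    with smtplib.SMTP(mx_host, 25, timeout=15) as smtp:
        smtp.ehlo("probe.example.org")
        return smtp.has_extn("starttls")

print(advertises_starttls("gmail-smtp-in.l.google.com"))   # MX host found beforehand with: host -t mx gmail.com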
6,623 | Good practice is not to unnecessarily restrict password length, so that appropriately-long passphrases (perhaps 35-45 chars for 6/7 dicewords) can be used. (See e.g. Should I have a maximum password length? where a maximum of 1K is suggested, to protect against DoS without restricting users' ability to set long passwords.) bcrypt is also commonly recommended (see e.g. Do any security experts recommend bcrypt for password storage? , http://chargen.matasano.com/chargen/2007/9/7/enough-with-the-rainbow-tables-what-you-need-to-know-about-s.html ) It is also recommended to use a salt (random, and stored with the password hash) -- I believe 32-bits (4 characters) is often recommended. (I understand the salt-size rationale to be "enough that the number of combinations is much bigger than the number of user records AND is enough to make rainbow tables infeasibly large" -- 16 bits is enough for the second part but may not be enough for the first.) But AIUI bcrypt only hashes 55 bytes -- with 4 chars for the salt that leaves 51 for the password. I'm guessing that we shouldn't just bcrypt(left(password,51)) and ignore the last characters. Should we just limit users to 50 characters in their password (enough for nearly everyone, but not definitely enough)? Should we use something like bcrypt(sha256(salt+password)) instead, and allow up to 1K characters? Or does the addition of the sha256 (or sha512?) step reduce the overall security somehow? Do scrypt or PBKDF2 have similar length restrictions? (The last question is just for interest, really -- I realise that the space-hardness/FPGA-resistance, and relative newness of scrypt, and the GPGPU-resistance of bcrypt compared with PBKDF2 are far more important considerations in deciding which hash to use.) | Using a secure hash function to preprocess the password is secure; it can be shown that if bcrypt(SHA-256(password)) is broken, then either the password was guessed, or some security characteristic of SHA-256 has been proven false. There is no need to fiddle with the salt at that level; just hash the password, then use bcrypt on the result (with the salt, as bcrypt mandates). SHA-256 is considered to be a secure hash function. The point of a salt is to be unique -- as unique as possible, so that no two hashed passwords use the same salt value. 32 bits are a bit low for that; you should use a longer salt. If you have n bits of salt, then you will encounter collisions (two hashed passwords using the same salt) as soon as you have more than about 2^(n/2) hashed passwords -- that's about 65000 with n = 32 , a not too high value. You'd better use 64 bits or more of salt (use 128 bits and you can cease to worry about it). (A minimal sketch of the prehash-then-bcrypt construction follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/6623",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4449/"
]
} |
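A minimal sketch of the construction endorsed above, assuming the Python bcrypt and hashlib modules; hex-encoding the SHA-256 digest keeps the bcrypt input printable, free of NUL bytes, and safely under bcrypt's 72-byte limit:
import hashlib
import bcrypt

def _prehash(password: str) -> bytes:
    # always 64 hex characters, no matter how long the passphrase is
    return hashlib.sha256(password.encode("utf-8")).hexdigest().encode("ascii")

def hash_password(password: str) -> bytes:
    return bcrypt.hashpw(_prehash(password), bcrypt.gensalt())   # bcrypt supplies its own 128-bit salt

def check_password(password: str, stored: bytes) -> bool:
    return bcrypt.checkpw(_prehash(password), stored)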
6,683 | Entropy is a term used often in relation to password security and brute-force attacks, but it is a topic that can get complicated quickly . What is the best way to describe password entropy (what it is and how it's calculated) in terms a layman can understand? | Not sure whether it can be of any help to you, but I once managed to describe entropy to a child. After I said that entropy is a measure of chaos in a system (to a group of people), a 12-year-old (more or less) said he didn't understand me. I replied with - "Well, when your room is untidy, entropy is high. But when you clean your room, entropy is low - everything is in order then. So, when a thief comes to your room trying to steal your homework, when the room is clean and entropy is low, he will easily find it - it's usually on your desk or in your school bag. On the other hand, when your room is untidy and your homework is lying around somewhere, the thief doesn't know exactly where it is and can't find it quickly. If entropy is high enough (let's say the roof collapsed), it's almost impossible to find a piece of paper in it. High entropy - finding a needle in a haystack. In the game Guess Who I Am, at the beginning entropy is very high, but after a few questions entropy is lower and lower until someone has enough information to guess who the person is (or, in security, after enough trials and errors, to guess what the password is). (A small numeric sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/6683",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/793/"
]
} |
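To put rough numbers on the analogy, the entropy of a password chosen uniformly at random is simply its length times log2 of the alphabet size; the alphabet sizes below are common examples, not requirements:
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    return length * math.log2(alphabet_size)

print(entropy_bits(10, 4))      # ~13.3 bits: a 4-digit PIN -- a very tidy room
print(entropy_bits(95, 8))      # ~52.6 bits: 8 random printable-ASCII characters
print(entropy_bits(7776, 6))    # ~77.5 bits: 6 Diceware words -- the collapsed roof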
6,719 | I'm familiar with password hashing, using salts, bcrypt etc. But it doesn't seem like this would work to store a 4-digit PIN code since the attacker could try all 10,000 combinations quite quickly. Is there something I'm missing, or how is this commonly done? | If you are worried about a "leaked database" scenario (rather than the online cracking vector, which can be mitigated by per-user, per-IP, per-site rate-limiting and lockout) then you are right that hashing is not enough. Even using the most complex sequence of hashes & salting e.g. bcrypt(pbkdf2(sha256(pin),salt1),salt2) you are still going to be vulnerable to anyone who can see (e.g. from a source code/docs leak) what algorithms are used and who can find your salts (normally stored in the same DB/table) -- they can just run the same series of hashes, and even if it takes a minute for each that's only 7 days to try them all. So you would basically be relying on security-by-obscurity. In this case it would be worth encrypting the password hashes with a secret key (which can be kept separately from the source code, only accessible to trusted "production security operations" users, never re-used in pre-prod environments, changed regularly, etc.). That secret key will need to be present on the server which validates/updates PIN codes, but it can be kept separately from the user/pin database (e.g. in an encrypted config file), which means that Little Bobby Tables won't automatically get access to it when he snarfs the whole of T_USERS via SQL injection, or when someone grabs a DVD with a DB backup on it from your sysadmin's desk. [Normally encrypting the hashes is not recommended, because it's better to use a secure hash, good salt and strong passwords/phrases, and encrypting the hashes can give a false sense of security. But if you can't have strong passwords then there aren't many other options -- beggars can't be choosers...] You could combine the PIN with a password -- but in that case why bother with the PIN at all? Just require a strong alphanumeric password. (If the PIN is used in combination with a smartcard or token or similar, of course, then it can add to the overall security.) We could use a bigger (6/8 digit) PIN. It's not feasible to make a digit-only PIN secure from brute force: at 1B hashes per second, we'd need 16 or more digits to push the brute-force time beyond a year. But adding a few more digits might be enough to make the online attack easier to detect and block -- with only 10K combinations it's going to be hard to set "slow-force" thresholds low enough to make an attack infeasible. Unfortunately, you likely have physical constraints (hardware selection) which preclude longer PINs. (A minimal hash-then-encrypt sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/6719",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4264/"
]
} |
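A minimal hash-then-encrypt sketch along the lines of the answer, using the Python bcrypt and cryptography libraries; loading and rotating the key, and the online rate-limiting that remains essential with only 10,000 possible PINs, are assumed rather than shown:
import bcrypt
from cryptography.fernet import Fernet, InvalidToken

pepper = Fernet(Fernet.generate_key())   # in practice, load this key from config kept apart from the DB

def store_pin(pin: str) -> bytes:
    hashed = bcrypt.hashpw(pin.encode("utf-8"), bcrypt.gensalt(rounds=12))
    return pepper.encrypt(hashed)        # this ciphertext is what goes into the user table

def check_pin(pin: str, stored: bytes) -> bool:
    try:
        hashed = pepper.decrypt(stored)
    except InvalidToken:
        return False
    return bcrypt.checkpw(pin.encode("utf-8"), hashed)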
6,730 | I'm new to infosec and doing some reading. Not surprisingly one starting point was wikipedia. In this article , authenticity and non-repudiation are listed as 2 separate 'Basic concepts'. My understanding is that you cannot achieve non-repudiation by not knowing which parties are involved, which requires authenticity to be in place. In that sense, I see authenticity as a sub component of non-repudiation. Have you got examples backing up the approach that these 2 concepts are fundamentally separate? | Authenticity is about one party (say, Alice) interacting with another (Bob) to convince Bob that some data really comes from Alice. Non-repudiation is about Alice showing to Bob a proof that some data really comes from Alice, such that not only Bob is convinced, but Bob also gets the assurance that he could show the same proof to Charlie, and Charlie would be convinced, too, even if Charlie does not trust Bob. Therefore, a protocol which provides non-repudiation necessarily provides authenticity as a byproduct; in a way, authenticity is a sub-concept of non-repudiation. However, there are ways to provide authenticity (only) which are vastly more efficient than known methods to achieve signatures (authenticity can be obtained with a Message Authentication Code whereas non-repudiation requires a Digital Signature with much more involved mathematics). For this reason, it makes sense to use "authenticity" as a separate concept. SSL/TLS is a tunneling protocol which provides authenticity (the client is sure to talk to the intended server) but not non-repudiation (the client cannot record the session and show it as proof, in case of a legal dispute with the server, because it would be easy to build a totally fake session record). | {
"source": [
"https://security.stackexchange.com/questions/6730",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4591/"
]
} |
6,753 | I know that in order to delete a Java object I should use character array instead of String, since I can safely erase (rewrite the character array with other data) its content. This seems not to be feasible for the String objects. Now on BlackBerry which is Java based, I was not able to find an API to handle data as character array but i am obliged to use String. Thus my question, in case I store a password in an object, how can I securely delete it? | Actually you cannot really "safely erase" an array of characters in Java. Java does memory allocation through a garbage collector , a tricky piece of software which, in practice, will move memory objects in physical RAM on a regular basis. So what you think as "a char[] instance" will be copied in several places, and the erasure will physically happen only in one of those places. In that context, "secure deletion" cannot really exist in Java. If you use Java, you must ensure that the usage context is such that secure deletion is unnecessary: "secure deletion" is needed mostly when the OS may allocate non-zeroized RAM blocks (thus an application may get excerpts of old RAM from other applications), or in the presence of virtual memory (parts of the RAM being copied to a hard disk). I guess that these do not apply to a BlackBerry, so simple String instances ought to be fine. An other way to state it is that if String instance are not fine for passwords, then you have bigger security issues than mere password leakage. After all, you use a password to protect access to some data, so if you need "secure erasing" for a password, then, quite logically, you would also need "secure erasing" for the protected data as well, and everything you do with it. (One can guess that I am not a big fan of the concept of "secure erasing".) | {
"source": [
"https://security.stackexchange.com/questions/6753",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2858/"
]
} |
6,758 | I've made a series of penetration tests in my network and one of the things I tried was recording the webcam and microphone. Recording an end-user's microphone seems to be a stealthy thing, but what about the webcam?
In my tests, the indicator is turned on and I can't figure out a way to do this without turning on the light. So far, I'm assuming that if someone broke into my computer and turned on the webcam, I'll know it. But, if that's possible, which of the available hardware on the market is vulnerable to that kind of attack? | Most definitely, but in order to do this you would probably have to patch the camera's firmware and then flash it. Similar attacks have been used to disable the "shutter sound" on cameras. | {
"source": [
"https://security.stackexchange.com/questions/6758",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
6,873 | Something like *.com or *.net ? How about *.edu.au ? The RFC 2818 does not say anything about this topic. | Yes, it can be issued. Luckily the common browsers do not accept wildcard certificates for TLDs. Chromium Source Code : // Do not allow wildcards for public/ICANN registry controlled domains -
// that is, prevent *.com or *.co.uk as valid presented names, but do not
// prevent *.appspot.com (a private registry controlled domain).
// In addition, unknown top-level domains (such as 'intranet' domains or
// new TLDs/gTLDs not yet added to the registry controlled domain dataset)
// are also implicitly prevented.
// Because |reference_domain| must contain at least one name component that
// is not registry controlled, this ensures that all reference domains
// contain at least three domain components when using wildcards.
size_t registry_length =
registry_controlled_domains::GetCanonicalHostRegistryLength(
reference_name,
registry_controlled_domains::INCLUDE_UNKNOWN_REGISTRIES,
registry_controlled_domains::EXCLUDE_PRIVATE_REGISTRIES);
// ... [SNIP]
// Account for the leading dot in |reference_domain|.
bool is_registry_controlled =
registry_length != 0 &&
registry_length == (reference_domain.size() - 1);
// Additionally, do not attempt wildcard matching for purely numeric
// hostnames.
allow_wildcards =
!is_registry_controlled &&
reference_name.find_first_not_of("0123456789.") != std::string::npos;
} The complete list of domains that Google disallows is in net/base/registry_controlled_domains/effective_tld_names.dat Other browsers also do this, including IE and Firefox. In the list of fake certificates issued by DigiNotar , there is "*.*.com". This is obviously an attempt to get around the restriction. | {
"source": [
"https://security.stackexchange.com/questions/6873",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2440/"
]
} |
6,883 | First: I can't find any information on this phenomenon, not anywhere on the net. I don't know which application does it, but something in my Windows 7 Home Premium system (fully updated & legal) updates my hosts file. I have UAC enabled. To edit my hosts file, I have to run Notepad with admin privileges or else I can't save my file. The line 127.0.0.1 ad.doubleclick.net has disappeared several times now. It looks like that is the only line to which this happens. I have other lines in the same file, and they are left untouched. I suspect Google Chrome to be responsible for this, since the Google Updater probably has the permissions to modify system files - and it's in their interest to load their crap, but I am not sure. While I understand that I use their services and that ads pay for those services, I don't like the idea of software violating my system like that. And I am surprised that it's even possible, I thought Chrome installed within the user profile and didn't need system write access to install. Can anyone else confirm this issue? Any experience with similar things happening to the hosts file? Edit : I have ProcessMonitor running with a filter on the hosts file. Let's see what I can find... thanks for the suggestion, I hadn't thought of it initially. Update : This morning, Process Monitor showed a bunch of file activity. And 127.0.0.1 ad.doubleclick.net is gone! It looks like Windows Defender did it. Read the Process Monitor log here: http://pastebin.com/eJTf5qWs | Use Process Monitor with a filter to watch the hosts file. Run it long enough and you will see everything that changes the file. http://technet.microsoft.com/en-us/sysinternals/bb896645 | {
"source": [
"https://security.stackexchange.com/questions/6883",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4705/"
]
} |
6,919 | As I understand it, SQL injection should only allow for the manipulation and retrievial of data, nothing more. Assuming no passwords are obtained, how can a simple SQL injection be used to leverage a shell? I have seen attacks where this has been claimed to be possible, and if it is I would like to be able to protect against it. | Many common SQL servers support functions such as xp_cmdshell that allow the execution of arbitrary commands. They are not in the SQL standard so every database software has different names for it. Furthermore there is SELECT ... INTO OUTFILE, that can be used to write arbitrary files with the permissions of the database user. It may be possible to overwrite shell scripts that are invoked by cron or on startup. If the database server process is running on the same server as a web application (e. g. a single rented server), it may be possible to write .php files that can then be invoked by visiting the appropriate url in the browser. The third way to cause damage is to define and execute stored procedures in the database. Or redefine existing stored procedures, for example a function that verifies passwords. There are likely more ways. The application database user should neither have permissions to execute the shell functions nor use INTO OUTFILE nor to define stored procedures. | {
"source": [
"https://security.stackexchange.com/questions/6919",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2666/"
]
} |
6,950 | I have no experience with storing credit cards and I do not know anything about the legal end of this. The company I work for / develop for wants to store credit cards to process auto payments for accounts that are on layaway. Does anyone know a site that lays out the guidelines, or experience with storing credit card information internally? I have heard that whatever server stores the card information is not allowed access to the internet and all information has to be encrypted. Can anyone provide some feedback? Or know of a service that handles this for companies? | There are lots of different ways in which PCI impacts what you do; I'd point out the data security standards (PCI-DSS). Among many other things, they require strong authentication for anyone accessing the system remotely, and have a wide variety of restrictions on what kind of data you can keep. Don't even think about storing credit cards without understanding PCI. At high levels of sales, you will have to be audited by an accredited third party, and the audits can be quite strict, so start documenting early with that in mind. | {
"source": [
"https://security.stackexchange.com/questions/6950",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4786/"
]
} |
7,001 | What is the best defense against JSON hijacking ? Can anyone enumerate the standard defenses, and explain their strengths and weaknesses? Here are some defenses that I've seen suggested: If the JSON response contains any confidential/non-public data, only serve the response if the request is authenticated (e.g., comes with cookies that indicate an authenticated session). If the JSON data contains anything confidential or non-public, host it at a secret unguessable URL (e.g., a URL containing a 128-bit crypto-quality random number), and only share this secret URL with users/clients authorized to see the data. Put while(1); at the start of the JSON response, and have the client strip it off before parsing the JSON. Have the client send requests for JSON data as a POST (not a GET), and have the server ignore GET requests for JSON data. Are these all secure? Are there any reasons to choose one of these over the others? Are there any other defenses I'm missing? | The first defence is to stick to the specification by using valid JSON, which requires an object as the top-level entity . All known attacks are based on the fact that if the top-level object is an array, the response is valid JavaScript code that can be parsed using a <script> tag. If the JSON response contains any confidential/non-public data, only serve the response if the request is authenticated (e.g., comes with cookies that indicate an authenticated session). That's the prerequisite for the attack , not a mitigation. If the browser has a cookie for site A, it will include it in all requests to site A. This is true even if the request was triggered by a <script> tag on site B. If the JSON data contains anything confidential or non-public, host it at a secret unguessable URL (e.g., a URL containing a 128-bit crypto-quality random number), and only share this secret URL with users/clients authorized to see the data. URLs are not considered a security feature . All the common search engines have browser addons/toolbars that report any visited URL back to the search engine vendor. While they might only report URLs that are explicitly visited, I wouldn't risk this for JSON URLs either. Have the client send requests for JSON data as a POST (not a GET), and have the server ignore GET requests for JSON data. This will prevent the <script> include. Put while(1); at the start of the JSON response, and have the client strip it off before parsing the JSON. I suggest a modified version of this approach: Add </* at the beginning . while(1) is problematic for two reasons: First, it is likely to trigger malware scanners (on clients, proxies and search engines). Second, it can be used for DoS attacks against the CPU of web surfers. Those attacks obviously originate from your server . (A small server/client sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/7001",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/971/"
]
} |
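A small sketch of the two defenses endorsed above -- always return a top-level object (never a bare array) and prepend a prefix that the client strips before parsing. The ")]}'," prefix shown here is one widely seen choice; the "</*" prefix suggested in the answer works the same way:
import json

PREFIX = ")]}',\n"

def render_json(payload) -> str:
    # wrapping in an object means the response is never a bare top-level array
    return PREFIX + json.dumps({"d": payload})

def parse_json(text: str):
    if text.startswith(PREFIX):
        text = text[len(PREFIX):]
    return json.loads(text)["d"]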
7,045 | One of our clients has sent us a list of security requirements. One of them was that registration does not including setting a password - once complete, a temp password is sent to the user, and the user must change the temp password on the first login. I think I have come across this flow as a user, but I can't figure out what is it good for. If it's to force the user to use a real email, the common practice is to send a validation link to email used, and make the account inactive until the link's url is accessed. So, is there a real security benefit in assigning temporary passwords to newly registered users, or is it just some IT manager who tries to be clever? | The procedure you describe seems to be the conflation of two distinct procedures which apply to distinct contexts. 1. Registration for a targeted individual There are some contexts in which the registration is for a specific individual, with a defined physical identity, and occurs in a process which does not allow the setting of a user-chosen password. An example is when you "register" someone who you met at some trade show, and who gave you his business card. You know his email address, but when registration takes place, the user is not there. In such a situation, you have to send him a way to initially log in to your system (with at least some level of authentication -- you do not want to create an account for everybody, just for that guy) so that he could complete the procedure by choosing his own password. An initial password, sent by email, and which must be changed upon first usage (enforced by the application), is one way to cope with that context. Sending him a "unique registration link" which brings him to a page where he can choose his password is, from a security point of view, exactly the same thing . This is a non-ideal situation, in that the initial credentials must travel unprotected. Forcing password change upon first usage is a mitigation measure: yes, an attacker could intercept the initial password / validation link, but then he would have to choose a password himself, and at least the intended user will notice the problem (by not receiving the email or not being able to log in). 2. Registration for "anybody" Most deployed Web apps which use registration work in a different context. Consider a Web-based vendor of some goods (think Amazon). Here, you accept anybody as new user. You need to "register" users because you want users to be able to keep track of their commands, and they should not be able to see the commands of other people. In such a context, the user is available during the initial registration: he initiated it, he is right behind his keyboard at that time, and he can type in his chosen password under the cover of HTTPS. The Web app can accept this kind of registration because it is not picky: by design, anybody is entitled to register. You still need a "validation link" sent by email, but for a very distinct purpose: the validation link is used to make sure that the email address is genuine. An attacker intercepting that communication could fool you (i.e. register under an email address which he does not "own" -- although he can intercept emails sent to that address, so he really owns it, to some extent), but such an attacker would not attack the same thing than in the first context. 
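Returning to the first context for a moment, the "unique registration link" variant can be as small as this sketch; the URL, the in-memory store and the 48-hour lifetime are placeholders for whatever the application actually uses:
import hashlib
import secrets
import time

def create_invite(email: str, store: dict, ttl: int = 48 * 3600) -> str:
    token = secrets.token_urlsafe(32)                      # ~256 bits of randomness
    store[hashlib.sha256(token.encode()).hexdigest()] = (email, time.time() + ttl)
    return "https://example.com/register?token=" + token   # emailed to the invited user

def redeem_invite(token: str, store: dict):
    key = hashlib.sha256(token.encode()).hexdigest()
    email, expires = store.pop(key, (None, 0.0))           # single use: pop, don't just read
    return email if time.time() < expires else None        # then let the user set a password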
In the first context, the attacker impersonates the user who is entitled to log in; in the second context, the attacker is entitled to log in anyway, but he succeeds in doing so while providing the email address of someone else. Summary Sending a one-time initial password through email is a good or bad thing, depending on what kind of registration you are running. For a classical Web-based account creation that anybody can do, this is bad. For registration of a specific individual, the enforcement of a password change from the initial password is good, because in that case you already had to sent it by email, which is a security hazard (you just had no choice, and you must mitigate risks). | {
"source": [
"https://security.stackexchange.com/questions/7045",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4827/"
]
} |
7,057 | When a user's logging in to my site, they send their username and password to me over https. Besides the ssl, there's no special obfuscation of the password - it lives in memory in the browser in the clear. Is there anything else I should do? Should I keep it hashed somehow, even in RAM? | This is fine. You don't need to do anything else. There is no need to hash it or obfuscate it in RAM. You should be careful to hash the password appropriately on the server side before storing it in persistent memory (e.g., in a database), and you should take care to use proper methods for password hashing. See, e.g., How to securely hash passwords? , Which password hashing method should I use? , Most secure password hash algorithm(s)? , Do any security experts recommend bcrypt for password storage? . If you want to provide additional security for your users, here are some steps you could take: Use site-wide SSL/TLS. Any attempt to visit your site through HTTP should immediately redirect to HTTPS. Enable HSTS on your site. This tells browsers to only connect to you via HTTPS. Paypal uses it. It is supported in recent versions of Firefox and Chrome. I'm not saying you need to do these things (though site-wide SSL/TLS makes a big difference). But these are some options that can help strengthen security against some common attack vectors. | {
"source": [
"https://security.stackexchange.com/questions/7057",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4850/"
]
} |
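As a hedged illustration of the server-side hashing the answer above insists on, here is a sketch using only the Python standard library (PBKDF2-HMAC-SHA256). The iteration count is an arbitrary example; dedicated schemes such as bcrypt or scrypt, discussed in the linked questions, are common alternatives.

```python
import hashlib, hmac, os

def hash_password(password: str, *, iterations: int = 600_000) -> dict:
    salt = os.urandom(16)                                   # unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return {"salt": salt, "iterations": iterations, "digest": digest}

def verify_password(password: str, record: dict) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    record["salt"], record["iterations"])
    return hmac.compare_digest(candidate, record["digest"])  # constant-time comparison
```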
7,088 | Say you were in charge of getting rid of a large quantity of paper - up to 1,000 sheets at a time. It can't be used as scratch paper because it contains confidential information. It also can't be outsourced to third parties because the company isn't interested in refunding your expenses. The scenario is real and is wasting the time of a person whose work is useful to me. The current method is tearing them all apart by hand or using a crushing machine when it's available. As mentioned, it takes a lot of time and could probably be done a thousand more efficient ways. What is the cheapest and fastest method to destroy a large quantity of paper containing confidential information? | The worst thing you can do is tear them apart by hand. It's time-consuming, and an attacker just needs extra time and patience to put the pieces back together. The same rule applies to shredding - if the pieces left after shredding are too large, again, the attacker just needs time and patience. There are several shredding techniques (from Wikipedia) Strip-cut shredders, the least secure, use rotating knives to cut narrow strips as long as the original sheet of paper. Such strips can be reassembled by a determined and patient investigator or adversary, as the product (the destroyed information) of this type of shredder is the least randomized. It also creates the highest volume of waste inasmuch as the chad has the largest surface area and is not compressed. Cross-cut or confetti-cut shredders use two contra-rotating drums to cut rectangular, parallelogram, or diamond-shaped (or lozenge) shreds. Particle-cut shredders create tiny square or circular pieces. Cardboard shredders are designed specifically to shred corrugated material into either strips or a mesh pallet. Disintegrators and granulators repeatedly cut the paper at random until the particles are small enough to pass through a mesh. Hammermills pound the paper through a screen. Pierce and Tear Rotating blades pierce the paper and then tear it apart. Grinders A rotating shaft with cutting blades grinds the paper until it is small enough to fall through a screen. Also, there are several standards for shredding (from Wikipedia) Level 1 = 12 mm strips OR 11 x 40mm particles Level 2 = 6 mm strips OR 8 x 40mm particles Level 3 = 2 mm strips OR 4 x 30mm particles (Confidential) Level 4 = 2 x 15 mm particles (Commercially Sensitive) Level 5 = 0.8 x 12 mm particles (Top Secret or Classified) Level 6 = 0.8 x 4 mm particles (Top Secret or Classified) What I'm trying to say is - it's not about "Let's tear this paper and we'll be fine!" It's about how hard it is to put the pieces back together. If that's next to impossible, then the shredding was done well.
However, if the attacker is aiming for the lowest-hanging fruit, then any kind of shredding is better than none. Example of well-done shredding (it was once money): I guess setting the paper on fire or destroying it in a chemical reaction would be the fastest, but these techniques should only be performed in controlled environments by professionals!
An alternative is to decompose the paper in water; however, it's a pretty long process (10-14 days) and you'll need enough space and water to do so. | {
"source": [
"https://security.stackexchange.com/questions/7088",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4724/"
]
} |
7,118 | I recently received an email from a popular graduate job website (prospects.ac.uk) that I haven't used in a while suggesting I use a new feature. It contained both my username and password in plain text. I presume this means that they have stored my password in plain text. Is there anything that I can do to either improve their security or completely remove my details from their system? UPDATE: Thanks to everyone for the advice. I emailed them, spelling out what was wrong and why, saying that I will be writing to the DP commissioner and will be adding them to plaintextoffenders.com. I got a response an hour later: an automated message containing a username and password for their support system. Oh dear... | There isn't really much you can do, other than contact the website and try and explain them how bad of an idea and practice it is to store (and email) passwords in plain text. One thing you can do is report any offending site to plaintextoffenders.com - a site (currently a tumblr blog, but we're working on a proper site soon) which lists different "plain text offenders" - sites that email you your own password, thus exposing the fact they either store it in plain text, or using a reversible encryption, which is just as bad. With everything that's happened with Sony , again and again, people become more aware to the dangers of sites storing sensitive details unencrypted, yet many still aren't. There are over 300 sites reported, with more reports coming every day! Hopefully, plaintextoffenders.com helps by exposing more and more sites. Once this gets enough attention on twitter or other social media, sometimes sites change their way, and fix the problem! For example, Smashing Magazine and Pingdom have recently changed the way they deal with passwords, and no longer store nor email the passwords in plain text! The problem is awareness, and I hope that we help the cause with plaintextoffenders. | {
"source": [
"https://security.stackexchange.com/questions/7118",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4629/"
]
} |
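For contrast with the "email you your own password" behaviour criticised above, the usual alternative is to email a short-lived, single-use reset token and store only a hash of it, so even a database leak does not expose usable credentials. A rough Python sketch, where the URL, TTL and `pending` store are all illustrative:

```python
import hashlib, secrets, time

RESET_TTL = 3600  # one hour

def start_password_reset(pending, username):
    token = secrets.token_urlsafe(32)                      # only ever sent in the email link
    pending[username] = {
        "token_hash": hashlib.sha256(token.encode()).hexdigest(),  # never store the token itself
        "expires": time.time() + RESET_TTL,
    }
    return f"https://example.invalid/reset?user={username}&token={token}"

def finish_password_reset(pending, username, token):
    rec = pending.pop(username, None)                      # single use
    if rec is None or time.time() > rec["expires"]:
        return False
    candidate = hashlib.sha256(token.encode()).hexdigest()
    return secrets.compare_digest(candidate, rec["token_hash"])
```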
7,142 | A "soon to enter beta" online backup service, Bitcasa, claims to have both de-duplication (you don't back up something already in the cloud) and client-side encryption. http://techcrunch.com/2011/09/12/with-bitcasa-the-entire-cloud-is-your-hard-drive-for-only-10-per-month/ A patent search yields nothing with their company name, but the patents may well be in the pipeline and not granted yet. I find the claim pretty dubious with the level of information I have now; does anyone know more about how they claim to achieve that? Had the founders of the company not had a serious business background (Verisign, Mastercard...) I would have classified the product as snake oil right away, but maybe there is more to it. Edit: found a worrying tweet : https://twitter.com/#!/csoghoian/status/113753932400041984 , the encryption key per file would be derived from its hash, so it is definitely not looking like the place to store your torrented film collection, not that I would ever do that. Edit2: We actually guessed it right, they used so-called convergent encryption, and thus someone owning the same file as you do can know whether yours is the same, since they have the key. This makes Bitcasa a very bad choice when the files you want to be confidential are not original. http://techcrunch.com/2011/09/18/bitcasa-explains-encryption/ Edit3: https://crypto.stackexchange.com/questions/729/is-convergent-encryption-really-secure has the same question and different answers | I haven't thought through the details, but if a secure hash of the file content were used as the key then any (and only) clients who "knew the hash" would be able to access the content. Essentially the cloud storage would act as a collective partial (very sparse, in fact) rainbow table for the hashing function, allowing it to be "reversed". From the article: "Even if the RIAA and MPAA came knocking on Bitcasa’s doors, subpoenas in hand, all Bitcasa would have is a collection of encrypted bits with no means to decrypt them." -- true because Bitcasa doesn't hold the objectid/filename-to-hash/key mapping; only their clients do (client-side). If the RIAA/MPAA knew the hashes of the files in question (well known for e.g. specific song MP3s) they'd be able to decrypt and prove you had a copy, but first they'd need to know which cloud-storage object/file held which song. Clients would need to keep the hash for each cloud-stored object, and their local name for it, of course, to be able to access and decrypt it. Regarding some of the other features claimed in the article: "compression" -- wouldn't work server-side (the encrypted content will not compress well) but could be applied client-side before encryption. "accessible anywhere" -- if the objid-to-filename-and-hash/key mapping is only on the client then the files are useless from other devices, which limits the usefulness of cloud storage. Could be solved by e.g. also storing the collection of objid-to-filename-and-hash/key tuples, client-side encrypted with a passphrase. "patented de-duplication algorithms" -- there must be more going on than the above to justify a patent -- possibly de-duplication at a block, rather than file, level? The RIAA/MPAA would be able to come with a subpoena and an encrypted-with-its-own-hash copy of whatever song/movie they suspect people have copies of. Bitcasa would then be able to confirm whether or not that file had been stored.
They wouldn't be able to decrypt it (without RIAA/MPAA giving them the hash/key), and (particularly if they aren't enforcing per-user quotas because they offer "infinite storage") they might not have retained logs of which users uploaded/downloaded it. However, I suspect they could be required to remove the file (under DMCA safe harbour rules) or possibly to retain the content but then log any accounts which upload/download it in the future. | {
"source": [
"https://security.stackexchange.com/questions/7142",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1898/"
]
} |
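The scheme guessed at in the question's edits, convergent encryption, can be sketched as follows. This is a generic illustration of "key derived from the file's hash", not Bitcasa's actual (unpublished) implementation, and it assumes the third-party `cryptography` package for AES-GCM:

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(plaintext: bytes):
    key = hashlib.sha256(plaintext).digest()                     # the key is the content's hash
    nonce = hashlib.sha256(b"nonce:" + plaintext).digest()[:12]  # deterministic on purpose
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    object_id = hashlib.sha256(ciphertext).hexdigest()           # what the server can dedupe on
    return object_id, nonce + ciphertext, key                    # client keeps object_id -> key locally

def convergent_decrypt(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```

Because everything is derived deterministically from the plaintext, two users uploading the same file produce the same ciphertext (which is what enables de-duplication), and anyone who already has the file can derive the key - the confidentiality trade-off discussed above.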
7,161 | My question today comes from a homework question in my Ethics in IT class. The question roughly states that I'm an IT guy in a big company and I am asked to hire a few hackers to find vulnerabilities in the system. The hackers would take on the role of finding out what's wrong with security and alerting the company about what needs to be patched. But I find myself uneasy about the criminal records of such hackers. What should I do, and what are the ethical issues around this problem? Well, my first thought is to say no. There are many companies which could do the same thing hackers could do. Security companies would be more expensive, but hackers have their own knowledge of the underground hacking scene. The problem with (some) hackers is that their record shows they have used their knowledge to do unethical things, and that can't be overlooked despite what they could do for companies. Checking their background is a must. Their achievements in security training should also be checked, to find out how much they have learned and whether they can be trusted. In my opinion it's all about trust in the hacker, who could install a back door in the company without anyone knowing. Well, that's roughly my thoughts. So should hackers/crackers be hired for penetration testing? Or is it too much of a risk, and should companies well known for security testing be hired instead? | There are a few drawbacks to hiring a blackhat "hacker" instead of a security company. They are harder to trust Apart from backdooring your system, I would not trust a blackhat I pick off the street to keep his findings about my network confidential. Hackers like to boast to their peers. The knowledge they obtain about your security can bite you in the ass in more than one way. They are adrenaline junkies OK, that's a bit strong. But a hacker who is just in it for the fun and not for the money will focus on what he finds fun. I have worked with both professional penetration testers and "recreational hackers", and the latter kind performs a different kind of test. If you find yourself a good hacker, he may have more knowledge than the professional penetration tester, but he will not deliver the same quality report. He will find a fun way to enter your network and exploit that fully, while the penetration tester will see if there are multiple ways in, will weigh the issues he finds against the actual risk, and can give you less black-and-white advice about how to solve the issues. They are harder to do business with Think in terms of planning, deadlines, availability for status updates et cetera. So, if you find yourself the perfect gentleman hacker with knowledge of the underground, who will keep your findings confidential, propose realistic solutions, and do the test from the perspective of what is useful to you instead of just what is fun for him, then you have a winner. Good luck finding him :-) By the way, if you already have a security company you hire for regular penetration tests and such (which imho a big company should have), then hiring some hackers for an "extra" test to see if the company misses anything can be a great move. The confidentiality issue stays though...
"source": [
"https://security.stackexchange.com/questions/7161",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4961/"
]
} |
7,204 | Is formatting the disk and reinstalling the system from scratch (to Ubuntu) enough to remove any potential hidden software spyware, keyloggers etc.? Or can something still persist installed in the bios or something like that? What to do then? To be clear, not concerned about common malware. The question is more specific about a machine to which people other than the user had physical access for many hours. So when the user returns cannot be sure that nothing changed, so the user makes a fresh install after some weeks. Is that enough to "clean" it? Hardware keyloggers are not part of this question as they should continue after software reinstallation. | It is possible for malware to persist across a re-format and re-install, if it is sufficiently ingenious and sophisticated: e.g., it can persist in the bios, in the firmware for peripherals (some hardware devices have firmware that can be updated, and thus could be updated with malicious firmware), or with a virus infecting data files on removable storage or on your backups. However, most malware doesn't do anything quite this nasty. Therefore, while there are no guarantees, re-formatting and re-installing should get rid of almost all malware you're likely to encounter in the wild. Personally, I would be happy with re-formatting and re-installing. It's probably good enough in practice. | {
"source": [
"https://security.stackexchange.com/questions/7204",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4980/"
]
} |
7,219 | I am currently taking a Principles of Information Security class. While talking about different encryption methods, a large number of my classmates seem to believe that Asymmetric Encryption is better (more secure) than Symmetric Encryption. A typical statement is something like this: Generally asymmetric encryption schemes are more secure because they
require both a public and a private key. Certainly, with symmetric encryption, you have to worry about secure key exchange, but as far as I can tell there's no inherent reason why one must be more secure than the other. Especially given that the asymmetric part is often just used for the key exchange and then the actual data is encrypted with a symmetric algorithm. So, am I missing something or can a general statement like this really be made about which is more secure? If I have a message encrypted with AES and another copy encrypted with RSA, and all other things being equal, which is more likely to be cracked? Can this comparison even be made? | There is a sense in which you can define the strength of a particular encryption algorithm¹: roughly speaking, the strength is the number of attempts that need to be made in order to break the encryption. More precisely, the strength is the amount of computation that needs to be done to find the secret. Ideally, the strength of an algorithm is the number of brute-force attempts that need to be made (weighed by the complexity of each attempt, or reduced if some kind of parallelization allows for multiple attempts to share some of the work); as attacks on the algorithm improve, the actual strength goes down. It's important to realize that “particular encryption algorithm” includes considering a specific key size. That is, you're not pitching RSA against AES, but 1024-bit RSA (with a specific padding mode) with AES-256 (with a specific chaining mode, IV, etc.). In that sense, you can ask: if I have a copy of my data encrypted with algorithm A with given values of parameters P and Q (in particular the key size), and a copy encrypted with algorithm B with parameters P and R, then which of (A,Pval₁,Qval₁) and (B,Pval₂,Rval₂) is likely to be cracked first? In practice, many protocols involve the use of multiple cryptographic primitives. Different primitives have different possible uses, and even when several primitives can serve a given function, there can be one that's better suited than others. When choosing a cryptographic primitive for a given purpose, the decision process goes somewhat like this: What algorithms can do the job? → I can use A or B or C. What strength to I need? → I want 2 N operations, so I need key size L A for primitive A, L B for primitive B, L C for primitive C. Given my constraints (brute speed, latency, memory efficiency, …), which of these (L A -bit A or L B -bit B or L C -bit C) is best? For example, let's say your requirement is a protocol for exchanging data with a party you don't trust. Then symmetric cryptography cannot do the job on its own: you need some way to share the key. Asymmetric cryptography such as RSA can do the job, if you let the parties exchange public keys in advance. (This is not the only possibility but I won't go into details here.) So you can decide on whatever RSA key length has the right strength for your application. However RSA is slow and cumbersome (for example there aren't standard protocols to apply RSA encryption to a stream — mainly because no one has bothered because they'd be so slow). Many common protocols involving public-key cryptography use it only to exchange a limited-duration secret: a session key for some symmetric cryptography algorithm. This is known as hybrid encryption . Again, you choose the length of the session key according to the desired strength. In this scenario, the two primitives involved tend to have the same strength. 
¹ The same notion applies to other uses of cryptography, such as signing or hashing. | {
"source": [
"https://security.stackexchange.com/questions/7219",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3425/"
]
} |
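The hybrid pattern described at the end of the answer above (asymmetric crypto only to move a symmetric session key, symmetric crypto for the bulk data) might look roughly like this in Python with the third-party `cryptography` package; the key sizes are just reasonable examples:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-term key pair (the asymmetric part).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def hybrid_encrypt(message: bytes):
    session_key = AESGCM.generate_key(bit_length=128)     # the symmetric part
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, message, None)
    wrapped_key = public_key.encrypt(                     # RSA only protects the small session key
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return wrapped_key, nonce, ciphertext

def hybrid_decrypt(wrapped_key, nonce, ciphertext):
    session_key = private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)
```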
7,398 | While looking up methods for creating secure session cookies I came across this publication: A Secure Cookie Protocol . It proposes the following formula for a session cookie: cookie = user | expiration | data_k | mac where | denotes concatenation. user is the user-name of the client. expiration is the expiration time of the cookie. data_k is encrypted data that's associated with the client (such as a session ID or shopping cart information) encrypted using key k . k is derived from a private server key sk ; k = HMAC(user | expiration,
sk) . data_k could be encrypted using AES using the key k . mac is an HMAC of the cookie to verify the authenticity of the cookie; mac = HMAC(user | expiration | data | session-key, k) . data is the unencrypted data associated with the client. session-key is the SSL session key. HMAC is HMAC-MD5 or HMAC-SHA1 . According to the paper, it provides cookie confidentiality, and prevents against replay and volume attacks. To me (being an amateur in security/cryptography) this looks pretty good. How good is this method for session cookies or cookies in general? | Yes, this does look like a pretty good scheme for protecting cookies. Another more recent scheme in this area is SCS: Secure Cookie Sessions for HTTP , which is a solid scheme, very well thought-out. I recommend reading the security discussion of that document to get a good sense of what the security threats may be. To help understand the purpose and role of the cookie scheme you mention, let me back up and provide some context. It is common that web applications need to maintain session state: i.e., some state whose lifetime is limited to the current session, and that is bound to the current session. There are two ways to maintain session state: Store session state on the server. The web server feeds the browser a session cookie: a cookie whose only purpose is to hold a large, unguessable bit-string that serves as the session identifier. The server keeps a lookup table, with one entry per open session, that maps from the session identifier to all of the session state associated with this session. This makes it easy for web application code to retrieve and update the session state associated with a particular HTTP/HTTPS request. Most web application frameworks provide built-in support for storing session state on the server side. This is the most secure way to store session state. Because the session state is stored on the server, the client has no direct access to it. Therefore, there is no way for attackers to read or tamper with session state (or replay old values). It does require some extra work to keep session state synchronized across all servers, if your web application is distributed across multiple back-end compute nodes. Store session state on the client. The other approach is to put session state in a cookie and send the cookie to the browser. Now each subsequent request from the browser will include the session state. If the web application wants to modify the session state, it can send an updated cookie to the browser. If done naively, this is a massive security hole, because it allows a malicious client to view and modify the session state. The former is a problem if there is any confidential data included in the session state. The latter is a problem if the server trusts or relies upon the session state in any way (for example, consider if the session state includes the username of the logged-in user, and a bit indicating whether that user is an administrator or not; then a malicious client could bypass the web application's authentication mechanism). The proposal you mention, and the SCS scheme, are intended to defend against these risks as well as possible. They can do so in a way that is mostly successful. However, they cannot prevent a malicious client from deleting the cookie (and thus clearing the session state). Also, they cannot prevent a malicious client from replaying an older value of the cookie (and thus resetting the session state to an older value), if the older version came from within the same session. 
Therefore, the developer of the web application needs to be aware of these risks and take care about what values are stored in session state. For this reason, storing session state in cookies is a bit riskier than storing it on the server, even if you use one of these cryptographic schemes to protect cookies. (However, if you are going to store session state in cookies, I definitely recommend you use one of these two schemes to protect the values in the cookies.) Why would anyone store session state in cookies, given that it can just as easily be stored on the server side? Most of the time, there is no reason to. However, in some exceptional cases -- such as a HTTP server that is extremely constrained in the amount of storage it has, or in load-balanced web applications that are distributed across multiple machines without good support for synchronized session state -- there might be justifiable reasons to consider storing session state in cookies, and then using one of these schemes is a good idea. P.S. A related topic: if you use ASP.NET View State , make sure you configure it to be encrypted and authenticated: i.e., configure ViewStateEncryptionMode to Always and EnableViewStateMac to true ; if you use multiple server nodes, generate a strong cryptographic key and configure each server's machineKey to use that key . Finally, make sure you have an up-to-date version of the ASP.NET framework; older versions had a serious security flaw in the ViewState crypto . | {
"source": [
"https://security.stackexchange.com/questions/7398",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5076/"
]
} |
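A rough Python sketch of the cookie construction from the question above, using the standard library's HMAC. Here `k` is derived as in the paper (HMAC of user|expiration under the server key), but for brevity the MAC is computed over the already-encrypted data blob and the encryption of `data_k` itself is left out; the field separator assumes the username contains no '|'.

```python
import hashlib, hmac, time

SERVER_KEY = b"long-random-server-secret"        # "sk" in the paper's notation

def make_cookie(user: str, data_k: bytes, session_key: bytes, ttl: int = 3600) -> str:
    expiration = str(int(time.time()) + ttl)
    k = hmac.new(SERVER_KEY, f"{user}|{expiration}".encode(), hashlib.sha256).digest()
    mac = hmac.new(k, b"|".join([user.encode(), expiration.encode(), data_k, session_key]),
                   hashlib.sha256).hexdigest()
    return "|".join([user, expiration, data_k.hex(), mac])

def read_cookie(cookie: str, session_key: bytes):
    user, expiration, data_hex, mac = cookie.split("|")
    if int(expiration) < time.time():
        return None                                          # expired
    data_k = bytes.fromhex(data_hex)
    k = hmac.new(SERVER_KEY, f"{user}|{expiration}".encode(), hashlib.sha256).digest()
    expected = hmac.new(k, b"|".join([user.encode(), expiration.encode(), data_k, session_key]),
                        hashlib.sha256).hexdigest()
    return (user, data_k) if hmac.compare_digest(mac, expected) else None
```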
7,421 | I have been trying to understand how SSL works. Instead of Alice and Bob, let's consider client and server communication.
Server has a digital certificate acquired from a CA. It also has public and private keys.
Server wants to send a message to Client. Server's public key is already available to the client. Assume that the SSL handshake is completed. Server to Client : Server attaches its public key to the message. Runs a hash function on (message+public key). The result is known as HMAC. Encrypt the HMAC using its private key. The result is called a digital
signature. Send it to Client along with the digital certificate. Client checks the certificate and finds that it's from the expected
Server. Decrypts HMAC using Server's public key. Runs the hash function on (message+public key) to obtain the original
message. Client to Server Client runs hash function on (message+public key) and then encrypts
using the same public key. Server decrypts using private key, runs the hash function on the
resultant data to obtain the message. Please let me know if my understanding is correct. | There are a few confusions in your post. First of all, HMAC is not a hash function . More about HMAC later on. Hash Functions A hash function is a completely public algorithm (no key in that) which mashes bit together in a way which is truly infeasible to untangle: anybody can run the hash function on any data, but finding the data back from the hash output appears to be much beyond our wit. The hash output has a fixed size, typically 256 bits (with SHA-256) or 512 bits (with SHA-512). The SHA-* function which outputs 160 bits is called SHA-1, not SHA-160, because cryptographers left to their own devices can remain reasonable for only that long, and certainly not beyond the fifth pint. Signature Algorithms A signature algorithm uses a pair of keys, which are mathematically linked together, the private key and the public key (recomputing the private key from the public key is theoretically feasible but too hard to do in practice, even with Really Big Computers, which is why the public key and be made public while the private key remains private). Using the mathematical structure of the keys, the signature algorithm allows: to generate a signature on some input data, using the private key (the signature is a mathematical object which is reasonably compact, e.g. a few hundred bytes for a typical RSA signature); to verify a signature on some input data, using the public key. Verification takes as parameters the signature, the input data, and the public key, and returns either "perfect, man !" or "dude, these don't match". For a secure signature algorithm, it is supposedly unfeasible to produce a signature value and input data such that the verification algorithm with a given public key says "good", unless you know the corresponding private key, in which case it is easy and efficient. Note the fine print: without the private key, you cannot conjure some data and a signature value which work with the public key, even if you can choose the data and the signature as you wish. "Supposedly unfeasible" means that all the smart cryptographers in the world worked on it for several years and yet did not find a way to do it, even after the fifth pint. Most (actually, all) signature algorithms begin by processing the input data with a hash function, and then work on the hash value alone. This is because the signature algorithm needs mathematical objects in some given sets which are limited in size, so they need to work on values which are "not too big", such as the output of a hash function. Due to the nature of the hash function, things work out just well (signing the hash output is as good as signing the hash input). Key Exchange and Asymmetric Encryption A key exchange protocol is a protocol in which both parties throw mathematical objects at each other, each object being possibly linked with some secret values that they keep for them, in a way much similar to public/private key pairs. At the end of the key exchange, both parties can compute a common "value" (yet another mathematical object) which totally evades the grasp of whoever observed the bits which were exchanged on the wire. One common type of key exchange algorithm is asymmetric encryption . Asymmetric encryption uses a public/private key pair (not necessarily the same kind than for a signature algorithm): With the public key you can encrypt a piece of data. That data is usually constrained in size (e.g. 
no more than 117 bytes for RSA with a 1024-bit public key). Encryption result is, guess what, a mathematical object which can be encoded into a sequence of bytes. With the private key, you can decrypt , i.e. do the reverse operation and recover the initial input data. It is assumed that without the private key, tough luck. Then the key exchange protocol runs thus: one party chooses a random value (a sequence of random bytes), encrypts that with the peer's public key, and sends him that. The peer uses his private key to decrypt, and recovers the random value, which is the shared secret. An historical explanation of signatures is: "encryption with the private key, decryption with the public key". Forget that explanation. It is wrong. It may be true only for a specific algorithm (RSA), and, then again, only for a bastardized-down version of RSA which actually fails to have any decent security. So no , digital signatures are not asymmetric encryption "in reverse". Symmetric Cryptography Once two parties have a shared secret value, they can use symmetric cryptography to exchange further data in a confidential way. It is called symmetric because both parties have the same key, i.e. the same knowledge, i.e. the same power. No more private/public dichotomy. Two primitives are used: Symmetric encryption : how to mangle data and unmangle it later on. Message Authentication Codes : a "keyed checksum": only people knowing the secret key can compute the MAC on some data (it is like a signature algorithm in which the private and the public key are identical -- so the "public" key had better be not public !). HMAC is a kind of MAC which is built over hash functions in a smart way, because there are many non-smart ways to do it, and which fail due to subtle details on what a hash function provides and does NOT provide. Certificates A certificate is a container for a public key. With the tools explained above, one can begin to envision that the server will have a public key, which the client will use to make a key exchange with the server. But how does the client make sure that he is using the right server's public key, and not that of a devious attacker, a villain who cunningly impersonates the server ? That's where certificates come into play. A certificate is signed by someone who is specialized in verifying physical identities; it is called a Certificate Authority . The CA meets the server "in real life" (e.g. in a bar), verifies the server identity, gets the server public key from the server himself, and signs the whole lot (server identity and public key). This results in a nifty bundle which is called a certificate. The certificate represents the guarantee by the CA that the name and public key match each other (hopefully, the CA is not too gullible, so the guarantee is reliable -- preferably, the CA does not sign certificates after its fifth pint). The client, upon seeing the certificate, can verify the signature on the certificate relatively to the CA public key, and thus gain confidence in that the server public key really belongs to the intended server. But, you would tell me, what have we gained ? We must still know a public key, namely the CA public key. How do we verify that one ? Well, we can use another CA. This just moves the issue around, but it can end up with the problem of knowing a priori a unique or a handful of public keys from über-CAs which are not signed by anybody else. 
Thoughtfully, Microsoft embedded about a hundred of such "root public keys" (also called "trust anchors") deep within Internet Explorer itself. This is where trust originates (precisely, you forfeited the basis of your trust to the Redmond firm -- now you understand how Bill Gates became the richest guy in the world ?). SSL Now let's put it all together, in the SSL protocol, which is now known as TLS ("SSL" was the protocol name when it was a property of Netscape Corporation). The client wishes to talk to the server. It sends a message ("ClientHello") which contains a bunch of administrative data, such as the list of encryption algorithms that the client supports. The server responds ("ServerHello") by telling which algorithms will be used; then the server sends his certificate ("Certificate"), possibly with a few CA certificates in case the client may need them (not root certificates, but intermediate, underling-CA certificates). The client verifies the server certificate and extracts the server public key from it. The client generates a random value ("pre-master secret"), encrypts it with the server public key, and sends that to the server ("ClientKeyExchange"). The server decrypts the message, obtains the pre-master, and derive from it secret keys for symmetric encryption and MAC. The client performs the same computation. Client sends a verification message ("Finished") which is encrypted and MACed with the derived keys. The server verifies that the Finished message is proper, and sends its own "Finished" message in response. At that point, both client and server have all the symmetric keys they need, and know that the "handshake" has succeeded. Application data (e.g. an HTTP request) is then exchanged, using the symmetric encryption and MAC. There is no public key or certificate involved in the process beyond the handshake. Just symmetric encryption (e.g. 3DES, AES or RC4) and MAC (normally HMAC with SHA-1 or SHA-256). | {
"source": [
"https://security.stackexchange.com/questions/7421",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5087/"
]
} |
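To underline the distinctions the answer above draws, here is a small demonstration of the three different primitives: an unkeyed hash, a keyed MAC (HMAC), and a public-key signature where the private key signs and the public key verifies. The RSA part assumes the third-party `cryptography` package; keys and messages are throwaway examples.

```python
import hashlib, hmac
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

message = b"hello client"

# 1. Hash: no key at all -- anyone can compute and check it.
digest = hashlib.sha256(message).hexdigest()

# 2. HMAC: one shared secret key -- both sides can compute and verify the tag.
shared_key = b"shared-secret"
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# 3. Signature: the private key signs, the public key verifies.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256())
private_key.public_key().verify(   # raises InvalidSignature if message or signature was altered
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256())
```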
7,440 | There are many great questions that ask what is the best certificate to use for a website; but once the certificate is purchased, there is also the possibility to choose or edit the Cipher list. Although each vendor may call this setting something slightly different, my understanding is that the Cipher List is used to negotiate encryption protocols between the client and the server. What are the basics of choosing a Cipher list for my website? If
the defaults need to be altered Where should "beginners" go to get
reliable advice? Have any of the traditional recommendations changed as of September
2011's BEAST or 2012's CRIME attack? Does anyone maintain a list of ciphers supported by OS/Vendor/and
version? Is it correct to say that something like this would be
useful? Are some certificates incompatible or not preferred with certain ciphers? Where can I learn more? Specifically, how can I get a cursory
ability to compare Ciphers without having to retake some
post-secondary math classes? | In SSL/TLS , the cipher suite selects a set of algorithms, for several tasks: key agreement, symmetric encryption, integrity check. The certificate type impacts the choice of the key agreement. Two parameters must be taken into account: the key type and the key usage : With a RSA key you can nominally use the "RSA" and "DHE_RSA" cipher suite. But if the server certificate has a Key Usage extension which does not include the "keyEncipherment" flag, then you are nominally limited to "DHE_RSA". With a DSA key you can use only a "DHE_DSS" cipher suite. With a Diffie-Hellman key, you can use only one of "DH_RSA" or "DH_DSS", depending on the issuing certificate authority key type. Most SSL server certificates have a RSA key which is not restricted through a Key Usage extension, so you can use both "RSA" and "DHE_RSA" key types. "DHE" stands for "Diffie-Hellman Ephemeral". This allows Perfect Forward Secrecy . PFS means that if an attacker steals the server private key (the one which is stored in a file, hence plausibly vulnerable to ulterior theft), he will still not be able to decrypt past recorded transactions. This is a desirable property, especially when you want your system to look good during an audit. For the integrity check , you should not use MD5, and, if possible, avoid SHA-1 as well. None of the currently known weaknesses of MD5 and SHA-1 impacts the security of TLS (except possibly when used within a certificate, but that's chosen by the CA, not you). Nevertheless, using MD5 (or, to a lesser extent, SHA-1) is bad for public relations. MD5 is "broken". If you use MD5, you may have to justify yourself. Nobody would question the choice of SHA-256. The general consensus is that SHA-1 is "tolerable" for legacy reasons. For symmetric encryption , you have the choice between (mostly) RC4, 3DES and AES (for the latter, the 256-bit version is overkill; AES-128 is already fine). The following points can be made: RC4 and 3DES will be supported everywhere. The oldest clients may not support AES (e.g. Internet Explorer 6.0 does not appear to be able to negotiate AES-based cipher suites). There are known weaknesses in RC4. None is fatal right away. Situation is somewhat similar to that of SHA-1: academically "broken", but not a problem right now. This still is a good reason not to use RC4 if it can be avoided. 3DES is a 64-bit block cipher. This implies some (academic) weaknesses when you encrypt more than a few gigabytes in a single session. 3DES can be heavy on your CPU. On a 2.7 GHz Intel i7, OpenSSL achieves 180 MB/s encryption speed with AES-128 (it could do much better if it used the AES-NI instructions ) but only 25 MB/s with 3DES. 25 MB/s is still good (that's 2.5x what a 100 Mbits/s link can handle, and using a single core) but might not be negligible, depending on your situation. The BEAST attack is an old academic weaknesses which has recently been demonstrated to be applicable in practice. It requires that the attacker spies on the link and runs hostile code on the client (and that code must communicate with the external spying system); the BEAST authors have managed to run it when the hostile internal code uses Java or Javascript. The generic solution is to switch to TLS 1.1 or 1.2, which are immune. Also, this concerns only block ciphers in CBC mode; RC4 is immune. In a SSL/TLS handshake, the client announces his supported cipher suites (preferred suites come first), then the server chooses the suite which will be used. 
It is traditional that the server honours the client preferences -- i.e. chooses the first suite in the list sent by the client that the server can handle. But a server could enforce its own order of preferences. DHE implies somewhat higher CPU consumption on the server (but it will not make a noticeable difference unless you establish several hundreds new SSL sessions per second). There is no DHE cipher suite which uses RC4. Summary: this leads me to the following preferred list of cipher suites. If the BEAST attack may apply to you (i.e. the client is a Web browser), use this: If the session uses SSL-3.0 or TLS-1.0, try to use TLS_RSA_WITH_RC4_128_SHA . If the client supports TLS 1.1+, or if it does not support TLS_RSA_WITH_RC4_128_SHA , or if you consider PFS to be more important to you than BEAST-like active attacks (e.g. you are most concerned about long-term confidentiality, not immediate breaches), then use TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (fallback to TLS_DHE_RSA_WITH_AES_128_CBC_SHA if the client does not support SHA-256). If DHE cipher suites are not supported by the client, use the corresponding non-DHE suite. Generic fallback is TLS_RSA_WITH_3DES_EDE_CBC_SHA which should work everywhere. Note that the choices above assume that you can alter the suite selection depending on the negotiated protocol version, which may or may not be an available option for your particular SSL server. If BEAST does not apply to you (the client will not run hostile code), then drop RC4 support altogether; concentrate on AES-128 and SHA-256; fallback on 3DES and SHA-1, respectively; use DHE if available. | {
"source": [
"https://security.stackexchange.com/questions/7440",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
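On a server where you control the TLS stack from Python, a cipher preference along the lines of the "BEAST does not apply / modern TLS" branch of the advice above might be expressed like this. The string is an illustration in OpenSSL cipher-list syntax rather than a verbatim copy of the 2011-era suite names, and current deployments should follow up-to-date guidance; the certificate paths are placeholders.

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2        # rules out the BEAST-era protocol versions
# Prefer forward-secret (ephemeral DH/ECDH) AES-GCM suites, then AES-CBC fallbacks;
# explicitly drop RC4, 3DES, MD5 and anonymous/NULL suites.
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:ECDHE+AES:DHE+AES:!RC4:!3DES:!MD5:!aNULL:!eNULL")
ctx.load_cert_chain("server.crt", "server.key")
```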
7,443 | I recently helped a client who had their server hacked. The hackers added some PHP code into the header of the homepage redirecting the user to a porn website — but only if they came from Google. This made it slightly harder for the client to spot. The client would see the website fine. Only new website visitors from Google would be directed to the porn site. Last night a similar thing appeared to happen to a different client. I assumed it was a similar hack, but when I checked the codebase I could not find any malicious code.
His Chrome browser is redirecting from the client's website to www(dot)pc-site(dot)com . I cannot replicate this behaviour. I guess it is possible that malicious code is being added and removed. So I need a more comprehensive way to tell if the server has been hacked. Only 2 developers have access to this dedicated server (and the hosting company Rackspace).
The server is Red Hat Linux. What are the steps I go through to find out if the server has been hacked? | UPDATED I would check the following: Logs. If you have root access you should check things like history which will give you command history and log files in /var/logs . Baseline. If you have a baseline like file hashes to work with for application and system files this will help a lot. You can also use backups to compare a previous state. If using a backup to compare files, use a slightly older one if you can. The site may have been compromised a while before and it is only now that the redirect has been activated. Check any includes. The files may not be on your server. They may be script includes such as <script src=”http://baddomain.com/s.js” /> or iframe type tags. Also do not exclude images, PDFs of Flash (SWF), video files. It is a fairly common trick to embed links in to files of a different content type. I would suggest you inspect them by hand particularly at the start and end of a file. The file may be completely a link/html/javascript or may be a legitimate image file with a link trailing at the end of the file. Check for unusual file dates, sizes and permissions e.g. 777. Check cron jobs for unusual jobs. Someone compromising a system will often leave a back door to get back in again and again. Cron is a very popular way to do this if they managed to get that far. Check for the absence of files, you may not be able to have access to logs but the absence of them is equally a tell tail sign that someone has cleaned up after themself. Use search engines. Not surprising search engines are great at finding everything. Use directives like site: e.g. site:yoursitehere.com baddomain.com see if you get any hits. Often a link or redirect will be obfuscated so long javascript code with single letter variables should be analyzed carefully. Do a packet capture with a tool like Wireshark or tcpdump from a secure workstation to the site. Save it to file and search the file for a parts of the url. Check database records that may be queried or updated. The link could be injected in the database not the PHP. Don't exclude the client's workstation. Use a free online virus scanner if need be. Also check nslookup and see what that resolves to. Check browser extensions, clear cache and check hosts files. To clean it up (if you are compromised) you really do need to go back to bare metal and reinstall. It is painful but is really the only way to be sure that you have got the whole lot. To prevent it in the future you should be doing the following (although you may already be doing some of these): Harden servers, including using vendor recommendations on secure configurations, using up-to-date software. Apply tight security control such as permissions, password policies. Also see folder and file permission shared host advice . Implement quality control proceedures such as testing on low security environments, code review and testing. Have your web application / web site vulnerability tested by a professional certified tester at least once. Look for EC-Council, ISO 27001 and PCI certified testers. http://www.eccouncil.org/certification/licensed_penetration_tester.aspx Check out OWASP www.owasp.org and http://phpsec.org/projects/guide/2.html for web application security resources. Use Intrusion Prevention System (IPS) tools. However depending on your hosting provider you may have limitations on what you can use. Host based IPS tools should be ok if you have a dedicated virtual machine. 
Hope that helps. Otherwise maybe you could provide more information about the systems you are running? | {
"source": [
"https://security.stackexchange.com/questions/7443",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5097/"
]
} |
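Several of the checks above (unusual dates, 777 permissions, injected script/iframe includes) can be roughed out in a single walk over the web root. The patterns, paths and time window below are only examples, and a hit is a lead to investigate, not proof of compromise:

```python
import os, re, stat, time

SUSPICIOUS = re.compile(rb"base64_decode\s*\(|<iframe|document\.write\(unescape", re.I)
RECENT = time.time() - 7 * 86400                 # anything touched in the last week

def scan(webroot="/var/www"):
    for dirpath, _dirs, files in os.walk(webroot):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
                content_hit = False
                if name.endswith((".php", ".js", ".html")):
                    with open(path, "rb") as fh:
                        content_hit = bool(SUSPICIOUS.search(fh.read()))
            except OSError:
                continue
            world_writable = bool(st.st_mode & stat.S_IWOTH)
            recently_changed = st.st_mtime > RECENT
            if world_writable or recently_changed or content_hit:
                print(path,
                      "world-writable" if world_writable else "",
                      "recently-modified" if recently_changed else "",
                      "suspicious-pattern" if content_hit else "")
```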
7,467 | In some password-authenticated sites, you are asked to enter a random selection of specific characters from your password rather than the whole word/phrase. For example, it might say 'Enter the 1st, 4th and 8th letter' and provide three separate input boxes. From what little I know of security mechanisms, I would have expected this to be less secure than entering the whole password, salting + hashing it and comparing it to the stored hash, as there is no plain text anywhere in sight. This system is used by (hopefully) very secure websites, though, including certain bank sites in the UK. So my two-part question is, is this as secure as traditional salt/hash comparison and why is it so? | This method of password entry is popular in bank sites in Poland. It's called masked passwords. It's meant to protect from keyloggers - and keyloggers only. The password transmission itself is protected from sniffing with SSL, so the reasoning is that if keylogger is installed on client's device, it will never get access to the full password. However, this logic has a few flaws: Attacker needs to enter fewer characters (e.g. only 4 characters, often numbers only) for a single try. Therefore it's easier to brute force this authentication step. That is why masked passwords need to be paired with account lockout policy. With just a few known characters at certain positions (e.g. gathered by a keylogger/screengrabber) attacker can simply try logging in when the server chose positions he knows and refresh when others were chosen. So often masked passwords implementation stores the positions choice for an account server-side for certain amount of time (e.g. a day) until successful authentication. Getting to know the whole password only needs capturing a few successful authentications (e.g. when password length is 12 and there are always 4 positions chosen, it usually takes 8 tries ), so a keylogger/screengrabber will get it - it will just take a little bit longer. The biggest threat for Internet banking authentication is malware (man-in-the-browser attacks) like ZeUS or SpyEye and this kind of software usually conducts social engineering attacks that totally overcome masked passwords scheme. For example, this software can: ask for a whole password display a fake password change form after fake authentication simulate password entry errors and redisplay the form with other positions to fill to get full password in 2-3 tries Masked passwords are being difficult to handle for users and tricky to implement correctly. At the very least developers need to add account lockout policy, positions choice storage and partial hashes. Contrary to popular belief, masked password, especially in e-banking sites, though they offer protection from basic keylogging, completely fail to other, more prevalent threats like malware utilizing social engineering. | {
"source": [
"https://security.stackexchange.com/questions/7467",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5171/"
]
} |
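The claim above that a 12-character password prompted at 4 random positions is usually fully exposed after about 8 observed logins can be checked with a short simulation; the parameters are illustrative.

```python
import random

def logins_until_full_exposure(password_len=12, positions_per_login=4, trials=10_000):
    """Average number of observed partial logins before every character position has been seen."""
    total = 0
    for _ in range(trials):
        seen, logins = set(), 0
        while len(seen) < password_len:
            seen.update(random.sample(range(password_len), positions_per_login))
            logins += 1
        total += logins
    return total / trials

print(logins_until_full_exposure())   # typically on the order of 8 observed logins
```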
7,489 | While there is the possibility that this is just to prevent people from viewing offensive images against their will, somehow I don't think that's the reason why pretty much every email client defaults to making the user white list every single email address that sends them images. This level of paranoya screams "security feature" to me. My question is: is it? And if so, what are the possible problems? Or is it something else entirely? | Several reasons: The more content in the email that the client loads and interprets, the greater the possibility that the email will deliver a malicious payload. The relatively recent vulnerability in JPEG 2000 rendering code comes to mind -- merely displaying a malicious image could be dangerous. Images in email are commonly used by spammers and marketers to determine whether or not you've opened an email. This implicitly also tells them whether the email was delivered successfully and whether the destination email address was valid (useful for spammers). Depending on your mail platform, image downloads may tell the sender the user's IP address. Image URLs can theoretically be used to attack a network from the inside. For example: <img src="http://192.168.0.1/apply.pl?user=admin&password=admin&action=EnableRemoteLogin"> Hopefully an attack like the above would fail, but security folks prefer to limit exposure as much as possible. | {
"source": [
"https://security.stackexchange.com/questions/7489",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5091/"
]
} |
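As a sketch of the tracking described in the second bullet above, a sender only needs to give each recipient a unique image URL; fetching it acts as the "read receipt". The domain and token scheme here are invented for illustration, and this is precisely the behaviour that blocking remote images by default defeats.

```python
import secrets

def make_tracking_pixels(recipients, campaign="newsletter-42"):
    """Give each recipient a unique, unguessable 1x1 image URL; the server logs who fetched what."""
    pixels = {}
    for address in recipients:
        token = secrets.token_urlsafe(16)
        pixels[token] = address   # server side: a request for /{token}.gif identifies the reader
        url = f"https://tracker.example.invalid/{campaign}/{token}.gif"
        print(f'{address}: <img src="{url}" width="1" height="1" alt="">')
    return pixels
```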
7,539 | The layman's counter-argument I run in to for any complaint about inadequate security seems to always take the form: You don't need security if you aren't doing something illegal. This kind of response is frustrating to say the least. In part because it's not constructive, but also because it's blatantly false. How do you deal with these kinds of responses from people? I'm looking for concrete examples that can be presented that show the need for strong security when conducting perfectly legitimate activities. Examples in the areas of trust worthy encryption on end-to-end communications for cellular networks, network identity obfuscation services like Tor or VPNs, complete and total data destruction, and so on are what I'm after. I'm always inclined to point to social uprisings in states like Libya and Egypt but these events tend to be presented to too many of the people I encounter that use this argument as "things that happen on TV" and not real things that have any effect on them or their personal liberties. So counter-arguments that keep it squarely in the first world, it-could-hurt-you-or-your-grandma kind of are really valuable here. This question was IT Security Question of the Week . Read the Oct 10, 2011 blog entry for more details or submit your own Question of the Week. | You don't need to lock your front door unless you're a thief. It's the same idea in all relevant respects. Each person needs to take reasonable measures to protect himself and his property from those who would harm him or his property, in accordance with his best judgment of the risks. You buy a lock and lock your front door if you live in a city, in close proximity to hundreds of thousands of others. There is a reason for that. And it's the same reason why you lock your Internet front door. | {
"source": [
"https://security.stackexchange.com/questions/7539",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5141/"
]
} |
7,610 | Does anybody have experience with securing/hardening MongoDB server? Check lists or guides would be welcome. | NoSQL databases are relatively new (although arguably an old concept), I haven't seen any specific MongoDB hardening guides and the usual places I look ( CISSecurity , vendor publications, Sans etc all come up short). Suggests it would be a good project for an organisation, uni student, infosec community to write one and maintain it. There is some basic information in Mongodb.org. All the steps in here should be followed including enabling security. The site itself states MongoDB only has a very basic level of security. http://www.mongodb.org/display/DOCS/Security+and+Authentication MongoDB and other NoSQL databases also have a lot less (especially security) features than mature SQL databases, so you are unlikely to find fine-grained permissions or data encryption, it uses MD5 for password hashing with the username as the seed. There are also limitations such as authentication not being available with sharding before version 1.9.1 so as always performing a risk assessment and building a threat model to work out your security needs and threats faced is good idea. Based on this output MongoDB or NoSQL databases in general may not be suitable for your needs, or you may need to use it in a different way that maximizes its advantages and minimizes its weaknesses (e.g. for extracts of data rather than your most sensitive information, or on behind a number of layers of network controls rather than directly connected to your web application). That said, I firmly believe security principles are technology agnostic. If you analyse even the latest attacks, and a good list on datalossdb.org it is amazing how many are still related to default passwords and missing patches. With defense in depth if you follow the following practices should have sufficient security to protect most assets (e.g. individual, commercial) maybe probably not military. Database hardening principles: Authentication - require authentication, for admin or privileged users have two factor if possible (do this at the platform level or via a network device as the database itself doesn't support it). Use key based authentication to avoid passwords if possible. Authorization - minimal number of required accounts with minimal required permissions, read only accounts are supported so use them. As granular access control does not exist use alternate means e.g a web service in front of the database which contains business logic including access control rules or within the application. Minimize the permissions that Mongodb runs as on the platform e.g. should not run as root. Default and system accounts - change the passwords of all default accounts, remove/lock/disable what you can, disable login where you can. Logging and monitoring - enable logging and export these to a central monitoring system. Define specific alerts and investigation procedures for your monitoring staff Input validation - NoSQL databases are still vulnerable to injection attacks so only passing it validated known good input, use of paramaterisation in your application frameworks, all the good practices for passing un-trusted input to a database is required Encryption - depending on the sensitivity of the data, as you cannot encrypt at the database level, encrypting or hashing any sensitive data at the application layer is required. Transport encryption also via the network layer (e.g. VPN). 
Minimize services and change the default listening port Remove any sample or test databases Have a patch management process in place to identify, evaluate and install all relevant security patches in a timely manner Harden the platform and virtualization platform if used Configure appropriate network controls e.g. firewalls, VLAN's to minimize access to the database, upstream denial of service filtering service, fully qualified DNS, seperate production and non production databases Physically secure environment Have a change management process | {
"source": [
"https://security.stackexchange.com/questions/7610",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3775/"
]
} |
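Expanding on the input-validation bullet above: operator injection is the NoSQL analogue of SQL injection. If a framework decodes request parameters into dicts, an attacker can smuggle query operators such as $gt into what should be a plain string. A hedged sketch of building a safe query document follows; the field names are illustrative, and the commented line assumes a pymongo-style collection handle.

```python
import re

def build_login_query(username: str, password_hash: str) -> dict:
    """Return a safe MongoDB-style query document, or raise if the input is not a plain scalar."""
    # If the framework decodes JSON bodies, an attacker can submit {"$gt": ""} for a field
    # and turn the lookup into "match anything".
    for value in (username, password_hash):
        if not isinstance(value, str):
            raise ValueError("scalar string expected, got %r" % type(value))
    if not re.fullmatch(r"[A-Za-z0-9_.-]{1,64}", username):
        raise ValueError("username contains unexpected characters")
    return {"user": username, "password_hash": password_hash}

# collection.find_one(build_login_query(request_user, request_hash))   # pymongo-style use
```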
7,651 | We had an incident where some of our managers were given passwords for the people they supervise using a particular company website. Ostensibly it was done so the managers could check in on the users and see that they're doing what they've been directed to do with this third party website. When I found out that a list of the passwords were printed out and given to the managers, I immediately thought that the passwords on the commercial website weren't being stored in a secure fashion and warned users that they should immediately change any passwords that nearly matched their "throwaway" accounts; I'm also afraid that, being typical humans, there are a number of people that used the same password on that site that they use with our internal password system so they didn't need to remember more than one password. I was also shocked that the users weren't warned that their passwords would be distributed to other people/supervisors. I went to the website in question and clicked on their privacy policy link; it returned a 404 error. Was I being paranoid? What are the chances that the commercial website is storing their
passwords in the clear if a manager is able to retrieve a
plaintext list of passwords? | No, you're not paranoid. The chances are pretty high that passwords are stored as plaintext in the database (it's the most obvious explanation). Some estimates say that 30% of websites store (or have stored) their users' passwords in plaintext. Examples include Reddit.com and RockYou.com. It's typically only after a serious breach occurs that the password storage procedures are put to the test. Often sites that store passwords in plaintext will offer to resend you your old password if you forget it. That pretty much proves the insecure practice is in use. There is a possibility that the passwords are stored encrypted and were decrypted for the report, but that's a rare practice. Even if the passwords are encrypted, there will always be the problem of key storage, as the application most likely needs access to the decryption key to authenticate its users. Of course, the proper thing for the application to do is to salt and hash the passwords . | {
"source": [
"https://security.stackexchange.com/questions/7651",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5208/"
]
} |
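To illustrate the closing advice above ("salt and hash the passwords"), here is a minimal sketch using only the Python standard library. The iteration count is an illustrative assumption, and a dedicated scheme such as bcrypt, scrypt or argon2 is preferable where available.

import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor, tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest    # store both; the password itself is never stored

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False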
7,666 | This question is sort-of spun off of a previous one. Why do law-abiding citizens need strong security? There are a lot of great security-focused answers there. However, I think the true question that is brought up is more about privacy and anonymity than it is just security. I'm looking for concrete examples that can be presented that show the need for strong security when conducting perfectly legitimate activities. Examples in the areas of trust worthy encryption on end-to-end communications for cellular networks, network identity obfuscation services like Tor or VPNs, complete and total data destruction, and so on are what I'm after. I'm always inclined to point to social uprisings in states like Libya and Egypt but these events tend to be presented to too many of the people I encounter that use this argument as "things that happen on TV" and not real things that have any effect on them or their personal liberties. So counter-arguments that keep it squarely in the first world, it-could-hurt-you-or-your-grandma kind of are really valuable here. The examples brought up in the post copied above are really more specific to privacy and anonymity than they are about general security. I'm sure you'll find that most "Joe Users" will agree there is a need for Antivirus, Wi-Fi encryption, and other common defensive measures. But, why would the same people have a need for things like Tor, or end-to-end encryption over cellular networks? To be a bit more clear: What are some arguments for personal online privacy/anonymity that your regular Joe User - who plays Angry Birds on Facebook while sitting in his boxers on the living room couch - will relate to? | One real world example - when you are naked in your shower, not doing anything wrong, would you like it if everyone came by and took pictures? Or televised your shower for the world? Probably not. Another example - if I send a love letter, or write a will dividing up my savings, should that be published on the front page of the national papers? Again - no. If I am carrying out my own business, the expectation should be that I have privacy, except where I have consciously and deliberately waived it. This was the case before technology became pervasive - it should still be the case. In the old days law enforcement needed a warrant before they could access your property or communications, because the assumption has to be innocent until proven guilty. This has been eroded as technology has developed. If I encrypt all my emails to my friends, the expectation should not be that I am a criminal for doing it, just that I want privacy, like leaving a room to take a private phone call. I could be planning a surprise birthday party, or applying for a new job, or possibly just enjoy using PGP - it doesn't really matter - it's my business. From the EFF's privacy page: Privacy rights are enshrined in our Constitution for a reason — a
thriving democracy requires respect for individuals' autonomy as well
as anonymous speech and association. These rights must be balanced
against legitimate concerns like law enforcement, but checks must be
put in place to prevent abuse of government powers. Admittedly, I don't live in the US, but those constitutional rights sound good to me. | {
"source": [
"https://security.stackexchange.com/questions/7666",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/953/"
]
} |
7,705 | Suppose I type this in my browser https://www.mysite.com/getsecret?username=alice&password=mysecret and an attacker is watching all traffic from me to my ISP. What information is protected by HTTPS? Is the URL revealed? Are the parameters of the GET request revealed? Also, does HTTPS provide integrity for the URL? I tried looking at various HTTPS articles and the TLS specification but was not able to figure this out. [EDIT:] It is okay if only the server domain name is revealed. However, no part of ?username=alice&password=mysecret should be revealed. | The HTTPS protocol is equivalent to using HTTP over an SSL or TLS connection (over TCP). Thus, first a TCP connection (on port 443) is opened to the server. This is usually enough to reveal the server's host name (i.e. www.mysite.com in your case) to the attacker. The IP address is directly observed, and: (1) you usually did an unencrypted DNS query before; (2) many HTTPS servers serve only one domain per IP address; (3) the server's certificate is sent in the clear, and contains the server name (possibly among multiple ones); (4) in newer TLS versions there is Server Name Indication, by which the client indicates to the server which host name it wants, so the server can present the right certificate if it has multiple ones (this was introduced to be able to move away from (2)). Then a TLS handshake takes place. This includes negotiation of a cipher suite and then a key exchange. Assuming at least one of your browser and the server excludes the NULL (no-encryption) cipher suites from the accepted list, everything following the key exchange is encrypted. And assuming there is no successful man-in-the-middle attack (i.e. an attacker who intercepts the connection and presents a forged server certificate which your browser accepts), the key exchange is secure and no eavesdropper can decrypt anything which is then sent between you and the server, and no attacker can change any part of the content without this being noticed. This includes the URL and any other part of the HTTP request, as well as the response from the server. Of course, as D.W. mentions, the length of both the request (which contains not much more variable data than the URL, maybe some cookies) and the response can be seen from the encrypted data stream. This might subvert the secrecy, especially if there are only a small number of different resources on the server. The same goes for any follow-up resource requests. Your password in the URL (or any other part of the request) should still be secure, though - at most its length can be known.
"source": [
"https://security.stackexchange.com/questions/7705",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5251/"
]
} |
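As a small sketch of the boundary described above, using Python's standard ssl module: the host name is the hypothetical one from the question, so this will not actually connect anywhere, but the comments mark which step exposes which piece of information to an on-path observer.

import socket
import ssl

host = "www.mysite.com"  # hypothetical host from the question

# Visible to the observer: the DNS lookup, the server IP address and port,
# and the host name itself (sent in clear in the ClientHello via SNI).
raw = socket.create_connection((host, 443))
context = ssl.create_default_context()
tls = context.wrap_socket(raw, server_hostname=host)  # TLS handshake happens here

# Not visible: everything written after the handshake, including the request
# line that carries the path and the query string with its parameters.
request = (
    "GET /getsecret?username=alice&password=mysecret HTTP/1.1\r\n"
    f"Host: {host}\r\nConnection: close\r\n\r\n"
)
tls.sendall(request.encode())
print(tls.recv(200))
tls.close()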
7,726 | I'm a professional software developer with a high interest in web application security. I'd say that I probably have a better understanding of the security of web applications than the average developer. My problem is, that my knowledge is heavily focused on the theoretical part of web security - meaning, I've read lots of books, I follow blogs and mailing lists, but my practical experience is limited. I know the root cause to vulnerabilities, I know how they work and how to stop them. What I'd like to do, is to take my knowledge to the next step - I want that practical experience. I know that I'll probably end up doing security related stuff at some point in my career, but not just yet - at least not full time. Meanwhile I'd like to build up my knowledge of web application security and get more experience, especially in fields such as penetration testing. My question to you, my dear security experts, what advice do you have for me to get that practical experience I so desire? What resources could you recommend for learning about the different tools and techniques used in penetration testing? One option could be, to install a vulnerable web application locally (I know there is at least one on google code, can't remember its name), and then use guides such as OWASP's Testing Guide to try out the different techniques. However, I'm probably looking for a more guided way of learning, meaning, I want to know in which order I should do stuff, make sure I'm doing stuff in the right way and in the right order - baby steps, so to say. Any and all advice is more than welcome. | This is great that you are doing so much learning on your own. You are on the right track. Your enthusiasm to learn on your own will put you a step above a lot of your competition. Kudos. My main advice would be: don't worry too much about planning out a path through all the material you want to learn. You don't need a carefully thought-out plan. Instead, get your hands dirty, play around, don't be afraid to explore. When you see something that looks cool, follow up on it opportunistically. At first you might be overwhelmed but in the longer run I think you'll find it a valuable experience. I think one good experience is to engage in a security evaluation of a software system. This would be a fun experience for you. You could pick an open-source web application or web programming framework and do a security evaluation of it: do an architectural risk assessment (what Microsoft calls "threat modeling", with STRIDE and the like), read the code and do security code review, try out some static analysis tools. Then, write up your results and post a blog entry with your evaluation. Do this a few times and I think the experience will be extremely helpful for you. Another very good experience for you would be to build a non-trivial web application or two. I like web2py as a web programming framework for its ease of learning curve, but alternatively picking something that's more popular (e.g., Ruby on Rails, PHP or one of its frameworks) might be good experience to see what the issues are with common frameworks. Write a web application, with some client-side JavaScript code. Play around learning how to use jQuery or the like. Maybe experiment with Node.js. I think this will be a fantastic way to get exposure to some of the challenges and mindset that web developers are likely to face. Depending upon your personality, you might find it fun to create a blog and write down fun things you have learned, as you learn them. 
For many people, the act of writing something out solidifies it in your brain and forces you to understand it better. You mentioned practicing hacking a web application. Learning how to hack a web application is fun, and it is worthwhile up to a certain point. It's good to have some hands-on experience hacking a web application, to help you think about the risks and to make the concepts concrete. However, I wouldn't make it the primary focus of your learning. Web penetration testing is a lower-end activity these days, with less promising career opportunities, so I wouldn't advise trying to focus your energy on learning to be a web pentester. That's not how I'd advise you to position yourself, if you have the choice (and it sounds like you do). | {
"source": [
"https://security.stackexchange.com/questions/7726",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5242/"
]
} |
7,797 | I was wondering exactly how powerful keyloggers can be. For example, say someone wants to access his bank account (which of course is over HTTPS); he will "enter" his password using a combination of typing, deleting, highlighting, highlight-drag-drop, cutting (from another program, etc.), copying (from another program, etc.) and, basically, you know, funny ways to foil keyloggers. I'm not very familiar with keylogger technology, and I was wondering: are these attempts strong enough? Or rather, what percentage of keyloggers would be foiled by these attempts at masking the password? Of course I understand it is more critical that we apply Methods of mitigating threats from keyloggers , but in this case assume that the user has not managed to block the tunnel that sends the attacker the password. | Malicious software that only logs keyboard strokes rarely exists in the wild. Most key loggers for graphical interfaces (e.g. Windows) are more sophisticated and log all user interaction, including mouse and copy-and-paste events, by hooking into the operating system. Key loggers are normally a small subset of a rootkit that may also include the ability to act as a man-in-the-middle (MITM) and capture your credentials or session information without logging any key strokes. The best way to foil key loggers is not to have them. Ninefingers' answer on Methods of mitigating threats from keyloggers has good recommendations, e.g. monitor network traffic, and use an intrusion prevention system (IPS) or intrusion detection system (IDS). In addition I would add: Avoid logging into websites/accounts using computers that you don’t have control over, e.g. at work, or at a friend’s or parent’s house. Avoid installing software that is not from a reputable source. Use digital signatures and file hashes. Be aware of what applications and services run on your computer. While rootkits do stealth themselves, making them hard to detect, knowing what should be running is definitely an advantage. Use two-factor, specifically one-time-password (OTP), authentication with websites where possible. In the specific scenario of Internet banking, financial institutions often offer a token or SMS based service that provides a password or number that can only be used once. Use protected mode browsing that disables browser plugins or scripts. Use low security accounts for normal activities. Apply security updates. Change passwords regularly. And while this does not prevent key loggers, back up your files regularly. I say this because if you suspect that you have a rootkit then you should wipe your installation and restore only the data you need.
"source": [
"https://security.stackexchange.com/questions/7797",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2379/"
]
} |
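As a concrete form of the "use digital signatures and file hashes" item above, this sketch checks a downloaded installer against a vendor-published SHA-256 digest; the file name and expected value are placeholders.

import hashlib

EXPECTED_SHA256 = "replace-with-the-digest-published-by-the-vendor"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("installer.exe")
print("OK" if digest == EXPECTED_SHA256 else f"MISMATCH: {digest}")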
7,840 | The thought of having a 3rd party send javascript, and images to end users seems to be a scary thought, but that is exactly what we are doing when I place advertisements onto my site. Does serving advertisements from AdSense, or any of the online marketing companies decrease the safety of my browsing session? What is the maximum damage a malicious advertisement could do? Suppose my business model requires serving ads, how can I safely serve advertisements on my site? What precautions can I take? | Yes. Serving advertising is opening yourself up to attacks from the marketing company, or any of their middleman, etc. There are two ways you can serve advertisements. One way is to put the advertisement in an IFRAME. The second is to include it inline, via SCRIPT SRC=. An iframed advertisement is safer: it is walled away from the rest of your page by the same-origin policy. While the ad can still serve unsavory content, display spoofed content, or try to exploit vulnerabilities in the user's browser (in a drive-by download attack), it cannot tamper with the content on the rest of your site or the user's interaction with your site. However, because the iframe limits what the ad can do (it cannot look at or interact with the containing page; it cannot do expando ads and the like), advertisers generally pay less for these kinds of ads. An inline (SCRIPT SRC=) advertisement is a greater danger. If the ad were malicious, it could completely take over your site: it could steal session cookies, plant a keylogger, steal the user's password, disrupt the site's appearance, grab personal information from the user and forward it off-site, spoof user actions, plant unsavory content on your site, etc. Therefore, if you use this method of including ads on your site, you are placing total trust in the advertising company and everyone they do business with. Similarly, you can embed a Flash ad in your web page. This poses similar risks. Malicious ads have been seen in the wild. In 2009, the visitors to the New York Times web page were attacked by a malicious ad that was being served on the NYT pages and that showed a fake A/V alert ( technical details and more details ); the attacker bought coverage for his/her ad by pretending to be a customer of the NY Times . Apparently, the FoxNews website has also been attacked by a malicious advertisement , as has MySpace , Excite , Expedia , Rhapsody , MayoClinic , Bing , Yahoo, the London Stock Exchange ( details ), eBay , Doubleclick , MSN, Spotify , Drudge Report , and undoubtedly others. There have been some studies of the prevalence of malicious ads. Dasient estimated that three million malicious advertising impressions were served per day in 2010. In principle, there are technical defenses. For instance, Yahoo's AdSafe is a restricted subset of Javascript, designed to allow advertisers to build rich media ads (written in Javascript) that can be embedded directly into the page (via SCRIPT SRC), while maintaining security. However, AdSafe has not caught on, and advertising networks have been reluctant to adopt technical defenses. Instead, they rely upon their vetting of their clients -- which can be fairly cursory. There are also some other approaches that might be applicable, including Google Caja , Microsoft's Web Sandbox, and sandboxed iframes , but I'm not familiar with whether they can be readily applied to typical advertising scenarios. As a result, if you accept ads, you are taking on a security risk. 
In many cases this risk is acceptable, particularly if the revenue stream from ads is significant enough. But I would generally recommend that, if your site is especially security-sensitive, then you should probably avoid putting ads on your pages. | {
"source": [
"https://security.stackexchange.com/questions/7840",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
7,874 | Is S/MIME an abstracted system for general MIME type encryption, whereas PGP is more for email? Why would I want to choose one over the other, or can I use both at the same time? | Summary: S/MIME and PGP both provide "secure emailing" but use distinct encodings, formats, user tools, and key distribution models. S/MIME builds over MIME and CMS . MIME is a standard way of putting arbitrary data into emails, with a "type" (an explicit indication of what the data is supposed to mean) and gazillions of encoding rules and other interoperability details. CMS means "Cryptographic Message Syntax": it is a binary format for encrypting and signing data. CMS relies on X.509 certificates for public key distribution. X.509 was designed to support top-down hierarchical PKI: a small number of "root certification authorities" issue (i.e. sign) certificates for many users (or possibly intermediate CA); a user certificate contains his name (in an email context, his email address) and his public key, and is signed by a CA. Someone wanting to send an email to Bob will use Bob's certificate to get his public key (needed to encrypt the email, so that only Bob will be able to read it); verifying the signature on Bob's certificate is a way to make sure that the binding is genuine, i.e. this is really Bob's public key, not someone else's public key. PGP is actually an implementation of the OpenPGP standard (historically, OpenPGP was defined as a way to standardize what the pre-existing PGP software did, but there now are other implementations, in particular the free opensource GnuPG ). OpenPGP defines its own encryption methods (similar in functionality to CMS) and encoding formats, in particular an encoding layer called "ASCII Armor" which allows binary data to travel unscathed in emails (but you can also mix MIME and OpenPGP ). For public key distribution, OpenPGP relies on Web of Trust : you can view that as a decentralized PKI where everybody is a potential CA. The security foundation of WoT is redundancy : you can trust a public key because it has been signed by many people (the idea being that if an attacker "cannot fool everybody for a long time"). Theoretically , in an enterprise context, WoT does not work well; the X.509 hierarchical PKI is more appropriate, because it can be made to match the decisional structure of the envisioned companies, whereas WoT relies on employees making their own security policy decisions. In practice , although most emailing softwares already implement S/MIME (even Outlook Express has implemented S/MIME for about one decade), the certificate enrollment process is complex with interactions with external entities, and requires some manual interventions. OpenPGP support usually requires adding a plugin, but that plugin comes with all that is needed to manage keys. The Web of Trust is not really used: people exchange their public keys and ensure binding over another medium (e.g. spelling out the "key fingerprint" -- a hash value of the key -- over the phone). Then people keep a copy of the public keys of the people they usually exchange emails with (in the PGP "keyring"), which ensures appropriate security and no hassle. When I need to exchange secure emails with customers, I use PGP that way. OpenPGP is also used, as a signature format, for other non-email tasks, such as digitally signing software packages in some Linux distributions (at least Debian and Ubuntu do that). | {
"source": [
"https://security.stackexchange.com/questions/7874",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4726/"
]
} |
8,048 | As far as I understand, AES is believed to be extremely secure. (I have read somewhere that it would certainly not be broken in the next 20 years, but I am still not sure if the author was serious.) DES is still not so bad for an old cypher, and 3DES is still used (maybe not so much, but at least I see 3DES in about:config in Firefox). It looks like (good) block cyphers are trusted by the crypto community. OTOH, many problems with cryptographic hash functions have been discovered. From the point of view of the non-crypto-specialist: hash functions and symmetric cyphers are really the same thing: a "random" function (with different inputs and outputs). So, why not just use AES for hashing? This seems the obvious thing to do to get the strong safety of AES for hashing. As a bonus, could hardware implementations of AES help? Is there a simple explanation of the real difference between hash functions and symmetric cyphers? | A block cipher has a key; the secrecy of the key is what the cipher's security builds on. On the other hand, a hash function has no key at all, and there is no "secret data" on which the security of the hash function is to be built. A block cipher is reversible : if you know the key, you can decrypt what was encrypted. Technically, for a given key, a block cipher is a permutation of the space of possible block values. Hash functions are meant to be non-reversible, and they are not permutations in any way. A block cipher operates on fixed-sized blocks (128-bit blocks for AES), both for input and output. A hash function has a fixed-sized output, but should accept arbitrarily large inputs. So block ciphers and hash functions are really different animals; rather than trying to differentiate them, it is easier to see what they have in common: namely, that the people who know how to design a block cipher are also reasonably good at designing hash functions, because the mathematical analysis tools are similar (quite a lot of linear algebra and boolean functions, really). Let's go for more formal definitions: A block cipher is a family of permutations selected by a key. We consider the space B of n-bit blocks for some fixed value of n; the size of B is then 2^n. Keys are values from a space K, usually another space of sequences of m bits (m is not necessarily equal to n). A key k selects a permutation among the (2^n)! possible permutations of B. A block cipher is deemed secure as long as it is computationally indistinguishable from a permutation which has been chosen uniformly and randomly among the (2^n)! possible permutations. To model that, imagine a situation where an attacker is given access to two black boxes, one implementing the block cipher with a key that the attacker does not know, and the other being a truly random permutation. The goal of the attacker is to tell which is which. He can have each box encrypt or decrypt whatever data he wishes. One possible attack is to try all possible keys (there are 2^m such keys) until one is found which yields the same values as one of the boxes; this has average cost 2^(m-1) invocations of the cipher. A secure block cipher is one such that this generic attack is the best possible attack. The AES is defined over 128-bit blocks (n = 128) and 128-, 192- and 256-bit keys. A hash function is a single, fully defined, computable function which takes as input bit sequences of arbitrary length, and outputs values of a fixed length r (e.g. r = 256 bits for SHA-256).
There is no key, no family of functions, just a unique function which anybody can compute. A hash function h is deemed secure if: It is computationally infeasible to find preimages: given an r-bit value x, it is not feasible to find m such that h(m) = x. It is computationally infeasible to find second preimages: given m, it is not feasible to find m' distinct from m, such that h(m) = h(m'). It is computationally infeasible to find collisions: it is not feasible to find m and m', distinct from each other, such that h(m) = h(m'). There are generic attacks which can find preimages, second preimages or collisions, with costs, respectively, 2^r, 2^r, and 2^(r/2). So actual security can be reached only if r is large enough so that 2^(r/2) is an overwhelmingly huge cost. In practice, this means that r = 128 (a 128-bit hash function such as MD5) is not enough . In an informal way, it is good if the hash function "looks like" it has been chosen randomly and uniformly among the possible functions which accept the same inputs. But this is an ill-defined property since we are talking about a unique function (probabilities are always implicitly about averages and repeated experiments; you cannot really have probabilities with one single function). Also, being a random function is not exactly the same as being resistant to collisions and preimages; this is the debate over the Random Oracle Model . Nevertheless, it is possible to build a hash function out of a block cipher. This is what the Merkle-Damgård construction does. This entails using the input message as the key of the block cipher; so the block cipher is not used at all as it was meant to be. With AES, this proves disappointing: It results in a hash function with a 128-bit output, which is too small for security against technology available in 2011. The security of the hash function then relies on the absence of related-key attacks on the block cipher. Related-key attacks do not really have any practical significance on a block cipher when used for encryption; hence, AES was not designed to resist such attacks, and, indeed, AES has a few weaknesses in that respect -- not a worry for encryption, but a big worry if AES is to be used in a Merkle-Damgård construction. The performance will not be good. The Whirlpool hash function is a design which builds on a block cipher inspired from the AES -- not the real one. That block cipher has a much improved (and heavier) key schedule, which resists related-key attacks and makes it usable as the core of a hash function. Also, that block cipher works on 512-bit blocks, not 128-bit blocks. Whirlpool is believed secure. Whirlpool is known to be very slow, so nobody uses it. Some more recent hash function designs have attempted to reuse parts of the AES -- to be precise, to use an internal operation which maps well onto the AES-NI instructions which recent Intel and AMD processors feature. See for instance ECHO and SHAvite-3 ; these two functions both received quite a bit of exposure as part of the SHA-3 competition and are believed "reasonably secure". They are very fast on recent Intel and AMD processors. On other, weaker architectures, where hash function performance has some chance to actually matter, these functions are quite slow. There are other constructions which can make a hash function out of a block cipher, e.g. the one used in Skein ; but they also tend to require larger blocks than what the AES is defined over.
Summary: not only are block ciphers and hash functions quite different; but the idea of building a hash function out of the AES turns out to be of questionable validity. It is not easy, and the limited AES block size is the main hindrance. | {
"source": [
"https://security.stackexchange.com/questions/8048",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5133/"
]
} |
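To make the block-cipher-versus-hash discussion above concrete, here is a toy sketch of a Merkle-Damgård hash whose compression function is the classic Davies-Meyer construction over AES-128, i.e. H_i = AES_{m_i}(H_{i-1}) XOR H_{i-1}. It is for illustration only and exhibits exactly the limitations the answer describes (128-bit output, reliance on related-key resistance); it assumes a recent version of the third-party cryptography package.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block/key size in bytes, so the digest is only 128 bits

def _compress(h: bytes, m: bytes) -> bytes:
    # Davies-Meyer: encrypt the chaining value under the message block as key,
    # then XOR the chaining value back in.
    enc = Cipher(algorithms.AES(m), modes.ECB()).encryptor()
    c = enc.update(h) + enc.finalize()
    return bytes(a ^ b for a, b in zip(c, h))

def toy_aes_hash(message: bytes) -> bytes:
    # Naive padding plus a length block; a real design needs proper
    # Merkle-Damgård strengthening and a vetted padding rule.
    data = message + b"\x80"
    data += b"\x00" * (-len(data) % BLOCK)
    data += len(message).to_bytes(BLOCK, "big")
    h = b"\x00" * BLOCK  # fixed IV
    for i in range(0, len(data), BLOCK):
        h = _compress(h, data[i:i + BLOCK])
    return h

print(toy_aes_hash(b"hello world").hex())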
8,110 | Let's say I sign a SSL certificate for myself, and I'm not using a certified CA. What are the risks and/or threats of doing it? | The risks are for the client. The point of the SSL server certificate is that it is used by the client to know the server public key, with some level of guarantee that the key indeed belongs to the intended server. The guarantee comes from the CA: the CA is supposed to perform extensive verification of the requester identity before issuing the certificate. When a client (the user and his Web browser) "accepts" a certificate which has not been issued by one of the CA that the client trusts (the CA which were embedded in Windows by Microsoft), then the risk is that the client is currently talking to a fake server, i.e. is under attack. Note that passive attacks (the attacker observes the data but does not alter it in any way) are thwarted by SSL regardless of whether the CA certificate was issued by a mainstream CA or not. On a general basis, you do not want to train your users to ignore the scary security warning from the browser, because this makes them vulnerable to such server impersonation attacks (which are not that hard to mount, e.g. with DNS poisoning ). On the other hand, if you can confirm, through some other way, that the certificate is genuine that one time , then the browser will remember the certificate and will not show warnings for subsequent visits as long as the same self-signed certificate is used. The newly proposed Convergence PKI is an extension of this principle. Note that this "remembered certificate" holds as long as the certificate is unchanged, so you really want to set the expiry date of your self-signed certificate in the far future (but not beyond 2038 if you want to avoid interoperability issues ). It shall be noted that since a self-signed certificate is not "managed" by a CA, there is no possible revocation. If an attacker steals your private key, you permanently lose, whereas CA-issued certificates still have the theoretical safety net of revocation (a way for the CA to declare that a given certificate is rotten). In practice, current Web browser do not check revocation status anyway. | {
"source": [
"https://security.stackexchange.com/questions/8110",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5425/"
]
} |
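The "remembered certificate" idea above can also be applied outside the browser: fetch the server's certificate and compare its SHA-256 fingerprint with one confirmed earlier over another channel (for example read out over the phone). The host name and pinned value below are placeholders.

import hashlib
import ssl

HOST = "selfsigned.example.com"
PINNED_SHA256 = "replace-with-the-fingerprint-you-verified-out-of-band"

pem = ssl.get_server_certificate((HOST, 443))  # works even for self-signed certificates
der = ssl.PEM_cert_to_DER_cert(pem)
fingerprint = hashlib.sha256(der).hexdigest()

if fingerprint == PINNED_SHA256:
    print("certificate matches the pinned fingerprint")
else:
    print("WARNING: fingerprint changed:", fingerprint)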
8,113 | I wanted to know if it's generally possible to inject executable code into files like PDFs or JPEGs etc., or must there be some kind of security hole in the application? And if so, how would one do that? I often hear that people get infected by opening PDFs that contain malicious code, that's why I ask. | There must be some security hole in the application. Think of a very simple and common .txt file: if you open it with a hex viewer, or with a well-designed text editor, it should only display the file content, and that's it. Then think about an application processing the file somehow, instead of just showing the contents. For example, reading the file and interpreting its values. If that isn't done correctly, it could lead to execution of the bytes that are inside the file. For example: you have designed your app to load the whole file and show it, but somehow you have a variable inside your program that only holds 256 bytes. This could make you read (and write to memory) more bytes than your app expected. And imagine that inside your app there is a command to jump to position NNNN in memory and execute what is there; since that memory position was written with data your program didn't expect, you'll execute some code that shouldn't be there, and was loaded from your file... That is a buffer overflow attack. The same could happen with PDF, JPG, MP3, etc., if the app doesn't load the data correctly. Another possibility: for some other reason, the app (or some DLL it loads to read your data) executes some part of the data, instead of reading it. If you know which command (or data) would trigger this behavior, you can put those commands inside the data file (like the PDF file) so that the app executes them. PDF virus : read this page, http://lwn.net/2001/0809/a/adobe-pdf-vul.php3 , to learn a bit about one virus that spread using PDF files.
"source": [
"https://security.stackexchange.com/questions/8113",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4450/"
]
} |
8,145 | There is a desktop client A connecting to website W over an HTTPS connection: A --> W. Somehow between A and W, there is a proxy G: A --> G --> W. In this case, will G be able to get the certificate which A
previously got from W? If G can get the certificate, does that mean that G will be able to decrypt the data? | How does HTTPS work? HTTPS is based on public/private-key cryptography . This basically means that there is a key pair: the public key is used for encryption and the secret private key is required for decryption. A certificate is basically a public key with a label identifying the owner. So when your browser connects to an HTTPS server, the server will answer with its certificate. The browser checks if the certificate is valid : the owner information needs to match the server name that the user requested, and the certificate needs to be signed by a trusted certification authority. If one of these conditions is not met, the user is informed about the problem. After the verification, the browser extracts the public key and uses it to encrypt some information before sending it to the server. The server can decrypt it because the server has the matching private key . How does HTTPS prevent man-in-the-middle attacks? In this case, will G be able to get the certificate which A previously got from W? Yes, the certificate is the public key with the label. The webserver will send it to anyone who connects to it. If G can get the certificate, does that mean that G will be able to decrypt the data? No. The certificate contains the public key of the webserver . The malicious proxy is not in possession of the matching private key. So if the proxy forwards the real certificate to the client, it cannot decrypt information the client sends to the webserver. The proxy server may try to forge the certificate and provide its own public key instead. This will, however, destroy the signature of the certification authority . The browser will warn about the invalid certificate. Is there a way a proxy server can read HTTPS? If the administrator of your computer cooperates , it is possible for a proxy server to sniff HTTPS connections. This is used in some companies in order to scan for viruses and to enforce guidelines of acceptable use. A local certification authority is set up and the administrator tells your browser that this CA is trustworthy . The proxy server uses this CA to sign its forged certificates. Oh, and of course, users tend to click security warnings away. | {
"source": [
"https://security.stackexchange.com/questions/8145",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5444/"
]
} |
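A short sketch of what "the browser will warn about the invalid certificate" looks like programmatically: a default Python ssl context verifies the CA signature and the host name, so a proxy presenting a forged certificate that no locally trusted CA signed makes the handshake fail. The host name is a placeholder.

import socket
import ssl

HOST = "www.example.com"
context = ssl.create_default_context()  # CERT_REQUIRED plus host name checking

try:
    with socket.create_connection((HOST, 443)) as raw:
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            print("verified chain, peer subject:", tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as err:
    # This is what an interception proxy without a locally trusted CA triggers.
    print("handshake rejected:", err)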
8,164 | Why are hand-written signatures still so commonly used? Can they actually prove anything? Two assumptions: If anyone wants to forge my signature I'm sure they will be able to do it. Even my own signature looks a little bit different every time I sign a document. If I commit to an agreement by signing a contract not with my typical signature but using a new random signature (maybe even using my left hand), I could just claim I didn't sign it and the best forensics will probably have to agree, because my "real" signature is completely different. On the other hand, why are digital signatures not more popular? Just because non-tech savvy people don't know how to use them? | It pays to investigate what we really trust in hand-written signatures. A signature is the physical manifestation of the will of the signer to acknowledge the contents of what is signed. Most legal systems define that a signature is yours and is binding if and only if "you really did it". This looks like a tautology, but it actually is quite profound: the hardness of forging, or even the involvement of a physical hand and pen, are not part of what defines a signature. So what's the trick ? At the core of the trust system is the set of laws which severely punish forgery: forging an hand-written signature is an offense which can land you in jail for much more time than whatever you signed. The idea is that a hand-written signature happens "in the physical world" where it leaves many traces, in particular witnesses. The risk of being caught forging a signature makes it "not worth it". The signature medium is not really important; typing your name at the end of an email is as much binding as an ink-based handcrafted smudge at the bottom of a piece of paper (at least in England; there are variations depending on the country). In Japan they use personalized stamps. The system works as long as forging signatures remains risky. When translating into the digital world, signatures become too easy to forge without any trace, which is why cryptography must be invoked. Cryptographic signatures also open the possibility of automation: being able to sign and verify at lightning speed (the verifying part is a novelty: with hand-written signatures, verification that the signature is legit is not a power given to just anybody). The hard part of designing a signature scheme remains the set of laws which make the link between the action of signing, and the legal consequences thereof (namely, the "binding" part). Technicalities such as length of a RSA key are the easy part, which can be done by mere scientists -- but laws take decades and an awful lot of negotiation. Such laws exist for hand-written signatures; actually, they have existed for thousands of years. Digital signatures will begin to compete with hand-written signatures only when legal systems will be up to it. Europe is currently trying to do that, but it takes time. | {
"source": [
"https://security.stackexchange.com/questions/8164",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5455/"
]
} |
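For contrast with hand-written signatures, here is a minimal sketch of the cryptographic signing and automated verification mentioned above, using Ed25519 from the third-party cryptography package. The key pair and message are throwaway examples, and nothing here addresses the legal-binding part of the answer.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

contract = b"I agree to the terms of the engagement letter dated 2011-10-01."
signature = private_key.sign(contract)

try:
    public_key.verify(signature, contract)  # anyone holding the public key can check this
    print("signature is valid")
except InvalidSignature:
    print("signature or document was tampered with")

try:
    # Changing a single byte of the document invalidates the signature.
    public_key.verify(signature, contract.replace(b"2011", b"2012"))
except InvalidSignature:
    print("altered document rejected")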
8,174 | Can someone explain why the BEAST attack wasn't considered plausible? I saw an article quoting the creator as saying 'It is worth noting that the vulnerability that BEAST exploits has been presented since the very first version of SSL. Most people in the crypto and security community have concluded that it is non-exploitable' (http://threatpost.com/en_us/blogs/new-attack-breaks-confidentiality-model-ssl-allows-theft-encrypted-cookies-091911) and several other articles mentioned that the attack was previously though implausible but I don't know why. | The attack requires cooperation between an outer component (which can intercept traffic) and an inner component which runs on the attacked machine and is able to inject arbitrary data (chosen by the attacker) within the SSL tunnel, along with the piece of data which is to be decrypted. The general view among most people in the crypto and security communities is that when the attacker can do that, he has enough control on the attacked machine that he can be considered to have already won . A fix was nonetheless published in TLS 1.1 (published in 2006) and ulterior versions. It so happens that with 2011's Internet, there can be a considerable amount of hostile code running on the user's system, through Java and/or Javascript, and such code has considerable power over what happens on said system. This is the kind of über-attacker that was envisioned and declared as "not plausible in practice" back in 2006. Note that BEAST is as yet unpublished except through some slides, so while it has run successfully in lab conditions, it is unclear whether it would be worth the effort to build it in the wild. After all, it seems that nobody ever bothered decrypting 40-bit SSL connections on a regular, industrial basis, despite the computational ease of doing so (that "nobody" is about attackers who are after credit card numbers and banking access passwords, not governmental security agencies -- if your country's secret service does not routinely decrypt 40-bit SSL, then you are entitled to question what the heck they do with your tax money). One way to see it is that crypto and security researchers, and/or Web browser vendors, failed to envision the evolution of the Web architecture. Another way to see it is that browser vendors are hard at work building a Web structure which is, security-wise, doomed from start. | {
"source": [
"https://security.stackexchange.com/questions/8174",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5461/"
]
} |
8,210 | In a comment on this answer , AviD says: "There are numerous security issues with wildcard SSL certs." So, what are the problems? I understand that the same private key is being used in multiple contexts, but given that I could host all of my applications under the same host name I don't see this as a 'new' issue introduced by wildcard certificates. | A "wildcard certificate" is a certificate which contains, as possible server name, a name which contains a " * " character. Details are in RFC 2818, section 3.1 . The bottom-line: when the server certificate contains *.example.com , it will be accepted by clients as a valid certificate for any server whose apparent name matches that name. In the certification business for Web sites, there are four main actors: The SSL server itself. The vendor of the Web browser which the client will use. The human user, who controls to some extent what the client browser will do. The CA who issued the certificate to the server. Wildcard certificates don't imply extra vulnerabilities for the SSL server; indeed, the SSL server has no interest in looking at its own certificate. That certificate is for the benefits of clients, to convince them that the public key contained in the certificate is indeed the public key of the genuine SSL server. The SSL server knows its own public/private key pair and does not need to be convinced about it. The human user has no idea what a public key is. What he sees is a padlock icon and, more importantly, the intended server name : that's the name at the right of " https:// " and before the next " / ". The Web browser is supposed to handle the technical details of verifying that the name is right, i.e. validation of the server certificate, and verification that the name matches that which is written in the said certificate. If the browser does not do this job, then it will be viewed as sloppy and not assuming its role, which can have serious commercial consequences, possibly even legal. Similarly, the CA is contractually bound to follow defined procedures for identifying SSL server owners so that fake certificates will be hard to obtain for attackers (contract is between the CA and its über-CA, recursively, up to the root CA which is itself bound by a pact with the OS or browser vendor, who accepted to include the root CA key in the OS or browser under defined conditions). What this amounts to, is that the browser and the CA must, in practice, pamper the user through the verification process. They are more or less under obligation (by law or, even stricter, by business considerations) to prevent the user from being swindled through fake sites which look legit. The boundary between the user's job and the browser/CA job is not clearly defined, and has historically moved. In Days of Yore, I mean ten years ago or so, browsers just printed out the raw URL, and it was up to the human user to find the server name in it. This lead forged site operators (i.e. "phishing sites") to use URL which are technically valid, but misleading, like this one: https://www.paypal.com:[email protected]/confirm.html Since human users are, well, human , and most of them read left-to-right (most rich and gullible scam targets are still in Western countries), they will begin on the left, see www.paypal.com , stop at the colon sign ("too technical"), and be scammed. In reaction, browser vendors have acknowledged that the URL-parsing abilities of human users are not as good as was initially assumed, and thus recent browsers highlight the domain part. 
In the case above, this would be xcvhjvb.com , and certainly not anything with paypal.com in it. Now comes the part where wildcard certificates enter the game. If the owner of xcvhjvb.com buys a wildcard certificate containing " *.xcvhjvb.com ", then he can set up a phishing site called: https://PayPal-payment-interface-login-session.xcvhjvb.com/confirm.html which will be accepted by the browser (it matches the wildcard name), and is still likely to catch unwary users (and there are many...). This name could have been bought by the attacker without resorting to wildcards, but then the CA employees would have seen the name, with its obvious fraudulent intent (good CAs do a human validation of every request for certificates, or at least raise alerts for names which are very long and/or contain known bank names in them). Therefore, wildcard certificates decrease the effectiveness of fraud-containment measures on the CA side . This is like a blank signature from the CA. If wildcard-based phishing attempts become more commonplace, one can expect that one or several of the following measures will come into existence: browsers highlight only the parts of the domain name which match non-wildcard elements in the certificate; CAs require heavier paperwork and contracts for wildcard certificates (and these will be more expensive); browsers deactivate support for wildcard certificates altogether. I actually expect all three measures to be applied within the next few years. I could be totally wrong about it (that's the problem with predicting the future) but this is still my gut feeling. Nitpickingly, we can also point out that wildcard certificates are useful for sharing the same key pair between different server names , which makes it more probable that the private key will be shared between different server machines . Traveling private keys are a security risk in their own right; the more a private key wanders around, the less "private" it remains. | {
"source": [
"https://security.stackexchange.com/questions/8210",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
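A simplified sketch of the RFC 2818 matching rule discussed above, showing why "*.xcvhjvb.com" covers the long phishing host name while leaving paypal.com untouched. Real validators handle more cases (internationalised names, partial-label wildcards, public-suffix policies), so treat this as an illustration only.

def wildcard_matches(pattern: str, hostname: str) -> bool:
    # "*" may stand in for exactly one (left-most) DNS label.
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_matches("*.xcvhjvb.com",
                       "paypal-payment-interface-login-session.xcvhjvb.com"))  # True
print(wildcard_matches("*.xcvhjvb.com", "www.paypal.com"))                     # False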
8,245 | If I encrypt a file for multiple users, how does the file size change? Does the size of the output double for two users? How about 10 or 100 users? | GPG encrypts the file once with a symmetric key, then places a header identifying the target keypair and an encrypted version of the symmetric key. The intricate details of that are defined in section 5.1 of RFC 2440 . When encrypting to multiple recipients, this header is placed multiple times, providing a uniquely encrypted version of the same symmetric key for each recipient. Thus, file size growth for each recipient is small and roughly linear. Some variation may exist for key length and padding, so the exact overhead differs for different key sizes and algorithms and is not perfectly predictable, but it's small. In a quick test demonstration using no compression: 11,676,179 source
11,676,785 encrypted-to-one (+606 bytes)
11,677,056 encrypted-to-two (+277 bytes)
11,677,329 encrypted-to-three (+273 bytes) | {
"source": [
"https://security.stackexchange.com/questions/8245",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5518/"
]
} |
8,263 | I once saw JavaScript code which was written using only multiple brackets () . Does anybody remember this kind of code? There was also an online converter to convert "normal" JS into this style of code. I want to show it during a presentation as an example of state-of-the-art XSS. Or what are other really surprising XSS examples? | Found it: http://sla.ckers.org/forum/read.php?24,33349 http://security.bleurgh.net/javascript-without-letters-or-numbers http://sla.ckers.org/forum/read.php?24,28687 Converter to convert normal JS into brackets-only JS: http://utf-8.jp/public/jjencode.html | {
"source": [
"https://security.stackexchange.com/questions/8263",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2829/"
]
} |
8,264 | I can't really fully understand what same origin domain means. I know it means that when getting a resource from another domain (say a JS file) it will run from the context of the domain that serves it (like Google Analytics code), which means it can't modify the data or read the data on the domain that "includes the resource". So if domain a.com is embedding a js file from google.com in its source, that js will run from google.com and it can't access the DOM\cookies\any other element on a.com -- am I right? Here is a definition for the same origin policy which I can't really understand: The same-origin policy is a key mechanism implemented within browsers
that is designed to keep content that came from different origins from
interfering with each other. Basically, content received from one
website is allowed to read and modify other content received from the
same site but is not allowed to access content received from other
sites. What does that really mean? Can you please give me a real life example? Another question is: what is the purpose of Origin header and how do cross domain requests still exist? Why doesn't it influence the security or the same origin policy? | Why is the same origin policy important? Assume you are logged into Facebook and visit a malicious website in another browser tab. Without the same origin policy JavaScript on that website could do anything to your Facebook account that you are allowed to do. For example read private messages, post status updates, analyse the HTML DOM-tree after you entered your password before submitting the form. But of course Facebook wants to use JavaScript to enhance the user experience. So it is important that the browser can detect that this JavaScript is trusted to access Facebook resources. That's where the same origin policy comes into play: If the JavaScript is included from a HTML page on facebook.com, it may access facebook.com resources. Now replace Facebook with your online banking website, and it will be obvious that this is an issue. What is the origin? I can't really fully understand what same origin domain means. I know it means that when getting a resource from another domain (say a JS file) it will run from the context of the domain that serves it (like google analytics code), which means it can't modify the data or read the data on the domain that "includes the resource". This is not correct: The origin of a JavaScript file is defined by the domain of the HTML page which includes it. So if you include the Google Analytics code with a <script>-tag, it can do anything to your website but does not have same origin permissions on the Google website. How does cross domain communication work? The same origin policy is not enforced for all requests. Among others the <script>- and <img>-tags may fetch resources from any domain. Posting forms and linking to other domains is possible, too. Frames and iframes way display information from other domains but interaction with that content is limited. There are some approaches to allow XMLHttpRequest (ajax) calls to other domains in a secure way, but they are not well supported by common browsers. The common way to enable communication with another domain is JSONP : It is based on a <script>-tag. The information, which shall be sent to another domain, is encoded in the URL as parameters. The returned JavaScript consists of a function call with the requested information as parameter: <script type="text/javascript" src="http://example.com/
?some-variable=some-data&jsonp=parseResponse">
</script> The dynamically generated JavaScript from example.com may look like: parseResponse({"variable": "value", "variable2": "value2"}) What is Cross Site Scripting? Cross Site Scripting is a vulnerability that allows an attacker to inject JavaScript code into a website, so that it originates from the attacked website from the browser point of view. This can happen if user input is not sufficiently sanitised. For example a search function may display the string "Your search results for [userinput]". If [userinput] is not escaped an attacker may search for: <script>alert(document.cookie)</script> The browser has no way to detect that this code was not provided by the website owner, so it will execute it. Nowadays cross site scripting is a major issue, so there is work done to prevent this vulnerability. Most notable is the Content Security Policy approach. | {
"source": [
"https://security.stackexchange.com/questions/8264",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3905/"
]
} |
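The missing step in the search example above is output escaping. A minimal sketch with the Python standard library follows; real applications should also rely on templating that escapes by default, plus a Content Security Policy.

import html

userinput = "<script>alert(document.cookie)</script>"
page = "Your search results for " + html.escape(userinput)
print(page)
# Your search results for &lt;script&gt;alert(document.cookie)&lt;/script&gt;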
8,307 | How secure is the data in an encrypted NTFS folder on Windows (XP, 7)? (The encryption option under file|folder -> properties -> advanced -> encrypt.) If the user uses a decent password, can this data be decrypted (easily?) if it, say, resides on a laptop that is stolen? | How secure is the data in an encrypted NTFS folder on Windows (XP, 7)? What is EFS? Folders on NTFS are encrypted with a specialized subset of NTFS called the Encrypting File System (EFS). EFS is file-level encryption within NTFS. The folder is actually a specialized type of file which applies the same key to all files within the folder. NTFS on-disk format 3.1 was released with Windows XP; Windows 7 uses the same on-disk format. However, the NTFS driver has gone from 5.1 on Windows XP to 6.1 on Windows 7. The bits on the disk have not changed, but the protocol for processing the bits to and from the disk has added features in Windows 7. What algorithm does it use? Windows XP (no service pack): DES-X (default), Triple DES (available). Windows XP SP1 - Windows Server 2008: AES-256 symmetric (default), DES-X (available), Triple DES (available). Windows 7, Windows Server 2008 R2: "mixed-mode" operation of ECC and RSA algorithms. What key size does it use? Windows XP and Windows 2003: 1024-bits. Windows Server 2003: 1024-bits (default), 2048-bits, 4096-bits, 8192-bits, 16384-bits. Windows Server 2008: 2048-bits (default), 1024-bits, 4096-bits, 8192-bits, 16384-bits. Windows 7, Windows Server 2008 R2 for ECC: 256-bit (default), 384-bit, 512-bit. Windows 7, Windows Server 2008 R2 for AES, DES-X, Triple DES: RSA 1024-bits (default), 2048-bits, 4096-bits, 8192-bits, 16384-bits. How is the encryption key protected? The File Encryption Key (FEK) is encrypted with the user's RSA public key and attached to the encrypted file. How is the user's RSA private key protected? The user's RSA private key is encrypted using a hash of the user's NTLM password hash plus the user name. How is the user's password protected? The user's password is hashed and stored in the SAM file. So, if an attacker can get a copy of the SAM file they may be able to discover the user's password with a rainbow table attack. Given the username and password, an attacker can decrypt the RSA private key. With the RSA private key, the attacker can decrypt any FEK stored with any encrypted file and decrypt the file. So... The contents of the encrypted folder are as secure as the user's password. If the user uses a decent password, can this data be decrypted (easily?) if it, say, resides on a laptop that is stolen? Probably not by an adversary with a typical personal computer. However, given sufficient resources, like a GPU or FPGA password cracking system, EFS data may be vulnerable within a short period. A random 12-character (upper, lower and symbol) password may hold out for weeks or months against a password cracking system. See "Power of Graphics Processing Units May Threaten Password Security". A significantly longer password may hold out for years or decades. | {
"source": [
"https://security.stackexchange.com/questions/8307",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3785/"
]
} |
8,323 | Is it legal to store/log mistyped passwords? How many of you have seen this happen in a log file or DB? | I don't think that "legal" is the right term to use. It's not wise: a lot of the time the "right" password is only one letter different from the "wrong" password (a typo, capital letters, …).
So if somebody evil gets hold of this log, he may easily guess the correct password. The other problem is that people re-use passwords, so they use the same password for your site/gmail/facebook/bank.
So even if your site doesn't have sensitive information about users, it's very possible that getting a user's credentials from your site will let a hacker access that user's other accounts (email/CC/bank). And you don't want to be the source of something like that. | {
"source": [
"https://security.stackexchange.com/questions/8323",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4580/"
]
} |
8,394 | What are the security implications of an expired SSL certificate? For example, if an SSL certificate from a trusted CA has expired, will the communication channel continue to remain secure? | The communication is still encrypted, but the trust mechanism is undermined. But usually the most important factor is that users will get ugly warning messages about the security of your site. Most won't make informed judgements about the integrity of the connection; they'll just go buy stuff elsewhere. | {
"source": [
"https://security.stackexchange.com/questions/8394",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5412/"
]
} |
8,417 | I'm a decent programmer, fluent in several languages. Python, Ruby, JavaScript, Haskell, and Scheme are my favorites. I'm currently adding Perl to the mix. I haven't done much "low-level" programming. I've screwed around at the logic gate level and built a few basic chips, but I've never programmed in assembly. While I can program in C, I don't do so often enough to be comfortable with the language. By day, I'm a web developer, so I'm familiar with some of the basic exploits that are used to break websites. With my "qualifications" out of the way, I'm looking for some good hacking resources for someone in my position. I'm not looking to get certified or anything, I'd just like to be able to hack. I do all of the exercises in SICP every year... Is there a "SICP like" book for security? Any other resources would be appreciated. Also, if anyone could suggest an appropriate "curriculum" for someone in my shoes, it would be appreciated. Edit: I should make this question more specific. I'd like to learn the tools and techniques of the trade that "actual" hackers use. I'm not particularly interested in "writing secure code." I figure I should be able to figure out how to do that if I can hack. | When you say "Hack" I'm personally wondering what sort of hacking you mean - it's a fairly varied skill with many different interpretations. Firstly, by far and away the biggest domain going forward in security will be web. That means SQL injection, javascript bugs, browser bugs, studies of authentication schemes etc. So as others have mentioned, OWASP and the like are fantastic resources. Not my particular favourite area though, so here's my guide to "things to know" if you want to start looking for vulnerabilities in compiled code on operating systems. One of the first things you'll begin to realise is that there is a lot to know - you are not going to become an uber cool h@x0r!11!! overnight, or actually if you do nothing else for the next 6 months but read up on all of this, top to bottom. Programming knowledge You need to know assembly . Contrary to popular belief it is not that hard - most assembly takes the form instruction register, register and translates directly to machine code. You need to know an assembler, such as NASM , YASM or GNU AS . There are two different syntaxes in assembler - AT&T and Intel. They're not that far apart. You need to know your processor's instruction set. The Intel Software Manuals for IA-32 and Intel 64 are a great resource and explain every instruction you could ever need. AMD publish equivalents; since both AMD processors and Intel processors use the same x86 instruction sets, there are, mostly, a lot of commonalities. Knowing a debugger and a disassembler will help you. GDB is the canonical debugger used on Linux platforms, WinDBG is one such debugger from windows. Other people like OllyDbg . In terms of disassemblers, many Linux ones are powered by objdump . I personally like objconv from this author . The tool in the field of disassemblers is IDA Pro , which provides much more than just disassembly. Have an understanding of how to use hex notation and the various word sizes. Program Internals knowledge Know C. Know C++. Know the difference between C/C++; for example what effect a template keyword has. Know what name mangling is and why it exists. Know what C cannot do which assembly can. Understand how your CPU represents data and which C types most closely represent a register. 
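To make the "how your CPU represents data" point concrete, here is a minimal, hypothetical Python sketch (it only assumes the standard struct module; the value chosen is arbitrary) showing how a 32-bit quantity is laid out in memory:
import struct

value = 0xDEADBEEF
# A little-endian CPU stores the least significant byte first.
raw = struct.pack("<I", value)
print(raw.hex())                         # efbeadde
# Reinterpreting the same four bytes as big-endian yields a different number.
print(hex(struct.unpack(">I", raw)[0]))  # 0xefbeadde
Comparing output like this against a debugger's memory view is a quick way to get comfortable with word sizes and endianness before reading disassembly.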
Binary file formats: knowing how executables work is very important, especially what formats they exist in and how to manipulate them. You don't need to know the internals of the ELF, COFF or PE formats unless there's a really hairy exploit going on, but knowing how to extract symbols from these files and how data is laid out in them generally will give you an advantage. Shared object loading. Understand how shared objects or DLLs work and load. Understand the basic run time constructions of your program. Where is the "heap"? Where is the stack, what is on it and how does it work? Where is your vtable and how does that work? Program Internals for other languages Nobody said you'd be working with C/C++ programs, although if you're looking at traditional, OS software it probably is C/C++. That said, you may be interested in python or jvm bytecode and how each of these runtimes work. Or perhaps the CLR. Program environment - Operating Systems I Basics. You should know that there are permissions and what they mean. You should know who the administrator account is, what the default is and how you acquire administrator access normally (e.g. root vs Administrator group). Program environment. You should know where and how config for programs and the OS is laid out. You should know how services run and as what user. How do programs run automatically/on boot? How is media loading handled? System subversion - Operating Systems II Know the difference between the various CPU Rings . Know how drivers are loaded into the operating system and what they can do. Have an accurate idea of how the OS handles permissions, resources, filesystems etc. Better if you know exactly how. Know about mechanisms for intercepting system actions. Understand how to perform shared object injection on your system ( 1 , 2 ). Know how various subsystems in both user and kernel space work and how to attack programs over these subsystems. See here for OS internals . Networking & Network Services Knowing something about networking is important. What does a packet
look like? What is the OSI stack? How do you inspect packets and
network traffic; how do you connect to a network from your system? Know common network services for your target platform, most commonly HTTP, SSH. Understand how they can be exploited. Common exploits and their defences Know what smashing the stack is, ret2libc . Know what ASLR and stack canaries are. Know about vptr exploits . Analysis tools Know about fuzzing , static analysis and possibly instrumentation . Know some tools for achieving this, like pintool , splint and peachfuzzer . This list is probably incomplete and reflects a set of knowledge that should be helpful for understanding how reverse engineering works and how you go from there to finding vulnerabilities and exploiting them. I do not, myself, know absolutely everything in the above list inside out and I've been researching this stuff for a while. As I said, becoming good at this takes a lot of time, dedication and patience. Once here, you can start to study applications and how they process data and begin to work out how exploits against them work, such as PDF/Flash/Java vulnerabilities. | {
"source": [
"https://security.stackexchange.com/questions/8417",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
8,476 | How easily could someone crack my keepass .kdbx file if that person steals the file but never obtains the Master Password? Is this a serious threat, or would a brute force attack require massive computing time? Assume a password more than 10 characters long with randomly distributed characters of the set including all letters, numbers and most non-alphanumeric keyboard symbols. | KeePass uses a custom password derivation process which includes multiple iterations of symmetric encryption with a random key (which then serves as salt), as explained there. The default number of iterations is 6000, so that's 12000 AES invocations for processing one password (encryption is done on a 256-bit value, AES uses 128-bit blocks, so there must be two AES invocations at least for each round). With a recent quad-core PC (those with the spiffy AES instructions), you should be able to test about 32000 potential passwords per second. With ten random characters chosen uniformly among the hundred-or-so characters which can be typed on a keyboard, there are 10^20 potential passwords, and brute force will, on average, try half of them. You're in for 10^20 * 0.5 / 32000 seconds, also known as 50 million years. But with two PCs that's only 25 million years. This assumes that the password derivation process is not flawed in some way. In "custom password derivation process", the "custom" is a scary word. Also, the number of iterations is configurable (6000 is only the default value). | {
"source": [
"https://security.stackexchange.com/questions/8476",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5372/"
]
} |
8,529 | The OpenSSL website provides a long list of different ciphers available for SSL and TLS.
My question is, which of those ciphers can be considered secure nowadays? I am especially interested in HTTPS, if this should matter, although I guess it doesn't. I am aware of the Apache Recommendation to use SSLCipherSuite HIGH:MEDIUM and agree that this is best practice. What I am looking for is an official standard or a recent paper from an accepted and recognized source like a well-known security organization. If such a paper exists including estimates on how long certain ciphers with specific key lengths will be considered secure, this would be even better. Does such a thing exist? | The cipher suites with a "NULL" do not offer data encryption, only an integrity check. This means "not secure" for most usages. The cipher suites with "EXPORT" are, by design, weak. They are encrypted, but only with keys small enough to be cracked with even amateur hardware (say, a basic home PC -- symmetric encryption relying on 40-bit keys). These suites were defined to comply with the US export rules on cryptographic systems, rules which were quite strict before 2000. Nowadays, these restrictions have been lifted and there is little point in supporting the "EXPORT" cipher suites. The cipher suites with "DES" (not "3DES") rely for symmetric encryption on DES, an old block cipher which uses a 56-bit key (technically, it uses a 64-bit key, but it ignores 8 of those bits, so the effective key size is 56 bits). A 56-bit key is crackable, albeit not in five minutes with a PC. Deep crack was a special-purpose machine built in 1998 for about $250,000, and could crack a 56-bit DES key within 4.5 days on average. Technology has progressed, and this can be reproduced with a few dozen FPGAs. Still not off-the-shelf-at-Walmart hardware, but affordable by many individuals. All other cipher suites supported by OpenSSL are non-weak; if you have a problem with them, it will not be due to a cryptographic weakness in the algorithms themselves. You may want to avoid cipher suites which feature "MD5", not because of an actual known weakness, but for public relations. MD5, as a hash function, is "broken" because we can efficiently find many collisions for that function. This is not a problem for MD5 as it is used in SSL; yet, that's enough for MD5 to have a bad reputation, and you are better off avoiding it. Note that the cipher suite does not enforce anything on the size of the server key (the public key in the server certificate), which must be large enough to provide adequate robustness (for RSA or DSS, go for 1024 bits at least, 1536 bits being better -- but do not push it too much, because computational overhead rises sharply with key size). NIST, a US federal organization which is as accepted and well-known as any security organization can possibly be, has published some recommendations (see especially the tables on pages 22 and 23); this is from 2005 but still valid today. Note that NIST operates on an "approved / not approved" basis: they do not claim in any way that algorithms which are "not approved" are weak in any way; only that they, as an organization, do not vouch for them. | {
"source": [
"https://security.stackexchange.com/questions/8529",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2147/"
]
} |
8,535 | There are at least three different common approaches to creating a threat model: Attacker-centric Software-centric Asset-centric You can take a look at Wikipedia for a quick overview. I wonder if one of those approaches has proven to be superior in general or if you have to choose them depending on the situation at hand. I have experience with all three of them and I personally prefer the attacker-centric approach because it feels straight forward and it fits my way of thinking. I can see the benefits of the asset-centric approach, especially if you want to see the business impact of certain threats directly. The software-centric approach feels clumsy and heavy-weight to me. If you want to drill in really deep and have a lot of time at hand for threat modeling it might be a good option though. These are just my personal experiences. Is there an approach that has has proven to be superior in general or do you have to choose them depending on the situation at hand? Is there a common understanding about that in the Security Community. | The cipher suites with a " NULL " do not offer data encryption, only integrity check. This means "not secure" for most usages. The cipher suites with " EXPORT " are, by design, weak. They are encrypted, but only with keys small enough to be cracked with even amateur hardware (say, a basic home PC -- symmetric encryption relying on 40-bit keys). These suites were defined to comply with the US export rules on cryptographic systems, rules which were quite strict before 2000. Nowadays, these restrictions have been lifted and there is little point in supporting the " EXPORT " cipher suites. The cipher suites with " DES " (not " 3DES ") rely for symmetric encryption on DES , an old block cipher which uses a 56-bit key ( technically , it uses a 64-bit key, but it ignores 8 of those bits, so the effective key size is 56 bits). A 56-bit key is crackable, albeit not in five minutes with a PC. Deep crack was a special-purpose machine built in 1998 for about 250,000 $, and could crack a 56-bit DES key within 4.5 days on average. Technology has progressed, and this can be reproduced with a few dozens FPGA . Still not off-the-shelf-at-Walmart hardware, but affordable by many individuals. All other cipher suites supported by OpenSSL are non-weak; if you have a problem with them, it will not be due to a cryptographic weakness in the algorithms themselves. You may want to avoid cipher suites which feature " MD5 ", not because of an actual known weakness, but for public relations. MD5 , as a hash function, is "broken" because we can efficiently find many collisions for that function. This is not a problem for MD5 as it is used in SSL; yet, that's enough for MD5 to have a bad reputation, and you are better avoiding it. Note that the cipher suite does not enforce anything on the size of the server key (the public key in the server certificate), which must be large enough to provide adequate robustness (for RSA or DSS, go for 1024 bits at least, 1536 bits being better -- but do not push it too much, because computational overhead raises sharply with key size). NIST , a US federal organization which is as accepted and well-known as any security organization can possibly be, has published some recommendations (see especially the tables on pages 22 and 23); this is from 2005 but still valid today. 
Note that NIST operates on an "approved / not approved" basis: they do not claim in any way that algorithms which are "not approved" are weak in any way; only that they, as an organization, do not vouch for them. | {
"source": [
"https://security.stackexchange.com/questions/8535",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2147/"
]
} |
8,540 | Is Facebook allowed to sell information about their users to other companies? For example, selling name, address and IP information on a specific geographic location could be very valuable information for competitive ISPs trying to win customers from each other. According to Facebook's privacy help page it says: While you are allowing us to use the information we receive about you, you always own all of your information. Your trust is important to us, which is why we don't share information we receive about you with others unless we have: received your permission; given you notice, such as by telling you about it in this policy; or removed your name or any other personally identifying information from it. By registering at Facebook, have I already given my permission? | There is a classic phrase: "If you are not paying for it, you're not the customer; you're the
product being sold" When companies are liquidated, they openly sell their user databases on the internet as one of the liquidated assets. Well, think of it this way: when Microsoft bought, say, Skype, it also bought Skype's user database; what is the sense/value of Skype, or Facebook, without their user database? Besides, online service companies, like Facebook, are functioning on the basis of license (terms of service, etc.) agreements with users, not contracts. That means that if one of the sides breaks it, this would constitute a violation of copyright law, not contract law. Update: Here is an excerpt from Hundreds of websites share usernames sans permission. Photobucket, Wall Street Journal, Home Depot take liberties with your personal info http://www.theregister.co.uk/2011/10/11/websites_share_usernames/ "Home Depot, The Wall Street Journal, Photobucket, and
hundreds of other websites share visitor's names, usernames, or other
personal information with advertisers or other third parties, often
without disclosing the practice in privacy policies, academic
researchers said. Sixty-one percent of websites tested by researchers from Stanford Law
School's Center for Internet and Society leaked the personal
information, sometimes to dozens of third-party partners. Home Depot,
for example, disclosed the first names and email addresses of visitors
who clicked on an ad to 13 companies. The Wall Street Journal divulged
to seven of its partners the email address of users who enter the
wrong password. And Photobucket handed over the usernames of those who
use the site to share images with their friends." Your phone company is selling your personal data (CNN, Nov 1, 2011) "Verizon (VZ, Fortune 500) is the first mobile provider to publicly confirm that it is actually selling information gleaned from its customers directly to businesses. But it's hardly alone in using data about its subscribers to make extra cash" Facebook is blurting out your private information no date but comments start on Oct, 2010, and the author regularly tweets this article (Nov, 2011) "... the moment you land into one of their [Facebook's] “trusted partners” sites, your personal information has just been given away" I could not resist visualizing the comment by Hendrik Brummermann from this answer here pointing to this image found on the web: as well as to answer that answer with a quote of the "Privacy Zuckering" definition: "The act of creating deliberately confusing jargon and user-interfaces which trick your users into sharing more info about themselves than they really want to." (As defined by the EFF). The term "Zuckering" was suggested in an EFF article by Tim Jones on Facebook's "Evil Interfaces". It is, of course, named after Facebook CEO Mark Zuckerberg | {
"source": [
"https://security.stackexchange.com/questions/8540",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/294/"
]
} |
8,548 | I have an android phone which, like many others, has quickly become unsupported and is not receiving any updates. At the same time there are publicly available exploits for privilege-escalation vulnerabilities, which are mainly used for legitimately rooting the phone; however, as far as I can see, there is nothing stopping an attacker from using these exploits to completely bypass the android permissions system. This is already done by the applications used for easy rooting of the device - they do not require any special permissions and are able to execute the exploits that give them full access to the system. It seems like the only thing stopping a normal-looking application in the market from bypassing all android restrictions and taking control of a device (which does not receive updates) is hoping that Google can catch all such applications and ban them from the market. This does not seem realistic to me. The other option is to run a custom ROM which often receives updates, assuming you trust the ROM developers and assuming that the ROM is fully compatible with the particular device. So, the questions are: Is this accurate, or am I missing something? And what is the best solution for somebody who would rather not deal with custom ROMs? | Yes, this is accurate. If your version of the Android OS has known privilege escalation vulnerabilities, there is nothing stopping a rogue application from exploiting a privilege escalation vulnerability and thus escaping the sandbox (i.e., gaining unrestricted access to your phone). This absence of security upgrades is a shortcoming of the Android ecosystem. The ecosystem is reliant upon handset manufacturers and carriers to continue providing security upgrades, but many handset manufacturers/carriers have declined to do so, for economic reasons. They treat the phones as disposable, and don't always show loyalty to older customers. Once the phone is a few years old, they stop providing upgrades and focus on the latest shiny models that are being sold, prioritizing selling new handsets over supporting past customers. This is not very eco-friendly and not particularly customer-friendly. I think it is unfortunate, but it appears to be a fact of life. And so it goes. There is an excellent analysis of this phenomenon by Michael DeGusta. Here is an infographic showing the results of his analysis: Credits: Michael DeGusta at The Understatement. Update (12/26/2012): Ars Technica has a nice overview of the situation with Android updates, a year later. Unfortunately, it's not pretty: things haven't gotten any better, and many Android phones are not receiving updates. The security risks remain. | {
"source": [
"https://security.stackexchange.com/questions/8548",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5670/"
]
} |
8,559 | In a large enterprise environment I have come across a deployment approach for Digital Certificates where each user is issued two (2) key pairs: One for signing documents, emails, etc. that is completely "personal" (perhaps kept only by him in an e.g. smart card) One for encryption. To avoid any situations of user unavailability, blackmail etc. encryption by this latter key pair can be circumvented by the key management system (using appropriate policies etc.) This approach is supposed to safeguard from an administrator signing as a user but I find certain usage scenarios making things complicated. E.g. how about sending signed and encrypted emails? Two public keys maintained for each user in the contact list? So, is this an overall preferred (and widely used) design? Or should we just use it in certain cases where prevention of impersonation is the highest priority? | In a sane organization, it is actually necessary to have two distinct keys, one for signing and one for encryption. When you receive some encrypted data (e.g. an encrypted email, as in S/MIME or PGP), you normally store the encrypted data (that's what happens by default for email). Therefore, if your private key becomes "unavailable", you cease to be able to read previously stored data: this is a data loss situation. "Unavailability" of the private key can take multiple forms, including hardware failure (your dog chews your smartcard to death) or "hardware" failure (the key holder is hit by a bus, or unceremoniously fired, and his successor should be able to read previously received business emails). To remove the risk of data loss through key loss, a backup of the private key must be stored somewhere (e.g. printed on paper, in a safe) (this is often called escrow). In short: encryption keys MUST be escrowed. Loss of a signature private key does not imply any kind of data loss. Signatures which were previously generated keep on being verifiable. Recovering after a signature key loss involves getting a new key, and that's all. So there is no strong need for key backup here. On the other hand, signatures are normally meant to have legal value (there is little point in requesting a signature if you cannot use it against the signer, should he later fail to follow through on his promises). The legal value is conditional on the impossibility for any individual other than the key owner to generate a signature; this does not mix well at all with an escrow on the key. Hence, a signature key MUST NOT be escrowed. Since a key cannot be both escrowed and non-escrowed simultaneously, you need two keys. | {
"source": [
"https://security.stackexchange.com/questions/8559",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1903/"
]
} |
8,583 | I have little security knowledge and looking at image hosting for a startup: Considering S3 doesn't allow you to set a cap on costs, how likely is it that someone could flood S3 with requests for my files and run up a considerable amount of money? Say I have a 2MB document, is it possible for someone to send millions of requests to that get file? From what I understand the costs for an end user requesting a file is: Request Pricing: GET and all other Requests † $0.01 per 10,000 requests Data Transfer Pricing: Up to 10 TB / month $0.120 per GB Does that pricing mean this is a non-issue? Does Amazon S3 have security measures in place to stop something like that happening? | Figure out the cost level where you start getting uncomfortable. Calculate the GB that would need to be transferred to reach that cost level. See if you think an attacker will be willing to spend that much effort to hurt you for that amount of money. Every time I run this calculation I end up thinking that if somebody hated me that much, they could certainly find an easier way to hurt me. For example, a million downloads of your 2MB document is going to cost you about $240 in data transfer charges plus $1 in request charges. To create this cost to you, the attacker is going to have to download 2,000 GB (2TB). That's weeks of completely filling up a 10Mbps line. Just for a measly $240 impact. Amazon generally doesn't discuss publicly all of the security measures they have in place to stop DOS, DDOS, and other attacks against their customers. In one whitepaper, Amazon says: "Proprietary DDoS mitigation techniques are used." Of course, it's not always easy to differentiate between a DDOS and a popular resource :-) You can read more about Amazon Web Services security on their site: http://aws.amazon.com/security/ | {
"source": [
"https://security.stackexchange.com/questions/8583",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5690/"
]
} |
8,587 | I've just read this question What is the corrupted image vulnerability? How does it work? (GIFAR, EXIF data with javascript, etc..) I'm asking myself how can I protect myself and my website's users. My users are allowed to upload their own images (e.g. forum avatars, pictures as part of a message), these pictures being displayed to all other visitors of the corresponding page. What can I do to be sure that an uploaded file is a real, plain picture and not something else? I'm not asking about a way to overcome specific vulnerability, I'm asking how can I be sure that file contain nothing else than a plain image data? (so I'll probably be protected also against 'yet to be find' vulnerabilities) | I have some suggestions: Use a separate domain. Host the images on a separate domain that is used only to host user-provided images. This will ensure that many browser-level attacks can have only limited effects. For instance, suppose the user's browser is vulnerable to content-type sniffing attacks, so it is possible to upload an image that some browsers will treat as HTML containing malicious Javascript; this defense ensures that the malicious Javascript can only tamper with other user's images, and cannot access your site's cookies or content or other security-sensitive stuff. However, this does not defend against code-injection attacks that exploit a vulnerability (e.g., a buffer overrun, a double-free) and execute native code. Defend against content-type sniffing. Follow practices I've outlined elsewhere to defend against content-type sniffing attacks. The most important one is to set a correct Content-Type: header on the HTTP responses where you serve the image. It can also be helpful to include a X-Content-Type-Options: nosniff header, to prevent some versions of IE from trying to do content-type sniffing. Convert to a fixed format. Convert the input image to a bitmap (keeping only the bitmap data, and throwing away all the extra annotations), then convert the bitmap to your desired output format. One reasonable way to do this is to convert to PBM format, then convert to PNG. This is not a reliable way to stop attacks, but it reduces some of the attack surface available to the attacker. For instance, it prevents the attacker from attaching malicious metadata that are crafted to exploit some vulnerability in the image parser. It also does not give the attacker a choice of image formats. To defeat this defense, the attacker must find a vulnerability in the PNG decoder, in the code that reads image pixel data. It prevents the attacker from exploiting a vulnerability in the decoder for some other image format or in the part of the PNG parser that reads metadata. So, while potentially helpful at reducing the risk, I would not expect this defense alone to be sufficient. Consider randomization. Consider inserting some random noise into the image. For instance, you might loop over all the pixels, and for each of the three intensities (corresponding to RGB) for that pixel, randomly choose among adding 1, subtracting 1, or leaving that intensity value alone. This introduces a tiny bit of noise into the image, but hopefully not enough to be noticeable to viewers. And, if you are lucky, it may make some attacks less likely to succeed, because the attacker cannot fully predict the result of the transformation. 
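As a rough sketch of suggestions 3 and 4 combined (this assumes the Pillow and NumPy packages are available and invents the function and file names; it is an illustration, not a hardened implementation):
from PIL import Image
import numpy as np

def reencode_upload(src_path, dst_path):
    # Decode the upload and keep only raw RGB pixel data, discarding
    # metadata and the original container format.
    pixels = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.int16)
    # Nudge each channel by -1, 0, or +1 so the output bytes are not fully
    # attacker-controlled, then clamp the values back into the 0..255 range.
    noise = np.random.randint(-1, 2, size=pixels.shape)
    pixels = np.clip(pixels + noise, 0, 255).astype(np.uint8)
    # Re-encode the clean bitmap into a single fixed output format.
    Image.fromarray(pixels).save(dst_path, format="PNG")
This only narrows the attack surface to the PNG pixel-data path, so it complements rather than replaces suggestions 1 and 2.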
This defense is highly heuristic and is certainly not guaranteed to be effective, but it is possible it might help, if used in addition to the other defenses I've outlined, as a sort of belt-and-suspenders defense-in-depth strategy. But please understand that this defense alone is probably not adequate. Depending upon how concerned you are about these risks and the sensitivity of your site, you don't need to do all four. If you are not concerned about code-injection vulnerabilities in the browser, you could do just #1 and #2. If you want partial protection against code-injection vulnerabilities in the browser, you could do just #1, #2, and #3. | {
"source": [
"https://security.stackexchange.com/questions/8587",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5694/"
]
} |
8,596 | I am building a web application which requires users to login. All communication goes through https. I am using bcrypt to hash passwords. I am facing a dilemma - I used to think it is safer to make a password hash client-side (using JavaScript) and then just compare it with the hash in DB server-side. But I am not sure this is any better than sending plain-text password over https and then hashing it server-side. My reasoning is that if attacker can intercept the https traffic (= read plaintext password) he can for example also change the JavaScript so it sends the plaintext password alongside the hashed one - where he can intercept it. The reason against hashing client-side is just ease of use. If I hash client-side I need to use two separate libraries for hashing. This is not an unsurmountable problem, but it is a nuisance. Is there a safety gain in using client-side hashing? Why? Should I also be using challenge-response then? UPDATE: what interests me the most is this - do these techniques (client-side hashing, request-response) add any significant security gain in case where https is used? If so, why? | If you hash on the client side, the hashed password becomes the actual password (with the hashing algorithm being nothing more than a means to convert a user-held mnemonic to the actual password). This means that you will be storing the full "plain-text" password (the hash) in the database, and you will have lost all benefit of hashing in the first place. If you decide to go this route, you might as well forgo any hashing and simply transmit and store the user's raw password (which, incidentally, I wouldn't particularly recommend). | {
"source": [
"https://security.stackexchange.com/questions/8596",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5700/"
]
} |
8,607 | If you use a quick hashing algorithm like MD5 or SHA-1 to hash passwords and you don't use any salt at all, how quickly could one expect a hacker to find my password out? If I use a truly random salt for each user, on what order of magnitude will this affect the length of time to crack my password? I have heard that hashing algorithms like md5 or sha-1 can be computed very quickly and at great scale, so you should not use them for password schemes today. But I know that there are lots of systems out there that use them, and I am curious to understand just how quickly these systems can be beaten, or if it is more of a theoretical problem that won't truly exist until a decade from now. I know the user chosen password matters a lot here, and if you can work that into your response, that would be great. As a bonus, which hashing algorithms are safest to use? | Assume you have no rainbow table (or other precomputed list of hashes),
and would actually need to do a brute-force or dictionary attack. This program IGHASHGPU v0.90 claims to be able to compute about 1300 million SHA-1 hashes (i.e. more than 2^30) per second on a single ATI HD5870 GPU. Assuming a password with 40 bits of entropy, this needs 2^10 seconds, which is about 17 minutes. A password of 44 bits of entropy (like the one in the famous XKCD comic) takes 68 minutes (worst case, average case is half of this). Running on multiple GPUs in parallel speeds this up proportionally. So, brute-forcing with fast hashes is a real danger, not a theoretical one. And many passwords have a much lower entropy, making brute-forcing even faster. If I use a truly random salt for each user, on what order of magnitude
will this affect the length of time to crack my password? The salt itself is assumed to be known to the attacker, and it by itself doesn't much increase the cracking time for a single password (it might increase it a bit, because the hashed data becomes one block longer, but that at most doubles the work). The real benefit of an (independent, random) salt is that an attacker can't use the same work to attack the passwords of multiple users at the same time. When the attacker wants just any user's password (or "as many as possible"), and you have some millions of users, not having a salt would cut down the attack time proportionally, even if all users had strong passwords. And certainly not all of them will. As a bonus, which hashing algorithms are safest to use? The current standard is to use a slow hashing algorithm. PBKDF2, bcrypt or scrypt all take both a password and a salt as input and a configurable work factor - set this work factor as high as your users will accept in login time on your server's hardware. PBKDF2 is simply an iterated fast hash (i.e. still efficiently parallelizable). (It is a scheme which can be used with different base algorithms. Use whatever algorithm you are using anyway in your system.) Bcrypt needs some (4KB) working memory, and thus is less efficiently implementable on a GPU with less than 4KB of per-processor cache. Scrypt uses a (configurable) large amount of memory in addition to processing time, which makes it extremely costly to parallelize on GPUs or custom hardware, while "normal" computers usually have enough RAM available. All these functions have a salt input, and you should use it. | {
"source": [
"https://security.stackexchange.com/questions/8607",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5709/"
]
} |
8,765 | Amazon's S3 storage service offers server-side encryption of objects, automatically managed for the user ( Amazon's Documentation ). It's easy to enable so I'm thinking "why not?", but what kind of security does this really provide? I guess it prevents someone from wandering into the AWS datacenter and grabbing a hard drive, but that seems very unlikely, and presumably anyone with access like that could also get the AES keys, wherever they're stored. It doesn't seem to protect the data once it's off the drives, since at that point it's decrypted, and anyone who has your credentials or can intercept the traffic will see the data in the clear. So what's the point, really? Just to say the data is "encrypted"? | The short answer is this: We have no idea, probably none. It might protect against stolen backups. But that assumes Amazon even makes backups. That seems very unlikely. If they did, why couldn't they recover data from their last S3 data loss? It's much cheaper and more efficient just to use multiple live copies. Also, Amazon would need the keys on every access. So it seems very unlikely that they store the keys anywhere other than approximately the same places they store the data. So if you're imagining a theft of live data devices, it's just as likely that they get the keys as well. But we don't know how Amazon stores, replicates, and/or backs up data. Nor do we know where they store the keys or how they distribute them. However, I've yet to hear a plausible argument that there exists a realistic threat they protect against. The "stolen backups" theory seems to be based on the false premise that Amazon uses backups when all the evidence suggests they use multiple, live copies with the keys quite nearby. Dropbox's encryption, however, does protect against one real threat model, albeit a very unlikely one. Dropbox stores their own keys and sends them to you, so it does protect you from a rogue Amazon employee. In exchange, you're vulnerable to a rogue Dropbox employee or Dropbox security bug. My own opinion is that Amazon added this feature just so they could say that data could be stored encrypted. Some people will mindlessly compare check boxes on feature lists and Amazon wanted a check box on the "secure/encrypted" line. Either way, the weakest link is most likely Amazon's internal network and human security and the validity of the implementation of the code that decides whether to permit accesses or not. | {
"source": [
"https://security.stackexchange.com/questions/8765",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5803/"
]
} |
8,772 | I've signed on to help a department move buildings and upgrade their dated infrastructure. This department has about 40 employees, 25 desktops, an old Novell server, and a handful of laboratory processing machines with attached systems. At the old location, this department had two networks - a LAN with no outside access whatsoever on an entirely separate switch, and a few machines with outside access. We are trying to modernize this setup a bit as pretty much every user needs to access email and the time tracking system. The parent organization (~10k employees) has a large IT department that is in charge of the connection and phone system at the new offsite location. The IT dept. had uverse dropped in and setup a VPN to their central network. Each desktop needs to be registered in the IT dept's system/website to get a (static) IP Address. Each IP Address given is outside accessible on any port that has a service listening on the client machine. The server has confidential (HIPPA) data on it, the desktops have mapped network drives to access (some) of this data. There is also a client/server LIS in place. My question is this: Is it worth making a stink that all of these machines are outside accessible? Should we: Request NAT to abstract the outside from the inside, as well as a firewall that blocks all traffic not explicitly defined as allowed? If so, what argument's can I make for NAT/firewall that outweigh the benefits of them having each machine registered in their system? I would be relaying all IT related requests from the end users to the IT department in either case - so it doesn't seem very necessary to have them tied down to specific addresses in their system. Most importantly, it sounds like a nightmare to manage separate firewalls on every desktop (varying platforms/generations) and on the server. Request the IT dept. block all incoming traffic to each wan accessible IP on whatever existing firewalls they have in place Keep the departments LAN completely isolated from the internet. Users must share dedicated machines for accessing email, internet, and time tracking system. Thanks in advance for any comments or advice on this. | NAT and firewalling are completely orthogonal concepts that have nothing to do with each other. Because some NAT implementations accidentally provide some firewalling, there is a persistent myth that NAT provides security. It provides no security whatsoever. None. Zero. For example, a perfectly reasonable NAT implementation might, if it only had one client, forward all inbound TCP and UDP packets to that one client. The net effect would be precisely the same as if the client had the outside address of the NAT device. Don't think that because most NAT devices have some firewalling built in by design or do some by accident that this means NAT itself provides any security. It is the firewalling that provides the security, not the NAT. The purpose of NAT is to make things work. You must not assume a machine is not outside accessible just because it's behind a NAT device. It's not outside accessible if some device is specifically configured not to permit it to be accessed from the outside, whether that device does NAT or not. Every machine having an outside address but with a stateful firewall that's properly configured, managed, and monitored is vastly superior to a cheap SoHo NAT box. Many actual SoHo NAT boxes forward traffic to inside hosts despite no inside host having ever sent traffic to the source of the forwarded traffic. 
Permissive NAT does really exist. | {
"source": [
"https://security.stackexchange.com/questions/8772",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5807/"
]
} |
8,861 | I have a classic DMZ architecture: My webserver is placed in the DMZ.
The webserver needs to communicate with a database server. This database server is the most critical component of my network as it contains confidential data. Where should I place the DB server and why? Should I add a second firewall and create another DMZ? | The best placement is to put the database servers in a trusted zone of their own. They should allow inbound connections from the web servers only, and that should be enforced at a firewall and on the machines. Reality usually dictates a few more machines (db admin, etc). Obey reality as needed, of course. They should only be making outbound connections if you're updating software on them. | {
"source": [
"https://security.stackexchange.com/questions/8861",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4508/"
]
} |
8,912 | I'd like to know what it means to say "the cryptosystem C uses keys with a length of x bits". I do not understand what the bit length means... doesn't it depend on the encoding? The same word encodes to bit strings of different lengths in utf8, iso and unicode, so is there a general encoding used to define the length of a key? Or does "length of x bits" mean something completely different? | For symmetric algorithms (symmetric encryption, Message Authentication Code), a key is a sequence of bits, such that any sequence of the right length is a possible key. For instance, AES is a symmetric encryption algorithm (specifically, a block cipher) which is defined over keys of 128, 192 and 256 bits: any sequence of 128, 192 or 256 bits can be used as a key. How you encode these bits is not relevant here: regardless of whether you just dump them raw (8 bits per byte), or use Base64, or hexadecimal, or infer them from a character string, or whatever, is up to you. There are a few gotchas with some algorithms. The prime example is DES, a predecessor to AES. DES is defined to use a 64-bit key. However, if you look at the algorithm definition, you see that only 56 of these bits are used; the other 8 are simply ignored (if you number bits from 1 to 64, these are bits 8, 16, 24, 32, 40, 48, 56 and 64; they are supposed to be "parity bits" depending on the 56 others, but nobody really bothers with setting or checking them). So it is often said that DES has a 56-bit key. This is because key length is related to security: if an algorithm accepts keys of length n bits, then there are 2^n possible keys, and thus trying them all (an attack known as "exhaustive search" or "brute force") has time proportional to 2^n (with n big enough, i.e. more than about 85, this is technologically infeasible). In that sense, DES offers the security of a 56-bit key (2^56 "really distinct" possible keys). Yet, if you use a library implementing DES, that library will expect DES keys as sequences of 64 bits (often provided as 8 bytes). Another algorithm with a special rule is RC2. It accepts keys of length 8 to 128 bits (multiple of 8 only); but it also has an extra parameter called effective key length denoted by "T1". In the middle of the processing of the key, an internal value is "reduced" to a sequence of T1 bits, which means that subsequent encryption will depend only on the values of T1 specific bits in that internal value. The resistance of RC2 against exhaustive search is then no more than that offered by a T1-bit key, because one can try all possible sequences of T1 bits for that internal value. Yet, RC2 still has an actual key length (the length of the sequence of bits which is provided as key) which can be greater than T1.
For asymmetric algorithms (also known as public-key cryptography, encompassing asymmetric encryption, digital signatures, some key exchange protocols, and a few more esoteric algorithms), keys work in pairs consisting of a public key and a private key. These keys are mathematical objects with some heavy internal structure. The "key length" is then a conventional measure of the size of one of the involved mathematical objects. For instance, an RSA public key contains a big integer called the modulus, as well as another integer (usually small) called the public exponent. When we say a "1024-bit RSA key", we mean that the modulus has length 1024 bits, i.e. is an integer greater than 2^1023 but lower than 2^1024. Such an integer could be encoded as a sequence of 1024 bits, i.e. 128 bytes. Yet, the public key must also contain the public exponent, so the actual encoded length will be greater. And the private key is, from a theoretical point of view, knowledge of how the modulus can be factored into prime numbers; the traditional encoding for that knowledge is that of the prime factors, along with a bunch of helper values which could be recomputed from the factors (but that would be slightly expensive) and may help in executing the algorithm faster. For distinct key types which work over distinct mathematics, other "lengths" are used, so you cannot directly compare security of algorithms by simply comparing the key lengths. A 256-bit ECDSA key is vastly more secure than a 768-bit RSA key. Also, the mathematical structure inherent to public/private key pairs allows for much faster attacks than simply trying out a bunch of random bits, and there are many subtle details. See this site for explanations and online calculators for the various sets of rules about comparing key sizes that many regulatory organizations have come up with. | {
"source": [
"https://security.stackexchange.com/questions/8912",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2370/"
]
} |
8,917 | Could anyone provide some insight on performing an internal vulnerability assessment regarding Novell NetWare as the backend with client workstations and servers running various versions of Windows? Essentially, I'm looking for tools or techniques on how to run a thorough scan by joining my machine to the Novell "tree" and then launching a scan against the Windows workstations and servers. Additionally, it would be beneficial if a tool exists that scans for vulnerabilities on both systems. Other questions surrounding this are: If there is a separate Windows domain, would I have to join that domain in order to run a scan? If the Windows machines are part of a WORKGROUP, would I just need to join the Novell "tree" and enter the Windows admin credentials inside of a scanner tool to scan the Windows machines? Could I join the WORKGROUP and run the scan with the Novell NetWare admin credentials used inside of the scanner tool? To sum up, a person plugs their laptop into an environment running Novell NetWare and wants to scan the Windows clients connected to the Novell server. Please ask for more detail if necessary. | For symmetric algorithms ( symmetric encryption , Message Authentication Code ), a key is a sequence of bits, such that any sequence of the right length is a possible key. For instance, AES is a symmetric encryption algorithm (specifically, a block cipher ) which is defined over keys of 128, 192 and 256 bits: any sequence of 128, 192 or 256 bits can be used as a key. How you encode these bits is not relevant here: regardless of whether you just dump them raw (8 bits per byte), or use Base64, or hexadecimal, or infer them from a character string, or whatever, is up to you. There are a few gotchas with some algorithms. The prime example is DES , a predecessor to AES. DES is defined to use a 64-bit key. However, if you look at the algorithm definition, you see that only 56 of these bits are used; the other 8 are simply ignored (if you number bits from 1 to 64, these are bits 8, 16, 24, 32, 40, 48, 56 and 64; they are supposed to be "parity bits" depending on the 56 others, but nobody really bothers with setting or checking them). So it is often said that DES has a 56-bit key . This is because key length is related to security: if an algorithm accepts keys of length n bits, then there are 2 n possible keys, and thus trying them all (attack known as "exhaustive search" or "brute force") has time proportional to 2 n (with n big enough, i.e. more than about 85, this is technologically infeasible). In that sense, DES offers the security of a 56-bit key ( 2 56 "really distinct" possible keys). Yet, if you use a library implementing DES, that library will expect DES keys as sequences of 64 bits (often provided as 8 bytes). Another algorithm with a special rule is RC2 . It accepts keys of length 8 to 128 bits (multiple of 8 only); but it also has an extra parameter called effective key length denoted by "T1". In the middle of the processing of the key, an internal value is "reduced" to a sequence of T1 bits, which means that subsequent encryption will depend only on the values of T1 specific bits in that internal value. The resistance of RC2 against exhaustive search is then no more than that offered by a T1-bit key, because one can try all possible sequences of T1 bits for that internal value. Yet, RC2 still has an actual key length (the length of the sequence of bits which is provided as key) which can be greater than T1. 
For asymmetric algorithms (also known as public-key cryptography , encompassing asymmetric encryption, digital signatures, some key exchange protocols, and a few more esoteric algorithms), keys work by pairs consisting in a public key and a private key. These keys are mathematical objects with some heavy internal structure. The "key length" is then a conventional measure of the size of one of the involved mathematical objects. For instance, a RSA public key contains a big integer called the modulus , as well as an other integer (usually small) called the public exponent . When we say a "1024-bit RSA key", we mean that the modulus has length 1024 bits, i.e. is an integer greater than 2 1023 but lower than 2 1024 . Such an integer could be encoded as a sequence of 1024 bits, i.e. 128 bytes. Yet, the public key must also contain the public exponent, so the actual encoded length will be greater. And the private key is, on a theoretical point of view, knowledge of how the modulus can be factored in prime numbers; the traditional encoding for that knowledge is that of the prime factors, along with a bunch of helper values which could be recomputed from the factors (but that would be slightly expensive) and may help in executing the algorithm faster. For distinct key types which work over distinct mathematics, other "lengths" are used, so you cannot directly compare security of algorithms by simply comparing the key lengths. A 256-bit ECDSA key is vastly more secure than a 768-bit RSA key. Also, the mathematical structure inherent to public/private key pairs allows for much faster attacks than simply trying out bunch of random bits, and there are many subtle details. See this site for explanations and online calculators for the various set of rules about comparing key sizes that many regulatory organizations have come up with. | {
"source": [
"https://security.stackexchange.com/questions/8917",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4762/"
]
} |
8,964 | The EFF recommends using HTTPS everywhere on your site, and I'm sure this site would agree. When I asked a question about using Django to implement HTTPS on my login page, that was certainly the response I got :) So I'm trying to do just that. I have a Django/nginx setup that I'm trying to configure for HTTPS-only - it's sort of working, but there are problems. More importantly, I'm sure if it's really secure , despite seeing the https prefix. I have configured nginx to redirect all http pages to https, and that part works. However... Say I have a page, https://mysite.com/search/ , with a search form/button on it. I click the button, Django processes the form, and does a redirect to a results page, which is http://mysite.com/search/results?term="foo" . This URL gets sent to the browser, which sends it back to the nginx server, which does a permanent redirect to an https -prefixed version of the page. (At least I think that's what is happening - certainly IE warns me that I'm going to an insecure page, and then right back to a secure page :) But is this really secure? Or, at least as much security as a standard HTTPS-only site would have? Is the fact that Django transmits a http-prefix URL, someone compromising security? Yes, as far as I can tell, only pages that have an https-prefix get replied to, but it just doesn't feel right :) Security is funky, as this site can attest to, and I'm worried there's something I'm missing. | Secure your cookies In settings.py put the lines SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True and cookies will only be sent via HTTPS connections. Additionally, you probably also want SESSION_EXPIRE_AT_BROWSER_CLOSE=True . Note if you are using older versions of django (less than 1.4), there isn't a setting for secure CSRF cookies. As a quick fix, you can just have CSRF cookie be secure when the session cookie is secure ( SESSION_COOKIE_SECURE=True ), by editing django/middleware/csrf.py : class CsrfViewMiddleware(object):
...
def process_response(self, request, response):
...
response.set_cookie(settings.CSRF_COOKIE_NAME,
request.META["CSRF_COOKIE"], max_age = 60 * 60 * 24 * 7 * 52,
domain=settings.CSRF_COOKIE_DOMAIN,
secure=settings.SESSION_COOKIE_SECURE or None) Direct HTTP requests to HTTPS in the webserver Next you want a rewrite rule that redirects http requests to https, e.g., in nginx server {
listen 80;
rewrite ^(.*) https://$host$1 permanent;
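# Editor's note (assumption, not from the original answer): on current nginx
# versions the same redirect is usually written without the regex, e.g.
# return 301 https://$host$request_uri;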
} Django's reverse function and url template tags only return relative links; so if you are on an https page your links will keep you on the https site. Set OS environmental variable HTTPS to on Finally, (and my original response excluded this), you need to enable the OS environmental variable HTTPS to 'on' so django will prepend https to fully generated links (e.g., like with HttpRedirectRequest s). If you are using mod_wsgi, you can add the line: os.environ['HTTPS'] = "on" to your wsgi script . If you are using uwsgi, you can add an environmental variable by the command line switch --env HTTPS=on or by adding the line env = HTTPS=on to your uwsgi .ini file. As a last resort if nothing else works, you could edit your settings file to have the lines import os and os.environ['HTTPS'] = "on" , which also should work. If you are using wsgi, you may want to additionally set the environmental variable wsgi.url_scheme to 'https' by adding this to your settings.py : os.environ['wsgi.url_scheme'] = 'https' The wsgi advice courtesy of Vijayendra Bapte's comment . You can see the need for this environmental variable by reading django/http/__init__.py : def build_absolute_uri(self, location=None):
"""
Builds an absolute URI from the location and the variables available in
this request. If no location is specified, the absolute URI is built on
``request.get_full_path()``.
"""
if not location:
location = self.get_full_path()
if not absolute_http_url_re.match(location):
current_uri = '%s://%s%s' % (self.is_secure() and 'https' or 'http',
self.get_host(), self.path)
location = urljoin(current_uri, location)
return iri_to_uri(location)
def is_secure(self):
return os.environ.get("HTTPS") == "on" Additional Web Server Things: Take that guy's advice and turn on HSTS headers in your web server by adding a line to nginx: add_header Strict-Transport-Security max-age=31536000; This tells your web browser that your website for the next 10 years will be using HTTPS only. If there's any Man-in-the-middle attack on any future visit from the same browser (e.g., you log on to a malicious router in a coffee-shop that redirects you to an HTTP version of the page), your browser will remember it is supposed to be HTTPS only and prevent you from inadvertently giving up your information. But be careful about this, you can't change your mind and later decide part of your domain will be served over HTTP (until the 10 years have passed from when you removed this line). So plan ahead; e.g., if you believe your application may soon grow in popularity and you'll need to be on a big CDN that doesn't handle HTTPS well at a price you can afford, you may have an issue. Also make sure you disable weak protocols. Submit your domain to an SSL Test to check for potential problems (too short key, not using TLSv1.2, using broken protocols, etc.). E.g., in nginx I use: ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"; | {
"source": [
"https://security.stackexchange.com/questions/8964",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2486/"
]
} |
9,011 | I would like to know if it is safe for the host system of a virtual machine (VM - VirtualBox OSE in my case) to execute malware. Can a virus break out and read or write data from the host system? Can it establish an Internet connection if I disable it in my VM? Is a VM a safe environment to try to find out what a virus does? Can a fork bomb "kill" the host system if I reduce the memory to about 1/4 of my total real memory? How much CPU-time/resources can it use? | Theoretically, the guest system is totally isolated by the VM and cannot even "see" the host, let alone attack it; so the guest cannot break out of the VM. Of course, in practice, it has occasionally happened ( web archive link ). An attack requires exploiting a security issue (i.e. a programming bug which turns out to have nasty consequences) in the VM implementation or, possibly, the hardware features on which the VM builds on. There are few exit routes for data out of the VM; e.g., for Internet access, the VM is emulating a virtual network card, which deals only with the lowest level packets, not full TCP/IP -- thus, most IP-stack issues remain confined within the VM itself. So bugs leading to breakout from VM tend to remain rare occurrences. There are some kinds of attacks against which VM are very effective, e.g. fork bombs. From the point of view of the host system, the VM is a single process. A fork bomb in the guest will bring to its knees the scheduler in the guest OS, but for the host this will be totally harmless. Similarly for memory: the VM emulates a physical machine with a given amount of RAM, and will need about that amount of "real" RAM to back it up efficiently. Regardless of what the guest does, the VM will never monopolize more RAM than that. (You still want to limit VM RAM size to, say, at most 1/2 of your physical RAM size, because the extra "real" RAM is handy for disk caching; and the host OS will want to use some, too.) | {
"source": [
"https://security.stackexchange.com/questions/9011",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3286/"
]
} |
9,037 | Most modern browsers support " private browsing mode " (also known in Chrome as "Incognito mode"), where the browser does not save any information to disk about your browsing while in this mode. In modern browsers, can a web site detect whether a user who is visiting the web site has private browsing mode enabled or not? The background research I've done. Here's what I've been able to find related to this question. Unfortunately, it doesn't really answer the question above. A 2010 study of private browsing mode showed that it is possible for web sites to detect whether the browser is in private browsing mode, by using a CSS history sniffing attack. (In private browsing mode, sites are not added to the history, so you can use history sniffing to check whether the visitor is in private browsing mode.) Since then, though, modern browsers have incorporated defenses against CSS history sniffing attacks. Consequently, I would not expect that method of detecting whether the browser is in private browsing mode to be successful any longer. (I realize the defenses against history sniffing are not perfect , but they may be good enough for these purposes.) There may be ways for a website you're visiting to learn whether you are currently logged into other sites (think: Facebook). If the user is currently logged into other services (like Facebook), a website could plausibly guess that the user is not currently using private browsing mode -- this is not a sure thing, but perhaps one could make some kind of probabilistic inference. However, if the user isn't logged into other services, then I guess all we can say is that we don't know whether private browsing mode is in use. It is possible this might yield a partial leak of information, I suppose, but it sounds unreliable at best -- if it even works. It is also possible that this might not work at all. So, can anyone provide any more recent information about whether there's a way for a website to test whether its visitors are using private browsing mode? | Note this answer was given in 2011. Today the answer is an unequivocal YES -- as of this writing in 2020 there are reliable techniques in wide use and have been for a while. Please see one of the good current answers below 1 2 for more up to date information. I'm not sure you could reliably detect private browsing, but I think you may be able to apply some heuristics to make a good guess that a user is using various privacy-enhancing features. As indicated in my comment on the question, whether this is good enough or fits your application depends on what you want to be able to do in reaction to detecting private browsing. As Sonny Ordell mentioned, I'm also not sure that you can distinguish private browsing from the ad hoc use of various privace-enhancing features (e.g. manually clearing history or cookies). Let's assume you operate a web application, and you want to detect when one of your users (with an account) switches to private browsing. I'm specifying that the user has an account, because this strategy relies on tracking various bits of behavior data. The aspects of private browsing are (at least in Firefox ): history, form/search entries, passwords, downloads, cookies, cache, DOM storage. I'm not sure how to probe for downloads, but I think the others can be probed. If you get a positive detection on all of them, it seems more likely that your user is private browsing. In the trivial case, you keep track of (IP, user-agent) for each user. 
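A minimal sketch of that bookkeeping (hedged: the store, function names and logic below are illustrative assumptions, not code from this answer) might look like this in Python:

# Maps (client_ip, user_agent) -> username, updated on every
# authenticated (cookie-bearing) request.
seen_clients = {}

def record_authenticated_request(ip, user_agent, username):
    seen_clients[(ip, user_agent)] = username

def looks_like_private_browsing(ip, user_agent, has_session_cookie):
    # A cookie-less request from an (IP, UA) pair already tied to an
    # account may indicate private browsing; it is only a heuristic.
    return (not has_session_cookie) and (ip, user_agent) in seen_clients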
When you get a cookie-less request for a matching (IP, UA) record, you might infer that the corresponding user is private browsing. This method fails (no detection) if: He uses something like ProxySwitchy or TorButton to activate Tor during private browsing, thus changing IP. He switches to a different browser (e.g. usually uses FF and switches to Chrome for Incognito mode). The switch to private browsing is not immediate and his ISP has issued a new IP (e.g. on Friday he was 10.1.2.3, he didn't use your app over the weekend, and on Monday he is 10.1.4.5). As mentioned in Sonny Ordell's answer , if another person uses the same browser in private browsing mode to access a separate account on your site, you will get a detection -- but this is a slightly different case than if the "normal" user simply switches to private browsing mode. You'll get a false-positive if the user simply clears his cookies for your site, or uses a secondary profile (e.g. I keep a few different Firefox profiles with different sets of plugins for certain testing and/or to avoid tracking, though I'd guess this is very uncommon). As a more complex check, you could use something like EFF's panopticlick and maintain a browser fingerprint (or collection of fingerprints) instead of just the UA for each user. This fails in situation 2 mentioned above (e.g. if the user exclusively uses FF for identifiable browsing and Chrome for incognito). The fingerprint will be much more generic (and thus much less useful) if the user has javascript disabled. The fingerprint will change if the user selectively enables javascript in different sessions (e.g. NoScript with temporarily allowed sites). You may be able to defeat issue 1 (Tor) by detecting access via a Tor exit node, and combining this with fingerprinting. This seems like it would only be helpful in a narrow range of cases. Instead of just cookies for the checks above, test localStorage . If it's typically enabled, and your key isn't in the storage for this visit, and the fingerprint matches, then this is probably private browsing. Obviously, if the user normally has storage disabled, then you can't use it. The failure modes are similar to those described above for cookies. I haven't tested or developed the idea, but I suppose you could play games with Cache-Control . (A quick search reveals that this isn't an original idea -- that project has what looks like proof-of-concept code.) This strategy fails if the user goes through a shared caching proxy -- the meantime page mentions anonymizer.com. Firefox, at least, doesn't use the cache in private browsing mode. (See this site for a demo of cache-based tracking.) So you could combine this with the UA/fingerprinting mentioned above: if your cache tracker indicates this is a first visit, then you can guess that the user is private browsing. This fails with a false positive if the user cleans his cache; combine with other techniques to get a better guess. You could detect and track, for each user, whether the browser autofills a certain form element. If you detect that a given user doesn't get autofill on that form element, you might infer private browsing. This is brittle -- perhaps the user is not using his "primary" computer, but you could combine it with fingerprinting as mentioned above for a more reliable guess. Side-channel timing attack: detect and track the typical time it takes for each user to log into your app. 
There will be variations, but I'm guessing that you could get an accurate guess about whether someone is using password autofill. If a user normally uses password autofill (i.e. fast transition through the login page), and then for a given visit (with a matching fingerprint) is not using autofill, you can infer private browsing. Again this is brittle; combine with other techniques for a better guess. You'll also want to detect and correct for network latency on a given page load (e.g. perhaps the user's network is just slow on a given day, and a slow login page transition is just latency and not a lack of autofill). You can be slightly evil and auto-logout the user (give them a bogus error message, "please try again") to get a second data point if you're willing to annoy your users a bit. Combine this with what you mentioned in the question about detecting if the user is logged in to other services (e.g. Facebook), and you can have more confidence in your guess. If you're really motivated, you could play games with DNS and tracking page load times. A quick test of FF 3.6 and Chrome 15 seems to indicate that neither browser clears the DNS cache in private browsing mode. And the browser has absolutely no control over the local system's DNS cache. If you use a side-channel DNS timing attack to perform user tracking as an alternative (or in addition to) fingerprinting, you may get a more reliable guess. I'm not sure how reliable tracking via DNS timing will be. Detection of "anonymous" users in private browsing mode will be much harder, since you haven't had the opportunity to accumulate data on their "typical" behavior. And, since most of the features only kick in when they end the browser session, you don't really know if they're ever going to be back. With that said, here's an idea to detect private browsing by anonymous users, if you're willing to be evil, and you had some resource for which you knew a user was willing to give your site a second chance, and you can force the user to enable javascript. Track fingerprint, set a persistent cookie, localStorage, cache -- whatever you can do to track the user. If it's a first visit according to your fingerprint, crash/hang the browser via javascript (or flash, or whatever evil tricks you know). Suck up tons of memory, or get stuck in a loop, or whatever it takes so that the user closes the browser. Then when they return, you see (from the fingerprint) that it's a second visit. If the cookie/storage/cache/etc aren't set, then you can infer that the first session was private browsing, and I suppose you might infer that the second session is probably also private browsing. This obviously fails if the user doesn't come back, or if you can't crash / convince them to kill the browser window. As a bonus, if you send them to a custom URL, and they're in non-private-mode and restore the browsing session then you can guess they aren't in private browsing mode (unless they bookmarked the URL). Everything above is full of holes -- plenty of room for false positives or negatives. You'll probably never know if I'm using private browsing, or if I'm running a browser in a VM with no persistent storage. (What's the difference?) The worst part is probably that if you do get an answer with a reliable method for detecting private browsing is that it seems unlikely to remain viable for very long as browsers either "fix" it or users find workarounds to avoid detection. | {
"source": [
"https://security.stackexchange.com/questions/9037",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/971/"
]
} |
9,172 | I am Developing a program in several platforms and languages, But I don't want anybody to discover the origin computer where the program was developed, is there any way that someone can discover that ? | Source code consists in a bunch of text files. The contents of a text file are exactly what a text editor shows, so you can control that "visually". Beware of revision control systems such as CVS or Subversion: they can automatically replace some specific tags in source code (like " $Id$ ") with an identifying string which may contain the current date and time, your login name, and other information -- that feature is considered to be good for traceability, but I understand that you would not like it in your specific case. Compiled code is quite something else. Some compilers may automatically added identifying strings like what revision control software does, as "comment" fields in the executable structure. This needs not even be a deliberate spying device: traceability is really a good idea in the general case; no need to imagine a government bribing compiler developers into adding such things in compilers just to be able to spy on programmers. Also, executable formats often include some "blanks" -- unused parts added for alignment reasons -- which the compiler might not have bothered with filling with zeros, instead of just writing what was in RAM at that place. This has occurred with an older version of lcc-win32 , which was thus writing out random excerpts of the RAM contents which could contain confidential information (I think this has been fixed for lcc-win32, but it could happen with other toolsets). Other file formats can also embed (and thus leak) some information. For instance, PNG images can include "comments" (which do not change the visual aspect of the picture in any way). GIMP , an image manipulation program, uses the comment field to state that it was involved in the image processing; any tool could also add some information which, in your view, would be less benign. Many potential leaks can be detected visually, by looking at the files as if they were text. But this does not cover the possibility of one of your tools being voluntarily bugged so that it includes incriminating evidence in its output (such tracing information would be encrypted so as to "look random" except for whoever knows where to look). Unfortunately for the state of the World at large, "a revolution in my country" is not a very precise indication. There currently are armed insurrections or similar unrest in quite a few countries just now, including, but not limited to, Afghanistan, Yemen, Syria, Somalia, parts of Libya, Colombia, Sudan, and Southern Sahara; and things are not completely clear in Egypt, Iraq or Iran, just to include the few I can think of from memory. | {
"source": [
"https://security.stackexchange.com/questions/9172",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/147/"
]
} |
9,234 | I need some help tracing a vulnerability on my server. For the second time, my server has been compromised with files being replaced with virus-ridden downloads. According to the filesystem dates, over a period of 45 minutes 4 exe files on my server were replaced with renamed versions of the same virus. My web server is running Ubuntu 10.4.3 LTS with kernel version 2.6.32-31-generic, kept completely patched and up to date. The only way of accessing the shell is via SSH and a password-protected private key that I have with me on a USB stick. Password SSH login is disabled and the server logs (which I know can be modified, but I have good reason to believe they haven't) indicate that SSH was not used to log into the server. The web serving software stack is very complicated. There's PHP (5.3.3-1) w/ Suhosin v0.9.29, nginx 1.0.9 (now updated to 1.0.10), Tomcat (in a jail and I suspect not associated), and MySQL 5.1.41. I admit that at the time of the first attack, I had been content to blithely chmod -R 777 my web directory for headache-mitigation purposes. Now I run a complete mess of PHP scripts including but not limited to WordPress, vBulletin, and several homemade products; the first two of which are always up to date and the latter has been written with fairly great care to escape or normalize any user-inputted values. Given the weak file permissions but strongly-locked down server access, I was highly tempted to suspect a vulnerability in one of the many PHP scripts that allowed the execution of random code. I have since completely locked down the file permissions. nginx/php both run as www-data:www-data with all files given only execute and read permissions ( chmod -R 550 /var/www ). Yet today, after all this, my server was again compromised. The thing is, the files that were replaced still have 550 permissions, the SSH logs indicate no log in, and I'm completely lost as to what to do or try next. I tried to recreate the attack on the paths that were replaced with a very basic PHP script: $file = fopen('/var/www/mysite.com/path/to/file', 'w');
fwrite($file, 'test');
fclose($file) But that gave me the appropriate permissions denied error. Can anyone please, please advise me where to look next for the source of this vulnerability? Am I missing something with my file permissions? I know that server, once compromised, is pretty much "gone" forever. But that's not really an option here. I've recursively grepped my entire /var/log folder for the afflicted file names hoping to find something, but nothing came up. I also searched for any scripts in the cron folder or elsewhere that might have been placed at the time of the first attack to attack again at a later date, but (a) found nothing, and (b) shouldn't find anything as the files in /etc/ are not modifiable by www-data (assuming a nginx/PHP point of infiltration). I should add that both times I have grep'd the nginx access logs (combined style) for the names of the infected files, but found nothing. I do understand/realize that many ways of obscuring the file names from my greps exist, however. | Some techniques for trying to find how your attacker got in: Look at the timestamps on any files you know the attacker changed then look through all your logs for entries as close to each timestamp as possible. As others have said, the web access logs and web error logs are the most likely to hold the evidence of the original attack vector but other log files may hold clues as well. Error logs often hold the best clues. The logs of all network-accessible daemons are also good places to look. It may also be worthwhile looking for other files with timestamps close to the ones you know. /etc/passwd is an obvious one but even log files can be suspicious if they have an unusual timestamp. If logrotate runs at the same time every day and one of your log files has a timestamp that doesn't match this time, it was probably altered to cover his tracks, and now you know a little more about what he did. Don't forget the .bash_history files in user's home directories. the find command should be able to handle this for you. Run scalp over your web access logs. The original attack may have happened some time before the files appeared. Scalp will produce false positives but it should narrow down the potential suspect log entries for you to manually analyse for irregularities. It may also miss the attack entirely - it's not perfect - but it may help. Don't spend too much time on forensics in the compromised system. As others have noted, the attacker has had the opportunity to remove all evidence of the original attack and to add a rootkit to hide his continued presence. If he has missed something or if he has not even tried, the above tasks may work but if he has hidden himself well then you would be just wasting valuable time. If you fail to find the source of the attack, wipe and re-install the server in question but with some additions in order to catch him next time. Ship your logs off to a different server using some variant of syslog. ( rsyslog and syslog-ng are the recommended choices) Preferably, this server should do nothing but receive logs and should not share login keys or passwords with any other server. The goal is to make sure your logs cannot be tampered with or deleted. Add extra logging beyond the default. Jeff already mentioned AppArmor and since you are using Ubuntu this will probably be the best choice. Make sure its logs are sent to the logging box. Install and turn on the audit daemon . Make sure its logs are sent to the logging box. Run an IDS such as Snort, PHP-IDS or mod_security. 
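As one concrete illustration of the log-shipping advice above (a hedged sketch: the hostname and file path are hypothetical, and the exact rsyslog setup varies by version), a single forwarding rule on the web server copies everything to the dedicated log box:

# /etc/rsyslog.d/90-forward.conf on the web server
# "@@" forwards over TCP; a single "@" would use UDP instead.
*.*  @@loghost.example.internal:514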
You can run more than one IDS; these tools don't all do exactly the same job. Some hardware firewalls come with IDS/IPS modules. Make sure the logs from the IDS are sent to the logging box. Add a file integrity monitoring system such as AIDE or Tripwire . Make sure the logs from these tools are sent to the logging box. Monitor your logs. Short of a commercial SIEM system, something like Splunk can be installed for free and can analyse a limited quantity of logs. Set up rules to match what is normal for your servers and filter it out. Whatever is left is worthy of closer inspection. There is much more you can do if you have the time and the money, such as full network packet captures, but realistically this is about all a lone sysadmin can be expected to handle. If the attacker shows up again, you will be much more likely to find the attack vector and much more likely to detect him as soon as the attack is made. | {
"source": [
"https://security.stackexchange.com/questions/9234",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6062/"
]
} |
9,260 | SHA is the hashing mechanism. However, RSA is the encryption algorithm. So does RSA algorithm use SHA hashing mechanism to generate hashing keys which in turn is used to encrypt the message?? Moreover, RSA itself gives 2 keys. One can be kept public and one private. Now, these keys can be used to encrypt as well as decrypt. Ref : RSA . Then what is the use of SHA in RSA? In a certificate given by any site that gives HTTPS security, there is an SHA as well as a MD5 key present. How are these produced and used in the eccryption or decryption of data transferred to the browser? | RSA is actually two algorithms, one for asymmetric encryption, and one for digital signatures (the signature algorithm is traditionally -- but incorrectly -- described as "encryption with the private key" and this is an endless source of confusion). Asymmetric encryption uses keys. Keys are parameters to the algorithm; the algorithm itself is the same for everybody (in software terms, it is the executable file) while keys vary between users. In a key pair , the public key is the key which is used to encrypt data (convert a piece of data, i.e. a sequence of bytes, into another sequence of bytes which is unfathomable for everybody) while the private key is the key which allows one to decrypt data (i.e. reverse the encryption). Whereas in symmetric encryption , the encryption and decryption keys are identical, but with asymmetric encryption, the encryption and decryption keys are distinct from each other (hence the name); they are mathematically linked together, but it should be unfeasible (i.e. too hard to do with a mere bunch of computers) to recover the decryption key from the encryption key. This is why the encryption key can be made public while the decryption key is kept private: revealing the public key does not reveal the private key. What asymmetric encryption achieves is no trivial feat. The possibility to reveal the public key while not saying too much about the private key, but such that both keys work together (what is encrypted with the public key can be decrypted by the corresponding private key, but none other), requires a lot of mathematics! RSA is full of mathematics. This contrasts with symmetric encryption algorithms which are "just" ways to make a big mess of data by mixing bits together. Asymmetric encryption is the natural tool to use when we want to allow for confidential transmissions between any two users among a big population. If you have 1000 users, and you want any of the two users to be able to exchange data with each other without allowing anybody to spy on them (including the 998 other users), then the classical solution would be to distribute keys for symmetric encryption to every pair of users. Alice and Bob would have a known, common key; Alice and Charlie would also have a shared key (not the same); and so would Bob and Charlie; and so on. Each user would need to remember his "shared key" with every other one of the 999 other users, and you would have 499500 keys in total. Adding a 1001th user would involve creating 1000 additional symmetric keys, and give one to each of the 1000 existing users. The whole key distribution soon turns into an unusable/infeasible nightmare. With asymmetric encryption though, things are much more straightforward in terms of key distribution: every user just has to remember his/her own private key; and the public keys(being public) can be distributed through some sort of broadcasting (e.g. a directory). RSA has some operational constraints. 
With the most used variant (the one known as PKCS#1 v1.5 ), if the size of the RSA key is "1024 bits" (meaning that the central mathematical component of the key pair is a 1024-bit integer), then RSA can encrypt a message of up to 117 bytes in length, and yield an encrypted message of length 128 bytes. That limited size, and the size increase when encrypting, are unavoidable consequences of the mathematical structure of the RSA encryption process. Due to these constraints, we do not usually encrypt data directly with RSA; instead, we select a small sequence of random bytes, which we call session key . We encrypt the session key with RSA; and then we use the session key with a symmetric encryption algorithm to process the whole message. This is called hybrid encryption . SHA is the common name for a family of cryptographic hash functions . The very first member of that family was described under the name 'SHA' but was soon deprecated after a serious weakness was found in it; a fixed version was published under the name SHA-1 (the weak version is colloquially known as SHA-0 ). Four new SHA-like functions were added to the family later on ( SHA-224 , SHA-256 , SHA-384 and SHA-512 : which are collectively known as 'SHA-2'). Hash functions have no key. A hash function is an executable algorithm which is pure code. There is one SHA-1 and everybody uses the same. Hash functions "just" make a big mess of the input data, which is not meant to be unraveled. Actually, it is meant to be resilient to unraveling. Even though everybody knows all that is to be known about a hash function (there is no key, only code, and nothing of it is secret), it still turns out to be "too hard" to recompute a matching input message, given the hash function output. It is even unfeasible to find two distinct input messages which, when given to the hash function, yield the same output; there must exist such pairs of messages -- called collisions -- because a hash function output has a fixed small size, while accepted inputs can be widely larger, so there are more possible inputs than possible outputs. It is a mathematical certainty that collisions exist for every hash function, but actually finding one is another matter. A hash function, as itself, does not do anything of immediate high value, but it is a very important building block for other algorithms. For instance, they are used with digital signatures . A digital signature "proves" conscious action of a designated signer over a piece of data; like asymmetric encryption, this involves key pairs and mathematics, and associated constraints on the signed data. A hash function h is such that signing h(m) is as good as signing m itself: since it is unfeasible to find two distinct messages which hash to the same value, approval of the hash output is good enough. The point being that the output of the hash function is small enough to be usable with the mathematics hidden in the signature algorithm, even if the message itself is big (SHA-1 can process gigabytes of data, and yield a 20-byte output). It can be noted that some recent variants of RSA-the-encryption-algorithm (with the 'OAEP padding' from PKCS#1 v2.0) internally uses hash functions. Hash functions are good "randomizers" (the output of a hash function does not exhibit recognizable structure) and this makes them appropriate for building more elaborate cryptographic algorithms with good security features. 
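To illustrate hybrid encryption concretely, here is a hedged sketch using recent versions of the third-party Python cryptography package (the key sizes, message and helper names are arbitrary examples for illustration, not a vetted protocol and not part of the original answer):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's RSA key pair (2048-bit modulus).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"far too long to encrypt directly with RSA..." * 1000

# 1. Pick a random session key and encrypt the bulk data symmetrically.
session_key = os.urandom(32)           # 256-bit AES key
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, message, None)

# 2. Encrypt only the small session key with RSA (OAEP padding).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# The recipient unwraps the session key with the private key,
# then decrypts the bulk data with it.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message

Only the wrapped key, nonce and ciphertext need to be sent; the RSA operation runs once on 32 bytes, regardless of how large the message is.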
In SSL/TLS (HTTPS is just HTTP-within-a-SSL/TLS-tunnel), hash functions are used for several things: as part of asymmetric encryption and/or digital signatures; as part of HMAC to allow client and server to verify that exchanged data has not been altered in transit; as a building brick for a Key Derivation Function , which "expands" a given session key into several symmetric keys used for symmetric encryption and integrity checks in both directions of the tunnel. The KDF relies on the "randomizing" and non-invertibility of the hash function. In SSL/TLS up to TLS 1.1, the KDF is built over two hash functions, MD5 and SHA-1, in an attempt to make it robust even if weaknesses were later found in either MD5 or SHA-1. It turns out that weaknesses were found in both , but it did not allow any break on the KDF as used in SSL/TLS. Nevertheless, TLS 1.2 switched to another KDF which uses a single, configurable hash function, usually SHA-256, for which no weakness is currently known. | {
"source": [
"https://security.stackexchange.com/questions/9260",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6078/"
]
} |
9,322 | I'm performing a port scan on a range of IPs at our remote site. I tried running an nmap scan on that IP range and some of the scan results are shown as filtered. When I perform a nessus scan on the box, there is no result at all for some of the IPs. As such, is it safe to assume that there are no open ports on some of the remote servers? | Unless you've got nmap configured not to perform host discovery ( -PN or -PN --send-ip on the LAN), if it is indicating that all ports are filtered , then the host is up , but the firewall on that host is dropping traffic to all the scanned ports. Note that a default nmap scan does not probe all ports. It only scans 1000 TCP ports. If you want to check for any services, you'll want to check all 65535 TCP ports and all 65535 UDP ports. Also, to be precise: when the port scan says a port is filtered , that doesn't mean that there is no service running on that port. It's possible that the host's firewall has rules that are denying access to the IP from which you're running the scan , but there may be other IPs which are allowed to access that service. If the port scan reports that a port is closed , that's a more definitive indication that there's no service listening on that port. I can't comment on the lack of results from nessus; it's been a while since I've used it. Example of closed vs. filtered vs. host-down E.g., on my network, this host is up, has no services running, and does not have a firewall; note that the ports are reported as closed (this means the host responded to probes on that port): % sudo nmap -T4 -n 192.168.1.24
Starting Nmap 5.00 ( http://nmap.org ) at 2011-11-30 11:20 EST
All 1000 scanned ports on 192.168.1.24 are closed
MAC Address: 00:0E:00:AB:CD:EF (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 7.70 seconds This host is up, has no services running on ports 100-1000, and has a firewall. Note that the ports are reported as filtered (this means that the host dropped probes to those ports): % sudo nmap -T4 -n -p 100-1000 192.168.1.45
Starting Nmap 5.00 ( http://nmap.org ) at 2011-11-30 11:24 EST
All 901 scanned ports on 192.168.1.45 are filtered
MAC Address: 00:12:34:AA:BB:CC (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 20.03 seconds Just for illustration, I punched a temporary hole in the firewall for that last host for port 443 and reran the scan. (There's nothing running on 443 there.) Notice how 998 ports are reported filtered , but port 443 is reported as closed ; the firewall is allowing 443 through, and the OS responds with an RST. % sudo nmap -T4 -n 192.168.1.45
Starting Nmap 5.00 ( http://nmap.org ) at 2011-11-30 11:43 EST
Interesting ports on 192.168.1.45:
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
443/tcp closed https
MAC Address: 00:12:34:AA:BB:CC (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 5.67 seconds There is no host at this address ( host down ): % sudo nmap -T4 -n 192.168.1.199
Starting Nmap 5.00 ( http://nmap.org ) at 2011-11-30 11:26 EST
Note: Host seems down. If it is really up, but blocking our ping probes, try -PN
Nmap done: 1 IP address (0 hosts up) scanned in 0.56 seconds if I rescan with -PN --send-ip (the latter is needed because I'm scanning the LAN, and I don't want to use ARP probes), I see: % sudo nmap -T4 -n -PN --send-ip 192.168.1.199
Starting Nmap 5.00 ( http://nmap.org ) at 2011-11-30 11:29 EST
All 1000 scanned ports on 192.168.1.199 are filtered
Nmap done: 1 IP address (1 host up) scanned in 101.44 seconds | {
"source": [
"https://security.stackexchange.com/questions/9322",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6112/"
]
} |
9,336 | I've taken, out of curiosity, the Phishing Quiz by OpenDNS that tries to teach the general public about detecting phishing: Ever wonder how good you are at telling the difference between a legitimate website and one that's a phishing attempt? Take this quiz to find out. I did it mostly because my phishing detection process basically looks like this: Does the address bar look legit? ...and if it does I'll think the site is legit. That is most likely simplistic, but (spoiler alert!) you can get all 14 questions right just by looking at the address bar. "Yahoo!! is upgrading!!!" from "docs.google.com/spreadsheets"? Yeeeeah, that kind of barely smells like phishing. Assuming the DNS server is not compromised, is the address bar really that bulletproof, though? If it isn't, how can I detect whether it's been tampered with or not? | The answer depends upon what kind of browser you are using, so I'll break it down. Mobile browsers. You can't trust anything you see. Sorry. Life on a small screen sucks. That's just the way it is. (Did you want an explanation why? Any web page can go full-screen, using mobile-specific tricks like scrolling the page down so that the address bar is not showing. Then, it can draw a fake address bar. As a user, there is no reasonable method you can use to detect this forgery by looking at the screen. See also the research paper iPhish: Phishing Vulnerabilities on Consumer Electronics , where the authors implemented this attack and tried it out; they learned that users could not detect it, not even computer science graduate students with knowledge of security, not even when they had been warned in advance of phishing attacks.) So basically, there is no reliable way to detect phishing attacks on mobile browsers by looking at the screen. The only way to protect yourself is to go straight to the site yourself (e.g., by clicking on a bookmark) before entering your credentials: never get there via a link. Older desktop browsers. On older browsers, like IE6 or Firefox 2, I don't know whether there is anything you can trust. They are so riddled with security holes that I would not count on them for security. The only way to protect yourself is to simply avoid those browsers: friends don't let friends use IE6. But you said you are not interested in them, so I think we can disregard them. Modern desktop browsers. This is the nub of the issue, and I suspect what you were most interested in. On modern desktop browsers, there are only a few things you can trust: Domain name. You can trust the domain name in the address bar to indicate the site you are currently on. Blue/green glow. You can trust the blue/green glow behind the name in the address bar to indicate the presence of SSL. You can also trust the `https://`` schema to indicate the use of SSL, but the glow may be easier to check for. That's essentially it. You can not trust the icon next to the URL (the favicon), even if it looks like a padlock or something; you cannot trust anything below the address bar; you cannot trust the text in the status bar when you hover over a link before clicking on it. You should not rely upon anything in the URL after the domain name, as the same-origin policy makes no distinction between different URLs on the same domain. How to protect yourself. Let's put it all together. If you want to be safe when visiting some site -- say, your bank -- there are two viable strategies you can use to protect yourself: Prevention: use a bookmark. 
To prepare, make a bookmark to the login page for your bank. (Make sure it is a https page you are bookmarking.) Then, when you want to log in, click on the bookmark, then enter in your credentials and use the site. Never navigate to your bank's site by clicking on links; if you find yourself on your bank's site from some other source, then make sure you're on the right page by clicking on the bookmark before you use the site. This essentially prevents any opportunity for a phishing attack on your bank credentials. Detection: check the address bar. Alternatively, you can try to detect phishing attacks. Every time you visit your bank web site, check to make sure that the domain name in the address bar matches your bank's domain name before entering your credentials or using the site. This will be a pretty reliable way to detect phishing attacks -- as long as you always remember to check the address bar. The primary shortcoming of the approach is the obvious one: you have to always remember to check the address bar. Apart from that caution, this is a viable strategy, too. Also, you may want to check that you are currently using SSL, by checking for the blue/green glow (this is especially important when using a wireless network); if you happen to know that your bank has had a green glow in the past, then on subsequent visits make sure the glow is still green. Alternatively, if you use Firefox, you could install HTTPS Everywhere and save yourself from having to check for SSL. Caveats and attacks. I need to qualify the above remarks a bit. Checking the address bar is not a foolproof defense. There are some sophisticated presentation attacks that could potentially fool you, even if you look at the address bar. Let me outline the major ones: Picture-in-picture attacks. Recall that a web page can completely control what pixels are drawn on the portion of the browser window where it is displayed (e.g., by providing a bitmap image, which will be displayed exactly as is). In a picture-in-picture attack, a malicious web page arranges to draw something inside its window, so visually it looks like there a second, smaller browser window has popped up on top (and its outline is completely contained within the outer window). This is hard to describe in words, so here is an example image: Credits: Fig.2 of An Evaluation of Extended Validation and Picture-in-Picture Phishing Attacks by Jackson, SImon, Tan, and Barth. In this example, the user is visiting a malicious web page: http://paypal.login.com , as shown in the outermost address bar. The malicious page controls the contents of all the pixels inside this outer window (except for the chrome around the border). In this area, the attacker has drawn an image that replicates the appearance of a second smaller browser window, popped up on top. Since the attacker controls all of the pixels of that image, the attacker can completely control the "address bar" of the "inner window". The attacker has chosen to spoof an "address bar" containing https://www.paypal.com . If you weren't careful, you might conclude that the inner "window" has focus, look at its address bar, conclude you are talking to Paypal, and enter your Paypal password or other personal details into the inner "window". If you do that, you've actually revealed your Paypal password to the attacker. This attack can be tricky to mount successfully. It relies upon the user not getting suspicious when a new browser window appears to pop up for no good reason, when they weren't trying to visit Paypal. 
It also requires significant engineering effort from the attacker to make it look and feel correct. For instance, the attacker would need to use Javascript to identify what browser and operating system you are using, then craft a spoofed image that matches your browser version, and for full fidelity, the attacker might need to implement Javascript handlers to let you drag the inner window around (you won't be able to drag it outside the confines of the outer window), interact with the spoofed chrome of the inner window, and so on. For more on this attack, read An Evaluation of Extended Validation and Picture-in-Picture Phishing Attacks by Jackson, Simon, Tan, and Barth. I'm only aware of one instance of this sort of attack in the wild. Homograph attacks. See "The Homograph Attack" by Gabrilovich and Gontmakher, and "The methodology and an application to fight against unicode attacks" by Fu, Deng, Wenyin, and Little. Full-screen attacks. (TBD) To my knowledge, these attacks are rarely (if ever) seen in practice, and they might have only a partial chance of fooling users, but I wanted to outline them so you are aware of the ways in which browser security is not perfect. | {
"source": [
"https://security.stackexchange.com/questions/9336",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1451/"
]
} |
9,409 | Instead of asking about the pros and cons of specific hardware, I thought I would ask a broader question: What are the differences between really expensive and inexpensive firewalls? What extra features/support will you typically get? And while firewalls (hardware) will need software to run, is it necessary to still use a standard OS' firewall? It was suggested here (in scenario 3) that multiple firewalls are more beneficial to a larger company (500+ people). Is the reason just that the more firewalls you have, the more protected you will be? This question was IT Security Question of the Week . Read the Jan 6, 2012 blog entry for more details or submit your own Question of the Week. | One of the biggest and most important factors between firewall hardware is maximum throughput and typical latency. Low-cost firewall hardware usually has a maximum throughput of less than 100Mbit/sec even though the network adapters might theoretically support more than that. This means that the firewall becomes your network bottleneck in many scenarios. Determining the throughput of your firewall involves more than just looking at the port speed. Processing data quickly requires fast CPUs, high-speed interconnects and plenty of fast RAM, and very high-quality network adapters. Just the hardware alone for a firewall capable of passing data at 1 Gbit/sec with minimal examination can put you up in the several hundred dollar range. Furthermore, the more examination you do on passing traffic, the more CPU power and memory you consume. Adding complex filtering logic will significantly increase the CPU and RAM utilization. And any resource exhaustion will result in delays or dropped packets. Doing stateful packet inspection and particularly application-level protocol logic is possible even on a $60 home router, but the performance impact at high utilization would be severe. The interface for your hardware is generally not a major expense, but since most people equate the quality of the interface with the quality of the hardware in general, companies with deep interest in the survival of their product generally will put some extra effort into making the interface easier to use. At least, that's the theory. Sometimes companies like Juniper and Cisco make me doubt that assumption. Also bear in mind that some companies build an entire business around not only providing firewall hardware and software, but also in providing regular updates to the OS, the filtering rules, patterns for spam and malware, and so forth. If you want that kind of service, you have to help pay the salaries for the people who provide it. | {
"source": [
"https://security.stackexchange.com/questions/9409",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6161/"
]
} |