Dataset columns: source_id (int64, values 1 to 4.64M), question (string, 0 to 28.4k characters), response (string, 0 to 28.8k characters), metadata (dict).
1,599
What is the difference between SSH and SSL? Which one is more secure, if you can compare them together? Which has more potential vulnerabilities?
SSL and SSH both provide the cryptographic elements to build a tunnel for confidential data transport with checked integrity. For that part, they use similar techniques, and may suffer from the same kind of attacks, so they should provide similar security (i.e. good security) assuming they are both properly implemented. That both exist is a kind of NIH syndrome: the SSH developers should have reused SSL for the tunnel part (the SSL protocol is flexible enough to accommodate many variations, including not using certificates). They differ on the things which are around the tunnel. SSL traditionally uses X.509 certificates for announcing server and client public keys; SSH has its own format. Also, SSH comes with a set of protocols for what goes inside the tunnel (multiplexing several transfers, performing password-based authentication within the tunnel, terminal management...) while there is no such thing in SSL, or, more accurately, when such things are used in SSL they are not considered to be part of SSL (for instance, when doing password-based HTTP authentication in a SSL tunnel, we say that it is part of "HTTPS", but it really works in a way similar to what happens with SSH). Conceptually, you could take SSH and replace the tunnel part with the one from SSL. You could also take HTTPS and replace the SSL thing with SSH-with-data-transport and a hook to extract the server public key from its certificate. There is no scientific impossibility and, if done properly, security would remain the same. However, there is no widespread set of conventions or existing tools for that. So we do not use SSL and SSH for the same things, but that's because of what tools historically came with the implementations of those protocols, not due to a security related difference. And whoever implements SSL or SSH would be well advised to look at what kind of attacks were tried on both protocols.
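To make the "X.509 certificates vs. SSH's own key format" point concrete, here is a minimal sketch (Python and network access assumed; the host name is only a placeholder) that fetches the certificate a TLS/SSL server announces. The equivalent SSH step would be fetching the server's raw host key (e.g. with a tool like ssh-keyscan) and pinning it, rather than validating a certificate chain.

```python
# Minimal sketch, standard library only: grab the X.509 certificate that a
# TLS (SSL) server announces during the handshake. Host and port are placeholders.
import ssl

pem_cert = ssl.get_server_certificate(("example.com", 443))
print(pem_cert.splitlines()[0])                    # -----BEGIN CERTIFICATE-----
print(len(pem_cert), "bytes of PEM-encoded certificate")

# SSH announces its server public key in its own format instead of an X.509
# certificate; clients typically pin it in ~/.ssh/known_hosts on first use.
```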
{ "source": [ "https://security.stackexchange.com/questions/1599", "https://security.stackexchange.com", "https://security.stackexchange.com/users/377/" ] }
1,606
On a recent certification exam, I was presented with a question about ways to secure an 802.11 wireless network. It was a multiple-answer question, but the only two available answers that related at all to security were addressing SSID Hiding and MAC Address Filtering. Fortunately, this particular certification was neither wireless- nor security-focused. I'm aware that these two should definitely not be your only means of securing your wireless network, but are they still really considered worth implementing on top of more sturdy authentication and encryption mechanisms?
They're stumbling blocks, but not insurmountable. SSID hiding can provide some protection from people looking for any SSID they can get their hands on, and MAC filtering can keep casual riffraff out. As the only methods of protecting a WLAN they're pretty weak. For someone who targets your network specifically, encryption (especially unbroken encryption) will provide vastly better security. MAC spoofing is trivial in most adapters these days, and after you've cracked the network to the point you can monitor in-flight packets, you can get a list of valid MAC addresses. SSID is trivial at that point as well. Due to the automated nature of the toolsets available, MAC filtering and SSID hiding aren't really worth the effort any more. In my opinion.
{ "source": [ "https://security.stackexchange.com/questions/1606", "https://security.stackexchange.com", "https://security.stackexchange.com/users/953/" ] }
1,687
Consider this. Many websites with software downloads also make available MD5 or SHA1 hashes, for users to verify the integrity of the downloaded files. However, few of these sites actually use HTTPS encryption or digital signatures on the website itself. So, if you're downloading a file from what is effectively an unauthenticated source, and validating it with a hash from the same source (or even another unauthenticated source), what is the real value of hashing the file? Does this not establish a false sense of security, since (in the absence of a digital signature) both the download and the hash could have been tampered with, without the user's knowledge?
So, if you're downloading a file from what is effectively an unauthenticated source, and validating it with a hash from the same source (or even another unauthenticated source), what is the real value of hashing the file? The provided hash lets you double-check that the file you downloaded was not corrupted accidentally in transit, or that the file you downloaded from another source (a faster mirror) is the same as the file available for download at this website. However, there is not really much additional security. A sufficiently skilled cracker can replace the file with a maliciously modified version and the hash with one that matches the modified file; or he can MITM the requests over the network and replace both the file requested with his own and the hash with his own.
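As an illustration of the limited guarantee described above, here is a minimal sketch (Python assumed; the file name and expected digest are placeholders) of the usual verification step. It will catch accidental corruption or a mismatched mirror copy, but if the page serving the hash was tampered with, a successful match proves nothing.

```python
# Minimal sketch: verify a downloaded file against a published digest.
# The file name and the expected digest are placeholders.
import hashlib

def file_digest(path, algorithm="sha256", chunk_size=1 << 20):
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0123456789abcdef...placeholder"        # digest copied from the download page
actual = file_digest("downloaded-installer.tar.gz")
print("match" if actual == expected else "MISMATCH")
# A match only proves the file equals whatever the (possibly tampered) page
# advertised; it is not a substitute for HTTPS or a digital signature.
```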
{ "source": [ "https://security.stackexchange.com/questions/1687", "https://security.stackexchange.com", "https://security.stackexchange.com/users/953/" ] }
1,692
Working on the assumption that SSL serves both to encrypt data and to provide assurance as to the identity and legitimacy of the website, should the practice of providing a logon form on a page requested over HTTP be avoided, even when it posts to HTTPS? The question relates to a post I made yesterday about the Who's who of bad password practices and some of the feedback suggesting that not visibly seeing the certificate represented in the browser before authenticating was fine if indeed the form posted securely. To my mind, this sells SSL short as you not only lose the ability to validate the legitimacy of the site before handing over your credentials but you also have no certainty that it is posting over HTTPS. I'm conscious the likes of Twitter and Facebook take this approach, but should they? Am I overlooking something here or is this a practice which should be discouraged? Update: I ended up detailing the outcome of this question and subsequent discussion in the blog post SSL is not about encryption
OWASP states (copied verbatim from http://www.owasp.org/index.php/SSL_Best_Practices ):
Secure Login Pages: There are several major considerations for securely designing a login page. The following text will address the considerations with regards to SSL.
Logins Must Post to an SSL Page: This is pretty obvious. The username and password must be posted over an SSL connection. If you look at the action element of the form, it should be https.
Login Landing Page Must Use SSL: The actual page where the user fills out the form must be an HTTPS page. If it's not, an attacker could modify the page as it is sent to the user and change the form submission location or insert JavaScript which steals the username/password as it is typed.
There Must Be No SSL Error or Warning Messages: The presence of any SSL warning message is a failure. Some of these error messages are legitimate security concerns; others desensitize users against real security concerns since they blindly click accept. The presence of any SSL error message is unacceptable - even a domain name mismatch for the www.
HTTP Connections Should Be Dropped: If a user attempts to connect to the HTTP version of the login page, the connection should be denied. One strategy is to automatically redirect HTTP connections to HTTPS connections. While this does get the user to the secure page, there is one lingering risk: an attacker performing a man-in-the-middle attack could intercept the HTTP redirect response and send the user to an alternate page.
To repeat: Login Landing Page Must Use SSL.
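As a rough illustration of the "HTTP connections should be dropped" point, here is a hedged sketch using a hypothetical Flask application (Flask is my assumption for illustration; it is not part of the OWASP text). Adding an HSTS header mitigates the lingering man-in-the-middle risk of the redirect for returning visitors.

```python
# Hedged sketch of "drop/redirect HTTP" plus HSTS, in a hypothetical Flask app.
from flask import Flask, request, redirect

app = Flask(__name__)

@app.before_request
def force_https():
    # Redirect any plain-HTTP request to its HTTPS equivalent.
    # (Behind a reverse proxy you would also need to trust forwarded headers.)
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # HSTS tells returning browsers to skip the insecure redirect entirely,
    # which mitigates the MITM-on-redirect risk mentioned above.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```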
{ "source": [ "https://security.stackexchange.com/questions/1692", "https://security.stackexchange.com", "https://security.stackexchange.com/users/136/" ] }
1,728
I recently discovered that my web site was hacked: there was a hidden HTML div that's about selling shoes...! I googled the text in question and voila: thousands of sites have been hacked. Check this out: Google the text 'There is also hang tag made of leather, a slip pocket to put cards' and go to the sites in the results and look at the source code of the page. You'll see something like: <div style="position: absolute; top: -966px; left: -966px;">...</div> with a lot of spammy shoe keywords in there. Example of hacked site: Some band page: http://www.milsteinmusic.com/about.html (In case that should be cleaned up: Shoe-ized version archived here.) (check out the page HTML and look for shoes...). My question is: How did this happen? How can we contact all those guys who have been hacked? Where can I report this to an authority?
If you want to do a good turn, you can report the malicious site to several centralized sources. There are some companies that maintain centralized lists of malicious web sites, and you can report the web sites to those companies.
Here are some places you can report phishing sites:
- Report a phishing site to Google
- Report a phishing site to Symantec
- Report a phishing site to PhishTank (previously existing account required)
- Report a phishing email to Anti Phishing Working Group (via [email protected])
- Report a phishing site to the US Government (US-CERT) (via [email protected])
And some places you can report bad/malicious sites in general:
- Report a malicious site to Google [*]
- Report a phishing or malware site to Spam404
- Report a phishing or malware site to Microsoft (account required)
Reporting the site to these lists helps other users. Many modern browsers will query one of the lists maintained by these companies, and warn other users who try to visit that site. Here is a good list of places to report to: https://decentsecurity.com/#/malware-web-and-phishing-investigation/
Notifying the owners of the website is a bit harder. Here are some options:
- You can poke around the website to see if it lists any information about how to notify the owners about security problems.
- Sometimes email to [email protected], [email protected], [email protected], or [email protected] will reach a system administrator (replace example.org with the domain of the malicious site). You could try emailing all of those addresses.
- You could use WHOIS to look for contact information for the site owners. See this example.
- You can use abuse.net to simplify the process of contacting the site owners: you'll have to register, but once you register, email to [email protected] is forwarded to the site owners of example.com.
Related:
- What are common/official methods of reporting spam/phishing/nasty-grams to organizations?
- Unknown malware, how to report it and whom to report it to?
- What is a good method to report security breaches that are being used to actively spam?
Footnote: Thanks to Zoredache for the sites listed with a *!
{ "source": [ "https://security.stackexchange.com/questions/1728", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1135/" ] }
1,735
I am learning to use Metasploit as part of one of my college lessons. As you may know, there are software builds like NOWASP (Mutillidae) or Damn Vulnerable Linux that let you practice pentesting and similar things. I have heard that for the payloads to work, the target/victim machine should be running as a server. I tried to set up a server on the same machine (through VirtualBox) and use it as the target, but it failed. So, do you know of a server or something similar that would allow me to practice (legally, against test systems)?
Some places to practice:
- http://www.irongeek.com/i.php?page=security/wargames
- WebGoat: a set of deliberately insecure Java server pages.
- http://www.hackthissite.org/
- http://www.smashthestack.org/wargames (from their FAQ: "The Smash the Stack Wargaming Network hosts several Wargames. A Wargame in our context can be described as an ethical hacking environment that supports the simulation of real world software vulnerability theories or concepts and allows for the legal execution of exploitation techniques. Software can be an Operating System, network protocol, or any userland application.")
- http://www.astalavista.com/page/wargames.html
- http://www.governmentsecurity.org/forum/index.php?showtopic=15442
- http://www.overthewire.org/wargames/
The list is long; some are up, some not.
Update 26 Feb 2011: I found a nice post at http://r00tsec.blogspot.com/2011/02/pentest-lab-vulnerable-servers.html (some links might be broken). I copy from there:
- Holynix: similar to the De-ICE CDs and pWnOS, Holynix is an Ubuntu server VMware image that was deliberately built to have security holes for the purposes of penetration testing. More of an obstacle course than a real-world example. http://pynstrom.net/index.php?page=holynix.php
- WackoPicko: a website that contains known vulnerabilities. It was first used for the paper "Why Johnny Can't Pentest: An Analysis of Black-box Web Vulnerability Scanners": http://cs.ucsb.edu/~adoupe/static/black-box-scanners-dimva2010.pdf and https://github.com/adamdoupe/WackoPicko
- De-ICE PenTest LiveCDs: the creation of Thomas Wilhelm, who was transferred to a penetration test team at the company he worked for. Needing to learn as much about penetration testing as quickly as possible, he began looking for both tools and targets. He found a number of tools, but no usable targets to practice against, so he created PenTest scenarios using LiveCDs. http://de-ice.net/hackerpedia/index.php/De-ICE.net_PenTest_Disks
- Metasploitable: an Ubuntu 8.04 server install on a VMware 6.5 image. A number of vulnerable packages are included, such as Tomcat 5.5 (with weak credentials), distcc, TikiWiki, TWiki, and an older MySQL. http://blog.metasploit.com/2010/05/introducing-metasploitable.html
- OWASP BWA: the Open Web Application Security Project (OWASP) Broken Web Applications Project, a collection of vulnerable web applications. http://code.google.com/p/owaspbwa/
- Web Security Dojo: a free, open-source, self-contained training environment for web application security penetration testing. Tools + Targets = Dojo. http://www.mavensecurity.com/web_security_dojo/
- LAMPSecurity: a series of vulnerable virtual machine images along with complementary documentation designed to teach Linux, Apache, PHP, and MySQL security. http://sourceforge.net/projects/lampsecurity/files/
- Damn Vulnerable Web App (DVWA): a PHP/MySQL web application that is damn vulnerable. Its main goals are to be an aid for security professionals to test their skills and tools in a legal environment, to help web developers better understand the process of securing web applications, and to aid teachers/students in teaching/learning web application security in a classroom environment. www.dvwa.co.uk
- Hacking-Lab: the Hacking-Lab LiveCD project, currently in beta. The live CD is a standardized client environment for solving Hacking-Lab wargame challenges remotely. http://www.hacking-lab.com/hl_livecd/
- Moth: a VMware image with a set of vulnerable web applications and scripts. http://www.bonsai-sec.com/en/research/moth.php
- Damn Vulnerable Linux (DVL): everything a good Linux distribution isn't. Its developers have spent hours stuffing it with broken, ill-configured, outdated, and exploitable software that makes it vulnerable to attacks. DVL isn't built to run on your desktop; it's a learning tool for security students. http://www.damnvulnerablelinux.org
- pWnOS: a VM image that creates a target on which to practice penetration testing, with the "end goal" of getting root. It was designed for practicing exploits, with multiple entry points. http://www.backtrack-linux.org/forums/backtrack-videos/2748-%5Bvideo%5D-attacking-pwnos.html and http://www.krash.in/bond00/pWnOS%20v1.0.zip
- Virtual Hacking Lab: a mirror of deliberately insecure applications and old software with known vulnerabilities, used for proof-of-concept / security training / learning purposes. Available as virtual images, live ISOs, or standalone formats. http://sourceforge.net/projects/virtualhacking/files/
- Badstore: Badstore.net is dedicated to helping you understand how hackers prey on web application vulnerabilities, and to showing you how to reduce your exposure. http://www.badstore.net/
- Katana: a portable multi-boot security suite which brings together many of today's best security distributions and portable applications to run off a single flash drive. It includes distributions which focus on pen-testing, auditing, forensics, system recovery, network analysis, and malware removal. Katana also comes with over 100 portable Windows applications, such as Wireshark, Metasploit, Nmap, Cain & Abel, and many more. www.hackfromacave.com/katana.html
{ "source": [ "https://security.stackexchange.com/questions/1735", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1136/" ] }
1,750
On this answer , cjk says RSA and PGP are different. What you are essentially asking is how do I run my petrol car on diesel? The answer is you can't. I would be interested in a more detailed comparison between the two, why they are different, and why one would choose one over the other.
RSA is an algorithm (actually, two algorithms: one for asymmetric encryption, and one for digital signatures -- with several variants). PGP is originally a piece of software, now a standard protocol, usually known as OpenPGP. OpenPGP defines formats for data elements which support secure messaging, with encryption and signatures, and various related operations such as key distribution. As a protocol, OpenPGP relies on a wide range of cryptographic algorithms, which it assembles together (which is not as easy as it seems, if you want the result to be secure). Among the algorithms that OpenPGP can use is RSA. So, to keep with the car analogy, your question is like: "What is the difference between a combustion engine and a Honda Accord? Why would one choose one over the other?" The question makes no sense per se: the Accord comes with a combustion engine under its hood. It also comes with a bunch of other useful features, such as wheels; you cannot do much with a combustion engine alone. Still in that analogy, you can imagine cars without a combustion engine, e.g. electric cars. Translated into the OpenPGP world, the question becomes: can OpenPGP perform its work without using RSA? And the answer is yes: there are other asymmetric encryption and digital signature algorithms that OpenPGP can use, which will provide the same functionality as what OpenPGP uses RSA for. Historically, when OpenPGP was first defined, there were still a few unsolved questions about the RSA patent, so implementations were encouraged to use ElGamal and DSA (for asymmetric encryption and digital signatures, respectively) instead of RSA. (The RSA patent expired in 2000.)
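To make the "algorithm vs. protocol" distinction concrete, here is a hedged sketch of the two bare RSA operations, using the third-party Python `cryptography` package (my choice for illustration; the original answer does not prescribe any library). A protocol such as OpenPGP wraps primitives like these in key formats, packet formats, compression, and key-distribution machinery.

```python
# Hedged sketch: RSA as a bare algorithm -- one signing operation and one
# asymmetric-encryption operation -- using the `cryptography` package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
message = b"hello"

# RSA for digital signatures (PSS padding)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss, hashes.SHA256())   # raises if invalid

# RSA for asymmetric encryption of a small payload, e.g. a session key (OAEP padding)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message
```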
{ "source": [ "https://security.stackexchange.com/questions/1750", "https://security.stackexchange.com", "https://security.stackexchange.com/users/396/" ] }
1,751
I'm interested in updating this two-pronged question for 2011: What cryptology is most appropriate for low-powered devices (such as a cellphone), and yet still effective? What cryptology is most secure for a .NET developer? In November of '08 Rasmus Faber answered this similar Stack Overflow question with this response:
- Symmetric cipher: AES-256
- Asymmetric cipher: RSA with 4096 bit key (I believe that is the maximum in .NET) or ECDSA with 571 bit key (but that is only supported in .NET 3.5)
- Hash: SHA-512
- Message Authentication Code: HMAC with SHA-512
That being said, those are overkill for most applications, and you should do fine using AES-128, RSA with 2048 bit key, SHA-256 and HMAC with SHA-256. Are these recommendations still true today?
The recommendations you cite are kind of overkill. One point to take into account is that beyond a certain level (e.g. on key size or hash function output size), all functions are "unbreakable with foreseeable technology" and it is a bit delicate to compare them. Stating that SHA-512 is "more robust" than SHA-256 means that you are imagining that SHA-256 could be broken, which, as far as we can tell for now and the next 40 years, is not true (beyond 40 years, trying to envision what technology we could have is risky; 40 years ago, nobody was imagining the Internet as it is today, but most people assumed that by 2010 we would all drive flying cars). AES-128 is already secure enough, and less expensive (AES-256 uses 14 rounds, while AES-128 uses 10 rounds). The largest RSA key broken so far is a 768-bit modulus, and it took some huge effort (four years, and really big brains). 1024-bit keys are considered usable for short-term security, although larger keys are encouraged. 2048-bit keys are appropriate. Using a key twice as large means about 8 times more work for signing or decryption, so you do not want to overdo it. See this site for a survey of how RSA key length can be related to security. ECDSA over a 256-bit curve already achieves an "unbreakable" level of security (i.e. roughly the same level as AES with a 128-bit key, or SHA-256 against collisions). Note that there are elliptic curves on prime fields, and curves on binary fields; which kind is most efficient depends on the involved hardware (for curves of similar size, a PC will prefer the curves on a prime field, but dedicated hardware will be easier to build with binary fields; the CLMUL instructions on the newer Intel and AMD processors may change that). SHA-512 uses 64-bit operations. This is fast on a PC, not so fast on a smartcard. SHA-256 is often a better deal on small hardware (including 32-bit architectures such as home routers or smartphones). Right now, cheap RFID systems are too low-powered to use any of the above (in other words, RFID systems which can use them are not as cheap as they could be). RFID systems still use custom algorithms of often questionable security. Cellphones, on the other hand, have ample enough CPU power to do proper cryptography with AES or RSA (yes, even cheap non-smart phones).
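As a small illustration of the "AES-128 is already secure enough" baseline (the original question concerns .NET; this Python sketch with the third-party `cryptography` package is only meant to show the kind of parameter choices discussed above, not a prescribed implementation):

```python
# Hedged sketch: authenticated encryption with AES-128-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # 128-bit key: 10 AES rounds, cheaper than AES-256
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce; never reuse with the same key
ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", b"associated-data")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"associated-data")
assert plaintext == b"attack at dawn"
```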
{ "source": [ "https://security.stackexchange.com/questions/1751", "https://security.stackexchange.com", "https://security.stackexchange.com/users/396/" ] }
1,786
If I have a message that I need to send to another person, how do I achieve non-repudiation? Is digitally signing the message sufficient?
No. Digital signatures are not sufficient for non-repudiation -- not by a long shot. Non-repudiation is a legal concept. It means that, if there is a dispute, in a lawsuit it will be possible to hold one party to their commitments. For example, mathematical schemes that claim to provide non-repudiation have to withstand the "jury attack". Some expert witness is going to have to be able to explain, in non-technical terms that an ordinary juror (and judge) can understand, why the mathematics proves anything at all. Meanwhile, an expert witness for the other side is going to be arguing the opposite. If the scheme uses fancy mathematics, it is likely to be incomprehensible to the jurors and the judge, and hence not likely to be of much use in a lawsuit. This is a kind of attack that most mathematical schemes in the literature are unlikely to be able to stand up to. I'm afraid much of the cryptographic research community has screwed this up. Researchers have written many technical papers that claim to address "the non-repudiation problem", trying to solve it with mathematics -- but what they've failed to accept is that there is a tremendous gap between the crypto-mathematics and the pragmatics and legal issues. And unfortunately, the hardest part of the problem to solve is not the mathematics, but rather the pragmatics and legal issues. Unfortunately, this seems to be a long-standing blind spot within the cryptographic research community. Here are some of the challenges to achieving true non-repudiation that a court or lawyer would be satisfied with: Malware. What if Grandpa's computer is infected with malware, which steals his private key? Are we going to hold him responsible for anything signed by that malware -- even if it means he loses his house? That'd be ridiculous. In particular, an easy way to repudiate is simply to claim "my private key must have been leaked/stolen". Similar remarks can be made about social engineering. When social engineering attacks have a good chance of being successful at stealing the private key, and when the scheme is designed in such a way that ordinary people cannot use it securely, and when the designers know (or should have known) this, I think it is questionable whether jurors will be willing to hold Grandpa responsible, simply because he got screwed by a poorly-designed security system. Humans vs. computers. Legally, non-repudiation is about the actions of a human . A court is going to be looking for evidence that a human (e.g., Grandpa) assented to the terms of the contract/transaction. The cryptographic schemes cannot achieve that. They can only show that some computer performed some action. Cryptographers like to assume that the computer acts as an agent of the human and the computer's actions can stand in for the human's actions, but this is not a reasonable assumption. For example, malware on the person's computer can apply the private key without the human's consent. Basically, most of the cryptographic research into non-repudiation schemes has the wrong threat model. It is based on assumptions that we've since discovered are faulty. If you'd like to learn more, there has been a great deal published on these gaps between what's called "non-repudiation" in the cryptographic literature vs what lawyers would accept as adequate. Here are some example publications where you can read more: Carl Ellison, Non-repudiation . Ross Anderson, Liability and Computer Security: Nine Principles . 
Brian Gladman, Carl Ellison, and Nicholas Bohm, Digital Signatures, Certificates and Electronic Commerce. Adrian McCullagh and William Caelli, Non-Repudiation in the Digital Environment. Michael Roe, Cryptography and Evidence. Peter Gutmann, Digital Signature Legislation. Greg Broiles provides a legal perspective on non-repudiation.
{ "source": [ "https://security.stackexchange.com/questions/1786", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1157/" ] }
1,806
In an answer to a question about RSA and PGP, PulpSpy noted this: It is possible to generate an RSA key pair using GPG (for both encryption and signing -- you should not use the same key for both). What is the reasoning behind this? Perhaps my understanding of public key encryption is flawed, but I thought the operations went something akin to this: When Bob wants to encrypt a message to Alice, he uses Alice's public key for the encryption. Alice then uses her private key to decrypt the message. When Alice wants to digitally sign a message to Bob, she uses her private key to sign it. Bob then uses Alice's public key to verify the signature. Why is it important to use different keys for encryption and signing? Would this not also mean you need to distribute two public keys to everyone with whom you wish to communicate? I imagine this could easily lead to some amount of confusion and misuse of keys.
It is mostly that the management approaches and timeframes differ for signing and encryption keys. For non-repudiation, you never want someone else to get control of your signing key, since they could impersonate you. But your workplace may want to escrow your encryption key, so that others who need to can get to the information you've encrypted. You also may want a signing key to be valid for a long time, so people around the world can check signatures from the past; but with an encryption key, you often want to roll it over sooner, and be able to revoke old ones without as many hassles.
{ "source": [ "https://security.stackexchange.com/questions/1806", "https://security.stackexchange.com", "https://security.stackexchange.com/users/953/" ] }
1,918
We are implementing self-service password reset on a web application, and I know how I want to do it (email a time-limited password reset URL to the user's pre-registered email address). My problem is that I can't find any references to point the developers at for that technique. Can anyone point me in the direction of some good references on this technique?
Some suggestions:
- Don't reset the user's password until confirmed. Don't immediately reset the user's password. Only reset it once the user clicks on a confirmation link sent to their pre-registered email address.
- Require a CAPTCHA. When a user requests that their password be reset, force them to solve a CAPTCHA before proceeding any further. This is to prevent automated tools from trying to give many users grief, and to force the user to prove they are a human (not a robot).
- Randomness. The time-limited password reset URL should include a random, unguessable component. Make sure you use crypto-quality randomness. The output from /dev/urandom or System.Security.Cryptography.RNGCryptoServiceProvider would be a good choice. The output from rand() or random() or System.Random is not random enough and would be a bad choice. A GUID or timestamp is not random enough and would not be a good choice either. (A minimal sketch follows this list.)
- Include a time limit. The reset confirmation link should expire after some reasonable time: say, 24 hours. The link should be usable only once, and should immediately expire as soon as it is used.
- Include explanatory text in the email. You may want to add some explanatory text to the email, to explain why the email was sent, in case someone requests a reset for an account that is not their own. You could include some text like "Someone has requested that the password be reset for your account username on site. If you made this request, click here to change your password. If you did not make this request, click here to cancel the request."
- Send email after the password is reset. Once the password is successfully reset, send email to the user to let them know that the password has been changed. Don't include the new password in that email.
- Monitor cancellations. You might consider including some logic to monitor the frequency with which users click the cancellation link indicating that they didn't request a reset. If this goes above a certain threshold, it might be useful to send an alert to the system operators. Also, if a cancellation link for some request is visited after the confirmation link is visited, that's a potential indication of an attack against that user -- you may want to take action at that point, e.g., invalidate the user's password and require them to reset their password again. (This is a defense against the following attack: the attacker gains access to the user's mailbox, then requests that their password on your site be reset, then visits the confirmation link. If the attacker doesn't delete these emails from the user's inbox, then when the real user reads their email, they may click the cancellation link, giving you an indication of possible trouble.)
- Use HTTPS. The link should use https:// (not http://), to protect against various attacks (e.g., Firesheep attacks on users surfing the web from an Internet cafe).
- Log these operations. I suggest logging all such requests. In addition to logging the user's username, you may want to log the IP address of the client that requested a reset link be emailed to the user, as well as the IP address of the client that visited the reset link.
- Additional reading. You may also want to read Troy Hunt's excellent blog post, Everything you ever wanted to know about building a secure password reset feature. Thanks to @coryT for a link to this resource.
- Lastly, consider non-password authentication. Passwords have many problems as an authentication mechanism, and you might consider other methods of authenticating users, such as storing a secure persistent cookie on their machine with an unguessable secret to authenticate them. This way, there is no password to forget and no way for the user to be phished, though you do need to provide a way for a user to authorize access from a new machine or a new browser (possibly via email to the user's pre-registered email address). This survey paper covers many fallback authentication methods and their strengths and weaknesses.
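Pulling the randomness, time-limit, and single-use suggestions together, here is a minimal sketch (Python assumed; the URL and the in-memory dict are placeholders for your real site and database):

```python
# Hedged sketch: a time-limited, single-use reset token built on the `secrets`
# module (crypto-quality randomness). The dict stands in for a real database.
import hashlib
import secrets
import time

RESET_TTL_SECONDS = 24 * 3600
_pending_resets = {}   # token-hash -> (username, expiry timestamp)

def create_reset_token(username):
    token = secrets.token_urlsafe(32)            # ~256 bits of randomness
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    _pending_resets[token_hash] = (username, time.time() + RESET_TTL_SECONDS)
    # Only the token goes into the emailed URL; the server stores a hash,
    # so a database leak does not expose usable reset links.
    return f"https://example.org/reset?token={token}"

def consume_reset_token(token):
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    record = _pending_resets.pop(token_hash, None)   # single use: removed immediately
    if record is None:
        return None
    username, expiry = record
    return username if time.time() <= expiry else None
```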
{ "source": [ "https://security.stackexchange.com/questions/1918", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1206/" ] }
1,952
NIST provides good guidelines on the length of keys and hashes for various algorithms . But I don't see anything specifically on the length of a random or pseudo-random nonce (number used once). If there is a single good answer for a variety of uses, I'd love to see that. But to make this concrete, I'll use the common "password reset via email" situation, in which the server generates a URL with a pseudo-random path component. It seems a bit similar to HTTP Digest Authentication, in which the example in the RFC seems to have 136 bits (dcd98b7102dd2f0e8b11d0f600bfb0c093). I note that many folks seem to use version 4 UUIDs (which provide 122 pseudo-random bits) or this, as discussed at Are GUIDs safe for one-time tokens? , though the user has to beware the use of previous much-more-predictable UUID versions, and nasty persistent local attacks on the Windows random number generator which were mostly patched by 2008. But ignoring the riskiness of getting tangled up in UUID versions and implementations, how many pseudo-random bits should be incorporated in the URL?
A 64-bit nonce is likely more than sufficient for most practical purposes, if the 64 bits are crypto-quality randomness. Why is 64 bits sufficient? Let me lay out the kind of reasoning you can use to answer this question. I'll assume this is a single-use time-limited URL; after it is used once, it is no longer valid, and after a little while (3 days, say), it expires and is no longer valid. Since the nonce is only meaningful to the server, the only way that an attacker can try a guess is to send the 64-bit guess to the server and see how the server responds. How many guesses can the attacker try in the time before the nonce expires? Let's say the attacker can make 1000 HTTP requests per second (that's a pretty beefy attacker); then the attacker can make about 1000*3600*24*3 = 2^28 guesses within a 3-day period. Each guess has a 1/2^64 chance of being right. Therefore, the attacker has at most a 1/2^36 chance of breaking the scheme. That should be more than secure enough for most settings.
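The same back-of-the-envelope arithmetic, spelled out (the numbers mirror the reasoning above; the `secrets` module is just one convenient way to get crypto-quality bytes):

```python
# The guessing arithmetic from the answer, plus generating a 64-bit nonce.
import math
import secrets

guesses = 1000 * 3600 * 24 * 3          # 1000 requests/s for 3 days
print(guesses, "~ 2 **", round(math.log2(guesses)))   # 259,200,000 ~ 2**28

success_probability = guesses / 2**64    # each guess hits a 64-bit nonce with prob 1/2**64
print(success_probability)               # ~ 2**-36, i.e. about 1.5e-11

nonce = secrets.token_hex(8)             # 8 random bytes = 64 bits, hex-encoded
print(nonce)
```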
{ "source": [ "https://security.stackexchange.com/questions/1952", "https://security.stackexchange.com", "https://security.stackexchange.com/users/453/" ] }
2,096
I am planning to develop a website that requires users to register a username and a password. When I let the user choose a password, what characters should I allow in the password? Are there any that I shouldn't allow, because of security issues with the HTTP protocol or the implementation language? I haven't decided on an implementation language yet, but I will use Linux.
From a security/implementation perspective, there shouldn't be any need to disallow characters apart from '\0' (which is hard to type anyway). The more characters you bar, the smaller the total phase space of possible passwords and therefore the quicker it is to brute-force passwords. Of course, most password-guessing actually uses dictionary words rather than systematic searches of the input domain... From a usability perspective, however, some characters are not typed the same way on different machines. As an example, I have two different computers here where shift-3 produces # on one and £ on the other. When I type a password in, both appear as '*' so I don't know whether I got it right or not. Some people think that could confuse people enough to start disallowing those characters. I don't think it's worth doing. Most real people access real services from one or maybe two computers, and don't tend to put many extended characters in their passwords.
{ "source": [ "https://security.stackexchange.com/questions/2096", "https://security.stackexchange.com", "https://security.stackexchange.com/users/69/" ] }
2,120
I'm running a rather large site with thousands of visits every day, and a rather large userbase. Since I started migrating to MVC 3, I've been putting the AntiForgeryToken in a number of forms, that modify protected data etc. Some other forms, like the login and registration also use the AntiForgeryToken now, but I'm becoming dubious about their need there in the first place, for a couple reasons... The login form requires the poster to know the correct credentials. I can't really think of any way an CSRF attack would benefit here, especially if I check that the request came from the same host (checking the Referer header). The AntiForgeryToken token generates different values every time the page is loaded. If I have two tabs open with the login page, and then try to post them, the first one will successfully load. The second will fail with a AntiForgeryTokenException (first load both pages, then try to post them). With more secure pages - this is obviously a necessary evil, with the login pages - seems like overkill, and just asking for trouble. There are possibly other reasons why one should use or not use the token in ones forms. Am I correct in assuming that using the token in every post form is overkill, and if so - what kind of forms would benefit from it, and which ones would definitely not benefit? P.S. This question is also asked on StackOverflow, but I'm not entirely convinced. I thought I'd ask it here, for more security coverage
Yes, it is important to include anti-forgery tokens for login pages. Why? Because of the potential for "login CSRF" attacks. In a login CSRF attack, the attacker logs the victim into the target site with the attacker's account. Consider, for instance, an attack on Alice, who is a user of Paypal, by an evil attacker Evelyn. If Paypal didn't protect its login pages from CSRF attacks (e.g., with an anti-forgery token), then the attacker can silently log Alice's browser into Evelyn's account on Paypal. Alice gets taken to the Paypal web site, and Alice is logged in, but logged in as Evelyn. Suppose Alice then clicks on the page to link her credit card to her Paypal account, and enters her credit card number. Alice thinks she is linking her credit card to her Paypal account, but actually she has linked it to Evelyn's account. Now Evelyn can buy stuff, and have it charged to Alice's credit card. Oops. This is subtle and a bit obscure, but serious enough that you should include anti-forgery tokens for the form action target used to log in. See this paper for more details and some real-world examples of such vulnerabilities. When is it OK to leave off the anti-forgery token? In general, if the target is a URL, and accessing that URL has no side effects, then you don't need to include anti-forgery token in that URL. The rough rule of thumb is: include an anti-forgery token in all POST requests, but you don't need it for GET requests. However, this rough rule of thumb is a very crude approximation. It makes the assumption that GET requests will all be side-effect-free. In a well-designed web application, that should hopefully be the case, but in practice, sometimes web application designers don't follow that guideline and implement GET handlers that have a side effect (this is a bad idea, but it's not uncommon). That's why I suggest a guideline based upon whether the request will have a side effect to the state of the web application or not, instead of based on GET vs POST.
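For readers outside ASP.NET: the MVC AntiForgeryToken helper does this work for you, but the underlying mechanism is small enough to sketch in a framework-agnostic way (Python used here purely for illustration; the session object is assumed to be a server-side, per-visitor store):

```python
# Framework-agnostic sketch of what an anti-forgery (CSRF) token boils down to.
import hmac
import secrets

def issue_csrf_token(session):
    # Bind a random, unguessable value to the visitor's session and embed the
    # same value as a hidden field in the form (including the login form).
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session, submitted_token):
    expected = session.get("csrf_token")
    # Constant-time comparison; reject if the token is missing or different.
    return bool(expected) and hmac.compare_digest(expected, submitted_token or "")
```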
{ "source": [ "https://security.stackexchange.com/questions/2120", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1394/" ] }
2,144
I recently came across an odd JPEG file: a resolution of 400x600 and a filesize of 2.9 MB. I got suspicious and suspected that there is some additional information hidden. I tried some straightforward things: opened the file with some archive tools; tried to read its content with an editor, but I couldn't locate anything interesting. Now my questions: What else can I do? Are there any tools available that analyze images for hidden data? Perhaps a tool that scans for known file headers?
To detect steganography it really comes down to statistical analysis (not a subject I know very well). But here are a few pages that may help you out:
- Steganography Countermeasures and Detection - the Wikipedia page, worth a read to cover the basics.
- An Overview of Steganography for the Computer Forensics Examiner - has quite a long list of tools and some other useful information.
- Steganography Detection - some more information about steganography.
- Steganography Detection with Stegdetect - Stegdetect is an automated tool for detecting steganographic content in images. It is capable of detecting several different steganographic methods used to embed hidden information in JPEG images. The tool hasn't been updated in quite a while, but it was the best-looking free tool I could find with a quick search.
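Before reaching for statistical tools, one cheap check that matches the "scan for known file headers" idea: JPEG data ends at the FF D9 End Of Image marker, and bytes appended after it are a common, low-tech hiding place. A minimal sketch (Python assumed; the file name is a placeholder, and this is only a heuristic):

```python
# Heuristic: report any bytes trailing the last FF D9 (End Of Image) marker.
# Payloads that themselves contain FF D9 may be only partially reported.
def trailing_bytes_after_jpeg_eoi(path):
    with open(path, "rb") as f:
        data = f.read()
    eoi = data.rfind(b"\xff\xd9")
    if eoi == -1:
        return None               # not a (complete) JPEG
    return data[eoi + 2:]         # whatever follows the end-of-image marker

extra = trailing_bytes_after_jpeg_eoi("suspicious.jpg")
if extra:
    print(len(extra), "unexpected trailing bytes; first bytes:", extra[:8])
    # e.g. b"PK\x03\x04" would suggest an appended ZIP archive.
```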
{ "source": [ "https://security.stackexchange.com/questions/2144", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1183/" ] }
2,202
Cryptology is such a broad subject that even experienced coders will almost always make mistakes the first few times around. However, encryption is such an important topic that often we can't afford these mistakes. The intent of this question is to identify and list what not to do with a given algorithm or API. This way we can learn from others' experiences and prevent the spread of bad practices. To keep this question constructive, please: (1) include a "wrong" example; (2) explain what is wrong with that example; (3) provide a correct implementation (if applicable). To the best of your ability, provide references regarding #2 and #3 above.
Don't roll your own crypto. Don't invent your own encryption algorithm or protocol; that is extremely error-prone. As Bruce Schneier likes to say, "Anyone can invent an encryption algorithm they themselves can't break; it's much harder to invent one that no one else can break". Crypto algorithms are very intricate and need intensive vetting to be sure they are secure; if you invent your own, you won't get that, and it's very easy to end up with something insecure without realizing it. Instead, use a standard cryptographic algorithm and protocol. Odds are that someone else has encountered your problem before and designed an appropriate algorithm for that purpose. Your best case is to use a high-level, well-vetted scheme: for communication security, use TLS (or SSL); for data at rest, use GPG (or PGP). If you can't do that, use a high-level crypto library, like cryptlib, GPGME, Keyczar, or NaCl, instead of a low-level one, like OpenSSL, CryptoAPI, JCE, etc. Thanks to Nate Lawson for this suggestion.
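To show what "use a high-level, well-vetted construction" can look like in practice, here is a hedged example using the Fernet recipe from the third-party Python `cryptography` package (one possible choice in the spirit of the libraries listed above, not the only correct answer):

```python
# Hedged example: a high-level recipe instead of home-grown crypto.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # store/manage this key securely
f = Fernet(key)

token = f.encrypt(b"my deep dark secret")   # authenticated encryption; IVs handled for you
print(f.decrypt(token))                     # b'my deep dark secret'
# Tampering with `token` makes decrypt() raise InvalidToken instead of silently
# returning garbage -- exactly the kind of detail home-grown schemes get wrong.
```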
{ "source": [ "https://security.stackexchange.com/questions/2202", "https://security.stackexchange.com", "https://security.stackexchange.com/users/396/" ] }
2,214
We have a WiFi network that we want to be public and free. Does having a password that is known to everyone provide any additional security advantage to the people using this network as opposed to just leaving it without a password? i.e. Can a hacker do more damage on a WiFi network that has no password than he can on a network that does have a password that the hacker knows?
After some discussion with @epeleg in chat, I think I may have a more thorough and (hopefully) clear answer.
TL;DR: The protection afforded to a Wi-Fi network by encryption with a PSK is directly proportional to the complexity of the PSK, and the effort taken to safeguard that PSK. For any environment, this requires striking a careful balance between security and usability.
- Lowest Security/Easiest Usability: No encryption.
- Highest Security/Hardest Usability: WPA2-AES, high-complexity PSK, MAC address filtering, Wireless Intrusion Detection/Prevention System. Require user & device registration for access to the PSK and addition to the MAC filter.
If you intend to provide free WiFi as a service to the community, the balance is probably somewhere in between these - and likely leans toward the former solution. However, even the latter of the above options is very doable if you are willing to put in the effort. Still, protecting a "free WiFi" network by any means doesn't so much prevent attacks outright as it does make them more difficult.
Encrypting network traffic on the WiFi connection is always more secure than sending the traffic in the clear. While not impossible, it is very difficult and time-consuming for an outsider to translate WPA2-encrypted traffic into cleartext. However, most encrypted SOHO and "Free WiFi" networks must rely on a passcode, or Pre-Shared Key (PSK), to protect the encryption mechanism. The amount of protection offered by implementing a password in any system will always vary in direct proportion to the password complexity, and the effort taken to protect that password. Wireless networks are no exception. To try to simply express how this relates to your "Free WiFi" situation, I'll give a few possible configuration scenarios and the benefits/drawbacks of each:
Scenario: Your network is left fully unsecured. Anyone within range of the AP can just hop on and enjoy the free WiFi.
Benefit: This is the easiest for anyone to use and requires practically no administrative overhead.
Drawbacks: This is the most vulnerable network of all. All traffic that does not otherwise use an encryption protocol (such as HTTPS) will be sent in the clear. This network is easy to sniff, spoof, and otherwise manipulate to the benefit of even very inexperienced attackers.
Scenario: Your network is protected with a strong PSK, using WPA2 for authentication and encryption. You have posted the SSID and PSK in a publicly viewable location.
Benefits: The data on your wireless network is encrypted, and nobody can read the data or connect to your wireless network without the PSK. This network is also fairly easy for the end-user to join, and requires little to no administrative overhead.
Drawbacks: Having the PSK publicly accessible in this manner makes it trivial for anyone within range of the network to just grab it and hop on. Attackers will not likely be much deterred by this method.
Scenario: Your network is protected with a strong PSK, using WPA2 for authentication and encryption. You have posted advertisement of the Free WiFi service in a publicly viewable location, which includes contact information for potential users to obtain the password.
Benefits: The data on your wireless network is encrypted, and nobody can read the data or connect to your wireless network without the PSK. With this method, you have personal contact to one degree or another with every user - this helps to somewhat disenchant them of their sense of anonymity on your network. This may help deter some would-be attackers who would rather move on to a less secure network than go to the trouble of contacting someone for your PSK.
Drawbacks: This requires that someone is available within a reasonable amount of time (include the timeframe in your advertisement) either via phone or e-mail to give users login credentials. Users may also circumvent this measure by simply passing the PSK peer-to-peer.
Scenario: Your network is protected with a strong PSK, using WPA2 for authentication and encryption. You have posted advertisement of the Free WiFi service in a publicly viewable location, which includes contact information for potential users to request access. You have also implemented a user and device registration process which includes an Acceptable Use Policy, contact information for registered users, and MAC addresses for all devices. You have also implemented MAC address filtering on the AP, and monitoring/logging services on the network.
Benefits: The data on your wireless network is encrypted, and nobody can read the data on your wireless network without the PSK. Nobody can connect to the wireless network without both the PSK and a registered MAC address. With this method, you have the ability to see if/when your network is being inappropriately used and by whom. You also now have an agreement in place which informs your users that inappropriate use will not be tolerated, and which may absolve you of some legal responsibility if such use occurs.* Potential attackers would much rather find an easier victim than go through such a thorough process, especially when they read the clause of the AUP that mentions monitoring is in use. Users will not easily be able to circumvent the device registration by simply passing along the PSK. You can also revoke a user's access if necessary, by de-registering their MAC address(es) and/or changing to (and distributing via registered user contact info) a new PSK.
Drawbacks: Of all these scenarios, this requires the most administrative work. This will require that someone is available within a reasonable timeframe to perform the complete user registration process - gathering personal information, gathering device information (helping users who don't know how - and most probably won't), archiving the paperwork, and registering new devices with the network. To be fully effective, it will also require that the logs be checked on a regular basis for suspicious activity and/or having some form of IDS/IPS in place. Attackers who obtain the PSK will easily be able to spoof the MAC addresses of other registered devices to either bypass the device filter, or pose as that device's user on the network.
In all scenarios, there are a few things that should be kept in mind:
- By providing unconditional Free WiFi, there is always the possibility that you may be allowing a malicious user onto your network, regardless of what registration or PSK distribution process you put in place.
- For all PSK-secured WiFi systems currently existing (WEP, WPA, WPA2), there are known attack vectors that allow an authenticated user to sniff the traffic of other users on the network as if it were in the clear. (Provided, of course, that the traffic is not encrypted by other means such as HTTPS.)
- Make sure the administration interfaces of all your network equipment are protected by strong, non-default passwords which are not similar to any PSKs you distribute.
- Depending on your local jurisdiction, you may be held liable for the actions of those who use your WiFi network.*
- Your contract with your ISP may not allow promiscuous sharing of your Internet connection.
Lastly, to address your final query: "Can a hacker do more damage on a WiFi network that has no password than he can on a network that does have a password that the hacker knows?" When it comes to unconditionally Free WiFi networks, it's not so much a matter of how much damage the attacker can do as it is how easily he can do it. I hope I've clearly addressed the latter, above.
* I am not a lawyer, and this is not legal advice.
{ "source": [ "https://security.stackexchange.com/questions/2214", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1453/" ] }
2,218
I copied this question verbatim from a tweet by Dave Hull. CIRT = Computer Incident Response Team
After some discussion with @epeleg in chat , I think I may have a more thorough and (hopefully) clear answer. TL;DR: The protection afforded to a Wi-Fi network by encryption with a PSK is directly proportional to the complexity of the PSK, and the effort taken to safeguard that PSK. For any environment, this requires striking a careful balance between security and usability. Lowest Security/Easiest Usability: No encryption. Highest Security/Hardest Usability: WPA2-AES, high-complexity PSK, MAC address filtering, Wireless Intrusion Detection/Prevention System. Require user & device registration for access to PSK and addition to the MAC filter. If you intend to provide free WiFi as a service to the community, the balance is probably somewhere in between these - and likely leans toward the former solution. However, even the latter of the above options is very doable if you are willing to put in the effort. Still, protecting a "free WiFi" network by any means doesn't so much prevent attacks outright as it does make them more difficult. Encrypting network traffic on the WiFi connection is always more secure than sending the traffic in the clear. While not impossible, it is very difficult and time-consuming for an outsider to translate WPA2-encrypted traffic into cleartext. However, most encrypted SOHO and "Free WiFi" networks must rely on a passcode, or Pre-Shared Key (PSK) to protect the encryption mechanism. The amount of protection offered by implementing a password in any system will always vary in direct proportion to the password complexity, and the effort taken to protect that password. Wireless networks are no exception. To try to simply express how this relates to your "Free WiFi" situation, I'll give a few possible configuration scenarios and the benefits/drawbacks of each: Scenario: Your network is left fully unsecured. Anyone within range of the AP can just hop on and enjoy the free WiFi. Benefit: This is the easiest for anyone to use and requires practically no administrative overhead. Drawbacks: This is the most vulnerable network of all. All traffic that does not otherwise use an encryption protocol (such as HTTPS) will be sent in the clear. This network is easy to sniff, spoof, and otherwise manipulate to the benefit of even very inexperienced attackers. Scenario: Your network is protected with a strong PSK, using WPA2 for authentication and encryption. You have posted the SSID and PSK in a publicly viewable location. Benefits: The data on your wireless network is encrypted, and nobody can read the data or connect to your wireless network without the PSK. This network is also fairly easy for the end-user to join, and requires little to no administrative overhead. Drawbacks: Having the PSK publicly accessible in this manner makes it trivial for anyone within range of the network to just grab it and hop on. Attackers will not likely be much deterred by this method. Scenario: Your network is protected with a strong PSK, using WPA2 for authentication and encryption. You have posted advertisement of the Free WiFi service in a publicly viewable location, which includes contact information for potential users to obtain the password. Benefits: The data on your wireless network is encrypted, and nobody can read the data or connect to your wireless network without the PSK. With this method, you have personal contact to one degree or another with every user - this helps to somewhat disenchant them of their sense of anonymity on your network. 
This may help deter some would-be attackers who would rather move on to a less secure network, than go to the trouble of contacting someone for your PSK. Drawbacks: This requires that someone is available within a reasonable amount of time (include the timeframe in your advertisement) either via phone or e-mail to give users login credentials. Users may also circumvent this measure by simply passing the PSK peer-to-peer. Scenario: Your network is protected with a strong PSK, using WPA2 for authentication and encryption. You have posted advertisement of the Free WiFi service in a publicly viewable location, which includes contact information for potential users to request access. You have also implemented a user and device registration process which includes an Acceptable Use Policy, contact information for registered users, and MAC addresses for all devices. You have also implemented MAC address filtering on the AP, and monitoring/logging services on the network. Benefits: The data on your wireless network is encrypted, and nobody can read the data on your wireless network without the PSK. Nobody can connect to the wireless network without both the PSK and a registered MAC address. With this method, you have the ability to see if/when your network is being inappropriately used and by whom. You also now have an agreement in place which informs your users that inappropriate use will not be tolerated, and which may absolve you of some legal responsibility if such use occurs.* Potential attackers would much rather find an easier victim than go through such a thorough process, especially when they read the clause of the AUP that mentions monitoring is in use. Users will not easily be able to circumvent the device registration by simply passing along the PSK. You can also revoke a user's access if necessary, by de-registering their MAC address(es) and/or changing to (and distributing via registered user contact info) a new PSK. Drawbacks: Of all these scenarios, this requires the most administrative work. This will require that someone is available within a reasonable timeframe to perform the complete user registration process - gathering personal information, gathering device information (helping users who don't know how - and most probably won't), archiving the paperwork, and registering new devices with the network. To be fully effective, it will also require that the logs be checked on a regular basis for suspicious activity and/or having some form of IDS/IPS in place. Attackers who obtain the PSK will easily be able to spoof the MAC addresses of other registered devices to either bypass the device filter, or pose as that device's user on the network. In all scenarios, there are a few things that should be kept in mind: By providing unconditional Free WiFi, there is always the possibility that you may be allowing a malicious user onto your network regardless of what registration or PSK distribution process you put in place. For all PSK-secured WiFi systems currently existing (WEP, WPA, WPA2) there are known attack vectors that allow an authenticated user to sniff the traffic of other users on the network as if it were in the clear. (Provided, of course, that the traffic is not encrypted by other means such as HTTPS.) Make sure the administration interfaces of all your network equipment are protected by strong, non-default passwords which are not similar to any PSKs you distribute. 
Depending on your local jurisdiction, you may be held liable for the actions of those who use your WiFi network.* Your contract with your ISP may not allow promiscuous sharing of your Internet connection. Lastly, to address your final query: Can a hacker do more damage on a wifi network that has no password than he can on a network that does have a password that the hacker knows? When it comes to unconditionally Free WiFi networks, it's not so much a matter of how much damage the attacker can do as it is how easily he can do it. I hope I've clearly addressed the latter, above. * I am not a lawyer, and this is not legal advice.
{ "source": [ "https://security.stackexchange.com/questions/2218", "https://security.stackexchange.com", "https://security.stackexchange.com/users/22/" ] }
2,231
I'm not security literate, and if I was, I probably wouldn't be asking this question. As a regular tech news follower, I'm really surprised by the outrage of Anonymous (hacker group), but as a critical thinker, I'm unable to control my curiosity to dig out how exactly they are doing this. Frankly, this group really scares me. One thing that I don't understand is how they haven't been caught yet. Their IP addresses should be traceable when they DDoS, even if they spoof them or go through a proxy. The server with which they are spoofing should have recorded the IPs of these guys in its logs. If the govt. asks the company (which owns the server), won't it hand over the logs? Even if it is a private server owned by these guys, doesn't IANA (or whoever the organization is) have the address & credit card details of the guy who bought & registered the server? Even if they don't have that, can't the ISPs trace back to the place these packets originated? I know, if it was as simple as I said, the government would have caught them already. So how exactly are they able to escape? PS: If you feel there are any resources that would enlighten me, I'll be glad to read them. [Update - this is equally appropriate when referring to the Lulzsec group, so I have added a quick link to the Wikipedia page on them]
My answer pokes at the original question. What makes you think that they don't get caught? The CIA and DoD found Osama bin Laden. Typical means include OSINT, TECHINT, and HUMINT. Forensics can be done on Tor. Secure deletion tools such as sdelete, BCWipe, and DBAN are not perfect. Encryption tools such as GPG and Truecrypt are not perfect. Online communications was perhaps Osama bin Laden's biggest strength (he had couriers that traveled to far away cyber-cafes using email on USB flash drives) and Anonymous/LulzSec's biggest weakness. They use unencrypted IRC usually. You think they'd at least be using OTR through Tor with an SSL proxy to the IM communications server(s) instead of a cleartext traffic through an exit node. Their common use of utilities such as Havij and sqlmap could certainly backfire. Perhaps there is a client-side vulnerability in the Python VM. Perhaps there is a client-side buffer overflow in Havij. Perhaps there are backdoors in either. Because of the political nature of these groups, there will be internal issues. I saw some news lately that 1 in 4 hackers are informants for the FBI. It's not "difficult" to "catch" anyone. Another person on these forums suggested that I watch a video from a Defcon presentation where the presenter tracks down a Nigerian scammer using the advanced transform capabilities in Maltego. The OSINT capabilities of Maltego and the i2 Group Analyst's Notebook are fairly limitless. A little hint; a little OPSEC mistake -- and a reversal occurs: the hunter is now being hunted.
{ "source": [ "https://security.stackexchange.com/questions/2231", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1152/" ] }
2,268
This question has been revised & clarified significantly since the original version. If we look at each trusted certificate in my Trusted Root store, how much should I trust them? What factors should be taken into consideration when I evaluate the trust of each Root CA for potential removal from my local store? More Information: If a CA issues a certificate to an improperly validated party, then that leaves all machines that trust that CA vulnerable to MITM attacks. As a result all CA's stringently validate the requester of a given SSL certificate request to ensure the integrity of their CS chain. However, a large part of this CA verification process is subject to human intervention and provides opportunities to issue a cert to the wrong party. This may be done by CA operator error, government demands, or perhaps the coercion (bribery) of a CA operator. I'd like to learn more about which default CA's are more likely to issue certificates to the wrong party. I intend to use this information to advise users to remove that CA from their Trusted Cert Store. Examples: Suppose the government controlling a particular CA wants to assume the identity of Microsoft.com, and demands an exception to the CA's verification process. That government then also requires that the secrecy of this exception be maintained. The generated key pair would then be used in a MITM attack. Windows Azure Default Trust: Windows Azure supports 275 CA's as shown in the following link. Depending on the use of the particular CA, some of those CA's may increase the surface area of a particular attack. In fact this may be technically required to make some applications work correctly. Amazon Default Trust: (not available) Please share links to Amazon, Google, and VMWare's default CA lists if you come across them. Mozilla: A list of all certificates and audit statements is available. Apple iOS: A list of all iPhone root certificates is mentioned in this #WWDC2017 video.
Update 5 The root problem (heh) with the CA model is that in general practice, any CA can issue certs for any domain, so you're vulnerable to the weakest link. As to who you can trust, I doubt that the list is very long at all, since the stakes are high and security is hard. I recommend Christopher Soghoian's post on the subject, which clarifies the various approaches that governments around the world have used to get access to private user data - whether by directly demanding it from companies that operate cloud services, via wiretap, or increasingly now via CA coercion or hacks: slight paranoia: The forces that led to the DigiNotar hack . Here I provide some specifics, and end with links to some potential fixes. In 2009, Etisalat (60% owned by the United Arab Emirates government), rolled out an innocuous looking BlackBerry patch that inserted spyware into RIM devices, enabling monitoring of e-mail, so it can hardly be considered trustworthy. But it is in a lot of trusted CA lists: http://arstechnica.com/business/news/2009/07/mobile-carrier-rolls-out-spyware-as-a-3g-update.ars Update 1 See also an example of a successful attack, allegedly by an Iranian named ComodoHacker , against Comodo: Rogue SSL certificates ("case comodogate") - F-Secure Weblog . F-Secure notes that Mozilla includes certificates issued by CAs in China, Israel, Bermuda, South Africa, Estonia, Romania, Slovakia, Spain, Norway, Colombia, France, Taiwan, UK, The Netherlands, Turkey, USA, Hong Kong, Japan, Hungary, Germany and Switzerland. Tunisia is another country that runs a widely-trusted CA, and there is also good documentation of the actions of their government to invade privacy: The Inside Story of How Facebook Responded to Tunisian Hacks - Alexis Madrigal - Technology - The Atlantic Mozilla notes another questionable practice to watch out for: CAs that allow an RA partner to issue certs directly off the root, rather than via an intermediary: Comodo Certificate Issue – Follow Up at Mozilla Security Blog . See also more detail, including speculation about the claim of responsibility by a lone Iranian hacker Web Browsers and Comodo Disclose A Successful Certificate Authority Attack, Perhaps From Iran | Freedom to Tinker Update 3 : Another successful attack seemingly also by ComodoHacker was against the DigiNotar CA. Their website was compromised starting in 2009, but this was not noticed until after DigiNotar had also been used in 2011 by Iranians to sign false certificates for the websites of Google, Yahoo!, Mozilla, WordPress and The Tor Project. DigiNotar did not reveal its knowledge of the intrusion into its site for over a month. See more at DigiNotar Hack Highlights the Critical Failures of our SSL Web Security Model | Freedom to Tinker . I'd guess that the range of vulnerability of various CAs varies pretty widely, as does their utility. So I'd suggest refocusing your strategy. When you can narrow it to specific assets you're trying to protect, just delete all CAs except those necessary for using those assets. Otherwise, consider eliminating the CAs you judge to be most vulnerable to those who care about your assets, or least popular, just to reduce the attack surface. But accept the fact that you'll remain vulnerable to sophisticated attacks even against the most popular and careful CAs. 
Update 2 : There is a great post on fixing our dangerous CA infrastructure at Freedom to Tinker: Building a better CA infrastructure It talks about these innovations: ImperialViolet - DNSSEC and TLS Network Notary - Perspectives : Improving SSH-style Host Authentication with Multi-path Network Probing Google Online Security Blog: Improving SSL certificate security Update 4 One of our IT Security blog posts in August 2011 also covers the case for moving to DNSSEC: A Risk-Based Look at Fixing the Certificate Authority Problem « Stack Exchange Security Blog Update 6 Several Certificate Authorities have been caught violating the rules. That includes the French cyberdefense agency (ANSSI), and Trustwave, each of which was linked to spoofing of digital certificates . Update 7 Yet another set of "misissued certificates", via the Indian Controller of Certifying Authorities (India CCA) in 2014: Google Online Security Blog: Maintaining digital certificate security See also the question on Certificate Transparency which looks like a helpful approach to discovering bad certificates and policy violations earlier.
{ "source": [ "https://security.stackexchange.com/questions/2268", "https://security.stackexchange.com", "https://security.stackexchange.com/users/396/" ] }
2,298
MD5 tools output hexadecimal values. In the same manner, do SHA and RSA together produce a hexadecimal (or any other) output? What are the differences between the MD5, SHA and RSA algorithms?
It's not the type of output. Hex is just the way the data is formatted - since all of them are working on binary data, hex makes a great deal of sense. The important part is what they do and how they do it: MD5 and SHA are hash functions (SHA is actually a family of hash functions) - they take a piece of data, compact it and create a suitably unique output that is very hard to emulate with a different piece of data. They don't encrypt anything - you can't take MD5 or SHA output and "unhash" it to get back to your starting point. The difference between the two lies in what algorithm they use to create the hash. Also note that MD5 is now broken as a way was discovered to easily generate collisions and should not be used nor trusted anymore. RSA is an asymmetric encryption algorithm. You have two keys (private and public) and you can perform a function with one key (encrypt or decrypt) and reverse it with the other key. Which key you use depends on whether you are trying to do a digital signature or an encryption.
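To make the "hash, not encryption" point concrete, here is a minimal Python sketch using only the standard hashlib module; the input string is purely illustrative. Both functions produce fixed-size hex digests, and there is no operation available to run them in reverse:

    import hashlib

    data = b"hello world"  # any bytes at all

    print("MD5   :", hashlib.md5(data).hexdigest())     # 128-bit digest, 32 hex chars
    print("SHA256:", hashlib.sha256(data).hexdigest())  # 256-bit digest, 64 hex chars

    # A one-character change gives a completely unrelated digest, and nothing
    # in hashlib (or anywhere else) will "unhash" these values back to the input.
    print("SHA256:", hashlib.sha256(b"hello worlD").hexdigest())

RSA, by contrast, involves a key pair and a reversible operation, which is exactly what the hash functions above do not offer.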
{ "source": [ "https://security.stackexchange.com/questions/2298", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1552/" ] }
2,330
The students are skeptical that turning off non-executable stacks, turning off canaries and turning off ASLR represents a realistic environment. If PaX, DEP, W^X, etc., are effective at stopping buffer overflow exploits, is there still value in learning about them?
Absolutely. ASLR and DEP are defense-in-depth measures. There are exploits that exist that can bypass each of them (for a real-world example, look at Peter Vreugdenhil's Pwn2Own exploit that he used against IE ). All you need to bypass ASLR for Windows is an information disclosure vulnerability that will let you know the base address of a loaded DLL in the process (that was the first vuln that Vreugdenhil exploited). From that, you can use a ret-to-libc attack to call any function in that DLL. The bottom line: stack (and heap) overflows are absolutely still relevant today. They're harder to exploit than they used to be but they're still relevant.
{ "source": [ "https://security.stackexchange.com/questions/2330", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1431/" ] }
2,335
In my project I'm using the value of public exponent of 4451h. I thought it's safe and ok until I started to use one commercial RSA encryption library. If I use this exponent with this library, it throws exception. I contacted developers of this library and got the following reply: "This feature is to prevent some attacks on RSA keys. The consequence is that the exponent value is limited to {3, 5, 17, 257 or 65537}. Deactivating this check is still being investigated, as the risks may be great." It's the first time in my life I hear that values other than {3, 5, 17, 257 or 65537} are used to break RSA. I knew only of using 3 with improper padding being vulnerable. Is that really so? Surely, I can use another library, but after such answer I worried about security of my solution.
There is no known weakness for any short or long public exponent for RSA, as long as the public exponent is "correct" (i.e. relatively prime to p-1 for all primes p which divide the modulus). If you use a small exponent and you do not use any padding for encryption and you encrypt the exact same message with several distinct public keys, then your message is at risk: if e = 3, and you encrypt message m with public keys n_1, n_2 and n_3, then you have c_i = m^3 mod n_i for i = 1 to 3. By the Chinese Remainder Theorem, you can then rebuild m^3 mod n_1*n_2*n_3, which turns out to be m^3 (without any modulo) because n_1*n_2*n_3 is a greater integer. A (non modular) cube root extraction then suffices to extract m. The weakness, here, is not the small exponent; rather, it is the use of an improper padding (namely, no padding at all) for encryption. Padding is very important for security of RSA, whether encryption or signature; if you do not use a proper padding (such as the ones described in PKCS#1), then you have many weaknesses, and the one outlined in the paragraph above is not the biggest, by far. Nevertheless, whenever someone refers to an exponent-size related weakness, he more or less directly refers to this occurrence. That's a bit of old and incorrect lore, which is sometimes inverted into a prohibition against big exponents (since it is a myth, the reverse myth is also a myth and is no more -- and no less -- substantiated); I believe this is what you observe here. However, one can find a few reasons why a big public exponent shall be avoided: Small public exponents promote efficiency (for public-key operations). There are security issues about having a small private exponent; a key-recovery attack has been described when the private exponent length is no more than 29% of the public exponent length. When you want to force the private exponent to be short (e.g. to speed up private key operations), you more or less have to use a big public exponent (as big as the modulus); requiring the public exponent to be short may then be viewed as a kind of indirect countermeasure. Some widely deployed RSA implementations choke on big RSA public exponents. E.g. the RSA code in Windows (CryptoAPI, used by Internet Explorer for HTTPS) insists on encoding the public exponent within a single 32-bit word; it cannot process a public key with a bigger public exponent. Still, "risks may be great" looks like the generic justification ("this is a security issue" is the usual way of saying "we did not implement it but we do not want to admit any kind of laziness").
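As a concrete illustration of the unpadded e = 3 broadcast scenario described above, here is a small Python sketch (standard library only, Python 3.8+ for pow(x, -1, n)). The three moduli are just large pairwise-coprime primes standing in for three victims' RSA moduli, and the message is made up; the arithmetic is the point, not the key sizes:

    def icbrt(x):
        """Integer cube root by binary search."""
        lo, hi = 0, 1 << (x.bit_length() // 3 + 2)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mid ** 3 <= x:
                lo = mid
            else:
                hi = mid - 1
        return lo

    e = 3
    moduli = [2**61 - 1, 2**89 - 1, 2**107 - 1]   # stand-ins for n_1, n_2, n_3
    m = int.from_bytes(b"secret!", "big")         # unpadded message, smaller than every modulus

    # What the eavesdropper sees: the same message encrypted under each public key.
    ciphertexts = [pow(m, e, n) for n in moduli]

    # CRT: rebuild m^3 mod (n_1*n_2*n_3); since m^3 is smaller than the product, this is m^3 exactly.
    N = moduli[0] * moduli[1] * moduli[2]
    m_cubed = 0
    for c, n in zip(ciphertexts, moduli):
        Ni = N // n
        m_cubed = (m_cubed + c * Ni * pow(Ni, -1, n)) % N

    recovered = icbrt(m_cubed)
    print(recovered.to_bytes((recovered.bit_length() + 7) // 8, "big"))  # b'secret!'

With any proper padding (e.g. OAEP from PKCS#1), the three ciphertexts would encrypt three different padded values and the CRT trick above would no longer apply.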
{ "source": [ "https://security.stackexchange.com/questions/2335", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1586/" ] }
2,384
I don't know why we authenticate by prompting the user to enter both username and password. In my mental model, prompting for the password alone suffices. The reason is as follows: Assume there are x valid characters to use. Case 1 (prompting for username and password): Let the length of the username and password be n/2 characters each. Since the username is exposed to the public, the probability of successfully breaking the password is one over x^(n/2). The username is unique. Case 2 (prompting for password only): Let the length of the password be n characters. The probability of successfully breaking the password is one over x^n. Why do we authenticate by prompting a user to enter both username and password? Does prompting for the password only suffice? The password is unique.
I think the issue is in requiring passwords to be unique. If I entered my desired password, and you told me I can't use it because it's already in use, then I know that I can log in to a random person's account with the password that I would have wanted. So, you need a username, which is unique, and can be known to everyone. Then you have a personal password, which is not necessarily unique, making it even harder to guess. While you are at it, hash and salt that password.
{ "source": [ "https://security.stackexchange.com/questions/2384", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1625/" ] }
2,411
I use a few websites that prevent me from copying & pasting into the username or password fields. It's quite frustrating when using a password manager, and if anything I'd think it discourages users from good password-management because they're going to have to choose something they can type manually over and over again. Are there actually any benefits to preventing the paste operation on an application or website?
In my opinion, I don't think it's a net win. Those restrictions always frustrate me. (I'm hoping someone here will post details about how to defuse or work around them. Maybe a tweak to Firefox's user_prefs.js? An extension?) Presumably the reason why sites disable the password manager is because they're worried that Alice might sit down in front of Bob's browser and log into the web site as Bob, maybe purchasing something on Bob's tab. This is particularly an issue for roommates, family members, etc. who live together with each other. (See also "friendly fraud".) A related risk is that Bob might actually purchase something, but then claim that Alice did it to get out of paying for it. Presumably, the sites hope that by disabling the password manager, Bob will be forced to type in his password anew every time; Alice won't know the password and won't be able to type it in. However, these restrictions come at a significant cost. They make the website less usable and more annoying for users. They also drive users to either select poor passwords (which may be more susceptible to password-guessing attacks) or to write down their passwords (potentially enabling roommates and family members to learn the password, leaving everyone back where we started). For users who do trust everyone else who has physical access to their computer, these restrictions strictly decrease security. Personally, I suspect most sites should be reluctant to employ such measures. Odds are that you will annoy your users more than you will help them. But you will be in a better position to make an informed decision. If you do decide to employ such restrictions, you might consider providing users a way to opt out if they do not share their computer with others. Perhaps this may only be of interest to power users, so I don't know if it's worth your time, but you could consider it.
{ "source": [ "https://security.stackexchange.com/questions/2411", "https://security.stackexchange.com", "https://security.stackexchange.com/users/303/" ] }
2,430
That security through obscurity is A Bad Thing is received wisdom and dogma in information security. Telling people why something is to be avoided can be considerably more difficult when there is no line delineating what you are trying to ban from apparently effective strategies. For example - running ssh on a non-default port and port knocking are both suggested as ways of improving ssh security and both are criticised as being ineffective security through obscurity. In this particular case both solutions reduce the visibility of the system to automated attempts. This does nothing to improve the effectiveness of ssh as a tool or reduce the need for other ssh security measures. It does provide a way of separating serious attempts from automated passers by though, which improves the manageability of the system. Besides manageability/effectiveness what distinctions describe the boundary between valid/invalid uses of obscurity? What analogies describe effective use of obscurity or draw the distinction between this and ineffective use? Which analogies apparently supporting the effectiveness of obscurity don't hold up and why? What other specific implementations are examples of the valid role of obscurity?
Interesting question. My thoughts on this are that obscuring information is helpful to security in many cases as it can force an attacker to generate more "noise" which can be detected. Where obscurity is a "bad thing" can be where the defender is relying on that obscurity as a critical control, and without that obscurity, the control fails. So in addition to the one you gave above, an effective use of obscurity could be removing software name and version information from Internet facing services. The advantages of this are: If an attacker wants to find out if a vulnerable version of the service is in use they will have to make multiple queries (eg. looking for default files, or perhaps testing timing responses to some queries). This traffic is more likely to show up in IDS logs than a single request which returned the version. Additionally fingerprinting protocols aren't well developed for all services, so it could actually slow the attacker down considerably The other benefit is that the version number will not be indexed by services like Shodan . This can be relevant where an automated attack is carried out for all instances of a particular version of a service (eg. where a 0-day has been discovered for that version). Hiding this from the banner, may actually prevent a given instance of the service from falling prey to that attack. That said, it shouldn't ever be the only line of defense. In the above example, the service should still be hardened and patched to help maintain its security. Where I think that obscurity fails is where it's relied on. Things like hard-coded passwords that aren't changed, obfuscating secrets with "home grown encryption", or basing a risk decision on whether to patch a service on the idea that no-one will attack it. So the kind of idea that no one will find/know/attack this generally fails, possibly because the defenders are limiting their concept of who a valid attacker might be. It's all very well saying that an unmotivated external attacker may not take the time to unravel an obscure control, but if the attacker turns out to be a disgruntled ex-employee, that hard-coded password could cause some serious problems.
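To show how little effort the "single request which returned the version" takes when the banner is left in place, here is a small Python sketch of a banner grab; the address is a placeholder from the documentation range, and the example banner is just what an SSH service typically announces:

    import socket

    # Many services (SSH, SMTP, FTP...) announce their name and version in the
    # first line they send.  Host and port below are placeholders, not a real target.
    host, port = "203.0.113.10", 22

    with socket.create_connection((host, port), timeout=5) as s:
        banner = s.recv(256).decode(errors="replace").strip()

    print(banner)   # e.g. "SSH-2.0-OpenSSH_7.4" -- name and version handed over for free

When that banner is stripped or genericised, the attacker has to fall back on the noisier probing described above, which is exactly the detection opportunity being argued for.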
{ "source": [ "https://security.stackexchange.com/questions/2430", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1477/" ] }
2,687
I've tested the tool from Microsoft available here which tests password strength and rates them. For a password such as "i am going to have lunch tonight", the tool rates its strength as "BEST" and for a password such as "th1$.v4l" it rates it as "Medium". I'm wondering how important password length really is. In this case the first password is better according to their tool but is it really better? It's based on dictionary words and has no combination of numbers and other characters besides spaces, and seems very easy to crack (not considering brute force). Is that tool giving precedence to length instead of actual strength?
"Not considering brute force" - that's exactly what these tools measure. Obviously they dont try social engineering, or trying to discover if it's the user's first girlfriend's dog's birthday. The attacker might know that, but these tools don't. What they do measure is simply the difficulty for a bruteforcing tool to crack it. Not even the entropy of the password, which is an attribute of the generation method, just an estimate of how long it would take a bruteforcing tool to successfully find the correct password. Obviously, entropy has an effect on this, but it is only total entropy that matters, not entropy-per-character. So yes, having a lot of equi-probable options for each character does add to the entropy, but length can play an even more important part in making a password uncrackable, by raising the entropy-per-character to a higher power, by character count. This makes for a much higher total entropy , which is the only thing that matters. So, in your case - yes, the 32-character, alpha-only passphrase is much stronger than the 8-character punctuation password. I'm gonna try and do the maths here for a bit: (please correct me when I'm wrong): If we assume standard US-style keyboard, there are 85 possible printable characters (possibly be able to scrape a few more, but lets go with this for now): lowercase letters + upper case letters + numerals + standard punctuation + space. This grants ~6.3 bits strength per character; at 8 chars length the password gives you ~50.4 bits strength. Note really very strong... Even if we throw in a few more "special" characters, you're not going to upgrade that very much. Now, for your 32 character, alpha-only passphrase... Let's assume lowercase and uppercase letters only (even though you didnt use any), plus a space (U+32). Not even numerals... That gives you 54 possible characters, around ~5.8 bits per character. At 32 chars long, thats over 185 bits strength. Substantially stronger. And that's even without numerals, which are usually accepted even in "simple" password schemes. Bruce Schneier often talks about how switching to long memorable passphrases would be much more secure than short, randomized weird-looking passwords. Now you see why.
{ "source": [ "https://security.stackexchange.com/questions/2687", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1834/" ] }
2,837
What is the difference between a penetration test and a vulnerability assessment? Why would you choose one over the other? What deliverables would you expect to receive and how would you rate the quality of them?
I'm sure I posted an answer to this previously, but my google-fu must be weak this morning. From my blog post on Penetration Taxonomy, we have a list of testing types that is gaining acceptance. Also, working with the Penetration Testing Execution Standard we hope to further develop this. This list should help explain how to choose one over another. Deliverables are almost a separate issue - and should be defined by the need and the audience (eg for a governance body you would not expect the same detail as you would provide to a technical remediation team, but you would want to include business risk information). There is also a Reporting stream within PTES development to try and codify this area: Discovery The purpose of this stage is to identify systems within scope and the services in use. It is not intended to discover vulnerabilities, but version detection may highlight deprecated versions of software / firmware and thus indicate potential vulnerabilities. Vulnerability Scan Following the discovery stage this looks for known security issues by using automated tools to match conditions with known vulnerabilities. The reported risk level is set automatically by the tool with no manual verification or interpretation by the test vendor. This can be supplemented with credential based scanning that looks to remove some common false positives by using supplied credentials to authenticate with a service (such as local windows accounts). Vulnerability Assessment This uses discovery and vulnerability scanning to identify security vulnerabilities and places the findings into the context of the environment under test. An example would be removing common false positives from the report and deciding risk levels that should be applied to each report finding to improve business understanding and context. Security Assessment Builds upon Vulnerability Assessment by adding manual verification to confirm exposure, but does not include the exploitation of vulnerabilities to gain further access. Verification could be in the form of authorised access to a system to confirm system settings and involve examining logs, system responses, error messages, codes, etc. A Security Assessment is looking to gain a broad coverage of the systems under test but not the depth of exposure that a specific vulnerability could lead to. Penetration Test Penetration testing simulates an attack by a malicious party. Building on the previous stages and involves exploitation of found vulnerabilities to gain further access. Using this approach will result in an understanding of the ability of an attacker to gain access to confidential information, affect data integrity or availability of a service and the respective impact. Each test is approached using a consistent and complete methodology in a way that allows the tester to use their problem solving abilities, the output from a range of tools and their own knowledge of networking and systems to find vulnerabilities that would/ could not be identified by automated tools. This approach looks at the depth of attack as compared to the Security Assessment approach that looks at the broader coverage. Security Audit Driven by an Audit / Risk function to look at a specific control or compliance issue. Characterised by a narrow scope, this type of engagement could make use of any of the earlier approaches discussed (vulnerability assessment, security assessment, penetration test). 
Security Review Verification that industry or internal security standards have been applied to system components or product. This is typically completed through gap analysis and utilises build / code reviews or by reviewing design documents and architecture diagrams. This activity does not utilise any of the earlier approaches (Vulnerability Assessment, Security Assessment, Penetration Test, Security Audit)
{ "source": [ "https://security.stackexchange.com/questions/2837", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21/" ] }
2,881
I've been reading about the LANMAN (LM) hash and I'm curious about a particular part of the algorithm. The LM hash is computed as follows: The user’s ASCII password is converted to uppercase. This password is null-padded to 14 bytes. The 14-byte password is split into two 7-byte halves. These values are used to create two DES keys, one from each 7-byte half. Each of the two keys is used to DES-encrypt the constant ASCII string "KGS!@#$%" , resulting in two 8-byte cipher-text values. These two cipher-text values are concatenated to form a 16-byte value, which is the LM hash. There are a lot of security weaknesses outlined in the linked Wikipedia article and talked about elsewhere, but I'm particularly interested in steps 3 through 6. I'm curious about what led to this design. Is there any real security advantage to splitting a password, encrypting the two halves separately, then combining the two halves to form one hash again? Or is this just an example of "security through obscurity" ?
Splitting the password is a weakness, not an advantage. It allows breaking each password half independently. Beginning with ASCII characters (codes from 32 to 126, inclusive), then removing the lowercase letters, you end up with 127-32-26 = 69 possible characters in the password alphabet. This leads to 69^7 possible halves, which is somewhat below 2^43. In other words, this is highly tractable through brute force. You do not even need a dictionary. This is not security through obscurity. This is insecurity through incompetence. Edit: "highly tractable with brute force" also opens the road for various optimizations. Note that LanMan is not salted, thus precomputed tables can be efficient (you pay the cost of table building once, then you attack several half-passwords -- it is actually worth it even for a single password, since one password is two half-passwords). In 2003, Philippe Oechslin published an improved time-memory trade-off (it is the article in which he coined the term "rainbow table") and computed tables for cracking LanMan passwords. He restricted himself to alphanumeric passwords (letters and digits, but no special signs), thus a space of 2^37. The cumulative size of the tables would then be 1.4 GB, with cracking efficiency of 99.9%, and attack time under one minute. With a 2^43 space, i.e. 64 times larger, table size and attack time both rise by a factor of 16 (that's 64^(2/3)), so we are talking about 23 GB or so (that's not much for today's disks) and a 15-minute attack. Actually, the attack would be faster than that, because the bottleneck is lookups on the hard-disk, and the smart attacker will use an SSD which can do lookups 50 times faster than a mechanical hard-disk (a 32 GB SSD costs less than 70$...). The table-building effort (a one-time expenditure) could take a few weeks on a single PC, or a few days on any decent cloud, so it is rather cheap. Apparently, such tables already exist...
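The arithmetic behind "splitting is a weakness" is easy to check; here is a small Python sketch using the 69-character alphabet estimated above:

    import math

    alphabet = 69   # printable ASCII minus lowercase, as estimated above

    half = alphabet ** 7
    print(math.log2(half))            # ~42.8 bits: one 7-character half, just below 2^43
    print(math.log2(2 * half))        # ~43.8 bits: both halves attacked independently

    # If all 14 characters had been hashed as a single unit instead:
    print(math.log2(alphabet ** 14))  # ~85.5 bits -- far outside brute-force range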
{ "source": [ "https://security.stackexchange.com/questions/2881", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1369/" ] }
2,896
Can anyone suggest an automated tool to scan a PDF file to determine whether it might contain malware or other "bad stuff"? Or, alternatively, assigns a risk level to the PDF? I would prefer a free tool. It must be suitable for programmatic use, e.g., from the Unix command line, so that it is possible to scan PDFs automatically and take action based upon that. A web-based solution might also be OK if it is scriptable.
Very easy. Didier Stevens has provided two open-source, Python-based scripts to perform PDF malware analysis. There are a few others that I will also highlight. The primary ones you want to run first are PDFiD (available another with Didier's other PDF Tools ) and Pyew . Here is an article on how to run pdfid.py and see the expected results; Here is another for pyew . Finally, after identifying possible JS, Javascript, AA, OpenAction, and AcroForms -- you will want to dump those objects, filter the Javascript, and produce a raw output. This is possible with pdf-parser.py . Additionally, Brandon Dixon maintains some extremely elite blog posts on his research with PDF malware, including a post about scoring PDFs based on malicious filters just like you describe. I, personally, run all of these tools!
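Since the question asks for something scriptable, here is a rough sketch of how pdfid.py could be wrapped for unattended scanning. The path to pdfid.py, the keyword list, and the assumption that its report prints each keyword followed by a count are all assumptions to check against the version you install -- treat this as a starting point rather than a finished tool:

    import subprocess
    import sys

    PDFID = "/usr/local/bin/pdfid.py"   # hypothetical install path -- adjust to your system
    SUSPICIOUS = ("/JS", "/JavaScript", "/AA", "/OpenAction", "/AcroForm", "/Launch")

    def risky_keywords(pdf_path):
        """Return the suspicious keywords pdfid reports with a non-zero count."""
        out = subprocess.run([sys.executable, PDFID, pdf_path],
                             capture_output=True, text=True).stdout
        hits = []
        for line in out.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] in SUSPICIOUS and parts[1].isdigit() and int(parts[1]) > 0:
                hits.append(parts[0])
        return hits

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            hits = risky_keywords(path)
            print(path, "SUSPICIOUS" if hits else "ok", ",".join(hits))

Files flagged this way would then go to pdf-parser.py (or a sandbox) for the manual object-dumping step described above.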
{ "source": [ "https://security.stackexchange.com/questions/2896", "https://security.stackexchange.com", "https://security.stackexchange.com/users/971/" ] }
2,914
At work my company uses internet monitoring software (Websense). I know if I visit a https ssl-encrypted site (such as https://secure.example.com ) they can't see what I'm doing on the site since all the traffic is encrypted. But do they see, that I visited https://secure.example.com ?
An encrypted connection is established first before any HTTP requests are performed (e.g. GET , POST , HEAD , etc.), but the hostname and port are visible. There are many other ways to detect which sites you’re visiting as well, for example: your DNS queries (i.e. they’ll see the IP request for secure.example.com) via network monitoring (e.g. netflow, IP to IP sessions, sniffing, etc.) if the device you are working on is owned by the company and they have administrator access/privileges to view anything on the device (e.g. view your browser caches) A popular way to evade a Websense proxy is to first establish a connection via HTTPS to an outside proxy (e.g. https://proxy.org/ ) and make your request from there.
{ "source": [ "https://security.stackexchange.com/questions/2914", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1961/" ] }
2,943
I could generate ssh keys on the client like (without password): ssh-keygen -b 8192 -t rsa -f /home/THEUSER/.ssh/id_rsa -P '' -v and copy the generated id_rsa.pub to the server: ssh-copy-id -i /home/THEUSER/.ssh/id_rsa.pub '-p PORTNUMBER SSHUSER@SERVER-IP-ADDRESS' How much is the chance that someone would generate the exact same key with ssh-keygen? so that they could log in without password too! I think it has a very low chance, but I want to know: how much is it?
The chance is very much lower than any of these events: The computer spontaneously catches fire during the key generation process. Great Britain is wiped out by a falling asteroid during the very same second. A rogue gorilla escaped from a zoo enters your living room and mauls you. You win millions of dollars at the lottery three times in a row. So the basic conclusion is that you should not worry about getting twice the same SSH key: it really will not happen in your lifetime. From a more theoretical point of view, there are about 2^8164 possible 8192-bit RSA keys (that's really a lot). However, ssh-keygen will use a pseudo-random number generator which works over a much more reduced internal seed, which depends on the operating system but will typically have size at least 160 bits. This reduces the number of possible keys to a much lower (but still huge) number, 2^160. Even with tremendous computing power (I am not talking about a bored student with a few dozen PCs; rather, think "Google"), the probability of finding the very same key after a few years of effort is less than 2^-100. Comparatively, the events I list above can be estimated to occur with probabilities roughly equal to 2^-45, 2^-50, 2^-60 and 2^-71, respectively: these are billions of times more probable. Of course, with a flawed PRNG, anything goes.
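A quick sanity check of that 2^-100 figure in Python, under the assumptions above (a 160-bit effective seed space, and a frankly generous budget of about a trillion generated keys):

    seed_bits = 160    # effective PRNG seed space assumed above
    tries_bits = 40    # 2^40 ~ one trillion keys generated -- years of effort

    # Chance that any one of those attempts reproduces one specific existing key:
    print(f"about 2^{tries_bits - seed_bits}")   # 2^-120, comfortably below 2^-100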
{ "source": [ "https://security.stackexchange.com/questions/2943", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2212/" ] }
2,985
Can anyone suggest a cheatsheet or ToDo list of web site and application security? A local small business owner prompted a question about web security, basically her company website just got XSS attacked last week. I spent some free time to highlight where she should spend time on fixing in the future. Given that she outsourced her web site, is there a cheatsheet or ToDo list online about web security that I can share with her - i.e. A list of TODO for smart average joe / SMB owner? (Not limited to XSS)
There is always the OWASP top ten web vulnerabilities list A little summary of each from OWASP's report: Injection - Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing unauthorized data. Cross-Site Scripting - XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation and escaping. XSS allows attackers to execute scripts in the victim’s browser which can hijack user sessions, deface web sites, or redirect the user to malicious sites. Broken Authentication and Session Management - Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, session tokens, or exploit other implementation flaws to assume other users’ identities. Insecure Direct Object References - A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data. Cross-Site Request Forgery (CSRF) - A CSRF attack forces a logged-on victim’s browser to send a forged HTTP request, including the victim’s session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim’s browser to generate requests the vulnerable application thinks are legitimate requests from the victim. Security Misconfiguration - Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. All these settings should be defined, implemented, and maintained as many are not shipped with secure defaults. This includes keeping all software up to date, including all code libraries used by the application. Insecure Cryptographic Storage - Many web applications do not properly protect sensitive data, such as credit cards, SSNs, and authentication credentials, with appropriate encryption or hashing. Attackers may steal or modify such weakly protected data to conduct identity theft, credit card fraud, or other crimes. Failure to Restrict URL Access - Many web applications check URL access rights before rendering protected links and buttons. However, applications need to perform similar access control checks each time these pages are accessed, or attackers will be able to forge URLs to access these hidden pages anyway. Insufficient Transport Layer Protection - Applications frequently fail to authenticate, encrypt, and protect the confidentiality and integrity of sensitive network traffic. When they do, they sometimes support weak algorithms, use expired or invalid certificates, or do not use them correctly. Unvalidated Redirects and Forwards - Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages.
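For the number-one item (Injection), the fix is usually a one-line change; here is a self-contained Python/sqlite3 sketch (table and payload are illustrative) showing the vulnerable string-concatenation pattern next to the parameterized version:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "' OR '1'='1"   # classic injection payload

    # Vulnerable: untrusted data concatenated straight into the query text.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
    print("concatenated:", rows)        # returns every row in the table

    # Safer: a parameterized query keeps the data out of the SQL grammar.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print("parameterized:", rows)       # returns nothing

The same "never build the command string out of untrusted input" idea carries over to the OS and LDAP injection variants in the same item.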
{ "source": [ "https://security.stackexchange.com/questions/2985", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2004/" ] }
3,001
After reading Part I of Ross Anderson's book, Security Engineering , and clarifying some topics on Wikipedia, I came across the idea of Client Nonce (cnonce). Ross never mentions it in his book and I'm struggling to understand the purpose it serves in user authentication. A normal nonce is used to avoid replay attacks which involve using an expired response to gain privileges. The server provides the client with a nonce (Number used ONCE) which the client is forced to use to hash its response, the server then hashes the response it expects with the nonce it provided and if the hash of the client matches the hash of the server then the server can verify that the request is valid and fresh. This is all it verifies; valid and fresh . The explanations I've found for a client nonce however are less straight forward and questionable. The Wikipedia page for digest access authentication and several responses here on Stack Overflow seem to suggest that a client nonce is used to avoid chosen-plaintext attacks. I have several problems with this idea: If a person can sniff and insert packets, the greatest vulnerability is a man-in-the-middle attack which neither a nonce nor a cnonce can overcome, therefore making both meaningless. Assuming for a second that the attacker doesn't want to engage in a man-in-the-middle attack and wants to recover the authentication details, how does a cnonce provide additional protection? If the attacker intercepts communication and replies to a request with its own nonce, then the response from the client will be a hash of the nonce, data and cnonce in addition to the cnonce in unencrypted form. Now the attacker has access to the nonce, cnonce and the hash. The attacker can now hash its rainbow tables with the nonce and cnonce and find a match. Therefore the cnonce provides zero additional protection. So what is the purpose of a cnonce? I assume there is some part of the equation I'm not understanding but I haven't yet found an explanation for what that part is. EDIT Some answers have suggested that the client can provide a nonce and it will serve the same purpose. This breaks the challenge-response model however, what are the implications of this?
A nonce is a unique value chosen by an entity in a protocol, and it is used to protect that entity against attacks which fall under the very large umbrella of "replay". For instance, consider a password-based authentication protocol which goes like this: (1) the server sends a "challenge" (a supposedly random value c) to the client; (2) the client responds by sending h(c || p), where h is a secure hash function (e.g. SHA-256), p is the user password, and ' || ' denotes concatenation; (3) the server looks up the password in its own database, recomputes the expected client response, and sees if it matches what the client sent. Passwords are secret values which fit in human brains; as such, they cannot be very complex, and it is possible to build a big dictionary which will contain the user password with high probability. By "big" I mean "can be enumerated with a medium-scale cluster in a few weeks". For the current discussion, we accept that an attacker will be able to break a single password by spending a few weeks of computation; this is the security level that we want to achieve. Imagine a passive attacker: the attacker eavesdrops but does not alter the messages. He sees c and h(c || p), so he can use his cluster to enumerate potential passwords until a match is found. This will be expensive for him. If the attacker wants to attack two passwords then he must do the job twice. The attacker would like to have a bit of cost sharing between the two attack instances, using precomputed tables ("rainbow tables" are just a kind of precomputed table with optimized storage; but building a rainbow table still requires enumerating the complete dictionary and hashing each password). However, the random challenge defeats the attacker: since each instance involves a new challenge, the hash function input will be different for every session, even if the same password is used. Thus, the attacker cannot build useful precomputed tables, in particular rainbow tables. Now suppose that the attacker becomes active. Instead of simply observing the messages, he will actively alter messages, dropping some, duplicating others, or inserting messages of his own. The attacker can now intercept a connection attempt from the client. The attacker chooses and sends his own challenge (c') and waits for the client response (h(c' || p)). Note that the true server is not contacted; the attacker just drops the connection abruptly immediately after the client response, so as to simulate a benign network error. In this attack model, the attacker has made a big improvement: he still has a challenge c' and the corresponding response, but the challenge is a value that the attacker has chosen as he saw fit. What the attacker will do is always serve the same challenge c'. Using the same challenge every time allows the attacker to perform precomputations: he can build precomputed tables (i.e. rainbow tables) which use that special "challenge". Now the attacker can attack several distinct passwords without incurring the dictionary-enumeration cost for each. A client nonce avoids this issue. The protocol becomes: (1) the server sends a random challenge c; (2) the client chooses a nonce n (which should be distinct every time); (3) the client sends n || h(c || n || p); (4) the server recomputes h(c || n || p) (using the p from its database) and sees if this value matches what the client sent. Since the client includes a new random value (the "nonce") in the hash function input for each session, the hash function input will be distinct every time, even if the attacker can choose the challenge.
This defeats precomputed (rainbow) tables and restores our intended security level. A crude emulation of a unique nonce is the user name. Two distinct users within the same system will have distinct names. However, the user will keep his name when he changes his password; and two distinct users may have the same name on two distinct systems (e.g. every Unix-like system has a "root" user). So the user name is not a good nonce (but it is still better than having no client nonce at all). To sum up, the client nonce is about protecting the client from a replay attack (the "server" being in fact an attacker, who will send the same challenge to every client he wishes to attack). This is not needed if the challenge is executed over a channel which includes strong server authentication (such as SSL). Password Authenticated Key Exchange (PAKE) protocols are advanced protocols which ensure mutual password-based authentication between client and server, without needing some a priori trust (the "root certificates" when an SSL client authenticates the SSL server certificate) and protecting against active and passive attackers (including the "cluster-for-two-weeks" attack on a single password, so that's strictly better than the protocol above, nonce or no nonce).
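Here is the nonce-enhanced challenge-response described above as a compact Python sketch; both roles live in one script purely for illustration, and the byte framing (joining fields with '||') is a simplification of what a real protocol would pin down precisely:

    import hashlib
    import secrets

    def h(*parts):
        return hashlib.sha256(b"||".join(parts)).hexdigest()

    password = b"correct horse"                # the shared secret p (illustrative)

    # Server side: issue a fresh random challenge c for this session.
    c = secrets.token_bytes(16)

    # Client side: pick a new nonce n and send n together with h(c || n || p).
    n = secrets.token_bytes(16)
    response = h(c, n, password)

    # Server side: recompute with the password from its database and compare.
    expected = h(c, n, password)
    print("authenticated" if secrets.compare_digest(response, expected) else "rejected")

Because both c and n change every session, no two runs ever hash the same input, which is exactly what makes a precomputed table useless.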
{ "source": [ "https://security.stackexchange.com/questions/3001", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2014/" ] }
3,056
I was reading this CompTIA Security+ SYO-201 book , and the author David Prowse claims that: Whichever VM you select, the VM cannot cross the software boundaries set in place. For example, a virus might infect a computer when executed and spread to other files in the OS. However, a virus executed in a VM will spread through the VM but not affect the underlying actual OS. So if I'm running VMWare player and execute some malware on my virtual machine's OS, I don't have to worry about my host system being compromised, at all ? What if the virtual machine shares the network with the host machine, and shared folders are enabled? Isn't it still possible for a worm to copy itself to the host machine that way? Isn't the user still vulnerable to AutoRun if the OS is Windows and they insert a USB storage device? How secure are virtual machines, really? How much do they protect the host machine from malware and attacks?
VMs can definitely cross over. Usually you have them networked, so any malware with a network component (e.g. worms) will propagate to wherever their addressing/routing allows them to. Regular viruses tend to only operate in user mode, so while they couldn't communicate overtly, they could still set up a covert channel. If you are sharing CPUs, a busy process on one VM can effectively communicate state to another VM (that's your prototypical timing covert channel). A storage covert channel would be a bit harder as the virtual disks tend to have a hard limit on them, so unless you have a system that can over-commit disk space, it should not be an issue. The most interesting approach to securing VMs is called the Separation Kernel. It's a result of John Rushby's 1981 paper which basically states that in order to have VMs isolated in a manner that could be equivalent to physical separation, the computer must export its resources to specific VMs in a way where at no point any resource that can store state is shared between VMs. This has deep consequences, as it requires the underlying computer architecture to be designed in a way in which this can be carried out in a non-bypassable manner. 30 years after this paper, we finally have a few products that claim to do it. x86 isn't the greatest platform for it, as there are many instructions that cannot be virtualized, to fully support the 'no sharing' idea. It is also not very practical for common systems, as to have four VMs, you'd need four hard drives hanging off four disk controllers, four video cards, four USB controllers with four mice, etc. 2020 update: all the recent hardware-based vulnerability families (Meltdown, Spectre, Foreshadow, ZombieLoad, CacheOut, SPOILER, etc.) are great examples of how VMs are always going to be able to communicate, simply because they share hardware (caches, TLB, branch prediction, TSX, SGX) that was never intended or prepared to be partitioned and isolated.
{ "source": [ "https://security.stackexchange.com/questions/3056", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2053/" ] }
3,133
The password hash used for MySQL passwords prior to version 4.1 (now called OLD_PASSWORD() ) seems like a very simple ad-hoc hash, without salts or iteration counts. See e.g an implementation in Python at Django snippets: Old MySQL Password Hash Has it by cryptanalyzed? Broken? All I see on the web is brute-force attacks. These are very successful for short-to-medium length passwords as one would expect. Though I also wonder if brute force on this ad-hoc algorithm are slower than attacks on the newer PASSWORD() function which simply uses SHA1 (twice), since I assume there is more widespread hardware acceleration support for SHA1. See more on flaws and attacks on MySQL passwords at Looking for example of well-known app using unsalted hashes
I am not aware of any published cryptanalysis on MySQL OLD_PASSWORD(), but it is so weak that it is kind of a joke. It could be given as an exercise during a cryptography course. Update: a cryptanalysis similar to the meet-in-the-middle described below was published in F. Muller and T. Peyrin, "Cryptanalysis of T-Function-Based Hash Functions", in International Conference on Information Security and Cryptology - ICISC 2006, in 2006, with a more generic description, and some optimizations to find short passwords and keep them in RAM. For instance, here is some C code which "reverts" the internal state:

    #include <stdint.h>   /* for uint32_t (added so the snippet is self-contained) */

    /* Given the internal state (nr, nr2, add) *after* processing the len
     * characters in cc[], recompute a valid state *before* them. */
    static int
    oldpw_rev(uint32_t *pnr, uint32_t *pnr2, uint32_t add, unsigned char *cc, unsigned len)
    {
        uint32_t nr, nr2;
        uint32_t c, u, e, y;

        if (len == 0) {
            return 0;
        }
        nr = *pnr;
        nr2 = *pnr2;
        c = cc[len - 1];
        add -= c;

        /* Unwind the nr2 update (forward direction: nr2 += (nr2 << 8) ^ nr). */
        u = nr2 - nr;
        u = nr2 - ((u << 8) ^ nr);
        u = nr2 - ((u << 8) ^ nr);
        nr2 = nr2 - ((u << 8) ^ nr);
        nr2 &= 0x7FFFFFFF;

        /* Guess the low 6 bits of the previous nr, then unwind its update. */
        y = nr;
        for (e = 0; e < 64; e ++) {
            uint32_t z, g;

            z = (e + add) * c;
            g = (e ^ z) & 0x3F;
            if (g == (y & 0x3F)) {
                uint32_t x;

                x = e;
                x = y ^ (z + (x << 8));
                x = y ^ (z + (x << 8));
                x = y ^ (z + (x << 8));
                nr = y ^ (z + (x << 8));
                nr &= 0x7FFFFFFF;
                /* Recurse on the remaining characters. */
                if (oldpw_rev(&nr, &nr2, add, cc, len - 1) == 0) {
                    *pnr = nr;
                    *pnr2 = nr2;
                    return 0;
                }
            }
        }
        return -1;
    }

This function, when given the internal state after the len password characters given in the cc[] array (nr and nr2, two 31-bit words, and the add value which is the sum of the password characters), computes a valid solution for nr and nr2 before the insertion of the password characters. This is efficient. This leads to an easy meet-in-the-middle attack. Consider the sequences of 14 lowercase ASCII letters, such that each letter is followed by its complement (the complement of 'a' is 'z', the complement of 'b' is 'y', and so on...). There are about 8 billion such sequences. Note that the sum of the characters for any of those sequences is always the fixed value 1533. Take N of those sequences; for each of them, compute the corresponding hash with OLD_PASSWORD(), and accumulate the values in a big file: each entry contains the sequence of characters, and the corresponding nr and nr2. Then sort the file by the 62-bit nr / nr2 pair. This file is a big table of: "using this sequence, we get from the initial values to that internal state". Then take the N sequences again, and this time use oldpw_rev() (as shown above) for each of them, using the actual attacked hash as starting point for nr and nr2, and 2*1533 == 3066 for add. This will give you N other pairs nr / nr2, each with a corresponding sequence. These values are accumulated in another file, which you again sort by the 62-bit nr / nr2 pair. This second file is a big table of: "using this sequence on that internal state, we obtain the hash value which we are currently attacking". At that point you just have to find two matching pairs, i.e. the same nr / nr2 in the first file and the second file. The corresponding 14-character sequences are then the two halves of a 28-character password which matches the hash output. That's probably not the password which was used in the first place, but this is a password which will hash to the same value and will be accepted by MySQL. Chances are high that you get a matching pair when N reaches 2 billion or so (we are in a space of size 2^62, so it suffices that N is on the order of sqrt(2^62)).
This attack has work factor about 2^37 (accounting for the sorting step), which is vastly smaller than the 2^62 work factor that could theoretically be achieved with a hash function with a 62-bit output (which is already too low for proper security). Thus, the OLD_PASSWORD() function is cryptographically broken. (There are probably much better attacks than that.)
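For reference, here is a short Python sketch of the forward computation that the oldpw_rev() function above undoes. It mirrors the commonly circulated ports of the pre-4.1 algorithm (such as the Django snippet linked in the question); treat it as illustrative and verify it against an actual MySQL server before relying on it:

def mysql_old_password(password):
    # Pre-4.1 OLD_PASSWORD(): two 31-bit words, no salt, no iteration count
    nr, nr2, add = 1345345333, 0x12345671, 7
    for c in (ord(ch) for ch in password if ch not in (' ', '\t')):  # spaces/tabs are skipped
        nr ^= (((nr & 63) + add) * c) + (nr << 8)
        nr2 += (nr2 << 8) ^ nr
        add += c
    return '%08x%08x' % (nr & 0x7FFFFFFF, nr2 & 0x7FFFFFFF)

Running the meet-in-the-middle attack amounts to calling this function on N "complement" sequences to build the first table, and oldpw_rev() on the same sequences to build the second one.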
{ "source": [ "https://security.stackexchange.com/questions/3133", "https://security.stackexchange.com", "https://security.stackexchange.com/users/453/" ] }
3,165
Nota bene: I'm aware that the good answer to secure password storage is either scrypt or bcrypt . This question isn't for implementation in actual software, it's for my own understanding. Let's say Joe Programmer is tasked with securely storing end user passwords in a database for a web application; or storing passwords on disk for logins to a piece of software. He will most likely: Obtain $password from the end user. Create $nonce as a random value about 64 or 128 bits large. Create $hash = SHA256($nonce$password) and store $nonce together with $hash in the database. Question one: Why isn't the following substantially better than the above? Create $long_string once and only once. Store this as a constant in the application code. $long_string could e.g. be 2 kilobytes of random characters. Obtain $password from the end user. Create $mac = HMAC-SHA256($long_string)[$password] (i.e. create a MAC using the end user password as key) and store this $mac in the database. I would imagine the HMAC has the following benefits? Collisions are less frequent? It is computationally somewhat more expensive than plain hashing? (But not anywhere near scrypt, of course.) In order to succeed with a brute-force attack within a reasonable time, an attacker would need to gain access to two things: 1) the database, where $mac is stored, and 2) the application code, where the original $long_string is stored. That's one better than a hash function, where the attacker only needs access to the database? But still, nobody seems to suggest using an HMAC, so I must be misunderstanding something? Question two: What would the implications of adding a salt value $nonce be? Create $long_string once and only once. Store this as a constant in the application code. Obtain $password from the end user. Create $nonce as a random value about 128 bits large. Create $mac = HMAC-SHA256($long_string)[$nonce$password] and store $nonce and $mac in the database.
The point of the salt is to prevent attack cost sharing: if an attacker wants to attack two passwords, then it should be twice as expensive than attacking one password. With your proposal (your "question 1"), two users with the same password will end up using the same MAC. If an attacker has read access to your database, he can "try" passwords (by recomputing the MAC) and lookup the database for a match. He can then attack all the passwords in parallel, for the cost of attacking one. If your long_string is an hardcoded constant in the application source code, then all installed instances of the application share this constant, and it becomes worthwhile (for the attacker) to precompute a big dictionary of password-to-MAC pairs, also known as "a rainbow table". With a nonce (your "question 2") you avoid the cost sharing. The nonce is usually known as a "salt". Your long_string and the use of HMAC does not buy you much here (and you are not using HMAC for what it was designed for, by the way, so you are on shaky foundations, cryptographically speaking). Using a salt is a very good idea (well, not using a salt is a very bad idea, at least) but it does only half of the job. You must also have a slow hashing procedure. The point here is that the salt prevents cost sharing, but does not prevent attacking a single password. Attacking a password means trying possible passwords until one matches (that's the "dictionary attack"), and, given the imagination of the average human user, dictionary attacks tend to work: people just love using passwords which can be guessed. The workaround is to use a hashing process which is inherently slow, usually by iterating the hash function a few thousand times. The idea is to make password verification more expensive: having the user wait 1ms instead of 1µs is no hardship (the user will not notice it), but it will also make the dictionary attack 1000 times more expensive. Your long_string may be used for that, provided that it is really long (not 2 kilobytes, rather 20 megabytes). HMAC may be used instead of a raw hash function to strengthen a password-verification system, but in a different setup. Given a system which checks passwords with salts and iterated hash functions, you can replace the hash function with HMAC, using a secret key K . This prevents offline dictionary attacks as long as you can keep K secret. Keeping a value secret is not easy, but it is still easier to keep a 128-bit K secret than a complete database.
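To make the "salt plus slow hashing" recommendation concrete, here is a minimal Python sketch using the standard library's PBKDF2 (not the scrypt/bcrypt already acknowledged in the question as the preferred answer); the iteration count and salt length are illustrative assumptions, not recommendations:

import hashlib, hmac, os

def hash_password(password, iterations=100_000):
    salt = os.urandom(16)                      # random per-password salt: prevents cost sharing
    dk = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, iterations)
    return salt, iterations, dk                # store all three alongside each other

def verify_password(password, salt, iterations, stored):
    dk = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, iterations)
    return hmac.compare_digest(dk, stored)     # constant-time comparison

The salt removes cost sharing between users, and the iteration count makes each individual guess of a dictionary attack correspondingly more expensive, which is exactly the two-part job described above.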
{ "source": [ "https://security.stackexchange.com/questions/3165", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
3,170
In some environments, it is required that users change a certain number of characters every time they create a new password. This is of course to prevent passwords from being easily guessable, especially with knowledge of old passwords such as a departed employee might have for a shared service account. I have separate questions open to address the button-pushing side of this enforcement. However, I'm also curious as to how this enforcement works on the back end. If a cleartext password cannot be derived from a strong hashing algorithm, how does the system determine how many characters have been changed in new passwords?
I'm not sure about comparing with all the passwords the user has previously used, as it really depends on the hashing system you're using, and I would say if it's possible to derive any similarity from the hash then it's not a very good system to begin with. But assuming that the user has to supply their current password when setting their new password, you could at least check the new one against the current one as you'll have both unhashed at that point. The pam_cracklib module on Linux checks passwords like this and does a few basic checks by default. Is the new password just the old password with the letters reversed ("password" vs. "drowssap") or rotated ("password" vs. "asswordp")? Does the new password only differ from the old one due to change of case ("password" vs. "Password")? Are at least some minimum number of characters in the new password not present in the old password? This is where the "difok" parameter comes into play. You can find some more details about it here .
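As a rough illustration of the checks described above (this is a simplification in Python, not pam_cracklib's actual algorithm, and the difok threshold shown is an assumed default):

def looks_changed_enough(old, new, difok=5):
    if new.lower() == old.lower():                 # differs only by case
        return False
    if new == old[::-1]:                           # old password reversed
        return False
    if len(new) == len(old) and new in old + old:  # rotation of the old password
        return False
    # crude "difok": count characters of the new password absent from the old one
    return sum(1 for ch in new if ch not in old) >= difok

Both passwords are available in cleartext at this point (the user typed them into the change-password prompt), which is what makes comparisons like this possible at all.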
{ "source": [ "https://security.stackexchange.com/questions/3170", "https://security.stackexchange.com", "https://security.stackexchange.com/users/953/" ] }
3,172
I have a question posted already for this issue in Windows systems , and thought that non-Windows systems should perhaps be covered separately. In NIST SP 800-53 Rev. 3, IA-5 is the control addressing "Authenticator Management". The requirements in this control include such things as enforcement of password length, complexity, lifetime, history, and proper storage/transmission of passwords. The first enhancement for this control, which is selected for all (Low/Moderate/High) systems includes this requirement: The information system, for password-based authentication: ... (b) Enforces at least a [Assignment: organization-defined number of changed characters] when new passwords are created; In most systems, it's pretty easy to find and configure the rules that enforce long and complex passwords which are changed regularly and do not exactly match a certain number of old passwords. But, how do you implement a policy that requires a certain amount of characters to be changed with every new password? Some systems I'm interested in (feel free to address others): Mac OS X Linux/Unix (Any/all flavors) Cisco IOS
I'm not sure about comparing with all the passwords the user has previously used, as it really depends on the hashing system you're using, and I would say if it's possible to derive any similarity from the hash then it's not a very good system to begin with. But assuming that the user has to supply their current password when setting their new password, you could at least check the new one against the current one as you'll have both unhashed at that point. The pam_cracklib module on Linux checks passwords like this and does a few basic checks by default. Is the new password just the old password with the letters reversed ("password" vs. "drowssap") or rotated ("password" vs. "asswordp")? Does the new password only differ from the old one due to change of case ("password" vs. "Password")? Are at least some minimum number of characters in the new password not present in the old password? This is where the "difok" parameter comes into play. You can find some more details about it here .
{ "source": [ "https://security.stackexchange.com/questions/3172", "https://security.stackexchange.com", "https://security.stackexchange.com/users/953/" ] }
3,204
Target hardware is a rather low-powered MCU (ARM Cortex-M3 @72MHz, with just about 64KB SRAM and 256KB flash), so walking the thin line here. My board does have ethernet, and I will eventually get lwIP (lightweight TCP/IP FOSS suite) running on it (currently struggling). However, I also need some kind of super light-weight alternative to SSL/TLS. I am aware of the multiple GPL'd SSL/TLS implementations for such MCU's, but their footprint is still fairly significant. While they do fit in, given everything else they don't leave much room for anything else. My traffic is not HTTP, so I don't have to worry about HTTPS, and my client/server communication can be completely proprietary, so a non-standard solution is okay. Looking for suggestions on what might be a minimalistic yet robust (weak security is worthless) alternative that helps me -- Encrypt my communication (C->S & S->C) Do 2-way authentication (C->S & S->C) Avoid man-in-the-middle attacks I won't be able to optimize the library at the ARMv7 assembly level, and thus bank entirely on my programming skills and the GNU-ARM compiler's optimization. Given the above, any pointers on what might be the best options? C: Client, S: Server. My communication is all binary data.
Edit: after some effort, I did re-implement a RAM-efficient SSL library that can run in the kind of RAM amounts indicated below. It has many more features and flexibility than my previous creations, and yet it is still very small. More importantly, it is also opensource (MIT license). Enjoy: https://www.bearssl.org/ It is possible to implement an SSL/TLS client (or server) in about 21 kB of ARM code (thumb), requiring less than 20 kB of RAM when running(*). I know it can be done because I did it (sorry, not open source). Most of the complexity of TLS comes from its support of many kinds of cryptographic algorithms, which are negotiated during the initial handshake; if you concentrate on only one set of cryptographic algorithms, then you can strip the code down to something which is quite small. I recommend using TLS 1.2 with the TLS_RSA_WITH_AES_128_CBC_SHA256 cipher suite: for that one, you will only need implementations for RSA, AES and SHA-256 (for TLS 1.1 and previous, you would also need implementations for both MD5 and SHA-1, which is not hard but will spend a few extra kBytes of code). Also, you can make it synchronous (in plain TLS, client and server may speak simultaneously, but nothing forces them to do so) and omit the "handshake renegotiation" part (client and server perform an initial handshake, but they can redo it later on during the connection). The trickiest part in the protocol implementation is about the certificates. The server and the client authenticate each other by using their respective private keys -- with RSA, the server performs an RSA decryption, while the client computes an RSA signature. This provides authentication as long as client and server know each other's public keys; therefore, they send their public keys to each other, wrapped in certificates which are signed blobs. A certificate must be validated before usage, i.e. its signature verified with regards to an a priori known public key (often called "root CA" or "trust anchor"). The client cannot blindly use the public key that the server just sent, because it would allow man-in-the-middle attacks. X.509 certificate parsing and validation is a bit complex (in my implementation, it was 6 kB of code, out of the 21 kB). Depending on your setup, you may have lighter options; for instance, if you can hardcode the server public key in the client, then the client can simply use that key and throw away the server certificate, which is "just a blob": no need for parsing, no certificate validation, very robust protocol. You could also define your own "certificate" format. Another possibility is to use SRP , which is a key exchange mechanism where both parties authenticate each other with regards to the knowledge of a shared secret value (the magic of SRP is that it is robust even if the shared secret has relatively low entropy, e.g. is a password); use TLS_SRP_SHA_WITH_AES_128_CBC_SHA . The point here is that even with a custom protocol, you will not get something really lighter than a stripped-down TLS, at least if you want to keep it robust. And designing a robust protocol is not easy at all; TLS got to the point of being considered as adequately secure through years of blood and tears. So it is really better to reuse TLS than to invent your own protocol. Also, this makes the code much easier to test (you can interoperate with existing SSL/TLS implementations). (*) Out of the 20 kB of RAM, there is a 16.5 kB buffer for incoming "records", because TLS states that records may reach that size.
If you control both client and server code, you can arrange for a smaller maximum record size, thus saving on the RAM requirements. Per-record overhead is not much -- less than 50 bytes on average -- so you could use 4 kB records and still have efficient communication.
{ "source": [ "https://security.stackexchange.com/questions/3204", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2155/" ] }
3,214
I've been researching disk/file system encryption, and on the surface it seems like a good idea for a lot of things. But as I dig further, the security it offers seems more mirage-like than real. For example, it seems like there is little point in encrypting your file systems in a data center somewhere, because the employees there need to have physical access to the system in order for backups to be made, replacing failed hardware, that sort of thing. And if the server reboots, you have to supply it with the key/passphrase for it to boot up. If you don't give them that, you probably have to figure out how to ssh in or something to provide that, which is a) a PITA, and b) not really that secure anyway, since if they can physically access the machine they could theoretically read the memory and the key etc. If someone hacks in via the network, then it does not matter that your data is encrypted because if someone has root they will see plain text. So it seems to me that it would be more worthwhile putting effort into finding a data center with people/security you trust, OR hosting it yourself if you are that paranoid. Encrypting a filesystem on a system you don't have physical control over seems to me to be about as secure as DRM and for similar reasons. Where file system encryption does seem to make some sense is in storing backups - if you store in multiple off-site locations, you may not be able to trust them as well, so encryption would be welcome. If you stored backups of the keys and pass-phrases in different areas, it still might be worth doing because it is a lot easier to hide a USB key than it is to hide an HDD or tape. Another area it seems to make some sense is in a laptop. Keeping a USB key on your person along with an encrypted drive on your laptop would be good security if the laptop got stolen. Never letting the laptop out of your sight might be nearly as good though. If you control physical security and have access to the machine (e.g. server, workstation or desktop at home for example), it could conceivably be a good idea to encrypt. Again, controlling and securing a USB key is a lot easier than securing a computer system. Those are the conclusions I've come to so far, but there is a good chance I'm overlooking something - which is why I thought I'd ask here. Thoughts? Agree? Disagree?
In a data center, disk encryption can be useful for handling old disks: when a disk fails, you can simply discard (recycle) it, because the data it may still contain is encrypted and cannot be recovered without the corresponding key (this assumes that the server has the encryption key somewhere on its "system" disk -- or some other device -- and that the failed disk is not a system disk). Otherwise, disposal of failed disks is an issue (you want the equivalent of a shredder, e.g. a cauldron full of acid). For laptops, disk encryption is useful only if the laptop cannot be stolen with the decryption dongle, which, in practice, means that the user must have the dongle attached to their wrist, not simply kept plugged into the laptop. It also means that the dongle must be used regularly, not just at boot time (also, take "sleep mode" into account: users reboot very rarely). It can be predicted that users will actively resist such security features (and a security system that the user works around is worse than having no security system at all).
{ "source": [ "https://security.stackexchange.com/questions/3214", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1971/" ] }
3,272
Please Note: I'm aware that the proper method for secure password storage hashing is either scrypt or bcrypt. This question isn't for implementation in actual software, it's for my own understanding. Related How to apply a pepper correctly to bcrypt? How to securely hash passwords? HMAC - Why not HMAC for password storage? Background As far as I know, the recommended/approved method for storing password verifiers is to store: $verifier = $salt + hash( $salt + $password ) Where: hash() is a cryptographic hashing algorithm $salt is a random, evenly distributed, high entropy value $password is the password entered by the user Some people advice to add a secret key into the mix (sometimes called pepper ). Where the pepper is a secret, high entropy, system-specific constant. The rationale seems to be that even if the attacker gets hold of the password verifiers, there is a good chance he or she does not know the pepper value. So mounting a successful attack becomes harder. So, my question is: Does adding a pepper value in addition to a salt when hashing passwords increase the overall security? Or is the perceived increased security based on false assumptions? Quick Update I know the purpose of the $salt (I wrote quite a long answer on StackOverflow about it) the additional $pepper key is not improving upon what the salt does. The question is, does the $pepper add any security other than what the salt does?
In some circumstances, peppers can be helpful. As a typical example, let's say you're building a web application. It consists of webapp code (running in some webapp framework, ASP.NET MVC, Pyramid on Python, doesn't matter) and a SQL Database for storage. The webapp and SQL DB run on different physical servers . The most common attack against the database is a successful SQL Injection Attack. This kind of attack does not necessarily gain access to your webapp code, because the webapp runs on a different server & user-ID. You need to store passwords securely in the database, and come up with something on the form of: $hashed_password = hash( $salt . $password ) where $salt is stored in plaintext in the database, together with the $hashed_password representation and randomly chosen for each new or changed password . The most important aspect of every password hashing scheme is that hash is a slow cryptographically secure hash function, see https://security.stackexchange.com/a/31846/10727 for more background knowledge. The question is then, given that it is almost zero effort to add a constant value to the application code, and that the application code will typically not be compromised during an SQL Injection Attack, is the following then substantially better than the above? $hashed_password = hash( $pepper . $salt . $password ) where $salt is stored in plaintext in the database, and $pepper is a constant stored in plaintext in the application code (or configuration if the code is used on multiple servers or the source is public). Adding this $pepper is easy -- you're just creating a constant in your code, entering a large cryptographically secure random value (for example 32byte from /dev/urandom hex or base64 encoded) into it, and using that constant in the password hashing function. If you have existing users you need a migration strategy, for example rehash the password on the next login and store a version number of the password hashing strategy alongside the hash. Answer: Using the $pepper does add to the strength of the password hash if compromise of the database does not imply compromise of the application. Without knowledge of the pepper the passwords remain completely secure. Because of the password specific salt you even can't find out if two passwords in the database are the same or not. The reason is that hash($pepper . $salt . $password) effectively build a pseudo random function with $pepper as key and $salt.$password as input (for sane hash candidates like PBKDF2 with SHA*, bcrypt or scrypt). Two of the guarantees of a pseudo random function are that you cannot deduce the input from the output under a secret key and neither the output from the input without the knowledge of the key. This sounds a lot like the one-way property of hash functions, but the difference lies in the fact that with low entropy values like passwords you can effectively enumerate all possible values and compute the images under the public hash function and thus find the value whose image matches the pre-image. With a pseudo random function you cannot do so without the key (i.e. without the pepper) as you can't even compute the image of a single value without the key. The important role of the $salt in this setting comes into play if you have access to the database over a prolonged time and you can still normally work with the application from the outside. 
Without the $salt you could set the password of an account you control to a known value $passwordKnown and compare the hash to the hash of an unknown password $passwordSecret . As hash($pepper . $passwordKnown)==hash($pepper . $passwordSecret) if and only if $passwordKnown==$passwordSecret you can compare an unknown password against any chosen value (as a technicality I assume collision resistance of the hash function). But with the salt you get hash($pepper . $salt1 . $passwordKnown)==hash($pepper . $salt2 . $passwordSecret) if and only if $salt1 . $passwordKnown == $salt2 . $passwordSecret and as $salt1 and $salt2 were randomly chosen for $passwordKnown and $passwordSecret respectively, the salts will never be the same (assuming large enough random values like 256 bits) and you can thus no longer compare passwords against each other.
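A minimal Python sketch of the scheme described in this answer, combining a per-password salt, a slow hash, and a pepper kept outside the database (the constant and iteration count are illustrative placeholders, not recommendations):

import hashlib, os

PEPPER = b'replace-with-32-random-bytes-from-app-config'   # hypothetical; lives in app config, never in the DB

def hash_password(password, iterations=100_000):
    salt = os.urandom(16)                                   # per-password, stored in the database
    dk = hashlib.pbkdf2_hmac('sha256',
                             PEPPER + salt + password.encode('utf-8'),  # hash($pepper . $salt . $password), made slow
                             salt,
                             iterations)
    return salt, dk                                          # only these two go into the database

An attacker who dumps the database via SQL injection gets salts and hashes but not PEPPER, so he cannot even test candidate passwords offline, which is the property argued for above.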
{ "source": [ "https://security.stackexchange.com/questions/3272", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2113/" ] }
3,342
I have an encrypted HDD (dm_crypt). That's why I store my passwords in a simple text file. I usually copy/paste the passwords from it. Ok! Q: If I open this text file then it goes into the memory. So all my passwords in clear text format go "there"... will the passwords be deleted from the memory if I close the text file? Could the memory that I'm using be accessed by others in real time? Are there any better/more secure ways to copy/paste my passwords? E.g. to set that if I press Ctrl+V then it should clear the clipboard? Running Fedora 14.. Or how can I "encrypt my RAM"? Are there any ways to do it? Thank you!
Bluntly put, yes, they could. That's the massively oversimplified answer. The more complicated answer is that you need to understand how your operating system handles memory. You might hear talk of rings, privilege levels etc. Let me explain briefly: When your operating system starts (getting passed the whole 16-bit BIOS thing) it basically has a whole pile of memory (your entire RAM) to work with and does so in a privileged mode, which means it can do whatever it likes to any part of memory. However, x86 architectures since 1980-something before I was born have supported the concept of protected mode, where the CPU hardware provides mechanisms to allow the operating system to launch applications in such a way that their memory addresses are segregated. Thus this protected mode allows the operating system the ability to create a virtual address space for each application. What this means is that an application's concept of memory is not the same as actual physical memory addresses and that the application does not have privileged access to memory. Indeed, applications are directly prevented from modifying each other's memory addresses because they can't see it, except via well defined requests to the operating system (syscalls) for requests for things such as shared memory, where two processes share an address space. (this is a simplification and I'm skipping huge chunks for brevity's sake, but that's the gist - applications have their requests to access memory managed by the OS and hardware). So theoretically, you're OK, right? Not technically true. As I said, operating systems do provide well defined ways to access other processes' memory. In terms of the possibilities, here are how a few might present: If I dump the memory of the application , I can resolve that stored password fairly easily. A debugger or an appropriately designed userland-style rootkit might be able to provide me access to that memory. I'm not an expert on such techniques, but know it can be done under certain circumstances. Finally, the operating system may be self-defeating here. The virtual memory you have access to has some other consequences. Operating systems present it as virtual because they often swap memory in order to ensure there is sufficient RAM available for currently running apps. So an interesting attack might be to cause the system to swap, then crash it and examine the swap partition. Is your swap encrypted too? Finally, I should point out that if your attacker is running code in the kernel via a kernel module, the game is over anyway, since there is nothing stopping them searching your memory space for ascii strings. However, to be realistic: If your kernel is compromised via a rootkit, a keylogger is probably easier to implement than something designed to scan memory in terms of grabbing your passwords. Userland style rootkits involving debuggers for evaluating programs not designed to be debugged (i.e. without debugging symbols etc) are not going to be an easy thing to implement, even if they are theoretically possible. It also isn't easy to exploit this - you, the user, would have to be tricked into executing said editor under the debugger which probably implies social engineering or physical access. My recommendation, however, is that you never store plaintext passwords anywhere. If you need a reminder, I suggest using a partial incomplete prompt that will jog your memory and allow you to deduce the passwords but reasonably prevent other people from doing so, even if they know you. 
This is very far from ideal, but better than plaintext passwords.
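Regarding the copy/paste part of the question, one small mitigation (it does nothing against malware that reads the clipboard, but it limits how long the secret lingers there) is to clear the clipboard shortly after use, which is what many password managers do. A hedged Python sketch, assuming the external xclip tool is installed on the Linux desktop:

import subprocess, time

def copy_then_clear(secret, timeout=10):
    # Put the secret on the X clipboard...
    subprocess.run(['xclip', '-selection', 'clipboard'], input=secret.encode(), check=True)
    time.sleep(timeout)
    # ...then overwrite it with a harmless placeholder after `timeout` seconds
    subprocess.run(['xclip', '-selection', 'clipboard'], input=b'-- cleared --', check=True)

This does not address the underlying issue (the plaintext still transits RAM and the X selection), so the recommendation above to avoid plaintext password files stands.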
{ "source": [ "https://security.stackexchange.com/questions/3342", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2212/" ] }
3,374
Given the recent spate of intrusions into various networks which have included compromise of subscriber identity and contact information, I figured it would be good for us to have a thread on how to spot and react to a "phishing" attempt. While the most common and prominent avenue for phishing is by e-mail, these social engineering attempts can really take on any form. If Oscar has all of Alice's contact information (as may be the case in recent high-profile attacks) he may try to manipulate her via e-mail, phone call, SMS, or even by postal letter. Oscar's attacks can be further augmented by intimate knowledge of Alice's more personal details. (Again, these may have been gained in some recent incidents.) Such details may include credit card numbers, purchase histories, birth date, dependents, "security questions and answers", etc. Still, regardless of the attack vector and complexity, there are a number of features that often set phishing attempts apart from legitimate vendor correspondence. Additionally, there's a number of preventative and reactive measures which can be taken to protect oneself from falling victim to these attacks. What are some "red flags" that are commonly found in phishing attempts? What are some ways Alice can verify the identity of a suspected phisher, if she believes the contact may be legitimate? If a suspect message includes a request for actions to be taken by Alice, and she believes the message may be legitimate, how should she follow up? Again, answers are welcome for all attack vectors which may be used by someone with complete contact information for the target, possibly including: E-mail Phone number Voice call - "vishing" SMS messaging - "smishing" Physical address Postal mail Door-to-door solicitation Note to moderators - This thread might be a good fit for Community Wiki.
Phishing "red flags": Any un-solicited communication regarding any account you have. Certainly, this criteria is the easiest one to have a false-positive hit on, and probably shouldn't be the only clue you act on, but it's also your first clue. Any un-solicited communication regarding any account you don't have. There's definitely something wrong, if it appears that an organization with whom you have no business relationship is contacting you. These require careful consideration, and may necessitate additional defensive actions. Generally, a communication of this type is one of three things: Spam Phishing Evidence of identity theft Un-solicited, or unexpected e-mail attachments. Anymore, I actually get a little irritated by anyone who sends me an attachment in e-mail without advance notice or request. There's so many other ways to share data over the Internet and across intranets these days, that e-mailing it as an attachment is rarely an actual necessity. If there's a form that you must fill out, or document you really need to read, most legitimate organizations will post it on their official website somewhere that you can access it with an appropriate level of security. Requests for you to send your username and/or password, or other personal details. No legitimate organization should be requesting any of this from you, via a contact that they initiated. Also, no legitimate organization will ever ask you for your password via any person-to-person contact. Common phrases used here are "verify your account" or "confirm billing information". Proliferate spelling, grammar, or factual errors. Some phishers are getting better about avoiding this, but it is still a common hallmark of cheap phishing attacks. An overwhelming emphasis on urgency. Phishers often want you to think you must rush to action, so that you might not take enough time to realize their scam. Overly formal, yet very generalized salutations. Stuff like "Mr/Mrs" or "Dear Sir or Madam" or "To Whom It May Concern". Unless this is a message you're expecting, and the tone is appropriate for the context, the unnecessary cordiality is probably just being used to warm you up to buy snake oil. Most legitimate organizations know their audience, and will customize their greeting to either identify you by your first and/or last name or username, or will have a greeting that specifically identifies you as their customer. Anything "too good to be true". You know the old saying. This is also another very general indicator that should set off anyone's alarms, regardless of how the "deal" is conveyed. FROM addresses that don't match the REPLY address. This is another criteria that may be prone to false positives, but should still raise your level of suspicion. Many legitimate organizations that send mass e-mails will more than likely be doing so from an address dedicated for that purpose. So, a legitimate e-mail will probably include either non-e-mail-based follow-up instructions, separate follow-up e-mail addresses, specific instructions for replying to the e-mail (keywords for subject and/or body), and/or a specific notice stating that direct replies to the e-mail will neither be received nor answered. Hyperlinked URLs whose targets do not match the link text. Before you even think about actually clicking on any hyperlink in an e-mail, hover your mouse over it for a second to see where it really sends you. 
If the link text is http://google.com/ but the link actually points you to somewhere else like http://lmgtfy.com/ *, you probably don't want to go there. Such link may appear like this: http://google.com/ (Mouseover to see actual target.) Hyperlinks that use shortened URLs. This criteria may have a lot of false-positive hits, but still warrants some cautionary measures and perhaps a raised level of suspicion. Hyperlinks with very long and complex targets, even to "legitimate" websites. These may possibly be cross-site scripting attacks. * http://lmgtfy.com is actually a benign website, and was only used to provide an example of URL link-text not pointing to where it says it's pointing. Phishing countermeasures: Stop, breathe, and think. No matter what they tell you, don't let yourself get into any rush. If someone is initiating a contact with you, taking time out of your day, they can stand to wait a few minutes (or even hours) while you sort things out for yourself, and decide what you're going to do. Do not offer any information. This is what phishers want. Even if you're not giving them the specific information they're asking for, you may still be giving them something else they can use against you later. Do not open any e-mail attachments. Just. Don't. Do. It. Do not follow any hyperlinks or URLs. Again, just don't. Do your own research. Google. Wiki. Snopes. Repeat. Do not do anything they ask, in the way they want you to do it. If it's a legitimate communication, you'll be able to find your own way of doing what's asked of you without them. For e-mails wanting you to go to a specific URL, instead go to the known-good-and-trusted HTTPS website of the organization and find your way to the requested function from there. For phone, mail, or other interactions, end the conversation and go use Google or the organization's known-good-and-trusted homepage to find (or verify) the correct contact information for follow-up. Do not reply. This goes along with not doing what they ask, how they ask it. Again, if the contact is legitimate, you should be able to follow up without actually answering back to the solicitor themselves. Even if the e-mail appears to come from an address referenced on a legitimate website, do not use the reply function . Instead, use a link or form on the known-good-and-trusted site, or manually fill in the e-mail address on a new message. Just say no. To drugs, and solicitors of every kind. Whatever service they are offering is not one that you need them in order to acquire. If the offer is legitimate, you will be able to find a comparable level of service via your own research, and likely through safer and more secure mechanisms. If they really insist that they need to get credit for the service, take their information and do your own research and validation before doing any business directly with them. Ask a pro. When in doubt, ask someone you trust who's "in the know" about these things. This may even just be part of the "Don't do what they ask." step - the purported organization's help desk (which you'll look up yourself) should definitely be able to tell you if the contact was legitimate.
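The "link text does not match the link target" red flag is one of the few that is easy to check mechanically. A small illustrative Python sketch using only the standard library (real mail clients and anti-phishing filters are considerably more sophisticated):

from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    # Flags anchors whose visible text looks like a URL but differs from the real href
    def __init__(self):
        super().__init__()
        self.href, self.text, self.suspicious = None, '', []
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.href = dict(attrs).get('href', '')
            self.text = ''
    def handle_data(self, data):
        if self.href is not None:
            self.text += data
    def handle_endtag(self, tag):
        if tag == 'a' and self.href is not None:
            shown = self.text.strip()
            if shown.startswith(('http://', 'https://')) and not self.href.startswith(shown):
                self.suspicious.append((shown, self.href))
            self.href = None

checker = LinkChecker()
checker.feed('<a href="http://evil.example/login">http://google.com/</a>')
print(checker.suspicious)   # [('http://google.com/', 'http://evil.example/login')]

The human version of the same check -- hover before you click -- remains the advice given above.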
{ "source": [ "https://security.stackexchange.com/questions/3374", "https://security.stackexchange.com", "https://security.stackexchange.com/users/953/" ] }
3,424
Is OpenSSH using OpenSSL to encrypt traffic? Or something else?
OpenSSH is a program that depends on OpenSSL, the library; specifically, OpenSSH uses the libcrypto part of OpenSSL.
{ "source": [ "https://security.stackexchange.com/questions/3424", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2212/" ] }
3,458
I have always thought that you are not supposed to use a password manager but to keep your passwords in your head, but lately I have thought about the pros and cons of having a password manager. Some areas might be: password length, key logger prevention, entropy between passwords, accessibility. (I'm not asking how to create hard but memorable password! It might be a part of the solution but not the whole question.) Finally, is there any way of combining them: keep a half in the manager and typing the other, to avoid key loggers.
I wrote this last year on the pro's and cons of password managers: Pros: Great balance of convenience and security - people tend to choose simple passwords and the reuse the same password (or base) because there are so many of them and you have to enter them so often. With 1Password or Lastpass you can generate a truly strong password (at least for your critical accounts) but still have the convenience of having it auto-filled or at least available written down on your phone. A real benefit is also in things like secret questions, this is commonly a weak point where a really strong password has a 5 letter dictionary word as a secret question answer. You can now generate strong secret question answers also Portability - the problem with using your browsers save password function is that unless you combine it with something like Google or Firefox sync it is not portable. Even then it is currently not available on your phone (at least not the iPhone, not sure whether the Android browser has Google sync) Secure storage - your sensitive information is encrypted in storage and protected by a master password. This is a lot better than just writing it somewhere or storing in a note or unencrypted spreadsheet Not just for passwords - you can store bank details, insurance numbers, credit cards, passport numbers, etc which can save you time entering in these details and provide you secure access to the details on move. You can also store files like scans of your documents or your private keys Improve your memory - on sites I hardly ever use, and government sites with those complicated usernames I can never remember these details. Launch up the iPhone, 1Password and everything to hand with easy search People also add anti-phishing / anti-malware to this list but that one I don't agree with. You still have to enter your master password which malware can capture, if you have it on your phone and enter the password again it can be captured. If you launch websites from the tool I guess it could be anti-phishing but that's the same as typing it in directly or using your bookmarks Cons: Single point of failure, keys to the kingdom - if you sync your keychain to your phone or have it on your desktop or laptop some could get access to that. If your master password is weak then you lose everything in one go. As far as I'm aware 1Password does not offer a hardware based two factor authentication option for the master password which would reduce the risk of this significantly. Lastpass does offer a using a yubikey as a two factor mechanism but because Lastpass has a web application it can suffer from web application vulnerabilities (e.g. XSS) which could leave your account details and at worst case passwords exposed. Terms and conditions - it is still technically 'writing a password down'. This maybe against the terms and conditions on things like your Internet Banking site. This may reduce or remove any protection you get in case of a fraud. You can always check this and not store the password for these sites Trust in the cloud - it is supposed to be encrypted in storage but if you do synchronize the data some people will never trust that 1Password or Lastpass does not have a backdoor, potentially allowing a malicious or disgruntled employee access. All software has vulnerabilities, again a serious one could allow an attacker access to your data Another option is to use a password vault stored in a hardware encrypted device like an Ironkey. Versions come with a password manager loaded in. 
It is a little bit less convenient as you have to plug it into a USB port and have read access to it, but it is definitely more secure. It mitigates some of the risks highlighted above, it is hardware encrypted and only stored on your device. Also if your Ironkey is on your physical key chain you are far less likely to lose it than your phone or laptop. You can also remotely destroy it if you do manage to lose it. For the online remote destruction you need the enterprise version of the key. The remote destruction is a feature in the management console. When the key is plugged in, it phones home. If the destruction has been activated, at that stage it becomes unusable and all data is effectively lost (believed to be by trashing the decryption keys). There is also an offline mode (similar to an iPhone), where you can set it to auto self destruct after 10 failed master password attempts. Conclusion Overall I believe the pros outweigh the cons. If you have no option for two factor authentication then having a strong password is your only defense. Using a password vault just makes this a lot more practical and convenient. There is no reason why you could not keep half the password in a password manager and remember the rest, it would make it more difficult for a key logger to capture your password, however the trade-off for usability may not be worth it. A better option may be to use two factor for your really sensitive information and a password manager for the rest
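As a side note to the "generate a truly strong password" point above, here is a minimal Python sketch of what a manager's generator conceptually does (length and alphabet are arbitrary illustrative choices):

import secrets, string

def generate_password(length=20):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # roughly 6.5 bits of entropy per character, so about 130 bits at length 20

Passwords like these are exactly the kind you cannot memorize, which is why they only become practical together with a vault.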
{ "source": [ "https://security.stackexchange.com/questions/3458", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2023/" ] }
3,592
I have always enjoyed trying to gain access to things I'm not really supposed to play around with. I found Hack This Site a long time ago and I learned a lot from it. The issue I have with HTS is that they haven't updated their content in a very long time and the challenges are very similar. I'm no longer 13 and I want bigger and more complex challenges. I was thinking about challenges like Cyber Security Challenge and US Cyber Challenge ( @sjp wrote about these on the meta ) Also, are there any big social engineering competitions besides the one at Def Con ? Current list: Wargames: Over The Wire They have lots of small hacking challenges like: analyze the code, simple TCP communication application, crypto cracking. We Chall We Chall is similar to Over The Wire. Lots of challenges. They also have a large list of other sites with similar challenges. Smash The Stack spider.io Downloads: Damn Vulnerable Web Application Google Jarlsberg Exploit Exercises Competitions: DC3 is the DoD's Forensic Challenge. It's an annual competition with different scenarios that you gain points for solving. NetWars Offers tournaments at some conferences. And one longer challenge over 4 months. With scenario challenges. Cyber Security Challenge Needs description US Cyber Challenge Needs description Codegate Quals CRT: iCTF iCTF is a capture the flag contest held once a year. PlaidCTF Hack.lu CTF 2012 Defcon Quals RuCTFe Other lists like this one: http://captf.com/practice-ctf/ http://www.stumbleupon.com/su/1YNSxi/www.brighthub.com/internet/security-privacy/articles/77093.aspx/ http://www.wechall.net/challs Other interesting sites: http://captf.com/ , calendar http://facebook.com/hackercup Please help me add more to the list.
I don't know a good reference to point to for further reading. Thus I will try to list a few time-wasters that I personally enjoy. In the following I will allow myself to differentiate between various styles of hacking competitions. I don't know if this is a canonical approach, but it will probably help explaining the differences between the ones I know: Wargames These games take place on given server, where you start with an ssh login and try to exploit setuid-binaries to gain higher permissions. These games are usually available 24/7 and you can join whenever you want. Over The Wire Smash The Stack Intruded Challenge based competitions These games will present you numerous tasks that you can solve separately. The challenges mostly vary from exploitation, CrackMes, crypto, forensic, web security and more. These games are usually limited to a few days and the team with the most tasks solved is announced the winner. I will list my favorite, since I am quite convinced that you will easily find more of them. Some of the listed have just taken place and others will take place in the following months. Defcon Quals Codegate Quals CSAW CTF (usually during summer) Hack.lu CTF 2011 (end of September this year) and Hack.lu CTF 2010 PlaidCTF Capture The Flag These actually require you to capture and protect "flags". The best known is probably iCTF, which underwent some rule changes within the last years. This game is also limited to a certain time frame. Contestants are typically equipped with a Virtual Machine that they are to connect to a VPN. Your task is to analyze the presented machine, find security bugs, patch them and exploit the bugs on other machines in your VPN. The "flags" are stored and retrieved by a central game-server that checks a team's availability and whether previously stored flags have not been stolen. iCTF (typically in December) CIPHER CTF (will be renewed by new organizers this year) RuCTF and RuCTFe (a Russian CTF and its international version) Other There are also a bunch of downloadable virtual machines available to play offline, which is some kind of mix between 3) and 2) I suppose. Damn Vulnerable Web Application Damn Vulnerable Linux Google Jarlsberg Edit: Tag I have just come across a fifth game-type that I have not seen anywhere else. All teams compete with each other during several rounds and each round is a match between two teams. Phase 1: Both teams get root on a Linux System and try to hide as many back-doors within 15 minutes as possible. After these 15 minutes, the teams swap PCs and try to discover and remove as many back-doors as possible (also with root access). In the third phase, each team gets its server back (without root access) and is supposed to exploit as many back-doors to gain root access again. Remotely exploitable back-doors get bonus points :) It appears that games like this has been carried out during the LinuxTag Linux Conventions in Germany in the last years. The scenario is explained more detailed here (German only!) /Edit I hope this post has not become too confusing due to its length ;) Unordered list of lists of Hacking competitions: http://capture.thefl.ag/practice-ctf/ , Calendar http://www.wechall.net/sites.php http://exploit-exercises.com/ http://www.stumbleupon.com/su/1YNSxi/www.brighthub.com/internet/security-privacy/articles/77093.aspx/
{ "source": [ "https://security.stackexchange.com/questions/3592", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2023/" ] }
3,605
What are the advantages and drawbacks of certificate-based authentication over username and password authentication? I know some, but I would appreciate a structured and detailed answer. UPDATE I am interested as well in knowing what attacks they are prone to, e.g. brute force as mentioned so far, while nothing is mentioned for certificates... what about XSRF? A certificate is expected to have a shorter lifetime and be able to be revoked, while a password would live longer before an admin policy asks to change it...
1. Users are dumb A password is something that fits in the memory of a user, and the user chooses it. Since authentication is about verifying the user's physical identity remotely (from the point of view of the verifier), the user's behavior is necessarily involved in the process -- however, passwords rely on the part of the user which is most notoriously mediocre at handling security, namely his brain. Users simply do not grasp what password entropy is about. I am not blaming them for that: this is a technical subject, a specialization, which cannot realistically become "common sense" any time soon. On the other hand, security of a physical token is much more "tangible" and average users can become quite good at it. Evolutionists would tell you that humans have been positively selected for that for the last million years, because those who could not hold on to their flint tools did not survive enough to have offspring. Hollywood movies can be used as a model of how users think about passwords -- if only because those users go to movies, too. Invariably, the Arch Enemy has a short password and just loves to brag about it and distributes clues whenever he can. And, invariably, a British Secret Agent guesses the password in time to deactivate the fusion bomb which was planted under the Queen's favorite flower bed. Movies project a distorted, exaggerated reality, but they still represent the mental baseline on which average users operate: they envision passwords as providing security through being more "witty" than the attacker. And, invariably, most fail at it. "Password strength" can be somewhat improved by mandatory rules (at least eight characters, at least two digits, at least one uppercase and one lowercase letter...) but those rules are seen as a burden by the users, and sometimes as an insufferable constraint on their innate freedom -- so the users begin to fight the rules, with great creativity, beginning with the traditional writing down of passwords on a sticky note. More often than not, password strengthening rules backfire that way. On the other hand, user certificates imply a storage system, and if that system is a physical device that the user carries around with his house or car keys, then security relies (in part) on how well the average user manages the security of a physical object, and they usually do a good job at it. At least better than when it comes to choosing good passwords. So that's a big advantage of certificates. 2. Certificates use asymmetric cryptography The "asymmetry" is about separating roles. With a password, whoever verifies the password knows at some point the password or password-equivalent data (well, that's not entirely true in the case of PAKE protocols). With user certificates, the certificate is issued by a certification authority, who guarantees the link between a physical identity and a cryptographic public key. The verifier may be a distinct entity, and can verify such a link and use it to authenticate the user, without getting the ability to impersonate the user. In a nutshell, this is the point of certificates: to separate those who define the user's digital identity (i.e. the entity which does the mapping from the physical identity to the computer world) from those who authenticate users. This opens the road to digital signatures which bring non-repudiation.
This particularly interests banks which take financial orders from online customers: they need to authenticate customers (that's money we are talking about, a very serious matter) but they would love to have a convincing trace of the orders -- in the sense of: a judge would be convinced. With mere authentication, the bank gains some assurance that it is talking to the right customer, but it cannot prove it to third parties; the bank could build a fake connection transcript, so it is weaponless against a customer who claims to be framed by the bank itself. Digital signatures are not immediately available even if the user has a certificate; but if the user can use a certificate for authentication then most of the hard work has been done. Also, passwords are inherently vulnerable to phishing attacks, whereas user certificates are not. Precisely because of asymmetry: the certificate usage never involves revealing any secret data to the peer, so an attacker impersonating the server cannot learn anything of value that way. 3. Certificates are complex Deploying user certificates is complex, thus expensive: Issuing and managing certificates is a full can of worm, as any PKI vendor can tell you (and, indeed, I do tell you). Especially the revocation management. PKI is about 5% cryptography and 95% procedures. It can be done, but not cheaply. User certificates imply that users store their private key in some way, under their "exclusive access". This is done either in software (existing operating systems and/or Web browsers can do that) or using dedicated hardware, but both solutions have their own set of usability issues. The two main problems which will arise are 1) the user loses his key, and 2) an attacker obtains a copy of the key. Software storage makes key loss a plausible issue (at the mercy of a failed hard disk), and sharing the key between several systems (e.g. a desktop computer and an iPad) implies some manual operations which are unlikely to be well protected against attackers. Hardware tokens imply the whole messy business of device drivers, which may be even worse. A user certificate implies relatively complex mathematical operations on the client side; this is not a problem for even an anemic Pentium II, but you will not be able to use certificates from some Javascript slapped within a generic Web site. Certificate requires active cooperation from client-side software, and said software tends to be, let's say, ergonomically suboptimal in that matter. Average users can normally learn to use client certificates for a HTTPS connection to a Web site, but at the cost of learning how to ignore the occasional warning popup, which makes them much more vulnerable to some attacks (e.g. active attacks where the attacker tries to feed them its own fake server certificate). On the other hand, password-based authentication is really easy to integrate just about everywhere. It is equally easy to mess up, of course; but at least it does not necessarily involve some incompressible extra costs. Summary User certificates allow for a separation of roles which passwords cannot do. They do so at the expense of adding a horde of implementation and deployment issues, which make them expensive. However, passwords remain cheap by fitting in a human mind, which inherently implies low security. 
Security issues with passwords can be somewhat mitigated by some trickeries (up to and including PAKE protocols) and, most of all, by blaming the user in case of a problem (we know the average user cannot choose a secure password, but any mishap will still be his fault -- that's how banks do it).
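To make the role separation of point 2 concrete, here is a hedged sketch of a challenge-response exchange using the third-party Python "cryptography" package (illustrative only; real client-certificate authentication in TLS is more involved). The point is that the verifier only ever needs the public key, so it can check the user without being able to impersonate him:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
import os

# User side: the private key stays on the user's token or keystore
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()      # this is what a certificate binds to an identity

# Verifier side: send a fresh random challenge, then check the signature with the PUBLIC key only
challenge = os.urandom(32)
signature = private_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())
public_key.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())   # raises if invalid

Contrast this with password verification, where whoever can verify the secret necessarily holds the password or password-equivalent data, as noted at the start of point 2.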
{ "source": [ "https://security.stackexchange.com/questions/3605", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2204/" ] }
3,611
Raw SQL When you're writing SQL -- for anything that takes human input really, a lot of things have been done to avoid the injection. Everyone that's heard of SQL injection knows that (I'm going to use PHP as a sample) doing something like this isn't safe: $sql = "SELECT * FROM `users` . WHERE `userName` = "{$_POST["username"]}" . AND `pass` = "{$_POST["pass"]}";"; Magic Then of course, someone came out with the idea of using "magic escape quotes" to deal with program input that wasn't sanitized correctly and directly put into SQL as a result of bad practices. This didn't really solve the issue with SQL injection, but it did mean all user input got mangled up. Adding slashes So, some people turned off magic quotes. Then, they parsed user input before the point of SQL through addslashes() which in theory escapes all the quotes and your hacker can't do ' OR 1=1 , but even the documentation for addslashes itself says that you shouldn't use addslashes, it says use the database-specific function such as mysql_real_escape_string() , but this is still said to not be enough by some. Adding slashes specific to Database So, we can't use DBMS specific *_real_escape_string , we can't use add slashes , the "magic quotes" thing caused lots of issues, and the web is full of short worded quotes such as: "A dedicated hacker will find a way to jump through your quote-escaping loops, just use the DBAL prepared statements" - John Q any programmer Okay, so that scared me enough to use prepared statements and a DBAL. It didn't really explain anything, but it sounds good because I've heard it a lot. Prepared statements So now we're using PDO, or a DBAL from a framework, or something else that wraps all our SQL and makes sure someone can't run an SQL injection. My question is basically a "why not?", not a "what should I use?". The web's full of people telling you to use this or use that or whatever, but no explanations of why these things had to happen. Direct questions Pointed questions (reminder, I'm asking about SQL, PHP was an example language because of its bad rep around SQL, concepts are universal): Why can't we escape all user input using "magic"? Why wasn't addslashes "good enough"? What's wrong with using DB-specific escape functions, and why was it better than addslashes? Why are prepared statements with frameworks and PDO being hailed as the gold standard of SQL? Why are they better? Why can't I do an SQL injection with these, whereas I COULD have with the previously mentioned means? Can a programmer not somehow manage to still screw this up? What should they look out for? Any other concerns I haven't brought up?
Why can't we escape all user input using "magic"? At the time the magic is applied, it is unknown where the data will end up. So magic quotes end up mangling data that is never written to a database at all; it may just be used in the HTML response sent back to the client. Think of a form that has not been filled in completely and is therefore shown again to the user. With magic quotes, the data entered on the first attempt will now be SQL escaped, which is meaningless on an HTML page. Even worse: on the second submission the data is SQL escaped again. Why wasn't addslashes "good enough"? It has issues with multibyte characters: ' is byte 27, \ is byte 5c, and 뼧 is the two-byte sequence bf 5c -- two bytes, but only one character in a multibyte encoding such as GBK. Since addslashes does not know anything about the character encoding, it converts the input bf 27 to bf 5c 27. If this is read by a program aware of that encoding, it is seen as 뼧 followed by an unescaped '. Boom. There is a good explanation of this issue at http://shiflett.org/blog/2006/jan/addslashes-versus-mysql-real-escape-string What's wrong with using DB-specific escape functions, and why was it better than addslashes? They are better because they ensure that the escaping interprets the data in the same way the database does (see the last question). From a security point of view, they are okay if you use them for every single database input. But they carry the risk that you may forget to call them somewhere. Edit : As getahobby added: Or that you use xxx_escape_string for numbers without adding quotation marks around them in the SQL statement and without ensuring that they are actual numbers by casting or converting the input to the appropriate data type /Edit From a software development perspective they are bad because they make it a lot harder to add support for other SQL database server software. Why are prepared statements with frameworks and PDO being hailed as the gold standard of SQL? Why are they better? PDO is mostly a good thing for software design reasons. It makes it a lot easier to support other database server software. It has an object-oriented interface which abstracts many of the little database-specific incompatibilities. Why can't I do an SQL injection with these [prepared statements], whereas I COULD have with the previously mentioned means? The " constant query with variable parameters " part of prepared statements is what is important here. The database driver will escape all the parameters automatically without the developer having to think about it. Parametrized queries are often easier to read than normal queries with escaped parameters, for syntactic reasons. Depending on the environment, they may be a little faster. Always using prepared statements with parameters is something that can be validated by static code analysis tools. A missing call to xxx_escape_string is not spotted that easily and reliably. Can a programmer not somehow manage to still screw this up? What should they look out for? "Prepared statements" imply that they are constant. Dynamically generating prepared statements - especially with user input - still has all the injection issues.
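For illustration, here is a minimal sketch of the difference in Python (using the standard sqlite3 module purely as a stand-in database; the table, data and payload are made up): string interpolation lets the input rewrite the query, while a parameterized query keeps the input as data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userName TEXT, pass TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

username = "anything' OR 1=1 --"      # classic injection payload
password = "wrong"

# Vulnerable: the payload becomes part of the SQL text itself
unsafe = ("SELECT * FROM users WHERE userName = '%s' AND pass = '%s'"
          % (username, password))
print(conn.execute(unsafe).fetchall())    # returns alice's row despite the wrong password

# Parameterized: the driver passes the values separately from the constant query text,
# so the payload is only ever treated as data, never as SQL
safe = "SELECT * FROM users WHERE userName = ? AND pass = ?"
print(conn.execute(safe, (username, password)).fetchall())    # returns []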
{ "source": [ "https://security.stackexchange.com/questions/3611", "https://security.stackexchange.com", "https://security.stackexchange.com/users/488/" ] }
3,623
Note: One, I am not sure if "synthetic queries" is the right term for the risk I am talking about. Second, though I am considering the 3-tier model of web applications, general answers are welcome for client-server situations where server-side validation is not possible. Currently I have a page where you do certain computations using what is called DHTML. This computation generates a string which has no particular pattern as such. This string is sent to a server-side script using AJAX. Anyone with basic training in these technologies can read the code and realize that the query sent is something like this: http://domain.com/script.php?var=theStringSoGenerated Hoping to exploit a possible flaw, a hacker types in his browser: http://domain.com/script.php?var=aCompletelyRandomString He does this a few times, sees no evident benefit and quits, but in the middle tier the PHP script, completely helpless without any possible validation, updates and inserts the random string into the database, impacting its integrity and wasting resources. Question: How can I protect my application against such attacks?
{ "source": [ "https://security.stackexchange.com/questions/3623", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2137/" ] }
3,630
How to find out that a NIC is in promiscuous mode on a LAN?
DNS test - many packet sniffing tools perform IP-address-to-name lookups to provide DNS names in place of IP addresses. To test this, you place your own network card into promiscuous mode and send packets out onto the network aimed at bogus hosts. If any name lookups for the bogus addresses are seen, a sniffer might be in action on the host performing the lookups. ARP test - when in promiscuous mode, the driver for the network card should check, for unicast packets, that the destination MAC address is that of the network card, but a flawed driver only checks the first octet of the MAC address against the value 0xff to decide whether the packet is broadcast or not. Note that the address for a broadcast packet is ff:ff:ff:ff:ff:ff. To test for this flaw, send a packet with a destination MAC address of ff:00:00:00:00:00 and the correct destination IP address of the host. A Microsoft OS using the flawed driver will respond to it while in promiscuous mode. This probably happens only with the default MS driver. Ether ping test - in older Linux kernels, when a network card is placed in promiscuous mode every packet is passed on to the OS, and some kernels looked only at the IP address in the packets to determine whether they should be processed or not. To test for this flaw, you send a packet with a bogus MAC address and a valid IP address. Vulnerable Linux kernels, with their network cards in promiscuous mode, look only at the valid IP address. To get a response, an ICMP echo request is put inside the bogus packet, leading vulnerable hosts in promiscuous mode to respond. There may be more tests, but the DNS test is, for me, the most reliable.
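As an illustration of the ARP test described above, here is a rough sketch with Python and scapy (run with root privileges; the target address is a placeholder, and the probe only catches stacks whose software MAC filter is flawed in the way described):

from scapy.all import Ether, ARP, srp

def arp_promisc_probe(target_ip, timeout=2):
    # ff:00:00:00:00:00 is not the broadcast address, so a NIC that is NOT in
    # promiscuous mode should drop this frame in hardware; a reply suggests the
    # host is accepting frames it should not see.
    pkt = Ether(dst="ff:00:00:00:00:00") / ARP(op="who-has", pdst=target_ip)
    answered, _ = srp(pkt, timeout=timeout, verbose=False)
    return len(answered) > 0

if __name__ == "__main__":
    ip = "192.168.1.23"   # placeholder: a host on your own LAN
    print("possibly promiscuous" if arp_promisc_probe(ip) else "no response")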
{ "source": [ "https://security.stackexchange.com/questions/3630", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2212/" ] }
3,657
This question concerns the session send and receive keys used in the SSL/TLS protocol. My understanding is that these keys use symmetric encryption (DES, AES, Blowfish, etc.). I'm wondering: if public-private key pairs are superior to symmetric keys regarding key exchange security, why not use asymmetric encryption for the session keys too? This is an extension of an existing question: security of PKI, Certificates, certificate authorities, forward secrecy
3 reasons (now): Asymmetric encryption is slower, much slower, than symmetric encryption. Orders of magnitude slower. Given the same keylength, asymmetric is much weaker than symmetric, bit-for-bit. Therefore, you need a much larger key to provide equivalent protection. This also contributes to the slowness mentioned in 1. (As per @ThomasPornin's comment:) Asymmetric encryption carries with it an increase in size of output. For instance, if you use RSA, encrypted data is at least 10% larger than the cleartext. Symmetric encryption, on the other hand, has a fixed size overhead even when encrypting gigabytes of data.
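If you want to see points 1 and 3 on your own machine, here is a rough sketch using the third-party Python `cryptography` package (numbers vary with hardware and key sizes; the private-key operations, not timed here, are slower still):

import os, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

msg = os.urandom(64)                              # 64-byte payload

priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = priv.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

aes = AESGCM(AESGCM.generate_key(bit_length=128))
nonce = os.urandom(12)   # nonce reuse below is acceptable only because this is a throwaway benchmark

t0 = time.perf_counter()
for _ in range(1000):
    rsa_ct = pub.encrypt(msg, oaep)
t1 = time.perf_counter()
for _ in range(1000):
    aes_ct = aes.encrypt(nonce, msg, None)
t2 = time.perf_counter()

print("RSA-2048 OAEP: %.3f s for 1000 ops, ciphertext %d bytes" % (t1 - t0, len(rsa_ct)))
print("AES-128-GCM:   %.3f s for 1000 ops, ciphertext %d bytes" % (t2 - t1, len(aes_ct)))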
{ "source": [ "https://security.stackexchange.com/questions/3657", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2049/" ] }
3,667
I read the article on Wikipedia describing what a DMZ (demilitarized zone) is on a network, but am still failing to grasp both how it is set up (ie: is it within the main network or sequestered away?) and what its benefits and uses are. Can anyone explain to me why I'd like to have a DMZ on my network, given the following setup: I have around 10 client computer devices on the network, some of which host SSH. I have a single server which hosts SSH, HTTP, and a few other publicly accessible services. For this given use-case, how would I plug in a DMZ, and what would be the benefits?
Reasons why you want a DMZ and the benefits it offers. The general idea is that you put your public-facing servers in the "DMZ network" so that you can separate them from your private, trusted network. The use case is that because your server has a public face, it can be remotely rooted. If that happens, and a malicious party gains access to your server, he should be isolated in the DMZ network and not have direct access to the private hosts (or to a database server, for example, that would be inside the private network and not on the DMZ). How to do it: There are several ways, but the 'book example' is by utilizing two firewalls (of course you can achieve the same result with one firewall and smart configuration, although hardware isolation is nicer). Your main firewall is between the internet and the server, and the second firewall is between the server and the private network. On this second firewall, all access from the server to the private network would ideally be forbidden (of course it would be a stateful firewall, so if you initiate a connection from the private network to the server it would work). So, this is a fairly high-level overview of a DMZ. If you want more technical details please edit your question accordingly.
{ "source": [ "https://security.stackexchange.com/questions/3667", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2374/" ] }
3,668
When I open https://java.com my browser says “untrusted connection” but when I open https://www.java.com my browser says it’s ok. But it's the same site. So my question is: Is this a security bug? It's like having self-signed certs? Several domains struggle with correct HTTPS settings regarding having the "www" prefix or not.
Have a look at the actual error message: Certificate is only valid for www.java.com. It is a server misconfiguration. But in this case it is not a direct security issue because both domains belong to the same company. (It is an indirect one because it teaches people to ignore this kind of error message). Background What happens here is this: You told your browser to visit java.com. But the server answered: I am www.java.com (without aliases) and here is my certificate to prove that. But your browser does not want to talk to www.java.com, it was told to connect to java.com. If both domains are not under the control of the same people, this is an issue (think of <something>.dyndns.org): The connection is encrypted just fine, but you would have a secured connection to the attacker. The attacker could then read and possibly modify it before passing it on to the real server. When the attacker gets the answer, he can again read and modify it before he sends it to you. This is called a man-in-the-middle attack. Therefore this warning is important in the general case. What to do? To be on the safe side you should do this: Look at the domain in the error message. If it is likely that the domain is a valid destination for where you wanted to go (e.g. added or missing "www"), type that domain into the address bar. Do not copy it, because some special characters may look like valid characters, so you could end up elsewhere on a phishing site.
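If you want to reproduce what the browser checks, here is a small sketch using Python's standard ssl module (the hostnames reflect the situation at the time of the question; the site's certificate may have been fixed since):

import socket, ssl

def check(hostname, port=443):
    ctx = ssl.create_default_context()        # hostname verification is on by default
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                names = [v for _, v in cert.get("subjectAltName", ())]
                print(hostname, "OK; certificate covers:", names)
    except ssl.SSLCertVerificationError as e:
        print(hostname, "FAILED:", e.verify_message)

check("www.java.com")   # matched the certificate when the question was asked
check("java.com")       # produced the "untrusted connection" warning back then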
{ "source": [ "https://security.stackexchange.com/questions/3668", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2212/" ] }
3,674
I've received a spam email from one of my friends (well, I'm sure he didn't send it). There's a link in it, and I wondered: what exactly would be the implications of clicking the link (I've not clicked it yet)?
The common reasons for links in spam email are: verification that your email address is valid and that it is read, which makes the email address more valuable for address brokers (the link needs to have some individual part; that can be a number, but it can also just be a unique word from the dictionary). This kind of link may be labeled "unsubscribe". the link may point to a phishing site, pretending to be from a well-known company such as eBay, but just wanting to trick you into entering your username and password for that site (e.g. "your account needs to be verified"). Please note two things: In HTML emails the displayed link text and the actual link target can be distinct. There are some special characters that look like normal ones. the link may point to a website which tries to exploit your browser or plugins to get access to your computer, or trick you into manually executing malicious code (e.g. "get this video codec", "your computer is infected, get anti-virus for free"). the spammer might want to get people to visit his or her website to advertise his products or opinions, manipulate polls, etc. Uncommon: the spammer may try to flood the target with lots of visitors. This is not effective as a distributed denial of service attack because the email is a lot larger than the data sent by the browser to the target server. Reflective DDoS attacks usually use DNS, where a small query with a faked sender address can result in a much larger reply to the target site. But it may be effective for exploiting some pay-per-click advertisement programs. More than one point may be true.
{ "source": [ "https://security.stackexchange.com/questions/3674", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2379/" ] }
3,723
The question of how to balance pragmatism with an absolutist view of security has been discussed here already . But I need the answer to a concrete variant of that question. You're the security expert hired to help an application team with the security problems of their app. One of the members of the team takes the "good enough just isn't good enough" approach, and wants to implement the perfect security system. He's already decided that the platform the product will run on is fundamentally insecure, and will repeat this assertion when there's anyone listening (and even when they'd rather not). Any proposed mitigation strategy is not good enough. He's not interested in identifying and reducing risks to the activity supported by the product; he's interested in absolute security. Sneaky ways out of the problem like the fact that the project has a finite budget and finite expected revenue will not fly: it needs to be done completely or it isn't worth doing. Unfortunately, this person has the ear of the senior management team, so appealing to authority is not going to work. The project has stalled because discussions of the security requirements are just spinning around with no resolution. How would you move things on?
Consider the usual risk management statement: Don't spend 1000$ to protect 100$ Now, it might just be a situation that the execs are not aware that what they want will cost 1000$; more likely that they just don't realize that they're only protecting 100$ worth. If that is the case, you could consider trying to implement a methodology that will provide some hard numbers, to replace the vague feeling of uneasiness that can accompany the fear of the big scary word "SECURITY". If they want to be fiscally responsible, they should try to understand the actual costs, risks, and benefits. I would even try to have that discussion with them without using the word security , which just seems to be confusing and irritating to them. It is also likely that, since they don't understand it, they're worried about doing their due diligence, and/or can be held responsible (either personally or corporately) if anything goes wrong. They need to be shown how to do this effectively, and yet still be "good enough" - not to never be hacked, but to make it a fair tradeoff. Even if they do get hacked, they need plausible proof of having done their diligence, so as not to have their reputation damaged (or other similar fallouts). I recommend using FAIR , which is a quantitative methodology for putting a price tag on specific risks. Also see: "How do you compare risks...?" Either way, this should enable you to change the conversation from the soft, prickly, uneasy feeling of "security", to a hard talk about costs, benefit, and money. Always bring it back to showing them the money . Worst case, if nothing else works out, put together an expensive, phased, multi-year plan. Have it prioritize the important things, as you see them, and delay to later years the issues that you would have preferred to forgo. In most orgs, the later stuff will never get done anyway. And even if it does, this way, you're still getting them to do the right stuff, and they're spending money on the feeling of security - which, sometimes, is important too. Best part is, since it is in phases, you can build into the plan a re-adjustment step, between phases. Use this as a platform for a full security lifecycle... You can keep re-adjusting the unimportant phases as needed, to squeeze in other important bits.
{ "source": [ "https://security.stackexchange.com/questions/3723", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
3,759
Can anyone explain (or provide a link to a simple explanation) of what the Windows "Secure Desktop" mode is and how it works? I just heard about it in the KeePass documentation ( KeePass - Enter Master Key on a Secure Desktop ) and would like to understand it better.
Short answer There are three, separate issues claiming the name of "Secure Desktop": Windows builtin functions like GINA and the Credential Provider Model . Separation of privileged vs unprivileged applications running as the same user (nominally prevent privilege escalation), which may or may not be related to: SwitchDesktop() , which is what KeePass is using and may or may not (I'm not sure) be resistant to DLL Injection. Detailed answer As a quick primer to how Windows GUIs are built, basically everything runs through a function called CreateWindow() (I mean everything, every button, every menu, everything) and is given a hWnd or Window Handle. Modifying these Windows is done via another function, SendMessage() . Here's the catch. As a user mode application, making the right API calls I can fairly easily send messages to other Windows. It's fairly trivial to make buttons disappear from other people's forms. It is a little harder to perform DLL injection and hook the message loop that receives messages (the OS sends Windows messages when things happen to them) but not that much harder. If I can hook those events, I could automatically submit your "yes/no" form. Or, I could change the label from ReallyDodgyVirus.exe to explorer.exe and you'd be none the wiser. Insert : A really good article on the various techniques of getting your code into the address space of a running process. Now, what are KeePass doing? A very brief perusal of the source shows they are using CreateDesktop() , SwitchDesktop() and CloseDesktop() to create a second desktop connected to the physical viewing device you're on. In English, they're asking the kernel to create for them an isolated desktop whose hWnd objects are outside of the findable range of any other application's SendMessage() . I should point out that SwitchDesktop suspends the updating of the UI of the default desktop. I'm not sure if the message loops are also frozen - I suspect not since the desktop is created as a new thread. In this instance, KeePass is drawing the UI, so the execution is not , as I understand it, as NT AUTHORITY/SYSTEM . Instead, the new desktop is created in isolation from basically the rest of the current desktop, which protects it. I'll be happy to be corrected on that. However, see the MSDN for SwitchDesktop : The SwitchDesktop function fails if the desktop belongs to an invisible window station. SwitchDesktop also fails when called from a process that is associated with a secured desktop such as the WinLogon and ScreenSaver desktops. Processes that are associated with a secured desktop include custom UserInit processes. Such calls typically fail with an "access denied" error. I believe this means that these dialogs (screensavers, Windows Logon) are built more deeply into Windows such that they always execute as NT AUTHORITY\SYSTEM and the UserInit process creates the sub processes on valid authentication at the required privilege level. The reason I bring this up is because I believe there are two issues: different desktops and privilege separation. From Mark Russinovich's discussion of the topic of Secure Desktop : The Windows Integrity Mechanism and UIPI were designed to create a protective barrier around elevated applications. One of its original goals was to prevent software developers from taking shortcuts and leveraging already-elevated applications to accomplish administrative tasks. 
An application running with standard user rights cannot send synthetic mouse or keyboard inputs into an elevated application to make it do its bidding or inject code into an elevated application to perform administrative operations. As SteveS says, UAC runs a separate desktop process as NT AUTHORITY\SYSTEM. If you catch UAC in action (consent.exe) via Process Explorer, you will see it running under that account. Escalating privileges as a process I don't have the specifics of, but here is what I think I understand: I believe the process of privilege escalation in the Windows API involves a process running as NT AUTHORITY\SYSTEM (therefore able to execute the new process under whatever privileges it wants to, in this case an Administrator). When an application asks for higher privileges, that question is asked to you on a new desktop locally, to which none of your applications can get either the Desktop Handle or any of the GUI element handles. When you consent, consent.exe creates the process as the privileged user. Thus, the process running as NT AUTHORITY\SYSTEM is a consequence of the need to create a new privileged process, not a method of creating a secure desktop. The fact the desktop is different to the default is what adds security in both cases. I believe what Mark means above is that, in addition to these secure desktops, two things are happening: Your default administrator desktop is in fact running unprivileged, contrary to Windows XP and earlier, and unprivileged and privileged applications now exist on separate desktops (disclaimer: could just be ACLs on the objects in memory, I'm not sure), ensuring that unprivileged code can't access privileged objects. The Windows Logon UI is different again in Vista/7. Clearly, none of these methods will defend you against kernel mode rootkits, but they do prevent privilege escalation and UI integrity compromise by isolating privileged applications, or in the case of KeePass, the sensitive dialog. Edit Having looked harder at the KeePass code, I saw this handy piece of C#: Bitmap bmpBack = UIUtil.CreateScreenshot(); if(bmpBack != null) UIUtil.DimImage(bmpBack); /* ... */ SecureThreadParams stp = new SecureThreadParams(); stp.BackgroundBitmap = bmpBack; stp.ThreadDesktop = pNewDesktop; From this you can see that in fact in order to mimic consent.exe, KeePass takes a screenshot of the background, dims it and creates its new desktop with the background of the old desktop. I therefore suspect the old desktop continues running even while it isn't being rendered. This I think confirms that no magic NT AUTHORITY\SYSTEM action is happening with either KeePass or consent.exe (I suspect consent.exe is doing the same thing UI-wise, it just happens to be launched in the context of NT AUTHORITY\SYSTEM). Edit 2 When I say DLL Injection, I'm specifically thinking of DLL injection to corrupt the UI. DLL Injection remains possible on KeePass as a process, I'm just not sure whether it could be used to influence that secure UI. It could, however, be used to access the memory of the process and its threads, thereby grabbing the entered password pre-encryption. Hard, but I think possible. I'd appreciate someone advising on this if they know.
{ "source": [ "https://security.stackexchange.com/questions/3759", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1901/" ] }
3,772
What sorts of jobs are there, in which organizations, with what sorts of day-to-day responsibilities? What areas are good for folks coming out of school, vs what are good 2nd careers for experienced folks coming from various disciplines?
As niche as "security" seems, it actually encompasses a few main types of roles, and a couple of areas of coverage. These are actually quite different... Common roles: Enterprise IT security department These guys usually deal mostly with policy enforcement, auditing, user awareness, monitoring, maaaaybe some enterprise-wide initiatives (e.g. SIEM, IdM, etc), and an occasional Incident Response. Also probably give a security PoV on purchasing 3rd party products (whether COTS or FOSS), and in any outsourcing RFP. Security team in development group (either in enterprise or in dev shops) Mostly deal with programmer education and training, some security testing (or handling external testing, see below) - this includes both pentesting and reviewing code, maybe defining security features. Some orgs will have the security team also managing risks, participating in threat modeling, etc. External consultant / auditor / security tester This usually covers, in some form, all of the above, most often with an emphasis on penetration testing, code reviews, and auditing for regulatory compliance (e.g. PCI). In addition, serving as the security expert, go-to guys for the other types of organizations, such as supplying all the relevant advice.... therefore usually expected (though not necessarily the case ;-) ) to be more up to date than anyone else. Researcher This can include academic level research, such as cryptologists, and also research departments in some of the larger security vendors, researching and searching for new exploits / viruses / attacks / flaws / mitigation models / etc. These can actually be quite different, vendor research is often treated as product development, whereas academic research - well, I can't really speak to that, since I don't know... Likewise, in all the above there are different areas of expertise, and an expert in one won't necessarily have anything intelligent to say in any other area: Network security, e.g. routers, firewall, network segmentation and architecture, etc. O/S security, which is of course further subdivided according to O/S flavor (i.e. Windows security expert and Linux security experts might not know much about each other's stuff). Application security - i.e. how to program securely (which may be necessary to subdivide according to language, technology, etc.), but also application-layer attacks, e.g. Web attacks, etc. Risk management experts - more focused on the business side, less on the technical Compliance officers - some places have these dedicated, and they're experts on all the relevant regulations and such (note that this is borderline lawyer-like work!) Identity architects - for larger, security conscious orgs, that have complex IdM implementations and the like... Auditing and forensics experts, deal mainly with SIEM/SIM/SOC, and also with investigations after the fact. On top of that, there are some that specialize in building the secure systems (at each level of the stack), and some that spend their time breaking them - and it is not always shared expertise. There are probably even more niche-niches that I'm skipping over, but you're starting to get the picture.... As you can see, what a security guy or gal does on a day to day basis is as wide and varied as the companies in which they work, and the systems which they work on. Most often, this DOES require shifting several hats, and working mostly on short tasks... 
BUT what stays the same (usually) is the requirement to focus on the risks (and threats), whether it's mostly a technical job such as defining firewall rules, or communicating with the business and lawyer types about the organization's current security posture. As to how to get into the field? Ideally, you have some experience (preferably expertise) in some other field, that you can then specialize to security. You used to be a network engineer? Great, start with focusing on network security, and go from there. You're currently a systems administrator? Wonderful, you've probably worked a bit on security already, start learning more in that field. You've been programming since you were a kid, and want to move to security? Fantastic, you should already have been learning about input validation, cryptography, threat mitigation, secure DB access, etc... Learn some more, figure out what you're missing, and then give me a call ;-). And so on... On the other hand, if you have no background and want to START in security, that's tougher - because as I've explained, most often the security guy is expected to be the expert on whatever it is. You can try to join a pentesting team, and grow from there... The important part is to focus on risk management (and, for the technical, threat modeling). I also strongly suggest reading lots of security books and blogs (I enjoy Bruce Schneier's stuff), and also try out OWASP for the application side of things.
{ "source": [ "https://security.stackexchange.com/questions/3772", "https://security.stackexchange.com", "https://security.stackexchange.com/users/453/" ] }
3,779
I would like to export my private key from a Java Keytool keystore, so I can use it with openssl. How can I do that?
Use Java keytool to convert from JKS to P12... Export from keytool 's proprietary format (called "JKS") to standardized format PKCS #12 : keytool -importkeystore \ -srckeystore keystore.jks \ -destkeystore keystore.p12 \ -deststoretype PKCS12 \ -srcalias <jkskeyalias> \ -deststorepass <password> \ -destkeypass <password> ...then use openssl to export from P12 to PEM Export certificate using openssl : openssl pkcs12 -in keystore.p12 -nokeys -out cert.pem Export unencrypted private key: openssl pkcs12 -in keystore.p12 -nodes -nocerts -out key.pem
{ "source": [ "https://security.stackexchange.com/questions/3779", "https://security.stackexchange.com", "https://security.stackexchange.com/users/69/" ] }
3,851
Just wondering if it is possible to create a file which has its md5sum inside it along with other contents too.
Theoretically? Yes. Practically, however, since /any/ change to a file's contents, no matter how minute, causes a drastic change in the checksum (which is how md5 checksums work, after all), you'd need to be able to predict how the checksum will change when you alter the file to include the checksum -- for all intents and purposes this isn't much different from being able to break the md5 hashing algorithm. There's no such thing as "impossible" in cryptography, but the science does acknowledge the concept of "practically undoable" or "statistically improbable" and that's pretty much what you're dealing with here, at the moment.
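A tiny Python sketch makes the circularity visible: writing a digest into the file changes the file's digest, so the embedded value is immediately stale. Making the two match would mean finding a fixed point, which is the "practically undoable" part.

import hashlib

template = b"some file contents\nMD5: %s\n"

# Compute a digest over a first draft of the file, then embed it
embedded = hashlib.md5(template % (b"?" * 32)).hexdigest().encode()
candidate = template % embedded            # the file now "contains its md5sum"

# ...but embedding it changed the bytes, so the real digest differs
actual = hashlib.md5(candidate).hexdigest().encode()
print("embedded:", embedded.decode())
print("actual:  ", actual.decode())
print("match?", embedded == actual)        # virtually always False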
{ "source": [ "https://security.stackexchange.com/questions/3851", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2508/" ] }
3,857
If I'm visiting (just a desktop PC, client side) a site that has a valid HTTPS cert/connection, can that be compromised if I'm using a rogue DNS server (not deliberately - I'm concerned about an attack on the DNS service)? I'm thinking about, e.g., the CA's site (used to check the HTTPS connection) being resolved by my nameserver (the compromised one)?
In order to connect to any website, through https or not, you need the ip address of the site, and you ask your DNS server for it using the domain name of the site. If your DNS server has not cached the answer, it will try to resolve your request by asking a whole series of DNS servers (the root dns server, the top level domain handler ... until the dns server that is authoritative for the domain). An attacker that controls any of those servers can respond to you with a fake IP address for that website, and this is what your browser will try to visit. This IP address in the general case will have a replica of the website hosted, to make it look the same as the original one, or just act as a silent forwarder of your connection to the correct site after capturing what it needs. Now on to more details: If the website is HTTPS protected there will be many pitfalls. The normal website will have a certificate issued that binds details of the domain name to the website, but this is done using asymmetric encryption. What this means is that through the process of the SSL handshake, the website has to prove that it has knowledge of the private key that is associated with the public key in the certificate. Now, the malicious party can very well serve you the original certificate of the website when you try to access the wrong IP under the correct hostname, but he will not have knowledge of the private key so the SSL handshake will never complete. But there are ways for the interceptor to make the whole thing work, I can think of five: 1) The simplest solution is to serve you a self-signed certificate instead of a normal one. This will be issued by the attacker itself. Normally your browser will warn you about that, and if you run a recent browser version the warnings will be all over the place, but users tend to click through that kind of stuff. 2) Another approach, used in the Stuxnet attacks, is to steal the private keys used for a valid certificate from the organization that you want to impersonate. 3) Another solution, that has happened in a couple of cases (we are not talking about the average attacker here) is that he exploits some fault in the registration procedure that certificate authorities (or registration authorities) use, and manages to issue a certificate for a website that does not belong to him. There have been cases where RAs simply did not do enough checks and issued certificates for google.com. 4) Similar to the above: A competent attacker 'hacks' a certificate or registration authority and manages to issue some certificates under whatever name he wants. It happened in May of 2011 (the famous comodo-hack) and July of 2011 (the DigiNotar hack). See more details at How feasible is it for a CA to be hacked? Which default trusted root certificates should I remove? - IT Security . 5) Finally the scariest technique is the one that three letter agencies and similar parties can use: If a government controls a Certificate Authority, in theory it can force it to issue certificates at will, for whatever site. Now think that Certificate Authorities are spread throughout the world, some being in countries where this can seem very possible. An example to watch here is the CA operated by the Emirates Telecommunications Corporation (Etisalat), 60% owned by the United Arab Emirates (UAE) government. Etisalat once rolled out an innocuous looking BlackBerry patch that inserted spyware into RIM devices, enabling monitoring of e-mail.
In addition, if the client still supports the old SSL 2.0 protocol, a MITM can downgrade the SSL connection and use either a weaker symmetric encryption algorithm or a weaker key exchange. So to sum up, if the attacker controls the DNS server he can do very malicious things, but for intercepting SSL encrypted traffic he needs something more than that. And to answer your last question: The CA's site does not need to be resolved each time you visit a site: The website usually serves you the public certificate it uses itself, but it is possible that you get it from the CA instead. This does not change any of the mentioned things above though.
{ "source": [ "https://security.stackexchange.com/questions/3857", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2212/" ] }
3,887
Using a public/private key pair is fairly convenient for logging in to frequented hosts, but if I'm using a key pair with no password, is that any safer (or less safe) than a password? The security around my private key file is paramount, but say my magical private key file was just a list of passwords to various hosts, is there a difference?
My answer is that using public key pairs is a much wiser thing to do than using passwords or lists of passwords. I will focus on things that are not widely known about different forms of SSH authentication, and I see no other answers mentioning them. First of all, you must understand that user authentication is a different and separate process from the establishment of the secure channel. In layman's terms what this means is that first, the public key of the server is used (if accepted!) to construct the secure SSH channel, by enabling the negotiation of a symmetric key which will be used to protect the remaining session, enable channel confidentiality, integrity protection and server authentication. After the channel is functional and secure, authentication of the user takes place. The two usual ways of doing that are by using a password or a public key pair. The password-based authentication works as you can imagine: The client sends his password over the secure channel, the server verifies that this is indeed the password of the specific user and allows access. In the public key case, we have a very different situation. In this case, the server has the public key of the user stored. What happens next is that the server creates a random value (nonce), encrypts it with the public key and sends it to the user. If the user is who he is supposed to be, he can decrypt the challenge and send it back to the server, who then confirms the identity of the user. It is the classic challenge-response model. (In SSHv2 something a bit different but conceptually close is actually used) As you can imagine, in the first case the password is actually sent to the server (unless SSH were using a password challenge-response), in the second your private key never leaves the client. In the imaginary scenario that someone intercepts the SSH traffic, and is able to decrypt it (using a compromised server private key, or if you accept a wrong public key when connecting to the server) or has access to the server or client, your password will be known - with public-private key authentication and the challenge-response model your private details will never fall into the hands of the attacker. So even if one server you connect to is compromised, other servers you use the same key for would not be! There are other advantages of using a public key pair: The private key should not be stored in cleartext on your client PC as you suggest. This of course leaves the private key file open to compromise as an unencrypted password file would do, but it's easier to decrypt (on login) and use the private key. It should be stored encrypted, requiring you to provide a usually long passphrase to decrypt it each time it is used. Of course this means that you will have to provide the long passphrase each time you connect to a server, to unlock your private key - there are ways around that. You can increase the usability of the system by using an authentication agent: This is a piece of software that unlocks your keys for the current session, when you log in to gnome for example or when you first ssh into your client, so you can just type ssh remote-system-ip and log in, without providing a passphrase, and do that multiple times until you log out of your session. So, to sum up, using public key pairs offers considerably more protection than using passwords or password lists, which can be captured if the client, the server or the secure session is compromised.
In the case of not using a passphrase (which shouldn't happen), public key pairs still offer protection against compromised sessions and servers.
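To make the conceptual challenge-response above concrete, here is a toy sketch (not the real SSH protocol - SSHv2 actually has the client sign data - using the third-party Python `cryptography` package):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Client side: generate a key pair; only the public half is ever given to the server.
client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
stored_pubkey = client_key.public_key()          # what the server keeps on file

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server: encrypt a random nonce under the stored public key and send it as a challenge.
nonce = os.urandom(32)
challenge = stored_pubkey.encrypt(nonce, oaep)

# Client: only the holder of the private key can recover the nonce.
response = client_key.decrypt(challenge, oaep)

# Server: compare; note that no long-term secret ever crossed the wire.
print("authenticated:", response == nonce)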
{ "source": [ "https://security.stackexchange.com/questions/3887", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2493/" ] }
3,921
For instance, let's look at a common login system for a website: HTTPS connection is made User submits credentials via POST Server-side code hashes the password and checks if it matches the user name Session is initialized, and a key may be issued to log in again without passwords ("remember me") This is generally the status quo right now, where passwords are all computed on the server. But, if we do the following client-side it's considered a bad practice: HTTPS connection is made JavaScript calculates the hash (may request a user-specific salt) Script sends AJAX with the user/hash values Server-side code checks if the password's hash and user name match Session is initialized, and a key may be issued to log in again without passwords ("remember me") Can someone explain why this other method - doing it client-side (CS) instead of server-side (SS) - is considered bad practice? Both transmit over SSL, both create the secure hash, and both should be considered reliable for authentication with well-written code. Both are susceptible to XSS and other bad design.
What you've described isn't improving the security of the system. It's not a matter of opinion or emotion; security just doesn't work that way. In your example the hash(salt+password) is now your password. If it wasn't sent over HTTPS, then an attacker could just replay that value. Also, you didn't really address OWASP A9, aka "Firesheep"-style attacks.
{ "source": [ "https://security.stackexchange.com/questions/3921", "https://security.stackexchange.com", "https://security.stackexchange.com/users/488/" ] }
3,936
Let's say I want to create a cookie for a user. Would simply generating a 1024-bit string by using /dev/urandom, and checking if it already exists (looping until I get a unique one), suffice? Should I be generating the key based on something else? Is this prone to an exploit somehow?
The short answer is yes. The long answer is also yes. /dev/urandom yields data which is indistinguishable from true randomness, given existing technology. Getting "better" randomness than what /dev/urandom provides is meaningless, unless you are using one of the few "information theoretic" cryptographic algorithms, which is not your case (you would know it). The man page for urandom is somewhat misleading, arguably downright wrong, when it suggests that /dev/urandom may "run out of entropy" and /dev/random should be preferred; the only instant where /dev/urandom might imply a security issue due to low entropy is during the first moments of a fresh, automated OS install; if the machine booted up to a point where it has begun having some network activity then it has gathered enough physical randomness to provide randomness of high enough quality for all practical usages (I am talking about Linux here; on FreeBSD, that momentary instant of slight weakness does not occur at all). On the other hand, /dev/random has a tendency of blocking at inopportune times, leading to very real and irksome usability issues. Or, to say it in fewer words: use /dev/urandom and be happy; use /dev/random and be sorry. ( Edit: this Web page explains the differences between /dev/random and /dev/urandom quite clearly.) For the purpose of producing a "cookie": such a cookie should be such that no two users share the same cookie, and that it is computationally infeasible for anybody to "guess" the value of an existing cookie. A sequence of random bytes does that well, provided that it uses randomness of adequate quality ( /dev/urandom is fine) and that it is long enough. As a rule of thumb, if you have fewer than 2^n users ( n = 33 if the whole Earth population could use your system), then a sequence of n+128 bits is wide enough; you do not even have to check for a collision with existing values: you will not see it in your lifetime. 161 bits fits in 21 bytes. There are some tricks which are doable if you want shorter cookies and still wish to avoid looking for collisions in your database. But this should hardly be necessary for a cookie (I assume a Web-based context). Also, remember to keep your cookies confidential (i.e. use HTTPS, and set the cookie "secure" and "HttpOnly" flags).
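For completeness, generating such a token takes one call in Python (`secrets`/`os.urandom` read from the same kernel CSPRNG as /dev/urandom on Linux):

import secrets

def new_session_token(nbytes=21):
    # 21 bytes = 168 bits, comfortably above the n+128 rule of thumb;
    # the result is URL- and cookie-safe Base64 text.
    return secrets.token_urlsafe(nbytes)

print(new_session_token())    # e.g. '5gq0T...' (28 characters for 21 bytes)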
{ "source": [ "https://security.stackexchange.com/questions/3936", "https://security.stackexchange.com", "https://security.stackexchange.com/users/488/" ] }
3,959
I'm curious if anyone has any advice or points of reference when it comes to determining how many iterations is 'good enough' when using PBKDF2 (specifically with SHA-256). Certainly, 'good enough' is subjective and hard to define, varies by application & risk profile, and what's 'good enough' today is likely not 'good enough' tomorrow... But the question remains, what does the industry currently think 'good enough' is? What reference points are available for comparison? Some references I've located: Sept 2000 - 1000+ rounds recommended (source: RFC 2898) Feb 2005 - AES in Kerberos 5 'defaults' to 4096 rounds of SHA-1. (source: RFC 3962) Sept 2010 - ElcomSoft claims iOS 3.x uses 2,000 iterations, iOS 4.x uses 10,000 iterations, shows BlackBerry uses 1 (exact hash algorithm is not stated) (source: ElcomSoft ) May 2011 - LastPass uses 100,000 iterations of SHA-256 (source: LastPass ) Jun 2015 - StableBit uses 200,000 iterations of SHA-512 (source: StableBit CloudDrive Nuts & Bolts ) Aug 2015 - CloudBerry uses 1,000 iterations of SHA-1 (source: CloudBerry Lab Security Consideration (pdf) ) I'd appreciate any additional references or feedback about how you determined how many iterations was 'good enough' for your application. As additional background, I'm considering PBKDF2-SHA256 as the method used to hash user passwords for storage for a security conscious web site. My planned PBKDF2 salt is: a per-user random salt (stored in the clear with each user record) XOR'ed with a global salt. The objective is to increase the cost of brute forcing passwords and to avoid revealing pairs of users with identical passwords. References: RFC 2898: PKCS #5: Password-Based Cryptography Specification v2.0 RFC 3962: Advanced Encryption Standard (AES) Encryption for Kerberos 5 PBKDF2: Password Based Key Derivation Function v2
You should use the maximum number of rounds which is tolerable, performance-wise, in your application. The number of rounds is a slowdown factor, which you use on the basis that under normal usage conditions, such a slowdown has negligible impact for you (the user will not see it, the extra CPU cost does not imply buying a bigger server, and so on). This heavily depends on the operational context: what machines are involved, how many user authentications per second... so there is no one-size-fits-all response. The wide picture goes thus: The time to verify a single password is v on your system. You can adjust this time by selecting the number of rounds in PBKDF2. A potential attacker can gather f times more CPU power than you (e.g. you have a single server, and the attacker has 100 big PCs, each being twice as fast as your server: this leads to f=200 ). The average user has a password of entropy n bits (this means that trying to guess a user password, with a dictionary of "plausible passwords", will take on average 2^(n-1) tries). The attacker will find your system worth attacking if the average password can be cracked in time less than p (that's the attacker's "patience"). Your goal is to make the average cost to break a single password exceed the attacker's patience, so that they do not even try. With the notations detailed above, this means that you want: v·2^(n-1) > f·p p is beyond your control; it can be estimated with regard to the value of the data and systems protected by the user passwords. Let's say that p is one month (if it takes more than one month, the attacker will not bother trying). You can make f smaller by buying a bigger server; on the other hand, the attacker will try to make f bigger by buying bigger machines. An aggravating point is that password cracking is an embarrassingly parallel task, so the attacker will get a large boost by using a GPU which supports general programming; so a typical f will still be on the order of a few hundred. n relates to the quality of the passwords, which you can somehow influence through a strict password-selection policy, but realistically you will have a hard time getting a value of n beyond, say, 32 bits. If you try to enforce stronger passwords, users will begin to actively fight you, with workarounds such as reusing passwords from elsewhere, writing passwords on sticky notes, and so on. So the remaining parameter is v . With f = 200 (an attacker with a dozen good GPUs), a patience of one month, and n = 32 , you need v to be at least 241 milliseconds (note: I initially wrote "8 milliseconds" here, which is wrong -- this is the figure for a patience of one day instead of one month). So you should set the number of rounds in PBKDF2 such that computing it over a single password takes at least that much time on your server. You will still be able to verify four passwords per second with a single core, so the CPU impact is probably negligible(*). Actually, it is safer to use more rounds than that, because, let's face it, getting 32 bits worth of entropy out of the average user password is a bit optimistic; on the other hand, not many attacks will devote dozens of PCs for one full month to the task of cracking a single password, so maybe an "attacker's patience" of one day is more realistic, leading to a password verification cost of 8 milliseconds. So you need to make a few benchmarks. Also, the above works as long as your PBKDF2/SHA-256 implementation is fast.
For instance, if you use a fully C#/Java-based implementation, you will get the typical 2 to 3 slowdown factor (compared to C or assembly) for CPU-intensive tasks; in the notations above, this is equivalent to multiplying f by 2 or 3. As a comparison baseline, a 2.4 GHz Core2 CPU can perform about 2.3 million elementary SHA-256 computations per second (with a single core), so this would imply, on that CPU, about 20000 rounds to achieve the "8 milliseconds" goal. (*) Take care that making password verification more expensive also makes your server more vulnerable to Denial-of-Service attacks . You should apply some basic countermeasures, such as temporarily blacklisting client IP addresses that send too many requests per second. You need to do that anyway, to thwart online dictionary attacks.
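A small benchmark sketch for picking the round count empirically on your own hardware (Python standard library; adjust the target time to your own patience figure):

import hashlib, os, time

def pbkdf2_time(iterations, password=b"correct horse battery staple"):
    salt = os.urandom(16)
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return time.perf_counter() - start

target = 0.241          # seconds per verification (the "one month of patience" figure above)
iters = 10000
while pbkdf2_time(iters) < target:
    iters *= 2          # coarse doubling search is good enough for a ballpark figure
print("use roughly %d iterations (%.0f ms per check)" % (iters, pbkdf2_time(iters) * 1000))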
{ "source": [ "https://security.stackexchange.com/questions/3959", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2533/" ] }
3,989
Is there a way to find what type of encryption/encoding is being used? For example, I am testing a web application which stores the password in the database in an encrypted format ( WeJcFMQ/8+8QJ/w0hHh+0g== ). How do I determine what hashing or encryption is being used?
Your example string ( WeJcFMQ/8+8QJ/w0hHh+0g== ) is Base64 encoding for a sequence of 16 bytes, which do not look like meaningful ASCII or UTF-8. If this is a value stored for password verification (i.e. not really an "encrypted" password, rather a "hashed" password) then this is probably the result of a hash function computed over the password; the one classical hash function with a 128-bit output is MD5. But it could be about anything. The "normal" way to know that is to look at the application code. Application code is incarnated in a tangible, fat way (executable files on a server, source code somewhere...) which is not, and cannot be, as well protected as a secret key can be. So reverse engineering is the "way to go". Barring reverse engineering, you can make a few experiments to try to make educated guesses: If the same user "changes" his password but reuses the same one, does the stored value change? If yes, then part of the value is probably a randomized "salt" or IV (assuming symmetric encryption). Assuming that the value is deterministic from the password for a given user, if two users choose the same password, does it result in the same stored value? If no, then the user name is probably part of the computation. You may want to try to compute MD5("username:password") or other similar variants, to see if you get a match. Is the password length limited? Namely, if you set a 40-character password and cannot successfully authenticate by typing only the first 39 characters, then this means that all characters are important, and this implies that this really is password hashing, not encryption (the stored value is used to verify a password, but the password cannot be recovered from the stored value alone).
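The first check - decode the Base64 and look at the byte length - and a couple of the "educated guess" experiments can be scripted in a few lines (a sketch; the username/password pair and the candidate formats are hypothetical):

import base64, hashlib

stored = "WeJcFMQ/8+8QJ/w0hHh+0g=="
raw = base64.b64decode(stored)
print(len(raw), "bytes")                  # 16 bytes: MD5-sized, but it could still be anything

# If you know one account's password, try a few obvious unsalted constructions
username, password = "someuser", "guessedpassword"   # hypothetical known pair
candidates = {
    "md5(password)": password,
    "md5(username:password)": username + ":" + password,
    "md5(password+username)": password + username,
}
for label, data in candidates.items():
    if hashlib.md5(data.encode()).digest() == raw:
        print("match:", label)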
{ "source": [ "https://security.stackexchange.com/questions/3989", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2405/" ] }
4,024
Do security questions subvert hard to crack passwords? For example, if a site requires passwords with a certain scheme (length + required character sets) and has a security question, why would someone try cracking the password instead of the security question? I assume most answers to these are shorter and have a smaller variety of characters. For example, "Mother's Maiden Name" (somewhat common question) is typically not as long as a decent password (even after satisfying password requirements) and often contains only letters. When a site requires a security question, is it best to fill it in with a lengthy string containing random characters?
The manner in which security questions are used by a site determines whether they undermine the supposedly stronger authentication mechanism (of using good passwords). Typically, systems that allow access to users after they've answered a security question are weaker than systems that would communicate a (temporary) password to the user via a (different and secure) channel. The previous statement conveys a best practice, and certain systems need not implement all of it; some systems would provide a new password (which need not be changed by a user), and there are other systems that would communicate the password via an insecure channel. Filling a security question with random characters is not necessarily a good approach (although it is better than having a shorter answer with low entropy), for it would make the answer difficult to remember, resulting in a potential lock-out scenario (from which there is often no point of recovery). It should be remembered that security questions are often not changed periodically, unlike passwords. The answer therefore depends on how well the answer is protected (both by the user and the system), how public the answer actually is, and how frequently the question (and answer) can be changed. Reading this related StackOverflow question is recommended, for the answers discuss out-of-band communication, amongst other issues like the potential lock-out scenario.
{ "source": [ "https://security.stackexchange.com/questions/4024", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2560/" ] }
4,206
I'm not sure if this is the right website to ask this but I'm giving it a shot. I got the following message in an email today (it's translated, so sorry for any typos/mistakes): This e-mail and its attachments are confidential and only meant for the addressee. If this e-mail ends up in your inbox by accident, please notify the sender and remove it and its contents from your hard disk drive. Reading, publishing, adapting, forwarding, copying or distributing an e-mail that is not addressed to you is illegal. Is this true? I get that this is the case when regular mail ends up in your mailbox. That makes sense because the addressed person on the envelope is probably not you. But if an e-mail is sent to my e-mail address, that would make me the addressee, no? I don't see how this could be hard.
http://www.economist.com/node/18529895 "Spare us the e-mail yada-yada Automatic e-mail footers are not just annoying. They are legally useless" At least in the EU. And no case has ever succeeded in the US either.
{ "source": [ "https://security.stackexchange.com/questions/4206", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2681/" ] }
4,268
I was recently listening to the Security Now podcast, and they mentioned in passing that the linear congruential generator (LCG) is trivial to crack. I use the LCG in a first year stats computing class and thought that cracking it would make a nice "extra" problem. Are there any nice ways of cracking the LCG that don't involve brute force? I'm not sure if this question is OT, but I wasn't sure where else to post the question. Also, my tags aren't very helpful since I don't have enough rep to create new tags.
Yes. There are extremely efficient ways to break a linear congruential generator. A linear congruential generator is defined by s_{n+1} = a * s_n + b mod m, where m is the modulus. In its simplest form, the generator just outputs s_n as the n-th pseudorandom number. If m is known to the attacker and a, b are not known, then Thomas described how to break it. If none of a, b, m are known, one can still break a linear congruential generator, by first recovering m. It is an interesting exercise to derive how to do so efficiently; it can be done. I'll show how below; don't read on if you prefer to try to figure it out for yourself. To recover m, define t_n = s_{n+1} - s_n and u_n = |t_{n+2} * t_n - t_{n+1}^2|; then with high probability you will have m = gcd(u_1, u_2, ..., u_10). 10 here is arbitrary; if you make it k, then the probability that this fails is exponentially small in k. I can share a pointer to why this works, if anyone is interested. The important lesson is that the linear congruential generator is irredeemably insecure and completely unsuitable for cryptographic use. Added: @AviD will hate me even more :), but here's the math for why this works, for those who requested it. The key idea: t_{n+1} = s_{n+2} - s_{n+1} = (a s_{n+1} + b) - (a s_n + b) = a s_{n+1} - a s_n = a t_n mod m, and t_{n+2} = a^2 t_n mod m, and t_{n+3} = a^3 t_n mod m. Therefore t_{n+2} * t_n - t_{n+1}^2 = 0 mod m, i.e., |t_{n+2} * t_n - t_{n+1}^2| is a random multiple of m. Nifty number theory fact: the gcd of two random multiples of m will be m with probability 6/π^2 ≈ 0.61; and if you take the gcd of k of them, this probability gets very close to 1 (exponentially fast in k). Is that cool, or what?
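To make this concrete, here is a small Python demonstration of the modulus-recovery trick described above. The generator parameters are made up for the example; nothing here is specific to any real implementation.

from math import gcd
from functools import reduce

# A toy LCG with "secret" parameters (chosen arbitrarily for the demo).
a, b, m = 1103515245, 12345, 2**31
state = 42

def lcg():
    global state
    state = (a * state + b) % m
    return state

outputs = [lcg() for _ in range(12)]

# t_n = s_{n+1} - s_n ; u_n = |t_{n+2} * t_n - t_{n+1}^2| is a multiple of m.
t = [outputs[i + 1] - outputs[i] for i in range(len(outputs) - 1)]
u = [abs(t[i + 2] * t[i] - t[i + 1] ** 2) for i in range(len(t) - 2)]

recovered_m = reduce(gcd, u)
print(recovered_m, recovered_m == m)   # prints the modulus and True (with high probability)

Once m is recovered, a and b follow from a couple of consecutive outputs, along the lines of the known-modulus attack Thomas described.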
{ "source": [ "https://security.stackexchange.com/questions/4268", "https://security.stackexchange.com", "https://security.stackexchange.com/users/997/" ] }
4,320
Bit of newbie at the whole forensics stuff - but I'm trying to find out what I should have in place before an attack. While there is no end of material on the internet about forensics from seizure onwards, I'm trying to find out more about how I can make a secure record of events (specifically webserver logs) of adequate quality to be considered as evidence. There are vague references to non-volatile media, hashing and signatures in the stuff I've read; these certainly provide a means for demonstrating a consistent snapshot - but do not intrinsically provide a mechanism for proving the data has not changed between initial capture and the snapshot, e.g. I could take today's log files and do a search/replace to overwrite the date with something else before committing the snapshot. How does an electronic signature prove the data has not been tampered with between the initial capture and the signing - does it just support the signers assertion? Must the integrity verification method be implemented in real-time? E.g. it's not very practical to write data directly to DVD, at best a track at a time is as near to real-time as you can get - but at a huge performance penalty. Any pointers on content suitable for a non-lawyer? (pref with a EU/UK bias).
This is an excellent and important question. There are several important techniques to know about: Remote logging. Rather than store the log entries on the webserver, the webserver should be configured to send each log entry over the network to a log server. The log server should be a custom machine, configured for a single use (log recording only), and hardened. You should carefully minimize who has access to the log server, firewall it off from the outside world, and make sure it is running a minimum of services. This will help prevent tampering with logs after they are generated. Hash chaining. Another important technique is to use cryptographic methods to protect the integrity of log records, once they are stored. A log entry should be a pair X_n = (M_n, T_n), where M_n is the n-th log message received, and where T_n = Hash(X_{n-1}) is the cryptographic hash of the last log entry. Use SHA256 or some other collision-resistant cryptographic hash function for this purpose. What this does is ensure that an attacker can't tamper with entries in the middle of the log without being detected. The attacker can replace all the entries in the "chain", or throw away a suffix and replace them with something new, but this constrains what the attacker can do. Importantly, the crypto ensures that if you write any log entry to write-once media, then the attacker cannot change any earlier log entry without being detected. That's huge. For more, read about secure timestamping using hash chaining. Write-once media. If you have write-once storage media, you can write the logs to the write-once media, and that will prevent tampering after they are written. Unfortunately, the choices here aren't great. As far as I know, they're basically: CD-ROMs and DVD-ROMs are write-once, and allow appending if you use TAO mode, but are also slightly clunky. A line printer, printing onto paper, is a surprisingly effective write-once medium, as long as you keep the paper supply fed. PROMs are write-once, if you use a Manchester encoding. Unfortunately, PROMs have mostly been replaced by EEPROMs these days, so true PROMs are hard to find. Wikipedia also has a list of write-once media. SanDisk has an SD card with 1GB of write-once storage, which is specifically designed for forensics and log storage purposes. Write-once media complement hash chaining extremely well. What you can do is store all log entries on some ordinary storage medium (e.g., a hard drive), and then once an hour store the latest log entry (or just its hash) to write-once storage. This ensures that an attacker who compromises or otherwise tampers with the log server can only modify records going back an hour, as well as (of course) all future records, but not records that were logged more than an hour before the compromise. That's huge. Replication. You can store multiple copies of the log entries on multiple servers, to provide redundancy and protect against tampering. An attacker who breaks into one server and tampers with it won't be able to tamper with the copy on the other servers. For this to be effective, you need the servers to be independent, so that it is unlikely that an attacker can compromise them all. For instance, you might physically locate the servers in different locations or different machine rooms, set up so that no one person has physical access to all locations. You might have separate individuals administering them, so that no one person has log-in access to all of the servers.
You might have one log server running on your system, and a replica hosted remotely (e.g., running in the cloud). Replication complements hash chaining very nicely. For instance, rather than replicating every log entry, you can replicate just 1 out of every m log entries. Also, if you don't want to expose private log data to some of the replicas (e.g., ones running in the cloud), you don't have to store the entire log entry: you can store just the hash T_n. This hash value reveals nothing about the log entries themselves, since cryptographic hash functions are one-way. I'm sure there is lots more that could be said about this, but I hope this helps introduce you to several technological methods that are available to protect logs.
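To illustrate just the hash chaining idea (not a complete logging system), a minimal Python sketch might look like the following; the seed value and serialization are arbitrary choices for the example.

import hashlib

def chain_append(log, message):
    # T_n is the SHA-256 hash of the serialized previous entry
    # (or of a fixed seed for the very first entry).
    prev = repr(log[-1]).encode() if log else b"chain-seed"
    log.append((message, hashlib.sha256(prev).hexdigest()))

def chain_verify(log):
    prev = b"chain-seed"
    for message, tag in log:
        if hashlib.sha256(prev).hexdigest() != tag:
            return False
        prev = repr((message, tag)).encode()
    return True

log = []
for msg in ["GET /index.html 200", "POST /login 401", "GET /admin 403"]:
    chain_append(log, msg)

print(chain_verify(log))                    # True
log[1] = ("POST /login 200", log[1][1])     # tamper with a middle entry
print(chain_verify(log))                    # False: the next entry's hash no longer matches

In a real deployment the latest hash would be periodically signed, replicated or copied to write-once media, as described above; the chain alone only detects tampering, it does not stop an attacker from rewriting the whole log.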
{ "source": [ "https://security.stackexchange.com/questions/4320", "https://security.stackexchange.com", "https://security.stackexchange.com/users/543/" ] }
4,369
Why is HTTP still commonly used, instead of HTTPS, which I would believe is much more secure?
SSL/TLS has a slight overhead. When Google switched Gmail to HTTPS (from an optional feature to the default setting), they found out that CPU overhead was about +1%, and network overhead +2%; see this text for details. However, this is for Gmail, which consists of private, dynamic, non-shared data, and hosted on Google's systems, which are accessible from everywhere with very low latency. The main effects of HTTPS, as compared to HTTP, are: Connection initiation requires some extra network roundtrips. Since such connections are "kept alive" and reused whenever possible, this extra latency is negligible when a given site is used with repeated interactions (as is typical with Gmail); systems which serve mostly static contents may find the network overhead to be non-negligible. Proxy servers cannot cache pages served with HTTPS (since they do not even see those pages). There again, there is nothing static to cache with Gmail, but this is a very specific context. ISPs are extremely fond of caching since network bandwidth is their lifeforce. HTTPS is HTTP within SSL/TLS. During the TLS handshake, the server shows its certificate, which must designate the intended server name -- and this occurs before the HTTP request itself is sent to the server. This prevents virtual hosting, unless a TLS extension known as Server Name Indication is used; this requires support from the client. In particular, Internet Explorer does not support Server Name Indication on Windows XP (IE 7.0 and later support it, but only on Vista and Win7). Given the current market share of desktop systems using WinXP, one cannot assume that "everybody" supports Server Name Indication. Instead, HTTPS servers must use one IP per server name; the current status of IPv6 deployment and IPv4 address shortage make this a problem. HTTPS is "more secure" than HTTP in the following sense: the data is authenticated as coming from a named server, and the transfer is confidential with regards to whoever may eavesdrop on the line. This is a security model which does not make sense in many situations: for instance, when you look at a video from Youtube, you do not really care about whether the video really comes from youtube.com or from some hacker who (courteously) sends you the video you wish to see; and that video is public data anyway, so confidentiality is of low relevance here. Also, authentication is only done relatively to the server's certificate, which comes from a Certification Authority that the client browser knows of. Certificates are not free, since the point of certificates is that they involve physical identification of the certificate owner by the CA (I am not telling that commercial CA price their certificates fairly; but even the fairest of CA, operated by the Buddha himself, would still have to charge a fee for a certificate). Commercial CA would just love HTTPS to be "the default". Moreover, it is not clear whether the PKI model embodied by the X.509 certificates is really what is needed "by default" for the Internet at large (in particular when it comes to relationships between certificates and the DNS -- some argue that a server certificate should be issued by the registrar when the domain is created). In many enterprise networks, HTTPS means that the data cannot be seen by eavesdroppers, and that category includes all kinds of content filters and antivirus software. Making HTTPS the default would make many system administrators very unhappy. 
All of these are reasons why HTTPS is not necessarily a good idea as default protocol for the Web. However, they are not the reason why HTTPS is not, currently, the default protocol for the Web; HTTPS is not the default simply because HTTP was there first.
{ "source": [ "https://security.stackexchange.com/questions/4369", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2742/" ] }
4,388
For example, say the following are HTTPS URLs to two websites by one IP over 5 mins: "A.com/1", "A.com/2", "A.com/3", "B.com/1", "B.com/2". Would monitoring of packets reveal: nothing, reveal only that the IP had visited "A.com" and "B.com" (meaning the DNS only), reveal only that the IP had visited "A.com/1" and "B.com/1" (the first HTTPS request for each site), reveal a complete list of all HTTPS URLs visited, only reveal the IPs of "A.com" and "B.com", or something else? Related Question: can my company see what HTTPS sites I went to? While this question does have additional information, as far as I'm able to tell it does not specifically address the scenario of "reveal only that the IP had visited "A.com/1" and "B.com/1" (the first HTTPS request for each site)" - though the possibility that I'm wrong about this is high, and I'm happy to delete the question if it's a duplicate. NOTE: This is a followup question to an answer that was posted to: Why is HTTPS not the default protocol?
TLS reveals to an eavesdropper the following information: the site that you are contacting the (possibly approximate) length of the rest of the URL the (possibly approximate) length of the HTML of the page you visited (assuming it is not cached) the (possibly approximate) number of other resources (e.g., images, iframes, CSS stylesheets, etc.) on the page that you visited (assuming they are not cached) the time at which each packet is sent and each connection is initiated. (@nealmcb points out that the eavesdropper learns a lot about timing: the exact time each connection was initiated, the duration of the connection, the time each packet was sent and the time the response was sent, the time for the server to respond to each packet, etc.) If you interact with a web site by clicking links in series, the eavesdropper can see each of these for each click on the web page. This information can be combined to try to infer what pages you are visiting. Therefore, in your example, TLS reveals only A.com vs B.com, because in your example, the rest of the URL is the same length in all cases. However, your example was poorly chosen: it is not representative of typical practice on the web. Usually, URL lengths on a particular site vary, and thus reveal information about the URL that you are accessing. Moreover, page lengths and number of resources also vary, which reveals still more information. There has been research suggesting that these leakages can reveal substantial information to eavesdroppers about what pages you are visiting. Therefore, you should not assume that TLS conceals which pages you are visiting from an eavesdropper. (I realize this is counterintuitive.) Added: Here are citations to some research in the literature on traffic analysis of HTTPS: Shuo Chen, Rui Wang, XiaoFeng Wang, Kehuan Zhang. Side-Channel Leaks in Web Applications: a Reality Today, a Challenge Tomorrow , IEEE Security & Privacy 2010. This paper is fairly mind-blowing; for instance, it shows how AJAX-based search suggestions can reveal what characters you are typing, even over SSL. Here is a high-level overview of the paper . Kehuan Zhang, Zhou Li, Rui Wang, XiaoFeng Wang, Shuo Chen. Sidebuster: Automated Detection and Quantification of Side-Channel Leaks in Web Application Development . CCS 2010. Marc Liberatore, Brian Neil Levine. Inferring the Source of Encrypted HTTPS Connections . CCS 2006. George Danezis. Traffic Analysis of the HTTP Protocol over TLS , unpublished. George Dean Bissias, Marc Liberatore, Brian Neil Levine. Privacy vulnerabilities in encrypted HTTPS streams . PET 2005. Qixiang Sun, Daniel R. Simon, Yi-Min Wang, Wilf Russell, Venkata N. Padmanabhan, Lili Qiu. Statistical identification of encrypted web browsing traffic . IEEE Security & Privacy 2002. Andrew Hintz. Fingerprinting websites using traffic analysis . PET2002. Heyning Cheng, Ron Avnur. Traffic analysis of SSL encrypted web browsing . Class project, 1998. Shailen Mistry, Bhaskaran Raman. Quantifying Traffic Analysis of Encrypted Web-Browsing . Class project, 1998.
{ "source": [ "https://security.stackexchange.com/questions/4388", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2742/" ] }
4,440
I have been told that PING presents a security risk, and it's a good idea to disable/block it on production web servers. Some research tells me that there are indeed security risks. Is it common practice to disable/block PING on publicly visible servers? And does this apply to other members of the ICMP family, like traceroute ( wikipedia on security )?
The ICMP Echo protocol (usually known as "Ping") is mostly harmless. Its main security-related issues are: In the presence of requests with a fake source address ("spoofing"), they can make a target machine send relatively large packets to another host. Note that a Ping response is not substantially larger than the corresponding request, so there is no multiplier effect there: it will not give extra power to the attacker in the context of a denial of service attack. It might protect the attacker against identification, though. Honored Ping requests can yield information about the internal structure of a network. This is not relevant to publicly visible servers, though, since those are already publicly visible. There used to be security holes in some widespread TCP/IP implementations, where a malformed Ping request could crash a machine (the "ping of death"). But these were duly patched during the previous century, and are no longer a concern. It is common practice to disable or block Ping on publicly visible servers -- but being common is not the same as being recommended. www.google.com responds to Ping requests; www.microsoft.com does not. Personally, I would recommend letting all ICMP pass for publicly visible servers. Some ICMP packet types MUST NOT be blocked, in particular the "destination unreachable" ICMP message, because blocking that one breaks path MTU discovery, symptoms being that DSL users (behind a PPPoE layer which restricts MTU to 1492 bytes) cannot access Web sites which block those packets (unless they use the Web proxy provided by their ISP).
{ "source": [ "https://security.stackexchange.com/questions/4440", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2785/" ] }
4,441
My understanding is that open source systems are commonly believed to be more secure than closed source systems . Reasons for taking either approach, or combination of them, include: cultural norms, financial, legal positioning, national security, etc. - all of which in some way relate to the culture's view on the effect of having that system open or closed source. One of the core concerns is security. A common position against open source systems is that an attacker might exploit weakness within the system if known. A common position against closed source systems is that a lack of awareness is at best a weak security measure; commonly referred to as security through obscurity . Question is, are open source systems on average better for security than closed source systems? If possible, please cite analysis in as many industries as possible, for example: software , military , financial markets , etc. This question was IT Security Question of the Week . Read the May 25, 2012 blog entry for more details or submit your own Question of the Week.
The notion that open source software is inherently more secure than closed source software -- or the opposite notion -- is nonsense. And when people say something like that it is often just FUD and does not meaningfully advance the discussion. To reason about this you must limit the discussion to a specific project. A piece of software which scratches a specific itch, is created by a specified team, and has a well defined target audience. For such a specific case it may be possible to reason about whether open source or closed source will serve the project best. The problem with pitting all "open source" versus all "closed source" implementations is that one isn't just comparing licenses. In practice, open source is favored by most volunteer efforts, and closed source is most common in commercial efforts. So we are actually comparing: Licenses. Access to source code. Very different incentive structures, for-profit versus for fun. Very different legal liability situations. Different, and wildly varying, team sizes and team skillsets. etc. To attempt to judge how all this works out for security across all software released as open/closed source just breaks down. It becomes a statement of opinion, not fact.
{ "source": [ "https://security.stackexchange.com/questions/4441", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2742/" ] }
4,518
How to estimate the time needed to crack RSA encryption? I mean the time needed to crack RSA encryption with key lengths of 1024, 2048, 3072, 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, and 16384?
See this site for a summary of the key strength estimates used by various researchers and organizations. Your "512-bits in 12μs" is completely bogus. Let's see from where it comes. 1999 was the year when the first 512-bit general factorization was performed, on a challenge published by RSA (the company) and called RSA-155 (because the number consisted in 155 decimal digits -- in binary, the length is 512 bits). That factorization took 6 months. At the Eurocrypt event organized the same year (in May; at that time the 512-bit factorization effort had begun but was not completed yet), Adi Shamir , from the Weizmann Institute, presented a theoretical device called TWINKLE which, supposedly, may help quite a bit in a factorization effort. It should consist in a huge number of diodes flashing at carefully selected frequencies, in a kind of black tube. Shamir brought a custom device which, from 10 meters away, looked like a coffee machine. He asked for people to switch off the light, so that the Eurocrypt attendee could marvel at the four red diodes flashing at invervals of 2, 3, 5 and 7 seconds. Ooh! and Aah! they went, although the actual machine, would it be built, would require a few millions of diodes and frequencies in the 10 or 100 gigahertz . So the idea is fun (at least for researchers in cryptology, who are known to have a strange sense of humor) but has not gone beyond the theoretical sketch step yet. Shamir is a great showman. However, TWINKLE is only "help". The best known factorization algorithm is called the General Number Field Sieve ; the two algorithms which come next are the Quadratic Sieve and the Elliptic Curve Method . A 512-bit number is out of reach of QS and ECM with today's technology, and a fortiori with 1999's technology. GNFS is very complex (mathematically speaking), especially since it requires a careful selection of some critical parameters ("polynomial selection"). So there must be an initial effort by very smart brains (with big computers, but brains are the most important here). Afterward, GNFS consists in two parts, the sieve and the linear reduction . The sieve can be made in parallel over hundreds or thousand of machines, which must still be relatively big (in RAM), but this is doable. The linear reduction involves computing things with a matrix which is too big to fit in a computer (by several orders of magnitude, and even if we assume that the said computer has terabytes of fast RAM). There are algorithms to keep the matrix (which is quite sparse) in a compressed format and still be able to compute on that, but this is hard. In the 512-bit factorization, sieving took about 80% of the total time, but for bigger numbers the linear reduction is the bottleneck. TWINKLE is only about speeding up the sieving part. It does nothing about the linear reduction. In other words, it speeds up the part which is easy (relatively speaking). Even a TWINKLE-enhanced sieving half would be nowhere near 12μs. Instead, it would rather help bringing a four month sieving effort down to, say, three weeks. Which is good, in a scientific way, but not a record breaker, especially since linear reduction dominates for larger sizes. The 12μs figure seems to come from a confusion with an even more mythical beast, the Quantum Computer , which could easily factor big numbers if a QC with 512 "qubits" could be built. 
D-Wave has recently announced a quantum computer with 128 qubits, but it turned out that these were not "real" qubits, and they are unsuitable for factorization (they still can do, theoretically, some efficient approximations in optimization problems, which is great but basically not applicable to cryptography, because cryptographic algorithms are not amenable to approximations -- they are designed so that a single wrong bit scrambles the whole thing). The best "real" QC so far seems to be the prototype by IBM which, as far as I recall, has 5 qubits, enabling it to establish that 15 is equal to 3 times 5. The current RSA factorization record is for a 768-bit integer, announced in December 2009. It took four years and involved the smartest number theorists currently living on Earth, including Lenstra and Montgomery, who have somewhat god-like status in those circles. I recently learned that the selection of the parameters for a 1024-bit number factorization has begun (that's the "brainy" part); the sieving is technically feasible (it will be expensive and involve years of computation time on many university clusters) but, for the moment, nobody knows how to do the linear reduction part for a 1024-bit integer. So do not expect a 1024-bit break any time soon. Right now, a dedicated amateur using the published code (e.g. Msieve) may achieve a 512-bit factorization if he has access to powerful computers (several dozen big PCs, and at least one chock-full of fast RAM) and a few months of free time; basically, "dedicated amateur" means "bored computer science student in a wealthy university". Anything beyond 512 bits is out of reach of an amateur. Summary: in your code, you can return "practically infinite" as cracking time for all key lengths. A typical user will not break a 1024-bit RSA key, not now and not in ten years either. There are about a dozen people on Earth who can, with any credibility, claim that it is conceivable, with a low but non-zero probability, that they might be able to factor a single 1024-bit integer at some unspecified time before year 2020. (However, it is extremely easy to botch an implementation of RSA or of any application using RSA in such a way that what confidential data it held could be recovered without bothering with the RSA key at all. If you use 1024-bit RSA keys, you can be sure that when your application is hacked, it will not be through an RSA key factorization.)
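If your code must display something more informative than "practically infinite", one hedged option is to scale from the published 768-bit effort using the asymptotic cost formula of the GNFS. This is a very rough extrapolation (constant factors, memory limits and the linear-algebra bottleneck are all ignored), not a prediction:

from math import exp, log

def gnfs_work(bits):
    # L-notation complexity of the General Number Field Sieve, constants dropped.
    n = bits * log(2)                  # natural log of a number of that size
    return exp((64 / 9) ** (1 / 3) * n ** (1 / 3) * log(n) ** (2 / 3))

# The RSA-768 factorization was reported as roughly 2000 single-core CPU years.
base_bits, base_core_years = 768, 2000.0

for bits in (1024, 2048, 3072, 4096):
    ratio = gnfs_work(bits) / gnfs_work(base_bits)
    print("%5d bits: ~%.3g core-years (very rough)" % (bits, base_core_years * ratio))

With these assumptions, 1024-bit keys come out around a thousand times harder than 768-bit ones, which matches what the RSA-768 team themselves estimated; the larger sizes are so far beyond reach that the exact figures are meaningless.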
{ "source": [ "https://security.stackexchange.com/questions/4518", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2831/" ] }
4,574
Browsing over this site, many forums, online articles, there's always one specific way we're suggesting to store a password hash: function (salt, pass) { return ( StrongHash(salt + pass) ); } But why this exact way? Why aren't we suggesting to do this? function (salt, pass) { return ( StrongHash( StrongHash(salt) + StrongHash(pass) ) ); } Or even something like this? function (salt, pass) { var data = salt + pass; for (var i = 0; i < 1000; i++) { data += StrongHash(salt + data) }; return (data); } Or some other crazy combination? Why are we specifically saying hash the concatenation of the raw salt and the raw password? The hash of both being hashed seems to be a fairly high-entropy alternative, as does hashing 1000 times as per my third example. Why don't we hash the first one a few more times for entropy's sake? What's so amazing about the first way? By request, examples of this: http://www.codinghorror.com/blog/2007/09/rainbow-hash-cracking.html Password Hashing: add salt + pepper or is salt enough? Stretching a hash, many iterations versus longer input string https://www.aspheute.com/english/20040105.asp https://phpsec.org/articles/2005/password-hashing.html https://msdn.microsoft.com/en-us/library/aa545602.aspx#Y93 https://www.developerfusion.com/article/4679/you-want-salt-with-that/3/ https://ca3.php.net/manual/en/function.hash.php#101987 https://ca3.php.net/manual/en/function.hash.php#89568
Actually, "we" are not recommending any of what you show. The usual recommendations are PBKDF2 , bcrypt or the SHA-2 based Unix crypt currently used in Linux. If the hash function you use is a perfect random oracle then it does not really matter which way you input the salt and the password; only matters the time it takes to process the salt and password, and we want that time to be long, so as to deter dictionary searches; hence the use of multiple iterations. However , being a perfect random oracle is a difficult property for a hash function; it is not implied by the usual security properties that secure hash functions must provide (resistance to collisions and to preimages) and it is known that some widely used hash functions are not random oracles; e.g. the SHA-2 functions suffer from the so-called "length extension attack", which does not make them less secure, but implies some care when using the function in funky password-hashing schemes. PBKDF2 is often used with HMAC for that reason. You are warmly encouraged not to feel creative with password hashing schemes or cryptography in general. Security relies on details which are subtle and which you cannot test by yourself (during tests, an insecure function works just as well than a secure one).
{ "source": [ "https://security.stackexchange.com/questions/4574", "https://security.stackexchange.com", "https://security.stackexchange.com/users/488/" ] }
4,629
Auditd was recommended in an answer to Linux command logging? The default install on Ubuntu seems to barely log anything. There are several examples that come with it (capp.rules, nispom.rules, stig.rules) but it isn't clear what the performance impact of each would be, nor what sort of environment or assumptions each would be best suited for. What would be the best starting point for deploying auditd on, lets say, a web server? This would include an audit.rules file, settings to enable sending the audit log stream to a remote machine in real time, and the simplest of tools to see what has been logged. Next, how about a typical desktop machine? Update : dannysauer notes that for security it is important to start with the goal, and I agree. But my main intent is to spark some more useful explanations of the usage of this tool, and see a worked example of it in action, together with performance and storage implications, etc. If that already exists and I missed it, please point to it. If not, I'm suggesting that an example be created for one of the more common scenarios (e.g. a simple web server, running your stack of choice), where the goal might be to preserve information in case of a break-in to help track back to find out where the penetration started. If there is a more suitable or attainable goal for use in e.g. a small business without a significant IT staff, that would help also.
Auditd is an extraordinarily powerful monitoring tool. As anyone who has ever looked at it can attest, usability is the primary weakness. Setting up something like auditd requires a lot of pretty in-depth thought about exactly what it is that needs auditing on the specific system in question. In the question you decided on a web server as our example system, which is good since it's specific. For sake of argument let's assume that there is a formal division between test/dev web servers and production web servers where web developers do all of their work on the test/dev systems and changes to the production environment are done in a controlled deployment. So making those rather large assumptions, and focusing on the production system, we can get down to work. Looking at the auditd recommendation in the CIS benchmark for RHEL5 we can start building out the following suggested ruleset:
-a exit,always -S unlink -S rmdir
-a exit,always -S stime.*
-a exit,always -S setrlimit.*
-w /etc/group -p wa
-w /etc/passwd -p wa
-w /etc/shadow -p wa
-w /etc/sudoers -p wa
-b 1024
-e 2
This will cause logs to be written out whenever the rmdir, unlink, stime, or setrlimit system calls exit. This should let us know if anyone attempts to delete files or jigger with the times. We also set up specific file watches on the files that define groups, users, passwords, and sudo access. Instead of looking at system calls for each of those, an audit log will be written every time one of those files is either opened with the O_WRONLY or O_RDWR modes, or has an attribute changed. Since we've already made the assumption that we're talking about a production web server, I would recommend adding the line:
-w /var/www -p wa
This will recursively watch all of the files under the /var/www directory tree. Now we can see the reason for the "controlled environment" assumption made earlier. Between monitoring all files in the web root, as well as all unlink or rmdir events, this could be prohibitively noisy in a development environment. If we can anticipate filesystem changes, such as during maintenance windows or deploy events, we can more reasonably filter out this noise. Combining all of this into a single, coherent file, we would want /etc/audit/audit.rules to look like
{ "source": [ "https://security.stackexchange.com/questions/4629", "https://security.stackexchange.com", "https://security.stackexchange.com/users/453/" ] }
4,632
I can put characters in my password for which there are no keys on a keyboard. On Windows, Alt+#### (with the numpad) inserts the character for whatever code you type in. When I put this in a password, does it pretty much guarantee that it will never be brute forced? I'm probably not the first to think of this, but am I right in guessing that attackers will never consider it worth their time to check non-keyboard characters? Is this even something they are aware of? If that is the case, with a single non-keyboard character somewhere in your password you'd never have to worry about keeping the rest of the password strong.
When I put this in a password, does it pretty much guarantee that it will never be brute forced? A brute force attack on a password tends to happen one of two ways: either an attacker obtains a hashed password database, or an attacker attempts to log in to a live system with a username (or other account identifier) and password. A common method of attacking hashed password databases is to use a precomputed set of values and hashes called a rainbow table. See What are rainbow tables and how are they used? A rainbow table is not a pure brute force attack because it uses less than the full domain of possible inputs. Since a true brute force attack uses every possible input, regardless of whether it is an attack on a hashed password database or a live system login, no character set selection or combination of different sets will make any difference to a brute force attack. In practice, true brute force attacks are rare. Rainbow table attacks are more common, and using a value from an uncommon set will be an effective defense against many rainbow table attacks. am I right in guessing that attackers will never consider it worth their time to check non-keyboard characters? Is this even something they are aware of? There are attackers out there who will use uncommon input sets to try and attack passwords, but I suspect they are rare. Any reasonably sophisticated attacker has thought about things like non-keyboard characters, and most make an economic decision to go after the easier targets. There are plenty of targets with poor and weak passwords, so attackers design attacks for these weak passwords. So, yes, many (but not all) attackers consider strong passwords not 'worth their time'. with a single non-keyboard character somewhere in your password you'd never have to worry about keeping the rest of the password strong. No. Strength is a measure of resistance to attacks. If your password is a single non-keyboard character it is weaker than a password of 33 lower case characters. The length in bits is an important measure of password strength. Bits instead of characters, because cryptographic computations like hashes are done in bits, not in characters. A character set, like the set of non-keyboard characters, is only one element in making strong passwords; it is not strong by itself.
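To put "length in bits" into numbers, here is a quick back-of-the-envelope calculation; the alphabet sizes are illustrative, and the figures only apply to passwords chosen uniformly at random:

from math import log2

def entropy_bits(alphabet_size, length):
    # Bits of entropy for a password drawn uniformly at random from the alphabet.
    return length * log2(alphabet_size)

print(entropy_bits(26, 33))      # ~155 bits: 33 random lower-case letters
print(entropy_bits(95, 8))       # ~53 bits: 8 characters from printable ASCII
print(entropy_bits(65000, 1))    # ~16 bits: one character picked from tens of
                                 # thousands of "non-keyboard" code points

A single exotic character adds roughly the same amount of work for the attacker as two or three random keyboard characters, no more; it cannot carry a weak password on its own.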
{ "source": [ "https://security.stackexchange.com/questions/4632", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2842/" ] }
4,637
A question on Skeptics.SE asks whether current DRM techniques effectively prevent pirating: Is DRM effective? The question for IT Security is: Can DRM be made effective, and are there any examples? One approach that has been discussed here leverages hardware support for Trusted Computing - see Status of Trusted Computing and Remote Attestation deployment
Firstly, I think the skeptics answer pretty much covers it: DRM annoys people. Some of us actively avoid purchasing anything with it. In terms of software, this is pretty much impossible. One DRM scheme might be to use public key encryption with content encrypted with symmetric keys (performance of symmetric ciphers being vastly superior to most pk ones). You encrypt the content key with the private key and the software can then decrypt that key with the public key, and decrypt the content. Which works, except that I'm just going to step your program through a debugger or disassemble your code until I find your key, decrypt your content and the game's up. Crypto is for transmission over insecure networks, not for decrypting securely and in isolation on hostile systems. So, you could send lots of different keys over the internet for different parts of the content. Well, your paying customers are likely to experience problems with this, but, I can just automate the debugging process, or hook your receive events, or whatever. Either way, I can grab those keys. It isn't easy, but it can be done. So the next stage is to prevent me using my debugger or hooking your system calls in some way, which is where you start writing rootkits. As it happens, I can take the disk offline and examine/disable your rootkit, or modify it so your software believes it is secure. In fact, this will make it easier for me to identify what you're protecting. There is another case, one where the OS is complicit and provides some form of secure container. If it is possible for me to load code into ring 0, this security becomes irrelevant. If not, if I can clone your microkernel core and modify it to allow me to load code into ring 0, this security becomes irrelevant again. At this point you have to start using hardware controls. Simply put, since I can modify the operating system any way I please, you probably need hardware-implemented DRM that I have no chance of modifying or reading. You'd need your crypto to happen on-hardware such that it is impossible to read the decrypted data from the operating system. You'd need the public key I mentioned above to reside on that hardware, not on the OS. At this point, you've probably defeated me personally, but I am sure there are people capable of modifying their HDMI cables (or whatever) to split the data out onto the display and to another device such as storage. Also, how you store your keys securely on your device is going to be an issue. You'd need to encrypt them! Otherwise, I'm just going to attach your storage device to an offline system. And store the keys... wait... see the pattern? Once you have physical access, the game is up. I don't think DRM is technically possible. Whatever methods you employ, there will always be someone with sufficient skill to undo it, since at some level that protected content must be decrypted for their viewing. Whether they have the motivation to is another matter. From a software engineering perspective, getting it right, whilst not disrupting your users, allowing them to easily move their content to authorised devices, supporting new devices... all nightmares. Who's going to buy your content, when you haven't got Windows 8 support ready for launch? Does your content work on my Windows XP box too? What do you mean you don't support it?! If you use hardware, you have a deployment issue. Finally, DRM is just deeply unpopular.
{ "source": [ "https://security.stackexchange.com/questions/4637", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2915/" ] }
4,641
Why are people saying that the X Window System is not secure? The OpenBSD team succeeded with privilege separation in 2003 ; why didn't the "Linux developers" do this? To be clear: What security design flaws does X have? Why don't the Linux developers separate privileges in X?
This is a poorly phrased question. For instance, it does not define what is meant by "secure". That makes it harder to provide a useful answer. Here are three possible security concerns, and how X11 fares: Isolation between apps. X11 does not isolate apps from each other. If one app is malicious, it can log all keystrokes , tamper with other apps windows, steal the contents of copy/paste buffers, inject keystrokes into other windows, etc. (Windows has similar security properties.) Preventing privilege escalation. X11 apps run as a non-root user. However, on most platforms the X11 drivers run as root, so they can access the display hardware. This introduces the risk that a malicious app might be able to exploit some security vulnerability in the X11 code and use it to become root. This is a serious risk, because X11 is a complex system with a tremendous amount of code, and all it takes is one security vulnerability anywhere in that code to make a privilege escalation attack possible. This is indeed a concern. The question refers to privilege separating the X11 code. I do not know how easy or hard this is to do, or how effective OpenBSD's attempt is. However, the aim of privilege separation is to reduce the likelihood of such privilege escalation vulnerabilities. Enabling remote attacks. If I run X11 on my Linux machine, does that make it easy (or possible) for remote attackers to "hack" my machine? The answer is no. Remote attackers have no way to access or talk to X11, so running X11 on my machine does not make my machine insecure. All in all, I would say that X11 does pose some security risks, but they are relatively minor, compared to the risks you are already accepting when you use any desktop OS. In the desktop world, every app you run already must be completely trusted (since any one rogue app you run has access to all your files and everything, and can ruin your life); X11 does not make this fact any worse. Therefore, I would not hesitate to use X11, at least not on security grounds. If you find X11 useful, go ahead and use it.
{ "source": [ "https://security.stackexchange.com/questions/4641", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2212/" ] }
4,667
In recent days one could frequently read about attacks from Anonymous and LulzSec against different platforms like Sony or HBGary etc. Yesterday for example they DDoS'ed soca.gov.uk and jhw.gov.cn. My question is: how did this work? Since the PSN is a big network that normally handles a lot of traffic, I'm wondering how much power their attacks must have. Do they just use their own machines and servers? Why can't anybody find out who they are? The packets have to come from somewhere. Or did they first conquer a lot of public machines (of people who are not involved in these organizations but have some kind of malware on their PCs) and let these machines do the job? And what about the attacks themselves? Is it always the same, something like ping floods, or does it depend on the target itself, searching for very expensive reactions on the target machines? Can anyone please explain these techniques to someone never involved in DoS/DDoS? Thanks in advance! Disclaimer: I don't want to DoS/DDoS anything, but I want to understand how Anonymous or LulzSec or anyone else does it and get an idea of their power.
Anonymous tries to talk people into supporting their DDoS actions by installing a tool on their computer. It has a botnet mode which allows the leaders to define the target for all the drones. In other words: Anonymous uses social engineering instead of technical vulnerabilities to distribute their botnet client. This tool just generates a lot of direct requests, so the IP addresses will show up in the log files of the target. There have been a considerable number of arrests of people taking part in the attacks in a couple of countries in Europe according to media reports: e.g. England, Spain, France, the Netherlands, Turkey. This is noteworthy because arrests normally get very little media attention in Europe compared to the USA. In general there are roughly two types of DoS vulnerabilities: the network connection or firewall may be too small to handle the number of packets, or the application may require too many resources to handle specific requests. Simple flooding: The first type is exploited by sending too much data, for example using a botnet. Sometimes IP spoofing is used to send small requests to a large number of innocent third parties which will send a larger answer back. A commonly used example is DNS queries. DoS vulnerabilities: The second type is more sophisticated. It exploits specific weaknesses. On the network layer for example the attacker may send a huge number of "requests to establish a connection" (TCP SYN flood), but never complete the handshake. This causes the target to allocate a lot of memory to store those connections in preparation. Using SYN cookies is a countermeasure. On the application layer there are usually some operations that take much more resources than average. For example web servers are optimized to serve static content and they can do this really fast for many people. But a website may have a search function which is pretty slow compared to static pages. This is perfectly fine if only a few people use the search function from time to time. But an attacker can specifically target it. Another operation that is usually pretty slow is the login, because it requires a number of database operations: counting the number of recently failed logins from the same IP address, counting the number of recently failed logins for the username, validating username and password, checking account ban status. As a countermeasure the application may support a heavy-load mode, which disables resource intensive operations. A famous example of this was Wikipedia in the early days, although the high load was caused by normal users because of its sudden popularity.
{ "source": [ "https://security.stackexchange.com/questions/4667", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2471/" ] }
4,687
Scenario: a database of hashed and salted passwords, including salts for each password, is stolen by a malicious user. Passwords are 6-10 chars long and chosen by non-technical users. Can this malicious user crack these passwords? My understanding is that MD5 and SHA-1 are not safe anymore as GPU assisted password recovery tools can calculate billions of these hashes per second per GPU. What about SHA-256 or SHA-512? Are they safe currently? What about in a few years?
The question doesn't state how many rounds of hashing are performed. And the whole answer hinges on that point. All hash functions are unsafe if you use only one iteration. The hash function, whether it is SHA-1, or one of the SHA-2 family, should be repeated thousands of times. I would consider 10,000 iterations the minimum, and 100,000 iterations is not unreasonable, given the low cost of powerful hardware. Short passwords are also unsafe. 8 characters should be the minimum, even for low value targets (because users reuse the same password for multiple applications). With a $150 graphics card, you can perform 680 million SHA-1 hash computations per second. If you use only one round of hashing, all 6-character passwords can be tested in a little over 15 minutes (that's assuming all 94 printable ASCII characters are used). Each additional character multiplies the time by 94, so 7 characters requires one day, 8 characters requires 103 days on this setup. Remember, this scenario is a 14-year-old using his GPU, not an organized criminal with real money. Now consider the effect of performing multiple iterations. If 1,000 iterations of hashing are performed, the 6-character password space takes almost 12 days instead of 15 minutes. A 7-character space takes 3 years. If 20,000 iterations are used, those numbers go up to 8 months and 60 years, respectively. At this point, even short passwords cannot be exhaustively searched; the attacker has to fall back to a dictionary of "most likely" passwords.
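The arithmetic behind those figures is easy to reproduce; here is a rough sketch, using the 680 million SHA-1 hashes per second quoted above (the rate obviously varies with hardware and algorithm):

def crack_time_days(length, alphabet=94, hashes_per_second=680e6, iterations=1):
    # Worst case: exhaustively search every password of the given length.
    candidates = alphabet ** length
    seconds = candidates * iterations / hashes_per_second
    return seconds / 86400

for length in (6, 7, 8):
    for iters in (1, 1000, 20000):
        print("%d chars, %5d iterations: %.3g days"
              % (length, iters, crack_time_days(length, iterations=iters)))

The output reproduces the numbers above: roughly 0.01 days (about 17 minutes) for 6 characters at one iteration, about 12 days at 1,000 iterations, and so on, with every extra character multiplying the figure by 94.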
{ "source": [ "https://security.stackexchange.com/questions/4687", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2955/" ] }
4,704
Where I work I'm forced to change my password every 90 days. This security measure has been in place in many organizations for as long as I can remember. Is there a specific security vulnerability or attack that this is designed to counter, or are we just following the procedure because "it's the way it has always been done"? It seems like changing my password would only make me more secure if someone is already in my account . This question was IT Security Question of the Week . Read the Jul 15, 2011 blog entry for more details or submit your own Question of the Week.
The reason password expiration policies exist is to mitigate the problems that would occur if an attacker acquired the password hashes of your system and were to break them. These policies also help minimize some of the risk associated with losing older backups to an attacker. For example, if an attacker were to break in and acquire your shadow password file, they could then start brute forcing the passwords without further accessing the system. Once they know your password, they can access the system and install whatever back doors they want unless you happen to have changed your password in the time between the attacker acquiring the shadow password file and when they are able to brute force the password hash. If the password hash algorithm is secure enough to hold off the attacker for 90 days, password expiration ensures that the attacker won't gain anything of further value from the shadow password file, with the exception of the already obtained list of user accounts. While competent admins are going to secure the actual shadow password file, organizations as a whole tend to be more lax about backups, particularly older backups. Ideally, of course, everyone would be just as careful with the tape that has the backup from 6 months ago as they are with the production data. In reality, though, some older tapes inevitably get misplaced, misfiled, and otherwise lost in large organizations. Password expiration policies limit the damage that is done if an older backup is lost, for the same reason that they mitigate the compromise of the password hashes from the live system. If you lose a 6-month-old backup, but you are encrypting the sensitive information and all the passwords have expired since the backup was taken, you probably haven't lost anything but the list of user accounts.
{ "source": [ "https://security.stackexchange.com/questions/4704", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1369/" ] }
4,781
On the surface bcrypt, an 11 year old security algorithm designed for hashing passwords by Niels Provos and David Mazieres, which is based on the initialization function used in the NIST approved blowfish algorithm seems almost too good to be true. It is not vulnerable to rainbow tables (since creating them is too expensive) and not even vulnerable to brute force attacks. However 11 years later, many are still using SHA2x with salt for storing password hashes and bcrypt is not widely adopted. What is the NIST recommendation with regards to bcrypt (and password hashing in general)? What do prominent security experts (such as Arjen Lenstra and so on) say about using bcrypt for password hashing?
Bcrypt has the best kind of repute that can be achieved for a cryptographic algorithm: it has been around for quite some time, used quite widely, "attracted attention", and yet remains unbroken to date. Why bcrypt is somewhat better than PBKDF2 If you look at the situation in details, you can actually see some points where bcrypt is better than, say, PBKDF2 . Bcrypt is a password hashing function which aims at being slow. To be precise, we want the password hashing function to be as slow as possible for the attacker while not being intolerably slow for the honest systems . Since "honest systems" tend to use off-the-shelf generic hardware (i.e. "a PC") which are also available to the attacker, the best that we can hope for is to make password hashing N times slower for both the attacker and for us. We then adjust N so as not to exceed our resources (foremost of which being the user's patience, which is really limited). What we want to avoid is that an attacker might use some non-PC hardware which would allow him to suffer less than us from the extra work implied by bcrypt or PBKDF2. In particular, an industrious attacker may want to use a GPU or a FPGA . SHA-256, for instance, can be very efficiently implemented on a GPU, since it uses only 32-bit logic and arithmetic operations that GPU are very good at. Hence, an attacker with 500$ worth of GPU will be able to "try" many more passwords per hour than what he could do with 500$ worth of PC (the ratio depends on the type of GPU, but a 10x or 20x ratio would be typical). Bcrypt happens to heavily rely on accesses to a table which is constantly altered throughout the algorithm execution. This is very fast on a PC, much less so on a GPU, where memory is shared and all cores compete for control of the internal memory bus. Thus, the boost that an attacker can get from using GPU is quite reduced, compared to what the attacker gets with PBKDF2 or similar designs. The designers of bcrypt were quite aware of the issue, which is why they designed bcrypt out of the block cipher Blowfish and not a SHA-* function. They note in their article the following: That means one should make any password function as efficient as possible for the setting in which it will operate. The designers of crypt failed to do this. They based crypt on DES, a particularly inefficient algorithm to implement in software because of many bit transpositions. They discounted hardware attacks, in part because crypt cannot be calculated with stock DES hardware. Unfortunately, Biham later discovered a software technique known as bitslicing that eliminates the cost of bit transpositions in computing many simultaneous DES encryptions. While bitslicing won't help anyone log in faster, it offers a staggering speedup to brute force password searches. which shows that the hardware and the way it can be used is important. Even with the same PC as the honest system, an attacker can use bitslicing to try several passwords in parallel and get a boost out of it, because the attacker has several passwords to try, while the honest system has only one at a time. Why bcrypt is not optimally secure The bcrypt authors were working in 1999. At that time, the threat was custom ASIC with very low gate counts. Times have changed; now, the sophisticated attacker will use big FPGA, and the newer models (e.g. the Virtex from Xilinx) have embedded RAM blocks, which allow them to implement Blowfish and bcrypt very efficiently. Bcrypt needs only 4 kB of fast RAM. 
While bcrypt does a decent job at making life difficult for a GPU-enhanced attacker, it does little against an FPGA-wielding attacker. This prompted Colin Percival to invent scrypt in 2009; this is a bcrypt-like function which requires much more RAM. It is still a new design (only two years old) and nowhere near as widespread as bcrypt; I deem it too new to be recommended on a general basis. But its career should be followed.

(Edit: scrypt turned out not to fully live up to its promises. Basically, it is good for what it was designed to do, i.e. protect the encryption key for the main hard disk of a computer: this is a usage context where the hashing can use hundreds of megabytes of RAM and several seconds worth of CPU. For a busy server that authenticates incoming requests, the CPU budget is much lower, because the server needs to be able to serve several concurrent requests at once, and not slow down to a crawl under occasional peak loads; but when scrypt uses less CPU, it also uses less RAM; this is part of how the function is internally defined. When the hash computation must complete within a few milliseconds of work, the RAM amount used is so low that scrypt becomes, technically, weaker than bcrypt.)

What NIST recommends

NIST has issued Special Publication SP 800-132 on the subject of storing hashed passwords. Basically they recommend PBKDF2. This does not mean that they deem bcrypt insecure; they say nothing at all about bcrypt. It just means that NIST deems PBKDF2 "secure enough" (and it certainly is much better than a simple hash!). Also, NIST is an administrative organization, so they are bound to just love anything which builds on already "Approved" algorithms like SHA-256. On the other hand, bcrypt comes from Blowfish, which has never received any kind of NIST blessing (or curse). While I recommend bcrypt, I still follow NIST in that if you implement PBKDF2 and use it properly (with a "high" iteration count), then it is quite probable that password storage is no longer the worst of your security issues.
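To make the tuning knobs concrete, here is a minimal Python sketch of both approaches; it is only illustrative, and it assumes the third-party bcrypt package is available, while PBKDF2 comes from the standard library. The cost factor and the iteration count are the parameters you adjust to keep hashing slow for the attacker but tolerable for your users.

```python
# Minimal sketch: bcrypt hashing/verification (third-party "bcrypt" package assumed)
# and, for comparison, PBKDF2-HMAC-SHA256 from the standard library.
import os
import hashlib
import bcrypt

password = b"correct horse battery staple"

# bcrypt: the cost factor (rounds) controls how slow hashing is.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
assert bcrypt.checkpw(password, hashed)

# PBKDF2: the iteration count plays the same tuning role as bcrypt's cost factor.
salt = os.urandom(16)
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
print(hashed, derived.hex())
```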
{ "source": [ "https://security.stackexchange.com/questions/4781", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3118/" ] }
4,830
I thought How can a system enforce a minimum number of changed characters... would answer my question, but it seems this is a different case. When I sign on to my online banking account, I'm prompted for three random digits from a four digit PIN, and three random characters from my (long, random) password. From my limited understanding of all this, the bank is unable to create my password hash from this login because they don't have my whole password. So are they storing my password in cleartext and comparing individual characters? Is it a decent balance of convenience/security? Prompted to ask this question by this blog post: Who cares about password security? NatWest don't
Whilst I don't know explicitly how banks handle this requirement, an alternate process to the one that @rakhi mentions would be to use an HSM and reversible encryption. The idea is that the full password would be stored in the database encrypted using a symmetric cipher (eg, AES). Then when the password characters are passed to the application they are fed into the HSM along with the encrypted password. The HSM could then decrypt the password and confirm that the 3 characters are as expected, returning a pass/fail response to the application. So at no point is the password held in the clear (apart from in the HSM which is considered secure). This would tie up with the way that PIN encryption can be handled by ATM networks (eg, symmetric encryption and HSMs)
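As a rough sketch of the decrypt-and-compare logic described above (purely illustrative: in a real deployment the key and the comparison stay inside the HSM, the AES-GCM choice and the function name here are assumptions, not how any particular bank does it):

```python
# Hypothetical sketch of the decrypt-and-compare step. In practice the key
# never leaves the HSM; this only illustrates the logic it would perform.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def check_partial_password(key, stored_blob, positions, supplied_chars):
    nonce, ciphertext = stored_blob[:12], stored_blob[12:]
    password = AESGCM(key).decrypt(nonce, ciphertext, None).decode()
    # Compare only the requested character positions, e.g. 3 of them.
    return all(password[p] == c for p, c in zip(positions, supplied_chars))
```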
{ "source": [ "https://security.stackexchange.com/questions/4830", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3119/" ] }
4,831
Possible Duplicate: What security resources should a white-hat developer follow these days? A colleague of mine already has some understanding of security fundamentals but still needs "get in sync" with what is happening in the international security scene today. What online resources should I suggest? My suggestions so far have been: Follow developing stories by regularly reading something like SANS Newsbytes, Cryptogram etc. Follow a number of "Security Rock Stars" at their blogs or on twitter. Attend security conferences (preferably ones short on marketing, generous on technical presentations) Check the questions at this site to see trends. Read technical papers, explore technologies to gain a deeper understanding, as needed I'd be grateful to hear your suggestions too. Which are the really good mailing lists? Any security blogger/twitterer one should not miss? Other approaches?
{ "source": [ "https://security.stackexchange.com/questions/4831", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1903/" ] }
4,844
This question was asked here on Programmers Exchange , but it was suggested I ask here as well since most of the experts would likely hang out on IT Security exchange instead. I find this to be equivalent to undercover police officers who join a gang, do drugs and break the law as a last resort in order to enforce it. To be a competent security expert, I feel hacking has to be a constant hands-on effort. Yet, that requires finding exploits, testing them on live applications, and being able to demonstrate those exploits with confidence. For those that consider themselves "experts" in Web application security, what did you do to learn the art without actually breaking the law? Or, is this the gray area that nobody likes to talk about because you have to bend the law to its limits?
I don't work as a security consultant, but I've worked with them (and with the police, incidentally, and your analogy is more cop show than reality), and none of them to my knowledge have spent time hacking illegally. Hacking is only illegal if you don't have permission, but there is no difference on a technical level (that is to say, in terms of security) between a server you've got permission to hack and one you haven't. If you're working for or with a company, then attempting to access their servers (with their prior written understanding and permission) is fine and no less real-world experience than picking a random system. Failing that, there is no reason you can't set up your own hosted servers and attempt to compromise them - or buddy up: two of you each set up a server, and the winner is the first one to find an exploit in the other's system. That has the dual advantage of seeing it from both sides.
{ "source": [ "https://security.stackexchange.com/questions/4844", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
4,936
Let's say in my database I store passwords hashed with salt with a fairly expensive hash (scrypt, 1000 rounds of SHA2, whatever). Upon login, what should I transfer over the network and why? Password or its hash? Is it possible to protect such login over an unencrypted channel like HTTP?
If you transfer the hash from the client, this affords no security benefit and makes the hash pointless: if a user can log in by sending the hash to the server, then the hash is effectively the password.
{ "source": [ "https://security.stackexchange.com/questions/4936", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3174/" ] }
4,997
A couple of websites with which I'm registered have, after a period of inactivity on my part, each sent me an e-mail to remind me that I'm still registered. In each case, that e-mail has included my password. Is this a bad idea? My thoughts are that, yes, it is, on the grounds that: If they are able to send me my password, does that imply that they're storing it unencrypted? Given that e-mails like these were sent specifically due to my inactivity, it's possible that I no longer use that e-mail account, which means that it could have been compromised since I last used it. Users frequently use the same passwords across multiple site. If e-mail is inherently insecure, revealing a password from one site in this way potentially compromises the user's accounts on other sites.
They are either storing it in plain text (likely) or they are using reversible encryption. So in case of a compromise the password is at risk. Yes, and it is even worse: some email providers such as Hotmail delete inactive email accounts and allow other people to register them. The upper management of Twitter was successfully attacked by re-registering an old Hotmail account. Yes, correct. A reused password that was revealed in one of those mails played an important role in the Twitter attack mentioned above.
{ "source": [ "https://security.stackexchange.com/questions/4997", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3215/" ] }
5,085
If an attacker obtains a private key that was created with no passphrase, he obviously gains access to everything protected with that key. How secure are private keys set up with a passphrase? If an attacker steals a passphrase-protected key, how hard it is to compromise the key? In other words, is a private key with a good passphrase secure even if accessed by an attacker?
With OpenSSL, OpenSSH and GPG/PGP the wrapping algorithm will be strong enough that you don't need to worry about it (and if you do need to worry about it then you have bigger problems, and this is the least of your worries). Like any password or passphrase, it depends on the strength of the passphrase. A randomly chosen 40-character passphrase is roughly as hard to brute force as a 256-bit key (printable ASCII carries a bit under 7 bits per character). The same rules for strong passwords apply here:

Random is better

Longer is stronger
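A quick back-of-the-envelope check of that claim (a sketch only; the choice of alphabet is an assumption):

```python
# Rough entropy estimate for a random passphrase drawn from printable ASCII.
import math
import secrets
import string

alphabet = string.ascii_letters + string.digits + string.punctuation  # 94 symbols
passphrase = "".join(secrets.choice(alphabet) for _ in range(40))

bits_per_char = math.log2(len(alphabet))   # about 6.55 bits per character
total_bits = 40 * bits_per_char            # about 262 bits, beyond a 256-bit key
print(passphrase, round(total_bits, 1))
```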
{ "source": [ "https://security.stackexchange.com/questions/5085", "https://security.stackexchange.com", "https://security.stackexchange.com/users/793/" ] }
5,096
When generating SSH authentication keys on a Unix/Linux system with ssh-keygen , you're given the choice of creating a RSA or DSA key pair (using -t type ). What is the difference between RSA and DSA keys? What would lead someone to choose one over the other?
Go with RSA. DSA is faster for signature generation but slower for validation; it is slower when encrypting but faster when decrypting; and security can be considered equivalent for keys of equal length. That's the punch line, now some justification. The security of the RSA algorithm is based on the fact that factorization of large integers is known to be "difficult", whereas DSA security is based on the discrete logarithm problem. Today the fastest known algorithm for factoring large integers is the General Number Field Sieve, which is also the fastest algorithm to solve the discrete logarithm problem in finite fields modulo a large prime p as specified for DSA. Now, if the security can be deemed equal, we would of course favour the algorithm that is faster. But again, there is no clear winner. You may have a look at this study or, if you have OpenSSL installed on your machine, run openssl speed. You will see that DSA performs faster in generating a signature but much slower when verifying a signature of the same key length. Verification is generally what you want to be faster if you deal e.g. with a signed document. The signature is generated once - so it's fine if this takes a bit longer - but the document signature may be verified much more often by end users. Both do support some form of encryption method, RSA out of the box and DSA using ElGamal. DSA is generally faster for decryption but slower for encryption; with RSA it's the other way round. Again you want decryption to be faster here because one encrypted document might be decrypted many times. In commercial terms, RSA is clearly the winner: commercial RSA certificates are much more widely deployed than DSA certificates. But I saved the killer argument for the end: man ssh-keygen says that a DSA key has to be exactly 1024 bits long to be compliant with NIST's FIPS 186-2. So although in theory longer DSA keys are possible (FIPS 186-3 also explicitly allows them), you are still restricted to 1024 bits. And if you take the considerations of this [article] into account, we are no longer secure with 1024 bits for either RSA or DSA. So today, you are better off with an RSA 2048 or 4096 bit key.
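If you want to reproduce the speed comparison without OpenSSL's built-in benchmark, a rough sketch with the third-party Python cryptography package could look like the following; the key sizes, message and iteration count are arbitrary choices, and the numbers are only indicative.

```python
# Rough sign/verify timing comparison for RSA vs DSA (illustrative only).
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa, padding, rsa

message = b"benchmark me" * 100
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
dsa_key = dsa.generate_private_key(key_size=2048)

def bench(label, sign, verify, n=200):
    t0 = time.perf_counter()
    sigs = [sign(message) for _ in range(n)]
    t1 = time.perf_counter()
    for s in sigs:
        verify(s, message)
    t2 = time.perf_counter()
    print(f"{label}: sign {t1 - t0:.3f}s, verify {t2 - t1:.3f}s for {n} operations")

bench("RSA-2048",
      lambda m: rsa_key.sign(m, padding.PKCS1v15(), hashes.SHA256()),
      lambda s, m: rsa_key.public_key().verify(s, m, padding.PKCS1v15(), hashes.SHA256()))
bench("DSA-2048",
      lambda m: dsa_key.sign(m, hashes.SHA256()),
      lambda s, m: dsa_key.public_key().verify(s, m, hashes.SHA256()))
```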
{ "source": [ "https://security.stackexchange.com/questions/5096", "https://security.stackexchange.com", "https://security.stackexchange.com/users/793/" ] }
5,126
I get confused with the terms in this area. What is SSL, TLS, and HTTPS? What are the differences between them?
TLS is the new name for SSL. Namely, SSL protocol got to version 3.0; TLS 1.0 is "SSL 3.1". TLS versions currently defined include TLS 1.1 and 1.2. Each new version adds a few features and modifies some internal details. We sometimes say "SSL/TLS". HTTPS is HTTP-within-SSL/TLS. SSL (TLS) establishes a secured, bidirectional tunnel for arbitrary binary data between two hosts. HTTP is a protocol for sending requests and receiving answers, each request and answer consisting of detailed headers and (possibly) some content. HTTP is meant to run over a bidirectional tunnel for arbitrary binary data; when that tunnel is an SSL/TLS connection, then the whole is called "HTTPS". To explain the acronyms: "SSL" means "Secure Sockets Layer". This was coined by the inventors of the first versions of the protocol, Netscape (the company was later bought by AOL). " TLS " means "Transport Layer Security". The name was changed to avoid any legal issues with Netscape so that the protocol could be "open and free" (and published as a RFC ). It also hints at the idea that the protocol works over any bidirectional stream of bytes, not just Internet-based sockets. " HTTPS " is supposed to mean "HyperText Transfer Protocol Secure", which is grammatically unsound. Nobody, except the terminally bored pedant, ever uses the translation; "HTTPS" is better thought of as "HTTP with an S that means SSL". Other protocol acronyms have been built the same way, e.g. SMTPS, IMAPS, FTPS... all of them being a bare protocol that "got secured" by running it within some SSL/TLS.
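To see the layering in practice, here is a small Python sketch (the host name is just a placeholder) that opens the same kind of TLS tunnel an HTTPS client would, reports which SSL/TLS version was negotiated, and then speaks plain HTTP inside it:

```python
# Open a TLS tunnel the way an HTTPS client would, then inspect and use it.
import socket
import ssl

host = "example.com"  # placeholder host
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print(tls_sock.version())  # e.g. 'TLSv1.3' - the tunnel itself
        # HTTP running inside the tunnel is what we call HTTPS.
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n")
        print(tls_sock.recv(200))
```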
{ "source": [ "https://security.stackexchange.com/questions/5126", "https://security.stackexchange.com", "https://security.stackexchange.com/users/793/" ] }
5,244
Some days ago I got infected by malware, probably something new and very clever, as it went in unstopped and no scanning tool was able to detect it afterwards (see this question). It was a two-stage infection: first an obvious piece of malware went in via Internet Explorer (fully patched, so there probably is some still unknown hole there) and started running and doing silly things like hiding all my files and flashing fake system warning popups asking me to reboot due to a "disk controller malfunction"; this was probably a way to trick me into rebooting to load the actual malware. Then, after this was removed (very easily, a simple Run Registry key), it left a rootkit behind which was absolutely undetectable... but that kept doing silly things too, like hijacking Google searches and launching background iexplore.exe processes which were clearly visible in the Task Manager (wonder what they were doing, though). At last, I was able to get rid of it by rewriting the system drive's MBR and boot sector, where some loader code had been hidden; I still don't know what that was actually loading, though. What I'm wondering now is: people writing malware are becoming increasingly clever, using more and more advanced stealth techniques... and yet, they keep using these powerful tools to do silly things like showing advertisements, which by now almost everyone recognizes as a sure sign of malware infection (and who ever does click on them, anyway?). If it wasn't for the search hijacking and background iexplore.exe processes, I'd never have guessed a rootkit was still there after the "main" infection... and, if the "main" infection hadn't played around with attrib.exe to make me think all my files had disappeared, I would have just not noticed it and it would have been free to load the rootkit upon the next reboot (which, this being a home computer, would for sure have happened in at most a day). Such a stealth rootkit could have stayed there for a long time if it hadn't made such efforts to show its presence; and it could have done real damage, like installing a keylogger or taking part in a botnet; which it maybe also did, too... but since it was so obvious the machine was infected, I started looking for a way to clean it, and found it (or otherwise I'd have just formatted, which I'm going to do anyway, just to be sure). So, the question remains: why are all of these clever infection and stealth techniques being wasted on showing useless advertisements?
Various reasons:

Attacker is often not the developer - developers of malware sell the packages to anyone; the payload will then be defined by the attacker.

Some attackers want to be stealthy - some don't; in fact some delight in being obvious and notorious.

Practice - developing techniques.

Apathy/Ignorance - end users are really no good at fixing problems that can't be resolved by clicking on antivirus or malware cleaners.

Money - click-thrus and clickjacking can make good money. Viagra/Cialis spam also makes money. Fake-malware removal tool downloads can make a lot of money.
{ "source": [ "https://security.stackexchange.com/questions/5244", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1653/" ] }
5,253
Currently I’m working on a certificate manager that allows our product to securely connect to remote web services (over TLS/SSL). For security, we use Certificate Revocation List checking (or CRL-checking) to find out whether a certain certificate has been revoked. Still, some issues are unclear to me: Must/should I fetch all CRLs of the complete chain to check the certificate’s validity? What should I do when an Intermediate CA is revoked? Will all its certificates be added to the CRL as well? Can the certificate be replaced by a new (uncompromised) version?
Must/should I fetch all CRLs of the complete chain to check the certificate's validity?

Absolutely. A CA builds a CRL only for the certificates it issues. Status of the CA itself must be checked via the CRL of the issuing CA. Note: This is a recursive search. When writing code, or testing systems, remember that there can be more than one "generation" of issuing CAs and check all certificates except the self signed root. A root CA will not have an associated CRL - if the root is compromised, that root must be manually removed from any trust stores.

What should I do when an Intermediate CA is revoked?

Do not trust anything signed by the CA. A certificate is only as good as the CA that issued it. If the CA becomes untrustworthy, the certificates issued by that CA are no longer trustworthy. Do not expect to get a CRL from the revoked CA saying that all certificates issued by the CA are not valid. If the CA is active, this will create a prohibitively large CRL that will be extremely painful to transport, parse, and generally do anything with. Also - if the CA's key has been lost - it may not be possible to create this CRL. In practice, the smart thing would be to remove the CA from any trust stores. Removing the CA from trust stores will cause any application that builds its CA chain from trust stores to fail the certificate verification quickly. Building the certificate chain is usually done prior to any OCSP/CRL checking, so you save your application from extra steps and potentially bandwidth by trimming the revoked CA from your stores. An intermediate CA being revoked is also a fairly major event - honestly, I've never experienced it in the real world. If I was working on a highly secure system, I'd also publish as far and wide as I could that the CA was revoked. Particularly any systems that may be working off CRLs that are cached for a long time. And if I held certificates signed by this CA, I'd start formulating my re-certification strategy ASAP.

Will all its certificates be added to the CRL as well?

No. Note, there are two different CRLs in play:

- the issuing CA's CA (either a root CA or another CA in the chain) issues a CRL which verifies the status of the issuing CA's cert
- the issuing CA issues a CRL which verifies the status of the certs this CA has issued

If you have a CA chain that is n certificates from root to end entity, you will have n-1 CRLs involved.

Can the certificate be replaced by a new (uncompromised) version?

Yes... sort of. The compromise reflects the untrustworthiness of the private key of the CA. The CA will need a new private key, which essentially makes it a new CA. It is, technically, possible to rename the CA with the same Distinguished Name - if there is an operational value to that. But in practice, my temptation would be to stand up a new CA, with a new DN, just so that all humans were clear on the difference. This will be a major pain in the rear end. The users of the new CA will need to:

- remove the compromised cert, and replace it with the new CA cert
- recertify all End Entity certificates with certificates that are signed by the new CA

Note that it's a matter of security policy as to whether you recertify or rekey your end entities. If the compromised system did not have access to the private keys of any end-entities, you can re-sign the same private key with a new CA key. If you kept a store of certificate requests on file, you could resubmit them to the new CA and save yourself a whole bunch of key generation.

However, in some cases, the CA system may put some private keys in escrow - this means that in some central location (usually near the CA system), the private keys are stored securely in case the users lose their keys and need an update. This is particularly prevalent in the case of encryption certificates, since you may need to retrieve encrypted data even after an encryption key has been revoked. In these cases, if the CA is compromised, there's a decent chance the key escrow has been compromised. That means that all users of keys in the escrows should generate new key pairs and request a new certificate. Past that - it's a matter of policy as to whether re-certification is allowed. Since a new certificate will have a new validity period, it may be that the security powers that be say "no renew/recertification" because they can't limit the validity period sufficiently.
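As a small illustration of the per-level check, here is a Python sketch (only a sketch: it assumes a recent version of the third-party cryptography package, and that you have already fetched the issuer's CRL, e.g. from the certificate's CRL distribution point):

```python
# Check one certificate against its issuer's CRL. Repeat this for every level
# of the chain except the self-signed root, which has no CRL of its own.
from cryptography import x509

def is_revoked(cert_pem, issuer_cert_pem, crl_pem):
    cert = x509.load_pem_x509_certificate(cert_pem)
    issuer = x509.load_pem_x509_certificate(issuer_cert_pem)
    crl = x509.load_pem_x509_crl(crl_pem)

    # The CRL itself must be signed by the issuing CA, otherwise it proves nothing.
    if not crl.is_signature_valid(issuer.public_key()):
        raise ValueError("CRL signature does not verify against the issuer")

    return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None
```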
{ "source": [ "https://security.stackexchange.com/questions/5253", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3390/" ] }
5,310
If I want to download the ubuntu11.04.iso then:

UBUNTUMIRRORSRV -> ISP -> ISP -> etc. -> MYPC

I just want to ask how difficult it is to spoof the original MD5 sum (e.g.: the md5sum would be reachable through HTTPS!). So we have:

- on the ubuntumirrorsrv: XY md5hash, and XZ ubuntu iso
- on mypc (the downloaded iso from ubuntumirrorsrv): XY md5hash, and XY! ubuntu iso.

So could the md5hash be the same (as the original one on the ubuntumirrorsrv) if there were a "mitm" attack that modified the ubuntu iso (put a trojan in it), e.g. at one of my ISPs? (+- a few MBytes) - how difficult could that be?
It would be difficult to the point where seriously suggesting it is even remotely possible verges on lunacy. There have been some demonstrations of theoretical attacks against MD5 wherein the "attacker" could create message data intended to yield a predetermined MD5 hash. But this is miles and miles away from adding a non-gibberish file to an ISO and having it give the same hash. A much more likely attack scenario would be the MitM altering the page that lists the MD5sums before it gets to you, so that you see the attacker's hash rather than the real one. However unlikely this may be, here are the hashes for your comparison:

ubuntu-11.04-desktop-amd64.iso 7de611b50c283c1755b4007a4feb0379

ubuntu-11.04-desktop-i386.iso 8b1085bed498b82ef1485ef19074c281
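To actually perform the comparison after downloading, a minimal Python sketch could look like this (the file path is a placeholder; the expected value is the amd64 sum quoted above, which you would obtain over HTTPS):

```python
# Compute a file's digest in chunks and compare it to a published value.
import hashlib
import hmac

def file_digest(path, algorithm="md5"):
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "7de611b50c283c1755b4007a4feb0379"            # value fetched over HTTPS
actual = file_digest("ubuntu-11.04-desktop-amd64.iso")   # placeholder local path
print(hmac.compare_digest(expected, actual))
```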
{ "source": [ "https://security.stackexchange.com/questions/5310", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2212/" ] }
5,314
With the News of The World Phone Hacking scandal spreading globally amidst allegations that as well as celebrities, victims of September 11 and other major events in the news have had their phone voicemails hacked into. With this in mind, many people are wondering: How can I stop my phone being hacked? This being security.SE, I'll have to be a bit more pragmatic, but any steps, controls or other hints that improve the security of mobile and home voicemail. There is already a question on securing your Android phone I have added another question on iPhone security while this one is focusing more on the voicemail side)
{ "source": [ "https://security.stackexchange.com/questions/5314", "https://security.stackexchange.com", "https://security.stackexchange.com/users/485/" ] }
5,315
I am looking for an encryption algorithm that would allow me to know whether the password supplied is the correct one or not. This question can be considered a follow-up to Q . In particular the answer from bethlakshmi: When you want the server to use the key, I'm guessing the process is this:

1. user gives the system his password
2. system checks the password, hashing it - and it's good
3. system takes the password (not the hash!) and the salt and computes the encryption key
4. system takes the encryption key and decrypts the AES key

I would like to do steps 3 and 4 and forget about 1 and 2. So, I don't want to store the hash of the password (or password+salt), and I don't want to check the password for correctness as in steps 1 and 2. Proposal: I would like to compute the encryption key and try to decrypt the AES key. The idea is that the AES key, once decrypted with a valid password, would allow authentication, while an invalid password would make the authentication fail. First question: Is the proposal considered bad practice or not? If it is bad practice, please also explain why. Back to the real question: Is there an encryption method (by this I mean the encryption method used to encrypt the AES key) that would return some error code if the password supplied is incorrect?
{ "source": [ "https://security.stackexchange.com/questions/5315", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2858/" ] }
5,323
Mozilla went live with a new service called BrowserID/Persona ( announcement , background ). It is intended to replace current single-sign-on solutions such as OpenID, OAuth and Facebook. One advantage is that a future integration into the browsers will reduce the phishing risks. Also, the identity provider will not be notified about the sites someone logs in to, which is good from a privacy point of view. What are the issues with BrowserID/Persona compared to OpenID/OAuth/Facebook?
I like the idea, but I too have many questions left open. Please do not see this as any form of bashing, because I wrote it trying to apply my authentication experience to this new scheme. I am concerned about (in no particular order):

1. Unauthorized use of the private key
2. Rich client support (Outlook, Notes, etc.)
3. Using from multiple computers
4. Private key protection or encryption (on the client)
5. Authentication of key generation requests
6. Privacy

Details below. First a one line summary (in bold italic) and some clarifications.

1. Unauthorized use of the private key

The private key will be protected by the client, with varying degrees of security. I am worried that the private key will be used without my consent. When trying to authenticate, a signing operation will take place. I must be prompted before it is used, or else a rogue script could get my browser to sign a logon ticket and submit it. Rogue script could come from a widget, an ad or other XSS. Implementation of this mechanism will vary in every browser, and even on different platforms for the same browser, or different versions, etc. With a somewhat inconsistent visual, users are at a greater risk of being lured into approving a logon request.

2. Rich client support (Outlook, Notes, etc.)

It was designed to work with web mail accounts. Enterprise "fat" mail clients are somewhat left behind. For BrowserID to work, you need a browser that supports it. In the meantime, browserid.org issued a "JavaScript shim that implements the missing functionality using standard HTML5 techniques and cryptographic routines implemented in JavaScript". Users in a corporate environment who use a fat mail client (Outlook, Notes, Thunderbird) will be late adopters, because the protocol will have to be implemented in those clients too. Not to mention that Outlook does not share a keystore with Firefox, or Thunderbird with IE.

3. Using from multiple computers

It leads to a proliferation of private keys, because the scheme does not have a central authority. And there is a mobility problem. I will have to register (generate a private key) for every computer I use. How will I go about deleting my private key on an internet kiosk, or a borrowed computer? Even with a single computer, how will I revoke a key stored in a stolen computer? Since, for a single user, multiple signature keys are valid (because each of my computers has its own valid private key), from the service provider's point of view any access token signed by a known authority must be valid.

4. Private key protection or encryption (on the client)

Access to the key must be authenticated, which brings passwords back into the picture. It can be protected by a password (limiting its malicious reuse), but if I ever change my password somewhere, it will not synchronize unless I use some browser/cloud based sync network. Having a password to remember somewhat defeats the purpose of this scheme. Chances are the same password will be used to secure every key, much like the same password is used now to authenticate to multiple websites.

5. Authentication of key generation requests

There is a gap between the access request and key generation, which an attacker could use for phishing. It is unclear to me how the email provider/certificate authority will handle CSRF issues. How will they know that a request for key generation is legitimate? Will my spam folder be filled with certificate generation requests? Or will keys be issued only with DKIM email servers? What if the request was intercepted on its SMTP way to the server - could it be modified?

6. Privacy

Using a script tag allows browserid.org to break the same origin policy. And using a script tag to include the browserid.js allows them to bypass the same origin policy. BrowserID.org will (have the power to) know about every logon attempt you make. Or you will have to host the script yourself (assuming it is self contained), and upgrade it if/when security flaws are identified in it.
{ "source": [ "https://security.stackexchange.com/questions/5323", "https://security.stackexchange.com", "https://security.stackexchange.com/users/665/" ] }
5,355
I'm tasked with creating database tables in Oracle which contain encrypted strings (i.e., the columns are RAW). The strings are encrypted by the application (using AES, 128-bit key) and stored in Oracle, then later retrieved from Oracle and decrypted (i.e., Oracle itself never sees the unencrypted strings). I've come across this one column that will be one of two strings. I'm worried that someone will notice and presumably figure out what those two values to figure out the AES key. For example, if someone sees that the column is either Ciphertext #1 or #2: Ciphertext #1: BF,4F,8B,FE, 60,D8,33,56, 1B,F2,35,72, 49,20,DE,C6. Ciphertext #2: BC,E8,54,BD, F4,B3,36,3B, DD,70,76,45, 29,28,50,07. and knows the corresponding Plaintexts: Plaintext #1 ("Detroit"): 44,00,65,00, 74,00,72,00, 6F,00,69,00, 74,00,00,00. Plaintext #2 ("Chicago"): 43,00,68,00, 69,00,63,00, 61,00,67,00, 6F,00,00,00. can he deduce that the encryption key is "Buffalo"? 42,00,75,00, 66,00,66,00, 61,00,6C,00, 6F,00,00,00. I'm thinking that there should be only one 128-bit key that could convert Plaintext #1 to Ciphertext #1. Does this mean I should go to a 192-bit or 256-bit key instead, or find some other solution? (As an aside, here are two other ciphertexts for the same plaintexts but with a different key.) Ciphertext #1 A ("Detroit"): E4,28,29,E3, 6E,C2,64,FA, A1,F4,F4,96, FC,18,4A,C5. Ciphertext #2 A ("Chicago"): EA,87,30,F0, AC,44,5D,ED, FD,EB,A8,79, 83,59,53,B7. [Related question: When using AES and CBC, can the IV be a hash of the plaintext? ]
I am adding an answer as a community wiki because I believe that the accepted answer is dangerously misleading . Here's my reasoning: The question is asking about being able to derive the AES keys. In that regard the accepted answer is correct: that is called a Known-plaintext Attack , and AES is resistant to that kind of attack. So an attacker will not be able to leverage this to derive the key and make off with the whole database. But there is another, potentially dangerous attack at play here: a Ciphertext Indistinguishablity Attack . From Wikipedia: Ciphertext indistinguishability is a property of many encryption schemes. Intuitively, if a cryptosystem possesses the property of indistinguishability, then an adversary will be unable to distinguish pairs of ciphertexts based on the message they encrypt. The OP showed us that this column holds one of two possible values, and since the encryption is deterministic (ie does not use a random IV), and attacker can see which rows have the same value as each other. All the attacker has to do is figure out the plaintext for that column for a single row, and they've cracked the encryption on the entire column. Bad news if you want that data to stay private - which I'm assuming is why you encrypted it in the first place. Mitigation: To protect against this, make your encryption non-deterministic (or at least appear non-deterministic to the attacker) so that repeated encryptions of the same plaintext yields different cipher texts. You can for example do this by using AES in Cipher Block Chaining (CBC) mode with a random Initialization Vector (IV) . Use a secure random number generator to generate a new IV for each row and store the IV in the table. This way, without the key, the attacker can not tell which rows have matching plaintext.
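To see the indistinguishability problem concretely, here is a small Python sketch (the key is made up, the plaintexts are the two city names from the question, and the library choice is just one option) comparing deterministic AES-ECB encryption with AES-CBC under a fresh random IV:

```python
# Deterministic encryption leaks which rows are equal; a random IV per row does not.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)  # stand-in for the application's AES-128 key

def pad(pt):
    p = padding.PKCS7(128).padder()
    return p.update(pt) + p.finalize()

def encrypt_deterministic(pt):            # same plaintext -> same ciphertext
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(pad(pt)) + enc.finalize()

def encrypt_with_random_iv(pt):           # same plaintext -> different ciphertext
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(pad(pt)) + enc.finalize()

print(encrypt_deterministic(b"Detroit") == encrypt_deterministic(b"Detroit"))    # True
print(encrypt_with_random_iv(b"Detroit") == encrypt_with_random_iv(b"Detroit"))  # False
```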
{ "source": [ "https://security.stackexchange.com/questions/5355", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2877/" ] }
5,420
I assume this is impossible, but I need to find a barcode (that can contain a url, i.e., a QR style one). It has to be photographed by our smartphone app, but the image will not be changed over a period of weeks or months and has to be on paper. No matter what we put in the barcode, we always come up with the same flaw: someone could photograph the QR code, print and rescan. Which is what we are trying to avoid: unauthorised scans of the barcode. The barcode will only be shown to the phone for a few seconds, and won't be easy available otherwise. We thought of using AGPS to get the location, however a malicious user could photograph and print the code, then scan near the location they originally got it (which will never move). So, to sum up: we need a barcode that can contain a url that will only change every few weeks/months that needs to be scannable by a smartphone, that can't be scanned without permission, without interaction with the device except by the owner (i.e., the person who has the barcode can't use the device to scan).
This is a bad idea. To understand why, imagine there is no QR code, just a human-readable display of the URL. Now, would you base a security scheme around keeping this URL secret? Of course you wouldn't, it is the rankest security by obscurity. If you want to keep the url fairly confidential, do so without any advanced wizardry, but you need your security to be secure even if everyone knows the URL. Implement authentication on access to the resource, not (just) on access to the resource name.
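One way to put authentication on the resource rather than on the URL, sketched below with standard-library primitives (the identifiers and the provisioning scheme are hypothetical, not a prescription): the server only serves the resource to an app instance that can prove it holds a provisioned credential, so photographing and reprinting the barcode gains an attacker nothing.

```python
# Sketch: the barcode URL can be public; the server still refuses unauthenticated scans.
import hashlib
import hmac

# Hypothetical store of per-device secrets provisioned into the authorised app.
DEVICE_KEYS = {"device-123": b"per-device secret, never encoded in the barcode"}

def is_authorized(device_id, nonce, signature):
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# The app would send (device_id, nonce, HMAC(key, nonce)) along with the scanned URL;
# someone who only has a copy of the printed code has none of these.
```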
{ "source": [ "https://security.stackexchange.com/questions/5420", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2918/" ] }